36104
https://en.wikipedia.org/wiki/Nanosecond
Nanosecond
A nanosecond (ns) is a unit of time in the International System of Units (SI) equal to one billionth of a second, that is, 1/1,000,000,000 of a second, or 10⁻⁹ seconds. The term combines the SI prefix nano-, indicating a one-billionth submultiple of an SI unit (e.g. nanogram, nanometre), and second, the primary unit of time in the SI. A nanosecond is to one second as one second is to approximately 31.69 years. A nanosecond is equal to 1000 picoseconds or 1/1000 of a microsecond. Time units ranging between 10⁻⁸ and 10⁻⁷ seconds are typically expressed as tens or hundreds of nanoseconds. Time units of this granularity are commonly found in telecommunications, pulsed lasers, and related aspects of electronics. Common measurements: 0.001 nanoseconds – one picosecond. 0.96 nanoseconds – 100 Gigabit Ethernet interpacket gap. 1.0 nanosecond – cycle time of an electromagnetic wave with a frequency of 1 GHz (10⁹ Hz). 1.0 nanosecond – electromagnetic wavelength of 1 light-nanosecond, equivalent to the 0.3 m radio band. 1.017 nanoseconds (by definition) – time taken by light to travel 1 foot in vacuum. 3.336 nanoseconds (by definition) – time taken by light to travel 1 metre in vacuum. 8 nanoseconds – typical propagation delay of 74HC series logic chips based on HCMOS technology, commonly used for digital electronics in the mid-1980s. 
10 nanoseconds – one "shake" (as in "a shake of a lamb's tail"), the approximate time of one generation of a nuclear chain reaction with fast neutrons. 10 nanoseconds – cycle time for frequency 100 MHz (10⁸ Hz), radio wavelength 3 m (VHF, FM band). 10 nanoseconds – half-life of lithium-12. 12 nanoseconds – mean lifetime of a charged K meson. 20–40 nanoseconds – time of a fusion reaction in a hydrogen bomb. 30 nanoseconds – half-life of carbon-21. 77 nanoseconds – a sixth (a 60th of a 60th of a 60th of a 60th of a second). 96 nanoseconds – Gigabit Ethernet interpacket gap. 100 nanoseconds – cycle time for frequency 10 MHz, radio wavelength 30 m (shortwave). 294.4 nanoseconds – half-life of polonium-212. 333 nanoseconds – cycle time of the highest medium wave radio frequency, 3 MHz. 500 nanoseconds – T1 time of a Josephson phase qubit (see also Qubit) as of May 2005. 1000 nanoseconds – one microsecond.
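The frequency-to-period and light-travel figures in the list above follow from τ = 1/f, λ = c/f, and t = d/c. A quick sketch (the helper names here are illustrative, not from the article):

```python
# Period, wavelength, and light-travel-time arithmetic behind the
# nanosecond examples (1 GHz -> 1 ns cycle, 0.3 m wavelength, etc.).
C = 299_792_458.0  # speed of light in vacuum, m/s (exact by definition)

def period_ns(freq_hz: float) -> float:
    """Cycle time in nanoseconds for a wave of the given frequency."""
    return 1e9 / freq_hz

def wavelength_m(freq_hz: float) -> float:
    """Electromagnetic wavelength in metres for the given frequency."""
    return C / freq_hz

def light_travel_ns(distance_m: float) -> float:
    """Time in nanoseconds for light to cover a distance in vacuum."""
    return 1e9 * distance_m / C

print(period_ns(1e9))          # 1 GHz -> 1.0 ns
print(wavelength_m(1e9))       # ~0.3 m
print(light_travel_ns(1.0))    # 1 metre -> ~3.336 ns
print(light_travel_ns(0.3048)) # 1 foot  -> ~1.017 ns
```

The same three functions reproduce the 100 MHz entry as well: a 10 ns cycle and a 3 m wavelength.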
Physical sciences
Time
Basics and measurement
36153
https://en.wikipedia.org/wiki/Millisecond
Millisecond
A millisecond (from milli- and second; symbol: ms) is a unit of time in the International System of Units equal to one thousandth (0.001, 10⁻³, or 1/1000) of a second, or 1000 microseconds. A millisecond is to one second as one second is to approximately 16.67 minutes. A unit of 10 milliseconds may be called a centisecond, and one of 100 milliseconds a decisecond, but these names are rarely used. To help compare orders of magnitude of different times, this page lists times between 10⁻³ seconds and 10⁰ seconds (1 millisecond and one second).
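The "a millisecond is to one second as one second is to X" comparisons, for this article and the neighbouring microsecond and nanosecond articles, are simple ratio rescalings; a minimal check:

```python
# Rescale each "unit : second" ratio into "second : X" and express X
# in the units quoted by the articles (minutes, days, Julian years).
MINUTE, DAY = 60, 86_400
JULIAN_YEAR = 365.25 * DAY

ms_ratio = (1 / 1e-3) / MINUTE        # 1000 s expressed in minutes
us_ratio = (1 / 1e-6) / DAY           # 1e6 s expressed in days
ns_ratio = (1 / 1e-9) / JULIAN_YEAR   # 1e9 s expressed in years

print(round(ms_ratio, 2))  # 16.67 minutes
print(round(us_ratio, 2))  # 11.57 days
print(round(ns_ratio, 2))  # 31.69 years
```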
Physical sciences
Time
Basics and measurement
36156
https://en.wikipedia.org/wiki/Microsecond
Microsecond
A microsecond is a unit of time in the International System of Units (SI) equal to one millionth (0.000001, 10⁻⁶, or 1/1,000,000) of a second. Its symbol is μs, sometimes simplified to us when Unicode is not available. A microsecond is to one second as one second is to approximately 11.57 days. A microsecond is equal to 1000 nanoseconds or 1/1000 of a millisecond. Because the next SI prefix is 1000 times larger, measurements of 10⁻⁵ and 10⁻⁴ seconds are typically expressed as tens or hundreds of microseconds. Examples: 1 microsecond (1 μs) – cycle time for the frequency 1 MHz, the inverse unit. This corresponds to a radio wavelength of 300 m (AM medium wave band), as can be calculated by multiplying 1 μs by the speed of light (approximately 3×10⁸ m/s). 1 microsecond – the length of time of a high-speed, commercial strobe light flash (see air-gap flash). 1 microsecond – protein folding takes place on the order of microseconds (thus this is the speed of carbon-based life). 1.8 microseconds – the amount of time subtracted from the Earth's day as a result of the 2011 Japanese earthquake. 2 microseconds – the lifetime of a muonium particle. 2.68 microseconds – the amount of time subtracted from the Earth's day as a result of the 2004 Indian Ocean earthquake. 3.33564095 microseconds – the time taken by light to travel one kilometre in a vacuum. 5.4 microseconds – the time taken by light to travel one mile in a vacuum (or radio waves point-to-point in a near vacuum). 8 microseconds – the time taken by light to travel one mile in typical single-mode fiber optic cable. 10 microseconds (μs) – cycle time for frequency 100 kHz, radio wavelength 3 km. 18 microseconds – net amount per year by which the length of the day lengthens, largely due to tidal acceleration. 20.8 microseconds – sampling interval for digital audio with 48,000 samples/s. 22.7 microseconds – sampling interval for CD audio (44,100 samples/s). 
38 microseconds – discrepancy in GPS satellite time per day (compensated by clock speed) due to relativity. 50 microseconds – cycle time of the highest human-audible tone (20 kHz). 50 microseconds – typical access latency to read from a modern solid-state drive, which holds non-volatile computer data. 100 microseconds (0.1 ms) – cycle time for frequency 10 kHz. 125 microseconds – common sampling interval for telephone audio (8000 samples/s). 164 microseconds – half-life of polonium-214. 240 microseconds – half-life of copernicium-277. 260 to 480 microseconds – return-trip ICMP ping time, including operating system kernel TCP/IP processing and answer time, between two Gigabit Ethernet devices connected to the same local area network switch fabric. 277.8 microseconds – a fourth (a 60th of a 60th of a second), used in astronomical calculations by al-Biruni and Roger Bacon in 1000 and 1267 AD, respectively. 490 microseconds – time for light at a 1550 nm wavelength to travel 100 km in a single-mode fiber optic cable (where the speed of light is approximately 200 million metres per second due to the fiber's index of refraction). The average human eye blink takes 350,000 microseconds (just over 1/3 of a second). The average human finger snap takes 150,000 microseconds (just over 1/7 of a second). A camera flash illuminates for 1,000 microseconds. A standard camera shutter speed opens the shutter for 4,000 microseconds, or 4 milliseconds. 584,542 years of microseconds fit in 64 bits: (2**64)/(1e6*60*60*24*365.25).
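The closing claim, that a 64-bit counter holds about 584,542 years of microseconds, is just the expression given in the text evaluated with a Julian year:

```python
# How many years of microseconds fit in an unsigned 64-bit counter?
US_PER_SECOND = 1_000_000
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25  # Julian year

years = (2 ** 64) / (US_PER_SECOND * SECONDS_PER_YEAR)
print(int(years))  # 584542
```

This is why microsecond-resolution 64-bit timestamps (as used by several operating systems and databases) are in no practical danger of overflowing.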
Physical sciences
Time
Basics and measurement
439282
https://en.wikipedia.org/wiki/Gully
Gully
A gully is a landform created by running water, mass movement, or commonly a combination of both, eroding sharply into soil or other relatively erodible material, typically on a hillside or in river floodplains or terraces. Gullies resemble large ditches or small valleys, but are metres to tens of metres in depth and width, are characterized by a distinct 'headscarp' or 'headwall', and progress by headward (i.e., upstream) erosion. Gullies are commonly related to intermittent or ephemeral water flow, usually associated with localised intense or protracted rainfall events or snowmelt. Gullies can be formed and accelerated by cultivation practices on hillslopes (often gentle gradients) in farmland, and they can develop rapidly in rangelands from existing natural erosion forms subject to vegetative cover removal and livestock activity. Etymology The earliest known usage of the term is from 1657. It originates from the French word goulet, a diminutive form of goule, which means throat. The term may be connected to the name of a type of knife used at the time, a gully-knife. Water erosion is more likely to occur on steep terrain because of erosive pressures, splashes, scour, and transport. Slope characteristics, such as slope length and the runoff amounts proportional to slope length, affect soil erosion. Relief and soil erosion are positively correlated in southeast Nigeria. There are three types of topography: mountains, cuesta landscapes, and plains and lowlands. While highlands with stable lithology avoid gullying yet allow for vigorous runoff, uplands with friable sandstones are more prone to erosion. Formation and consequences Gully erosion can progress through a variety and combination of processes. 
The erosion processes include incision and bank erosion by water flow, mass movement of saturated or unsaturated bank or wall material, groundwater seepage sapping the overlying material, collapse of soil pipes or tunnels in dispersive soils, or a combination of these to a greater or lesser degree. Hillsides are more prone to gully erosion when they are cleared of vegetation cover through deforestation, over-grazing, or other means. Gullies in rangelands can be initiated by concentrated water flow down tracks worn by livestock or vehicles. The flowing water easily carries the eroded soil after it is dislodged from the ground, typically when rain falls during short, intense storms such as thunderstorms. A gully may grow in length through headward (i.e., upstream) erosion at a knick point. This erosion can result from interflow and soil piping (internal erosion) as well as surface runoff. Gully erosion may also advance laterally through similar processes, including mass movement acting on the gully walls (banks) and the development of 'branches' (a type of tributary). Gullies reduce the productivity of farmland where they incise into the land, and they produce sediment that may choke downstream waterbodies and reduce water quality within the drainage system and lake or coastal system. Because of this, much effort is invested in the study of gullies within the scope of geomorphology and soil science, in the prevention of gully erosion, and in the remediation and rehabilitation of gullied landscapes. The total soil loss from gully formation and subsequent downstream river sedimentation can be substantial, especially from unstable soil materials prone to dispersion. When water is directed over exposed ground, gully erosion removes soil near drainage lines. This may result in divided properties, loss of arable land, diminished amenities, and decreased property values. 
Additionally, it can lead to sedimentation, discoloration of the water supply, and the creation of a haven for rodents. Water rushing over exposed, bare soil creates gullies and ridges that erode rock and soil. When water rushes across exposed terrain, it erodes or pushes dirt away, creating rills. Gravity drives rill erosion on a downward slope, with steeper slopes generating greater water flow. Sandier terrains are more commonly affected by rills, which are most prevalent during the rainier months. Gullies develop when a rill is neglected for an extended time, deepening and expanding as soil erosion persists. The factors influencing gully erosion were investigated in Zaria, Kaduna State, Nigeria, utilizing SRTM data, soil samples, rainfall data, and satellite imagery. The findings indicated that the factors with the biggest effects on gully erosion were slope (56%), rainfall (26%), land cover (12%), and soil (6%). The investigation concluded that each particular component significantly influenced soil loss. Effects of gullies The loss of fertile farmland due to gully erosion is a severe environmental problem that lowers crop quality and may cause famine and food shortages. It also causes the soil to lose organic content, which has an impact on plant viability. As items washed from fields end up in rivers, streams, or vacant land, erosion also contaminates the ecosystem. Because of increased population expansion and increasing land demand, erosion also threatens the natural ecosystem, encroaching on natural forests. Important assets including homes, power poles, and water pipelines may also be destroyed. Prevention of gullies Effective land management techniques can prevent gullies. These techniques include keeping vegetation along drainage lines, using more water, classifying drainage lines as distinct land classes, stabilizing erosion, preventing vermin, distributing runoff evenly, keeping soil organic matter levels high, and avoiding over-cultivation. 
These tactics guarantee uniform rates of penetration and robust plant coverage. One serious environmental problem endangering sustainable development is gully erosion. Gullying prevention and control methods are dispersed and lacking, and they have low success and efficacy rates. This review attempts to make a valuable contribution to effective gully prevention and management techniques by combining information from previous research. It is possible to stop the creation of gullies by changing how land is used, conserving water and soil, or implementing specific actions in areas with concentrated flow. Plant leftovers and other vegetation barriers can prevent erosion, although their usefulness is limited. The biophysical environment, terrain, climate, and geomorphology are examples of external elements that affect gully prevention and control. Stabilising gullies Stabilizing gullies entails altering water flow to lessen scouring, sediment buildup, and revegetation. Water can be securely moved from the natural level to the gully floor using a variety of structures, including drop structures, pipe structures, grass chutes, and rock chutes. Structural modifications can be required along steep gully floors. Vegetation can reestablish itself thanks to sediments deposited over flatter gradients. Until the restoration is finished, damaged areas should be walled off. Gully remediation in Eastern Nigeria Eastern Nigeria's people and ecology are seriously threatened by gully erosion. A research project focused on 370 families and nine risk regions evaluated the region's gully erosion issues. The greatest perceived problem, according to the results, was biodiversity loss. In contrast, damage to properties, roads, and walkways was ranked as the least important issue. This implies a notable variation in the average evaluations across impacted individuals, underscoring the necessity for long-term repair approaches. 
Reducing soil loss, raising public knowledge of environmental issues, passing environmental legislation, and giving residents funds to strengthen their coping mechanisms are all advised by the study. In Agulu-Nanka, Southeast Nigeria, a study examined the geoenvironmental causes driving gully erosion. It focuses on catchment management for gully erosion and geotechnical analysis. Through fieldwork, data was gathered utilizing GIS and GPS methods. According to the study, gully erosion occurs throughout, with Nanka/Oko having the highest concentration. The gully characteristic map shows variations in length and depth, emphasizing the necessity of considering gully vulnerability and giving erosion hazards immediate attention. Artificial gullies Gullies can be formed or enlarged by several human activities. Artificial gullies are formed during hydraulic mining when jets or streams of water are projected onto soft alluvial deposits to extract gold or tin ore. The remains of such mining methods are very visible landform features in old goldfields such as in California and northern Spain. The badlands at Las Medulas, for example, was created during the Roman period by hushing or hydraulic mining of the gold-rich alluvium with water supplied by numerous aqueducts tapping nearby rivers. Each aqueduct produced large gullies below by erosion of the soft deposits. The effluvium was carefully washed with smaller streams of water to extract the nuggets and gold dust. Termination of gullies Gully initiation results from localized erosion by surface runoff, often focusing on areas where forest cover has been removed for agricultural purposes, uneven compaction of surface soils by foot and wheeled traffic, and poorly designed road culverts and gutters. Termination of gully processes requires water-resource management, soil conservation, and community migration. Gully erosion is localized in the Coastal Plain Sands, Nanka Sands, and Nsukka Sandstone of the Anambra-Imo basin region. 
The most affected deposits are unconsolidated or poorly consolidated and have short dispersion times. Public education is essential for a sustainable termination strategy, and collaboration between the government, donors, the private sector, and rural people is crucial. On Mars Gullies are widespread at mid-to-high latitudes on the surface of Mars and are some of the youngest features observed on that planet, probably having formed within the last few hundred thousand years. There, they are one of the best lines of evidence for the presence of liquid water on Mars in the recent geological past, probably resulting from the slight melting of snowpacks on the surface or of ice in the shallow subsurface on the warmest days of the Martian year. Flow as springs from liquid water aquifers in the deeper subsurface is also a possible explanation for the formation of some Martian gullies.
Physical sciences
Fluvial landforms
Earth science
439344
https://en.wikipedia.org/wiki/Right%20whale%20dolphin
Right whale dolphin
Right whale dolphins are cetaceans belonging to the genus Lissodelphis. It contains the northern right whale dolphin (Lissodelphis borealis) and the southern right whale dolphin (Lissodelphis peronii). These cetaceans are predominantly black, white beneath, and among the few without a dorsal fin or ridge. They are smaller members of the delphinid family of oceanic dolphins, and very slender. Despite scientists being long acquainted with the species (the northern species was identified by Titian Peale in 1848 and the southern by Bernard Germain de Lacépède in 1804), little is known about them in terms of life history and behaviour. Physical description Both species have slender bodies, small, pointed flippers and a small fluke. Conspicuously, neither species has a dorsal fin; nor do right whales, which may explain the dolphins' name. The northern right whale dolphin is the only dolphin in the Pacific with this property. Similarly, the southern species is the only finless dolphin in the southern hemisphere. The two species can be readily distinguished (apart from the geographical separation in their ranges) by the extent of the whiteness on the body. Both have white bellies; however, the area of white coloration on the southern species covers much more of the body, including the flanks, flippers, beak and forehead. Northern males are about long at sexual maturity. Females are . Both sexes become mature at about 10 years. New-born right whale dolphins are about half the length of their parents. The southern species is typically larger (up to ) and heavier (up to compared with the Northern's maximum of ). The dolphins live for about 40 years. Distribution The northern right whale dolphin is widely distributed in the temperate North Pacific, in a band running from Kamchatka and mainland Japan in the west to British Columbia and down to the Baja California Peninsula in the east. It is not known with certainty if they follow a migratory pattern. 
However, individuals have been observed close to the Californian shore following their main food source, squid, in winter and spring. Such sightings have not been recorded in summer. Otherwise these dolphins are pelagic. No global population estimates exist. There are an estimated 14,000 individuals close to the North American shoreline. The southern right whale dolphin has a circumpolar distribution running from about 40° to 55°. They are sighted in the Tasman Sea in particular. Behaviour Both species are highly gregarious. They move in pods of several hundred individuals and sometimes congregate in groups of 3000. The groups may also contain dusky dolphins and pilot whales (in the south) and Pacific white-sided dolphins (in the north). These dolphins are some of the fastest swimmers (in excess of 40 km/h). They can by turns become very boisterous and breach and tail-slap or become very quiet and almost undetectable at sea. At high speed they can leap up to 7 metres across the ocean's surface in a graceful bouncing motion. The species will generally avoid boats, but bow-riding has been recorded on occasion. A single and rare stranding has been recorded for the northern species. On 9 June 2018, a 5.5-foot female was found deceased on Manzanita Beach on the coast of Oregon. There has been one recorded instance of 77 southern right whale dolphins stranding on Chatham Island. Conservation The southern species is under pressure from Peruvian whaling operations. The northern species has never been commercially targeted. However, tens of thousands of the northern species were killed in the 1980s due to their becoming caught in oceanic drift gillnets introduced at that time. Gillnets were banned by the United Nations in 1993. Conservation campaigners work vigorously to try to ensure these bans are retained. Attempts to keep right whale dolphins in aquaria have all ended in failure and death. In all cases but one, they have died within three weeks.
Biology and health sciences
Toothed whale
Animals
439497
https://en.wikipedia.org/wiki/Classical%20limit
Classical limit
The classical limit or correspondence limit is the ability of a physical theory to approximate or "recover" classical mechanics when considered over special values of its parameters. The classical limit is used with physical theories that predict non-classical behavior. Quantum theory A heuristic postulate called the correspondence principle was introduced to quantum theory by Niels Bohr: in effect it states that some kind of continuity argument should apply to the classical limit of quantum systems as the value of the Planck constant normalized by the action of these systems becomes very small. Often, this is approached through "quasi-classical" techniques (cf. WKB approximation). More rigorously, the mathematical operation involved in classical limits is a group contraction, approximating physical systems where the relevant action is much larger than the reduced Planck constant ħ, so the "deformation parameter" ħ/S can be effectively taken to be zero (cf. Weyl quantization). Thus typically, quantum commutators (equivalently, Moyal brackets) reduce to Poisson brackets, in a group contraction. In quantum mechanics, due to Heisenberg's uncertainty principle, an electron can never be at rest; it must always have a non-zero kinetic energy, a result not found in classical mechanics. For example, if we consider something very large relative to an electron, like a baseball, the uncertainty principle predicts that it cannot really have zero kinetic energy, but the uncertainty in kinetic energy is so small that the baseball can effectively appear to be at rest, and hence it appears to obey classical mechanics. In general, if large energies and large objects (relative to the size and energy levels of an electron) are considered in quantum mechanics, the result will appear to obey classical mechanics. The typical occupation numbers involved are huge: a macroscopic harmonic oscillator with ω = 2 Hz, m = 10 g, and maximum amplitude x₀ = 10 cm has an action S ≈ E/ω ≈ mωx₀²/2 ≈ 10⁻⁴ kg·m²/s = ħn, so that the occupation number n ≃ 10³⁰. 
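The oscillator's enormous occupation number can be reproduced numerically. A minimal sketch, assuming the quoted "2 Hz" is the angular frequency ω in rad/s (the source's template values were partly garbled, so the symbol assignments here are an interpretation):

```python
# Occupation number n = S / hbar for a macroscopic harmonic oscillator
# with omega = 2 rad/s, m = 10 g, and amplitude x0 = 10 cm.
HBAR = 1.054_571_817e-34  # reduced Planck constant, J*s

omega = 2.0   # angular frequency, rad/s (assumed interpretation of "2 Hz")
m = 0.010     # mass, kg
x0 = 0.10     # maximum amplitude, m

energy = 0.5 * m * omega**2 * x0**2  # total oscillator energy, J
action = energy / omega              # S = m*omega*x0**2/2, ~1e-4 kg*m^2/s
n = action / HBAR                    # quantum occupation number
print(f"{n:.2e}")                    # ~1e30
```

Whatever the exact frequency convention, n comes out near 10³⁰, which is the point: the quantum level spacing is utterly invisible at macroscopic scales.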
Further see coherent states. It is less clear, however, how the classical limit applies to chaotic systems, a field known as quantum chaos. Quantum mechanics and classical mechanics are usually treated with entirely different formalisms: quantum theory using Hilbert space, and classical mechanics using a representation in phase space. One can bring the two into a common mathematical framework in various ways. In the phase space formulation of quantum mechanics, which is statistical in nature, logical connections between quantum mechanics and classical statistical mechanics are made, enabling natural comparisons between them, including the violations of Liouville's theorem (Hamiltonian) upon quantization. In a crucial paper (1933), Dirac explained how classical mechanics is an emergent phenomenon of quantum mechanics: destructive interference among paths with non-extremal macroscopic actions S ≫ ħ obliterates amplitude contributions in the path integral he introduced, leaving the extremal action class, and thus the classical action path, as the dominant contribution, an observation further elaborated by Feynman in his 1942 PhD dissertation. (Further see quantum decoherence.) Time-evolution of expectation values One simple way to compare classical to quantum mechanics is to consider the time-evolution of the expected position and expected momentum, which can then be compared to the time-evolution of the ordinary position and momentum in classical mechanics. The quantum expectation values satisfy the Ehrenfest theorem. For a one-dimensional quantum particle moving in a potential V, the Ehrenfest theorem says d⟨x⟩/dt = ⟨p⟩/m and d⟨p⟩/dt = −⟨V′(x)⟩. Although the first of these equations is consistent with classical mechanics, the second is not: if the pair (⟨x⟩, ⟨p⟩) were to satisfy Newton's second law, the right-hand side of the second equation would have read −V′(⟨x⟩). But in most cases, ⟨V′(x)⟩ ≠ V′(⟨x⟩). If, for example, the potential V is cubic, then V′ is quadratic, in which case we are talking about the distinction between ⟨x²⟩ and ⟨x⟩², which differ by (Δx)². 
An exception occurs when the classical equations of motion are linear, that is, when V is quadratic and V′ is linear. In that special case, ⟨V′(x)⟩ and V′(⟨x⟩) do agree. In particular, for a free particle or a quantum harmonic oscillator, the expected position and expected momentum exactly follow solutions of Newton's equations. For general systems, the best we can hope for is that the expected position and momentum will approximately follow the classical trajectories. If the wave function is highly concentrated around a point x₀, then V′(⟨x⟩) and ⟨V′(x)⟩ will be almost the same, since both will be approximately equal to V′(x₀). In that case, the expected position and expected momentum will remain very close to the classical trajectories, at least for as long as the wave function remains highly localized in position. Now, if the initial state is very localized in position, it will be very spread out in momentum, and thus we expect that the wave function will rapidly spread out and the connection with the classical trajectories will be lost. When the Planck constant is small, however, it is possible to have a state that is well localized in both position and momentum. The small uncertainty in momentum ensures that the particle remains well localized in position for a long time, so that the expected position and momentum continue to closely track the classical trajectories for a long time. Relativity and other deformations Other familiar deformations in physics involve: The deformation of classical Newtonian mechanics into relativistic mechanics (special relativity), with deformation parameter v/c; the classical limit involves small speeds, so v/c → 0, and the systems appear to obey Newtonian mechanics. 
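The gap between ⟨V′(x)⟩ and V′(⟨x⟩) for a non-quadratic potential is easy to see numerically. A minimal sketch: for V(x) = x³ we have V′(x) = 3x², so ⟨V′(x)⟩ = 3(⟨x⟩² + (Δx)²), exceeding V′(⟨x⟩) by exactly 3(Δx)². Here a Gaussian sample stands in for a localized wave packet's position distribution |ψ|² (an illustrative assumption, not a quantum simulation):

```python
# For V(x) = x**3: compare <V'(x)> with V'(<x>) over a localized
# Gaussian position distribution; they differ by 3 * sigma**2.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.5                  # packet centre and spread
x = rng.normal(mu, sigma, 1_000_000)  # stand-in samples from |psi|^2

lhs = np.mean(3 * x**2)  # <V'(x)>
rhs = 3 * mu**2          # V'(<x>)
print(lhs - rhs)         # ~3 * sigma**2 = 0.75
```

As sigma shrinks (a tightly localized packet), the discrepancy vanishes like (Δx)², which is the quantitative content of the "highly concentrated wave function" argument above.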
Similarly, for the deformation of Newtonian gravity into general relativity, with deformation parameter Schwarzschild-radius/characteristic-dimension, we find that objects once again appear to obey classical mechanics (flat space) when the mass of an object times the square of the Planck length is much smaller than its size and the sizes of the problem addressed. See Newtonian limit. Wave optics might also be regarded as a deformation of ray optics, for deformation parameter λ/a (the wavelength relative to the characteristic dimension of the system). Likewise, thermodynamics deforms to statistical mechanics with deformation parameter 1/N, the reciprocal of the particle number.
Physical sciences
Physics basics: General
Physics
439973
https://en.wikipedia.org/wiki/Herbal%20medicine
Herbal medicine
Herbal medicine (also called herbalism, phytomedicine or phytotherapy) is the study of pharmacognosy and the use of medicinal plants, which are a basis of traditional medicine. With worldwide research into pharmacology, some herbal medicines have been translated into modern remedies, such as the anti-malarial group of drugs called artemisinin isolated from Artemisia annua, a herb that was known in Chinese medicine to treat fever. There is limited scientific evidence for the safety and efficacy of many plants used in 21st-century herbalism, which generally does not provide standards for purity or dosage. The scope of herbal medicine sometimes includes fungal and bee products, as well as minerals, shells and certain animal parts. Paraherbalism describes alternative and pseudoscientific practices of using unrefined plant or animal extracts as unproven medicines or health-promoting agents. Paraherbalism relies on the belief that preserving various substances from a given source with less processing is safer or more effective than manufactured products, a concept for which there is no evidence. History Archaeological evidence indicates that the use of medicinal plants dates back to the Paleolithic age, approximately 60,000 years ago. Written evidence of herbal remedies dates back over 5,000 years to the Sumerians, who compiled lists of plants. Some ancient cultures wrote about plants and their medical uses in books called herbals. In ancient Egypt, herbs were mentioned in Egyptian medical papyri, depicted in tomb illustrations, or on rare occasions found in medical jars containing trace amounts of herbs. In ancient Egypt, the Ebers papyrus dates from about 1550 BCE, and covers more than 700 compounds, mainly of plant origin. The earliest known Greek herbals came from Theophrastus of Eresos who, in the 4th century BCE, wrote in Greek Historia Plantarum, from Diocles of Carystus who wrote during the 3rd century BCE, and from Krateuas who wrote in the 1st century BCE. 
Only a few fragments of these works have survived intact, but from what remains, scholars have noted overlap with the Egyptian herbals. Seeds likely used for herbalism were found in archaeological sites of Bronze Age China dating from the Shang dynasty (c. 1600–1046 BCE). Over a hundred of the 224 compounds mentioned in the Huangdi Neijing, an early Chinese medical text, are herbs. Herbs were also commonly used in the traditional medicine of ancient India, where the principal treatment for diseases was diet. De Materia Medica, originally written in Greek by Pedanius Dioscorides (1st century CE) of Anazarbus, Cilicia, a physician and botanist, is one example of herbal writing that was used over the centuries until the 1600s. Modern herbal medicine The World Health Organization (WHO) estimates that 80 percent of the population of some Asian and African countries presently uses herbal medicine for some aspect of primary health care. Some prescription drugs have a basis as herbal remedies, including artemisinin, digitalis, quinine and taxanes. Regulatory review In 2015, the Australian Government's Department of Health published the results of a review of alternative therapies that sought to determine if any were suitable for being covered by health insurance; herbalism was one of 17 topics evaluated for which no clear evidence of effectiveness was found. Establishing guidelines to assess the safety and efficacy of herbal products, the European Medicines Agency provided criteria in 2017 for evaluating and grading the quality of clinical research in preparing monographs about herbal products. In the United States, the National Center for Complementary and Integrative Health of the National Institutes of Health funds clinical trials on herbal compounds, provides fact sheets evaluating the safety, potential effectiveness and side effects of many plant sources, and maintains a registry of clinical research conducted on herbal products. 
According to Cancer Research UK as of 2015, "there is currently no strong evidence from studies in people that herbal remedies can treat, prevent or cure cancer". Prevalence of use The use of herbal remedies is more prevalent in people with chronic diseases, such as cancer, diabetes, asthma, and end-stage kidney disease. Multiple factors such as gender, age, ethnicity, education and social class are also shown to have associations with the prevalence of herbal remedy use. Herbal preparations There are many forms in which herbs can be administered, the most common of which is a liquid consumed as a herbal tea or a (possibly diluted) plant extract. Herbal teas, or tisanes, are the resultant liquid of extracting herbs into water, though they are made in a few different ways. Infusions are hot water extracts of herbs, such as chamomile or mint, through steeping. Decoctions are the long-term boiled extracts, usually of harder substances like roots or bark. Maceration is the cold infusion of plants with high mucilage-content, such as sage or thyme. To make macerates, plants are chopped and added to cold water. They are then left to stand for 7 to 12 hours (depending on the herb used). For most macerates, 10 hours is used. Tinctures are alcoholic extracts of herbs, which are generally stronger than herbal teas. Tinctures are usually obtained by combining pure ethanol (or a mixture of pure ethanol with water) with the herb. A completed tincture has an ethanol percentage of at least 25% (sometimes up to 90%). Non-alcoholic tinctures can be made with glycerin but it is believed to be less absorbed by the body than alcohol based tinctures and has a shorter shelf life. Herbal wine and elixirs are alcoholic extracts of herbs, usually with an ethanol percentage of 12–38%. Extracts include liquid extracts, dry extracts, and nebulisates. Liquid extracts are liquids with a lower ethanol percentage than tinctures. They are usually made by vacuum distilling tinctures. 
Dry extracts are extracts of plant material that are evaporated into a dry mass. They can then be further refined into a capsule or tablet. The exact composition of a herbal product is influenced by the method of extraction. A tea will be rich in polar components because water is a polar solvent. Oil, on the other hand, is a non-polar solvent that will absorb non-polar compounds. Alcohol lies somewhere in between. Many herbs are applied topically to the skin in a variety of forms. Essential oil extracts can be applied to the skin, usually diluted in a carrier oil. Many essential oils can burn the skin or are simply too concentrated to be used undiluted; diluting them in olive oil or another food-grade oil such as almond oil allows them to be used safely as a topical. Salves, oils, balms, creams, and lotions are other forms of topical delivery mechanisms. Most topical applications are oil extractions of herbs: taking a food-grade oil and soaking herbs in it for anywhere from weeks to months allows certain phytochemicals to be extracted into the oil. This oil can then be made into salves, creams, or lotions, or simply used as an oil for topical application. Many massage oils, antibacterial salves, and wound-healing compounds are made this way. Inhalation, as in aromatherapy, can be used as a treatment.

Safety

It is a popular misconception that herbal medicines are safe and side-effect free. Consumption of herbs may cause adverse effects. Furthermore, "adulteration, inappropriate formulation, or lack of understanding of plant and drug interactions have led to adverse reactions that are sometimes life threatening or lethal." Proper double-blind clinical trials are needed to determine the safety and efficacy of each plant before medical use. Although many consumers believe that herbal medicines are safe because they are natural, herbal medicines and synthetic drugs may interact, causing toxicity to the consumer.
Herbal remedies can also be dangerously contaminated, and herbal medicines without established efficacy may unknowingly be used to replace prescription medicines. Standardization of purity and dosage is not mandated in the United States, but even products made to the same specification may differ as a result of biochemical variations within a species of plant. Plants have chemical defense mechanisms against predators that can have adverse or lethal effects on humans. Examples of highly toxic herbs include poison hemlock and nightshade. They are not marketed to the public as herbs, because the risks are well known, partly due to a long and colorful history in Europe, associated with "sorcery", "magic" and intrigue. Although not frequent, adverse reactions have been reported for herbs in widespread use. On occasion, serious untoward outcomes have been linked to herb consumption. A case of major potassium depletion has been attributed to chronic licorice ingestion, and consequently professional herbalists avoid the use of licorice where they recognize that this may be a risk. Black cohosh has been implicated in a case of liver failure. Few studies are available on the safety of herbs for pregnant women, and one study found that use of complementary and alternative medicines is associated with a 30% lower ongoing pregnancy and live birth rate during fertility treatment. Examples of herbal treatments with likely cause-effect relationships with adverse events include aconite (which is often a legally restricted herb), Ayurvedic remedies, broom, chaparral, Chinese herb mixtures, comfrey, herbs containing certain flavonoids, germander, guar gum, liquorice root, and pennyroyal. Examples of herbs that may have long-term adverse effects include ginseng, the endangered herb goldenseal, milk thistle, senna, aloe vera juice, buckthorn bark and berry, cascara sagrada bark, saw palmetto, valerian, kava (which is banned in the European Union), St.
John's wort, khat, betel nut, the restricted herb ephedra, and guarana. There is also concern with respect to the numerous well-established interactions of herbs and drugs. Usage of herbal remedies should be clarified in consultation with a physician, as some herbal remedies have the potential to cause adverse drug interactions when used in combination with various prescription and over-the-counter pharmaceuticals, just as a customer should inform a herbalist of their consumption of prescription and other medication. For example, dangerously low blood pressure may result from the combination of a herbal remedy that lowers blood pressure together with prescription medicine that has the same effect. Some herbs may amplify the effects of anticoagulants. Certain herbs as well as common fruit interfere with cytochrome P450, an enzyme critical to much drug metabolism. In a 2018 study, the FDA identified active pharmaceutical additives in over 700 analyzed dietary supplements sold as "herbal", "natural" or "traditional". The undisclosed additives included "unapproved antidepressants and designer steroids", as well as prescription drugs such as sildenafil or sibutramine.

Labeling accuracy

Researchers at the University of Adelaide found in 2014 that almost 20 percent of herbal remedies surveyed were not registered with the Therapeutic Goods Administration, despite this being a condition for their sale. They also found that nearly 60 percent of products surveyed had ingredients that did not match what was on the label. Out of 121 products, only 15 had ingredients that matched their TGA listing and packaging. In 2015, the New York Attorney General issued cease and desist letters to four major U.S. retailers (GNC, Target, Walgreens, and Walmart) that were accused of selling herbal supplements that were mislabeled and potentially dangerous.
Twenty-four products were tested by DNA barcoding as part of the investigation, with all but five containing DNA that did not match the product labels.

Practitioners of herbalism

In some countries, formalized training and minimum education standards exist for herbalists, although these are not necessarily uniform within or between countries. In Australia, for example, the self-regulated status of the profession (as of 2009) resulted in variable standards of training, and numerous loosely formed associations setting different educational standards. One 2009 review concluded that regulation of herbalists in Australia was needed to reduce the risk of interaction of herbal medicines with prescription drugs, to implement clinical guidelines and prescription of herbal products, and to assure self-regulation for protection of public health and safety. In the United Kingdom, the training of herbalists is done by state-funded universities offering Bachelor of Science degrees in herbal medicine. In the United States, according to the American Herbalist Guild, "there is currently no licensing or certification for herbalists in any state that precludes the rights of anyone to use, dispense, or recommend herbs." However, there are U.S. federal restrictions for marketing herbs as cures for medical conditions, or essentially practicing as an unlicensed physician.

United States herbalism fraud

Over the years 2017–2021, the U.S. Food and Drug Administration (FDA) issued warning letters to numerous herbalism companies for illegally marketing products under "conditions that cause them to be drugs under section 201(g)(1) of the Act [21 U.S.C. § 321(g)(1)], because they are intended for use in the diagnosis, cure, mitigation, treatment, or prevention of disease and/or intended to affect the structure or any function of the body" when no such evidence existed. During the COVID-19 pandemic, the FDA and U.S.
Federal Trade Commission issued warnings to several hundred American companies for promoting false claims that herbal products could prevent or treat COVID-19 disease.

Government regulations

The World Health Organization (WHO), the specialized agency of the United Nations (UN) that is concerned with international public health, published Quality control methods for medicinal plant materials in 1998 to support WHO Member States in establishing quality standards and specifications for herbal materials, within the overall context of quality assurance and control of herbal medicines. In the European Union (EU), herbal medicines are regulated under the Committee on Herbal Medicinal Products. In the United States, herbal remedies are regulated as dietary supplements by the Food and Drug Administration (FDA) under current good manufacturing practice (cGMP) policy for dietary supplements. Manufacturers of products falling into this category are not required to prove the safety or efficacy of their product so long as they do not make 'medical' claims or imply uses other than as a 'dietary supplement', though the FDA may withdraw a product from sale should it prove harmful. Canadian regulations are described by the Natural and Non-prescription Health Products Directorate, which requires an eight-digit Natural Product Number or Homeopathic Medicine Number on the label of licensed herbal medicines or dietary supplements. Some herbs, such as cannabis and coca, are outright banned in most countries, though coca is legal in most of the South American countries where it is grown. The Cannabis plant is used as a herbal medicine, and as such is legal in some parts of the world. Since 2004, the sale of ephedra as a dietary supplement has been prohibited in the United States by the FDA, and is subject to Schedule III restrictions in the United Kingdom.
Scientific criticism

Herbalism has been criticized as a potential "minefield" of unreliable product quality, safety hazards, and the potential for misleading health advice. Globally, there are no standards across various herbal products to authenticate their contents, safety or efficacy, and there is generally an absence of high-quality scientific research on product composition or effectiveness for anti-disease activity. Presumed claims of therapeutic benefit from herbal products, without rigorous evidence of efficacy and safety, are viewed skeptically by scientists. Unethical practices by some herbalists and manufacturers, which may include false advertising about health benefits on product labels or literature, and contamination or use of fillers during product preparation, may erode consumer confidence about services and products.

Paraherbalism

Paraherbalism is the pseudoscientific use of extracts of plant or animal origin as supposed medicines or health-promoting agents. Phytotherapy differs from plant-derived medicines in standard pharmacology because it does not isolate and standardize the compounds from a given plant believed to be biologically active. It relies on the false belief that preserving the complexity of substances from a given plant with less processing is safer and potentially more effective, for which there is no evidence that either condition applies. Phytochemical researcher Varro Eugene Tyler described paraherbalism as "faulty or inferior herbalism based on pseudoscience", using scientific terminology but lacking scientific evidence for safety and efficacy.
Tyler listed ten fallacies that distinguished herbalism from paraherbalism, including claims that there is a conspiracy to suppress safe and effective herbs; that herbs cannot cause harm; that whole herbs are more effective than molecules isolated from the plants; that herbs are superior to drugs; that the doctrine of signatures (the belief that the shape of the plant indicates its function) is valid; that dilution of substances increases their potency (a doctrine of the pseudoscience of homeopathy); that astrological alignments are significant; that animal testing is not appropriate to indicate human effects; that anecdotal evidence is an effective means of proving a substance works; and that herbs were created by God to cure disease. Tyler suggests that none of these beliefs have any basis in fact.

Traditional systems

Africa

Up to 80% of the population in Africa uses traditional medicine as primary health care.

Americas

Native Americans used about 2,500 of the approximately 20,000 plant species that are native to North America. In Andean healing practices, the use of entheogens, in particular the San Pedro cactus (Echinopsis pachanoi), is still a vital component, and has been around for millennia.

China

Some researchers trained in both Western and traditional Chinese medicine have attempted to deconstruct ancient medical texts in the light of modern science. In 1972, Tu Youyou, a pharmaceutical chemist and Nobel Prize winner, extracted the anti-malarial drug artemisinin from sweet wormwood, a traditional Chinese treatment for intermittent fevers.

India

In India, Ayurvedic medicine has quite complex formulas with 30 or more ingredients, including a sizable number of ingredients that have undergone "alchemical processing", chosen to balance dosha. In Ladakh, Lahul-Spiti, and Tibet, the Tibetan Medical System is prevalent, also called the "Amichi Medical System". Over 337 species of medicinal plants have been documented by C.P. Kala. These are used by Amchis, the practitioners of this medical system.
The ancient Indian texts, the Vedas, mention treatment of diseases with plants.

Indonesia

In Indonesia, especially among the Javanese, the jamu traditional herbal medicine may have originated in the Mataram Kingdom era, some 1300 years ago. The bas-reliefs on Borobudur depict the image of people grinding herbs with stone mortar and pestle, a drink seller, a herbalist, and a masseuse treating people. The Madhawapura inscription from the Majapahit period mentioned a specific profession of herb mixer and combiner (herbalist), called Acaraki. The book from Mataram dated from circa 1700 contains 3,000 entries of jamu herbal recipes, while the Javanese classical literature Serat Centhini (1814) describes some jamu herbal concoction recipes. Though possibly influenced by Indian Ayurveda systems, the Indonesian archipelago holds numerous indigenous plants not found in India, including plants similar to those in Australia beyond the Wallace Line. Jamu practices may vary from region to region, and are often not recorded, especially in remote areas of the country. Although primarily herbal, some jamu materials are acquired from animals, such as honey, royal jelly, milk, and Ayam Kampung eggs.

Beliefs

Herbalists tend to use extracts from parts of plants, such as the roots or leaves, believing that plants are subject to environmental pressures and therefore develop resistance to threats such as radiation, reactive oxygen species and microbial attack in order to survive, providing defensive phytochemicals of use in herbalism.

Use of plants by animals

Indigenous healers often claim to have learned by observing that sick animals change their food preferences to nibble at bitter herbs they would normally reject. Field biologists have provided corroborating evidence based on observation of diverse species, such as chickens, sheep, butterflies, and chimpanzees. The habit of changing diet has been shown to be a physical means of purging intestinal parasites.
Sick animals tend to forage plants rich in secondary metabolites, such as tannins and alkaloids.
Sea spider
Sea spiders are marine arthropods of the class Pycnogonida, hence they are also called pycnogonids (; named after Pycnogonum, the type genus; with the suffix ). The class includes the only extant order Pantopoda ( 'all feet'), alongside a few fossil species which trace back to the early or mid Paleozoic. They are cosmopolitan, found in oceans around the world. The over 1,300 known species have leg spans ranging from to over . Most are toward the smaller end of this range in relatively shallow depths; however, they can grow to be quite large in Antarctic and deep waters. Despite their name and superficial resemblance, "sea spiders" are not spiders, nor even arachnids. While some studies from around the 2000s suggested they may be a sister group to all other living arthropods, their traditional classification as members of the chelicerates, alongside horseshoe crabs and arachnids, regained wide support in subsequent studies.

Morphology

Many sea spiders are recognised by their enormous walking legs in contrast to a reduced body region, resulting in the so-called "all legs" or "no body" appearance. The body segments (somites) are generally interpreted as three main sections (tagmata): cephalon (head, aka cephalosoma), trunk (aka thorax) and abdomen. However, the definition of cephalon and trunk may differ between publications (see text), and some studies may follow a prosoma (=cephalon+trunk)-opisthosoma (=abdomen) definition, aligning with the tagmosis of other chelicerates. The exoskeleton of the body is tube-like, lacking the dorsoventral division (tergite and sternite) seen in most other arthropods. The cephalon is formed by the fusion of the ocular somite and the four anterior segments behind it (somites 1-4). It consists of an anterior proboscis, a dorsal ocular tubercle with eyes, and up to four pairs of appendages (chelifores, palps, ovigers and first walking legs).
Although some literature may consider the segment carrying the first walking leg (somite 4) to be part of the trunk, it is completely fused with the remaining head section to form a single cephalic tagma. The proboscis has three-fold symmetry, terminating in a typically Y-shaped mouth (a vertical slit in Austrodecidae). It usually has fairly limited dorsoventral and lateral movement. However, in those species that have reduced chelifores and palps, the proboscis is well developed and flexible, often equipped with numerous sensory bristles and strong rasping ridges around the mouth. The proboscis is unique to pycnogonids, and its exact homology with other arthropod mouthparts is enigmatic, as is its relationship with the absence of a labrum (the preoral upper lip of the ocular somite) in pycnogonids themselves. The ocular tubercle has up to two pairs of simple eyes (ocelli) on it, though sometimes the eyes can be reduced or missing, especially among species living in the deep oceans. All of the eyes are median eyes in origin, homologous to the median ocelli of other arthropods, while the lateral eyes (e.g. compound eyes) found in most other arthropods are completely absent. In adult pycnogonids, the chelifores (aka cheliphores), palps and ovigers (aka ovigerous legs) are variably reduced or absent, depending on taxa and sometimes sex. Nymphonidae is the only family in which all three pairs are always functional. The ovigers can be reduced or missing in females, but are present in almost all males. In a functional condition, the chelifores terminate in a pincer (chela) formed by 2 segments (podomeres), like the chelicerae of most other chelicerates. The scape (peduncle) behind the pincer is usually unsegmented, but can be 2-segmented in some species, resulting in a total of 3 or 4 chelifore segments. The palps and ovigers have up to 9 and 10 segments respectively, but can have fewer even when in a functional condition.
The palps are rather featureless and never have claws in adult Pantopoda, while the ovigers may or may not possess a terminal claw and rows of specialised spines on their curved distal segments (strigilis). The chelifores are used for feeding and the palps are used for sensing and manipulating food items, while the ovigers are used for cleaning, with the additional function of carrying offspring in males. The leg-bearing somites (somite 4 and all trunk somites, the alternatively defined "trunk/thorax") are either segmented or fused to each other, carrying the walking legs via a series of lateral processes (lateral tubular extensions of the somites). In most species, the legs are much larger than the body in both length and volume, only being shorter and more slender than the body in Rhynchothoracidae. Each leg is typically composed of 8 tubular segments, commonly known as coxa 1, 2 and 3, femur, tibia 1 and 2, tarsus, and propodus. This terminology, with 3 coxae, no trochanter, and use of the term "propodus", is unusual for arthropods. However, based on the muscular system and serial homology with the podomeres of other chelicerates, they are most likely coxa (=coxa 1), trochanter (=coxa 2), prefemur/basifemur (=coxa 3), postfemur/telofemur (=femur), patella (=tibia 1), tibia (=tibia 2) and 2 tarsomeres (=tarsus and propodus) in origin. The leg segmentation of Paleozoic taxa is somewhat different: notably, they have an annulated coxa 1, and they fall into 2 types: one with flattened distal segments (femur and beyond) and a first leg pair with one less segment than the other leg pairs (e.g. Palaeoisopus, Haliestes), and another with an immobile joint between the apparent fourth and fifth segments, which altogether might represent a divided femur (e.g. Palaeopantopus, Flagellopantopus). Each leg terminates in a main claw (aka pretarsus/apotele, the true terminal segment), which may or may not have a pair of auxiliary claws at its base.
Most of the joints move vertically, except the joint between coxae 1-2 (coxa-trochanter joint), which provides lateral mobility (promotor-remotor motion), and the joint between tarsus and propodus, which lacks muscles, just like the subdivided tarsus of other arthropods. There are usually 8 legs (4 pairs) in total, but a few species have 5 to 6 pairs. These are known as polymerous (i.e., extra-legged) species, which are distributed among 6 genera in the families Pycnogonidae (5 pairs in Pentapycnon), Colossendeidae (5 pairs in Decolopoda and Pentacolossendeis, 6 pairs in Dodecolopoda) and Nymphonidae (5 pairs in Pentanymphon, 6 pairs in Sexanymphon). Several alternatives have been proposed for the positional homology of pycnogonid appendages, such as the chelifores being protocerebral/homologous to the labrum (see text) or the ovigers being duplicated palps. Conclusively, the classic, morphology-based one-by-one alignment with the prosomal appendages of other chelicerates was confirmed by both neuroanatomical and genetic evidence. Notably, the order of pycnogonid leg pairs is mismatched with that of other chelicerates, starting from the ovigers, which are homologous to the 1st leg pair of arachnids. While the 4th walking leg pair is considered to align with the variably reduced 1st opisthosomal segment (somite 7, also counted as part of the prosoma based on different studies and/or taxa) of euchelicerates, the origin of the additional 5-6th leg pairs in the polymerous species is still enigmatic. Together with the cephalic position of the 1st walking legs, the anterior and posterior boundaries of the pycnogonid leg pairs do not align with those of the euchelicerate prosoma and opisthosoma, nor with the cephalon and trunk of pycnogonids themselves. The abdomen (aka trunk end) does not have any appendages. In Pantopoda it is also called the anal tubercle, which is always unsegmented, highly reduced and almost vestigial, simply terminated by the anus.
It is considered to be a remnant of the opisthosoma/trunk of other chelicerates, but it is unknown which somite(s) it actually corresponds to. So far only Paleozoic species have a segmented abdomen (at least up to 4 segments, presumably somites 8-11, which align with opisthosomal segments 2-5 of euchelicerates), with some of them even terminating in a long telson (tail).

Internal anatomy and physiology

A striking feature of pycnogonid anatomy is the distribution of their digestive and reproductive systems. The pharynx inside the proboscis is lined with dense setae, which is possibly related to their feeding behaviour. A pair of gonads (ovaries in females, testes in males) is located dorsally in relation to the digestive tract, but the majority of these organs consist of branched diverticula throughout the legs, because the body is too small to accommodate all of them alone. The midgut diverticula are very long, usually reaching beyond the femur (variably down to tibia 2, tarsus or propodus) of each leg, except in Rhynchothoracidae, in which they only reach coxa 1. Some species have additional branches (in some Pycnogonum) or irregular pouches (in Endeis) on the diverticula. There is also a pair of anterior diverticula which correspond to the chelifores, or insert into the proboscis in some chelifore-less species. The palps and ovigers never contain diverticula, although some species might possess a pair of small diverticula near the bases of these appendages. The gonad diverticula (pedal gonads) reach each femur and open via a gonopore located at coxa 2. The structure and number of the gonopores may differ between sexes (e.g. larger in females, variably absent on the anterior legs of some males). In males, the femur or both the femur and tibia possess cement glands. Pycnogonids do not require a traditional respiratory system (e.g. gills). Instead, gases are absorbed by the legs via the non-calcareous, porous exoskeleton and transferred through the body by diffusion.
The morphology of pycnogonids creates an efficient surface-area-to-volume ratio for respiration to occur through direct diffusion. Oxygen is absorbed by the legs and is transported via the hemolymph to the rest of the body through an open circulatory system. The small, long, thin pycnogonid heart beats vigorously at 90 to 180 beats per minute, creating substantial blood pressure. The beating of the heart drives circulation in the trunk and in the part of the legs closest to the trunk, but is not important for the circulation in the rest of the legs. Hemolymph circulation in the legs is mostly driven by the peristaltic movement of the gut diverticula that extend into every leg, a process called gut peristalsis. In taxa without a heart (e.g. Pycnogonidae), the whole circulatory system is presumed to be maintained solely by gut peristalsis. The central nervous system of pycnogonids largely retains a segmented, ladder-like structure. It consists of a dorsal brain (supraesophageal ganglion) and a pair of ventral nerve cords, intercepted by the esophagus. The former is a fusion of the first and second brain segments (cerebral ganglia), the protocerebrum and deutocerebrum, corresponding to the eyes/ocular somite and the chelifores/somite 1 respectively. The whole section is rotated, with the protocerebrum pointing upward and the deutocerebrum shifted forward. The third brain segment, the tritocerebrum (corresponding to the palps/somite 2), is fused to the oviger/somite 3 ganglia instead, which are followed by a series of leg ganglia (somite 4 and so on). The leg ganglia may shift anteriorly or even cluster together, but are never highly fused into the ring-like synganglion of other chelicerates. The abdominal ganglia are vestigial, absorbed by the preceding leg ganglia during juvenile development.
Distribution and ecology

Sea spiders live in many different oceanic regions of the world, from Australia, New Zealand, and the Pacific coast of the United States, to the Mediterranean Sea and the Caribbean Sea, to the north and south poles. They are most common in shallow waters, but can be found as deep as , and live in both marine and estuarine habitats. Pycnogonids are well camouflaged beneath the rocks and among the algae that are found along shorelines. Sea spiders are benthic in general, usually walking along the bottom with their stilt-like legs, but they are also capable of swimming using an umbrella-like pulsing motion, and some Paleozoic species with flattened legs may even have had a nektonic lifestyle. Sea spiders are mostly carnivorous predators or scavengers that feed on soft-bodied invertebrates such as cnidarians, sponges, polychaetes, and bryozoans, by inserting their proboscis into the targeted prey item. Although they are known to feed on sea anemones, most sea anemones survive this ordeal, making the sea spider a parasite rather than a predator of sea anemones. A few species such as Nymphonella tapetis are specialised endoparasites of bivalve mollusks. Not much is known about the primary predators of sea spiders, if any. At least some species have obvious defensive methods, such as amputating and regenerating body parts, or making themselves an unpleasant meal via high levels of ecdysteroids (ecdysis hormones). On the other hand, sea spiders are known to be infected by parasitic gastropod mollusks or ridden by sessile animals such as goose barnacles, which may negatively affect their locomotion and respiratory efficiency.

Reproduction and development

All sea spiders have separate sexes, except the only known hermaphroditic species Ascorhynchus corderoi and some extremely rare gynandromorph cases. Among all extant families, the Colossendeidae and Austrodecidae are the only two that still lack any observations of their reproductive behaviour and life cycle.
Reproduction involves external fertilisation: the male and female stack together (usually male on top), releasing sperm and eggs from the gonopores on their leg coxae. After fertilisation, males glue the egg cluster together with their cement glands and use their ovigers (the oviger-lacking Nulloviger uses only the ventral body wall) to take care of the laid eggs and young. In most cases, the offspring hatch as a distinct larval stage known as a protonymphon. It has a blind gut, and the body consists of a cephalon and its first 3 pairs of cephalic appendages only: the chelifores, palps and ovigers. In this stage, the chelifores usually have attachment glands, while the palps and ovigers are subequal, 3-segmented appendages known as palpal and ovigeral larval limbs. When the larvae moult into the postlarval stage, they undergo transitional metamorphosis: the leg-bearing segments develop and the 3 pairs of cephalic appendages further develop or reduce. The postlarva eventually metamorphoses into a juvenile that looks like a miniature adult, which continues to moult into an adult with a fixed number of walking legs. In Pycnogonidae, the ovigers are reduced in juveniles but reappear in oviger-bearing adult males. These kinds of "head-only" larvae and their anamorphic metamorphosis resemble crustacean nauplius larvae and megacheiran larvae, and altogether might reflect how the larvae of a common ancestor of all arthropods developed: starting life as a tiny animal with a few head appendages, while new body segments and appendages were gradually added as it grew. Further details of the postembryonic development of sea spiders vary, and their categorization may differ between publications. As of the 2010s, 5 types have been identified, as follows: type 1 (typical protonymphon) is the most common and possibly the ancestral one.
When type 2 and type 5 (attaching) larvae hatch, they immediately attach themselves to the ovigers of the father, where they stay until they have turned into small, young juveniles with 2 or 3 pairs of walking legs, ready for a free-living existence. Type 3 (atypical protonymphon) has only been observed in a limited number of cases: the adults are free living, while the larvae and the juveniles live on or inside temporary hosts such as polychaetes and clams. Type 4 (encysted larva) is a parasite that hatches from the egg and finds a host in the shape of a polyp colony, into which it burrows and turns into a cyst; it does not leave the host before it has turned into a young juvenile.

Taxonomy

Phylogenetic position

Sea spiders had been interpreted as some kind of arachnids or crustaceans in historical studies. However, after the concept of Chelicerata was established in the 20th century, sea spiders were long considered part of that subphylum, alongside euchelicerate taxa such as Xiphosura (horseshoe crabs) and Arachnida (spiders, scorpions, mites, ticks, harvestmen and other lesser-known orders). A competing hypothesis from around the 2000s proposed that Pycnogonida belong to their own lineage, sister to the lineage leading to all other extant arthropods (i.e. euchelicerates, myriapods, crustaceans and hexapods, collectively known as Cormogonida). This Cormogonida hypothesis was first indicated by early phylogenomic analyses around that time, followed by another study suggesting that the sea spider's chelifores are not positionally homologous to the chelicerae of euchelicerates (which originate from the deutocerebral segment/somite 1), as was previously supposed. Instead, the chelifore nerves were thought to be innervated by the protocerebrum, the first segment of the arthropod brain, which corresponds to the ocular somite, bearing the eyes and labrum.
This condition of having paired protocerebral appendages is not found anywhere else among arthropods, but occurs in other panarthropods such as onychophorans (primary antennae) and, contestably, in Cambrian stem-group arthropods like radiodonts (frontal appendages). This was taken as evidence that Pycnogonida may be more basal than all other living arthropods, since the protocerebral appendages were thought to have been reduced and fused into a labrum in the last common ancestor of crown-group arthropods, and pycnogonids do not have a labrum coexisting with the chelifores. If true, this would mean that sea spiders are the last surviving (and highly modified) members of an ancient, basal arthropod lineage that originated in Cambrian oceans. However, the basis of this hypothesis was soon refuted by subsequent studies using Hox gene expression patterns, which demonstrated the developmental homology between chelicerae and chelifores, with chelifore nerves innervated by a deutocerebrum that has been rotated forwards and was misinterpreted as the protocerebrum by the aforementioned study. Since the 2010s, the chelicerate affinity of Pycnogonida has regained wide support, with sea spiders placed as the sister group of Euchelicerata. On the basis of phylogenomics, this is one of the few stable topologies of chelicerate interrelationships, in contrast to the uncertain relationships of many euchelicerate taxa (e.g. the poorly resolved positions of arachnid orders other than tetrapulmonates and scorpions, and the non-monophyly of Arachnida with respect to Xiphosura). This is consistent with the chelifore-chelicera homology, as well as other morphological similarities and differences between pycnogonids and euchelicerates. However, due to the pycnogonids' highly modified anatomy and the lack of intermediate fossils, their evolutionary origin and relationship with basal fossil chelicerates (such as habeliids and Mollisonia) are still difficult to compare and interpret.
Interrelationship The class Pycnogonida comprises over 1,300 species, which are split into over 80 genera. All extant genera are considered part of the single order Pantopoda, which is subdivided into 11 families. Historically there were only 9 families, with the species now in Ascorhynchidae placed under Ammotheidae and those now in Pallenopsidae under Callipallenidae. Both were eventually separated once they were recognized as distinct from the families to which they had belonged. Phylogenomic analysis of extant sea spiders was able to establish a backbone tree for Pantopoda, revealing some consistent relationships such as the basal position of Austrodecidae, the monophyly of some major branches (later redefined as superfamilies) and the paraphyly of Callipallenidae with respect to Nymphonidae. The topology also suggests that Pantopoda underwent multiple episodes of cephalic appendage reduction/reappearance and repeatedly acquired polymerous species, contrary to previous hypotheses on pantopod evolution (cephalic appendages were thought to have been progressively reduced along the branches, and the polymerous condition was thought to be ancestral). On the other hand, the positions of Ascorhynchidae and Nymphonella are less certain across multiple results. The positions of Paleozoic pycnogonids are poorly examined, but most, if not all, of them most likely represent members of a stem group basal to Pantopoda (crown-group Pycnogonida), especially those with a segmented abdomen, a feature that was most likely ancestral and became reduced in the Pantopoda lineage. While some phylogenetic analyses place them within Pantopoda, these results are questionable, as they have low support values and are based on outdated interpretations of the fossil taxa. According to the World Register of Marine Species, the class Pycnogonida is subdivided as follows (with subsequent updates on fossil taxa after Sabroux et al.
(2023, 2024)):
Genus †Cambropycnogon Waloszek & Dunlop, 2002
Genus †Flagellopantopus Poschmann & Dunlop, 2005 (classified under Pantopoda incertae sedis by WoRMS)
Genus †Haliestes Siveter et al., 2004 (previously classified under Order Nectopantpoda Bamber, 2007 and Family Haliestidae Bamber, 2007)
Genus †Palaeoisopus Broili, 1928 (previously classified under Order Palaeoisopoda Hedgpeth, 1978 and Family Palaeoisopodidae Dubinin, 1957)
Genus †Palaeomarachne Rudkin et al., 2013
Genus †Palaeopantopus Broili, 1929 (previously classified under Order Palaeopantopoda Broili, 1930 and Family Palaeopantopodidae Hedgpeth, 1955)
Genus †Palaeothea Bergstrom, Sturmer & Winter, 1980 (previously classified under Pantopoda, potential nomen dubium)
Genus †Pentapantopus Kühl, Poschmann & Rust, 2013 (previously classified under Pantopoda)
Order Pantopoda Gerstäcker, 1863
 Suborder Eupantopodida Fry, 1978
  Superfamily Ammotheoidea Dohrn, 1881
   Family Ammotheidae Dohrn, 1881
   Family Pallenopsidae Fry, 1978
  Superfamily Ascorhynchoidea Pocock, 1904
   Family Ascorhynchidae Hoek, 1881 (=Eurycydidae Sars, 1891)
  Superfamily Colossendeoidea Hoek, 1881 (=Pycnogonoidea Pocock, 1904; Rhynchothoracoidea Fry, 1978)
   Family Colossendeidae Jarzynsky, 1870
   Family Pycnogonidae Wilson, 1878
   Family Rhynchothoracidae Thompson, 1909
  Superfamily Nymphonoidea Pocock, 1904
   Family Callipallenidae Hilton, 1942
   Family Nymphonidae Wilson, 1878
  Superfamily Phoxichilidioidea Sars, 1891
   Family Endeidae Norman, 1908
   Family Phoxichilidiidae Sars, 1891
 Suborder Stiripasterida Fry, 1978
  Family Austrodecidae Stock, 1954
 Suborder incertae sedis
  Family †Palaeopycnogonididae Sabroux, Edgecombe, Pisani & Garwood, 2023
Genus Alcynous Costa, 1861 (nomen dubium)
Genus Foxichilus Costa, 1836 (nomen dubium)
Genus Oiceobathys Hesse, 1867 (nomen dubium)
Genus Oomerus Hesse, 1874 (nomen dubium)
Genus Paritoca Philippi, 1842 (nomen dubium)
Genus Pephredro Goodsir, 1842 (nomen dubium)
Genus Phanodemus Costa, 1836 (nomen dubium)
Genus
Platychelus Costa, 1861 (nomen dubium) Fossil record The fossil record of pycnogonids is scant, represented only by a handful of fossil sites with exceptional preservation (Lagerstätten). While most of them were discovered from the Paleozoic era, unambiguous evidence of the crown group (Pantopoda) is restricted to the Mesozoic era. The earliest fossils are of Cambropycnogon, discovered from the Cambrian 'Orsten' of Sweden (ca. 500 Ma). So far only its protonymphon larvae have been described, featuring some traits unknown from other pycnogonids, such as paired anterior projections, gnathobasic larval limbs and annulated terminal appendages. Due to this distinct morphology, some studies have argued that the genus is not a pycnogonid at all. Ordovician pycnogonids are known only from Palaeomarachne (ca. 450 Ma), a genus found in William Lake Provincial Park, Manitoba and described in 2013. It preserves only possible moults of fragmentary body segments, with one showing an apparently segmented head region. However, just like Cambropycnogon, its pycnogonid affinity has been questioned by some studies as well. The Silurian Coalbrookdale Formation of England (Haliestes, ca. 425 Ma) and the Devonian Hunsrück Slate of Germany (Flagellopantopus, Palaeopantopus, Palaeoisopus, Palaeothea and Pentapantopus, ca. 400 Ma) include unambiguous fossil pycnogonids with exceptional preservation. The latter is by far the most diverse community of fossil pycnogonids in terms of both species number and morphology. Some of them are significant in that they possess features never seen in pantopods: annulated coxae, flattened swimming legs, a segmented abdomen and an elongated telson. These provide some clues to the evolution of the sea spider bodyplan before the rise and diversification of Pantopoda. Fossils of Mesozoic pycnogonids are even rarer, and so far all of them are Jurassic pantopods. Historically, two genera (Pentapalaeopycnon and Pycnogonites) from the Solnhofen Limestone (ca.
150 Ma) of Germany were described as such, but are in fact misidentified phyllosoma larvae of decapod crustaceans. The actual first report of Mesozoic pycnogonids came from researchers at the University of Lyon in 2007, who described 3 new genera (Palaeopycnogonides, Colossopantopodus and Palaeoendeis) from La Voulte-sur-Rhône of the Jurassic La Voulte Lagerstätte (ca. 160 Ma), south-east France. The discovery fills an enormous gap in the fossil record between Devonian and extant sea spiders. In 2019, a new species of Colossopantopodus and a specimen possibly belonging to the extant genus Eurycyde were discovered from the aforementioned Solnhofen limestone.
https://en.wikipedia.org/wiki/Influenza%20A%20virus
Influenza A virus
Influenza A virus (IAV) is the only species of the genus Alphainfluenzavirus of the virus family Orthomyxoviridae. It is a pathogen with strains that infect birds and some mammals, as well as causing seasonal flu in humans. Mammals in which different strains of IAV circulate with sustained transmission are bats, pigs, horses and dogs; other mammals can occasionally become infected. IAV is an enveloped negative-sense RNA virus, with a segmented genome. Through a combination of mutation and genetic reassortment, the virus can evolve to acquire new characteristics, enabling it to evade host immunity and occasionally to jump from one species of host to another. Subtypes of IAV are defined by the combination of the antigenic H and N proteins in the viral envelope; for example, "H1N1" designates an IAV subtype that has a type-1 hemagglutinin (H) protein and a type-1 neuraminidase (N) protein. Almost all possible combinations of H (1 through 16) and N (1 through 11) have been isolated from wild birds. Further variations exist within the subtypes and can lead to very significant differences in the virus's ability to infect and cause disease, as well as to the severity of symptoms. Symptoms of human seasonal flu usually include fever, cough, sore throat, muscle aches, conjunctivitis and, in severe cases, breathing problems and pneumonia that may be fatal. Humans can rarely become infected with strains of avian or swine influenza, usually as a result of close contact with infected animals; symptoms range from mild to severe, including death. Bird-adapted strains of the virus can be asymptomatic in some aquatic birds but lethal if they spread to other species, such as chickens. IAV disease in poultry can be prevented by vaccination; however, biosecurity control measures are preferred. In humans, seasonal influenza can be treated in its early stages with antiviral medicines.
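The H (1 through 16) and N (1 through 11) ranges found in wild birds define a fixed grid of possible subtype labels. The snippet below is purely illustrative (it is not from any influenza software) and simply enumerates that grid:

```python
# Enumerate every possible subtype label from the ranges given above:
# H1-H16 and N1-N11 (H17/H18 and their N partners are known only from
# bats and are omitted here).
subtypes = [f"H{h}N{n}" for h in range(1, 17) for n in range(1, 12)]

print(len(subtypes))       # 176 combinations in total
print("H1N1" in subtypes)  # True - the 1918 and 2009 pandemic subtype
print("H5N1" in subtypes)  # True - a highly virulent avian subtype
```

The 176 label combinations greatly understate the true diversity, since, as noted above, further variation exists within each subtype.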
A global network, the Global Influenza Surveillance and Response System (GISRS), monitors the spread of influenza with the aim of informing the development of both seasonal and pandemic vaccines. Several million specimens are tested by the GISRS network annually through a network of laboratories in 127 countries. As well as human viruses, GISRS monitors avian, swine, and other potentially zoonotic influenza viruses. IAV vaccines need to be reformulated regularly in order to keep up with changes in the virus. Virology Classification There are two methods of classification, one based on surface proteins (originally serotypes), and the other based on behavior, mainly the host animal. Subtypes There are two antigenic proteins on the surface of the viral envelope, hemagglutinin and neuraminidase. Different influenza virus genomes encode different hemagglutinin and neuraminidase proteins. Based on their serotype, there are 18 known types of hemagglutinin and 11 types of neuraminidase. Subtypes of IAV are classified by their combination of H and N proteins. For example, "H5N1" designates an influenza A subtype that has a type-5 hemagglutinin (H) protein and a type-1 neuraminidase (N) protein. Further variations exist within the subtypes and can lead to very significant differences in the virus's behavior. By definition, the subtyping scheme only takes into account the two outer proteins, not the at least eight proteins internal to the virus. Almost all possible combinations of H (1 through 16) and N (1 through 11) have been isolated from wild birds. H17 and H18 have only been discovered in bats. Influenza virus nomenclature Due to the high variability of the virus, subtyping is not sufficient to uniquely identify a strain of influenza A virus. To unambiguously describe a specific isolate of virus, researchers use the Influenza virus nomenclature, which describes, among other things, the subtype, year, and place of collection.
The initial letter (here "A") indicates that the virus is an influenza A virus. The next field indicates the place of collection, followed by a laboratory sequence number and the year of collection (sometimes abbreviated to two digits, e.g. "21" for 2021). When no species is mentioned, the sample was by default collected from a human; a sample collected from an animal carries an additional field before the place, naming the host (for example, a pig). The subtype of the virus is given in parentheses at the end. Some strains carry an unusual designation in this last part: "pdm09" is appended after the subtype in order to distinguish the Pandemic H1N1/09 virus lineage from older H1N1 viruses. Structure and genetics Structure The influenza A virus has a negative-sense, single-stranded, segmented RNA genome, enclosed in a lipid envelope. The virus particle (also called the "virion") is 80–120 nanometers in diameter, such that the smallest virions adopt an elliptical shape; larger virions have a filamentous shape. Core – The central core of the virion contains the viral RNA genome, which is made of eight separate segments. The nucleoprotein (NP) coats the viral RNA to form a ribonucleoprotein that assumes a helical (spiral) configuration. Three large proteins (PB1, PB2, and PA), which are responsible for RNA transcription and replication, are bound to each segment of viral RNP. Capsid – The matrix protein M1 forms a layer between the nucleoprotein and the envelope, called the capsid. Envelope – The viral envelope consists of a lipid bilayer derived from the host cell. Two viral proteins, hemagglutinin (HA) and neuraminidase (NA), are inserted into the envelope and are exposed as spikes on the surface of the virion. Both proteins are antigenic; a host's immune system can react to them and produce antibodies in response. The M2 protein forms an ion channel in the envelope and is responsible for uncoating the virion once it has bound to a host cell.
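The strain nomenclature described above is regular enough to be split mechanically. The sketch below is purely illustrative (the function and field names are my own, not part of any standard tooling) and assumes the field layout given earlier: type/host (optional)/place/number/year, with the subtype in parentheses and an optional "pdm09" suffix:

```python
def parse_strain(name: str) -> dict:
    """Split a WHO-style influenza strain designation into its fields."""
    head, _, rest = name.partition("(")
    fields = [f.strip() for f in head.strip().split("/")]
    info = {"type": fields[0]}
    if len(fields) == 5:              # an extra field before the place names the host
        info["host"] = fields[1]
        del fields[1]
    else:                             # no species mentioned: human by default
        info["host"] = "human"
    info["place"], info["number"], info["year"] = fields[1:4]
    if rest:                          # e.g. "H1N1)" or "H1N1)pdm09"
        subtype, _, suffix = rest.partition(")")
        info["subtype"] = subtype.strip()
        info["pandemic_lineage"] = suffix.strip() == "pdm09"
    return info

print(parse_strain("A/swine/Iowa/15/1930 (H1N1)"))
print(parse_strain("A/California/07/2009 (H1N1)pdm09"))
```

The two example names are real, well-known isolates (the first swine influenza isolate and the 2009 pandemic reference strain); the human-by-default rule in the code mirrors the convention described above.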
Genome The table below presents a concise summary of the influenza genome and the principal functions of the proteins which are encoded. Segments are conventionally numbered from 1 to 8 in descending order of length. Three viral proteins (PB1, PB2, and PA) associate to form the RNA-dependent RNA polymerase (RdRp), which functions to transcribe and replicate the viral RNA. Viral messenger RNA transcription – The RdRp complex transcribes viral mRNAs by using a mechanism called cap-snatching. It consists of the hijacking and cleavage of host capped pre-mRNAs. Host cell mRNA is cleaved near the cap to yield a primer for the transcription of positive-sense viral mRNA using the negative-sense viral RNA as a template. The host cell then transports the viral mRNA into the cytoplasm where ribosomes manufacture the viral proteins. Replication of the viral RNA – The replication of the influenza virus, unlike most other RNA viruses, takes place in the nucleus and involves two steps. The RdRp first of all transcribes the negative-sense viral genome into a positive-sense complementary RNA (cRNA), then the cRNAs are used as templates to transcribe new negative-sense vRNA copies. These are exported from the nucleus and assemble near the cell membrane to form the core of new virions. Epidemiology Evolution and history The predominant natural reservoir of influenza viruses is thought to be wild waterfowl. The subtypes of influenza A virus are estimated to have diverged 2,000 years ago. Influenza viruses A and B are estimated to have diverged from a single ancestor around 4,000 years ago, while the ancestor of influenza viruses A and B and the ancestor of influenza virus C are estimated to have diverged from a common ancestor around 8,000 years ago. Outbreaks of influenza-like disease can be found throughout recorded history. The first probable record is by Hippocrates in 412 BCE. The historian Fujikawa listed 46 epidemics of flu-like illness in Japan between 862 and 1868.
In Europe and the Americas, a number of epidemics were recorded through the Middle Ages and up to the end of the 19th century. In 1918–1919 came the first flu pandemic of the 20th century, known generally as the "Spanish flu", which caused an estimated 20 to 50 million deaths worldwide. It is now known that this was caused by an immunologically novel H1N1 subtype of influenza A. The next pandemic took place in 1957, the "Asian flu", which was caused by an H2N2 subtype of the virus in which the genome segments coding for HA and NA appeared to have derived from avian influenza strains by reassortment, while the remainder of the genome was descended from the 1918 virus. The 1968 pandemic ("Hong Kong flu") was caused by an H3N2 subtype in which the NA segment was derived from the 1957 virus, while the HA segment had been reassorted from an avian strain of influenza. In the 21st century, a strain of H1N1 flu (since titled "H1N1pdm09") was antigenically very different from previous H1N1 strains, leading to a pandemic in 2009. Because of its close resemblance to some strains circulating in pigs, this became known as "swine flu". Influenza A virus continues to circulate and evolve in birds and pigs. Almost all possible combinations of H (1 through 16) and N (1 through 11) have been isolated from wild birds. As of June 2024, two particularly virulent IAV strains, H5N1 and H7N9, are predominant in wild bird populations. These frequently cause outbreaks in domestic poultry, with occasional spillover infections in humans who are in close contact with poultry. Pandemic potential Influenza viruses have a relatively high mutation rate that is characteristic of RNA viruses. The segmentation of the influenza A virus genome facilitates genetic recombination by segment reassortment in hosts who become infected with two different strains of influenza viruses at the same time.
With reassortment between strains, an avian strain which does not affect humans may acquire characteristics from a different strain which enable it to infect and pass between humans – a zoonotic event. It is thought that all influenza A viruses causing outbreaks or pandemics among humans since the 1900s originated from strains circulating in wild aquatic birds through reassortment with other influenza strains. It is possible (though not certain) that pigs may act as an intermediate host for reassortment. Surveillance The Global Influenza Surveillance and Response System (GISRS) is a global network of laboratories that monitor the spread of influenza with the aim of providing the World Health Organization with influenza control information and of informing vaccine development. Several million specimens are tested by the GISRS network annually through a network of laboratories in 127 countries. As well as human viruses, GISRS also monitors avian, swine, and other potentially zoonotic influenza viruses. Seasonal flu Flu season is an annually recurring time period characterized by the prevalence of an outbreak of influenza, caused either by Influenza A or by Influenza B. The season occurs during the cold half of the year in temperate regions; November through February in the northern hemisphere and May to October in the southern hemisphere. Flu seasons also exist in the tropics and subtropics, with variability from region to region. Annually, about 3 to 5 million cases of severe illness and 290,000 to 650,000 deaths from seasonal flu occur worldwide. There are several possible reasons for the winter peak in temperate regions: During the winter, people spend more time indoors with the windows sealed, so they are more likely to breathe the same air as someone who has the flu and thus contract the virus. Days are shorter during the winter, and lack of sunlight leads to low levels of vitamin D and melatonin, both of which require sunlight for their generation.
This compromises our immune systems, which in turn decreases the ability to fight the virus. The influenza virus may survive better in colder, drier climates, and therefore be able to infect more people. Cold air reduces the ability of the nasal membranes to resist infection. Zoonotic infections A zoonosis is a disease in a human caused by a pathogen (such as a bacterium or virus) that has jumped from a non-human to a human. Avian and pig influenza viruses can, on rare occasions, transmit to humans and cause zoonotic influenza virus infections; these infections are usually confined to people who have been in close contact with infected animals or material such as infected feces and meat; they do not spread to other humans. Symptoms of these infections in humans vary greatly; some are asymptomatic or mild while others can cause severe disease, leading to severe pneumonia and death. A wide range of Influenza A virus subtypes have been found to cause zoonotic disease. Zoonotic infections can be prevented by good hygiene, by preventing farmed animals from coming into contact with wild animals, and by using appropriate personal protective equipment. As of June 2024, there is concern about two subtypes of avian influenza which are circulating in wild bird populations worldwide, H5N1 and H7N9. Both of these have the potential to devastate poultry stocks, and both have jumped to humans with relatively high case fatality rates. H5N1 in particular has infected a wide range of mammals and may be adapting to mammalian hosts. Prevention and treatment Vaccine As of June 2024, the influenza viruses which circulate widely in humans are IAV subtypes H1N1 and H3N2, together with Influenza B. Annual vaccination is the primary and most effective way to prevent influenza and influenza-associated complications, especially for high-risk groups.
Vaccines against the flu are trivalent or quadrivalent, providing protection against the dominant strains of IAV(H1N1) and IAV(H3N2), and one or two influenza B virus strains; the formulation is continually reviewed in order to match the predominant strains in circulation. It is possible to vaccinate poultry and pigs against specific strains of influenza. Vaccination should be combined with other control measures such as infection monitoring, early detection and biosecurity. Treatment The main treatment for mild influenza is supportive: rest, fluids, and over-the-counter medicines to alleviate symptoms while the body's own immune system works to recover from infection. Antiviral drugs are recommended for those with severe symptoms, or for those who are at risk of developing complications such as pneumonia. Signs and symptoms Humans The symptoms of seasonal flu are similar to those of a cold, although usually more severe and less likely to include a runny nose. The onset of symptoms is sudden, and initial symptoms are predominantly non-specific: a sudden fever; muscle aches; cough; fatigue; sore throat; headache; difficulty sleeping; loss of appetite; diarrhoea or abdominal pain; nausea and vomiting. Humans can rarely become infected with strains of avian or swine influenza, usually as a result of close contact with infected animals or contaminated material; symptoms generally resemble seasonal flu but occasionally can be severe, including death. Other animals Birds Some species of wild aquatic birds act as natural asymptomatic carriers of a large variety of influenza A viruses, which they can spread over large distances in their annual migration. Symptoms of avian influenza vary according to both the strain of virus underlying the infection and the species of bird affected. Symptoms of influenza in birds may include swollen head, watery eyes, unresponsiveness, lack of coordination, and respiratory distress such as sneezing or gurgling.
Highly pathogenic avian influenza Because of the impact of avian influenza on economically important chicken farms, avian virus strains are classified as either highly pathogenic (and therefore potentially requiring vigorous control measures) or low pathogenic. The test for this is based solely on the effect on chickens: a virus strain is highly pathogenic avian influenza (HPAI) if 75% or more of chickens die after being deliberately infected with it, or if it is genetically similar to such a strain. The alternative classification is low pathogenic avian influenza (LPAI). Classification of a virus strain as either LPAI or HPAI is based on the severity of symptoms in domestic chickens and does not predict severity of symptoms in other species. Chickens infected with LPAI display mild symptoms or are asymptomatic, whereas HPAI causes serious breathing difficulties, a significant drop in egg production, and sudden death. Since 2006, the World Organization for Animal Health has required all detections of LPAI H5 and H7 subtypes to be reported because of their potential to mutate into highly pathogenic strains. Pigs Signs of swine flu in pigs can include fever, depression, coughing (barking), discharge from the nose or eyes, sneezing, breathing difficulties, eye redness or inflammation, and going off feed. Some pigs infected with influenza, however, may show no signs of illness at all. Swine flu subtypes are principally H1N1, H1N2, and H3N2; the virus is spread either through close contact between animals or by the movement of contaminated equipment between farms. Humans who are in close contact with pigs can sometimes become infected. Horses Equine influenza can affect horses, donkeys, and mules; it has a very high rate of transmission among horses, and a relatively short incubation time of one to three days. Clinical signs of equine influenza include fever, nasal discharge, a dry, hacking cough, depression, loss of appetite and weakness.
EI is caused by two subtypes of influenza A viruses: H7N7 and H3N8, which have evolved from avian influenza A viruses. Dogs Most animals infected with canine influenza A will show symptoms such as coughing, runny nose, fever, lethargy, eye discharge, and a reduced appetite, lasting anywhere from 2 to 3 weeks. There are two different influenza A dog flu viruses: one is an H3N8 virus and the other is an H3N2 virus. The H3N8 strain evolved from an equine influenza virus which adapted to sustained transmission among dogs. The H3N2 strain is derived from an avian influenza virus which jumped to dogs in 2004 in either Korea or China. It is likely that the virus persists in both animal shelters and kennels, as well as in farms where dogs are raised for meat production. Bats The first bat flu virus, IAV(H17N10), was discovered in 2009 in little yellow-shouldered bats (Sturnira lilium) in Guatemala. In 2012 a second bat influenza A virus, IAV(H18N11), was discovered in flat-faced fruit-eating bats (Artibeus planirostris) from Peru. Bat influenza viruses have been found to be poorly adapted to non-bat species. Research Influenza research includes efforts to understand how influenza viruses enter hosts, the relationship between influenza viruses and bacteria, how influenza symptoms progress, and why some influenza viruses are deadlier than others. Past pandemics, and especially the 1918 pandemic, are the subject of much research to understand and prevent flu pandemics. The World Health Organization has published a Research Agenda with five streams: Stream 1. Reducing the risk of emergence of pandemic influenza. This stream is entirely focused on preventing and limiting pandemic influenza; this includes research into what characteristics make a strain either mild or deadly, worldwide surveillance of influenza A viruses with pandemic potential, and the prevention and management of potentially zoonotic influenza in domestic and farmed animals. Stream 2.
Limiting the spread of pandemic, zoonotic and seasonal epidemic influenza. This stream is more broadly targeted at both pandemic and seasonal influenza, looking at the transmission of the virus between people and the ways in which it can spread globally, as well as the environmental and social factors which affect transmission. Stream 3. Minimizing the impact of pandemic, zoonotic, and seasonal epidemic influenza. This stream is principally concerned with vaccination: improving the effectiveness of vaccines and vaccine technology, as well as the speed with which an effective vaccine can be developed and the ways in which vaccines can be manufactured and delivered worldwide. Stream 4. Optimizing the treatment of patients. This stream aims to reduce the impact of influenza by looking at methods of treatment, vulnerable groups, genetic predispositions, the interaction of influenza infection with other diseases, and influenza sequelae. Stream 5. Promoting the development and application of modern public health tools. This stream aims to improve the ways in which public policy can combat influenza; this includes the introduction of new technologies, epidemic and pandemic modelling, and the communication of accurate and trustworthy information to the public.
https://en.wikipedia.org/wiki/Oligochaeta
Oligochaeta
Oligochaeta () is a subclass of soft-bodied animals in the phylum Annelida, which is made up of many types of aquatic and terrestrial worms, including all of the various earthworms. Specifically, oligochaetes comprise the terrestrial megadrile earthworms (some of which are semiaquatic or fully aquatic), and freshwater or semiterrestrial microdrile forms, including the tubificids, pot worms and ice worms (Enchytraeidae), blackworms (Lumbriculidae) and several interstitial marine worms. With around 10,000 known species, the Oligochaeta make up about half of the phylum Annelida. These worms usually have few setae (chaetae) or "bristles" on their outer body surfaces, and lack parapodia, unlike polychaetes. Diversity Oligochaetes are well-segmented worms and most have a spacious body cavity (coelom) used as a hydroskeleton. They range greatly in length, from minute forms up to the 'giant' species such as the giant Gippsland earthworm (Megascolides australis) and the Mekong worm (Amynthas mekongianus). Terrestrial oligochaetes are commonly known as earthworms and burrow into the soil. The four main families with large numbers of species are Glossoscolecidae, Lumbricidae, Megascolecidae and Moniligastridae. Earthworms are found in all parts of the world except for deserts. They have a requirement for moist surroundings, and the larger species create burrows that may go down several metres (yards), while young individuals and smaller species are restricted to the top few centimetres of soil. The largest numbers are found in humus-rich soils and acid soils. A few species are found in trees, among damp moss and in the debris that accumulates in leaf axils and crevices; some others make their homes in the rosettes of bromeliads. The majority of aquatic oligochaetes are small, slender worms, whose organs can be seen through the transparent body wall. They burrow into the sediment or live among the vegetation, mostly in shallow, freshwater environments.
Some are transitional between terrestrial and aquatic habitats, inhabiting swamps, mud or the borders of water bodies. About two hundred species are marine, mostly in the families Enchytraeidae and Naididae; these are found largely in the tidal and shallow subtidal zones, but a few are found at abyssal depths. Anatomy The first segment, or prostomium, of oligochaetes is usually a smooth lobe or cone without sensory organs, although it is sometimes extended to form a tentacle. The remaining segments have no appendages, but they do have a small number of bristles, or chaetae. These tend to be longer in aquatic forms than in the burrowing earthworms, and can have a variety of shapes. Each segment has four bundles of chaetae, with two on the underside, and the others on the sides. The bundles can contain one to 25 chaetae, and include muscles to pull them in and out of the body. This enables the worm to gain a grip on the soil or mud as it burrows into the substrate. When burrowing, the body moves peristaltically, alternately contracting and stretching to push itself forward. A number of segments in the forward part of the body are modified by the presence of numerous secretory glands. Together, they form the clitellum, which is important in reproduction. Internal anatomy Most oligochaetes are detritus feeders, although some genera are predaceous, such as Agriodrilus and Phagodrilus. The digestive tract is essentially a tube running the length of the body, but has a powerful muscular pharynx immediately behind the mouth cavity. In many species, the pharynx simply helps the worm suck in food, but in many aquatic species, it can be turned inside out and placed over food like a suction cup before being pulled back in. The remainder of the digestive tract may include a crop for storage of food, and a gizzard for grinding it up, although these are not present in all species. 
The oesophagus includes "calciferous glands" that maintain calcium balance by excreting indigestible calcium carbonate into the gut. A number of yellowish chloragogen cells surround the intestine and the dorsal blood vessel, forming a tissue that functions in a similar fashion to the vertebrate liver. Some of these cells also float freely in the body cavity, where they are referred to as "eleocytes". Most oligochaetes have no gills or similar structures, and simply breathe through their moist skin. The few exceptions generally have simple, filamentous gills. Excretion is through small ducts known as metanephridia. Terrestrial oligochaetes secrete urea, but the aquatic forms typically secrete ammonia, which dissolves rapidly into the water. The vascular system consists of two main vessels connected by lateral vessels in each segment. Blood is carried forward in the dorsal vessel (in the upper part of the body) and back through the ventral vessel (underneath), before passing into a sinus surrounding the intestine. Some of the smaller vessels are muscular, effectively forming hearts; from one to five pairs of such hearts is typical. The blood of oligochaetes contains haemoglobin in all but the smallest of species, which have no need of respiratory pigments. The nervous system consists of two ventral nerve cords, which are usually fused into a single structure, and three or four pairs of smaller nerves per body segment. Only a few aquatic oligochaetes have eyes, and even then they are only simple ocelli. Nonetheless, their skin has several individual photoreceptors, allowing the worm to sense the presence of light, and burrow away from it. Oligochaetes can taste their surroundings using chemoreceptors located in tubercles across their body, and their skin is also supplied with numerous free nerve endings that presumably contribute to their sense of touch.
Distribution and habitat Oligochaetes occur on every continent in the world, occupying terrestrial, freshwater and marine habitats. Of the 1700 known aquatic species, about 600 are marine and 100 inhabit groundwater. Aquatic oligochaetes occur in most groups, with the Naididae being the most speciose. Locomotion Movement and burrowing of earthworms is performed by peristalsis, with the alternation of contraction and relaxation of the circular and longitudinal muscles. To move forward, the anterior portion of the worm is extended forward by the contraction of the circular muscles, while the portion just behind this is made shorter and fatter by the contraction of longitudinal muscles. Next, the anterior circular muscles relax, and a wave of circular contraction moves backwards along the worm. At the same time, the chaetae are extended to grip the ground as the body shortens and are retracted as it lengthens. The steps are typically long and the worm moves at the rate of seven to ten steps per minute. The worm is able to reverse its direction of travel with the tail leading. Aquatic species use a similar means of locomotion to work their way through sediment and massed vegetation, but the tiny Aeolosomatids swim by means of the cilia on their prostomia. Burrowing is performed by forcing the front end of the worm into a crevice and widening the gap by body expansion. Large quantities of soil are swallowed in the process. This is mixed with mucus as it passes through the gut, being used to plaster the tunnel walls, forming a lining. Excess material is extruded on the ground surface, forming a faecal casting. The burrow may have two entrances and several vertical and horizontal tunnels. Reproduction Whereas in general, polychaetes are marine and have separate sexes, external sperm transfer and external fertilisation, oligochaetes live on land or in fresh water, are hermaphrodites, have no external sperm transfer and fertilisation takes place in the clitellum or cocoon.
However, there are exceptions to this, with some polychaetes inhabiting non-marine environments and a few species of oligochaetes being marine. Development of the offspring also differs between the two subclasses. The eggs of polychaetes are deposited in the sea where they develop into trochophore larvae that disperse as part of the plankton, while the yolky eggs of oligochaetes do not have a larval stage and develop directly into juvenile worms in the cocoon. Reproduction among oligochaetes is mainly by sexual means but clonal reproduction is common in some genera, especially among aquatic species. Members of the Naididae reproduce asexually, primarily by paratomy, in which the body breaks into two pieces after the "pregeneration" of certain anterior structures by the posterior portion. Other species undergo fragmentation, in which the worm breaks into several pieces, each of which develops into a new worm. Parthenogenesis also occurs in some species. Evolution and taxonomy With their soft bodies, earthworms do not fossilize well, though they may form trace fossils. The name Protoscolex was given to a genus of segmented worms without bristles found in the Upper Ordovician of Kentucky, United States. Another species placed in the same genus was found in Herefordshire, England, but it is unclear whether these worms are in fact oligochaetes. Stephenson postulated in 1930 that the common ancestor of oligochaetes came from the primitive aquatic family Lumbriculidae. The more advanced families such as Glossoscolecidae, Hormogastridae, Lumbricidae and Microchaetidae may have evolved later than the other families. Because of its ability to colonise new areas and become dominant, the Lumbricidae has followed humans round the world and displaced many native species of earthworm. An early but now outdated classification system was to divide the oligochaetes into "Megadrili", the larger terrestrial species, and "Microdrili", the smaller, mostly aquatic ones.
Families Acanthodrilidae Claus, 1880 (including Diplocardiinae Michaelsen, 1900) Ailoscolecidae Bouché, 1969 (including Komarekionidae Gates, 1974) Alluroididae Michaelsen, 1900 Almidae Duboscq, 1902 Criodrilidae Vejdovsky, 1884 (including Biwadrilidae Brinkhurst & Jamieson, 1971) Dorydrilidae Cook, 1971 Enchytraeidae Vejdovsky, 1879 Eudrilidae Claus, 1880 Exxidae Blakemore, 2000 Glossoscolecidae Michaelsen, 1900 Haplotaxidae Michaelsen, 1900 Hormogastridae Michaelsen, 1900 (including Vignysinae Bouché, 1970 and Xaninae Diaz Cosin et al., 1989) Kynotidae Brinkhurst & Jamieson, 1971 Lumbricidae Claus, 1876 (including Diporodrilinae Bouché, 1970; Eiseniinae Omodeo, 1956; Spermophorodrilinae Omodeo & Rota, 1989; Postandrilinae Qiu & Bouché, 1998; Allolobophorinae Kvavadze, 2000 and Helodrilinae Kvavadze, 2000) Lumbriculidae Vejdovsky, 1884 Lutodrilidae McMahan, 1978 Megascolecidae Rosa, 1891 (including Pontodrilinae Vejdovsky, 1884; Plutellinae Vejdovsky, 1884 and Argilophilinae Fender & McKey-Fender, 1990) Microchaetidae Michaelsen, 1900 Moniligastridae Claus, 1880 Naididae / Tubificidae Vejdovsky, 1884 (including Naidinae Ehrenberg, 1831) Narapidae Righi, 1983 Ocnerodrilidae Beddard, 1891 (including Malabariinae Gates, 1966) Octochaetidae Michaelsen, 1900 (including Benhamiinae Michaelsen, 1895/7) Opistocystidae Cernosvitov, 1936 Parvidrilidae Erséus, 1999 Phreodrilidae Beddard, 1891 Propappidae Coates, 1986 Randiellidae Erséus & Strehlow, 1986 Sparganophilidae Michaelsen, 1918 Syngenodrilidae Smith & Green, 1919 Tiguassuidae Brinkhurst, 1988 Tritogeniidae Plisko, 2013 Tumakidae Righi, 1995
Gyrator
A gyrator is a passive, linear, lossless, two-port electrical network element proposed in 1948 by Bernard D. H. Tellegen as a hypothetical fifth linear element after the resistor, capacitor, inductor and ideal transformer. Unlike the four conventional elements, the gyrator is non-reciprocal. Gyrators permit network realizations of two-(or-more)-port devices which cannot be realized with just the four conventional elements. In particular, gyrators make possible network realizations of isolators and circulators. Gyrators do not however change the range of one-port devices that can be realized. Although the gyrator was conceived as a fifth linear element, its adoption makes both the ideal transformer and either the capacitor or inductor redundant. Thus the number of necessary linear elements is in fact reduced to three. Circuits that function as gyrators can be built with transistors and op-amps using feedback. Tellegen invented a circuit symbol for the gyrator and suggested a number of ways in which a practical gyrator might be built. An important property of a gyrator is that it inverts the current–voltage characteristic of an electrical component or network. In the case of linear elements, the impedance is also inverted. In other words, a gyrator can make a capacitive circuit behave inductively, a series LC circuit behave like a parallel LC circuit, and so on. It is primarily used in active filter design and miniaturization. Behaviour An ideal gyrator is a linear two-port device which couples the current on one port to the voltage on the other and conversely. The instantaneous currents and instantaneous voltages are related by v2 = R i1 and v1 = -R i2, where R is the gyration resistance of the gyrator. The gyration resistance (or equivalently its reciprocal the gyration conductance) has an associated direction indicated by an arrow on the schematic diagram.
By convention, the given gyration resistance or conductance relates the voltage on the port at the head of the arrow to the current at its tail. The voltage at the tail of the arrow is related to the current at its head by minus the stated resistance. Reversing the arrow is equivalent to negating the gyration resistance, or to reversing the polarity of either port. Although a gyrator is characterized by its resistance value, it is a lossless component. From the governing equations, the instantaneous power into the gyrator is identically zero: P = v1 i1 + v2 i2 = (-R i2) i1 + (R i1) i2 = 0. A gyrator is an entirely non-reciprocal device, and hence is represented by antisymmetric impedance and admittance matrices, Z = [[0, -R], [R, 0]] and Y = [[0, 1/R], [-1/R, 0]]. If the gyration resistance is chosen to be equal to the characteristic impedance of the two ports (or to their geometric mean if these are not the same), then the scattering matrix for the gyrator is S = [[0, -1], [1, 0]], which is likewise antisymmetric. This leads to an alternative definition of a gyrator: a device which transmits a signal unchanged in the forward (arrow) direction, but reverses the polarity of the signal travelling in the backward direction (or equivalently, 180° phase-shifts the backward-travelling signal). The symbol used to represent a gyrator in one-line diagrams (where a waveguide or transmission line is shown as a single line rather than as a pair of conductors), reflects this one-way phase shift. As with a quarter-wave transformer, if one port of a gyrator is terminated with a linear load of impedance ZL, then the other port presents an impedance inversely proportional to the impedance of that load: Zin = R^2 / ZL. A generalization of the gyrator is conceivable, in which the forward and backward gyration conductances have different magnitudes, g1 and g2, so that the admittance matrix is Y = [[0, g1], [-g2, 0]]. However, this no longer represents a passive device. Name Tellegen named the element gyrator as a blend of gyroscope and the common device suffix -tor (as in resistor, capacitor, transistor etc.)
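The port relations and the impedance-inversion property described above can be checked numerically. The sketch below is illustrative only (Python, arbitrary component values), and assumes the common sign convention v2 = R i1, v1 = -R i2:

```python
# Sketch (Python, illustrative values): ideal-gyrator port relations,
# the zero-power (lossless) property, and impedance inversion.
# Assumes the sign convention v2 = R*i1, v1 = -R*i2.
R = 50.0  # gyration resistance, ohms (arbitrary choice)

def gyrator_port1_voltage(i2):
    return -R * i2

def gyrator_port2_voltage(i1):
    return R * i1

# lossless: instantaneous power p = v1*i1 + v2*i2 vanishes identically
i1, i2 = 0.3, -0.7
v1, v2 = gyrator_port1_voltage(i2), gyrator_port2_voltage(i1)
assert abs(v1 * i1 + v2 * i2) < 1e-12

def input_impedance(z_load):
    # impedance seen at port 1 when port 2 is terminated with z_load
    return R**2 / z_load

# a capacitor C on port 2 looks like an inductor L = R**2 * C on port 1
w, C = 2 * 3.141592653589793 * 1e3, 100e-9
z_seen = input_impedance(1 / (1j * w * C))
assert abs(z_seen - 1j * w * (R**2 * C)) < 1e-9
```

Terminating one port with a capacitor thus makes the other port look inductive, which is the basis of the simulated-inductor application discussed later.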
The -tor ending is even more suggestive in Tellegen's native Dutch, where the related element transformer is called transformator. The gyrator is related to the gyroscope by an analogy in its behaviour. The analogy with the gyroscope is due to the relationship between the torque and angular velocity of the gyroscope on the two axes of rotation. A torque on one axis will produce a proportional change in angular velocity on the other axis and conversely. A mechanical–electrical analogy of the gyroscope making torque and angular velocity the analogs of voltage and current results in the electrical gyrator. Relationship to the ideal transformer An ideal gyrator is similar to an ideal transformer in being a linear, lossless, passive, memoryless two-port device. However, whereas a transformer couples the voltage on port 1 to the voltage on port 2, and the current on port 1 to the current on port 2, the gyrator cross-couples voltage to current and current to voltage. Cascading two gyrators achieves a voltage-to-voltage coupling identical to that of an ideal transformer. Cascaded gyrators of gyration resistances R1 and R2 are equivalent to a transformer of turns ratio R1 : R2. Cascading a transformer and a gyrator, or equivalently cascading three gyrators, produces a single gyrator of a different gyration resistance (three cascaded gyrators of gyration resistances R1, R2 and R3 are equivalent to a single gyrator of gyration resistance R1R3/R2). From the point of view of network theory, transformers are redundant when gyrators are available. Anything that can be built from resistors, capacitors, inductors, transformers and gyrators, can also be built using just resistors, gyrators and inductors (or capacitors). Magnetic circuit analogy In the two-gyrator equivalent circuit for a transformer, described above, the gyrators may be identified with the transformer windings, and the loop connecting the gyrators with the transformer magnetic core.
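The equivalence between cascaded gyrators and an ideal transformer can be illustrated with a short numerical sketch (Python; all values are arbitrary), modelling each ideal gyrator purely by its impedance-inversion action Zin = r^2 / Zload:

```python
# Sketch (Python, illustrative values): cascaded ideal gyrators, modelled
# purely through their impedance inversion.

def gyrator_zin(r, z_load):
    """Input impedance of an ideal gyrator of gyration resistance r
    terminated with z_load."""
    return r**2 / z_load

R1, R2, R3 = 300.0, 100.0, 50.0
z_load = 8.0 + 2.0j  # arbitrary load

# two cascaded gyrators scale impedance like an ideal transformer
z_two = gyrator_zin(R1, gyrator_zin(R2, z_load))
n = R1 / R2  # equivalent turns ratio
assert abs(z_two - n**2 * z_load) < 1e-9

# three cascaded gyrators invert once more, i.e. act as one gyrator
# of gyration resistance R1*R3/R2
z_three = gyrator_zin(R1, gyrator_zin(R2, gyrator_zin(R3, z_load)))
assert abs(z_three - (R1 * R3 / R2)**2 / z_load) < 1e-9
```

The impedance seen through the two-gyrator cascade scales by (R1/R2)^2, exactly as through an ideal transformer of turns ratio R1 : R2.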
The electric current around the loop then corresponds to the rate-of-change of magnetic flux through the core, and the electromotive force (EMF) in the loop due to each gyrator corresponds to the magnetomotive force (MMF) in the core due to each winding. The gyration resistances are in the same ratio as the winding turn-counts, but collectively of no particular magnitude. So, choosing an arbitrary conversion factor γ of ohms per turn, a loop EMF e is related to a core MMF F by e = γF, and the loop current i is related to the core flux-rate dΦ/dt by i = (1/γ) dΦ/dt. The core of a real, non-ideal, transformer has finite permeance P (non-zero reluctance 1/P), such that the flux and total MMF satisfy Φ = P F, which means that in the gyrator loop e = (1/C) ∫i dt with C = P/γ², corresponding to the introduction of a series capacitor in the loop. This is Buntenbach's capacitance–permeance analogy, or the gyrator–capacitor model of magnetic circuits. Application Simulated inductor A gyrator can be used to transform a load capacitance into an inductance. At low frequencies and low powers, the behaviour of the gyrator can be reproduced by a small op-amp circuit. This supplies a means of providing an inductive element in a small electronic circuit or integrated circuit. Before the invention of the transistor, coils of wire with large inductance might be used in electronic filters. An inductor can be replaced by a much smaller assembly containing a capacitor, operational amplifiers or transistors, and resistors. This is especially useful in integrated circuit technology. Operation In the circuit shown, one port of the gyrator is between the input terminal and ground, while the other port is terminated with the capacitor. The circuit works by inverting and multiplying the effect of the capacitor in an RC differentiating circuit, where the voltage across the resistor R behaves through time in the same manner as the voltage across an inductor. The op-amp follower buffers this voltage and applies it back to the input through the resistor RL.
The desired effect is an impedance of the form of an ideal inductor L with a series resistance RL: Z = RL + jωL. From the diagram, the input impedance of the op-amp circuit is Zin = (RL + jωRLRC) in parallel with (R + 1/(jωC)). With L = RLRC, it can be seen that the impedance of the simulated inductor is the desired impedance in parallel with the impedance of the RC circuit. In typical designs, R is chosen to be sufficiently large such that the first term dominates; thus, the RC circuit's effect on input impedance is negligible: Zin ≈ RL + jωL. This is the same as a resistance RL in series with an inductance L = RLRC. There is a practical limit on the minimum value that RL can take, determined by the current output capability of the op-amp. The impedance cannot increase indefinitely with frequency, and eventually the second term limits the impedance to the value of R. Comparison with actual inductors Simulated elements are electronic circuits that imitate actual elements. Simulated elements cannot replace physical inductors in all the possible applications as they do not possess all the unique properties of physical inductors. Magnitudes. In typical applications, both the inductance and the resistance of the gyrator are much greater than that of a physical inductor. Gyrators can be used to create inductors from the microhenry range up to the megahenry range. Physical inductors are typically limited to tens of henries, and have parasitic series resistances from hundreds of microhms through the low kilohm range. The parasitic resistance of a gyrator depends on the topology, but with the topology shown, series resistances will typically range from tens of ohms through hundreds of kilohms. Quality. Physical capacitors are often much closer to "ideal capacitors" than physical inductors are to "ideal inductors". Because of this, a synthesized inductor realized with a gyrator and a capacitor may, for certain applications, be closer to an "ideal inductor" than any (practical) physical inductor can be.
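As a rough numerical check of the simulated-inductor impedance (a sketch only, with assumed illustrative component values, not values from any particular design), using the parallel combination of the ideal branch RL + jωL (with L = RL·R·C) and the RC branch R + 1/(jωC):

```python
# Sketch (Python, illustrative values): input impedance of the simulated
# inductor as the ideal branch (RL + jwL, L = RL*R*C) in parallel with
# the RC branch (R + 1/(jwC)).
import math

def parallel(z1, z2):
    """Impedance of two branches in parallel."""
    return z1 * z2 / (z1 + z2)

RL = 100.0       # feedback resistor, ohms (illustrative)
R = 1e5          # gyrator resistor, ohms, chosen large (illustrative)
C = 10e-9        # capacitor, farads (illustrative)
L = RL * R * C   # simulated inductance: 0.1 H with these values

def z_in(w):
    z_ideal = RL + 1j * w * L        # desired inductor branch
    z_rc = R + 1 / (1j * w * C)      # RC branch across it
    return parallel(z_ideal, z_rc)

w = 2 * math.pi * 1e3                # 1 kHz
z = z_in(w)
ideal = RL + 1j * w * L
# with R large, the RC branch loads the simulated inductor only slightly
assert abs(z - ideal) / abs(ideal) < 0.01
```

With R chosen large, the computed impedance stays within a fraction of a percent of the ideal RL + jωL over the frequency range of interest, matching the approximation described in the text.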
Thus, use of capacitors and gyrators may improve the quality of filter networks that would otherwise be built using inductors. Also, the Q factor of a synthesized inductor can be selected with ease. The Q of an LC filter can be either lower or higher than that of an actual LC filter – for the same frequency, the inductance is much higher, the capacitance much lower, but the resistance also higher. Gyrator inductors typically have higher accuracy than physical inductors, due to the lower cost of precision capacitors than inductors. Energy storage. Simulated inductors do not have the inherent energy storing properties of the real inductors and this limits the possible power applications. The circuit cannot respond like a real inductor to sudden input changes (it does not produce a high-voltage back EMF); its voltage response is limited by the power supply. Since gyrators use active circuits, they only function as a gyrator within the power supply range of the active element. Hence gyrators are usually not very useful for situations requiring simulation of the 'flyback' property of inductors, where a large voltage spike is caused when current is interrupted. A gyrator's transient response is limited by the bandwidth of the active device in the circuit and by the power supply. Externalities. Simulated inductors do not react to external magnetic fields and permeable materials the same way that real inductors do. They also don't create magnetic fields (and induce currents in external conductors) the same way that real inductors do. This limits their use in applications such as sensors, detectors and transducers. Grounding. The fact that one side of the simulated inductor is grounded restricts the possible applications (real inductors are floating). This limitation may preclude its use in some low-pass and notch filters. However the gyrator can be used in a floating configuration with another gyrator so long as the floating "grounds" are tied together. 
This allows for a floating gyrator, but the inductance simulated across the input terminals of the gyrator pair must be cut in half for each gyrator to ensure that the desired inductance is met (the impedance of inductors in series adds together). This is not typically done as it requires even more components than in a standard configuration and the resulting inductance is a result of two simulated inductors, each with half of the desired inductance. Applications The primary application for a gyrator is to reduce the size and cost of a system by removing the need for bulky, heavy and expensive inductors. For example, RLC bandpass filter characteristics can be realized with capacitors, resistors and operational amplifiers without using inductors. Thus graphic equalizers can be achieved with capacitors, resistors and operational amplifiers without using inductors because of the invention of the gyrator. Gyrator circuits are extensively used in telephony devices that connect to a POTS system. This has allowed telephones to be much smaller, as the gyrator circuit carries the DC part of the line loop current, allowing the transformer carrying the AC voice signal to be much smaller due to the elimination of DC current through it. Gyrators are used in most DAAs (data access arrangements). Circuitry in telephone exchanges has also been affected, with gyrators being used in line cards. Gyrators are also widely used in hi-fi for graphic equalizers, parametric equalizers, discrete bandstop and bandpass filters (such as rumble filters), and FM pilot tone filters. There are many applications where it is not possible to use a gyrator to replace an inductor: High voltage systems utilizing flyback (beyond working voltage of transistors/amplifiers) RF systems commonly use real inductors as they are quite small at these frequencies and integrated circuits to build an active gyrator are either expensive or non-existent. However, passive gyrators are possible.
Power conversion, where a coil is used as energy storage. Impedance inversion In microwave circuits, impedance inversion can be achieved using a quarter-wave impedance transformer instead of a gyrator. The quarter-wave transformer is a passive device and is far simpler to build than a gyrator. Unlike the gyrator, the transformer is a reciprocal component. The transformer is an example of a distributed-element circuit. In other energy domains Analogs of the gyrator exist in other energy domains. The analogy with the mechanical gyroscope has already been pointed out in the name section. Also, when systems involving multiple energy domains are being analysed as a unified system through analogies, such as mechanical-electrical analogies, the transducers between domains are considered either transformers or gyrators depending on which variables they are translating. Electromagnetic transducers translate current into force and velocity into voltage. In the impedance analogy however, force is the analog of voltage and velocity is the analog of current, thus electromagnetic transducers are gyrators in this analogy. On the other hand, piezoelectric transducers are transformers (in the same analogy). Thus another possible way to make an electrical passive gyrator is to use transducers to translate into the mechanical domain and back again, much as is done with mechanical filters. Such a gyrator can be made with a single mechanical element by using a multiferroic material using its magnetoelectric effect. For instance, a current carrying coil wound around a multiferroic material will cause vibration through the multiferroic's magnetostrictive property. This vibration will induce a voltage between electrodes embedded in the material through the multiferroic's piezoelectric property. The overall effect is to translate a current into a voltage resulting in gyrator action.
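The quarter-wave impedance inversion mentioned above can be checked with the standard lossless transmission-line formula (a sketch; Python, illustrative impedances):

```python
# Sketch (Python, illustrative values): a quarter-wave transmission line
# inverts impedance much like a gyrator, approaching Zin = Z0**2 / Zload
# at the design frequency.  Standard lossless-line formula assumed.
import math

def line_zin(z0, z_load, beta_l):
    """Input impedance of a lossless line of electrical length beta_l."""
    t = math.tan(beta_l)
    return z0 * (z_load + 1j * z0 * t) / (z0 + 1j * z_load * t)

Z0, ZL = 50.0, 200.0
z_quarter = line_zin(Z0, ZL, math.pi / 2)  # quarter wave: beta*l = pi/2
# numerically approaches Z0**2 / ZL = 12.5 ohms
assert abs(z_quarter - Z0**2 / ZL) < 1e-6
```

Unlike the gyrator, this inversion is reciprocal: the same scaling applies looking through the line from either end.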
Common snapping turtle
The common snapping turtle (Chelydra serpentina) is a species of large freshwater turtle in the family Chelydridae. Its natural range extends from southeastern Canada, southwest to the edge of the Rocky Mountains, as far east as Nova Scotia and Florida. The present-day Chelydra serpentina population in the Middle Rio Grande suggests that the common snapping turtle has been present in this drainage since at least the seventeenth century and is likely native. The three species of Chelydra and the larger alligator snapping turtles (genus Macrochelys) are the only extant chelydrids, a family now restricted to the Americas. The common snapping turtle, as its name implies, is the most widespread. The common snapping turtle is noted for its combative disposition when out of the water with its powerful beak-like jaws, and highly mobile head and neck (hence the specific epithet serpentina, meaning "snake-like"). In water, it is likely to flee and hide underwater in sediment. The common snapping turtle has a life-history strategy characterized by high and variable mortality of embryos and hatchlings, delayed sexual maturity, extended adult longevity, and iteroparity (repeated reproductive events) with low reproductive success per reproductive event. Females, and presumably also males, in more northern populations mature later (at 15–20 years) and at a larger size than in more southern populations (about 12 years). Lifespan in the wild is poorly known, but long-term mark-recapture data from Algonquin Park in Ontario, Canada, suggest a maximum age over 100 years. Anatomy and morphology C. serpentina has a rugged, muscular build with a ridged carapace (upper shell) that varies in color from tan to brown to black; the ridges tend to be more pronounced in younger individuals. The straight-line carapace length in adulthood may be nearly , though is more common. C. serpentina usually weighs .
Per one study, breeding common snapping turtles were found to average in carapace length, in plastron length and weigh about . Males are larger than females, with almost all weighing in excess of being male and quite old, as the species continues to grow throughout life. Any specimen above the aforementioned weights is exceptional, but the heaviest wild specimen caught reportedly weighed . Common snapping turtles kept in captivity can be quite overweight due to overfeeding and have weighed as much as . In the northern part of its range, the common snapping turtle is often the heaviest native freshwater turtle. According to a study by Nakamuta et al. (2016), common snapping turtles have well-developed olfactory organs, nerves, and bulbs that suggest that this species has a great sense of smell. Ecology and life history Common habitats are shallow ponds or streams. Some may inhabit brackish environments, such as estuaries. These sources of water tend to have an abundance of aquatic vegetation due to the shallow pools. Some describe them as habitat generalists as they can occupy most permanent bodies of water. Common snapping turtles sometimes bask—though rarely observed—by floating on the surface with only their carapaces exposed, though in the northern parts of their range, they also readily bask on fallen logs in early spring. In shallow waters, common snapping turtles may lie beneath a muddy bottom with only their heads exposed, stretching their long necks to the surface for an occasional breath. Their nostrils are positioned on the very tip of the snout, effectively functioning as snorkels. Common snapping turtles are omnivorous. Important aquatic scavengers, they are also active hunters that use ambush tactics to prey on anything they can swallow, including many invertebrates, fish, frogs, other amphibians, reptiles (including snakes and smaller turtles), unwary birds, and small mammals. 
A recent study found that the relatively low bite force of young common snapping turtles matches their active foraging behavior, meaning they have to travel and seek out more prey to make up for their inability to eat some items. In some areas adult common snapping turtles can occasionally be incidentally detrimental to breeding waterfowl, but their effect on such prey as ducklings and goslings is frequently exaggerated. As omnivorous scavengers though, they will also feed on carrion and a surprisingly large amount of aquatic vegetation. Common snapping turtles have few predators when older, but eggs are subject to predation by crows, American mink, skunks, foxes, and raccoons. Egg predators use three types of cues to locate turtle nests: Visual cues – seeing where the female has dug the soil for the nest chamber and seeing the turtle Tactile cues – soft surface around the nest site Chemosensory cues – scent of the musk of the female that she leaves on the surface of the soil as she digs As hatchlings and juveniles, most of the same predators will attack them as well as herons (mostly great blue herons), bitterns, hawks, owls, fishers, American bullfrogs, large fish, and snakes. There are records during winter in Canada of hibernating adult common snapping turtles being ambushed and preyed on by northern river otters. Other natural predators which have reportedly preyed on adults include coyotes, American black bears, American alligators and their larger cousins, alligator snapping turtles. Large, old male common snapping turtles have very few natural threats due to their formidable size and defenses, and tend to have a very low annual mortality rate. These turtles travel extensively over land to reach new habitats or to lay eggs. Pollution, habitat destruction, food scarcity, overcrowding, and other factors drive snappers to move; it is quite common to find them traveling far from the nearest water source.
Experimental data supports the idea that common snapping turtles can sense the Earth's magnetic field, which could also be used for such movements (together with a variety of other possible orientation cues). This species mates from April through November, with their peak laying season in June and July. The female can hold sperm for several seasons, using it as necessary. Females travel over land to find sandy soil in which to lay their eggs, often some distance from the water. After digging a hole, the female typically deposits 25 to 80 eggs each year, guiding them into the nest with her hind feet and covering them with sand for incubation and protection. These eggs have a leathery, flexible shell and they typically measure only 26-28 mm in diameter. Incubation time is temperature-dependent, ranging from 9 to 18 weeks. One study on the incubation period of the common snapping turtle incubated the eggs at two temperatures: 20 °C (68 °F) and 30 °C (86 °F). The research found that the incubation period at the higher temperature was significantly shorter at approximately 63 days, while at the lower temperature the time was approximately 140 days. In cooler climates, hatchlings overwinter in the nest. The common snapping turtle is remarkably cold-tolerant; radiotelemetry studies have shown some individuals do not hibernate, but remain active under the ice during the winter. In addition to incubation time, temperature also affects sex determination. It has been shown that females develop at low and high temperatures, while males develop in the intermediate temperature range. Fall temperatures had a positive effect on clutch size and clutch mass, whereas spring temperatures had no impact. Common snapping turtle hatchlings have recently been found to make sounds before nest exit onto the surface, a phenomenon also known from species in the South American genus Podocnemis and the Ouachita map turtle. 
These sounds are mostly "clicking" noises, but other sounds, including those that sound somewhat like a “creak” or rubbing a finger along a fine-toothed comb, are also sometimes produced. In the northern part of their range common snapping turtles do not breathe for more than six months because ice covers their hibernating site. These turtles can get oxygen by pushing their head out of the mud and allowing gas exchange to take place through the membranes of their mouth and throat. This is known as extrapulmonary respiration. If they cannot get enough oxygen through this method they start to utilize anaerobic pathways, burning sugars and fats without the use of oxygen. The metabolic by-products from this process are acidic and create very undesirable side effects by spring, which are known as oxygen debt. Although designated as "least concern" on the IUCN redlist, the species has been designated in the Canadian part of its range as "Special Concern" due to its life history being sensitive to disruption by anthropogenic activity. Systematics and taxonomy Currently, no subspecies of the common snapping turtle are recognized. The former Florida subspecies osceola is currently considered a synonym of serpentina, while the other former subspecies Chelydra rossignonii and Chelydra acutirostris are both recognized as full species. Behavior When they encounter a species unfamiliar to them such as humans, in rare instances, they will become curious and survey the situation and even more rarely may bump their nose on a leg of the person standing in the water. Although common snapping turtles have fierce dispositions, when they are encountered in the water or a swimmer approaches, they will slip quietly away from any disturbance or may seek shelter under mud or grass nearby. 
Relationship with humans As food The common snapping turtle is a traditional ingredient in turtle soup; consumption in large quantities, however, can become a health concern due to the potential concentration of toxic environmental pollutants in the turtle's flesh. Captivity The common snapping turtle is not an ideal pet. Its neck is very flexible, and a wild turtle can bite its handler even if picked up by the sides of its shell. The claws are as sharp as those of bears and cannot be trimmed as dog claws can. Like a bear, the turtle uses its claws to hold and tear food while biting it. Despite this, a common snapping turtle cannot use its claws for either attacking (its legs have no speed or strength in "swiping" motions) or eating (it has no opposable thumbs), but only as aids for digging and gripping. Veterinary care is best left to a reptile specialist. A wild common snapping turtle will make a hissing sound when it is threatened or encountered, but it prefers not to provoke confrontations. It is a common misconception that common snapping turtles may be safely picked up by the tail with no harm to the animal; in fact, this has a high chance of injuring the turtle, especially the tail itself and the vertebral column. Lifting the turtle with the hands is difficult and dangerous. Snappers can stretch their necks back across their own carapace and to their hind feet on either side to bite. When they feel stressed, they release a musky odor from behind their legs. It may be tempting to rescue a common snapping turtle found on a road by getting it to bite a stick and then dragging it out of immediate danger. This action can, however, severely scrape the legs and underside of the turtle and lead to deadly infections in the wounds. The safest way to pick up a common snapping turtle is by grasping the carapace behind the back legs, being careful not to grasp the tail.
There is a large gap behind the back legs that allows for easy grasping of the carapace and keeps hands safe from both the beak and claws of the turtle. It can also be picked up with a shovel, from the back, making sure the shovel is square across the bottom of the shell. The easiest way, though, is with a blanket or tarp, picking up the corners with the turtle in the middle. Common snapping turtles are raised on some turtle farms in Mainland China. In politics The common snapping turtle was the central feature of a famous American political cartoon. Published in 1808 in protest at the Jeffersonian Embargo Act of 1807, the cartoon depicted a common snapping turtle, jaws locked fiercely to an American trader who was attempting to carry a barrel of goods onto a British ship. The trader was seen whimsically uttering the words "Oh! this cursed Ograbme" ("embargo" spelled backwards, and also "O, grab me" as the turtle is doing). This piece is widely considered a pioneering work within the genre of the modern political cartoon. In 2006, the common snapping turtle was declared the state reptile of New York by vote of the New York Legislature after being chosen by the state's public elementary school children. Reputation While it is widely rumored that common snapping turtles can bite off human fingers or toes, and their powerful jaws are more than capable of doing so, no proven cases have ever been presented for this species, as they use their overall size and strength to deter would-be predators. Common snapping turtles are "quite docile" animals underwater that prefer to avoid confrontations rather than provoke them. The ability to bite forcefully is extremely useful for consuming hard-bodied prey items such as mollusks, crustaceans, and turtles along with some plant matter, like nuts and seeds. 
In 2002, a study reported in the Journal of Evolutionary Biology found that the common snapping turtle (Chelydra serpentina) registered a bite force between 208 and 226 newtons. In comparison, the average bite force of a human (at the molars) is between 300 and 700 newtons. A distantly related species, the alligator snapping turtle, has been known to bite off fingers, with at least three documented cases. Invasive species In recent years in Italy, large mature adult C. serpentina turtles have been taken from bodies of water throughout the country. They were most probably introduced by the release of unwanted pets. In March 2011, an individual weighing was captured in a canal near Rome; another individual was captured near Rome in September 2012. In Japan, the species was introduced as an exotic pet in the 1960s; it has been recorded as the source of serious bite injuries. In 2004 and 2005, some 1,000 individuals were found in Chiba Prefecture, making up the majority of individuals believed to have been introduced. Conservation The species is currently classified as Least Concern by the IUCN, but has declined sufficiently due to pressure from collection for the pet trade and habitat degradation that Canada and several U.S. states have enacted or are proposing stricter conservation measures. In Canada, it was listed as "Special Concern" under the Species at Risk Act in 2011 and is a target species for projects that include surveys, identification of major habitats, investigation and mitigation of threats, and education of the public, including landowners. Involved bodies include governmental departments, universities, museums, and citizen science projects. Although common snapping turtles are listed as a species of least concern, anthropogenic factors may still have major effects on populations. Decades of road mortality may cause severe population decline in common snapping turtle populations present in urbanized wetlands.
A study in southwestern Ontario monitored a population near a busy roadway and found a loss of 764 individuals in only 17 years. The population decreased from 941 individuals in 1985 to 177 individuals in 2002. Road mortality may put common snapping turtle populations at risk of extirpation. Exclusion fencing could aid in decreasing population loss.
Biology and health sciences
Reptiles
null
441046
https://en.wikipedia.org/wiki/Rove%20beetle
Rove beetle
The rove beetles are a family (Staphylinidae) of beetles, primarily distinguished by their short elytra (wing covers) that typically leave more than half of their abdominal segments exposed. With over 66,000 species in thousands of genera, the group is one of the largest families in the beetle order, and one of the largest families of organisms. It is an ancient group that first appeared during the Middle Jurassic based on definitive records of fossilized rove beetles, with the Late Triassic taxon Leehermania more likely belonging to Myxophaga. They are an ecologically and morphologically diverse group of beetles, and commonly encountered in terrestrial ecosystems. One well-known species is the devil's coach-horse beetle (Ocypus olens). For some other species, see the list of British rove beetles. Anatomy As might be expected for such a large family, considerable variation exists among the species. Sizes range from <1 to , with most in the 2–8 mm range, and the form is generally elongated, though some rove beetles are ovoid in shape. Colors range from yellow and red to reddish-brown to brown to black to iridescent blue and green. The antennae usually have 11 segments and are filiform, with moderate clubbing in some genera. The abdomen may be very long and flexible, and some rove beetles superficially resemble earwigs. In the subfamilies Paederinae, Euaesthetinae, and Osoriinae, and partially in Steninae, the tergum and sternum on the visible abdominal segments have fused, making each segment ring-shaped. Because of their small elytra, these beetles must fold their wings in a complex, origami-like manner; they are nevertheless good fliers. Some rove beetles, including members of Antimerus and Phanolinus, are metallic in appearance. Some members of Paederina (specifically the genus Paederus), a subtribe of Paederinae, contain a potent vesicant in their haemolymph that can produce a skin irritation called dermatitis linearis, also known as Paederus dermatitis.
The irritant pederin is highly toxic, more potent than cobra venom. Ecology Rove beetles are known from every type of habitat in which beetles occur, and their diets include just about everything except the living tissues of higher plants, although the discovery of the diet of Himalusa thailandensis shows that some species do feed on higher plants. Most rove beetles are predators of insects and other invertebrates, living in forest leaf litter and similar decaying plant matter. They are also commonly found under stones, and around freshwater margins. Almost 400 species are known to live on ocean shores that are submerged at high tide, including the pictured rove beetle, although these represent far fewer than 1% of the worldwide total of Staphylinidae. Other species have adapted to live as inquilines in ant and termite colonies, and some live in mutualistic relationships with mammals, eating fleas and other parasites and thereby benefiting the host. A few species, notably those of the genus Aleochara, are scavengers and carrion feeders, or are parasitoids of other insects, particularly of certain fly pupae. To profit from the alleged advantages, several Staphylinidae have been transferred into Italy, Hawaii, the continental United States and Easter Island by practitioners. Another advantage of rove beetles is their sensitivity to changes in the environment, such as habitat alteration. This means they have potential as an ecological disturbance indicator in human-dominated environments. Although rove beetles' appetites for other insects would seem to make them obvious candidates for biological control of pests, and empirically they are believed to be important controls in the wild, experiments using them have not been notably successful. Greater success is seen with those species that are parasitoids (genus Aleochara). Rove beetles of the genus Stenus are specialist predators of small invertebrates such as springtails (Collembola). Their labium can shoot out from the head using blood pressure.
The thin rod of the labium ends in a pad of bristly hairs and hooks and between these hairs are small pores that exude an adhesive glue-like substance, which sticks to prey. Systematics Classification of the 63,650 (as of 2018) staphylinid species is ongoing and controversial, with some workers proposing an organization of as many as 10 separate families, but the current favored system is one of 32 subfamilies, about 167 tribes (some grouped into supertribes), and about 3,200 genera. About 400 new species are being described each year, and some estimates suggest three-quarters of tropical species are as yet undescribed. Gallery
Biology and health sciences
Beetles (Coleoptera)
Animals
441135
https://en.wikipedia.org/wiki/Pauropoda
Pauropoda
Pauropoda is a class of small, pale, millipede-like arthropods in the subphylum Myriapoda. More than 900 species in twelve families are found worldwide, living in soil and leaf mold. Pauropods look like centipedes or millipedes and may be a sister group of the latter, but a close relationship with Symphyla has also been posited. The name Pauropoda derives from the Greek pauros (meaning small or few) and pous or podus (meaning foot), because most species in this class have only nine pairs of legs as adults, a smaller number than those found among adults in any other class of myriapods. Anatomy Pauropods are soft, cylindrical animals with bodies measuring only 0.3 to 2 mm in length. They have neither eyes nor hearts, although they do have sensory organs which can detect light. The body segments have ventral tracheal/spiracular pouches forming apodemes similar to those in millipedes and Symphyla, although the tracheae usually connected to these structures are absent in most species. There are five pairs of long sensory hairs (trichobothria) located along the body segments. Pauropods can usually be identified by their distinctive anal plate, a structure unique to the group; different species can be distinguished by the size and shape of this plate. The antennae are branching, biramous, and segmented, which is distinctive for the group. Pauropods are usually either white or brown. Discovery The first pauropod species to be discovered and described was Pauropus huxleyi, found by Lord Avebury in his own garden in London in 1866. He wrote of the creature: "Pauropus huxleyi is a bustling, active, neat and cleanly creature. It has, too, a look of cheerful intelligence, which forms a great contrast to the dull stupidity of the Diplopods, or the melancholy ferocity of most Chilopods." In 1870, Packard discovered a species of North American pauropod, extending the group's range.
Evolution and systematics Only one fossil species has been reported: Eopauropus balticus, a prehistoric species of pauropod found in Baltic amber. Pauropods are divided into two orders: Hexamerocerata and Tetramerocerata. Hexamerocerata contains only one family, Millotauropodidae, with a single genus and only eight species. Tetramerocerata is much larger and more diverse, with eleven families, including Pauropodidae, Brachypauropodidae, and Eurypauropodidae. The family Pauropodidae is especially large, with 27 genera and 814 species, including most of the genera and species in the class Pauropoda. Adults in the order Tetramerocerata have a scarcely telescopic antennal stalk with four segments, five or six tergites, and eight to ten pairs of legs. Pauropods in this order are small (sometimes quite small) and white or brownish. Most species have nine pairs of legs as adults, but adults in four genera (Cauvetauropus, Aletopauropus, Zygopauropus, and Amphipauropus) have only eight pairs of legs, and adult females in the genus Decapauropus have either nine or ten pairs of legs. The order Tetramerocerata has a subcosmopolitan distribution. Pauropods in the order Hexamerocerata have a strongly telescopic antennal stalk with six segments. Adults in this order have twelve tergites and eleven pairs of legs. The pauropods in this order are white and relatively long and large. The order Hexamerocerata has a mainly tropical range. Reproduction and development Pauropods, like all other myriapods, are gonochoric. Male pauropods place small packets of sperm on the ground, which the females use to impregnate themselves. The females then deposit the fertilized eggs on the ground. Parthenogenesis can occur in some species, especially when environmental conditions are unfavourable. The embryo goes through a short pupoid stage before the egg hatches and the first larval instar emerges. Juveniles then develop into adults through a series of molts, adding legs at each stage.
Juveniles in the order Tetramerocerata start with three pairs of legs and progress through instars with five, then six, and then eight leg pairs, and in most species become adults with nine leg pairs. In contrast, the first instar in the order Hexamerocerata has six pairs of legs and becomes an adult with eleven leg pairs. In at least some species in each order, adults continue to molt but no longer add legs or segments. This mode of development is known as hemianamorphosis. Behavior and diet Pauropods have a distinctive method of movement characterized by bursts of speed and frequent changes of direction. Pauropods are shy of light and will attempt to distance themselves from it. Pauropods live in the soil (usually at densities of less than 100 per square metre [9/sq ft]) and under debris and leaf litter. Pauropods occasionally migrate upwards or downwards through the soil based on moisture levels. They feed on mold, fungi, and occasionally even the root hairs of plants. As their bodies are too soft to be able to dig and burrow, pauropods follow roots and crevices in the soil, sometimes all the way down to the surface of the groundwater. Gallery
Biology and health sciences
Myriapoda
Animals
441143
https://en.wikipedia.org/wiki/Symphyla
Symphyla
Symphylans, also known as garden centipedes or pseudocentipedes, are soil-dwelling arthropods of the class Symphyla in the subphylum Myriapoda. Symphylans resemble centipedes, but are very small, non-venomous, and may or may not form a clade with centipedes. More than 200 species are known worldwide. Symphyla are primarily herbivores and detritus feeders living deep in the soil, under stones, in decaying wood, and in other moist places. They are rapid runners, can move quickly through the pores between soil particles, and are typically found from the surface down to a depth of about . They consume decaying vegetation, but can do considerable harm in an agricultural setting by consuming seeds, roots, and root hairs in cultivated soil. For example, the garden symphylan, Scutigerella immaculata, can be a pest of crops. A species of Hanseniella has been recorded as a pest of sugar cane and pineapples in Queensland. A few species are found in trees and in caves. A species of Symphylella has been shown to be predominantly predatory, and some species are saprophagous. Description Symphyla are small, cryptic myriapods without eyes and without pigment. The body is soft and generally long, divided into two body regions: head and trunk. An exceptional size is reached in Hanseniella magna, which attains lengths of 12–13 mm (0.5 in). The head has long, segmented antennae, a postantennal organ, and three pairs of mouthparts: the mandibles, the long first maxillae, and the second maxillae, which are fused to form the lower lip, or labium, of the mouth. The antennae serve as sense organs. Disc-like organs of Tömösváry, which probably sense vibrations, are attached to the base of the antennae, as they are in centipedes. The trunk comprises 14 segments and is covered by microhairs on the lateral and ventral integument, and by a varying number of dorsal tergal plates, from 15 in Scutigerella and Hanseniella up to 24 in Ribautiella, increasing the flexibility of the body.
Legs are found on the first 12 segments. The 13th segment, which is fused with the 12th segment, bears a pair of spinnerets that resemble cerci, and the 14th segment has a pair of long sensory hairs (trichobothria). Around the anal opening there is a small telson. Symphylans have been reported as living up to four years, and moult throughout their life. Immature individuals have six or seven pairs of legs on hatching, but they add an additional pair at each moult until the adult instar, which usually has twelve pairs of legs. This mode of development is known as hemianamorphosis. Although most adult symphylans have twelve leg pairs, the first pair is absent or vestigial in some species (e.g., those in the genus Symphylella), so adults in some species have only eleven leg pairs. The species with 12 pairs are the only myriapods with actual legs on the first body segment, as the first pair of legs is modified into forcipules in centipedes, and in pauropods the segment is a reduced collum which bears ventrally a pair of small papillae, while in millipedes it is a collum without any appendages at all. Symphylans have several features linking them to early insects, such as a labium (fused second maxillae), an identical number of head segments and certain features of their legs. Each pair of legs is associated with an eversible structure, called a "coxal sac", that helps the animal absorb moisture, and a small stylus that may be sensory in function. Similar structures are found in the most primitive insects. Symphylans breathe through a pair of spiracles on the sides of their head, and are the only arthropods with spiracle openings on the head. These are connected to a system of tracheae that branch through the head and the first three segments of the body only. The genital openings are located on the fourth body segment, but the animals do not copulate. Instead, the male deposits 150 to 450 packages of sperm, or spermatophores, on small stalks.
The female then picks these up in her mouth, which contains special pouches for storing the sperm. She then lays her eggs, and attaches them to the sides of crevices or to moss or lichen with her mouth, smearing the sperm over them as she does so. The eggs are laid in groups of eight to twelve. The spinnerets produce secretions that turn into a silk-like thread. One fossil species, Symphylella patrickmuelleri, was found preserved in Burmese amber releasing long threads of silk. The silk plays a role in reproduction: the male deposits up to 450 spermatophores on stalks of silk. Symphylans have also been reported releasing silk as a defense and to suspend themselves in the air. Fossil record and evolution The symphylan fossil record is poorly known, with only five species recorded, all placed in living genera. The oldest records of both families are found in Burmese amber from the middle Cretaceous, approximately 99 million years ago. As a result, both families are thought to have diverged before the end of the Mesozoic Era. Despite their common name, morphological studies commonly place symphylans as more closely related to millipedes and pauropods than to centipedes, in the clade Progoneata. Molecular studies have shown conflicting results, with some supporting the Progoneata clade and others aligning symphylans with centipedes or other arthropods, although some of these results are weakly supported. The clade is believed to be monophyletic.
Biology and health sciences
Myriapoda
Animals
442137
https://en.wikipedia.org/wiki/Loss%20function
Loss function
In mathematical optimization and decision theory, a loss function or cost function (sometimes also called an error function) is a function that maps an event or values of one or more variables onto a real number intuitively representing some "cost" associated with the event. An optimization problem seeks to minimize a loss function. An objective function is either a loss function or its opposite (in specific domains, variously called a reward function, a profit function, a utility function, a fitness function, etc.), in which case it is to be maximized. The loss function could include terms from several levels of the hierarchy. In statistics, typically a loss function is used for parameter estimation, and the event in question is some function of the difference between estimated and true values for an instance of data. The concept, as old as Laplace, was reintroduced in statistics by Abraham Wald in the middle of the 20th century. In the context of economics, for example, this is usually economic cost or regret. In classification, it is the penalty for an incorrect classification of an example. In actuarial science, it is used in an insurance context to model benefits paid over premiums, particularly since the works of Harald Cramér in the 1920s. In optimal control, the loss is the penalty for failing to achieve a desired value. In financial risk management, the function is mapped to a monetary loss. Examples Regret Leonard J. Savage argued that using non-Bayesian methods such as minimax, the loss function should be based on the idea of regret, i.e., the loss associated with a decision should be the difference between the consequences of the best decision that could have been made had the circumstances been known and the decision that was in fact taken before they were known. Quadratic loss function The use of a quadratic loss function is common, for example when using least squares techniques.
It is often more mathematically tractable than other loss functions because of the properties of variances, as well as being symmetric: an error above the target causes the same loss as an error of the same magnitude below the target. If the target is t, then a quadratic loss function is λ(x) = C(t − x)^2 for some constant C; the value of the constant makes no difference to a decision, and can be ignored by setting it equal to 1. This is also known as the squared error loss (SEL). Many common statistics, including t-tests, regression models, design of experiments, and much else, use least squares methods applied using linear regression theory, which is based on the quadratic loss function. The quadratic loss function is also used in linear-quadratic optimal control problems. In these problems, even in the absence of uncertainty, it may not be possible to achieve the desired values of all target variables. Often loss is expressed as a quadratic form in the deviations of the variables of interest from their desired values; this approach is tractable because it results in linear first-order conditions. In the context of stochastic control, the expected value of the quadratic form is used. Because of its square, the quadratic loss assigns more importance to outliers than to points near the target, so alternatives like the Huber, Log-Cosh and SMAE losses are used when the data has many large outliers. 0-1 loss function In statistics and decision theory, a frequently used loss function is the 0-1 loss function L(ŷ, y) = [ŷ ≠ y], written here in Iverson bracket notation, i.e. it evaluates to 1 when ŷ ≠ y, and 0 otherwise. Constructing loss and objective functions In many applications, objective functions, including loss functions as a particular case, are determined by the problem formulation.
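As a small, library-free sketch (function names are illustrative, not from any particular package), the quadratic loss C(t − x)^2 and the 0-1 loss discussed above can be written directly:

```python
# A minimal sketch of the two loss functions described above; the
# function names are illustrative, not from any particular library.

def quadratic_loss(t, x, C=1.0):
    """Squared-error loss C*(t - x)**2; the constant C rescales the loss
    but does not change which decision minimizes it."""
    return C * (t - x) ** 2

def zero_one_loss(y_hat, y):
    """0-1 loss: 1 for an incorrect classification, 0 for a correct one."""
    return 0 if y_hat == y else 1

# Symmetry of the quadratic loss: errors of +2 and -2 around the
# target t = 10 cost the same.
assert quadratic_loss(10, 12) == quadratic_loss(10, 8) == 4.0
assert zero_one_loss("cat", "cat") == 0
assert zero_one_loss("cat", "dog") == 1
```

Note how the symmetry claim in the text shows up directly: only the magnitude of the error t − x matters for the quadratic loss, not its sign.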
In other situations, the decision maker's preference must be elicited and represented by a scalar-valued function (called also utility function) in a form suitable for optimization — the problem that Ragnar Frisch has highlighted in his Nobel Prize lecture. The existing methods for constructing objective functions are collected in the proceedings of two dedicated conferences. In particular, Andranik Tangian showed that the most usable objective functions — quadratic and additive — are determined by a few indifference points. He used this property in the models for constructing these objective functions from either ordinal or cardinal data that were elicited through computer-assisted interviews with decision makers. Among other things, he constructed objective functions to optimally distribute budgets for 16 Westfalian universities and the European subsidies for equalizing unemployment rates among 271 German regions. Expected loss In some contexts, the value of the loss function itself is a random quantity because it depends on the outcome of a random variable X. Statistics Both frequentist and Bayesian statistical theory involve making a decision based on the expected value of the loss function; however, this quantity is defined differently under the two paradigms. Frequentist expected loss We first define the expected loss in the frequentist context. It is obtained by taking the expected value with respect to the probability distribution, Pθ, of the observed data, X. This is also referred to as the risk function of the decision rule δ and the parameter θ. Here the decision rule depends on the outcome of X. The risk function is given by R(θ, δ) = E_θ[L(θ, δ(X))] = ∫ L(θ, δ(x)) dPθ(x). Here, θ is a fixed but possibly unknown state of nature, X is a vector of observations stochastically drawn from a population, E_θ is the expectation over all population values of X, dPθ is a probability measure over the event space of X (parametrized by θ), and the integral is evaluated over the entire support of X.
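The frequentist risk R(θ, δ) = E_θ[L(θ, δ(X))] can be approximated numerically by averaging the loss over many datasets drawn under a fixed θ. The sketch below assumes a toy setup not taken from the text (n i.i.d. normal observations, the sample mean as decision rule, squared-error loss), for which the exact risk is σ²/n:

```python
import random

def estimate_risk(theta, delta, loss, n=10, sigma=1.0, n_draws=100_000, seed=0):
    """Monte Carlo estimate of the frequentist risk E_theta[L(theta, delta(X))]:
    draw many datasets X under the state of nature theta, apply the decision
    rule delta to each, and average the resulting losses."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        x = [rng.gauss(theta, sigma) for _ in range(n)]
        total += loss(theta, delta(x))
    return total / n_draws

sample_mean = lambda x: sum(x) / len(x)
squared_error = lambda theta, a: (theta - a) ** 2

# For the sample mean under squared-error loss, the exact risk is
# sigma^2 / n = 0.1, and the Monte Carlo estimate lands close to it.
r = estimate_risk(theta=2.0, delta=sample_mean, loss=squared_error)
assert abs(r - 0.1) < 0.01
```

The estimate averages over the data X only, with θ held fixed; this is exactly what distinguishes the frequentist risk from the Bayes risk defined next, which additionally averages over θ.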
Bayes Risk In a Bayesian approach, the expectation is calculated using the prior distribution π of the parameter θ: ρ(π, δ) = ∫ R(θ, δ) dπ(θ) = ∫ [ ∫ L(θ, δ(x)) π(θ | x) dθ ] m(x) dx, where m(x) is known as the predictive likelihood wherein θ has been "integrated out", π(θ | x) is the posterior distribution, and the order of integration has been changed. One should then choose the action a* which minimises this expected loss, which is referred to as Bayes Risk. In the latter equation, the integrand inside dx is known as the Posterior Risk, and minimising it with respect to decision a also minimizes the overall Bayes Risk. This optimal decision, a*, is known as the Bayes (decision) Rule - it minimises the average loss over all possible states of nature θ, over all possible (probability-weighted) data outcomes. One advantage of the Bayesian approach is that one need only choose the optimal action under the actual observed data to obtain a uniformly optimal one, whereas choosing the actual frequentist optimal decision rule as a function of all possible observations is a much more difficult problem. Of equal importance though, the Bayes Rule reflects consideration of loss outcomes under different states of nature, θ. Examples in statistics For a scalar parameter θ, a decision function whose output is an estimate of θ, and a quadratic loss function (squared error loss) L(θ, δ(X)) = (θ − δ(X))^2, the risk function becomes the mean squared error of the estimate, R(θ, δ) = E_θ[(θ − δ(X))^2]. An estimator found by minimizing the mean squared error estimates the posterior distribution's mean. In density estimation, the unknown parameter is the probability density itself. The loss function is typically chosen to be a norm in an appropriate function space. For example, for the L2 norm, the risk function becomes the mean integrated squared error R(f, δ) = E ∫ (f(x) − δ(x))^2 dx. Economic choice under uncertainty In economics, decision-making under uncertainty is often modelled using the von Neumann–Morgenstern utility function of the uncertain variable of interest, such as end-of-period wealth.
Since the value of this variable is uncertain, so is the value of the utility function; it is the expected value of utility that is maximized. Decision rules A decision rule makes a choice using an optimality criterion. Some commonly used criteria are: Minimax: Choose the decision rule with the lowest worst loss — that is, minimize the worst-case (maximum possible) loss, max_θ R(θ, δ). Invariance: Choose the decision rule which satisfies an invariance requirement. Bayes: Choose the decision rule with the lowest average loss, i.e. minimize the expected value of the loss function over θ. Selecting a loss function Sound statistical practice requires selecting an estimator consistent with the actual acceptable variation experienced in the context of a particular applied problem. Thus, in the applied use of loss functions, selecting which statistical method to use to model an applied problem depends on knowing the losses that will be experienced from being wrong under the problem's particular circumstances. A common example involves estimating "location". Under typical statistical assumptions, the mean or average is the statistic for estimating location that minimizes the expected loss experienced under the squared-error loss function, while the median is the estimator that minimizes expected loss experienced under the absolute-difference loss function. Still different estimators would be optimal under other, less common circumstances. In economics, when an agent is risk neutral, the objective function is simply expressed as the expected value of a monetary quantity, such as profit, income, or end-of-period wealth. For risk-averse or risk-loving agents, loss is measured as the negative of a utility function, and the objective function to be optimized is the expected value of utility. Other measures of cost are possible, for example mortality or morbidity in the field of public health or safety engineering.
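The "location" example above can be checked numerically: over a sample, the mean minimizes the total squared error, while the median minimizes the total absolute error. A small grid-search sketch, with toy data chosen for illustration:

```python
data = [1.0, 2.0, 2.0, 3.0, 10.0]  # toy sample with one large value

def total_loss(c, loss):
    """Total loss of guessing location c for every point in the sample."""
    return sum(loss(x - c) for x in data)

# Grid search over candidate locations in steps of 0.01.
candidates = [i / 100 for i in range(0, 1101)]
best_squared = min(candidates, key=lambda c: total_loss(c, lambda e: e * e))
best_absolute = min(candidates, key=lambda c: total_loss(c, abs))

mean = sum(data) / len(data)           # 3.6
median = sorted(data)[len(data) // 2]  # 2.0
assert abs(best_squared - mean) < 1e-9
assert abs(best_absolute - median) < 1e-9
```

The large value 10.0 pulls the squared-error minimizer (the mean) well above most of the data, while the absolute-error minimizer (the median) stays with the bulk of the sample, which is the practical difference between the two losses.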
For most optimization algorithms, it is desirable to have a loss function that is globally continuous and differentiable. Two very commonly used loss functions are the squared loss, S(a) = a^2, and the absolute loss, A(a) = |a|. However, the absolute loss has the disadvantage that it is not differentiable at a = 0. The squared loss has the disadvantage that it has the tendency to be dominated by outliers—when summing over a set of a's (as in Σ_i S(a_i)), the final sum tends to be the result of a few particularly large a-values, rather than an expression of the average a-value. The choice of a loss function is not arbitrary. It is very restrictive and sometimes the loss function may be characterized by its desirable properties. Among the choice principles are, for example, the requirement of completeness of the class of symmetric statistics in the case of i.i.d. observations, the principle of complete information, and some others. W. Edwards Deming and Nassim Nicholas Taleb argue that empirical reality, not nice mathematical properties, should be the sole basis for selecting loss functions, and real losses often are not mathematically nice and are not differentiable, continuous, symmetric, etc. For example, a person who arrives before a plane gate closure can still make the plane, but a person who arrives after can not, a discontinuity and asymmetry which makes arriving slightly late much more costly than arriving slightly early. In drug dosing, the cost of too little drug may be lack of efficacy, while the cost of too much may be tolerable toxicity, another example of asymmetry. Traffic, pipes, beams, ecologies, climates, etc. may tolerate increased load or stress with little noticeable change up to a point, then become backed up or break catastrophically. These situations, Deming and Taleb argue, are common in real-life problems, perhaps more common than classical smooth, continuous, symmetric, differentiable cases.
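The outlier-dominance point made above can be illustrated with a toy set of residuals: under the squared loss a single large error contributes nearly all of the sum, while under the absolute loss its share is much smaller.

```python
errors = [0.1, -0.2, 0.1, 0.0, 5.0]  # toy residuals; 5.0 is the outlier

squared_total = sum(a * a for a in errors)    # 25.06
absolute_total = sum(abs(a) for a in errors)  # 5.4

# Share of the total loss contributed by the single outlier:
squared_share = (5.0 ** 2) / squared_total    # about 0.998
absolute_share = 5.0 / absolute_total         # about 0.926
assert squared_share > 0.99
assert absolute_share < 0.95
```

With the squared loss the outlier accounts for over 99% of the sum, so any fitting procedure minimizing it is driven almost entirely by that one point; the absolute loss spreads the influence more evenly.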
Mathematics
Optimization
null
18392290
https://en.wikipedia.org/wiki/Hydrodynamic%20voltammetry
Hydrodynamic voltammetry
In analytical chemistry, hydrodynamic voltammetry is a form of voltammetry in which the analyte solution flows relative to a working electrode. In many voltammetry techniques, the solution is intentionally left still to allow diffusion-controlled mass transfer. When a solution is made to flow, through stirring or some other physical mechanism, it is important to achieve a carefully controlled flux, or mass transfer, in order to obtain predictable results. These methods are types of electrochemical studies which use potentiostats to investigate reaction mechanisms related to redox chemistry, among other chemical phenomena. Structure Most experiments involve a three-electrode setup, but the setup configuration varies widely. All cell configurations create a laminar flow of solution across the working electrode(s), producing a steady-state current determined by solution flow rather than diffusion. The resulting current can be mathematically predicted and modeled. The most common hydrodynamic setups involve rotating the working electrode to create a laminar flow of solution across the electrode surface. Both rotating disk electrodes (RDE) and rotating ring-disk electrodes (RRDE) are examples where the working electrode rotates. Other configurations, such as flow cells, use pumps to direct solution at or across the working electrode(s). Distinction Hydrodynamic techniques are distinct from still, unstirred experiments such as cyclic voltammetry, where the steady-state current is limited by the diffusion of substrate. Experiments are not, however, limited to linear sweep voltammetry. The configuration of many cells carries the substrate from one working electrode across another, the RRDE for example. The potential of one electrode can be varied as the other is held constant or varied. The flow rate can also be varied to adjust the temporal gap the substrate experiences between working electrodes.
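The text notes that the steady-state current at a rotating working electrode can be predicted mathematically; for an RDE this is commonly done with the Levich equation, i_L = 0.620 n F A D^(2/3) ω^(1/2) ν^(-1/6) C. A minimal sketch (the function name and the sample parameter values are illustrative assumptions, not from the text):

```python
import math

F = 96485.0  # Faraday constant, C/mol

def levich_current(n, area_cm2, D_cm2_s, omega_rad_s, nu_cm2_s, conc_mol_cm3):
    """Levich limiting current (in amperes) for a rotating disk electrode.

    CGS-style units: area in cm^2, diffusion coefficient and kinematic
    viscosity in cm^2/s, rotation rate in rad/s, concentration in mol/cm^3.
    """
    return (0.620 * n * F * area_cm2 * D_cm2_s ** (2 / 3)
            * math.sqrt(omega_rad_s) * nu_cm2_s ** (-1 / 6) * conc_mol_cm3)

# Illustrative values: a one-electron couple, a 0.196 cm^2 disk (5 mm diameter),
# D = 7.6e-6 cm^2/s, 1600 rpm, aqueous kinematic viscosity 0.01 cm^2/s, 1 mM analyte.
omega = 1600 * 2 * math.pi / 60  # rpm -> rad/s
i_L = levich_current(1, 0.196, 7.6e-6, omega, 0.01, 1e-6)  # on the order of 0.1 mA
```

The square-root dependence on rotation rate is the signature of flow-controlled (rather than diffusion-controlled) mass transfer: doubling the rotation rate increases the limiting current by a factor of sqrt(2).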
Physical sciences
Electrical methods
Chemistry
18399896
https://en.wikipedia.org/wiki/Chalk%20line
Chalk line
A chalk line or chalk box is a tool for marking long, straight lines on relatively flat surfaces, much further than is practical by hand or with a straightedge. They may be used to lay out straight lines between two points, or vertical lines by using the weight of the line reel as a plumb line. It is an important tool in carpentry, and the working of timber in a rough and unplaned state, as it does not require the timber to have a straight or squared edge formed onto it beforehand. Use A chalk line creates straight lines by the action of a taut string that has been previously coated with a loose, powdered dye, usually chalk. The string is then laid across the surface to be marked and pulled tight. Next, the string is plucked or snapped sharply, causing it to strike the surface, which then transfers its chalk to the surface along the straight line where it struck. Chalk lines are typically used to mark relatively flat surfaces. However, as long as the line is taut and the two ends of the chalk line are almost in the same plane, the chalk line will mark all points the string touches on or near that plane once snapped. The objects to be marked do not need to be continuous along the line. Chalk lines can also be used across irregular and perforated surfaces, for example on an unfinished stud wall. The primary problems associated with improper maintenance of a chalk line are string breakage due to excessive tension on the line, and degradation of the line associated with moisture contamination. Chalk lines and plumb-bobs are often sold as a single tool. History Chalk lines were used in ancient Egypt, are mentioned in Homer's Iliad, and have been used continuously by builders in various cultures since. Continuing development of this simple-but-effective tool focuses on the coloration for the chalk or marking compound, as well as the outer case and method of handling. Ink lines In East Asia, an ink line is used in preference to a chalk line.
This is a silken cord, stored on a combined reel and inkpot which was invented by Chinese master craftsman Lu Ban. In Japan, it is called a sumitsubo. Alongside the line reel is a cavity filled with ink-soaked cotton fibres, which the line is drawn through as it is unreeled. These sumitsubo are highly decorated and much-prized by their owners. As with many such tools, they are often made by their users while apprentices. Upon the completion of a major building, a large celebration or topping-out ceremony is held. As part of this event, a set of symbolic carpenter's tools are freshly made and presented to the new building. A sumitsubo is a traditional tool included with them.
Technology
Hand tools
null
2505466
https://en.wikipedia.org/wiki/Exomoon
Exomoon
An exomoon or extrasolar moon is a natural satellite that orbits an exoplanet or other non-stellar extrasolar body. Exomoons are difficult to detect and confirm using current techniques, and to date there have been no confirmed exomoon detections. However, observations from missions such as Kepler have turned up a number of candidates. Two potential exomoons that may orbit rogue planets have also been detected by microlensing. In September 2019, astronomers reported that the observed dimmings of Tabby's Star may have been produced by fragments resulting from the disruption of an orphaned exomoon. Some exomoons may be potential habitats for extraterrestrial life. Definition and designation Although traditional usage implies moons orbit a planet, the discovery of brown dwarfs with planet-sized satellites blurs the distinction between planets and moons, due to the low mass of brown dwarfs. This confusion is resolved by the International Astronomical Union (IAU) declaration that "Objects with true masses below the limiting mass for thermonuclear fusion of deuterium that orbit stars, brown dwarfs or stellar remnants and that have a mass ratio with the central object below the L4/L5 instability (M/Mcentral < 2/(25+√621)) are planets." The IAU definition does not address the naming convention for the satellites of free-floating objects that are less massive than brown dwarfs and below the deuterium limit (the objects are typically referred to as free-floating planets, rogue planets, low-mass brown dwarfs or isolated planetary-mass objects). The satellites of these objects are typically referred to as exomoons in the literature. Exomoons take their designation from that of their parent body plus a capital Roman numeral; thus, Kepler-1625b orbits Kepler-1625 (synonymous with Kepler-1625a) and itself may be orbited by Kepler-1625b I (no Kepler-1625b II is known, nor is Kepler-1625b I known to have a submoon).
Characteristics Characteristics of any extrasolar satellite are likely to vary, as do the Solar System's moons. For extrasolar giant planets orbiting within their stellar habitable zone, there is the prospect that a terrestrial-planet-sized satellite may be capable of supporting life. In August 2019, astronomers reported that an exomoon in the WASP-49b exoplanet system may be volcanically active. Orbital inclination For impact-generated moons of terrestrial planets not too far from their star, with a large planet–moon distance, it is expected that the orbital planes of moons will tend to be aligned with the planet's orbit around the star due to tides from the star, but if the planet–moon distance is small it may be inclined. For gas giants, the orbits of moons will tend to be aligned with the giant planet's equator because they formed in circumplanetary disks. Lack of moons around planets close to their stars Planets close to their stars on circular orbits will tend to despin and become tidally locked. As the planet's rotation slows down, the radius of a synchronous orbit moves outwards from the planet. For planets tidally locked to their stars, the distance from the planet at which the moon will be in a synchronous orbit around the planet is outside the Hill sphere of the planet. The Hill sphere of the planet is the region where its gravity dominates that of the star so it can hold on to its moons. Moons inside the synchronous orbit radius of a planet will spiral into the planet. Therefore, if the synchronous orbit is outside the Hill sphere, then all moons will spiral into the planet. If the synchronous orbit is not three-body stable then moons outside this radius will escape orbit before they reach the synchronous orbit. A study on tidal-induced migration offered a feasible explanation for this lack of exomoons. It showed the physical evolution of host planets (i.e.
interior structure and size) plays a major role in their final fate: synchronous orbits can become transient states and moons are prone to be stalled in semi-asymptotic semimajor axes, or even ejected from the system, where other effects can appear. In turn, this would have a great impact on the detection of extrasolar satellites. Detection methods The existence of exomoons around many exoplanets is theorized. Despite the great successes of planet hunters with Doppler spectroscopy of the host star, exomoons cannot be found with this technique. This is because the resultant shifted stellar spectra due to the presence of a planet plus additional satellites would behave identically to a single point-mass moving in orbit around the host star. In recognition of this, there have been several other methods proposed for detecting exomoons, including: Direct imaging Microlensing Pulsar timing Transit timing effects Transit method Direct imaging Direct imaging of an exoplanet is extremely challenging due to the large difference in brightness between the star and exoplanet as well as the small size and irradiance of the planet. These problems are greater for exomoons in most cases. However, it has been theorized that tidally heated exomoons could shine as brightly as some exoplanets. Tidal forces can heat up an exomoon because energy is dissipated by differential forces on it. Io, a tidally heated moon orbiting Jupiter, has volcanoes powered by tidal forces. If an exomoon is sufficiently tidally heated and is distant enough from its star for the moon's light not to be drowned out, it would be possible for a telescope such as the James Webb Space Telescope to image it. Doppler spectroscopy of host planet Doppler spectroscopy is an indirect detection method that measures the velocity shift and resulting stellar spectrum shift associated with an orbiting planet. This method is also known as the Radial Velocity method. It is most successful for main sequence stars.
The spectra of exoplanets have been successfully partially retrieved for several cases, including HD 189733 b and HD 209458 b. The quality of the retrieved spectra is significantly more affected by noise than the stellar spectrum. As a result, the spectral resolution, and number of retrieved spectral features, is much lower than the level required to perform Doppler spectroscopy of the exoplanet. Radio wave emissions from the host planet's magnetosphere During its orbit, Io's ionosphere interacts with Jupiter's magnetosphere, to create a frictional current that causes radio wave emissions. These are called "Io-controlled decametric emissions" and the researchers believe finding similar emissions near known exoplanets could be key to predicting where other moons exist. Microlensing In 2002, Cheongho Han & Wonyong Han proposed that microlensing be used to detect exomoons. The authors found that detecting satellite signals in lensing light curves will be very difficult because the signals are seriously smeared out by the severe finite-source effect, even for events involving source stars with small angular radii. Pulsar timing In 2008, Lewis, Sackett, and Mardling of Monash University, Australia, proposed using pulsar timing to detect the moons of pulsar planets. The authors applied their method to the case of PSR B1620-26 b and found that a stable moon orbiting this planet could be detected, if the moon had a separation of about one-fiftieth of that of the orbit of the planet around the pulsar and a mass ratio to the planet of 5% or larger. Transit timing effects In 2007, physicists A. Simon, K. Szatmáry, and Gy. M. Szabó published a research note titled 'Determination of the size, mass, and density of "exomoons" from photometric transit timing variations'.
In 2009, David Kipping published a paper outlining how by combining multiple observations of variations in the time of mid-transit (TTV, caused by the planet leading or trailing the planet–moon system's barycenter when the pair are oriented roughly perpendicular to the line of sight) with variations of the transit duration (TDV, caused by the planet moving along the path of transit relative to the planet–moon system's barycenter when the moon–planet axis lies roughly along the line of sight) a unique exomoon signature is produced. Furthermore, the work demonstrated how both the mass of the exomoon and its orbital distance from the planet could be determined using the two effects. In a later study, Kipping concluded that habitable zone exomoons could be detected by the Kepler Space Telescope using the TTV and TDV effects. Transit method (star-planet-moon systems) When an exoplanet passes in front of the host star, a small dip in the light received from the star may be observed. The transit method is currently the most successful and responsive method for detecting exoplanets. This effect, also known as occultation, produces a dip proportional to the square of the planet's radius. If a planet and a moon pass in front of a host star, both objects should produce a dip in the observed light. A planet–moon eclipse may also occur during the transit, but such events have an inherently low probability. Transit method (planet-moon systems) If the host planet is directly imaged, then transits of an exomoon may be observable. When an exomoon passes in front of the host planet, a small dip in the light received from the directly-imaged planet may be detected. Exomoons of directly imaged exoplanets and free-floating planets are predicted to have a high transit probability and occurrence rate. Moons as small as Io or Titan should be detectable with the James Webb Space Telescope using this method, but this search method requires a substantial amount of observation time.
Orbital sampling effects If a glass bottle is held up to the light it is easier to see through the middle of the glass than it is near the edges. Similarly, a sequence of samples of a moon's position will be more bunched up at the edges of the moon's orbit of a planet than in the middle. If a moon orbits a planet that transits its star then the moon will also transit the star and this bunching up at the edges may be detectable in the transit light curves if a sufficient number of measurements are made. The larger the star the greater the number of measurements needed to create observable bunching. The Kepler telescope data may contain enough data to detect moons around red dwarfs using orbital sampling effects but won't have enough data for Sun-like stars. Indirect detection around white dwarfs The atmosphere of white dwarfs can be polluted with metals and in a few cases, the white dwarfs are surrounded by a debris disk. Usually, this pollution is caused by asteroids or comets, but tidally disrupted exomoons were also proposed in the past as a source of white dwarf pollution. In 2021 Klein and collaborators discovered that the white dwarfs GD 378 and GALEXJ2339 had an unusually high pollution with beryllium. The researchers conclude that oxygen, carbon or nitrogen atoms must have been subjected to MeV collisions with protons in order to create this excess of beryllium. In one proposed scenario, the beryllium excess is caused by a tidally disrupted exomoon. In this scenario a moon-forming icy disk exists around a giant planet, which orbits the white dwarf. The strong magnetic field of such a giant planet accelerates stellar wind particles, such as protons, and directs them into the disk. The accelerated proton collides with water ice in the disk, creating elements like beryllium, boron, and lithium in a spallation reaction. These three elements are relatively rare in the universe as they are destroyed in the process of stellar fusion. 
A moonlet forming in this kind of disk would have a higher beryllium, boron and lithium abundance. The study also predicted that the mid-sized moons of Saturn, for example, Mimas, should be enriched in Be, B, and Li. Candidates Detection projects There are several missions underway now using some of the methods described above, which will find many more candidate exomoons and be able to confirm or disprove some candidates. PLATO, for example, is expected to launch in 2026. As part of the Kepler mission, the Hunt for Exomoons with Kepler (HEK) project was intended to detect exomoons, and generated some of the candidates still discussed today. Habitability The habitability of exomoons has been considered in at least two studies published in peer-reviewed journals. René Heller & Rory Barnes considered stellar and planetary illumination on moons as well as the effect of eclipses on their orbit-averaged surface illumination. They also considered tidal heating as a threat to their habitability. In Sect. 4 of their paper, they introduce a new concept to define the habitable orbits of moons. Referring to the concept of the circumstellar habitable zone for planets, they define an inner border for a moon to be habitable around a certain planet and call it the circumplanetary "habitable edge". Moons closer to their planet than the habitable edge are uninhabitable. In a second study, René Heller then included the effect of eclipses into this concept as well as constraints from a satellite's orbital stability. He found that, depending on a moon's orbital eccentricity, there is a minimum mass for stars to host habitable moons at around 0.2 solar masses. Taking as an example the smaller Europa, at less than 1% the mass of the Earth, Lehmer et al. found that if it were to end up near Earth orbit it would only be able to hold onto its atmosphere for a few million years.
However, for any larger, Ganymede-sized moons venturing into its solar system's habitable zone, an atmosphere and surface water could be retained indefinitely. Models for moon formation suggest the formation of even more massive moons than Ganymede is common around many of the super-Jovian exoplanets. Earth-sized exoplanets in the habitable zone around M-dwarfs are often tidally locked to the host star. This has the effect that one hemisphere always faces the star, while the other remains in darkness. An exomoon in an M-dwarf system does not face this challenge, as it is tidally locked to the planet, so both of its hemispheres would receive light. Martínez-Rodríguez et al. studied the possibility of exomoons around planets that orbit M-dwarfs in the habitable zone. While they found 33 exoplanets from earlier studies that lie in the habitable zone, only four could host Moon- to Titan-mass exomoons for timescales longer than 0.8 Gyr (HIP 12961 b, HIP 57050 b, Gliese 876 b and c). For this mass range the exomoons could probably not hold onto their atmospheres. The researchers increased the mass for the exomoons and found that exomoons with the mass of Mars around IL Aquarii b and c could be stable on timescales above the Hubble time. The CHEOPS mission could detect exomoons around the brightest M-dwarfs or ESPRESSO could detect the Rossiter–McLaughlin effect caused by the exomoons. Both methods require a transiting exoplanet, which is not the case for these four candidates. Like an exoplanet, an exomoon can potentially become tidally locked to its primary. However, since the exomoon's primary is an exoplanet, it would continue to rotate relative to its star after becoming tidally locked, and thus would still experience a day/night cycle indefinitely. The possible exomoon candidate transiting 2MASS J1119-1137AB lies in the habitable zone of its host (at least initially until the planet cools), but it is unlikely complex life has formed as the system is only 10 Myr old.
If confirmed, the exomoon may be similar to the primordial Earth, and characterization of its atmosphere with the James Webb Space Telescope could perhaps place limits on the time scale for the formation of life.
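The transit method described above relies on a dip in starlight proportional to the square of the transiting body's radius; the fractional depth is (R_object/R_star)^2. A quick sketch using standard radii (the Jupiter/Earth/Sun values are standard constants, not figures from the text):

```python
# Transit depth: fraction of starlight blocked = (R_object / R_star)^2.
R_SUN = 6.957e8    # solar radius, m
R_JUP = 7.1492e7   # Jupiter equatorial radius, m
R_EARTH = 6.371e6  # Earth radius, m

def transit_depth(r_object_m, r_star_m):
    # Fractional dip in the light curve during transit.
    return (r_object_m / r_star_m) ** 2

# A Jupiter-size planet blocks ~1% of a Sun-like star's light;
# an Earth-size moon adds only a ~0.008% dip on top of that.
planet_dip = transit_depth(R_JUP, R_SUN)
moon_dip = transit_depth(R_EARTH, R_SUN)
```

The roughly 100-fold difference between the two dips illustrates why a planet–moon pair produces a detectable primary transit but only a marginal additional signal from the moon.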
Physical sciences
Planetary science
Astronomy
2506366
https://en.wikipedia.org/wiki/Tanker%20%28ship%29
Tanker (ship)
A tanker (or tank ship or tankship) is a ship designed to transport or store liquids or gases in bulk. Major types of tankship include the oil tanker (or petroleum tanker), the chemical tanker, cargo ships, and the gas carrier. Tankers also carry commodities such as vegetable oils, molasses and wine. In the United States Navy and Military Sealift Command, a tanker used to refuel other ships is called an oiler (or replenishment oiler if it can also supply dry stores), but many other navies use the terms tanker and replenishment tanker. Tankers first appeared in the late 19th century as iron and steel hulls and pumping systems were developed. As of 2005, there were just over 4,000 tankers and supertankers or greater operating worldwide. Description Tankers can range in capacity from several hundred tons, which includes vessels for servicing small harbours and coastal settlements, to several hundred thousand tons, for long-range haulage. Besides ocean- or seagoing tankers there are also specialized inland-waterway tankers which operate on rivers and canals with an average cargo capacity of up to a few thousand tons. A wide range of products are carried by tankers, including: Hydrocarbon products such as oil, liquefied petroleum gas (LPG), and liquefied natural gas (LNG) Chemicals, such as ammonia, chlorine, and styrene monomer Fresh water Wine Molasses Citrus juice Tankers primarily date from the later years of the 19th century. Before this, technology had simply not supported the idea of carrying bulk liquids. The market was also not geared towards transporting or selling cargo in bulk, therefore most ships carried a wide range of different products in different holds and traded outside fixed routes. Liquids were usually loaded in casks—hence the term "tonnage", which refers to the volume of the holds in terms of how many tuns or casks of wine could be carried. Even potable water, vital for the survival of the crew, was stowed in casks.
Carrying bulk liquids in earlier ships posed several problems: The holds: on timber ships the holds were not sufficiently water-, oil- or air-tight to prevent a liquid cargo from spoiling or leaking. The development of iron and steel hulls solved this problem. Loading and discharging: Bulk liquids must be pumped - the development of efficient pumps and piping systems was vital to the development of the tanker. Steam engines were developed as prime movers for early pumping systems. Dedicated cargo handling facilities were now required ashore too - as was a market for receiving a product in that quantity. Casks could be unloaded using ordinary cranes, and the awkward nature of the casks meant that the volume of liquid was always relatively small - therefore keeping the market more stable. Free surface effect: a large body of liquid carried aboard a ship will affect the ship's stability, particularly when the liquid is flowing around the hold or tank in response to the ship's movements. The effect was negligible in casks, but could cause capsizing if the tank extended the width of the ship; a problem solved by extensive subdivision of the tanks. Tankers were first used by the oil industry to transfer refined fuel in bulk from refineries to customers. This would then be stored in large tanks ashore, and subdivided for delivery to individual locations. The use of tankers caught on because other liquids were also cheaper to transport in bulk, store in dedicated terminals, then subdivide. Even the Guinness brewery used tankers to transport the stout across the Irish Sea. Different products require different handling and transport, with specialised variants such as "chemical tankers", "oil tankers", and "LNG carriers" developed to handle dangerous chemicals, oil and oil-derived products, and liquefied natural gas respectively.
These broad variants may be further differentiated by their ability to carry only a single product or to simultaneously transport mixed cargoes such as several different chemicals or refined petroleum products. Among oil tankers, supertankers are designed for transporting oil around the Horn of Africa from the Middle East. The supertanker Seawise Giant, scrapped in 2010, was in length and wide. Supertankers are one of the three preferred methods for transporting large quantities of oil, along with pipeline transport and rail. Regulations Tighter regulation means that tankers now cause fewer environmental disasters resulting from oil spills than in the 1970s. Amoco Cadiz, Erika, Exxon Valdez and Prestige were examples of accidents. Oil spills from tankers amounted to around 1,000 tonnes in 2020 from three incidents (an all-time low), down from 636,000 tonnes from 92 incidents in 1979 - a fall of 99.8%. For ships internationally, the regulations of the International Maritime Organization apply, specifically Annex I, prevention of pollution by oil under MARPOL 73/78 and rules for construction under the SOLAS Convention. These include requirements for inert gas systems designed to supply inert gas to cargo tanks to prevent an explosive atmosphere from being present. For tankers that either operate in United States waters or are owned by US-based companies, rules govern their design, construction and operation, specifically under the US Code of Federal Regulations Title 33 - Navigation and Navigable Waters, Title 40 - Protection of Environment, Title 46 - Shipping, Title 47 - Telecommunication and Title 49 - Transportation. Design and operational considerations Many modern tankers are designed for a specific cargo and a specific route. Draft is typically limited by the depth of water in loading and unloading harbors; and may be limited by the depth of straits or canals along the preferred shipping route.
Cargoes with high vapor pressure at ambient temperatures may require pressurized tanks or vapor recovery systems. Tank heaters may be required to maintain heavy crude oil, residual fuel, asphalt, wax, or molasses in a fluid state for offloading. Designs will vary by the type of tanker. For oil tankers, systems will need to be in place to manage operational hazards, including a means of producing and introducing inert gas into cargo tanks to prevent explosion. Cargo tanks are typically fitted with the ability to monitor levels of liquid within a tank, as well as an overfill or high level alarm function. For gas carriers, including LNG carriers, specially designed cargo containment systems are required. These should include means to monitor temperature, volume and pressure, as well as pressure relief valves and associated safety systems in accordance with the IGC Code. Tank lids and joints between pipes may need to be bonded to prevent static electricity from causing an explosion. The International Safety Guide for Oil Tankers and Terminals is the industry code of practice that applies to oil tankers globally. Tanker capacity Tankers used for liquid fuels are classified according to their capacity. In 1954, Shell Oil developed the average freight rate assessment (AFRA) system, which classifies tankers of different sizes. To make it an independent instrument, Shell consulted the London Tanker Brokers' Panel (LTBP). At first, they divided the groups into General Purpose for tankers under ; Medium Range for ships between 25,000 and and Large Range (later Long Range) for the then-enormous ships that were larger than .
The ships became larger during the 1970s, and the list was extended, where the tons are metric tonnes:
Under : Extra small tanker
10,000–: Small tanker
25,000–34,999 DWT: Intermediate tanker
35,000–44,999 DWT: Medium Range 1 (MR1)
45,000–: Medium Range 2 (MR2)
55,000–: Long Range 1 (LR1)
80,000–: Long Range 2 (LR2)
160,000–: Very Large Crude Carrier (VLCC)
320,000–: Ultra Large Crude Carrier (ULCC)
550,000–: Hyper Large Crude Carrier (HLCC)
900,000–: Mega Crude Carrier (MCC)
Over : Giga Crude Carrier (GCC)
Very Large Crude Carrier size range With nearly 380 vessels in the size range to , this is by far the most popular size bracket among the larger VLCCs. Only seven vessels are larger than this, and approximately 90 fall between and . Fleets of the world Flag states As of 2005, the United States Maritime Administration's statistics count 4,024 tankers of or greater worldwide. 2,582 of these are double-hulled. Panama is the leading flag state of tankers, with 592 registered ships. Five other flag states have more than two hundred registered tankers: Liberia (520), The Marshall Islands (323), Greece (233), Singapore (274) and The Bahamas (215). These flag states are also the top six by fleet size in terms of deadweight tonnage. Largest fleets Greece, Japan, and the United States are the top three owners of tankers (including those owned but registered to other nations), with 733, 394, and 311 vessels respectively. These three nations account for 1,438 vessels, or over 36% of the world's fleet. Builders Asian companies dominate the construction of tankers. Of the world's 4,024 tankers, 2,822 (over 70%) were built in South Korea, Japan and China. Petroleum Tables Petroleum Tables, a book by William Davies, an early tanker captain, was published in 1903, although Davies had printed earlier versions himself.
Including his calculations on the expansion and contraction of bulk oil, and other information for tanker officers, it went into multiple editions, and in 1915 The Petroleum World commented that it was "the standard book for computations and conversions." For modern tables, the standard guide for petroleum measurement on oil tankers is that from ASTM International, specifically ASTM D1250-08.
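The extended AFRA size list above can be expressed as a simple lookup table. The sketch below encodes the classes up to ULCC (the boundaries are inferred from that list where the upper bound of one band is the lower bound of the next; `afra_class` is a hypothetical helper name, not an industry tool):

```python
# Illustrative mapping from deadweight tonnage (metric tonnes) to the
# AFRA-style size classes listed above. Each entry is the exclusive
# upper bound of a band; larger classes (HLCC, MCC, GCC) could be
# appended in the same pattern.
AFRA_CLASSES = [
    (10_000, "Extra small tanker"),
    (25_000, "Small tanker"),
    (35_000, "Intermediate tanker"),
    (45_000, "Medium Range 1 (MR1)"),
    (55_000, "Medium Range 2 (MR2)"),
    (80_000, "Long Range 1 (LR1)"),
    (160_000, "Long Range 2 (LR2)"),
    (320_000, "Very Large Crude Carrier (VLCC)"),
    (550_000, "Ultra Large Crude Carrier (ULCC)"),
]

def afra_class(dwt):
    # Return the first band whose upper bound exceeds the tonnage.
    for upper_bound, name in AFRA_CLASSES:
        if dwt < upper_bound:
            return name
    return "Larger than ULCC"

# Example: a 300,000 DWT crude carrier falls in the VLCC band.
```

This table-driven form makes the band boundaries easy to audit against the published list, which is why classification schemes like AFRA are usually encoded as data rather than as a chain of conditionals.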
Technology
Maritime transport
null
2506590
https://en.wikipedia.org/wiki/Flail%20%28tool%29
Flail (tool)
A flail is an agricultural tool used for threshing, the process of separating grains from their husks. It is usually made from two or more large sticks attached by a short chain; one stick is held and swung, causing the other (the swipple) to strike a pile of grain, loosening the husks. The precise dimensions and shape of flails were determined by generations of farmers to suit the particular grain they were harvesting. For example, flails used by farmers in Quebec to process wheat were generally made from two pieces of wood, the handle being about long by in diameter, and the second stick being about long by about in diameter, with a slight taper towards the end. Flails for other grains, such as rice or spelt, would have had different dimensions. Flails have generally fallen into disuse in many nations because of the availability of technologies such as combine harvesters that require much less manual labour. But in many places, such as Minnesota, wild rice can only be harvested legally using manual means, specifically through the use of a canoe and a flail that is made of smooth, round wood no more than 30 inches long. Non-agricultural uses As with most agricultural tools, flails were often used as weapons by farmers lacking better weapons. The flail is proposed as one of the origins of the two-piece baton known in the Okinawan kobudō weapon system as the nunchaku. One of the first recorded uses of a flail as a weapon was at the siege of Damietta in 1218 during the Fifth Crusade, as depicted in the chronicle by Matthew Paris, though there are several references that predate this; tradition has it that the man was the Frisian Hayo of Wolvega, who bashed the standard bearer of the Muslim defenders with it and captured the flag. Flails were also used as weapons by farmers under the leadership of Jan Žižka during the 15th-century Hussite Wars in Bohemia.
In ancient Egypt what has popularly been interpreted as a flail was a symbol associated with the pharaoh, said to symbolize the monarch's ability to provide for the people, though it is currently still not known exactly what the "flail" implement seen in artwork actually was.
Technology
Agricultural tools
null
2508874
https://en.wikipedia.org/wiki/Longbow
Longbow
A longbow is a type of tall bow that makes a fairly long draw possible. Longbows for hunting and warfare have been made from many different woods in many cultures; in Europe they date from the Paleolithic era and, since the Bronze Age, were made mainly from yew, or from wych elm if yew was unavailable. The historical longbow was a self bow made of a single piece of wood, but modern longbows may also be made from modern materials or by gluing different timbers together. History Europe Prehistory A longbow was found in 1991 in the Ötztal Alps with a natural mummy known as Ötzi. His bow was made from yew and was long; the body has been dated to around 3300 BC. A slightly shorter bow comes from the Scottish parish of Tweedsmuir in a peat bog known as Rotten Bottom. The bow, made from yew, has been given a calibrated radiocarbon date of 4040 BC to 3640 BC. Another bow made from yew, found within some peat in Somerset, England has been dated to 2700–2600 BC. Forty longbows, which date from the 4th century AD, have been discovered in a peat bog at Nydam in Denmark. Middle Ages In the Middle Ages the English and Welsh were famous for their very powerful longbows, used en masse to great effect against the French in the Hundred Years' War, with notable success at the battles of Crécy (1346), Poitiers (1356), and Agincourt (1415). During the reign of Edward III of England, laws were passed allowing fletchers and bowyers to be impressed into the army and enjoining them to practice archery. The dominance of the longbow on the battlefield continued until the French began to use cannon to break the formations of English archers at the Battle of Formigny (1450) and the Battle of Castillon (1453). Their use continued in the Wars of the Roses. They survived as a weapon of war in England well beyond the introduction of effective firearms. 
The Battle of Flodden (1513) was "a landmark in the history of archery, as the last battle on English soil to be fought with the longbow as the principal weapon..." Sixteenth and seventeenth centuries In 1588, the militia was called out in anticipation of an invasion by the Spanish Armada and it included many archers in its ranks; the Kent militia, for instance, had 1,662 archers out of 12,654 men mustered. The Battle of Tippermuir (1644), in Scotland, may have been the last battle in the British Isles to involve the longbow in significant numbers. It has also been claimed that longbows may have been used as late as 1654 at the Battle of Tullich in northeast Scotland. Early literature The earliest known book on European longbow archery is the anonymous L'Art D'Archerie, produced in France in the late 15th or early 16th century. The first book in English about longbow archery was Toxophilus by Roger Ascham, first published in London in 1545 and dedicated to King Henry VIII. Modern recreational and hunting use Although firearms supplanted bows in warfare, wooden or fibreglass laminated longbows continue to be used by traditional archers and some tribal societies for recreation and hunting. A longbow has practical advantages compared with a modern recurve or compound bow; it is usually lighter, quicker to prepare for shooting, and shoots more quietly. However, other things being equal, the modern bow will shoot a faster arrow more accurately than the longbow. Organisations that run archery competitions have set out formal definitions for various classes of bow; many definitions of the longbow would exclude some medieval examples, materials, and techniques of use. Some archery clubs in the United States classify longbows simply as bows with strings that do not come in contact with their limbs. According to the British Longbow Society, the English longbow is made so that its thickness is at least five-eighths (62.5%) of its width, as in Victorian longbows, and is widest at the grip.
A similar, more inclusive, definition was created by the International Longbow Archers Association (ILAA) which defined the bow as fitting within a rectangular template of the proportions 1:0.625. Design and construction Because the longbow can be made from a single piece of wood, it can be crafted relatively easily and quickly. Amateur bowyers today can make a longbow in about ten to twenty hours. One of the simpler longbow designs is known as the self bow, by definition made from a single piece of wood. Traditional English longbows are self bows made from yew wood. The bowstave is cut from the radius of the tree so that sapwood (on the outside of the tree) becomes the back and forms about one third of the total thickness; the remaining two-thirds or so is heartwood (50/50 is about the maximum sapwood/heartwood ratio generally used). Yew sapwood is good only in tension, while the heartwood is good in compression. However, compromises must be made when making a yew longbow, as it is difficult to find perfect unblemished yew. The demand for yew bowstaves was such that by the late 16th century mature yew trees were almost extinct in northern Europe. In other desirable woods such as Osage orange and mulberry the sapwood is almost useless and is normally removed entirely. Longbows, because of their narrow limbs and rounded cross-section (which does not spread out stress within the wood as evenly as a flatbow’s rectangular cross section), need to be less powerful, longer or of more elastic wood than an equivalent flatbow. In Europe the last approach was used, with yew being the wood of choice, because of its high compressive strength, light weight, and elasticity. 
Yew is the best widespread European timber that will make good self longbows (other woods such as elm can make longbows but require heat-treating of the belly and a wider belly/narrower back, while still falling into the definition of a longbow) and has been the main wood used in European bows since Neolithic times. More common and cheaper hard woods, including elm, oak, hickory, ash, hazel and maple, are good for flatbows. A narrow longbow with high draw-weight can be made from these woods, but it is likely to take a permanent bend (known as "set" or "following the string") and would probably be outshot by an equivalent made of yew. Wooden laminated longbows can be made by gluing together two or more different pieces of wood. Usually this is done to take advantage of the inherent properties of different woods: some woods can better withstand compression while others are better at withstanding tension. Examples include hickory and lemonwood, or bamboo and yew longbows: hickory or bamboo is used on the back of the bow (the part facing away from the archer when shooting) and so is in tension, while the belly (the part facing the archer when shooting) is made of lemonwood or yew and undergoes compression (see bending for a further explanation of stresses in a bending beam). Traditionally made Japanese yumi are also laminated longbows, made from strips of wood: the core of the bow is bamboo, the back and belly are bamboo or hardwood, and hardwood strips are laminated to the bow's sides to prevent twisting. Any wooden bow must have gentle treatment and be protected from excessive damp or dryness. Wooden bows may shoot as well as fiberglass bows, but they are more easily dented or broken by abuse. Bows made of modern materials can be left strung for longer than wood bows, which may take a large amount of set if not unstrung immediately after use.
Legacy The longbow and its historical significance, arising from its adoption by the Welsh fighting alongside the English during the Hundred Years' War, have created a lasting legacy for the longbow, which has given its name to modern military equipment, including: The AGM-114L Longbow Hellfire, an air-to-ground missile; and The Dakota Longbow T-76, a sniper rifle.
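The proportion rules in the definitions above (limb thickness at least five-eighths, i.e. 62.5%, of limb width, matching the ILAA 1:0.625 template) reduce to a single ratio test. A minimal Python sketch; the function name and the sample dimensions are illustrative, not taken from either organisation:

```python
def is_longbow_profile(thickness: float, width: float) -> bool:
    """Longbow proportion test: limb thickness must be at least
    5/8 (62.5%) of limb width.  Any units, as long as they match."""
    if width <= 0:
        raise ValueError("width must be positive")
    return thickness >= 0.625 * width

# A limb 25 mm wide and 17 mm thick qualifies (17/25 = 0.68):
print(is_longbow_profile(17.0, 25.0))   # True
# A wide, thin flatbow-style limb does not (12/40 = 0.30):
print(is_longbow_profile(12.0, 40.0))   # False
```

Note that this captures only the cross-section proportion; the formal definitions also cover grip placement, materials, and limb shape.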
Technology
Archery
null
743997
https://en.wikipedia.org/wiki/Cheek
Cheek
The cheeks () constitute the area of the face below the eyes and between the nose and the left or right ear. Buccal means relating to the cheek. In humans, the region is innervated by the buccal nerve. The area between the inside of the cheek and the teeth and gums is called the vestibule or buccal pouch or buccal cavity and forms part of the mouth. In other animals, the cheeks may also be referred to as "jowls". Structure Cheeks are fleshy in humans, the skin being suspended by the chin and the jaws, and forming the lateral wall of the human mouth, visibly touching the cheekbone below the eye. The inside of the cheek is lined with a mucous membrane (buccal mucosa, part of the oral mucosa). During mastication (chewing), the cheeks and tongue between them serve to keep the food between the teeth. Clinical significance The cheek is the most common location from which a DNA sample can be taken. (Some saliva is collected from inside the mouth, e.g. using a cotton-tipped rod called a swab. The procedure of collecting a sample in that way is typically called a "cheek swab".) Other animals The cheeks are covered externally by hairy skin, and internally by stratified squamous epithelium. This is mostly smooth, but may have caudally directed papillae (e.g., in ruminants). The mucosa is supplied with secretions from the buccal glands, which are arranged in superior and inferior groups. In carnivores, the superior buccal gland is large and discrete: the zygomatic gland. During mastication, the cheeks and tongue between them serve to keep the food between the teeth. Some animals such as squirrels and hamsters use the buccal pouch to carry food or other items. In some vertebrates, markings on the cheek area, particularly immediately beneath the eye, often serve as important distinguishing features between species or individuals.
Biology and health sciences
External anatomy and regions of the body
Biology
744143
https://en.wikipedia.org/wiki/Bank%20vault
Bank vault
A bank vault is a secure room used by banks to store and protect valuables, cash, and important documents. Modern bank vaults are typically made of reinforced concrete and steel, with complex locking mechanisms and security systems. Unlike safes, vaults are an integral part of the building within which they are built, using armored walls and a tightly fashioned door closed with a complex lock. Historically, strongrooms were built in the basements of banks where the ceilings were vaulted, hence the name. Modern bank vaults typically contain many safe deposit boxes, as well as places for teller cash drawers and other valuable assets of the bank or its customers. They are also common in other buildings where valuables are kept such as post offices, grand hotels, rare book libraries and certain government ministries. Vault technology developed in a type of arms race with bank robbers. As burglars came up with new ways to break into vaults, vault makers found new ways to foil them. Modern vaults may be armed with a wide array of alarms and anti-theft devices. Some 19th and early 20th century vaults were built so well that today they are difficult to destroy, even with specialized demolition equipment. These older vaults were typically made with steel-reinforced concrete. The walls were usually at least 1 ft (0.3 m) thick, and the door itself was typically 3.5 ft (1.1 m) thick. Total weight ran into the hundreds of tons. Today vaults are made with thinner, lighter materials that, while still secure, are easier to dismantle than their earlier counterparts. Design Bank vaults are custom-designed and are usually one of the first elements considered when planning a new bank building. The vault manufacturer works with the bank to determine specifications like size, shape, and security features.
Bank vaults are typically made with steel-reinforced concrete. This material was not substantially different from that used in construction work. It relies on its immense thickness for strength. An ordinary vault from the middle of the 20th century might have been 18 in (45.72 cm) thick and was quite heavy and difficult to remove or remodel around. Modern bank vaults are now typically made of modular concrete panels using a special proprietary blend of concrete and additives for extreme strength. The concrete has been engineered for maximum crush resistance. A panel of this material, though only 3 in (7.62 cm) thick, may be up to 10 times as strong as an 18 in-thick (45.72-cm) panel of regular-formula concrete. There are at least two public examples of vaults withstanding a nuclear blast. The most famous is the Teikoku Bank in Hiroshima, whose two Mosler Safe Company vaults survived the atomic blast with all contents intact. The bank manager wrote a congratulatory note to Mosler. A second is a vault at the Nevada National Security Site (formerly the Nevada Test Site) in which an above-ground Mosler vault was one of many structures specifically constructed to be exposed to an atomic blast in Operation Plumbbob - Project 30.4: Response of Protective Vaults to Blast Loading. Manufacturing process Panels The wall panels are molded first using a special reinforced concrete mix. In addition to the usual cement powder, stone, etc., additional materials such as metal shavings or abrasive materials may be added to resist drilling penetration of the slab. Unlike regular concrete used in construction, the concrete for bank vaults is so thick that it cannot be poured.
The consistency of concrete is measured by its "slump". Vault concrete has zero slump. It also sets very quickly, curing in only six to 12 hours, instead of the three to four days needed for most concrete. A network of reinforcing steel rods is manually placed into the damp mix. The molds are vibrated for several hours. The vibration settles the material and eliminates air pockets. The edges are smoothed with a trowel, and the concrete is allowed to harden. The panels are removed from the mold and placed on a truck for transport to the customer's construction site. Door The vault door is also molded of special concrete used to make the panels, but it can be made in several ways. The door mold differs from the panel molds because there is a hole for the lock and the door will be clad in stainless steel. Some manufacturers use the steel cladding as the mold and pour the concrete directly into it. Other manufacturers use a regular mold and screw the steel on after the panel is dry. Round vault doors were popular in the early 20th century and are iconic images for a bank's high security. They fell out of favor due to manufacturing complexities, maintenance issues (door sag due to weight) and cost, but a few examples are still available. A day gate is a second door inside the main vault door frame used for limited vault protection while the main door is open. It is often made of open metal mesh or glass and is intended to keep a casual visitor out rather than to provide true security. Lock A vault door, much like the smaller burglary safe door, is secured with numerous massive metal bolts (cylinders) extending from the door into the surrounding frame. Holding those bolts in place is some sort of lock. The lock is invariably mounted on the inside (behind) of the difficult-to-penetrate door and is usually very modest in size and strength, but very difficult to gain access to from the outside. 
There are many types of lock mechanisms in use: A combination lock similar in principle to that of a padlock or safe door is very common. This is usually a mechanical device but products incorporating both mechanical and electronic mechanisms are available, making certain safe cracking techniques very difficult. Some high-end vaults employ a two piece key to be used in conjunction with a combination lock. This key consists of a long stem as well as a short stamp which should be safeguarded separately and joined to open the vault door. A dual control (dual custody) combination lock has two dials controlling two locking mechanisms for the door. They are usually configured so that both locks must be dialed open at the same time for the door to be unlocked. No single person is given both combinations, requiring two people to cooperate to open the door. Some doors may be configured so that either dial will unlock the door, trading off increased convenience for lessened security. A time lock is a clock that prevents the vault's door from opening until a specified number of hours have passed. This is still the "theft proof" lock system that Sargent invented in the late nineteenth century. Such locks are manufactured by only a few companies worldwide. The locking system is supplied to the vault manufacturer preassembled. Many safe-cracking techniques also apply to the locking mechanism of the vault door. They may be complicated by the sheer thickness and strength of the door and panel. Installation The finished vault panels, door, and lock assembly are transported to the bank construction site. The vault manufacturer's workers then place the panels enclosed in steel at the designated spots and weld them together. The vault manufacturer may also supply an alarm system, which is installed at the same time. While older vaults employed various weapons against burglars, such as blasts of steam or tear gas, modern vaults instead use technological countermeasures. 
They can be wired with a listening device that picks up unusual sounds, or observed with a camera. An alarm is often present to alert local police if the door or lock is tampered with. US resistance standards Quality control for much of the world's vault industry is overseen by Underwriters Laboratories, Inc. (UL) in Northbrook, Illinois. UL rates vaults based on their resistance to mock break-in attempts. Key points of the UL-608 standard include: Testing uses common hand tools, power tools, and cutting torches A breach is defined as a 96 square inch hole or disabling of locking bolts Only active working time is counted (excludes setup, breaks, etc.) Does not test resistance to thermal lance or explosives Applies to the door and all vault sides Separate standards cover locks, ventilation, and alarms European resistance standards As with the US, Europe has agreed a series of test standards to assure a common view of penetrative resistance to forcible attack. The testing regime is covered under the auspices of Euronorm 1143-1:2012 (also known as BS EN 1143-1:2012), which can be purchased from approved European standards agencies. Key points include: Standard covers burglary resistance tests against free-standing safes and ATMs, as well as strongrooms and doors Tests are undertaken to arrive at a grade (0 to XIII) with two extra resistance qualifiers (one for the use of explosives, the other for core drills) Test attack tools fall into five categories with increasing penetrative capability, i.e. Categories A–D and S Penetration success is measured as partial (125 mm diameter hole) or full (350 mm diameter hole) Considers only the time actually spent working (excludes setup, rests, etc.) EN 1143-1 makes no claims as to the fire resistance of the vault EN 1300 covers high security locks, i.e. four lock classes (A, B, C and D) Applies to the door and all vault sides.
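The numeric breach thresholds in the two standards lend themselves to a small sketch. The threshold constants come from the summaries above; the helper names are illustrative and not part of either standard:

```python
import math

UL608_BREACH_AREA_SQIN = 96.0    # UL-608: a 96 sq in hole counts as a breach
EN1143_PARTIAL_DIA_MM = 125.0    # EN 1143-1: partial-penetration hole diameter
EN1143_FULL_DIA_MM = 350.0       # EN 1143-1: full-penetration hole diameter

def ul608_is_breach(hole_area_sqin: float) -> bool:
    """UL-608 counts a hole as a breach once it reaches 96 sq in
    (disabling the locking bolts also counts, not modeled here)."""
    return hole_area_sqin >= UL608_BREACH_AREA_SQIN

def en1143_penetration(hole_diameter_mm: float) -> str:
    """Classify a circular hole under the EN 1143-1 diameter thresholds."""
    if hole_diameter_mm >= EN1143_FULL_DIA_MM:
        return "full"
    if hole_diameter_mm >= EN1143_PARTIAL_DIA_MM:
        return "partial"
    return "none"

# A circular hole 12 in across has area pi * 6^2, about 113 sq in:
print(ul608_is_breach(math.pi * 6.0 ** 2))   # True
print(en1143_penetration(200.0))             # partial
```

Both standards count only active working time toward the rating, so a tool like this would only classify the end state of a test, not its duration.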
Future Bank vault technology changed rapidly in the 1980s and 1990s with the development of improved concrete material. Bank burglaries are also no longer the substantial problem they were in the late 19th century up through the 1930s, but vault makers continue to alter their products to counter new break-in methods. An issue in the 21st century is the thermal lance. Burning iron rods in pure oxygen ignited by an oxyacetylene torch, it can produce temperatures of . The thermal lance user bores a series of small holes that can eventually be linked to form a gap. Vault manufacturers work closely with the banking industry and law enforcement in order to keep up with such advances in burglary.
Technology
Containers
null
744639
https://en.wikipedia.org/wiki/Thalweg
Thalweg
In geography, hydrography, and fluvial geomorphology, a thalweg or talweg () is the line or curve of lowest elevation within a valley or watercourse. Its vertical position in maps is the nadir (greatest depth, sounding) in the stream profile. Under international law, a thalweg is instead taken to be the middle of the primary navigable channel of a waterway, which is the default legal presumption for the boundary between entities such as states. Thalwegs can have local proprietorial and administrative significance: their position, which formerly shifted somewhat because it relied on renewed soundings but is now more fixed under the international description, is part of centuries-old custom and practice in some jurisdictions. In some jurisdictions, and between some states, the median line between the banks is the preferred boundary presumption, and it may extend from estuaries. Because they are easy to map, boundaries drawn between fixed "turning points" are the solution for a few major rivers such as the St Lawrence River-Great Lakes system. Etymology The word thalweg is of 19th-century German origin. The German word (modern spelling ) is a compound noun that is built from the German elements (since Duden's orthography reform of 1901 written ) meaning valley (cognate with dale in English), and , meaning way. It means "valley way" and is used, with its modern spelling , in daily German to describe a path or road which follows the bottom of a valley, or in geography with the more technical meaning also adopted by English. Hydrology In hydrological and fluvial landforms, the thalweg is a line drawn to join the lowest points along the length of a stream bed or valley in its downward slope, defining its deepest channel. The thalweg thus marks the natural direction (the profile) of a watercourse. The term is sometimes used to refer to a subterranean stream that percolates under the surface and in the same general direction as the surface stream.
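The hydrological definition, joining the lowest point of each successive cross-section, is straightforward to sketch in code. A minimal illustration with made-up soundings, not a surveying tool:

```python
def thalweg(cross_sections):
    """Given channel cross-sections in downstream order, each a list of
    (position, elevation) soundings taken across the channel, return the
    thalweg as the lowest-elevation point of every section."""
    return [min(section, key=lambda pe: pe[1]) for section in cross_sections]

# Two surveyed cross-sections; the deepest point shifts between banks:
sections = [
    [(0, 12.0), (5, 9.5), (10, 10.2)],
    [(0, 11.8), (5, 10.0), (10, 9.1)],
]
print(thalweg(sections))   # [(5, 9.5), (10, 9.1)]
```

Joining the returned points in order traces the deepest channel downstream, which is exactly the line a stream profile's nadir follows.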
Bouldering of thalweg of non-canalised rivers Slowing stream-bed erosion by bouldering a thalweg helps stabilize natural rivers' course and depth. Placing boulders along the thalweg helps to preserve the channel's balance of sediment erosion and deposition. In concurrence with this, doing so along an instream to form artificial sills helps to slow the sediment erosion and deposition of watercourses, while keeping the amenity value (fishing, local wildlife, and recreation) and natural resources of the running water source intact. Placement of boulders along a thalweg and the creation of instream sills makes drying up rarer and less severe during late summer, and abates cases of severe sediment erosion and deposition in the spring and fall months when the flow rates are high, particularly if those rates have increased. Such partial infilling of a thalweg was prototyped in Meacham Creek in Umatilla County, Oregon. Thalweg principle The thalweg principle (also known as the thalweg doctrine or the rule of thalweg) is the legal principle that if the boundary between two political entities is stated to be a waterway without further description (e.g., a median line, right bank, eastern shore, low-tide line, etc.), the boundary follows the thalweg of that watercourse. A thalweg is the center of the principal navigable channel of the waterway (which is presumed to be the deepest part). If there are multiple navigable channels in a river, the one principally used for downstream travel (likely having the strongest current) is used. The definition has been used in specific descriptions as well. The Treaty of Versailles, for example, specifies that "In the case of boundaries which are defined by a [navigable] waterway" the boundary is to follow "the median line of the principal channel of navigation." The precise drawing of river boundaries has been important on countless occasions.
Notable examples include the Shatt al-Arab between Iraq and Iran, the Danube in central Europe (Croatia–Serbia border dispute), the Kasikili/Sedudu Island dispute between Namibia and Botswana (settled by the International Court of Justice in 1999), and the 2004 dispute settlement under the UN Law of the Sea concerning the offshore boundary between Guyana and Suriname, in which the thalweg of the Courantyne River played a role in the ruling. In the 20th-century dispute between the USSR and China (PRC) over Zhenbao Island, China held that the thalweg principle supported its position. The doctrine is also applied to sub-national boundaries, such as those between American states.
Physical sciences
Hydrology
Earth science
745387
https://en.wikipedia.org/wiki/Malathion
Malathion
Malathion is an organophosphate insecticide which acts as an acetylcholinesterase inhibitor. In the USSR, it was known as carbophos, in New Zealand and Australia as maldison and in South Africa as mercaptothion. Pesticide use Malathion is a pesticide that is widely used in agriculture, residential landscaping, public recreation areas, and in public health pest control programs such as mosquito eradication. In the US, it is the most commonly used organophosphate insecticide. A malathion mixture with corn syrup was used in the 1980s in Australia and California to combat the Mediterranean fruit fly. In Canada and the US starting in the early 2000s, malathion was sprayed in many cities to combat West Nile virus. Malathion was used over the last couple of decades on a regular basis during summer months to kill mosquitoes, but homeowners were allowed an exemption for their properties if they chose. In the United Kingdom, malathion was withdrawn from sale in 2002. Mechanism of action Malathion is an acetylcholinesterase inhibitor, one of a diverse family of such chemicals. Upon uptake into the target organism, it binds irreversibly to the serine residue in the active catalytic site of the cholinesterase enzyme. The resultant phosphoester group is strongly bound to the cholinesterase and irreversibly deactivates the enzyme, which leads to rapid build-up of acetylcholine at the synapse. Production method Malathion is produced by the addition of dimethyl dithiophosphoric acid to diethyl maleate or diethyl fumarate. The compound is chiral but is used as a racemate. Medical use Malathion in low doses (0.5% preparations) is used as a treatment for: Head lice and body lice. Malathion is approved by the US Food and Drug Administration for treatment of pediculosis. It is claimed to effectively kill both the eggs and the adult lice, but in fact has been shown in UK studies to be only 36% effective on head lice, and less so on their eggs.
This low efficiency was noted when malathion was applied to lice found on schoolchildren in the Bristol area in the UK, and is caused by the tested population of lice having developed resistance against malathion. Scabies Preparations include Derbac-M, Prioderm, Quellada-M and Ovide. Safety General Malathion is of low toxicity. In arthropods it is metabolized into malaoxon, which is 61 times more toxic, being a more potent inhibitor of acetylcholinesterase. According to the United States Environmental Protection Agency, no reliable information is available on adverse health effects of chronic exposure. In 1981, malathion was sprayed over an area to control an outbreak of Mediterranean fruit flies in California. In order to demonstrate the chemical's safety, B. T. Collins, director of the California Conservation Corps, publicly swallowed a mouthful of dilute malathion solution. Carcinogenicity Malathion is classified by the IARC as a probable carcinogen (group 2A). Malathion is classified by the US EPA as having "suggestive evidence of carcinogenicity". This classification was based on the occurrence of liver tumors at excessive doses in mice and female rats and the presence of rare oral and nasal tumors in rats that occurred following exposure to very large doses. Exposure to organophosphates is associated with non-Hodgkin's lymphoma. Malathion used as a fumigant was not associated with increased cancer risk. Between 1993 and 1997, as part of the Agricultural Health Study, no clear association between malathion exposure and cancer was reported. Amphibians Malathion is toxic to leopard frog tadpoles. Risks Malathion is of low toxicity; however, absorption or ingestion into the human body readily results in its metabolism to malaoxon, which is substantially more toxic.
In studies of the effects of long-term exposure to oral ingestion of malaoxon in rats, malaoxon has been shown to be 61 times more toxic than malathion, and malaoxon is 1,000 times more potent than malathion in terms of its acetylcholinesterase inhibition. Indoor spillage of malathion can thus be more poisonous than expected, as malathion breaks down in a confined space into the more toxic malaoxon. It is cleared from the body quickly, in three to five days. Resistance Because malathion is an acetylcholinesterase inhibitor, resistance to it is a type of AChEI resistance. Malathion resistance is thought to be due either to increased carboxylesterase (COE) concentrations or to altered acetylcholinesterases (AChEs): increased COE confers resistance because it metabolizes malathion into non-malaoxon products, while the altered AChEs are specifically those that are less sensitive to malathion and malaoxon.
Technology
Pest and disease control
null
745701
https://en.wikipedia.org/wiki/Free-radical%20addition
Free-radical addition
In organic chemistry, free-radical addition is an addition reaction which involves free radicals. These reactions can happen because free radicals have an unpaired electron in their valence shell, making them highly reactive. Radical additions are known for a variety of unsaturated substrates, both olefinic and aromatic, with or without heteroatoms. Free-radical reactions depend on one or more relatively weak bonds in a reagent. Under reaction conditions (typically heat or light), some weak bonds homolyse into radicals, which then induce further decomposition in other reagent molecules before recombination. Different mechanisms typically apply to reagents without such a weak bond. Mechanism and regiochemistry The basic steps in any free-radical process (the radical chain mechanism) divide into: Radical initiation: A radical is created from a non-radical precursor. Chain propagation: A radical reacts with a non-radical to produce a new radical species Chain termination: Two radicals react with each other to create a non-radical species In a free-radical addition, there are two chain propagation steps. In one, the adding radical attaches to a multiply-bonded precursor to give a radical with lesser bond order. In the other, the newly-formed radical product abstracts another substituent from the adding reagent to regenerate the adding radical. In general, the adding radical attacks the alkene at the most sterically accessible (typically, least substituted) carbon; the radical then stabilizes on the more substituted carbon. The result is typically anti-Markovnikov addition, a phenomenon Morris Kharasch called the "peroxide effect". Reaction is slower with alkynes than alkenes. In the paradigmatic example, hydrogen bromide homolyses to give monatomic bromine. These bromine atoms add to an alkene at the most accessible site, to give a bromoalkyl radical, with the radical on the more substituted carbon.
That radical then abstracts a hydrogen atom from another HBr molecule to regenerate the monatomic bromine and continue the reaction. Compounds that add radically Radical addition of hydrogen bromide is a valuable synthetic technique for anti-Markovnikov carbon substitution, but free-radical addition does not occur with the other hydrohalic acids. Radical formation from HF, HCl, or HI is extremely endothermic and chemically disfavored. Hydrogen bromide is a highly selective reagent, and does not produce detectable quantities of polymeric byproducts. The behavior of hydrogen bromide generalizes in two separate directions. Halogenated compounds with a relatively stable radical can dissociate from the halogen. Thus, for example, sulfonyl, sulfenyl, and other sulfur halides can add radically to give β-halo sulfones, sulfoxides, or sulfides, respectively. Separately, unsubstituted compounds with a relatively stable radical can dissociate from hydrogen. In general, these reactions risk polymerized byproducts (see below). For example, in the thiol-ene reaction, thiols, disulfides, and hydrogen sulfide add across a double bond. But if the unsaturated substrate polymerizes easily, they catalyze polymerization instead. In thermal silane additions, telomerization usually proceeds to about 6 units. In the case of silicon, germanium, or phosphorus, the energetics are unfavorable unless the heavy atom bears a pendant hydrogen. Other electronegative substituents on silicon appear to reduce the barrier. Although nitrogen oxides naturally radicalize, careful control of the radical species is difficult. Dinitrogen tetroxide adds to give a mixture: a vicinal dinitro compound, but also a nitro substituent adjacent to a nitrite ester. To aryl radicals Although aromatic resonance stabilizes aryl radicals, bonds between arenes and their substituents are (in)famously strong.
Radical reactions with arenes typically present retrosynthetically as instances of nucleophilic aromatic substitution, because generating the aryl radical requires a strong (radical) leaving group. One example is the Meerwein arylation.

Side reactions

A radical addition which leaves an unsaturated product can undergo radical cyclization between the two propagation steps. In general, radical additions can also start radical polymerization processes.

With stable inorganic radicals

In self-terminating oxidative radical cyclization, inorganic radicals oxidize alkynes to ketones through an intramolecular radical cyclization. This reaction is not catalytic and requires the oxidized radical source in stoichiometric amounts; in effect, the radical species is synthetically equivalent to monatomic oxygen. In the paradigmatic example, a nitrate radical (from photolysis of ceric ammonium nitrate) adds to an alkyne to generate a very reactive vinyl nitrate-ester radical. The vinyl radical abstracts an intramolecular hydrogen atom 5 atoms away before a 5-exo-trig ring closure. The resulting alkyl nitrate radical can then fragment to a ketone and the stable radical nitrogen dioxide. Sulfate (from ammonium persulfate) and hydroxyl radicals show similar reactivity.
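The chain mechanism described above lends itself to a simple quantitative illustration: the kinetic chain length, i.e. the number of propagation cycles sustained per initiation event. The sketch below is not from the article; the function name and all rate constants are hypothetical, chosen only to show the steady-state bookkeeping.

```python
# Kinetic chain length for a radical chain process: nu = R_p / R_i.
# All rate constants and concentrations are hypothetical values,
# used only to illustrate the steady-state arithmetic.

def kinetic_chain_length(k_p, radical_conc, substrate_conc, initiation_rate):
    """Propagation events per initiation event at steady state."""
    propagation_rate = k_p * radical_conc * substrate_conc
    return propagation_rate / initiation_rate

# Fast propagation with slow initiation gives long chains: one
# initiating radical converts many alkene molecules before termination.
nu = kinetic_chain_length(k_p=1e6, radical_conc=1e-8,
                          substrate_conc=0.1, initiation_rate=1e-6)
print(nu)  # 1e6 * 1e-8 * 0.1 / 1e-6 = 1000 cycles per initiation
```

A large chain length is what makes radical-chain additions efficient despite the low steady-state radical concentration.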
Tungsten carbide
Tungsten carbide (chemical formula: WC) is a chemical compound (specifically, a carbide) containing equal parts of tungsten and carbon atoms. In its most basic form, tungsten carbide is a fine gray powder, but it can be pressed and formed into shapes through sintering for use in industrial machinery, engineering facilities, molding blocks, cutting tools, chisels, abrasives, armor-piercing bullets and jewelry. Tungsten carbide is approximately three times as stiff as steel, with a Young's modulus of approximately 530–700 GPa, and is twice as dense as steel. It is comparable with corundum (α-Al2O3) in hardness, approaching that of diamond, and can be polished and finished only with abrasives of superior hardness such as cubic boron nitride and diamond powder, wheels and compounds. Tungsten carbide tools can be operated at cutting speeds much higher than high-speed steel (a special steel blend for cutting tools). Tungsten carbide powder was first synthesized by H. Moissan in 1893, and industrial production of the cemented form started 20 to 25 years later (between 1913 and 1918).

Naming

Colloquially, among workers in various industries (such as machining), tungsten carbide is often simply called carbide.

Synthesis

Powder

Tungsten carbide powder is prepared by reaction of tungsten metal (or powder) and carbon at 1,400–2,000 °C. Other methods include a lower-temperature fluid-bed process that reacts either tungsten metal (or powder) or blue tungsten oxide with a CO/CO2 gas mixture and hydrogen between 900 and 1,200 °C. WC can also be produced by heating WO3 with graphite, either directly at 900 °C or in hydrogen at 670 °C, followed by carburization in argon at 1,000 °C.
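The direct powder route above (W + C → WC) fixes the carbon-to-tungsten mass ratio by stoichiometry. A back-of-envelope check in Python using standard atomic masses; the helper function is illustrative, not a process recipe.

```python
# Mass of carbon needed to convert a given mass of tungsten to WC
# (W + C -> WC, 1:1 molar ratio; atomic masses in g/mol).
M_W, M_C = 183.84, 12.011

def carbon_for_tungsten(mass_w_kg):
    moles_w = mass_w_kg * 1000 / M_W   # mol of tungsten
    return moles_w * M_C / 1000        # kg of carbon (1:1 with W)

print(round(carbon_for_tungsten(1.0), 4))  # ~0.0653 kg C per kg W
```

In other words, only about 6.5% carbon by mass of the tungsten charge is needed, consistent with WC being roughly 94% tungsten by weight.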
Chemical vapor deposition methods that have been investigated include:
reacting tungsten hexachloride with hydrogen (as a reducing agent) and methane (as the source of carbon): WCl6 + H2 + CH4 → WC + 6 HCl
reacting tungsten hexafluoride with hydrogen (as reducing agent) and methanol (as source of carbon): WF6 + 2 H2 + CH3OH → WC + 6 HF + H2O

Cemented form

Solid tungsten carbide is prepared using techniques from powder metallurgy developed in the 1920s. Powdered tungsten carbide is mixed with another powdered metal, usually cobalt (alternatives include nickel, iron and paraffin wax), which acts as a binder. The mixture is pressed, then sintered at high temperature; the binder melts, wets, and partially dissolves the tungsten carbide grains, binding them together. The cobalt–tungsten composites specifically are known by a number of trade names, including Widia and Carboloy.

Chemical properties

There are two well-characterized compounds of tungsten and carbon: tungsten carbide, WC, and tungsten semicarbide, W2C. Both compounds may be present in coatings, and the proportions can depend on the coating method. Another metastable compound of tungsten and carbon can be created by heating the WC phase to high temperatures using plasma, then quenching in inert gas (plasma spheroidization). This process causes macrocrystalline WC particles to spheroidize and results in the non-stoichiometric high-temperature phase existing in a metastable form at room temperature. The fine microstructure of this phase provides high hardness (2800–3500 HV) combined with good toughness when compared with other tungsten carbide compounds. The metastable nature of this compound results in reduced high-temperature stability. At high temperatures WC decomposes to tungsten and carbon, and this can occur during high-temperature thermal spray, e.g., in high velocity oxygen fuel (HVOF) and high energy plasma (HEP) methods. Oxidation of WC starts at elevated temperatures.
It is resistant to acids and is attacked only by hydrofluoric acid/nitric acid (HF/HNO3) mixtures above room temperature. It reacts with fluorine gas at room temperature and with chlorine at elevated temperature, and is unreactive toward dry hydrogen up to its melting point. Finely powdered WC oxidizes readily in aqueous hydrogen peroxide solutions. At high temperatures and pressures it reacts with aqueous sodium carbonate, forming sodium tungstate, a procedure used for recovery of scrap cemented carbide owing to its selectivity.

Physical properties

Tungsten carbide has a very high melting point and boiling point, a thermal conductivity of 110 W/m·K, and a coefficient of thermal expansion of 5.5 μm/m·K. Tungsten carbide is extremely hard, ranking about 9 to 9.5 on the Mohs scale, with a Vickers number of around 2600. It has a Young's modulus of approximately 530–700 GPa, a bulk modulus of 379–381 GPa, and a shear modulus of 274 GPa. It has an ultimate tensile strength of 344 MPa, an ultimate compression strength of about 2.7 GPa and a Poisson's ratio of 0.31. The speed of a longitudinal wave (the speed of sound) through a thin rod of tungsten carbide is 6220 m/s. Tungsten carbide's low electrical resistivity of about 0.2 μΩ·m is comparable with that of some metals (e.g. vanadium, 0.2 μΩ·m). WC is readily wetted by both molten nickel and cobalt. Investigation of the phase diagram of the W–C–Co system shows that WC and Co form a pseudo-binary eutectic. The phase diagram also shows that so-called η-carbides (mixed tungsten–cobalt carbides) can be formed, and the brittleness of these phases makes control of the carbon content in WC–Co cemented carbides important. In the presence of a molten phase such as cobalt, abnormal grain growth is known to occur during the sintering of tungsten carbide, and this has significant effects on the performance of the product material.

Structure

There are two forms of WC: a hexagonal form, α-WC (hP2, space group P6̄m2, No.
187), and a cubic high-temperature form, β-WC, which has the rock-salt structure. The hexagonal form can be visualized as made up of a simple hexagonal lattice of metal atoms in layers lying directly over one another (i.e. not close-packed), with carbon atoms filling half the interstices, giving both tungsten and carbon a regular trigonal prismatic, 6-fold coordination. From the unit-cell dimensions the following bond lengths can be determined: the distance between the tungsten atoms in a hexagonally packed layer is 291 pm, the shortest distance between tungsten atoms in adjoining layers is 284 pm, and the tungsten–carbon bond length is 220 pm. The tungsten–carbon bond length is therefore comparable to the single bond in W(CH3)6 (218 pm), in which there is strongly distorted trigonal prismatic coordination of tungsten. Molecular WC has been investigated, and this gas-phase species has a bond length of 171 pm.

Applications

Cutting tools for machining

Sintered tungsten carbide–cobalt cutting tools are very abrasion resistant and can also withstand higher temperatures than standard high-speed steel (HSS) tools. Carbide cutting surfaces are often used for machining tough materials such as carbon steel or stainless steel, and in applications where steel tools would wear quickly, such as high-quantity and high-precision production. Because carbide tools maintain a sharp cutting edge better than steel tools, they generally produce a better finish on parts, and their temperature resistance allows faster machining. The material is usually called cemented carbide, solid carbide, hardmetal or tungsten-carbide cobalt. It is a metal matrix composite, where tungsten carbide particles are the aggregate and metallic cobalt serves as the matrix. It has been found that the wear and oxidation properties of cemented carbide can be improved by replacing cobalt with iron aluminide. Using iron also reduces cost, as cobalt is particularly expensive, but the mixing is best done with resonant acoustic mixing.
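As a quantitative aside, the longitudinal sound speed quoted in the physical-properties section (6220 m/s) can be cross-checked against the elastic data via the thin-rod relation v = sqrt(E/ρ). The density below is an assumed handbook figure for WC, not stated in the article, and the Young's modulus is taken from the middle of the quoted 530–700 GPa range.

```python
import math

# Thin-rod longitudinal wave speed: v = sqrt(E / rho).
E = 600e9      # Pa, mid-range of the quoted 530-700 GPa
rho = 15_600   # kg/m^3, assumed handbook density of WC

v = math.sqrt(E / rho)
print(round(v))  # roughly 6200 m/s, consistent with the quoted 6220 m/s
```

The agreement to within about 1% suggests the quoted figures are mutually consistent.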
Tungsten carbide cutting tools can be further enhanced with coatings such as titanium aluminium nitride or titanium chromium nitride to increase their thermal stability and prolong tool life.

Ammunition

Tungsten carbide, in its monolithic sintered form, or much more often in a cemented tungsten carbide–cobalt composite (see above), is often used in armor-piercing ammunition, especially where depleted uranium is not available or is politically unacceptable. Such projectiles were first used by German Luftwaffe tank-hunter squadrons in World War II. However, owing to the limited German reserves of tungsten, the material was reserved for making machine tools and small numbers of projectiles. It is an effective penetrator due to its combination of great hardness and very high density. Tungsten carbide ammunition is now generally of the sabot type. SLAP, or saboted light armour penetrator, where a plastic sabot discards at the barrel muzzle, is one of the primary types of saboted small arms ammunition. Non-discarding jackets, regardless of the jacket material, are not regarded as sabots but as bullets. Both designs are, however, common in designated light armor-piercing small arms ammunition. Discarding sabots such as those used with the M1A1 Abrams main gun are more commonplace in precision high-velocity gun ammunition.

Mining and foundation drilling

Tungsten carbide is used extensively in mining in top hammer rock drill bits, downhole hammers, roller-cutters, long wall plough chisels, long wall shearer picks, raiseboring reamers, and tunnel boring machines. In these applications it is also used for wear- and corrosion-resistant components in inlet control for well screens, sub-assemblies, seal rings and bushings common in oil and gas drilling. It is generally utilised as a button insert, mounted in a surrounding matrix of steel that forms the substance of the bit.
As the tungsten carbide button is worn away, the softer steel matrix containing it is also worn away, exposing yet more button insert.

Nuclear

Tungsten carbide is also an effective neutron reflector and as such was used during early investigations into nuclear chain reactions, particularly for weapons. A criticality accident occurred at Los Alamos National Laboratory on 21 August 1945 when Harry Daghlian accidentally dropped a tungsten carbide brick onto a plutonium sphere, known as the demon core, causing the subcritical mass to go supercritical with the reflected neutrons. He fell into a coma and died 25 days after the accident.

Sports usage

Trekking poles, used by many hikers for balance and to reduce pressure on leg joints, generally use carbide tips in order to gain traction when placed on hard surfaces (like rock); carbide tips last much longer than other types of tip. While ski pole tips are generally not made of carbide, since they do not need to be especially hard even to break through layers of ice, rollerski tips usually are. Roller skiing emulates cross-country skiing and is used by many skiers to train during warm weather months. Sharpened carbide-tipped spikes (known as studs) can be inserted into the drive tracks of snowmobiles. These studs enhance traction on icy surfaces. Longer v-shaped segments fit into grooved rods called wear rods under each snowmobile ski. The relatively sharp carbide edges enhance steering on harder icy surfaces. The carbide tips and segments reduce wear encountered when the snowmobile must cross roads and other abrasive surfaces. Car, motorcycle and bicycle tires with tungsten carbide studs provide better traction on ice. They are generally preferred to steel studs because of their superior resistance to wear. Tungsten carbide may be used in farriery, the shoeing of horses, to improve traction on slippery surfaces such as roads or ice.
Carbide-tipped hoof nails may be used to attach the shoes; in the United States, borium (chips of tungsten carbide in a matrix of softer metal such as bronze or mild steel) may be welded to small areas of the underside of the shoe before fitting.

Surgical instruments and medical

Tungsten carbide is also used for making surgical instruments meant for open surgery (scissors, forceps, hemostats, blade-handles, etc.) and laparoscopic surgery (graspers, scissors/cutters, needle holders, cautery, etc.). They are much costlier than their stainless-steel counterparts and require delicate handling, but give better performance.

Jewelry

Tungsten carbide, typically in the form of a cemented carbide (carbide particles brazed together by metal), has become a popular material in the bridal jewelry industry due to its extreme hardness and high resistance to scratching. Despite its high impact resistance, this extreme hardness also means that it can occasionally be shattered under certain circumstances. Some consider this useful, since an impact would shatter a tungsten ring, quickly removing it, where precious metals would bend flat and require cutting. Tungsten carbide is roughly 10 times harder than 18k gold. In addition to its design and high polish, part of its attraction to consumers is its technical nature. Special tools, such as locking pliers, may be required if such a ring must be removed quickly (e.g. due to a medical emergency following a hand injury accompanied by swelling).

Other

Tungsten carbide is widely used to make the rotating ball in the tips of ballpoint pens that disperses ink during writing. English guitarist Martin Simpson uses a custom-made tungsten carbide guitar slide. The hardness, weight, and density of the slide give it superior sustain and volume compared to standard glass, steel, ceramic, or brass slides.
Tungsten carbide has been investigated for its potential use as a catalyst, and it has been found to resemble platinum in its catalysis of the production of water from hydrogen and oxygen at room temperature, the reduction of tungsten trioxide by hydrogen in the presence of water, and the isomerisation of 2,2-dimethylpropane to 2-methylbutane. It has been proposed as a replacement for the iridium catalyst in hydrazine-powered satellite thrusters. A tungsten carbide coating has been utilized on brake discs in high-performance automotive applications to improve performance, increase service intervals and reduce brake dust.

Toxicity

The primary health risks associated with tungsten carbide relate to inhalation of dust, leading to silicosis-like pulmonary fibrosis. Cobalt-cemented tungsten carbide is also anticipated to be a human carcinogen by the American National Toxicology Program.
Bioreactor
A bioreactor is any manufactured device or system that supports a biologically active environment. In one case, a bioreactor is a vessel in which a chemical process is carried out which involves organisms or biochemically active substances derived from such organisms. This process can either be aerobic or anaerobic. These bioreactors are commonly cylindrical, ranging in size from litres to cubic metres, and are often made of stainless steel. It may also refer to a device or system designed to grow cells or tissues in the context of cell culture. These devices are being developed for use in tissue engineering or biochemical/bioprocess engineering. On the basis of mode of operation, a bioreactor may be classified as batch, fed batch or continuous (e.g. a continuous stirred-tank reactor model). An example of a continuous bioreactor is the chemostat. Organisms or biochemically active substances growing in bioreactors may be submerged in liquid medium or may be anchored to the surface of a solid medium. Submerged cultures may be suspended or immobilized. Suspension bioreactors may support a wider variety of organisms, since special attachment surfaces are not needed, and can operate at a much larger scale than immobilized cultures. However, in a continuously operated process the organisms will be removed from the reactor with the effluent. Immobilization is a general term describing a wide variety of methods for cell or particle attachment or entrapment. It can be applied to basically all types of biocatalysis including enzymes, cellular organelles, animal and plant cells and organs. Immobilization is useful for continuously operated processes, since the organisms will not be removed with the reactor effluent, but is limited in scale because the microbes are only present on the surfaces of the vessel. 
Large-scale immobilized cell bioreactors include:
moving media, also known as moving bed biofilm reactor (MBBR)
packed bed
fibrous bed
membrane

Design

Bioreactor design is a relatively complex engineering task, which is studied in the discipline of biochemical/bioprocess engineering. Under optimum conditions, the microorganisms or cells are able to perform their desired function with limited production of impurities. The environmental conditions inside the bioreactor, such as temperature, nutrient concentrations, pH, and dissolved gases (especially oxygen for aerobic fermentations), affect the growth and productivity of the organisms. The temperature of the fermentation medium is maintained by a cooling jacket, coils, or both. Particularly exothermic fermentations may require the use of external heat exchangers. Nutrients may be continuously added to the fermenter, as in a fed-batch system, or may be charged into the reactor at the beginning of fermentation. The pH of the medium is measured and adjusted with small amounts of acid or base, depending upon the fermentation. For aerobic (and some anaerobic) fermentations, reactant gases (especially oxygen) must be added to the fermentation. Since oxygen is relatively insoluble in water (the basis of nearly all fermentation media), air (or purified oxygen) must be added continuously. The action of the rising bubbles helps mix the fermentation medium and also "strips" out waste gases, such as carbon dioxide. In practice, bioreactors are often pressurized; this increases the solubility of oxygen in water. In an aerobic process, optimal oxygen transfer is sometimes the rate-limiting step. Oxygen is poorly soluble in water (even less so in warm fermentation broths) and is relatively scarce in air (20.95%). Oxygen transfer is usually helped by agitation, which is also needed to mix nutrients and to keep the fermentation homogeneous. Gas-dispersing agitators are used to break up air bubbles and circulate them throughout the vessel.
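Aeration performance of the kind described above is commonly summarized by the volumetric mass-transfer coefficient kLa, with oxygen transfer rate OTR = kLa·(C* − C), where C* is the saturation concentration and C the actual dissolved-oxygen level. A minimal sketch; all numbers below are hypothetical, for illustration only.

```python
# Oxygen transfer rate in an aerated bioreactor:
# OTR = kLa * (C_sat - C), in mg O2 per litre per hour.

def otr(kla_per_h, c_sat_mg_l, c_mg_l):
    """Volumetric oxygen transfer rate for given kLa and driving force."""
    return kla_per_h * (c_sat_mg_l - c_mg_l)

# kLa = 100 1/h; saturation ~7.5 mg/L in a warm broth; 2 mg/L dissolved:
print(otr(100, 7.5, 2.0))  # 550 mg O2 per litre per hour
```

The formula makes the design trade-offs explicit: raising agitation or sparging increases kLa, while pressurizing the vessel raises C* and hence the driving force.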
Fouling can harm the overall efficiency of the bioreactor, especially the heat exchangers. To avoid it, the bioreactor must be easily cleaned. Interior surfaces are typically made of stainless steel for easy cleaning and sanitation. Typically bioreactors are cleaned between batches, or are designed to reduce fouling as much as possible when operated continuously. Heat transfer is an important part of bioreactor design; small vessels can be cooled with a cooling jacket, but larger vessels may require coils or an external heat exchanger.

Types

Photobioreactor

A photobioreactor (PBR) is a bioreactor which incorporates some type of light source (which may be natural sunlight or artificial illumination). Virtually any translucent container could be called a PBR; however, the term is more commonly used for a closed system, as opposed to an open storage tank or pond. Photobioreactors are used to grow small phototrophic organisms such as cyanobacteria, algae, or moss plants. These organisms use light through photosynthesis as their energy source and do not require sugars or lipids as an energy source. Consequently, the risk of contamination with other organisms like bacteria or fungi is lower in photobioreactors than in bioreactors for heterotrophic organisms.

Sewage treatment

Conventional sewage treatment utilises bioreactors to undertake the main purification processes. In some of these systems, a chemically inert medium with very high surface area is provided as a substrate for the growth of biological film. Separation of excess biological film takes place in settling tanks or cyclones. In other systems, aerators supply oxygen to the sewage and biota to create activated sludge, in which the biological component is freely mixed in the liquor in "flocs". In these processes, the liquid's biochemical oxygen demand (BOD) is reduced sufficiently to render the contaminated water fit for reuse.
The biosolids can be collected for further processing, or dried and used as fertilizer. An extremely simple version of a sewage bioreactor is a septic tank, whereby the sewage is left in situ, with or without additional media to house bacteria. In this instance, the biosludge itself is the primary host for the bacteria.

Bioreactors for specialized tissues

Many cells and tissues, especially mammalian ones, must have a surface or other structural support in order to grow, and agitated environments are often destructive to these cell types and tissues. Higher organisms, being auxotrophic, also require highly specialized growth media. This poses a challenge when the goal is to culture larger quantities of cells for therapeutic production purposes, and a significantly different design is needed compared to industrial bioreactors used for growing protein expression systems such as yeast and bacteria. Many research groups have developed novel bioreactors for growing specialized tissues and cells on a structural scaffold, in an attempt to recreate organ-like tissue structures in vitro. These include tissue bioreactors that can grow heart tissue, skeletal muscle tissue, ligaments, cancer tissue models, and others. Currently, scaling production of these specialized bioreactors for industrial use remains challenging and is an active area of research. For more information on artificial tissue culture, see tissue engineering.

Modelling

Mathematical models act as an important tool in various bioreactor applications, including wastewater treatment. These models are useful for planning efficient process control strategies and predicting future plant performance. Moreover, such models are beneficial in education and research. Bioreactors are generally used in industries concerned with food, beverages and pharmaceuticals. The emergence of biochemical engineering is of recent origin.
Processing of biological materials using biological agents such as cells, enzymes or antibodies is a major pillar of biochemical engineering. Applications of biochemical engineering cover major fields of civilization such as agriculture, food and healthcare, resource recovery and fine chemicals. Until now, the industries associated with biotechnology have lagged behind other industries in implementing process control and optimization strategies. A main drawback in biotechnological process control is the problem of measuring key physical and biochemical parameters.

Operational stages in a bioprocess

A bioprocess is composed mainly of three stages (upstream processing, bioreaction, and downstream processing) to convert raw material to finished product. The raw material can be of biological or non-biological origin. It is first converted to a form more suitable for processing. This is done in an upstream processing step which involves chemical hydrolysis, preparation of liquid medium, separation of particulates, air purification and many other preparatory operations. After the upstream processing step, the resulting feed is transferred to one or more bioreaction stages. The biochemical reactors or bioreactors form the base of the bioreaction step. This step mainly consists of three operations, namely production of biomass, metabolite biosynthesis and biotransformation. Finally, the material produced in the bioreactor must be further processed in the downstream section to convert it into a more useful form. The downstream process mainly consists of physical separation operations, including solid–liquid separation, adsorption, liquid–liquid extraction, distillation, drying, etc.

Specifications

A typical bioreactor consists of the following parts:
Agitator – used for mixing the contents of the reactor, which keeps the cells in a homogeneous condition for better transport of nutrients and oxygen to the desired product(s).
Baffle – used to break vortex formation in the vessel, which is usually highly undesirable as it changes the center of gravity of the system and consumes additional power.
Sparger – in an aerobic cultivation process, supplies adequate oxygen to the growing cells.
Jacket – provides an annular area for circulation of constant-temperature water, which keeps the bioreactor at a constant temperature.
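As a closing quantitative aside on the continuous mode mentioned earlier (the chemostat): at dilution rate D = F/V, steady state requires the specific growth rate μ(S) to equal D, and with Monod kinetics this fixes the residual substrate concentration; if D exceeds μ_max the culture washes out. A minimal sketch with hypothetical parameter values, not taken from the article.

```python
# Steady-state substrate concentration in a chemostat (Monod kinetics):
# mu(S) = mu_max * S / (Ks + S); at steady state mu(S) = D, so
# S* = D * Ks / (mu_max - D).  Washout occurs when D >= mu_max.

def steady_state_substrate(D, mu_max, Ks):
    """Residual substrate S* (same units as Ks), or None on washout."""
    if D >= mu_max:
        return None  # washout: cells removed faster than they can grow
    return D * Ks / (mu_max - D)

print(steady_state_substrate(D=0.2, mu_max=0.4, Ks=0.5))  # 0.5
print(steady_state_substrate(D=0.5, mu_max=0.4, Ks=0.5))  # None (washout)
```

The same relation explains why chemostats are operated well below μ_max: as D approaches μ_max, S* diverges and productivity collapses.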
Safflower
Safflower (Carthamus tinctorius) is a highly branched, herbaceous, thistle-like annual plant in the family Asteraceae. It is one of the world's oldest crops, and today it is commercially cultivated for the vegetable oil extracted from its seeds. Plants are tall, with globular flower heads having yellow, orange, or red flowers. Each branch will usually have from one to five flower heads containing 15 to 20 seeds per head. Safflower is native to arid environments with seasonal rain. It grows a deep taproot which enables it to thrive in such environments.

Biology

Plant morphology

Safflower is a fast-growing, erect, winter/spring-growing annual herb that resembles a thistle. When day length and temperature increase, a branched central stem (also referred to as the terminal stem) emerges from a leaf rosette. The main shoot reaches heights of . The plant also develops a strong taproot, growing as deep as . Lateral branches first develop once the main stem is about high. These lateral branches can then branch again to produce secondary and tertiary branches. The chosen variety as well as growing conditions influence the extent of branching. The elongated and serrated leaves reach lengths of and widths of and run down the stem. The upper leaves that form the bracts are usually short, stiff and ovate, terminating in a spine. Buds are borne on the ends of branches, and each composite flower head (capitulum) contains 20–180 individual florets. Depending on variety, crop management and growing conditions, each plant can develop 3–50 or more flower heads of diameter. Flowering commences with the terminal flower head (central stem), followed sequentially by primary, secondary and sometimes tertiary branch flower heads. Individual florets usually flower for 3–4 days. Commercial varieties are largely self-pollinated. Flowers are commonly yellow, orange and red, but white and cream-coloured forms exist. The dicarpelled, epigynous ovary forms the ovule.
The safflower plant then produces achenes. Each flower head commonly contains 15–50 seeds; however, the number can exceed 100. The shell content of the seeds varies between 30 and 60%, and the oil content between 20 and 40%.

Plant development

Safflower usually emerges 1–3 weeks after sowing, and grows more slowly under low temperatures. Germination of safflower is epigeal. The first true leaves to emerge form a rosette. This stage occurs in winter, with short day length and cold temperatures, as safflower can tolerate frosts up to during the rosette stage. When temperature and day length start to increase, the central stem begins to elongate and branch, growing more rapidly. Early sowing allows more time for developing a large rosette and more extensive branching, which results in a higher yield. Flowering is mainly influenced by day length. The period from the end of flowering to maturity is usually 4 weeks. The total period from sowing to harvest maturity varies with variety, location, sowing time and growing conditions; for June or July sowings, it may be about 26–31 weeks. Both wild and cultivated forms have a diploid set of 2n = 24 chromosomes. Crossings with Carthamus palaestinus, Carthamus oxyacanthus and Carthamus persicus can produce fertile offspring.

History

Safflower is one of humanity's oldest crops. It was first cultivated in Mesopotamia, with archaeological traces possibly dating as early as 2500 BC. Chemical analysis of ancient Egyptian textiles dated to the Twelfth Dynasty (1991–1802 BC) identified dyes made from safflower, and garlands made from safflowers were found in the tomb of the pharaoh Tutankhamun. John Chadwick reports that the Greek name for safflower occurs many times in Linear B tablets, distinguished into two kinds: a white safflower (ka-na-ko re-u-ka), which was measured, and red (ka-na-ko e-ru-ta-ra), which was weighed.
"The explanation is that there are two parts of the plant which can be used; the pale seeds and the red florets." The early Spanish colonies along the Rio Grande in New Mexico used safflower as a substitute for saffron in traditional recipes. An heirloom variety originating in Corrales, New Mexico, called "Corrales Azafran", is still cultivated and used as a saffron substitute in New Mexican cuisine.

Cultivation

Climate

Safflower prefers high temperatures and grows best at . It tolerates , but there are also some varieties which grow under very low temperatures. Safflower is cultivated in different seasons: as a winter crop in south central India, as an early summer crop in California, and as a mid-summer crop in the Northern Great Plains of the United States. The minimum length of the growing season is 120 and 200 days for summer and winter cultivars, respectively. Plant performance is highly dependent on planting date in terms of temperature and day length. Winter-hardy varieties form only a rosette in late fall and elongate in spring. In early stages, safflower tolerates humidity, but after the bud stage the danger of a Botrytis blight infestation increases. Safflower is drought tolerant: the taproot makes moisture from deep soil layers available. Additionally, this tolerance can also be explained by a higher water-use efficiency compared to other oil crops such as linseed and mustard. Water requirements are highest shortly before and during maximum flowering. Despite this drought tolerance, all parts of the plant are sensitive to moisture in terms of diseases. In the case of excessive water supply, it is susceptible to root rot. Therefore, many varieties are not suitable for irrigated agriculture, especially on soils with danger of waterlogging. Safflower tolerates wind and hail better than cereals: it stays erect and can retain the seeds in the head.

Soil

Safflower prefers well-drained, neutral, fertile and deep soils.
It adapts well to a range of soil pH (5–8) and tolerates salinity. Safflower can be grown on many soil types, with water supply as the main factor determining suitability, depending on climate, irrigation, and the resulting water regimes of the different soil types. Cultivation on shallow soils, and especially on soils with danger of waterlogging, is therefore not suitable. The deep rooting promotes water and air movement and improves the soil quality for subsequent crops in a rotation. Nutrient requirements are comparable to those of wheat and barley, except that the nitrogen amendment should be increased by 20%. Therefore, soils with an adequate nitrogen supply are favorable.

Agricultural practice

Crop rotation and sowing

Safflower is frequently grown in crop rotation with small grains, fallow and annual legumes. Close rotation with crops susceptible to Sclerotinia sclerotiorum should be avoided (e.g. sunflower, canola, mustard plant and pea). A four-year rotation is recommended to reduce disease pressure. Seeds should be sown in spring as soon as the minimum soil temperature is exceeded, to take advantage of the full growing season. If wireworms were a problem in the field in previous seasons, a corresponding seed treatment is recommended. A planting depth between is optimal. Shallow seeding promotes uniform emergence, resulting in a better stand. Seeding rate recommendations are around of live seed. Lower seeding rates promote branching, a longer flowering period and later maturity, while higher rates promote thicker stands with a higher disease incidence. Sufficient moisture is necessary for germination. Usually, row spacings between are chosen, using drill settings similar to those recommended for barley.

Management

The total N recommendation is . This should include credits based on previous crops and soil-available N.
For the latter, deeper-positioned nutrients need to be taken into account, as safflower roots deeper than small grains and can therefore access nutrients unavailable to them. Safflower grown in soils low in phosphorus needs to be fertilized; up to of phosphate can be drill-applied safely. A weed control program is essential when growing safflower, as it is a poor competitor with weeds during the rosette stage. Cultivation on fields with a heavy infestation of perennial weeds is not recommended. Harvest Safflower is mature when most leaves have turned brown, approximately 30 days after flowering; seeds should fall from the head when rubbed. Rain and high humidity after maturity may cause the seeds to sprout on the head. Harvesting is usually done using a small-grain combine harvester. Seed moisture should not exceed 8% to allow for safe long-term storage. Drying can be done similarly to sunflower; temperatures must not exceed , to prevent damage to the seed and ensure quality. Pests Gram pod borer/capsule borer: Helicoverpa armigera Safflower caterpillar: Perigaea capensis Safflower aphid: Uroleucon carthami Capsule fly/safflower bud fly: Acanthiophilus helianthi Diseases Alternaria spp. represent one of the most prevalent diseases, causing losses as high as 50% in India. In a field trial in Switzerland, Botrytis cinerea was the most prevalent disease. Production In 2020, global production of safflower seeds was 653,030 tonnes, led by Kazakhstan with 35% of the world total (table). Other significant producers were Russia and Mexico, with 28% of world production combined. Uses Traditionally, the crop was grown for its seeds, and used for coloring and flavoring foods, in medicines, and for making red (carthamin) and yellow dyes, especially before cheaper aniline dyes became available. Safflower oil For the last fifty years or so, the plant has been cultivated mainly for the vegetable oil extracted from its seeds. Safflower seed oil is flavorless and colorless.
It is used mainly in cosmetics and as a cooking oil, in salad dressing, and for the production of margarine. Its INCI name is Carthamus tinctorius. There are two types of safflower that produce different kinds of oil: one high in monounsaturated fatty acid (oleic acid) and the other high in polyunsaturated fatty acid (linoleic acid). Currently the predominant edible oil market is for the former, which is lower in saturated fats than olive oil. The latter is used in painting in place of linseed oil, particularly with white paints, as it does not have the yellow tint that linseed oil possesses. In one review of small clinical trials, safflower oil consumption reduced blood low-density lipoprotein levels – a risk factor for cardiovascular diseases – more than butter or lard consumption did. Flowers for human consumption Safflower flowers are occasionally used in cooking as a cheaper substitute for saffron, sometimes referred to as "bastard saffron". The dried safflower petals are also used as a herbal tea variety. Dye from flowers Safflower petals contain one red and two yellow dyes. In coloring textiles, dried safflower flowers are used as a natural dye source for the orange-red pigment carthamin, known in the dye industry as Carthamus Red or Natural Red 26. Yellow dye from safflower is known as Carthamus yellow or Natural Yellow 5. One of the yellow pigments is fugitive and will wash away in cold water. The dye is suitable for cotton, which takes up the red dye, and silk, which takes up the yellow and red color, yielding orange. No mordant is required. In Japan, dyers have long utilised a technique of producing a bright red to orange-red dye (known as carthamin) from the dried florets of safflower (Carthamus tinctorius). Darker shades are achieved by repeating the dyeing process several times, letting the fabric dry before it is redyed.
Because the dye was expensive, safflower dye was sometimes diluted with other dyestuffs, such as turmeric and sappan. Biodegradable oil In Australia in 2005, CSIRO and the Grains Research and Development Corporation launched the Crop Biofactories initiative to produce 93% oleic oil for use as a biodegradable oil for lubricants, hydraulic fluids, and transformer oils, and as a feedstock for biopolymers and surfactants.
https://en.wikipedia.org/wiki/Spinosaurus
Spinosaurus
Spinosaurus (; ) is a genus of spinosaurid dinosaur that lived in what is now North Africa during the Cenomanian stage of the Late Cretaceous period, about 100 to 94 million years ago. The genus was first known from Egyptian remains discovered in 1912 and described by German palaeontologist Ernst Stromer in 1915. The original remains were destroyed in World War II, but additional material came to light in the early 21st century. It is unclear whether one or two species are represented in the fossils reported in the scientific literature. The type species S. aegyptiacus is known mainly from Egypt and Morocco. A potential second, dubious species, S. maroccanus, has been recovered from Morocco, but it is likely a junior synonym of S. aegyptiacus. Other possible junior synonyms include Sigilmassasaurus from the Kem Kem beds in Morocco and Oxalaia from the Alcântara Formation in Brazil, though other researchers propose that both genera are distinct taxa. Spinosaurus is the longest known terrestrial carnivore; other large carnivores comparable to Spinosaurus include theropods such as Tyrannosaurus, Giganotosaurus and the coeval Carcharodontosaurus. The most recent study suggests that previous body size estimates were too high, and that S. aegyptiacus reached in length and in body mass. The skull of Spinosaurus was long, low, and narrow, similar to that of a modern crocodilian, and bore straight conical teeth with no serrations. It would have had large, robust forelimbs bearing three-fingered hands, with an enlarged claw on the first digit. The distinctive neural spines of Spinosaurus, which were long extensions of the vertebrae (or backbones), grew to at least long and were likely to have had skin connecting them, forming a sail-like structure, although some authors have suggested that the spines were covered in fat and formed a hump. The hip bones of Spinosaurus were reduced, and the legs were very short in proportion to the body.
Its long and narrow tail was deepened by tall, thin neural spines and elongated chevrons, forming a flexible fin or paddle-like structure. Spinosaurus is known to have eaten fish and small to medium terrestrial prey as well. Evidence suggests that it was semiaquatic; how capable it was of swimming has been strongly contested. Spinosaurus's leg bones had osteosclerosis (high bone density), allowing for better buoyancy control. Multiple functions have been put forward for the dorsal sail, including thermoregulation and display; either to intimidate rivals or attract mates. It lived in a humid environment of tidal flats and mangrove forests alongside many other dinosaurs, as well as fish, crocodylomorphs, lizards, turtles, pterosaurs, and plesiosaurs. Discovery and naming Naming of species Two species of Spinosaurus have been named: Spinosaurus aegyptiacus (meaning "Egyptian spine lizard") and the disputed Spinosaurus maroccanus (meaning "Moroccan spine lizard"). The first described remains of Spinosaurus were found and described in the early 20th century. In 1912, Richard Markgraf discovered a partial skeleton of a giant theropod dinosaur in the Bahariya Formation of western Egypt. In 1915, German paleontologist Ernst Stromer published an article assigning the specimen to a new genus and species, Spinosaurus aegyptiacus. Fragmentary additional remains from Bahariya, including vertebrae and hindlimb bones, were designated by Stromer as "Spinosaurus B" in 1934. Stromer considered them different enough to belong to another species, and this has been borne out. With the advantage of more expeditions and material, it appears that they pertain either to Carcharodontosaurus or to Sigilmassasaurus. S. maroccanus was originally described by Dale Russell in 1996 as a new species based on the length of its neck vertebrae. Specifically, Russell claimed that the ratio of the length of the centrum (body of vertebra) to the height of the posterior articular facet was 1.1 in S. 
aegyptiacus and 1.5 in S. maroccanus. Later authors have been split on this topic. Some authors note that the length of the vertebrae can vary from individual to individual, that the holotype specimen was destroyed and thus cannot be compared directly with the S. maroccanus specimen, and that it is unknown which cervical vertebrae the S. maroccanus specimens represent. Therefore, though some have retained the species as valid without much comment, most researchers regard S. maroccanus as a nomen dubium (dubious name) or as a junior synonym of S. aegyptiacus. Some studies have referred the holotype and other referred specimens of S. maroccanus (NMC 50791 and MNHN SAM 124–128) to S. cf. aegyptiacus. The specimens previously designated as paratypes of S. maroccanus (NMC 41768 and NMC 50790) have been reidentified as indeterminate spinosaurid specimens not identifiable at the generic level. Specimens Six main partial specimens of Spinosaurus have been described. BSP 1912 VIII 19, described by Stromer in 1915 from the Bahariya Formation, was the holotype. The material consisted of the following items, most of which were incomplete: right and left dentaries and splenials from the lower jaw measuring long; a straight piece of the left maxilla that was described but not drawn; 20 teeth; 2 cervical vertebrae; 7 dorsal (trunk) vertebrae; 3 sacral vertebrae; 1 caudal vertebra; 4 thoracic ribs; and gastralia. Of the nine neural spines whose heights are given, the longest ("i," associated with a dorsal vertebra) was in length. Stromer claimed that the specimen was from the early Cenomanian, about 97 million years ago. It was destroyed in World War II, specifically "during the night of 24/25 April 1944 in a British bombing raid of Munich" that severely damaged the building housing the Paläontologisches Museum München (Bavarian State Collection of Paleontology). However, detailed drawings and descriptions of the specimen remain.
Stromer's son donated Stromer's archives to the Paläontologische Staatssammlung München in 1995, and Smith and colleagues analyzed two photographs of the Spinosaurus holotype specimen BSP 1912 VIII 19 discovered in the archives in 2000. On the basis of a photograph of the lower jaw and a photograph of the entire specimen as mounted, Smith concluded that Stromer's original 1915 drawings were slightly inaccurate. In 2003, Oliver Rauhut suggested that Stromer's Spinosaurus holotype was a chimera, composed of vertebrae and neural spines from a carcharodontosaurid similar to Acrocanthosaurus and a dentary from Baryonyx or Suchomimus. The analysis was rejected in at least one subsequent paper. NMC 50791, held by the Canadian Museum of Nature, is a mid-cervical vertebra which is long from the Kem Kem Beds of Morocco. It is the holotype of Spinosaurus maroccanus, as described by Russell in 1996. Other specimens referred to S. maroccanus in the same paper were two other mid-cervical vertebrae (NMC 41768 and NMC 50790), an anterior dentary fragment (NMC 50832), a mid-dentary fragment (NMC 50833), and an anterior dorsal neural arch (NMC 50813). Russell stated that "only general locality information could be provided" for the specimen, and therefore it could be dated only "possibly" to the Albian. MNHN SAM 124, housed at the Muséum National d'Histoire Naturelle, is a snout (consisting of partial premaxillae, partial maxillae, vomers, and a dentary fragment). Described by Taquet and Russell in 1998, the specimen is in width; no length was stated. The specimen was located in Algeria, and "is of Albian age." Taquet and Russell believed that the specimen, along with a premaxilla fragment (SAM 125), two cervical vertebrae (SAM 126–127), and a dorsal neural arch (SAM 128), belonged to S. maroccanus. Although it was originally ascribed to S. 
maroccanus, based on their examination of this cranial material, the 2016 study considered the difference between the two species not to be taxonomically significant and to be either ontogenetic or intraspecific, and thus tentatively assigned the specimen to S. aegyptiacus. The 2017 study considered MNHN SAM 124 to belong to the same taxon as MSNM V4047. BM231 (in the collection of the Office National des Mines, Tunis) was described by Buffetaut and Ouaja in 2002. It consists of a partial anterior dentary in length from an early Albian stratum of the Chenini Formation of Tunisia. The dentary fragment, which included four alveoli and two partial teeth, was "extremely similar" to existing material of S. aegyptiacus. UCPC-2 in the University of Chicago Paleontological Collection consists mainly of two narrow connected nasals with a fluted (ridged) crest from the region between the eyes. The specimen, which is long, was located in an early Cenomanian part of the Moroccan Kem Kem Beds in 1996 and described in the scientific literature in 2005 by Cristiano Dal Sasso of the Civic Natural History Museum in Milan and colleagues. MSNM V4047 (in the Museo di Storia Naturale di Milano), described by Dal Sasso and colleagues in 2005 as Spinosaurus cf. S. aegyptiacus, consists of a snout (premaxillae, partial maxillae, and partial nasals) long from the Kem Kem Beds. An isolated fish vertebra, tentatively referred to Onchopristis, has been associated with the tooth alveolus of this specimen. Similarly, the dentary fragment of Spinosaurus aegyptiacus, MPDM 31, is associated with the rostral tooth of Onchopristis. Like UCPC-2, it is thought to have come from the early Cenomanian. Arden and colleagues in 2018 tentatively assigned this specimen to Sigilmassasaurus brevicollis given its size.
However, this assignment was later rejected by other researchers, who considered the uniqueness of this specimen to be based on misinterpretations and poor preservation of another specimen, NHMUK R16665, another snout stored in the Natural History Museum, London. FSAC-KK 11888 is a partial subadult skeleton recovered from the Kem Kem beds of North Africa. It was described by Ibrahim and colleagues in 2014 and designated as the neotype specimen, though Evers and colleagues rejected the neotype designation for FSAC-KK-11888 in 2015. It includes cervical vertebrae, dorsal vertebrae, neural spines, a complete sacrum, femora, tibiae, pedal phalanges, a caudal vertebra, several dorsal ribs, and fragments of the skull. The body proportions of the specimen have been debated, as the hind limbs are disproportionately shorter in the specimen than in previous reconstructions. However, it has been demonstrated by multiple paleontologists that the specimen is not a chimera, and is indeed a specimen of Spinosaurus that suggests that the animal had much smaller hind limbs than previously thought. Other known specimens consist mainly of very fragmentary remains and scattered teeth. These include: A 1986 paper described prismatic structures in tooth enamel from two Spinosaurus teeth from Tunisia. Buffetaut (1989, 1992) referred three specimens from the Institut und Museum für Geologie und Paläontologie of the University of Göttingen in Germany to Spinosaurus: a right maxilla fragment IMGP 969–1, a jaw fragment IMGP 969–2, and a tooth IMGP 969–3. These had been found in a Lower Cenomanian or Upper Albian deposit in southeastern Morocco in 1971. Kellner and Mader (1997) described two unserrated spinosaurid teeth from Morocco (LINHM 001 and 002) that were "highly similar" to the teeth of the S. aegyptiacus holotype.
Teeth from the Chenini Formation in Tunisia which are "narrow, somewhat rounded in cross-section, and lack the anterior and posterior serrated edges characteristic of theropods and basal archosaurs" were assigned to Spinosaurus in 2000. Material possibly belonging to Spinosaurus from the Turkana Grits of Kenya was noted in 2004. Teeth from the Echkar Formation of Niger were tentatively referred to Spinosaurus in 2007. A partial tooth long purchased at a fossil trade show, reportedly from the Kem Kem Bed of Morocco and attributed to Spinosaurus maroccanus, showed wide longitudinal striations and micro-structures (irregular ridges) among the striations in a 2010 paper. Isolated teeth attributed to S. aegyptiacus were reported from Algeria in 2015. A pedal ungual (MSNM V6894), a cervical vertebra (FSAC-KK-7280) and a dorsal vertebra (FSAC-KK-18118) from the Kem Kem beds are referred to a juvenile cf. Spinosaurus aegyptiacus. MHNM.KK374, MHNM.KK375, MHNM.KK376, MHNM.KK377, MHNM.KK378 and MSNM V6896 are six isolated quadrates (skull bones) of different sizes that were collected by locals and acquired commercially in the Kem Kem region of southeastern Morocco, provided by François Escuillié and deposited in the collections of the Muséum d’Histoire Naturelle of Marrakech. Only MHNM.KK376 is assigned to Sigilmassasaurus brevicollis, while the other five specimens are assigned to S. aegyptiacus, since the quadrates show two different morphologies, suggesting the existence of two spinosaurines in Morocco. However, a 2020 study on variation within Spinosaurus considers these differences in morphology to be indicative of variation in skull morphology within a single species, as is the case in Allosaurus. Possible synonyms Sigilmassasaurus Some scientists have considered the genus Sigilmassasaurus a junior synonym of Spinosaurus.
In Ibrahim and colleagues (2014), the specimens of Sigilmassasaurus were referred to Spinosaurus aegyptiacus together with "Spinosaurus B" as the neotype, and Spinosaurus maroccanus was considered a nomen dubium following the conclusions of the other papers. A 2015 re-description of Sigilmassasaurus disputed these conclusions and considered the genus valid, with S. maroccanus included as a synonym of Sigilmassasaurus instead. This conclusion was further supported in 2018 by Arden and colleagues, who consider Sigilmassasaurus to be a distinct genus, though a very close relative of Spinosaurus, the two unified in the tribe Spinosaurini, coined in the study. The 2020 study indicates synonymy between Spinosaurus and Sigilmassasaurus, and considered specimens previously referred to Sigilmassasaurus as those of Spinosaurus. For instance, the referral of an isolated quadrate (specimen MHNM.KK376) to Sigilmassasaurus brevicollis, based on its difference from other specimens assigned to Spinosaurus aegyptiacus, was rejected by the 2020 study, which noted that these differences in morphology are indicative of variation in skull morphology within a single species. The 2019 study assigned a juvenile specimen, FSAC-KK-18122, to Sigilmassasaurus brevicollis based on its identical proportions to BSPG 2011 I 115, which was assigned to the taxon in a 2015 study, but this referral was also rejected in a 2020 study based on the fact that the median tubercle and median suture are present in BSPG 2011 I 115 but absent in FSAC-KK-18122, so the presence or absence of such a feature should not be used to taxonomically separate isolated spinosaurid remains. Regardless of the synonymy of Sigilmassasaurus with Spinosaurus, some authors consider the possibility that there could be a second distinct spinosaurid in North Africa during the Cenomanian age. Additionally, in 2024, a complete posterior cervical vertebra (specimen NHMUK PV R 38358) was assigned to Sigilmassasaurus brevicollis.
Oxalaia Since the 2018 National Museum of Brazil fire engulfed the palace housing the museum, possibly destroying the specimens of Oxalaia, any classification should remain tentative. A 2020 paper by Smyth et al. assessing spinosaurine specimens from the Kem Kem Group suggested the Brazilian spinosaurine Oxalaia to be a potential junior synonym of Spinosaurus aegyptiacus. This was based on examination of the specimens assigned to Oxalaia, whose supposed autapomorphies were found to be insignificant and to fall within the hypodigm of Spinosaurus aegyptiacus. If supported by future studies, this would imply that Spinosaurus aegyptiacus had a wider distribution, and it supports the faunal exchange between South America and Africa during this time. However, subsequent studies have rejected the synonymy of Oxalaia with Spinosaurus aegyptiacus based on diagnostic features of the holotype (MN 6117-V) and the referred specimen (MN 6119-V). In 2021, Lacerda, Grillo and Romano noted that the anteromedial processes of the holotype maxillae (MN 6117-V) contact medially, a condition not observed in MSNM V4047, which has been referred to as a specimen of Spinosaurus, thus adding a new possible diagnostic feature of Oxalaia. They also suggested that the premaxilla of Oxalaia is wider in the posterior portion than that of MSNM V4047, and that the lateral morphology of its rostrum was distinguished from other spinosaurines based on their morphometric analysis. In 2023, Isasmendi and colleagues considered Oxalaia a valid taxon based on the examination of its referred maxilla (MN 6119-V), which suggests that the position of its external naris would have been more anteriorly located, a condition similar to that of Irritator and baryonychines, differing from African spinosaurines including Spinosaurus aegyptiacus. Description Size Since its discovery, Spinosaurus has been a contender for the largest theropod dinosaur. Both Friedrich von Huene in 1926 and Donald F.
Glut in 1982 listed it as among the most massive theropods in their surveys, at in length and upwards of in weight. In 1988, Gregory S. Paul also listed it as the longest theropod at , but gave a lower mass estimate of . In 2005, Dal Sasso and colleagues assumed that Spinosaurus and the related Suchomimus had the same body proportions in relation to their skull lengths, and thereby calculated that Spinosaurus was in length and in weight. The estimates were criticized because the skull length estimate was uncertain, and (assuming that body mass increases as the cube of body length) scaling Suchomimus, which was long and in mass, to the range of estimated lengths of Spinosaurus would produce an estimated body mass of . François Therrien and Donald Henderson, in a 2007 paper using scaling based on skull length, challenged previous estimates of the size of Spinosaurus, finding the length too great and the weight too small. Based on estimated skull lengths of , their estimates include a body length of and a body mass of . The lower estimates for Spinosaurus would imply that the animal was shorter and lighter than Carcharodontosaurus and Giganotosaurus. The Therrien and Henderson study has been criticized for the choice of theropods used for comparison (e.g., most of the theropods used to set the initial equations were tyrannosaurids and carnosaurs, which have a different build than spinosaurids), and for the assumption that the Spinosaurus skull could be as little as in length. In 2014, Ibrahim and his colleagues suggested that Spinosaurus aegyptiacus could reach over in length. In 2022, however, Paul Sereno and his colleagues suggested that Spinosaurus aegyptiacus reached a maximum body length of and a maximum body mass of by constructing a CT-based 3D skeletal model "with the axial column in neutral pose." 
They argued that the 2D graphical reconstruction of the aquatic hypothesis by Ibrahim and his colleagues in 2020 overestimated the presacral column length by 10%, ribcage depth by 25%, and forelimb length by 30% over dimensions based on CT-scanned fossils; these proportional overestimates shift the center of mass anteriorly when translated to a flesh model, and thus the estimate from Ibrahim and his colleagues cannot be considered a reliable body size estimate. Skull Its skull had a narrow snout filled with straight conical teeth that lacked serrations. There were six or seven teeth on each side of the very front of the upper jaw, in the premaxillae, and another twelve in both maxillae behind them. The second and third teeth on each side were noticeably larger than the rest of the teeth in the premaxilla, creating a space between them and the large teeth in the front of the maxilla; large teeth in the lower jaw faced this space. The very tip of the snout holding those few large front teeth was expanded, and a small crest was present in front of the eyes. Using the dimensions of three specimens known as MSNM V4047, UCPC-2, and BSP 1912 VIII 19, and assuming that the postorbital part of the skull of MSNM V4047 had a shape similar to the postorbital part of the skull of Irritator, Dal Sasso and colleagues (2005) estimated that the skull of Spinosaurus was long, but more recent estimates suggest a length of . The Dal Sasso and colleagues skull length estimate is questioned because skull shapes can vary across spinosaurid species and because MSNM V4047 may not belong to Spinosaurus itself, though recent studies have reconfirmed it as a specimen of Spinosaurus. Postcranial skeleton As a spinosaurid, Spinosaurus would have had a long, muscular neck, curved in a sigmoid, or S-shape. Its shoulders were prominent, and the forelimbs large and stocky, bearing three clawed digits on each hand. The first finger (or "thumb") would have been the largest. 
Spinosaurus had long phalanges (finger bones), and only somewhat recurved claws, suggesting that its hands were longer compared to those of other spinosaurids. Very tall neural spines growing on the back vertebrae of Spinosaurus formed the basis of what is usually called the animal's "sail". The lengths of the neural spines reached over 10 times the diameters of the centra (vertebral bodies) from which they extended. The neural spines were slightly longer front to back at the base than higher up, and were unlike the thin rods seen in the pelycosaur finbacks Edaphosaurus and Dimetrodon, contrasting also with the thicker spines in the iguanodontian Ouranosaurus. Spinosaurus sails were unusual, although other dinosaurs, namely Ouranosaurus, which lived a few million years earlier in the same general region as Spinosaurus, and the Early Cretaceous South American sauropod Amargasaurus, might have developed similar structural adaptations of their vertebrae. The sail may be an analog of the sail of the Permian synapsid Dimetrodon, which lived before the dinosaurs even appeared, produced by convergent evolution. The structure may also have been more hump-like than sail-like, as noted by Stromer in 1915 ("one might rather think of the existence of a large hump of fat [German: Fettbuckel], to which the [neural spines] gave internal support") and by Jack Bowman Bailey in 1997. In support of his "buffalo-back" hypothesis, Bailey argued that in Spinosaurus, Ouranosaurus, and other dinosaurs with long neural spines, the spines were relatively shorter and thicker than the spines of pelycosaurs (which are known to have sails); instead, the dinosaurs' neural spines were similar to the neural spines of extinct hump-backed mammals such as Megacerops and Bison latifrons. In 2014, Ibrahim and colleagues instead posited that the spines were covered tightly by skin, similar to a crested chameleon, given their compactness, sharp edges, and likely poor blood flow. 
Spinosaurus had a significantly smaller pelvis (hip bone) than that of other giant theropods, with the surface area of the ilium (main body of the pelvis) half that of most members of the clade. The hind limbs were short, at just over 25 percent of the total body length, with the tibia (calf bone) being longer than the femur (thigh bone). Unlike in other theropods, the hallux (or first toe) of Spinosaurus touched the ground, and the phalanges of the toe bones were unusually long and well-built. At their ends were shallow claws that had flat bottoms. This type of foot morphology is also seen in shorebirds, indicating that Spinosaurus's feet evolved for walking across unstable substrate and that they may have been webbed. From the caudal vertebrae of the tail projected significantly elongated, thin neural spines, akin to the condition observed in some other spinosaurids, though to a more extreme degree. Coupled with the also elongated chevron bones on the underside of the caudals, this resulted in a deep and narrow tail with a paddle or fin-like shape, comparable to the tails of newts and crocodilians. Classification Spinosaurus gives its name to the dinosaur family Spinosauridae, which includes two subfamilies: Baryonychinae and Spinosaurinae. Baryonychinae includes Baryonyx from southern England and Suchomimus from Niger in central Africa. Spinosaurinae includes Spinosaurus, Siamosaurus, Ichthyovenator, Irritator, Angaturama (which may be synonymous with Irritator), Sigilmassasaurus and Oxalaia (both of which may be synonymous with Spinosaurus). The spinosaurines share unserrated straight teeth that are widely spaced (e.g., 12 on one side of the maxilla), as opposed to the baryonychines, which have serrated curved teeth that are numerous (e.g., 30 on one side of the maxilla).
An analysis of Spinosauridae by Arden and colleagues (2018) named the clade Spinosaurini and defined it as all spinosaurids closer to Spinosaurus aegyptiacus than to Irritator challengeri or Oxalaia quilombensis; it also found Siamosaurus suteethorni and Ichthyovenator laosensis to be members of Spinosaurinae. Phylogeny The subfamily Spinosaurinae was named by Sereno in 1998, and defined by Holtz and colleagues (2004) as all taxa closer to Spinosaurus aegyptiacus than to Baryonyx walkeri. The subfamily Baryonychinae was named by Charig & Milner in 1986. They erected both the subfamily and the family Baryonychidae for the newly discovered Baryonyx, before it was referred to Spinosauridae. Their subfamily was defined by Holtz and colleagues in 2004, as the complementary clade of all taxa closer to Baryonyx walkeri than to Spinosaurus aegyptiacus. Examinations by Marcos Sales, Cesar Schultz, and colleagues (2017) indicate that the South American spinosaurids Angaturama, Irritator, and Oxalaia were intermediate between Baryonychinae and Spinosaurinae based on their craniodental features and cladistic analysis. This indicates that Baryonychinae may in fact be non-monophyletic. Their cladogram can be seen below. The cladogram below depicts the findings of Arden and colleagues (2018): Paleobiology Function of neural spines The function of the dinosaur's sail or hump is uncertain; scientists have proposed several hypotheses including heat regulation and display. In addition, such a prominent feature on its back could make it appear even larger than it was, intimidating other animals. The structure may have been used for thermoregulation. If the structure contained abundant blood vessels, the animal could have used the sail's large surface area to absorb heat. This would imply that the animal was only partly warm-blooded at best and lived in climates where night-time temperatures were cool or low and the sky usually not cloudy.
It is also possible that the structure was used to radiate excess heat from the body, rather than to collect it. Large animals, due to the relatively small ratio of their body surface area to overall volume (Haldane's principle), face far greater problems dissipating excess heat at higher temperatures than gaining it at lower ones. Sails of large dinosaurs added considerably to the skin area of their bodies, with minimal increase in volume. Furthermore, if the sail was turned away from the sun, or positioned at a 90-degree angle to a cooling wind, the animal would quite effectively cool itself in the warm climate of Cretaceous Africa. However, Bailey (1997) was of the opinion that a sail could have absorbed more heat than it radiated. Bailey proposed instead that Spinosaurus and other dinosaurs with long neural spines had fatty humps on their backs for energy storage, insulation, and shielding from heat. Many elaborate body structures of modern-day animals serve to attract members of the opposite sex during mating. It is possible that the sail of Spinosaurus was used for courtship, in a way similar to a peacock's tail. Stromer speculated that the size of the neural spines may have differed between males and females. Gimsa and colleagues (2015) suggest that the dorsal sail of Spinosaurus was analogous to the dorsal fins of sailfish and served a hydrodynamic purpose. Gimsa and others point out that more basal, long-legged spinosaurids had otherwise round or crescent-shaped dorsal sails, whereas in Spinosaurus, the dorsal neural spines formed a shape that was roughly rectangular, similar in shape to the dorsal fins of sailfish. They therefore argue that Spinosaurus used its dorsal neural sail in the same manner as sailfish, and that it also employed its long narrow tail to stun prey like a modern thresher shark.
Sailfish employ their dorsal fins to herd schools of fish into a "bait ball", cooperating to trap the fish in an area where the sailfish can snatch them with their bills. Spinosaurus anatomy exhibits another feature that may have a modern analogy: its long tail resembled that of the thresher shark, which is employed to slap the water to herd and stun shoals of fish before devouring them (Oliver and colleagues, 2013). The strategies that sailfish and thresher sharks employ against shoaling fish are more effective when the shoal is first concentrated into a ‘bait ball’ (Helfman, Collette & Facey, 1997; Oliver and colleagues, 2013; Domenici and colleagues, 2014). Since this is difficult for individual predators to achieve, they cooperate in this effort. When herding a shoal of fish or squid, sailfish also raise their sails to make themselves appear larger. When they slash or wipe their bills through shoaling fish by turning their heads, their dorsal sail and fins are outstretched to stabilize their bodies hydrodynamically (Lauder & Drucker, 2004). Domenici and colleagues (2014) postulate that these fin extensions enhance the accuracy of tapping and slashing. The sail can reduce yaw rotation by counteracting the lateral force in the direction opposite to the slash, as suggested by Gimsa and colleagues (2015). This also means that prey is less likely to recognize the massive trunk as being part of an approaching predator (Marras and colleagues, 2015; Webb & Weihs 2015). Spinosaurus exhibited the anatomical features required to combine all three hunting strategies: a sail for herding prey more efficiently, as well as a flexible tail and neck to slap the water for stunning, injuring or killing prey.
The submerged dorsal sail would have provided a strong centreboard-like counterforce for powerful sideways movements of the strong neck and long tail, as performed by sailfish (Domenici and colleagues, 2014) or thresher sharks (Oliver and colleagues, 2013). While smaller dorsal sails or fins make the dorsal water volume more accessible for slashing, it can be speculated that their smaller stabilization effect makes lateral slashing less efficient (e.g. for thresher sharks). By forming a hydrodynamic fulcrum and hydrodynamically stabilizing the trunk along the dorsoventral axis, Spinosaurus’ sail would also have compensated for the inertia of lateral neck movements with tail movements and vice versa, not only for predation but also for accelerated swimming. This behavior might also have been one reason for the muscular chest and neck of Spinosaurus reported by Ibrahim and colleagues (2014). Diet and feeding It is unclear whether Spinosaurus was primarily a terrestrial predator or a piscivore, as indicated by its elongated jaws, conical teeth and raised nostrils. The hypothesis that spinosaurs were specialized fish eaters had been suggested earlier by A. J. Charig and A. C. Milner for Baryonyx. They based this on the anatomical similarity with crocodilians and the presence of digestive acid-etched fish scales in the rib cage of the type specimen. Large fish are known from the faunas containing other spinosaurids, including Mawsonia, in the mid-Cretaceous of northern Africa and Brazil. Direct evidence for spinosaur diet comes from related European and South American taxa.
Baryonyx was found with fish scales and bones from juvenile Iguanodon in its stomach, and a tooth embedded in a South American pterosaur bone suggests that spinosaurs occasionally preyed on pterosaurs. Spinosaurus was likely a generalized and opportunistic predator, possibly a Cretaceous equivalent of large grizzly bears, biased toward fishing, though it undoubtedly scavenged and took many kinds of small or medium-sized prey. In 2009, Dal Sasso and colleagues reported the results of X-ray computed tomography of the MSNM V4047 snout. As the foramina on the outside all communicated with a space on the inside of the snout, the authors speculated that Spinosaurus had pressure receptors inside the space that allowed it to hold its snout at the surface of the water to detect swimming prey without seeing it. A 2013 study by Andrew R. Cuff and Emily J. Rayfield concluded that biomechanical data suggest that Spinosaurus was not an obligate piscivore and that its diet was more closely associated with each individual's size. The characteristic rostral morphology of Spinosaurus allowed its jaws to resist bending in the vertical direction, but its jaws were poorly adapted to resisting lateral bending compared to other members of this group (Baryonyx) and modern alligators. This suggests that Spinosaurus preyed more regularly on fish than on land animals, although it is still considered to have preyed on the latter as well. In 2022, Sakamoto estimated that Spinosaurus had an anterior bite force of 4,829 newtons and a posterior bite force of 11,936 newtons. Based on this estimate, he asserted that the jaws of Spinosaurus were adapted for generating relatively fast shutting speeds with less muscle input force, indicating that the animal likely killed its prey with fast-snapping jaws rather than slow-crushing bites, a trait commonly observed in animals with a semi-aquatic feeding habit.
A 2024 paper suggests that Spinosaurus and other spinosaurines, in addition to fish, also preyed upon small to medium-sized terrestrial vertebrates, and had relatively weak bite forces compared to those of other theropods. Aquatic habits A 2010 isotope analysis by Romain Amiot and colleagues found that oxygen isotope ratios of spinosaurid teeth, including teeth of Spinosaurus, indicate semiaquatic lifestyles. Isotope ratios from tooth enamel and from other parts of Spinosaurus (found in Morocco and Tunisia) and of other predators from the same area such as Carcharodontosaurus were compared with isotopic compositions from contemporaneous theropods, turtles, and crocodilians. The study found that Spinosaurus teeth from five of six sampled localities had oxygen isotope ratios closer to those of turtles and crocodilians when compared with other theropod teeth from the same localities. The authors postulated that Spinosaurus switched between terrestrial and aquatic habitats to compete for food with large crocodilians and other large theropods respectively. A 2018 study by Donald Henderson, however, challenged the claim that Spinosaurus was semiaquatic. By studying buoyancy and lung placement in crocodilians and comparing them with the lung placement of Spinosaurus, Henderson found that Spinosaurus could not sink or dive below the water surface. It was also capable of keeping its entire head above the water surface while floating, much like other non-aquatic theropods. Furthermore, the study found that Spinosaurus had to continually paddle its hind legs to prevent itself from tipping over onto its side, something that extant semiaquatic animals do not need to do. Henderson therefore theorized that Spinosaurus probably did not hunt completely submerged in water as previously hypothesized, but instead would have spent much of its time on land or in shallow water.
Recent studies of the tail vertebrae of Spinosaurus challenge Henderson's proposal that Spinosaurus mainly inhabited land near and in shallow water and was too buoyant to submerge. Studies of the tail, based on fossils recovered and analyzed by Ibrahim, Pierce, Lauder, Sereno, and colleagues in 2018, indicate that Spinosaurus had a keeled tail that was well adapted to propelling the animal through water. The elongated neural spines and chevrons, which run to the end of the tail on both dorsal and ventral sides, indicate that Spinosaurus was able to swim in a manner similar to modern crocodilians. Through experimentation by Lauder and Pierce, the tail of Spinosaurus was found to produce eight times as much forward thrust as the tails of terrestrial theropods like Coelophysis and Allosaurus, as well as being twice as efficient at achieving forward thrust. The discovery indicates that Spinosaurus may have had a lifestyle comparable to modern alligators and crocodiles, remaining in water for long periods of time while hunting. David Hone and Thomas Holtz published a paper in 2021 in which they argue that the anatomy of Spinosaurus is more consistent with a shoreline generalist lifestyle than with an active aquatic pursuit predator as suggested by Ibrahim. They highlight the positioning of the nostrils and orbits as one reason why a crocodile-like lifestyle is unlikely: they are ventrally positioned in such a way that the whole head would have to be lifted inefficiently out of the water in order to breathe. Additionally, they argue that the general body shape of Spinosaurus is poorly adapted for this lifestyle, drawing on the amount of water drag and aquatic instability caused by the sail, as well as the rigid trunk and seemingly scarcely-muscled tail. Animals like crocodilians require a flexible body to move through the water and make sharp turns when chasing prey, a flexibility that, according to Hone and Holtz's findings, Spinosaurus lacked.
A 2022 study by Fabbri and colleagues compared the bone structure of Spinosaurus with that of Baryonyx and Suchomimus. The study revealed that Spinosaurus and Baryonyx had dense bones, which would have allowed them to dive and pursue prey underwater, whereas Suchomimus had more hollow bones, suggesting it preferred to hunt in shallow water. These findings also suggest that various spinosaurid genera were more ecologically disparate than previously believed, as some were better suited to hunting in subaqueous environments than other, closely related genera. In the same year, contradicting the study by Fabbri and colleagues, Sereno and his colleagues suggested that Spinosaurus was wholly bipedal on land and an unstable, slow-moving surface swimmer in deep water. Their results came from reconstructing a CT model of the skeleton and then adding internal air spaces and muscles. These results, coupled with Spinosaurus fossils showing that it also lived further inland, suggest it was a semi-aquatic, ambush piscivore that preferred waterside environments both along the coasts and further inland along rivers and lakes. At the same time, they suggested that the large tail fin was probably utilized more for display than for swimming, as the tails of living animals with comparably tall neural spines serve the same function. A 2024 paper by Myhrvold and colleagues likewise disputed that Spinosaurus and Baryonyx were diving pursuit predators, arguing instead that they hunted more like herons. Another paper in the same year analyzed linear measurements of the skull of Spinosaurus, and concluded that the skull morphology and hunting method of Spinosaurus would likely be most similar to those of wading birds like herons, though the authors noted that they were uncertain how beneficial the skull would have been for diving pursuit predation.
Locomotion and posture Although traditionally depicted in the scientific community as a biped, Spinosaurus was occasionally depicted in the mid-20th century as an obligate quadruped akin to Dimetrodon. Starting in the mid-1970s, it was hypothesized that Spinosaurus was at least an occasional quadruped, a view bolstered by the discovery of Baryonyx, a relative with robust arms. Because of the mass of the hypothesized fatty dorsal humps of Spinosaurus, Bailey (1997) was open to the possibility of a quadrupedal posture, leading to new restorations of it as such. Theropods, including spinosaurids, could not pronate their hands (rotate the forearm so the palm faced the ground), but a resting position on the side of the hand was possible, as shown by fossil prints from an Early Jurassic theropod. The hypothesis that Spinosaurus had a typical quadrupedal gait has since fallen out of favor, though it was still believed that spinosaurids may have crouched in a quadrupedal posture, due to biological and physiological constraints. The possibility of a quadrupedal Spinosaurus was revived by a 2014 paper by Ibrahim and colleagues that described new material of the animal. The paper found that the hind limbs of Spinosaurus were much shorter than previously believed, and that its center of mass was located in the midpoint of the torso region, as opposed to near the hip as in typical bipedal theropods. It was therefore proposed that Spinosaurus was poorly adapted for bipedal terrestrial locomotion, and must have been an obligate quadruped on land. The reconstruction used in the study was an extrapolation based on different-sized individuals, scaled to what were assumed to be the correct proportions. Paleontologist John Hutchinson of the Royal Veterinary College of the University of London has expressed skepticism about the new reconstruction, and cautioned that using different specimens can result in inaccurate chimaeras.
Scott Hartman also expressed criticism because he believed the legs and the pelvis were inaccurately scaled (27% too short) and did not match the published lengths. However, Mark Witton expressed agreement with the proportions reported in the paper. In their 2015 re-description of Sigilmassasaurus, Evers and colleagues argued that Sigilmassasaurus was in fact a genus distinct from Spinosaurus, and therefore questioned whether the material assigned by Ibrahim and colleagues belonged to Spinosaurus or to Sigilmassasaurus. In 2018, an analysis by Henderson found that Spinosaurus probably was competent at bipedal terrestrial locomotion; the center of mass was instead found to be close to the hips, allowing Spinosaurus to stand upright like other bipedal theropods. A 2024 article co-authored by Sereno stated that the previous calculations by Sereno that were used to argue quadrupedality for Spinosaurus had erroneously shifted the center of mass in front of the hips. They instead suggested that the dinosaur fit the criteria of being a graviportal (or slow-moving) biped. Ontogeny An ungual phalanx belonging to a very young juvenile cf. S. aegyptiacus indicates that the theropod developed its semiaquatic adaptations at a very young age or at birth and maintained them throughout its life. The specimen, found in 1999 and described by Simone Maganuco and Cristiano Dal Sasso and colleagues, is believed to have come from a small individual (assuming it resembled a smaller version of the adult), making it the smallest specimen of Spinosaurus currently known. Palaeopathology A cf. Spinosaurus sp. tooth from the Ifezouane Formation displays enhanced lingual curvature of the tooth's crown, the development of three deep grooves extending from the crown-root junction in the direction of the crown's apex, an attenuated carina that extends neither apically nor to the base of the tooth, and a wear facet at the tip.
Paleoenvironment The environment inhabited by Spinosaurus is only partially understood, and covers a great deal of what is now northern Africa. The deposits in which Spinosaurus is preserved date from 112 to 93.5 million years ago. A specimen tentatively referred to cf. Spinosaurus has been found in the Campanian Quseir Formation of Egypt, but no detailed description of the specimen was provided, and it has since been reclassified as an indeterminate theropod. A 1996 study concluded from Moroccan fossils that Spinosaurus, Carcharodontosaurus, and Deltadromeus "ranged across north Africa during the late Cretaceous (Cenomanian)." Those Spinosaurus that lived in the Bahariya Formation of what is now Egypt may have contended with shoreline conditions on tidal flats and channels, living in mangrove forests alongside the similarly large dinosaurian predators Bahariasaurus and Tameryraptor (originally assigned to Carcharodontosaurus), the titanosaur sauropods Paralititan and Aegyptosaurus, crocodylomorphs, bony and cartilaginous fish, turtles, lizards, and plesiosaurs. In the dry season it might have resorted to preying on pterosaurs. This situation resembles that in the Late Jurassic Morrison Formation of North America, which boasts up to five large theropod genera as well as several smaller genera (Henderson, 1998; Holtz and colleagues, 2004). Differences in head shape and body size among the large North African theropods may have been enough to allow niche partitioning as seen among the many different predator species found today in the African savanna (Farlow & Pianka, 2002). In popular culture Spinosaurus appeared in the 2001 film Jurassic Park III, replacing Tyrannosaurus as the main antagonist. The film's consulting paleontologist John R. Horner was quoted as saying, "If we base the ferocious factor on the length of the animal, there was nothing that ever lived on this planet that could match this creature [Spinosaurus].
Also my hypothesis is that T-rex was actually a scavenger rather than a killer. Spinosaurus was really the predatory animal." He has since retracted the statement about T. rex being a scavenger. In the film, Spinosaurus was portrayed as larger and more powerful than Tyrannosaurus: in a scene depicting a battle between the two resurrected predators, Spinosaurus emerges victorious by snapping the Tyrannosaurus's neck. In the fourth film, Jurassic World, there is a nod to this fight when the T. rex smashes through the skeleton of a Spinosaurus in the climactic fight near the end of the film. Spinosaurus has appeared in many Jurassic Park games, most notably Jurassic World Evolution and its sequel. The same Spinosaurus from the third film returns in the fourth and fifth seasons of Jurassic World Camp Cretaceous, this time battling two T. rex. Spinosaurus has long been depicted in popular books about dinosaurs, although only recently has there been enough information about spinosaurids for an accurate depiction. After an influential 1955 skeletal reconstruction by Lapparent and Lavocat based on a 1936 diagram by Stromer, it was long treated as a generalized upright theropod, with a skull similar to that of other large theropods and a sail on its back, even having four-fingered hands. In addition to films, action figures, video games, and books, Spinosaurus has been depicted on postage stamps from countries such as Angola, The Gambia, and Tanzania.
https://en.wikipedia.org/wiki/Body%20odor
Body odor
Body odor or body odour (BO) is present in all animals and its intensity can be influenced by many factors (behavioral patterns, survival strategies). Body odor has a strong genetic basis, but can also be strongly influenced by various factors, such as sex, diet, health, and medication. The body odor of human males plays an important role in human sexual attraction, as a powerful indicator of MHC/HLA heterozygosity. Significant evidence suggests that women are attracted to men whose body odor is different from theirs, indicating that they have immune genes that are different from their own, which may produce healthier offspring. Causes In humans, the formation of body odors is caused by factors such as diet, sex, health, and medication, but the major contribution comes from bacterial activity on skin gland secretions. Humans have three types of sweat glands: eccrine sweat glands, apocrine sweat glands and sebaceous glands. Eccrine sweat glands are present from birth, while the latter two become activated during puberty. Among the different types of human skin glands, body odor is primarily the result of the apocrine sweat glands, which secrete the majority of chemical compounds that the skin flora metabolize into odorant substances. This happens mostly in the axillary (armpit) region, although the gland can also be found in the areola, anogenital region, and around the navel. In humans, the armpit regions seem more important than the genital region for body odor, which may be related to human bipedalism. The genital and armpit regions also contain springy hairs which help diffuse body odors. The main components of human axillary odor are unsaturated or hydroxylated branched fatty acids with E-3-methylhex-2-enoic acid (E-3M2H) and 3-hydroxy-3-methylhexanoic acid (HMHA), sulfanylalkanols and particularly 3-methyl-3-sulfanylhexan-1-ol (3M3SH), and the odoriferous steroids androstenone (5α-androst-16-en-3-one) and androstenol (5α-androst-16-en-3α-ol). 
E-3M2H is bound and carried by two apocrine secretion odor-binding proteins, ASOB1 and ASOB2, to the skin surface. Body odor is influenced by the actions of the skin flora, including members of Corynebacterium, which manufacture enzymes called lipases that break down the lipids in sweat to create smaller molecules like butyric acid. These smaller molecules smell, and give body odor its characteristic aroma. Greater populations of Corynebacterium jeikeium are found in the armpits of men, whereas greater populations of Staphylococcus haemolyticus are found in the armpits of women. This causes male armpits to give off a rancid/cheese-like smell, whereas female armpits give off a more fruity/onion-like smell. Staphylococcus hominis is also known for producing thioalcohol compounds that contribute to odors. Propionic acid (propanoic acid) is present in many sweat samples. This acid is a breakdown product of some amino acids by propionibacteria, which thrive in the ducts of adolescent and adult sebaceous glands. Because propionic acid is chemically similar to acetic acid, with similar characteristics including odor, body odors may be identified as having a pungent, cheesy, vinegar-like smell, although certain people might find it pleasant at lower concentrations. Isovaleric acid (3-methylbutanoic acid) is another source of body odor, resulting from the actions of the bacterium Staphylococcus epidermidis, which is also present in several types of strong cheese. Factors such as food, drink, gut microbiome, and genetics can affect body odor. Function Animals In many animals, body odor plays an important survival function. Strong body odor can be a warning signal for predators to stay away (such as porcupine stink), or it can also be a signal that the prey animal is unpalatable.
For example, some animal species that feign death to survive (like opossums) produce in this state a strong body odor, deceiving a predator into thinking that the prey animal has been dead for a long time and is already in an advanced stage of decomposition. Some animals with strong body odor are rarely attacked by most predators, although they can still be killed and eaten by birds of prey, which are tolerant of carrion odors. Body odor is an important feature of animal physiology. It plays a different role in different animal species. For example, in some predator species that hunt by stalking (such as big and small cats), the absence of body odor is important, and they spend plenty of time and energy keeping their bodies free of odor. For other predators, such as those that hunt by visually locating prey and running after it for long distances (such as dogs and wolves), the absence of body odor is not critical. In most animals, body odor intensifies in moments of stress and danger. Humans In humans, body odor serves as a means of chemosensory signal communication between members of the species. These signals are called pheromones and they can be transmitted through a variety of mediums. The most common way that human pheromones are transmitted is through bodily fluids. Human pheromones are contained in sweat, semen, vaginal secretions, breast milk, and urine. The signals carried in these fluids serve a range of functions from reproductive signaling to infant socialization. Each person produces a unique spread of pheromones that can be identified by others. This differentiation allows sexual attraction and kinship ties to form. Sebaceous and apocrine glands become active at puberty. This, as well as many apocrine glands being close to the sex organs, points to a role related to mating. Sebaceous glands line the human skin while apocrine glands are located around body hairs.
Compared to other primates, humans have extensive axillary hair and many odor-producing sources, in particular many apocrine glands. In humans, the apocrine glands have the ability to secrete pheromones. These steroid compounds are produced within the peroxisomes of the apocrine glands by enzymes such as mevalonate kinases. Sexual selection Pheromones are a factor in mate selection and reproduction in humans. In women, the sense of olfaction is strongest around the time of ovulation, significantly stronger than during other phases of the menstrual cycle and also stronger than the sense in males. Pheromones can be used to deliver information about the major histocompatibility complex (MHC). The MHC in humans is referred to as the Human Leukocyte Antigen (HLA). Each type has a unique scent profile that can be utilized during the mate-selection process. When selecting mates, women tend to be attracted to those that have different HLA types than their own. This is thought to increase the strength of the family unit and the chances of survival for potential offspring. Studies have suggested that people might be using odor cues associated with the immune system to select mates. Using a brain-imaging technique, Swedish researchers have shown that homosexual and heterosexual males' brains respond in different ways to two odors that may be involved in sexual arousal, and that homosexual men respond in the same way as heterosexual women, though it could not be determined whether this was cause or effect. When the study was expanded to include lesbian women, the results were consistent with previous findings: lesbian women were not as responsive to male-identified odors, while responding to female odors in a similar way to heterosexual males. According to the researchers, this research suggests a possible role for human pheromones in the biological basis of sexual orientation.
Kinship communication Humans can olfactorily detect blood-related kin. Mothers can identify their biological children, but not their stepchildren, by body odor. Preadolescent children can olfactorily detect their full siblings, but not half-siblings or step-siblings, and this might explain incest avoidance and the Westermarck effect. Babies can recognize their mothers by smell, while mothers, fathers, and other relatives can identify a baby by smell. This connection between genetically similar family members is due to the habituation of familial pheromones. In the case of babies and mothers, this chemosensory information is primarily contained within breast milk and the mother's sweat. Babies are observed to have stronger neural connections with their mothers than with strangers. This strengthened neurological connection allows for the biological development and socialization of the infant by their mother. Using these connections, the mother transmits olfactory signals to the infant, which are then perceived and integrated. In terms of biological functioning, olfactory signaling allows for functional breastfeeding to occur. In cases of effective latching, breastfed infants are able to locate their mother's nipples for feeding using the sensory information enclosed in their mother's body odor. While no specific human breast pheromones have been identified, studies compare the communication to that of the rabbit mammary pheromone 2MB2. The perception and integration of these signals is an evolutionary response that allows newborns to locate their source of nutrition. Signaling contains a level of precision that allows babies to differentiate their mother's breasts from those of other women. Once the baby recognizes the familiar olfactory signal, the behavioral response of latching follows. Over time the infant becomes habituated to their mother's breast pheromones, which increases latch efficiency.
Beyond a biological function, a mother's body odor plays a role in developing a baby's social capabilities. The ability of an infant to evaluate the properties of human faces stems from the olfactory cues given by their mother. Frequent exposure to the pheromones exuded by their mother allows the connection between vision and smell to form in infants. This type of connection is only found between mothers and babies, and over time it socializes the ability to recognize the features that distinguish human faces from inanimate objects. Environmental threats The connection between olfactory and visual cues has also been observed outside of familial relationships. Evolutionarily, body odor has been used to communicate messages about potentially dangerous stimuli in the environment. Body odor produced during particularly stressful situations can produce a cascade of reactions in the brain. Once the olfactory system is activated by a threatening stimulus, heightened activity in the amygdala and occipital cortex is triggered. This chain reaction serves to help assess the nature of the threat and increase the chance of survival. Humans have few olfactory receptor cells compared to dogs and few functional olfactory receptor genes compared to rats. This is in part due to a reduction of the size of the snout in order to achieve depth perception, as well as other changes related to bipedalism. However, it has been argued that humans may have larger brain areas associated with olfactory perception compared to other species. Genes affecting body odor MHC Body odor is influenced by major histocompatibility complex (MHC) molecules. These are genetically determined and play an important role in the immunity of the organism. The vomeronasal organ contains cells sensitive to MHC molecules in a genotype-specific way. Experiments on animals and volunteers have shown that potential sexual partners tend to be perceived as more attractive if their MHC composition is substantially different.
Married couples differ more with regard to MHC genes than would be expected by chance. This behavior pattern promotes variability of the immune system of individuals in the population, thus making the population more robust against new diseases. Another reason may be to prevent inbreeding. ABCC11 The ABCC11 gene determines axillary body odor and the type of earwax. The loss of a functional ABCC11 gene is caused by a 538G>A single-nucleotide polymorphism, resulting in a loss of body odor in people who are homozygous for it. Firstly, it affects apocrine sweat glands by reducing secretion of odorous molecules and their precursors. The lack of ABCC11 function results in a decrease of the odorant compounds 3M2H, HMHA, and 3M3SH via a strongly reduced secretion of the precursor amino-acid conjugates 3M2H–Gln, HMHA–Gln, and Cys–Gly–(S) 3M3SH; and a decrease of the odoriferous steroids androstenone and androstenol, possibly due to the reduced secretion of dehydroepiandrosterone sulfate (DHEAS) and dehydroepiandrosterone (DHEA), which may be bacterial substrates for odoriferous steroids; research has found no difference, however, in testosterone secretion in apocrine sweat between ABCC11 mutants and non-mutants. Secondly, it is also associated with a strongly reduced/atrophic size of apocrine sweat glands and a decreased protein (such as ASOB2) concentration in axillary sweat. The non-functional ABCC11 allele is predominant among East Asians (80–95%), but very rare among European and African populations (0–3%). Most of the world's population has the gene that codes for the wet-type earwax and average body odor; however, East Asians are more likely to inherit the allele associated with the dry-type earwax and a reduction in body odor. The reduction in body odor may be due to adaptation to colder climates by their ancient Northeast Asian ancestors. However, research has observed that this allele is not solely responsible for ethnic differences in scent.
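The recessive genotype-phenotype relationship described above (only homozygotes for the 538G>A variant lose ABCC11 function) can be summarized in a minimal sketch. This is an illustrative mapping only, not a diagnostic tool; the function name `abcc11_phenotype` is invented for this example.

```python
def abcc11_phenotype(genotype):
    """Map a 538G>A (rs17822931) genotype to the phenotype described above.

    The A allele is recessive: only A/A homozygotes lack functional ABCC11,
    giving dry-type earwax and strongly reduced axillary odor; G/G and G/A
    individuals retain wet-type earwax and typical body odor.
    """
    alleles = set(genotype.upper())
    if len(genotype) != 2 or not alleles <= {"G", "A"}:
        raise ValueError("genotype must be two of G/A, e.g. 'GA'")
    if alleles == {"A"}:  # homozygous for the loss-of-function allele
        return "dry earwax, reduced body odor"
    return "wet earwax, typical body odor"  # GG homozygote or GA carrier

print(abcc11_phenotype("AA"))  # dry earwax, reduced body odor
print(abcc11_phenotype("GA"))  # wet earwax, typical body odor
```

The carrier case (GA) behaving like GG is what makes the trait recessive, matching the text's note that body odor is lost only in people homozygous for the polymorphism.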
A 2016 study analyzed differences in volatile organic compounds (VOCs) across ethnic groups and found that while the compounds largely did not differ qualitatively, they did differ quantitatively. The observed differences varied with ethnic origin, but not entirely with ABCC11 genotype. One large study failed to find any significant differences across ethnicities in residual compounds on the skin, including those located in sweat. Any observed ethnic variation in skin odor is therefore more likely to stem from diet, hygiene, microbiome, and other environmental factors. Research has indicated a strong association between people with axillary osmidrosis and the ABCC11 genotypes GG or GA at the SNP site (rs17822931), in comparison to the genotype AA.

Age-Related Differences

As seen in non-human animals such as mice, black-tailed deer, rabbits, otters, and owl monkeys, body odor contains age-related signals that these animals can detect and process. Similarly, humans have been seen to distinguish age-related information from body odor, particularly relating to odors of those of old age. In a study determining whether there is a difference between the body odor of individuals of various ages, three groups were studied: those aged 20-30, 45-55, and 75-95, corresponding to young, middle, and old age, respectively. This study determined that individuals could distinguish between odors of various ages and group together odors of old age, suggesting that there are certain chemical differences with age resulting in "age-dependent odor characteristics". Another study evaluated the components of body odor in participants aged 26 through 75 using headspace gas chromatography and mass spectrometry. This study demonstrated that in individuals 40 years or older, 2-nonenal, an unsaturated aldehyde producing a greasy and grassy odor, was detected in increasing concentrations with age.
The detection of increasing amounts of 2-nonenal in individuals 40 years or older suggested that 2-nonenal contributes to the deteriorating body odor seen with aging.

Body Odor and Disease

In mammals, body odor can also be used as a symptom of disease. A person's body odor is unique, similar to a fingerprint, and can change with sexual life, genetics, age and diet. Body odor, however, can be used as an indication of disease. For example, human urine typically contains about 95% water; in a person with abnormally high blood sugar, however, the urine becomes more concentrated with glucose. Therefore, if a person's body odor or urine smells unusually fruity or sweet, that can be a sign of diabetes. Additionally, an ammonia smell in one's body, urine, or breath could be an indicator of kidney disease. Typically, the liver converts ammonia to urea because ammonia has a high level of toxicity, and the kidneys are responsible for removing waste such as urea from the body. However, if the kidneys are not functioning properly, urea accumulates and can be broken back down into ammonia, causing the urine and even one's breath to smell like ammonia. In conclusion, body odor can be a helpful indicator of disease, especially when it suddenly deviates from normal.

(Abbreviations used above: HMHA, 3-hydroxy-3-methyl-hexanoic acid; 3M2H, (E)-3-methyl-2-hexenoic acid; 3M3SH, 3-methyl-3-sulfanylhexan-1-ol.)

Alterations

Body odor may be reduced, prevented, or even aggravated by using deodorants, antiperspirants, disinfectants, underarm liners, triclosan, special soaps or foams with antiseptic plant extracts such as ribwort and liquorice, chlorophyllin ointments and sprays topically, and chlorophyllin supplements internally.
Although body odor is commonly associated with hygiene practices, its presentation can be affected by changes in diet as well as other factors. Skin spectrophotometry analysis found that the sweat of males who consumed more fruits and vegetables was rated as significantly more pleasant smelling, described as having "floral, fruity, sweet and medicinal qualities".

Industry

As many as 90% of Americans and 92% of teenagers use antiperspirants or deodorants. In 2014, the global market for deodorants was estimated at US$13 billion with a compound annual growth rate of 5.62% between 2015 and 2020.

Medical conditions

Osmidrosis or bromhidrosis is a foul odor caused by an abnormal increase in perspiration (hyperhidrosis), which creates a water-rich environment that supports bacteria. It can be particularly strong in the axillary region (underarms), in which case the condition may be referred to as axillary osmidrosis. The condition can also be known medically as apocrine bromhidrosis, ozochrotia, fetid sweat, body smell, or malodorous sweating.

Treatment

If body odor is affecting a person's quality of life, then seeing a primary care physician may be helpful. A doctor could recommend prescription antiperspirants containing aluminum chloride. This chemical agent temporarily blocks sweat pores, which reduces the amount a person sweats. Deodorant is another remedy for body odor. It specifically targets odor but will not reduce sweat. Deodorants are usually alcohol-based, which helps fight bacteria, and most contain perfumes that help mask odor. If someone is experiencing severe body odor, a doctor may recommend a surgical procedure called endoscopic thoracic sympathectomy, which cuts the nerves that control sweating. This surgery poses the risk of harming other nerves in the body.

Prevention

There are a number of ways to prevent body odor. The following suggestions may help those suffering from it.
Bathing daily with antibacterial soap helps reduce the amount of bacteria found on the skin; this is especially important after any type of physical activity. Shaving armpit hair allows sweat to evaporate more quickly so that it is less likely to produce an odor. Applying deodorant or antiperspirant after showering helps kill bacteria and reduce sweating. Wearing fresh, clean clothes is also very important, especially for those who sweat a lot.

Trimethylaminuria (TMAU), also known as fish odor syndrome or fish malodor syndrome, is a rare metabolic disorder in which trimethylamine is released in the person's sweat, urine, and breath, giving off a strong fishy odor or strong body odor.
https://en.wikipedia.org/wiki/Limnognathia
Limnognathia
Limnognathia maerski is a microscopic acoelomate freshwater animal, discovered living in cold springs on Disko Island, Greenland, in 1994. Since then, it has also been found on the Crozet Islands of Antarctica as well as in the British Isles, suggesting a worldwide distribution, although there are likely different species yet to be described. With an average length of 100 micrometers (μm), it is one of the smallest known animals.

Etymology of Micrognathozoa: from the Greek micros (very small), gnathos (jaw), and zoon (animal).

L. maerski is the only species that belongs to the Micrognathozoa, a relatively new phylum of animals that was only described in 2000.

Description

Feeding

L. maerski mainly feeds on bacteria, blue-green algae, and diatoms. It has very complex jaws, with fifteen separate elements; these elements are very small, ranging from 4 μm to 14 μm. The animal can extend part of its jaw structure outside its mouth while eating. It also extends much of its jaw structure outside its mouth when it is regurgitating indigestible items.

Anatomy

L. maerski has a large ganglion, or 'brain', in its head, and paired nerve cords extending ventrally (along the lower side of the body) towards the tail. Stiff sensory bristles made up of one to three cilia are scattered about the body. These bristles are similar to ones found on gnathostomulids, but up to three cilia may arise from a single cell in L. maerski, while gnathostomulids never have more than one cilium per cell. Flexible cilia are arranged in a horseshoe-shaped area on the forehead, in spots on the sides of the head, and in two rows on the underside of the body. The cilia on the forehead create a current that moves food particles towards the mouth; the other cilia move the animal.

Reproduction

All specimens of L. maerski that have been collected have had female organs.
They lay two kinds of eggs: thin-walled eggs that hatch quickly, and thick-walled eggs that are believed to be resistant to freezing, and thus capable of over-wintering and hatching in the spring. The same pattern is known from rotifers, where thick-walled eggs only form after fertilization by males. The youngest L. maerski specimens collected may also have male organs, and it is now hypothesized that the animals hatch as males and then become females (sequential hermaphroditism).

Taxonomy and phylogeny

Taxonomic status

Limnognathia maerski is nominally a platyzoan, but has variously been assigned as a class or subphylum in the clade Gnathifera, or as a phylum, named Micrognathozoa, in a Gnathifera superphylum. It is related to the rotifers and gnathostomulids, grouped together as the Gnathifera.

Phylogeny

Cladogram showing the relationships of Limnognathia: the Gnathifera is the sister group to the rest of the spiralians and is crucial to understanding animal evolution.
https://en.wikipedia.org/wiki/Lophotrochozoa
Lophotrochozoa
Lophotrochozoa ("crest/wheel animals") is a clade of protostome animals within the Spiralia. The taxon was established as a monophyletic group based on molecular evidence. The clade includes animals like annelids, molluscs, bryozoans, and brachiopods.

Groups

Lophotrochozoa was defined in 1995 as the "last common ancestor of the three traditional lophophorate taxa (brachiopods, bryozoans, and phoronid worms), the mollusks and the annelids, and all of the descendants of that common ancestor". It is a cladistic definition (a node-based name), so the affiliation to Lophotrochozoa of spiralian groups not mentioned directly in the definition depends on the topology of the spiralian tree of life, and in some phylogenetic hypotheses, Lophotrochozoa may even be synonymous with Spiralia. Nemertea and Orthonectida (if not considered directly as part of Annelida) are probably lophotrochozoan phyla; Dicyemida, Gastrotricha, and Platyhelminthes may be lophotrochozoans or placed in the Rouphozoa clade outside Lophotrochozoa; Chaetognatha, Gnathostomulida, Micrognathozoa, and Syndermata are probably gnathiferans and so placed as a basal spiralian clade outside Lophotrochozoa; Cycliophora could be a gnathiferan or a lophotrochozoan phylum. In one of the candidate hypotheses, the Lophotrochozoa has basal Cycliophora and Mollusca groups, and more derived Lophophorate, Nemertea and Annelida groups. With the introduction of Platytrochozoa and Rouphozoa, another candidate phylogeny has been proposed, though other studies recover a range of alternative possibilities. In the most recent research, the three phyla Cycliophora, Entoprocta and Bryozoa make up a single clade and are the first to branch off from the other lophotrochozoans. The second split is the molluscs, and the third consists of two sister phyla, the annelids and nemerteans. Lastly remains the clade that consists of the phoronids and the brachiopods.
Another study recovers Lophotrochozoa as equivalent to Platytrochozoa, forming a sister group with Gnathifera at the base of Spiralia. A number of fossil taxa can be identified as early lophotrochozoans, even if their precise affinity remains contested; the relevant Cambrian fossils are debated.

Characteristics

The clade Lophotrochozoa is named after the two distinct characteristics of its members: the lophophore, a feeding structure consisting of a ciliated crown of tentacles surrounding a mouth, and the developmental stage of the trochophore larva. Lophophorata such as Brachiozoa and Bryozoa have lophophores, while members of Trochozoa such as molluscs and annelids have trochophore larvae, although some members have neither.
https://en.wikipedia.org/wiki/Tan%20%28color%29
Tan (color)
Tan is a pale tone of brown. The name is derived from tannum (oak bark), used in the tanning of leather. The first recorded use of tan as a color name in English was in the year 1590. Colors which are similar or may be considered synonymous to tan include tawny, tenné, and fulvous.

Variations of tan

Sandy tan

Displayed at right is the color sandy tan. This color was formulated by Crayola in 2000 as a Crayola marker color.

Tan (Crayola)

Displayed at right is the orangish tone of tan called tan in Crayola crayons since 1958 and in Crayola markers since 1990.

Windsor tan

Displayed at right is the color Windsor tan. The first recorded use of Windsor tan as a color name in English was in 1925.

Tuscan tan

Displayed at right is the color Tuscan tan. The first recorded use of Tuscan tan as a color name in English was in 1926. The normalized color coordinates for Tuscan tan are identical to those of café au lait and French beige, which were first recorded as color names in English in 1839 and 1927, respectively.

In human culture

Military

Tan is the color of the beret of the United States Army Rangers, as well as of Canada's Canadian Special Operations Regiment and Joint Task Force 2.

Sunbathing

When a person sunbathes to make their skin darker, they are said to be getting a tan.

United States politics

The Barack Obama tan suit controversy was an incident in which US President Barack Obama wore a tan-colored suit during a press conference.
https://en.wikipedia.org/wiki/Push%E2%80%93relabel%20maximum%20flow%20algorithm
Push–relabel maximum flow algorithm
In mathematical optimization, the push–relabel algorithm (alternatively, preflow–push algorithm) is an algorithm for computing maximum flows in a flow network. The name "push–relabel" comes from the two basic operations used in the algorithm. Throughout its execution, the algorithm maintains a "preflow" and gradually converts it into a maximum flow by moving flow locally between neighboring nodes using push operations, under the guidance of an admissible network maintained by relabel operations. In comparison, the Ford–Fulkerson algorithm performs global augmentations that send flow along paths from the source all the way to the sink.

The push–relabel algorithm is considered one of the most efficient maximum flow algorithms. The generic algorithm has a strongly polynomial O(V²E) time complexity, which is asymptotically more efficient than the O(VE²) Edmonds–Karp algorithm. Specific variants of the algorithm achieve even lower time complexities. The variant based on the highest-label node selection rule has O(V²√E) time complexity and is generally regarded as the benchmark for maximum flow algorithms. Subcubic time complexity can be achieved using dynamic trees, although in practice it is less efficient.

The push–relabel algorithm has been extended to compute minimum cost flows. The idea of distance labels has led to a more efficient augmenting path algorithm, which in turn can be incorporated back into the push–relabel algorithm to create a variant with even higher empirical performance.

History

The concept of a preflow was originally designed by Alexander V. Karzanov and was published in 1974 in Soviet Mathematics Doklady 15. This pre-flow algorithm also used a push operation; however, it used distances in the auxiliary network to determine where to push the flow instead of a labeling system. The push-relabel algorithm was designed by Andrew V. Goldberg and Robert Tarjan.
The algorithm was initially presented in November 1986 in STOC '86: Proceedings of the eighteenth annual ACM symposium on Theory of computing, and then officially in October 1988 as an article in the Journal of the ACM. Both papers detail a generic form of the algorithm terminating in O(V²E), along with an O(V³) sequential implementation, an O(VE log(V²/E)) implementation using dynamic trees, and a parallel/distributed implementation. Goldberg and Tarjan introduced distance labels by incorporating them into the parallel maximum flow algorithm of Yossi Shiloach and Uzi Vishkin.

Concepts

Definitions and notations

Let:

G = (V, E) be a flow network with capacity function c, where s, t ∈ V are the chosen source and sink vertices respectively,
f denote a pre-flow in G,
xf(u) denote the excess function with respect to the flow f, defined by xf(u) = Σv f(v, u) − Σv f(u, v),
cf(u, v) denote the residual capacity function with respect to the flow f, defined by cf(u, v) = c(u, v) − f(u, v), with Ef the set of arcs (u, v) where cf(u, v) > 0, and
Gf = (V, Ef) denote the residual network of G with respect to the flow f.

The push–relabel algorithm uses a nonnegative integer valid labeling function 𝓁 which makes use of distance labels, or heights, on nodes to determine which arcs should be selected for the push operation. This function must satisfy the following conditions in order to be considered valid:

Valid labeling: 𝓁(u) ≤ 𝓁(v) + 1 for all (u, v) ∈ Ef
Source condition: 𝓁(s) = |V|
Sink conservation: 𝓁(t) = 0

In the algorithm, the label values of s and t are fixed. 𝓁(u) is a lower bound of the unweighted distance from u to t in Gf if t is reachable from u. If u has been disconnected from t, then 𝓁(u) − |V| is a lower bound of the unweighted distance from u to s. As a result, if a valid labeling function exists, there are no s–t paths in Gf, because no such paths can be longer than |V| − 1.

An arc (u, v) ∈ Ef is called admissible if 𝓁(u) = 𝓁(v) + 1. The admissible network is composed of the set of arcs in Ef that are admissible. The admissible network is acyclic. For a fixed flow f, a vertex u ∉ {s, t} is called active if it has positive excess with respect to f, i.e., xf(u) > 0.
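The definitions above can be illustrated with a small sketch (the helper names below are our own, not from the literature; flows are assumed to be stored as skew-symmetric matrices, as in the sample implementations later in this article):

```python
def excess(C, F, u):
    """xf(u): net flow into u, assuming F is skew-symmetric (F[u][v] == -F[v][u])."""
    return sum(F[v][u] for v in range(len(C)))

def residual(C, F, u, v):
    """cf(u, v) = c(u, v) - f(u, v)."""
    return C[u][v] - F[u][v]

def is_valid_labeling(C, F, label, s, t):
    """Check the three conditions: source, sink, and 𝓁(u) <= 𝓁(v) + 1 on residual arcs."""
    n = len(C)
    if label[s] != n or label[t] != 0:
        return False
    return all(label[u] <= label[v] + 1
               for u in range(n) for v in range(n)
               if residual(C, F, u, v) > 0)
```

For instance, after the initial saturating pushes out of the source, the labeling with 𝓁(s) = |V| and all other labels zero is valid, since every residual arc then either leaves a zero-labeled node or points away from the source.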
Operations

Initialization

The algorithm starts by creating a residual graph, initializing the preflow values to zero and performing a set of saturating push operations on the residual arcs (s, v) exiting the source. Similarly, the labels are initialized such that the label at the source is the number of nodes in the graph, 𝓁(s) = |V|, and all other nodes are given a label of zero. Once the initialization is complete the algorithm repeatedly performs either the push or relabel operations against active nodes until no applicable operation can be performed.

Push

The push operation applies on an admissible out-arc (u, v) of an active node u in Gf. It moves min(xf(u), cf(u, v)) units of flow from u to v.

push(u, v):
    assert xf[u] > 0 and 𝓁[u] == 𝓁[v] + 1
    Δ = min(xf[u], c[u][v] - f[u][v])
    f[u][v] += Δ
    f[v][u] -= Δ
    xf[u] -= Δ
    xf[v] += Δ

A push operation that causes f(u, v) to reach c(u, v) is called a saturating push since it uses up all the available capacity of the residual arc. Otherwise, all of the excess at the node is pushed across the residual arc. This is called an unsaturating or non-saturating push.

Relabel

The relabel operation applies on an active node u which is neither the source nor the sink and has no admissible out-arcs in Gf. It modifies 𝓁(u) to be the minimum value such that an admissible out-arc is created. Note that this always increases 𝓁(u) and never creates a steep arc, which is an arc (u, v) such that cf(u, v) > 0 and 𝓁(u) > 𝓁(v) + 1.

relabel(u):
    assert xf[u] > 0 and 𝓁[u] <= 𝓁[v] for all v such that cf[u][v] > 0
    𝓁[u] = 1 + min(𝓁[v] for all v such that cf[u][v] > 0)

Effects of push and relabel

After a push or relabel operation, 𝓁 remains a valid labeling function with respect to f. For a push operation on an admissible arc (u, v), it may add the arc (v, u) to Ef, whose constraint is satisfied because 𝓁(v) = 𝓁(u) − 1; it may also remove the arc (u, v) from Ef, which effectively removes the constraint 𝓁(u) ≤ 𝓁(v) + 1. To see that a relabel operation on node u preserves the validity of 𝓁(u), notice that this is trivially guaranteed by definition for the out-arcs of u in Gf.
For the in-arcs (v, u) of u in Gf, the increased 𝓁(u) can only satisfy the constraints 𝓁(v) ≤ 𝓁(u) + 1 less tightly, not violate them.

The generic push–relabel algorithm

The generic push–relabel algorithm is used as a proof of concept only and does not contain implementation details on how to select an active node for the push and relabel operations. This generic version of the algorithm will terminate in O(V²E).

Since 𝓁(s) = |V|, 𝓁(t) = 0, and there are no paths longer than |V| − 1 in Gf, in order for 𝓁(s) to satisfy the valid labeling condition, s must be disconnected from t. At initialisation, the algorithm fulfills this requirement by creating a pre-flow f that saturates all out-arcs of s, after which 𝓁(v) = 0 is trivially valid for all v ∈ V \ {s, t}. After initialisation, the algorithm repeatedly executes an applicable push or relabel operation until no such operations apply, at which point the pre-flow has been converted into a maximum flow.

generic-push-relabel(G, c, s, t):
    create a pre-flow f that saturates all out-arcs of s
    let 𝓁[s] = |V|
    let 𝓁[v] = 0 for all v ∈ V \ {s}
    while there is an applicable push or relabel operation do
        execute the operation

Correctness

The algorithm maintains the condition that 𝓁 is a valid labeling during its execution. This can be proven true by examining the effects of the push and relabel operations on the label function 𝓁. The relabel operation increases the label value by the associated minimum plus one, which always satisfies the valid labeling constraint. The push operation can send flow from u to v if 𝓁(u) = 𝓁(v) + 1. It may add (v, u) to Ef and may delete (u, v) from Ef. The addition of (v, u) to Ef will not affect the valid labeling since 𝓁(v) = 𝓁(u) − 1. The deletion of (u, v) from Ef removes the corresponding constraint since the valid labeling property only applies to residual arcs in Ef.

If a preflow f and a valid labeling 𝓁 for f exist, then there is no augmenting path from s to t in the residual graph Gf. This can be proven by contradiction based on inequalities which arise in the labeling function when supposing that an augmenting path does exist. If the algorithm terminates, then all nodes in V \ {s, t} are not active.
This means all nodes in V \ {s, t} have no excess flow, and with no excess the preflow f obeys the flow conservation constraint and can be considered a normal flow. This flow is the maximum flow according to the max-flow min-cut theorem since there is no augmenting path from s to t. Therefore, the algorithm will return the maximum flow upon termination.

Time complexity

In order to bound the time complexity of the algorithm, we must analyze the number of push and relabel operations which occur within the main loop. The numbers of relabel, saturating push and nonsaturating push operations are analyzed separately.

In the algorithm, the relabel operation can be performed at most 2|V| − 1 times per node. This is because the labeling value 𝓁(u) for any node u can never decrease, and the maximum label value is at most 2|V| − 1 for all nodes. This means the relabel operation could potentially be performed 2|V| − 1 times for each of the |V| nodes. This results in a bound of O(V²) for the relabel operation.

Each saturating push on an admissible arc (u, v) removes the arc from Ef. For the arc to be reinserted into Ef for another saturating push, v must first be relabeled, followed by a push on the arc (v, u), then u must be relabeled. In the process, 𝓁(u) + 𝓁(v) increases by at least two. Therefore, there are O(V) saturating pushes on (u, v), and the total number of saturating pushes is at most 2|V||E|. This results in a time bound of O(VE) for the saturating push operations.

Bounding the number of nonsaturating pushes can be achieved via a potential argument. We use the potential function Φ that sums 𝓁(u) over all active nodes u (i.e. Φ is the sum of the labels of all active nodes). It is obvious that Φ is initially 0 and stays nonnegative throughout the execution of the algorithm. Both relabels and saturating pushes can increase Φ. However, the value of Φ must be equal to 0 at termination since there cannot be any remaining active nodes at the end of the algorithm's execution.
This means that over the execution of the algorithm, the nonsaturating pushes must make up the difference of the relabel and saturating push operations in order for Φ to terminate with a value of 0. The relabel operations can increase Φ by at most (2|V| − 1)|V| in total. A saturating push on (u, v) activates v if it was inactive before the push, increasing Φ by at most 2|V| − 1. Hence, the total contribution of all saturating push operations to Φ is at most (2|V| − 1)(2|V||E|). A nonsaturating push on (u, v) always deactivates u, but it can also activate v, as in a saturating push. As a result, it decreases Φ by at least 𝓁(u) − 𝓁(v) = 1. Since relabels and saturating pushes increase Φ, the total number of nonsaturating pushes must make up the difference of (2|V| − 1)|V| + (2|V| − 1)(2|V||E|) = O(V²E). This results in a time bound of O(V²E) for the nonsaturating push operations.

In sum, the algorithm executes O(V²) relabels, O(VE) saturating pushes and O(V²E) nonsaturating pushes. Data structures can be designed to pick and execute an applicable operation in O(1) time. Therefore, the time complexity of the algorithm is O(V²E).

Example

The following is a sample execution of the generic push-relabel algorithm, as defined above, on a simple network flow graph. In the example, each node is annotated with its label 𝓁(u) and excess xf(u) during the execution of the algorithm. Each residual graph in the example only contains the residual arcs with a capacity larger than zero, and each residual graph may correspond to multiple iterations of the loop.

Practical implementations

While the generic push–relabel algorithm has O(V²E) time complexity, efficient implementations achieve O(V³) or lower time complexity by enforcing appropriate rules in selecting applicable push and relabel operations. The empirical performance can be further improved by heuristics.

"Current-arc" data structure and discharge operation

The "current-arc" data structure is a mechanism for visiting the in- and out-neighbors of a node in the flow network in a static circular order.
If a singly linked list of neighbors is created for a node, the data structure can be as simple as a pointer into the list that steps through the list and rewinds to the head when it runs off the end.

Based on the "current-arc" data structure, the discharge operation can be defined. A discharge operation applies on an active node and repeatedly pushes flow from the node until it becomes inactive, relabeling it as necessary to create admissible arcs in the process.

discharge(u):
    while xf[u] > 0 do
        if current-arc[u] has run off the end of neighbors[u] then
            relabel(u)
            rewind current-arc[u]
        else
            let (u, v) = current-arc[u]
            if (u, v) is admissible then
                push(u, v)
            else
                let current-arc[u] point to the next neighbor of u

Finding the next admissible edge to push on has O(1) amortized complexity. The current-arc pointer only moves to the next neighbor when the edge to the current neighbor is saturated or non-admissible, and neither of these two properties can change until the active node u is relabelled. Therefore, when the pointer runs off, there are no admissible unsaturated edges and we have to relabel the active node u, so having moved the pointer through the whole neighbor list is paid for by the relabel operation.

Active node selection rules

Definition of the discharge operation reduces the push–relabel algorithm to repeatedly selecting an active node to discharge. Depending on the selection rule, the algorithm exhibits different time complexities. For the sake of brevity, we ignore s and t when referring to the nodes in the following discussion.

FIFO selection rule

The FIFO push–relabel algorithm organizes the active nodes into a queue. The initial active nodes can be inserted in arbitrary order. The algorithm always removes the node at the front of the queue for discharging. Whenever an inactive node becomes active, it is appended to the back of the queue. The algorithm has O(V³) time complexity.
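The FIFO rule can be sketched compactly in Python (the function name and bookkeeping below are our own, and this is an illustrative sketch rather than a tuned implementation; a real one would use the current-arc structure instead of rescanning all neighbors):

```python
from collections import deque

def fifo_push_relabel(C, s, t):
    """FIFO push-relabel on a capacity matrix C; active nodes are discharged in queue order."""
    n = len(C)
    F = [[0] * n for _ in range(n)]      # skew-symmetric flow matrix
    label = [0] * n
    excess = [0] * n
    label[s] = n
    q = deque()
    for v in range(n):                   # saturate all out-arcs of the source
        if C[s][v] > 0:
            F[s][v] = C[s][v]
            F[v][s] = -C[s][v]
            excess[v] = C[s][v]
            excess[s] -= C[s][v]
            if v != t:
                q.append(v)
    while q:
        u = q.popleft()
        while excess[u] > 0:             # discharge u completely
            pushed = False
            for v in range(n):
                if C[u][v] - F[u][v] > 0 and label[u] == label[v] + 1:
                    d = min(excess[u], C[u][v] - F[u][v])
                    F[u][v] += d
                    F[v][u] -= d
                    excess[u] -= d
                    excess[v] += d
                    if v not in (s, t) and excess[v] == d:
                        q.append(v)      # v just became active
                    pushed = True
                    if excess[u] == 0:
                        break
            if not pushed:               # no admissible out-arc: relabel
                label[u] = 1 + min(label[v] for v in range(n)
                                   if C[u][v] - F[u][v] > 0)
    return sum(F[s][v] for v in range(n))
```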
Relabel-to-front selection rule

The relabel-to-front push–relabel algorithm organizes all nodes into a linked list and maintains the invariant that the list is topologically sorted with respect to the admissible network. The algorithm scans the list from front to back and performs a discharge operation on the current node if it is active. If the node is relabeled, it is moved to the front of the list, and the scan is restarted from the front. The algorithm also has O(V³) time complexity.

Highest label selection rule

The highest-label push–relabel algorithm organizes all nodes into buckets indexed by their labels. The algorithm always selects an active node with the largest label to discharge. The algorithm has O(V²√E) time complexity. If the lowest-label selection rule is used instead, the time complexity becomes O(V²E).

Implementation techniques

Although in the description of the generic push–relabel algorithm above, 𝓁(u) is set to zero for each node u other than s and t at the beginning, it is preferable to perform a backward breadth-first search from t to compute exact labels.

The algorithm is typically separated into two phases. Phase one computes a maximum pre-flow by discharging only active nodes whose labels are below |V|. Phase two converts the maximum preflow into a maximum flow by returning excess flow that cannot reach t back to s. It can be shown that phase two has O(VE) time complexity regardless of the order of push and relabel operations and is therefore dominated by phase one. Alternatively, it can be implemented using flow decomposition.

Heuristics are crucial to improving the empirical performance of the algorithm. Two commonly used heuristics are the gap heuristic and the global relabeling heuristic. The gap heuristic detects gaps in the labeling function. If there is a label 0 < 𝓁' < |V| for which there is no node u such that 𝓁(u) = 𝓁', then any node u with 𝓁' < 𝓁(u) < |V| has been disconnected from t and can be relabeled to |V| + 1 immediately.
The global relabeling heuristic periodically performs a backward breadth-first search from the sink in the residual graph to compute the exact labels of the nodes. Both heuristics skip unhelpful relabel operations, which are a bottleneck of the algorithm and contribute to the ineffectiveness of dynamic trees.

Sample implementations

C implementation:

#include <stdlib.h>
#include <stdio.h>

#define NODES 6
#define MIN(X,Y) ((X) < (Y) ? (X) : (Y))
#define INFINITE 10000000

void push(const int * const * C, int ** F, int *excess, int u, int v) {
    int send = MIN(excess[u], C[u][v] - F[u][v]);
    F[u][v] += send;
    F[v][u] -= send;
    excess[u] -= send;
    excess[v] += send;
}

void relabel(const int * const * C, const int * const * F, int *height, int u) {
    int v;
    int min_height = INFINITE;
    for (v = 0; v < NODES; v++) {
        if (C[u][v] - F[u][v] > 0) {
            min_height = MIN(min_height, height[v]);
            height[u] = min_height + 1;
        }
    }
}

void discharge(const int * const * C, int ** F, int *excess, int *height, int *seen, int u) {
    while (excess[u] > 0) {
        if (seen[u] < NODES) {
            int v = seen[u];
            if ((C[u][v] - F[u][v] > 0) && (height[u] > height[v])) {
                push(C, F, excess, u, v);
            } else {
                seen[u] += 1;
            }
        } else {
            relabel(C, F, height, u);
            seen[u] = 0;
        }
    }
}

void moveToFront(int i, int *A) {
    int temp = A[i];
    int n;
    for (n = i; n > 0; n--) {
        A[n] = A[n-1];
    }
    A[0] = temp;
}

int pushRelabel(const int * const * C, int ** F, int source, int sink) {
    int *excess, *height, *list, *seen, i, p;

    excess = (int *) calloc(NODES, sizeof(int));
    height = (int *) calloc(NODES, sizeof(int));
    seen   = (int *) calloc(NODES, sizeof(int));
    list   = (int *) calloc((NODES-2), sizeof(int));

    for (i = 0, p = 0; i < NODES; i++) {
        if ((i != source) && (i != sink)) {
            list[p] = i;
            p++;
        }
    }

    height[source] = NODES;
    excess[source] = INFINITE;
    for (i = 0; i < NODES; i++)
        push(C, F, excess, source, i);

    p = 0;
    while (p < NODES - 2) {
        int u = list[p];
        int old_height = height[u];
        discharge(C, F, excess, height, seen, u);
        if (height[u] > old_height) {
            moveToFront(p, list);
            p = 0;
        } else {
            p += 1;
        }
    }

    int maxflow = 0;
    for (i = 0; i < NODES; i++)
        maxflow += F[source][i];

    free(list);
    free(seen);
    free(height);
    free(excess);

    return maxflow;
}

void printMatrix(const int * const * M) {
    int i, j;
    for (i = 0; i < NODES; i++) {
        for (j = 0; j < NODES; j++)
            printf("%d\t", M[i][j]);
        printf("\n");
    }
}

int main(void) {
    int **flow, **capacities, i;

    flow = (int **) calloc(NODES, sizeof(int*));
    capacities = (int **) calloc(NODES, sizeof(int*));
    for (i = 0; i < NODES; i++) {
        flow[i] = (int *) calloc(NODES, sizeof(int));
        capacities[i] = (int *) calloc(NODES, sizeof(int));
    }

    // Sample graph
    capacities[0][1] = 2;
    capacities[0][2] = 9;
    capacities[1][2] = 1;
    capacities[1][3] = 0;
    capacities[1][4] = 0;
    capacities[2][4] = 7;
    capacities[3][5] = 7;
    capacities[4][5] = 4;

    printf("Capacity:\n");
    printMatrix(capacities);

    printf("Max Flow:\n%d\n", pushRelabel(capacities, flow, 0, 5));

    printf("Flows:\n");
    printMatrix(flow);

    return 0;
}

Python implementation:

def relabel_to_front(C, source: int, sink: int) -> int:
    n = len(C)  # C is the capacity matrix
    F = [[0] * n for _ in range(n)]
    # residual capacity from u to v is C[u][v] - F[u][v]

    height = [0] * n  # height of node
    excess = [0] * n  # flow into node minus flow from node
    seen = [0] * n  # neighbours seen since last relabel
    # node "queue"
    nodelist = [i for i in range(n) if i != source and i != sink]

    def push(u, v):
        send = min(excess[u], C[u][v] - F[u][v])
        F[u][v] += send
        F[v][u] -= send
        excess[u] -= send
        excess[v] += send

    def relabel(u):
        # Find smallest new height making a push possible,
        # if such a push is possible at all.
        min_height = float('inf')
        for v in range(n):
            if C[u][v] - F[u][v] > 0:
                min_height = min(min_height, height[v])
                height[u] = min_height + 1

    def discharge(u):
        while excess[u] > 0:
            if seen[u] < n:  # check next neighbour
                v = seen[u]
                if C[u][v] - F[u][v] > 0 and height[u] > height[v]:
                    push(u, v)
                else:
                    seen[u] += 1
            else:  # we have checked all neighbours. must relabel
                relabel(u)
                seen[u] = 0

    height[source] = n  # longest path from source to sink is less than n long
    excess[source] = float('inf')  # send as much flow as possible to neighbours of source
    for v in range(n):
        push(source, v)

    p = 0
    while p < len(nodelist):
        u = nodelist[p]
        old_height = height[u]
        discharge(u)
        if height[u] > old_height:
            nodelist.insert(0, nodelist.pop(p))  # move to front of list
            p = 0  # start from front of list
        else:
            p += 1

    return sum(F[source])
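As a quick sanity check, the Python routine can be run on the same sample graph the C driver builds. The following self-contained sketch condenses the relabel-to-front function (with the symbolic infinity written as float('inf')) and applies it to that graph, whose maximum flow is 4: every path to the sink must cross the edge 4→5 of capacity 4, since node 3 is unreachable from the source.

```python
def relabel_to_front(C, source, sink):
    # Condensed restatement of the relabel-to-front routine above.
    n = len(C)
    F = [[0] * n for _ in range(n)]     # F[u][v] is the flow from u to v
    height = [0] * n
    excess = [0] * n
    seen = [0] * n
    nodelist = [i for i in range(n) if i not in (source, sink)]

    def push(u, v):
        send = min(excess[u], C[u][v] - F[u][v])
        F[u][v] += send
        F[v][u] -= send
        excess[u] -= send
        excess[v] += send

    def relabel(u):
        min_height = float('inf')
        for v in range(n):
            if C[u][v] - F[u][v] > 0:
                min_height = min(min_height, height[v])
                height[u] = min_height + 1

    def discharge(u):
        while excess[u] > 0:
            if seen[u] < n:
                v = seen[u]
                if C[u][v] - F[u][v] > 0 and height[u] > height[v]:
                    push(u, v)
                else:
                    seen[u] += 1
            else:
                relabel(u)
                seen[u] = 0

    height[source] = n
    excess[source] = float('inf')
    for v in range(n):
        push(source, v)

    p = 0
    while p < len(nodelist):
        u = nodelist[p]
        old_height = height[u]
        discharge(u)
        if height[u] > old_height:
            nodelist.insert(0, nodelist.pop(p))  # move to front
            p = 0
        else:
            p += 1

    return sum(F[source])

# The sample graph from the C driver: 6 nodes, source 0, sink 5.
C = [[0] * 6 for _ in range(6)]
C[0][1] = 2; C[0][2] = 9; C[1][2] = 1
C[2][4] = 7; C[3][5] = 7; C[4][5] = 4

print(relabel_to_front(C, 0, 5))  # prints 4, matching the C program's "Max Flow"
```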
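The global relabeling heuristic mentioned at the start of this section can be sketched as a backward breadth-first search over residual edges. The helper below (the name exact_labels and its standalone form are illustrative, not part of the article's sample code) computes, for every node, the exact distance to the sink in the residual graph; unreachable nodes keep the conservative label n.

```python
from collections import deque

def exact_labels(C, F, sink):
    # label[u] = length of the shortest residual path from u to the sink,
    # or n if the sink is unreachable from u.
    # A residual edge u -> v exists iff C[u][v] - F[u][v] > 0.
    n = len(C)
    label = [n] * n
    label[sink] = 0
    queue = deque([sink])
    while queue:
        v = queue.popleft()
        for u in range(n):
            if label[u] == n and C[u][v] - F[u][v] > 0:
                label[u] = label[v] + 1
                queue.append(u)
    return label
```

Periodically replacing the algorithm's height labels with these exact distances skips the many incremental relabel steps that the plain algorithm would otherwise perform.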
Mathematics
Graph theory
null
3444568
https://en.wikipedia.org/wiki/Big%20Dipper
Big Dipper
The Big Dipper (US, Canada) or the Plough (UK, Ireland) is an asterism consisting of seven bright stars of the constellation Ursa Major; six of them are of second magnitude and one, Megrez (δ), of third magnitude. Four define a "bowl" or "body" and three define a "handle" or "head". It is recognized as a distinct grouping in many cultures. The North Star (Polaris), the current northern pole star and the tip of the handle of the Little Dipper (Little Bear), can be located by extending an imaginary line through the front two stars of the asterism, Merak (β) and Dubhe (α). This makes it useful in celestial navigation. Names and places The constellation of Ursa Major (Latin: Greater Bear) has been seen as a bear, a wagon, or a ladle. The "bear" tradition is Indo-European (appearing in Greek, as well as in Vedic India), but apparently the name "bear" has parallels in Siberian or North American traditions. European astronomy The name "Bear" is Homeric, and apparently native to Greece, while the "Wain" tradition is Mesopotamian. Book XVIII of Homer's Iliad mentions it as "the Bear, which men also call the Wain". In Latin, these seven stars were known as the "Seven Oxen" (, from ). Classical Greek mythography identified the "Bear" as the nymph Callisto, changed into a she-bear by Hera, the jealous wife of Zeus. In Ireland and the United Kingdom, this pattern is known as the Plough (Irish: An Camchéachta – the bent plough). The symbol of the Starry Plough has been used as a political symbol by Irish Republican and Irish left wing movements. Former names include the Great Wain (i.e., wagon), Arthur's Wain or Butcher's Cleaver. The terms Charles's Wain and Charles his Wain are derived from the still older Carlswæn. A folk etymology holds that this derived from Charlemagne, but the name is common to all the Germanic languages and the original reference was to the churls' (i.e., the men's) wagon, in contrast to the women's wagon (the Little Dipper). 
An older "Odin's Wain" may have preceded these Nordic designations. In German, it is known as the "Great Wagon" () and, less often, the "Great Bear" (). Likewise, in the North Germanic languages, it is known by variations of "Charles's Wagon" (Karlavagnen, Karlsvogna, or Karlsvognen), but also the "Great Bear" (Stora Björn), and to the Norse pagans, it was known as Óðins vagn, "Woden's wagon". In Dutch, its official name is the "Great Bear" (Grote Beer), but it is popularly known as the "Saucepan" (Steelpannetje). In Italian, it is called either the "Great Wagon" (Grande Carro) or "Orsa Maggiore" ("Greater Bear"). Romanian and most Slavic languages also call it the "Great Wagon". In Hungarian, it is commonly known as "Göncöl's Wagon" () or, less often, "Big Göncöl" () after a táltos (shaman) in Hungarian mythology who carried medicine that could cure any disease. In Finnish, the figure is known as Otava, a word whose established etymology is the archaic meaning 'salmon net', although other uses of the word refer to 'bear' and 'wheel'. The bear relation is claimed to stem from the animal's resemblance to—and mythical origin from—the asterism rather than vice versa. In Lithuanian, the stars of Ursa Major are known as Didieji Grįžulo Ratai ("The Big Back Wheels"). Other names for the constellation include Perkūno Ratai ("The Wheels of Perkūnas"), Kaušas ("The Bucket"), Vežimas ("The Carriage"), and Samtis ("The Ladle"). In the Sámi languages of Northern Europe, the constellation is identified as the bow of the great hunter Fávdna (the star Arcturus). In the main Sámi language, North Sámi, it is called Fávdnadávgi ("Fávdna's bow") or simply dávggát ("the bow"). The constellation features prominently in the Sámi anthem, which begins with the words Guhkkin davvin dávggáid vuolde sabmá suolggai Sámieanan, which translates to "Far to the north, under the Bow, the Land of the Sámi slowly comes into view." 
The Bow is an important part of the Sámi traditional narrative about the night sky, in which various hunters try to chase down Sarva, the Great Reindeer, a large constellation that takes up almost half the sky. According to the legend, Fávdna stands ready to fire his Bow every night but hesitates because he might hit Stella Polaris, known as Boahji ("the Rivet"), which would cause the sky to collapse and end the world. Asian traditions In Chinese astronomy and Chinese constellation records, the Big Dipper is called "Beidou" (), which literally means Northern Dipper. It refers to an asterism equivalent to the Big Dipper. The Chinese names for Alpha Ursae Majoris are Beidou Yi () and Tianshu (). The asterism name was mentioned in Warring States period (c. 475–221 BCE) stellar records, in which the asterism is described as having seven stars in the shape of a dipper or a chariot. These Chinese astronomy records were transmitted to other East Asian cultures in the Sinosphere. The most prominent names are the "Northern Dipper" () and the "Seven Stars of the Northern Dipper" (). In astrology, these stars are generally considered to compose the Right Wall of the Purple Forbidden Enclosure which surrounds the Northern Celestial Pole, although numerous other groupings and names have been made over the centuries. Similarly, each star has a distinct name, which likewise has varied over time and depending upon the asterism being constructed. The personification of the Big Dipper itself is also known as "Doumu" () in Chinese folk religion and Taoism, and Marici in Buddhism. In Vietnam, the colloquial name for the asterism is Sao Bánh lái lớn (The Big Rudder Stars), contrasted with Ursa Minor, which is known as Sao Bánh lái nhỏ (The Little Rudder Stars). 
Although this name has now been replaced by the Sino-Vietnamese "Bắc Đẩu" in everyday speech, many coastal communities in central and southern Vietnam still refer to the asterism as such and use it to navigate when their fishing vessels return from the sea at night. In Shinto religion, the seven largest stars of Ursa Major belong to Amenominakanushi, the oldest and most powerful of all kami. In Malay, it is known as the "Boat Constellation" (); in Indonesian, as the "Canoe Stars" (Bintang Biduk). In Burmese, these stars are known as Pucwan Tārā (ပုဇွန် တာရာ, pronounced "bazun taja"). Pucwan (ပုဇွန်) is a general term for a crustacean, such as prawn, shrimp, crab, lobster, etc. While its Western name comes from the star pattern's resemblance to a kitchen ladle, in Filipino, the Big Dipper and its sister constellation the Little Dipper are more often associated with the tabo, a one-handled water pot used ubiquitously in Filipino households and bathrooms for purposes of personal hygiene. In the earliest Indian astronomy, the Big Dipper was called "the Bear" (Ṛkṣa, ) in the Rigveda, but was later more commonly known by the name of Saptarishi, "Seven Sages." Inuit traditions In Inuit astronomy, the same grouping of stars is referred to as "the Caribou" (Tukturjuit). Many of the stars within the constellation "were used as hour hands on the night sky to indicate hours of the night, or as calendar stars to help determine the date in fall, winter, or spring." In North America The asterism name "Big Dipper" is mostly used in the United States and Canada. However, the origin of the term is disputed. A popular myth claimed the name originated from African-American folk songs; however, a more recent source challenges the authenticity of the claim. An 1824 book on the history of the constellations' mythology contrasted the "Dipper or Ladle" descriptors used in the United States with "Charles's Wagon or Wain", which were common in England. 
Descriptions of "the dipper" appear in American astronomy textbooks throughout the 19th century. Stars Within Ursa Major the stars of the Big Dipper have Bayer designations in consecutive Greek alphabetical order from the bowl to the handle. In the same line of sight as Mizar, but about one light-year beyond it, is the star Alcor (80 UMa). Together they are known as the "Horse and Rider". At fourth magnitude, Alcor would normally be relatively easy to see with the unaided eye, but its proximity to Mizar renders it more difficult to resolve, and it has served as a traditional test of sight. Mizar itself has four components and thus enjoys the distinction of being part of an optical binary as well as being the first-discovered telescopic binary (1617) and the first-discovered spectroscopic binary (1889). Five of the stars of the Big Dipper are at the core of the Ursa Major Moving Group. The two at the ends, Dubhe and Alkaid, are not part of the swarm, and are moving in the opposite direction. Relative to the central five, they are moving down and to the right in the map. This will slowly change the Dipper's shape, with the bowl opening up and the handle becoming more bent. In 50,000 years the Dipper will no longer exist as we know it, but be re-formed into a new Dipper facing the opposite way. The stars Alkaid to Phecda will then constitute the bowl, while Phecda, Merak, and Dubhe will be the handle. Guidepost Not only are the stars in the Big Dipper easily found themselves, they may also be used as guides to other stars outside of the asterism. Thus it is often the starting point for introducing Northern Hemisphere beginners to the night sky: Polaris, the North Star, is found by imagining a line from Merak (β) to Dubhe (α) and then extending it for five times the distance between the two Pointers. Extending a line from Megrez (δ) to Phecda (γ), on the inside of the bowl, leads to Regulus (α Leonis) and Alphard (α Hydrae). 
A mnemonic for this is "A hole in the bowl will leak on Leo." Extending a line from Phecda (γ) to Megrez (δ) leads to Thuban (α Draconis), which was the pole star 4,000 years ago. Crossing the top of the bowl from Megrez (δ) to Dubhe (α) takes one in the direction of Capella (α Aurigae). A mnemonic for this is "Cap to Capella." Castor (α Geminorum) is reached by imagining a diagonal line from Megrez (δ) to Merak (β) and then extending it for approximately five times that distance. By following the curve of the handle from Alioth (ε) to Mizar (ζ) to Alkaid (η), one reaches Arcturus (α Boötis) and Spica (α Virginis). A mnemonic for this is "Arc to Arcturus then speed (or spike) to Spica." Projecting a line from Alkaid (η) through the pole star will point to Cassiopeia. Additionally, the Dipper may be used as a guide to telescopic objects: The approximate location of the Hubble Deep Field can be found by following a line from Phecda (γ) to Megrez (δ) and continuing on for the same distance again. Crossing the bowl diagonally from Phecda (γ) to Dubhe (α) and proceeding onward for a similar stretch leads to the bright galaxy pair M81 and M82. Two spectacular spiral galaxies flank Alkaid (η), the Pinwheel (M101) to the north and the Whirlpool (M51) to the south. Cultural associations The "Seven Stars" referenced in the Bible's Book of Amos may refer to these stars or, more likely, to the Pleiades. In traditional Hindu astronomy, the seven stars of the Big Dipper are identified with the names of the Saptarshi. In addition, the asterism has also been used in corporate logos and the Alaska flag. The seven stars on a red background of the Flag of the Community of Madrid, Spain, are the stars of the Big Dipper Asterism. The same can be said about the seven stars pictured in the bordure azure of the Coat of arms of Madrid, capital city of Spain. 
The asterism's prominence in the northern night sky produced the adjective "septentrional" (literally, pertaining to seven plow oxen) in Romance languages and English, meaning "Northern [Hemisphere]". "Follow the Drinkin' Gourd" is an African American folk song first published in 1928. The "Drinkin' Gourd" is thought to refer to the Big Dipper. Folklore has it that escaped southern slaves in the United States used the Big Dipper as a point of reference to go north. A mythological origin of the asterism was described in a children's story which circulated in the United States in various versions. A version of this story taken from the pacifist magazine Herald of Peace was translated into Russian and incorporated into Leo Tolstoy's compilation A Calendar of Wisdom. The constellation was also used on the flag of the Italian Regency of Carnaro within the Ouroboros.
Physical sciences
Asterism
Astronomy
3445437
https://en.wikipedia.org/wiki/Zingiberene
Zingiberene
Zingiberene is a monocyclic sesquiterpene that is the predominant constituent of the oil of ginger (Zingiber officinale), from which it gets its name. It can contribute up to 30% of the essential oils in ginger rhizomes. This is the compound that gives ginger its distinct flavoring. Biosynthesis Zingiberene is formed in the isoprenoid pathway from farnesyl pyrophosphate (FPP). FPP undergoes a rearrangement to give nerolidyl diphosphate. After the removal of pyrophosphate, the ring closes, leaving a carbocation on the tertiary carbon attached to the ring. A 1,3-hydride shift then takes place to give a more stable allylic carbocation. The final step in the formation of zingiberene is the removal of the cyclic allylic proton and consequent formation of a double bond. Zingiberene synthase is the enzyme responsible for catalyzing the reaction forming zingiberene as well as other mono- and sesquiterpenes.
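Sesquiterpene hydrocarbons such as zingiberene are built from three isoprene (C5H8) units, giving the molecular formula C15H24. A quick back-of-the-envelope check with standard average atomic masses (a plain-Python sketch, not from the article) reproduces the expected molecular weight of about 204.36 g/mol:

```python
# Average atomic masses in g/mol (standard values).
ATOMIC_MASS = {"C": 12.011, "H": 1.008}

# Zingiberene: a sesquiterpene hydrocarbon, i.e. three isoprene (C5H8) units.
formula = {"C": 15, "H": 24}

mol_weight = sum(ATOMIC_MASS[el] * count for el, count in formula.items())
print(round(mol_weight, 2))  # 204.36
```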
Physical sciences
Terpenes and terpenoids
Chemistry
1202646
https://en.wikipedia.org/wiki/Tear%20gas
Tear gas
Tear gas, also known as a lachrymatory agent or lachrymator (), sometimes colloquially known as "mace" after the early commercial self-defense spray, is a chemical weapon that stimulates the nerves of the lacrimal gland in the eye to produce tears. In addition, it can cause severe eye and respiratory pain, skin irritation, bleeding, and blindness. Common lachrymators both currently and formerly used as tear gas include pepper spray (OC gas), PAVA spray (nonivamide), CS gas, CR gas, CN gas (phenacyl chloride), bromoacetone, xylyl bromide, chloropicrin (PS gas) and Mace (a branded mixture). While lachrymatory agents are commonly deployed for riot control by law enforcement and military personnel, their use in warfare is prohibited by various international treaties. During World War I, increasingly toxic and deadly lachrymatory agents were used. The short- and long-term effects of tear gas are not well studied. The published peer-reviewed literature consists of lower-quality evidence that does not establish causality. Exposure to tear gas agents may produce numerous short-term and long-term health effects, including development of respiratory illnesses, severe eye injuries and diseases (such as traumatic optic neuropathy, keratitis, glaucoma, and cataracts), dermatitis, damage to the cardiovascular and gastrointestinal systems, and death, especially in cases with exposure to high concentrations of tear gas or application of the tear gases in enclosed spaces. Effects Tear gas generally consists of aerosolized solid or liquid compounds (bromoacetone or xylyl bromide), not gas. Tear gas works by irritating mucous membranes in the eyes, nose, mouth and lungs. It causes crying, sneezing, coughing, difficulty breathing, pain in the eyes, and temporary blindness. With CS gas, symptoms of irritation typically appear after 20 to 60 seconds of exposure and commonly resolve within 30 minutes of leaving (or being removed from) the area. 
Risks As with all non-lethal or less-lethal weapons, there is a risk of serious permanent injury or death when tear gas is used. This includes risks from being hit by tear gas cartridges that may cause severe bruising, loss of eyesight, or skull fracture, resulting in immediate death. A case of serious vascular injury from tear gas shells has also been reported from Iran, with high rates of associated nerve injury (44%) and amputation (17%), as well as instances of head injuries in young people. Novel findings suggest that menstrual changes are one of the most commonly reported health issues in women. While the medical consequences of the gases themselves are typically limited to minor skin inflammation, delayed complications are also possible. People with pre-existing respiratory conditions such as asthma are particularly at risk. They are likely to need medical attention and may sometimes require hospitalization or even ventilation support. Skin exposure to CS may cause chemical burns or induce allergic contact dermatitis. When people are hit at close range or are severely exposed, eye injuries involving scarring of the cornea can lead to a permanent loss in visual acuity. Frequent or high levels of exposure carry increased risks of respiratory illness. Venezuelan chemist Mónica Kräuter studied thousands of tear gas canisters fired by Venezuelan authorities since 2014. She concluded that the majority of canisters used the main component CS gas, but that 72% of the tear gas used was expired. She noted that expired tear gas "breaks down into cyanide oxide, phosgenes and nitrogens that are extremely dangerous". In the 2019–20 Chilean protests various people have had complete and permanent loss of vision in one or both eyes as result of the impact of tear gas grenades. 
The majority (2116; 93.8%) of protestors who reported exposure to tear gas during the 2020 protests in Portland, Oregon (USA) reported physical (2114; 93.7%) or psychological (1635; 72.4%) health issues experienced immediately after (2105; 93.3%) or days following (1944; 86.1%) the exposure. The majority (1233; 54.6%) of respondents who reported exposure to tear gas during the 2020 protests in Portland, Oregon (US) have also reported receiving or planning to seek medical or mental healthcare for their tear gas-related health issues. Health issues associated with exposure to tear gas have been shown to often require medical attention. Site of action TRPA1 ion channels expressed on nociceptors have been implicated as the site of action for CS gas, CR gas, CN gas (phenacyl chloride), chloropicrin and bromoacetone in rodent models. Use Warfare During World War I, various forms of tear gas were used in combat, and tear gas was the most common form of chemical weapon used. None of the belligerents believed that the use of irritant gases violated the Hague Convention of 1899, which prohibited the use of "poison or poisoned weapons" in warfare. Use of chemical weapons escalated during the war to lethal gases after 1914, a year in which only tear gas was used. The US Chemical Warfare Service developed tear gas grenades for use in riot control in 1919. Use of tear gas in interstate warfare, as with all other chemical weapons, was prohibited by the Geneva Protocol of 1925, a treaty that most states have signed: it prohibited the use of "asphyxiating gas, or any other kind of gas, liquids, substances or similar materials". Police and civilian self-defense use is not banned in the same manner. Tear gas was used in combat by Italy in the Second Italo-Ethiopian War, by Japan in the Second Sino-Japanese War, by Spain in the Rif War, by the United States in the Vietnam War, and by Israel in the Israel–Palestine conflict. 
Tear gas exposure is an element of military training programs, typically as a means of improving trainees' tolerance to tear gas and encouraging confidence in the ability of their issued protective equipment to prevent chemical weapons exposure. Riot control Certain lachrymatory agents, most notably tear gas, are often used by police to force compliance. In some countries (e.g., Finland, Australia, and the United States), another common substance is mace. The self-defense weapon form of mace is based on pepper spray, which comes in small spray cans. Versions including CS are manufactured for police use. Xylyl bromide, CN and CS are the oldest of these agents. CS is the most widely used. CN has the most recorded toxicity. Typical manufacturer warnings on tear gas cartridges state "Danger: Do not fire directly at person(s). Severe injury or death may result." Tear gas guns do not have a manual setting to adjust the range of fire. The only way to adjust the projectile's range is to aim towards the ground at the correct angle. Incorrect aim will send the capsules away from the targets, putting non-targets at risk instead. 
During the 2019 Hong Kong protests, frontline protesters became adept at extinguishing tear gas: they formed special teams that sprang into action as soon as it was fired. These individuals generally wore protective clothing, including heat-proof gloves, or covered their arms and legs with cling film to prevent the painful skin irritation. Canisters were sometimes picked up and lobbed back at police or extinguished straight away with water, or neutralized using objects such as traffic cones. They shared information about models of 3M respirator filters which had been found to be most effective against tear gas, and where those models could be purchased. Other volunteers carried saline solutions to rinse the eyes of those affected. Similarly, Chilean protesters of Primera Línea had specialized individuals collecting and extinguishing the tear gas grenades. Others acted as tear gas medics, and another group, the "shield-bearers," protected the protesters from the direct physical impact of the grenades. Treatment There is no specific antidote to common tear gases. At the first sign of exposure or potential exposure, masks are applied when available. People are removed from the affected area when possible. Immediate removal of contact lenses has also been recommended, as they can retain particles. Decontamination is by physical or mechanical removal (brushing, washing, rinsing) of solid or liquid agents. Water may transiently exacerbate the pain caused by CS gas and pepper spray but is still effective, although fat-containing oils or soaps may be more effective against pepper spray. Eyes are decontaminated by copious flushing with sterile water or saline or (with OC) open-eye exposure to wind from a fan. Referral to an ophthalmologist is needed if slit-lamp examination shows impaction of solid particles of agent. Blowing the nose to get rid of the chemicals is recommended, as is avoiding rubbing of the eyes. 
There are reports that water may increase pain from CS gas, but the balance of limited evidence currently suggests water or saline are the best options. Some evidence suggests that Diphoterine, a hypertonic amphoteric salt solution and a first aid product for chemical splashes, may help with ocular burns or chemicals in the eye. Bathing and washing the body vigorously with soap and water can remove particles that adhere to the skin. Clothes, shoes and accessories that come into contact with vapors must be washed well, since all untreated particles can remain active for up to a week. Some advocate using fans or hair dryers to evaporate the spray, but this has not been shown to be better than washing out the eyes, and it may spread contamination. Anticholinergics can work like some antihistamines as they reduce lacrymation and decrease salivation, acting as an antisialagogue, and can help with overall nose discomfort, as they are used to treat allergic reactions in the nose (e.g., itching, runny nose, and sneezing). Oral analgesics may help relieve eye pain. Most effects resulting from riot-control agents are transient and do not require treatment beyond decontamination, and most patients do not need observation beyond 4 hours. However, patients should be instructed to return if they develop effects such as blistering or delayed-onset shortness of breath. Home remedies Vinegar, petroleum jelly, milk and lemon juice solutions have also been used by activists. It is unclear how effective these remedies are. In particular, vinegar itself can burn the eyes, and prolonged inhalation can also irritate the airways. Vegetable oil and vinegar have been reported as helping relieve burning caused by pepper spray. Kräuter suggests the use of baking soda or toothpaste, stating that they trap the gas particles near the airways before they can be inhaled. A small trial of baby shampoo for washing out the eyes did not show any benefit.
Technology
Less-lethal weapons
null
1202733
https://en.wikipedia.org/wiki/Sea%20lamprey
Sea lamprey
The sea lamprey (Petromyzon marinus) is a parasitic lamprey native to the Northern Hemisphere. It is sometimes referred to as the "vampire fish". In its original habitats, the sea lamprey coevolved with its hosts, and those hosts evolved a measure of resistance to the sea lampreys. It was likely introduced to the Great Lakes region through the Erie Canal in 1825 and the Welland Canal in 1919, where it has attacked native fish such as lake trout, lake whitefish, chub, and lake herring. Sea lampreys are considered a pest in the Great Lakes region, as each individual has the potential of killing 40 pounds of fish through its 12–18 month feeding period. Description The sea lamprey has an eel-like body without paired fins. Its mouth is jawless, round and sucker-like, and as wide or wider than the head; sharp teeth are arranged in many concentric circular rows around a sharp, rasp-like tongue. There are seven branchial or gill-like openings behind the eye. Sea lampreys are olive or brown-yellow on the dorsal and lateral part of the body, with some black marblings, with lighter coloration on the belly. Within their seven-year lifespans, adults can reach a length of up to and a body weight of up to . Etymology The etymology of the genus name Petromyzon is from petro- "stone" and myzon "sucking"; marinus is Latin for "of the sea". Distribution and habitat The species is found in the northern and western Atlantic Ocean along the shores of Europe and North America, in the western Mediterranean Sea, the Black Sea, and as an invasive species in the Great Lakes. They have been found at depths up to 4000 meters and can tolerate temperatures of . In North America, they are native to the Connecticut River basin in the United States, and invasive to the inland Great Lakes and Lake Champlain in New York and Vermont. 
The largest European populations of sea lampreys are located throughout the southwestern areas of Europe (north-central Portugal, north-northwest of Spain, and west–southwest of France). These countries also support the main fisheries of the species. Ecology Sea lampreys are anadromous; from their lake or sea habitats, they migrate up rivers to spawn. Females deposit a large number of eggs in nests made by males in the substrate of streams with moderately strong current. Spawning is followed by the death of the adults. Larvae burrow in the sand and silt bottom in quiet water downstream from spawning areas and filter-feed on plankton and detritus. After several years in freshwater habitats, the larvae undergo a metamorphosis that allows young, post-metamorphic lampreys to migrate to the sea or lakes, and start the adult hematophagous method of feeding. Some individuals start hematophagous feeding in the river before migrating to the sea, where sea lampreys prey on a wide variety of fish. The lamprey uses its suction cup-like mouth to attach itself to the skin of a fish and rasps away tissue with its sharp, probing tongue and keratinized teeth. A fluid produced in the lamprey's mouth, called lamphredin, prevents the victim's blood from clotting. Victims typically die from excessive blood loss or infection. After one year of hematophagous feeding, lampreys return to the river to spawn and die, a year and a half after the completion of metamorphosis. Lampreys are considered a delicacy in some parts of Europe, and are seasonally available in France, Spain, and Portugal. They are served pickled in Finland. Although Latvia is mostly known for cooked or grilled river lamprey, sea lampreys are occasionally caught in Latvian rivers as well, together with river lampreys. Physiology Because its lifecycle switches between fresh and salt water, the sea lamprey is adapted to tolerate a wide range of salinities. 
Cell membranes on the surface of the gills are major contributors to ionoregulation. Changes in membrane composition influence the movement of different ions across the membrane, changing amounts of components to change the membranes' environment. In some instances, the sea lamprey has adapted to living exclusively in fresh water, as evidenced by the population in the Great Lakes. As the larvae (called ammocoetes) move towards the oceans, the ratio between saturated fatty acids (SFA) and polyunsaturated fatty acids (PUFA) in the gills shifts towards higher amounts of SFA, as they affect the fluidity of the membrane, and higher levels of SFA lead to a decrease in permeability compared to PUFA. Lamprey ammocoetes have a relatively narrow range of salinity tolerance, but become better able to withstand wider ranges of salinity concentrations as they reach later stages of life. Tight regulation of Na/K-ATPase and an overall decrease in expression of H-ATPase assists in regulating the lamprey's internal fluid and ion balance as it moves to areas of higher salinity. Lampreys also maintain acid-base homeostasis. When introduced to higher levels of acids, they are able to excrete excess acids at higher rates than most other saltwater fishes, and in much shorter times, with the majority of the transfer of ions occurring at the gill surface. Sea lampreys parasitize other fishes for their diet, including elasmobranchs such as sharks and rays, which have naturally high levels of urea in their blood. Urea is toxic to most fishes in high concentrations, and is usually excreted immediately. Lampreys are able to tolerate much higher concentrations than most other fish and excrete it at extremely high rates, obtained from ingested blood. Trimethylamine oxides present in ingested elasmobranch blood aid in counteracting the detrimental effects of high urea concentration in the lamprey's bloodstream as it feeds. 
Immunology Two presumptive apolipoprotein B mRNA editing enzyme, catalytic polypeptide-like (APOBEC)s expressed in lymphocytes—CDA1 and CDA2—have been discovered in P. marinus. Genetics The genome of Petromyzon marinus was sequenced in 2013. This sequencing effort revealed that the lamprey has unusual guanine-cytosine content and amino acid usage patterns compared to other vertebrates. The full sequence and annotation of the lamprey genome is available on the Ensembl genome browser. The lamprey genome may serve as a model for developmental biology and evolution studies involving transposition of repetitive sequences. The lamprey genome undergoes drastic rearrangements during early embryogenesis, in which about 20% of the germline DNA is shed from somatic tissues. The genome is highly repetitive. About 35% of the current genome assembly is composed of repetitive elements with high sequence identity. Northern lampreys have the highest number of chromosomes (164–174) among vertebrates. Two genes important to immune function—CDA1 and CDA2—were first discovered in P. marinus and then found to be conserved across lampreys. See §Immunology above. Invasive species Sea lampreys are considered a pest in the Great Lakes region. Whether it is native to Lake Ontario, where it was first noticed in the 1830s, or whether it was introduced through the Erie Canal, which opened in 1825, was not clear as of 2007. The species was at first confined to Lake Ontario by the natural barrier formed by Niagara Falls. However, after the Welland Canal was built in the late 1800s–early 1900s, sea lampreys were able to bypass Niagara Falls and invade the remaining Great Lakes: Lakes Erie (1921), Michigan (1936), Huron (1937), and Superior (1938), where they decimated indigenous fish populations in the 1930s and 1940s. In its original habitats, the sea lamprey coevolved with its hosts, and those hosts evolved a measure of resistance to the sea lampreys. 
However, in the Great Lakes, the sea lamprey attacks native fish such as lake trout, lake whitefish, chub, and lake herring, which historically did not face sea lampreys. Elimination of these predators allowed the alewife, another invasive species, to explode in population, with adverse effects on many native fish species. The lake trout plays a vital role in the Lake Superior ecosystem; it has traditionally been considered an apex predator, meaning that it has no predators of its own. The sea lamprey is an aggressive predator by nature, which gives it a competitive advantage in a lake system where it has no predators and its prey lacks defenses against it. The sea lamprey played a large role in the destruction of the Lake Superior trout population: lamprey introduction, along with poor, unsustainable fishing practices, caused lake trout populations to decline drastically, and the relationship between predators and prey in the Great Lakes ecosystem became unbalanced. Each individual sea lamprey has the potential to kill 40 pounds of fish over its 12–18 month feeding period.
Efforts at control
Control efforts, including electric current and chemical lampricides, have met with varied success. The control programs are carried out under the Great Lakes Fishery Commission, a joint Canada–U.S. body, specifically by agents of Fisheries and Oceans Canada and the United States Fish and Wildlife Service. Genetic researchers have mapped the sea lamprey's genome in the hope of finding out more about its evolution; scientists trying to eliminate the Great Lakes problem are coordinating with these geneticists, hoping to learn more about the lamprey's immune system and its place in the phylogenetic tree. Researchers from Michigan State University have teamed up with others from the Universities of Minnesota, Guelph, and Wisconsin, among others, in a research effort into newly synthesized pheromones.
These pheromones are believed to have independent influences on sea lamprey behavior. One group of pheromones serves a migratory function: when they are produced by larvae, they are thought to lure maturing adults into streams with suitable spawning habitat. Sex pheromones emitted by males are capable of luring females long distances to specific locations. Both kinds of pheromones are composed of several different compounds thought to elicit different behaviors that collectively influence the lampreys to exhibit migratory or spawning behaviors. Scientists are trying to characterize the function of each pheromone, and each part of the molecules, to determine whether they can be used in a targeted, environmentally friendly effort at lamprey control. However, as of 2017, the most effective control measures still involve the application of 3-trifluoromethyl-4-nitrophenol (TFM), a selective pesticide, into rivers. No lampricide resistance has been detected in the Great Lakes, but further research and the combined use of multiple control methods are needed to forestall future development of resistance. Another technique used to limit lamprey population growth is the placement of barriers in major reproduction streams of high value to the lamprey. The purpose of the barriers is to block upstream migration and so reduce reproduction. The issue with these barriers is that other aquatic species are also inhibited by them: fish that use the tributaries are impeded from traveling upstream to spawn. To account for this, barriers have been altered and designed to allow the passage of most fish species while still impeding others.
Restoration
The intent of lamprey control programs is a safer habitat and healthier population growth for vulnerable native fish species such as lake trout.
The Connecticut Department of Energy and Environmental Protection (DEEP) has taken a different path to this same goal by introducing sea lampreys to freshwater rivers and lakes of the Connecticut River watershed, and providing easier access around dams and other barriers for the lampreys to reach spawning sites high upstream. After preying on larger fish at sea, the adult lampreys migrate up the rivers to spawn, whereupon they quickly die of natural causes and decompose, thus providing a food source for the native freshwater fish species.
https://en.wikipedia.org/wiki/Atlantic%20salmon
Atlantic salmon
The Atlantic salmon (Salmo salar) is a species of ray-finned fish in the family Salmonidae. It is the third largest of the Salmonidae, behind Siberian taimen and Pacific Chinook salmon, growing up to a meter in length. Atlantic salmon are found in the northern Atlantic Ocean and in rivers that flow into it. Most populations are anadromous, hatching in streams and rivers but moving out to sea to mature as they grow, after which the adults seasonally move upstream again to spawn. When the mature fish re-enter rivers to spawn, they change in colour and appearance. Some populations of this fish only migrate to large lakes and are "landlocked", spending their entire lives in freshwater; such populations are found throughout the range of the species. Unlike Pacific species of salmon, S. salar is iteroparous, meaning it can survive spawning and return to sea to repeat the process in another year, with 5–10% returning to the sea to spawn again. Such individuals can grow to extremely large sizes, although they are rare. The different life stages of the fish are known by many different names in English: alevin, fry, parr and smolt. Atlantic salmon is considered a very healthy food, and in many cultures one of the more refined-tasting fish. As such it features in numerous popular traditional cuisines and can fetch a higher price than some other fish. It has thus long been the target of recreational and commercial fishing, and this, as well as habitat destruction, has reduced the population in some areas. As a result, the species is the subject of conservation efforts in several countries, which appear to have been somewhat successful since the 2000s. Techniques to farm this species using aquacultural methods have also been developed, and at present it is farmed in great numbers in many places around the world. Although farming is now a viable alternative to wild-caught fish, the farming methods used have attracted criticism from environmentalists.
Nomenclature
The Atlantic salmon was given its scientific binomial name by Swedish zoologist and taxonomist Carl Linnaeus in 1758. The name, Salmo salar, derives from the Latin salmo, meaning salmon, and salar, meaning leaper, according to M. Barton, but more likely meaning "resident of salt water". Lewis and Short's Latin Dictionary (Clarendon Press, Oxford, 1879) translates salar as a kind of trout from its use in the Idylls of the poet Ausonius (4th century CE). Later, the differently coloured smolts were found to be the same species. Other names used for the Atlantic salmon are: bay salmon, black salmon, caplin-scull salmon, fiddler, sebago salmon, silver salmon, outside salmon and winnish. At different points in their maturation and life cycle, they are known as parr, smolt, grilse, grilt, kelt, slink, and spring salmon. Atlantic salmon that do not journey to sea are known as landlocked salmon (or ouananiche in North America).
Description
Atlantic salmon are the largest species in their genus, Salmo. After two years at sea, the fish average in length and in weight. But specimens that spend four or more winters feeding at sea can be much larger. An Atlantic salmon netted in 1960 in Scotland, in the estuary of the river Hope, weighed , the heaviest recorded in all available literature. Another netted in 1925 in Norway measured in length, the longest Atlantic salmon on record. The colouration of young Atlantic salmon does not resemble the adult stage. While they live in fresh water, they have blue and red spots. At maturity, they take on a silver-blue sheen. The easiest way of identifying them as adults is by the black spots predominantly above the lateral line, though the caudal fin is usually unspotted. When they reproduce, males take on a slight green or red colouration. The salmon has a fusiform body and well-developed teeth. All fins, except the adipose fin, are bordered with black.
Distribution and habitat
The natural breeding grounds of Atlantic salmon are rivers in Europe and the northeastern coast of North America. In Europe, Atlantic salmon are still found as far south as Spain, and as far north as Russia. Because of sport-fishing, some of the species' southern populations in northern Spain are growing smaller. The species' distribution is easily influenced by changes in freshwater habitat and climate. Atlantic salmon are a cold-water fish species and are particularly sensitive to changes in water temperature. The Housatonic River, and its Naugatuck River tributary, hosted the southernmost Atlantic salmon spawning runs in the United States. However, there is a 1609 account by Henry Hudson that Atlantic salmon once ran up the Hudson River. In addition, fish scale evidence dating to 10,000 years BP places Atlantic salmon in a coastal New Jersey pond. Two publications from 1988 and 1996 questioned the notion that Atlantic salmon were prehistorically plentiful in New England, when the climate was warmer than it is now. This argument was primarily based on a paucity of bone data in archaeological sites relative to other fish species, and the assertion that historical claims of abundance may have been exaggerated. The argument was later challenged in another paper, which claimed that the lack of archaeological bone fragments could be explained by salmon bones being rare even at sites that still have large salmon runs, and by salmonid bones in general being poorly recovered relative to other fish species. Atlantic salmon populations were significantly reduced in the United States following European settlement. The fur trade, timber harvesting, dams and mills, and agriculture degraded freshwater habitats and lowered the carrying capacity of most North American streams. Beaver populations were trapped to near-extinction by 1800, and log drives and clear-cutting further exacerbated stream erosion and habitat loss.
As timber and fur gave way to agriculture, freshwater Atlantic salmon habitat was further compromised. According to historian D.W. Dunfield (1985), "over half of the historical Atlantic salmon runs had been lost in North America by 1850". As early as 1798, a bill for the preservation of Atlantic salmon was introduced in the Canadian Parliament to protect populations in Lake Ontario. In the Gulf Region of Nova Scotia, it was reported that 31 of the 33 Atlantic salmon streams were blocked off by lumber dams, leading to the extirpation of early-run fish in many watersheds. The inshore Atlantic salmon fishery became a major export of the New World, with major fishing operations establishing along the shores of major river systems. The southernmost populations were the first to disappear. Young salmon spend one to four years in their natal river. When they are large enough (c. ), they smoltify, changing camouflage from stream-adapted, with large, gray spots, to sea-adapted, with shiny sides. They also undergo endocrinological changes to adapt to the osmotic differences between freshwater and seawater habitats. When smoltification is complete, the parr (young fish) begin to swim with the current instead of against it. With this behavioral change, the fish are referred to as smolt. When the smolt reach the sea, they follow sea surface currents and feed on plankton or fry from other fish species such as herring. During their time at sea, they can sense the change in the Earth's magnetic field through iron in their lateral line. When they have had a year of good growth, they will move to the sea surface currents that transport them back to their natal river. It is a major misconception that salmon swim thousands of kilometres at sea; instead they surf through sea surface currents. It is possible they find their natal river by smell, although this is not confirmed; only 5% of Atlantic salmon go up the wrong river.
The range of an individual Atlantic salmon can thus be the river where it is born and the sea surface currents that are connected to that river in a circular path. Wild salmon continued to disappear from many rivers during the twentieth century due to overfishing and habitat change.
Ecology
Diet
Young salmon begin a feeding response within a few days. After the yolk sac is absorbed by the body, they begin to hunt. Juveniles start with tiny invertebrates, but as they mature, they may occasionally eat small fish. During this time, they hunt both in the substrate and in the current. Some have been known to eat salmon eggs. Plankton such as euphausiids are important food for pre-grilse, but amphipods and decapods are also consumed. The most commonly eaten foods include caddisflies, blackflies, mayflies, stoneflies, and chironomids, as well as terrestrial insects. As adults, the salmon prefer capelin as their meal of choice. Capelin are elongated silvery fish that grow up to long. Other fish consumed include herring, alewives, smelts, scomberids, sand lance, and small cod.
Behavior
Fry and parr have been said to be territorial, but evidence showing them to guard territories is inconclusive. While they may occasionally be aggressive towards each other, the social hierarchy is still unclear. Many have been found to school, especially when leaving the estuary. Adult Atlantic salmon are considered much more aggressive than other salmon, and are more likely to attack other fish than other salmon are.
Life stages
Most Atlantic salmon follow an anadromous migration pattern, in that they undergo their greatest feeding and growth in saltwater; however, adults return to spawn in native freshwater streams, where the eggs hatch and juveniles grow through several distinct stages. Atlantic salmon do not require saltwater.
Numerous examples of fully freshwater (i.e., "landlocked") populations of the species exist throughout the Northern Hemisphere, including a now extinct population in Lake Ontario, which has been shown in recent studies to have spent its entire life cycle in the watershed of the lake. In North America, the landlocked strains are frequently known as ouananiche.
Freshwater phase
The freshwater phases of Atlantic salmon vary between two and eight years, according to river location. While the young in southern rivers, such as those flowing into the English Channel, are only one year old when they leave, those further north, such as in Scottish rivers, can be over four years old, and in Ungava Bay, northern Quebec, smolts as old as eight years have been encountered. The first phase is the alevin stage, when the fish stay in the breeding ground and use the remaining nutrients in their yolk sacs. During this developmental stage, their young gills develop and they become active hunters. Next is the fry stage, where the fish grow and subsequently leave the breeding ground in search of food. During this time, they move to areas with higher prey concentration. The final freshwater stage is when they develop into parr, in which they prepare for the trek to the Atlantic Ocean. During these times, the Atlantic salmon are very susceptible to predation. Nearly 40% are eaten by trout alone. Other predators include other fish and birds. Egg and juvenile survival is dependent on habitat quality, as Atlantic salmon are sensitive to ecological change.
Saltwater phases
When parr develop into smolt, they begin the trip to the ocean, which predominantly happens between March and June. Migration allows acclimation to the changing salinity. Once ready, young smolt leave, preferring an ebb tide. Having left their natal streams, they experience a period of rapid growth during the one to four years they live in the ocean.
Typically, Atlantic salmon migrate from their home streams to an area on the continental plate off West Greenland. During this time, they face predation from humans, seals, Greenland sharks, skate, cod, and halibut. Some dolphins have been noticed playing with dead salmon, but it is still unclear whether they consume them. Once large enough, Atlantic salmon change into the grilse phase, when they become ready to return to the same freshwater tributary they departed from as smolts. After returning to their natal streams, the salmon will cease eating altogether prior to spawning. Although largely unknown, odor – the exact chemical signature of that stream – may play an important role in how salmon return to the area where they hatched. Once heavier than about 250 g, the fish no longer become prey for birds and many fish, although seals do prey upon them. Grey and common seals commonly eat Atlantic salmon. Survivability to this stage has been estimated at between 14 and 53%.
Breeding
Atlantic salmon breed in the rivers of Western Europe from northern Portugal north to Norway, Iceland, and Greenland, and on the east coast of North America from Connecticut in the United States north to northern Labrador and Arctic Canada. The species constructs a nest or "redd" in the gravel bed of a stream. The female creates a powerful downdraught of water with her tail near the gravel to excavate a depression. After she and a male fish have released eggs and milt (sperm), respectively, upstream of the depression, the female again uses her tail, this time to shift gravel to cover the eggs and milt which have lodged in the depression. Unlike the various Pacific salmon species, which die after spawning (semelparous), the Atlantic salmon is iteroparous, which means the fish may recondition themselves and return to the sea to repeat the migration and spawning pattern several times, although most spawn only once or twice.
Migration and spawning exact an enormous physiological toll on individuals, such that repeat spawners are the exception rather than the norm. Atlantic salmon show high diversity in age of maturity and may mature as parr, as one- to five-sea-winter fish, and in rare instances, at older sea ages. This variety of ages can occur in the same population, constituting a 'bet hedging' strategy against variation in stream flows: in a drought year, some fish of a given age will not return to spawn, allowing that generation other, wetter years in which to spawn.
Hybridization
When in shared breeding habitats, Atlantic salmon will hybridize with brown trout (Salmo trutta). Hybrids between Atlantic salmon and brown trout were detected in two of four watersheds studied in northern Spain. The proportions of hybrids in samples of salmon ranged from 0 to 7.7%, but these proportions were not homogeneous among locations, resulting in a mean hybridization rate of 2.3%. This is the highest rate of natural hybridization so far reported and is significantly greater than rates observed elsewhere in Europe.
Beaver impact
The decline in anadromous salmonid species over the last two to three centuries is correlated with the decline in the North American beaver and European beaver, although some fish and game departments continue to advocate removal of beaver dams as potential barriers to spawning runs. Migration of adult Atlantic salmon may be limited by beaver dams during periods of low stream flows, but the presence of juvenile salmon upstream of the dams suggests the dams are penetrated by parr. Downstream migration of Atlantic salmon smolts was similarly unaffected by beaver dams, even in periods of low flows. In a 2003 study, Atlantic salmon and sea-run brown trout spawning in the Numedalslågen River and 51 of its tributaries in southeastern Norway was unhindered by beavers.
In a restored, third-order stream in northern Nova Scotia, beaver dams generally posed no barrier to Atlantic salmon migration, except in the smallest upstream reaches in years of low flow, where pools were not deep enough to enable the fish to leap the dam and no column of water over-topped the dam for the fish to swim up. The winter habitat that beaver ponds afford salmonids may be especially important in streams of northerly latitudes that lack deep pools, where ice cover makes contact with the bottom of shallow streams. In addition, the up to eight-year-long residence time of juveniles in freshwater may make beaver-created permanent summer pools a crucial success factor for Atlantic salmon populations. In fact, two-year-old Atlantic salmon parr in beaver ponds in eastern Canada showed faster summer growth in length and mass, and were in better condition, than parr upstream or downstream from the pond.
Relationship to humans
Atlantic salmon is a popular fish for human consumption and is commonly sold fresh, canned, or frozen. Wood and stone weirs along streams and ponds were used for millennia to harvest salmon in the rivers of New England. European fishermen gillnetted for Atlantic salmon in rivers using hand-made nets for many centuries, and gillnetting was also used in early colonial America. In its natal streams, Atlantic salmon are considered prized recreational fish, pursued by fly anglers during their annual runs. At one time, the species supported an important commercial fishery, but having become endangered throughout its range globally, wild-caught Atlantic salmon are now virtually absent from the market. Instead, nearly all are from aquaculture farms, predominantly in Norway, Chile, Canada, the UK, Ireland, the Faroe Islands, Russia and Tasmania in Australia.
Aquaculture
Adult male and female fish are anaesthetised; their eggs and sperm are "stripped" after the fish are cleaned and cloth dried.
Sperm and eggs are mixed, washed, and placed into freshwater. Adults recover in flowing, clean, well-aerated water. Some researchers have even studied cryopreservation of their eggs. Fry are generally reared in large freshwater tanks for 12 to 20 months. Once the fish have reached the smolt phase, they are taken out to sea, where they are held for up to two years. During this time, the fish grow and mature in large cages off the coasts of Canada, the US, or parts of Europe. There are many different commercially available cage designs built to operate in a wide variety of aquatic conditions. High-density polyethylene (HDPE) cages are widely used, with HDPE pipes forming a floating collar ring onto which the fish net pen is secured and suspended in the water below. Advancements in cage technologies have allowed for reduction in fish escapes, improvement in growing conditions, and maximization of aquaculture production volume per unit area of growing space.
Controversy
Farmed Atlantic salmon are known to occasionally escape from cages and enter the habitat of wild populations. Interbreeding between escaped farm fish and wild fish decreases genetic diversity and introduces "the potential to genetically alter native populations, reduce local adaptation and negatively affect population viability and character". A study in 2000 demonstrated that the genes of farmed Atlantic salmon intrude into wild populations mainly through wild males breeding with farmed females, though farmed specimens showed reduced capacity for breeding success overall compared to their wild counterparts. A further study in 2018 discovered extensive cross-breeding of wild and farmed Atlantic salmon in the Northwest Atlantic, showing that 27.1% of fish in 17 out of 18 rivers examined were artificially stocked or hybrids. Farming of Atlantic salmon in open cages at sea has also been linked, at least in part, to a decline in wild stocks attributed to the passing of parasites from farmed to wild individuals.
On the west coast of the United States and Canada, aquaculturists are generally under scrutiny to ensure that non-native Atlantic salmon cannot escape from their open-net pens; however, occasional incidents of escape have been documented. During one incident in 2017, for example, up to 300,000 potentially invasive Atlantic salmon escaped a farm among the San Juan Islands in Puget Sound, Washington. Washington went on in 2019 to implement a gradual phase-out of salmon farming, to be completed by 2025. Despite being the source of considerable controversy, the likelihood of escaped Atlantic salmon establishing an invasive presence in the Pacific Northwest is considered minimal, largely because a number of 20th-century efforts aimed at deliberately introducing them to the region were ultimately unsuccessful. From 1905 until 1935, for example, in excess of 8.6 million Atlantic salmon of various life stages (predominantly advanced fry) were intentionally introduced to more than 60 individual British Columbia lakes and streams. Historical records indicate that, in a few instances, mature sea-run Atlantic salmon were captured in the Cowichan River; however, a self-sustaining population never materialized. Similarly unsuccessful results were realized after deliberate attempts at introduction by Washington as late as the 1980s. Consequently, environmental assessments by the US National Marine Fisheries Service (NMFS), the Washington Department of Fish and Wildlife and the BC Environmental Assessment Office have concluded the potential risk of Atlantic salmon colonization in the Pacific Northwest is low.
Future prospects
A study by Næve et al. (2022) estimated the impact of 50 years of genetic selection and tried to predict the impact it could have until 2050. In order to do this, a common garden experiment was used to model and simulate past and future effects of 11 generations of genetic selection for increased growth rate in Atlantic salmon.
To model the contribution that breeding has made in the industry from generation 0 (harvested in 1975–1978) to generation 11 (harvested in 2017–2019), and to simulate growth until 2050 (generation 24), the Norwegian salmon aquaculture production between 2016 and 2019 was used as a base case. The simulation of the expected growth until 2050 (generation 24) gave five different scenarios: Historical (H1), Forecast 1 (F1), Forecast 2 (F2), Forecast 3 (F3) and Forecast 4 (F4). Changes in thermal growth coefficient (TGC) per generation were used in the model to simulate the differences in the five scenarios. The genetic data, H1, and the most conservative forecast scenario, F1, simulate what can be expected in 2050 if the trend from generation 0 through 11 is maintained. The following forecast scenarios assume a greater increase in genetic growth, with a larger increase in the TGC in the generations to come. In the last two generations, more advanced selection methods such as marker-assisted selection (from generation 10) and genomic selection (from generation 11) were implemented; the resulting increased gain in selection for growth underlies the simulated scenarios F2 and F3. The most progressive scenario, F4, aimed at exploring the effect in the industry when the full genetic potential is utilized; this assumes further development of advanced techniques in the years to come. The authors of the article found that the daily yield of the biomass increased with increasing generations in the historic and forecast scenarios. Further, the production time in seawater to reach the harvest weight of 5100 g is expected to be reduced by 53% in 2050. When production time can be reduced, this will also reduce, for example, time at risk of diseases. In the most progressive scenario, mortality in seawater was expected to be reduced by up to 50%. Further, the authors found that production per license can increase by up to 121%.
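The standard cube-root TGC growth model used throughout salmon aquaculture can illustrate why a higher coefficient shortens seawater production time. The sketch below is illustrative only: the TGC values, water temperature, and stocking weight are assumptions for demonstration, not figures from the Næve et al. study.

```python
def days_to_harvest(w0_g: float, w_target_g: float, tgc: float, temp_c: float) -> float:
    """Days of growth at a constant water temperature needed to reach a
    target weight under the cube-root TGC model:
        W_t^(1/3) = W_0^(1/3) + (TGC / 1000) * T * t
    where W is body weight in grams, T is temperature in degrees C, and
    t is time in days.
    """
    return (w_target_g ** (1 / 3) - w0_g ** (1 / 3)) * 1000 / (tgc * temp_c)

# Illustrative comparison: a strain with a higher TGC (faster-growing,
# e.g. after more generations of selection) reaches the 5100 g harvest
# weight sooner. TGC values of 2.5 and 3.5 at 10 degrees C are assumed.
baseline = days_to_harvest(100, 5100, tgc=2.5, temp_c=10)
selected = days_to_harvest(100, 5100, tgc=3.5, temp_c=10)
print(round(baseline), round(selected))  # higher TGC gives fewer days at sea
```

Under these assumed numbers, raising the TGC from 2.5 to 3.5 cuts the modelled seawater phase by roughly 30%, which is the same mechanism the study's scenarios use to project reduced production time and, with it, reduced time at risk of disease.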
Additionally, 77% of the new volume needed to achieve five million tonnes in 2050 may be provided by genomic selection. However, one should keep in mind that this article was published by the firm AquaGen, and may possibly be biased and too optimistic.
Conservation
The IUCN rates this as a common species with a conservation status of "least concern"; however, it has been 25 years since the IUCN released this status. A more recent regional assessment revealed that the European population of this species is vulnerable, and this might be the same or a similar status globally. Location-specific assessments have shown population declines across parts of the Atlantic salmon's natural range, with populations along the coast of Maine and the Inner Bay of Fundy now listed as "endangered" under the Endangered Species Act and the Canadian Species at Risk Act, respectively. Human activities have impacted salmon populations across parts of the species' range. The major threats are from overfishing and habitat change. Salmon decline in Lake Ontario goes back to the 18th–19th centuries, due to logging and soil erosion, as well as dam and mill construction. By 1896, the species was declared extirpated from the lake. In the 1950s, salmon from rivers in the United States and Canada, as well as from Europe, were discovered to gather in the sea around Greenland and the Faroe Islands. A commercial fishing industry was established, taking salmon using drift nets. After an initial series of record annual catches, the numbers crashed; between 1979 and 1990, catches fell from four million to 700,000. Beginning around 1990, the rates of Atlantic salmon mortality at sea more than doubled in the western Atlantic. Rivers of the coast of Maine, southern New Brunswick and much of mainland Nova Scotia saw runs drop precipitously, and even disappear. An international effort to study the increased mortality rate was organized by the North Atlantic Salmon Conservation Organization.
In 2000 the numbers of Atlantic salmon dropped to very low levels in Newfoundland, Canada. In 2007, at least one sport-fishing organization from Iceland and Scandinavia blamed reduced recreational catches on overfishing at sea, and thus created the North Atlantic Salmon Fund to buy commercial quotas in the Atlantic from commercial fishermen in an effort to preserve wild Salmo salar stocks. Possibly because of improvements in ocean feeding grounds, returns in 2008 were very positive. On the Penobscot River in Maine, returns were about 940 in 2007, and by mid-July 2008, the return was 1,938. Similar stories were reported in rivers from Newfoundland to Quebec. In 2011, more than 3,100 salmon returned to the Penobscot, the most since 1986, and nearly 200 ascended the Narraguagus River, up from the low two digits just a decade before. Recreational fishing of stocked, landlocked Atlantic salmon is now authorized in much of the US and Canada where it occurs in large numbers, but this is subject to regulations in many states or provinces which are designed to maintain the continuity of the species. Strict catch limits, catch-and-release practices and mandatory fly fishing are examples of those regulations. However, catch-and-release angling can be an additional stressor on Atlantic salmon populations, especially when its impacts are combined with the existing pressures of climate change, overfishing, and predation.
Restoration efforts
Around the North Atlantic, efforts to restore salmon to their native habitats are underway, with slow progress. Habitat restoration and protection are key to this process, but issues of excessive harvest and competition with farmed and escaped salmon are also primary considerations. In the Great Lakes, Atlantic salmon have been reintroduced, but the percentage of salmon reproducing naturally is very low. Most areas are re-stocked annually.
Since the extirpation of Atlantic salmon from Lake Ontario in the late 19th century, the state of New York has stocked its adjoining rivers and tributaries, and in many cases does not allow active fishing. The province of Ontario started the Atlantic Salmon Restoration Program in 2006, which is one of the largest freshwater conservation programs in North America. It has since stocked Lake Ontario and surrounding tributaries with upwards of 6,000,000 young Atlantic salmon, with efforts growing each year. In New England, many efforts are underway to restore salmon to the region by knocking down obsolete dams and updating others with fish ladders and other techniques that have proven effective in the West with Pacific salmon. There is some success thus far, with populations growing in the Penobscot and Connecticut Rivers. Lake Champlain now has Atlantic salmon. The Atlantic Salmon Federation is involved in restoration efforts along the eastern United States and Canada, where their projects are focused on removing barriers to fish passage and eradicating invasive species. Recent documented successes in the reintroduction of Atlantic salmon include the following: In October 2007, salmon were video-recorded running in Toronto's Humber River by the Old Mill. A migrating salmon was observed in Ontario's Credit River in November 2007. As of 2013, there has been some success in establishing Atlantic salmon in Fish Creek, a tributary of Oneida Lake in central New York. In November 2015, salmon nests were observed in Connecticut in the Farmington River, a tributary of the Connecticut River where Atlantic salmon had not been observed spawning since "probably the Revolutionary War". However, both state and federal experts indicated that this find likely represented a dwindling wave of returning stocked fish from massive salmon restoration efforts that had concluded years earlier in 2012. 
Significant doubt was cast on fish returning to spawn in meaningful numbers after 2017, when the last generation of stocked salmon would return. NASCO The North Atlantic Salmon Conservation Organization is an international council made up of Canada, the European Union, Iceland, Norway, the Russian Federation, and the United States, with its headquarters in Edinburgh. It was established in 1983 to help protect Atlantic salmon stocks through cooperation between nations. It works to restore habitat and promote conservation of the salmon. In December 2021, NASCO published an updated interactive map of its Rivers Database, showing the stock status of wild Atlantic salmon populations across the species' range. Legislation England and Wales Edward I instituted a penalty for collecting salmon during certain times of the year. His son Edward II continued this, regulating the construction of weirs. Enforcement was overseen by conservators appointed by the justices of the peace. Because of confusing laws and the appointed conservators having little power, most laws were barely enforced. Based on this, a royal commission was appointed in 1860 to thoroughly investigate the Atlantic salmon and the laws governing the species, resulting in the 1861 Salmon Fisheries Act. The act placed enforcement of the laws under the Home Office's control, but it was later transferred to the Board of Trade, and then later to the Board of Agriculture and Fisheries. Another act, passed in 1865, imposed fees for fishing and set catch limits. It also led to the formation of local boards, each with jurisdiction over a certain river. The next significant act, passed in 1907, allowed the boards to charge 'duties' to catch other freshwater fish, including trout. Despite the legislation, the boards' effectiveness decreased until, in 1948, the River Boards Act gave authority over all freshwater fish and the prevention of pollution to one board per river. In total, it created 32 boards. 
In 1974, the 32 boards, which by then were integrated into regional river authorities, were reduced to 10 regional water authorities (RWAs). Although only the Northumbrian Water Authority, Welsh National Water Development Authority, Northwest Water Authority and Southwest Water Authority had significant salmon populations, all ten also regulated and conserved trout and freshwater eel fisheries. The Salmon and Freshwater Fisheries Act was passed in 1975. Among other things, it regulated fishing licences, seasons, and size limits, and banned obstructing the salmon's migratory paths. Scotland Salmon was greatly valued in medieval Scotland, and various fishing methods, including the use of weirs, cruives, and nets, were used to catch the fish. Fishing for salmon was heavily regulated in order to conserve the resource. In 1318, King Robert I enacted legislation setting a minimum size for cruives, "so that no fry of fish are impeded from ascending and descending..." Laws on catching fish upon royal lands were frequently updated, demonstrating their importance. Because the fish were held in such high regard, poachers were severely punished; a person twice convicted of poaching salmon on a royal estate could be sentenced to death. The export of salmon was economically important in Aberdeen; beginning in the 15th century, the fish could be preserved through salting and barreling, allowing them to be exported abroad, including as far away as the Baltic. The volume of the early Scottish salmon trade is impossible to determine, since surviving customs records date only from the 1420s onward, and since Aberdeen burgesses enjoyed an exemption on salmon customs until the 1530s. During the 15th century, many laws were passed; many regulated fishing times and worked to ensure smolts could safely pass downstream. James III even closed a meal mill because of its history of killing fish attracted to the wheel. More recent legislation has established commissioners who manage districts. 
Furthermore, the Salmon and Freshwater Fisheries Act of 1951 required that the Secretary of State be given data about the catches of salmon and trout to help establish catch limits. United States Commercial and recreational fishing of wild, anadromous Atlantic salmon is prohibited in the United States. Several populations of Atlantic salmon are in serious decline, and are listed as endangered under the Endangered Species Act (ESA). Currently, runs on 11 rivers in Maine are on the list – the Kennebec, Androscoggin, Penobscot, Sheepscot, Ducktrap, Cove Brook, Pleasant, Narraguagus, Machias, East Machias and Dennys. The Penobscot River is the "anchor river" for Atlantic salmon populations in the US. Returning fish in 2008 numbered around 2,000, more than double the 2007 return of 940. Section 9 of the ESA makes it illegal to take an endangered species of fish or wildlife. The definition of "take" is to "harass, harm, pursue, hunt, shoot, wound, kill, trap, capture, or collect, or to attempt to engage in any such conduct". Canada The federal government has prime responsibility for protecting the Atlantic salmon, but over the last generation, effort has continued to shift management as much as possible to provincial authorities, through memoranda of understanding, for example. A new Atlantic salmon policy is in the works, and in the past three years, the government has attempted to pass a new version of the century-old Fisheries Act through Parliament. Federal legislation regarding at-risk populations is weak. Inner Bay of Fundy Atlantic salmon runs were declared endangered in 2000. A recovery and action plan is in place. 
Nongovernmental organizations, such as the Atlantic Salmon Federation, constantly press for improvements in management and for new initiatives to be considered. For example, the ASF and the Nova Scotia Salmon Association want the technology used in Norway to mitigate acid rain-affected rivers applied to the 54 affected rivers in Nova Scotia, and have managed to raise the funds to get a project in place in one river. In Quebec, the daily catch limit for Atlantic salmon depends on the individual river. Some rivers are strictly catch and release, with a limit of three released fish. Each catch must be declared. Some rivers allow anglers to keep one or two grilse (30 cm to 63 cm), while some of the more prolific rivers (mainly on the north coast) allow anglers to keep one salmon over 63 cm. The annual catch limit is four Atlantic salmon, of which only one may be bigger than 63 cm. In Lake Ontario, the historic populations of Atlantic salmon became extinct, and cross-national efforts have been under way to reintroduce the species, with some areas already having restocked naturally reproducing populations.
Pedosphere
The pedosphere is the outermost layer of the Earth that is composed of soil and subject to soil formation processes. It exists at the interface of the lithosphere, atmosphere, hydrosphere and biosphere. The pedosphere is the skin of the Earth and only develops when there is a dynamic interaction between the atmosphere (air in and above the soil), biosphere (living organisms), lithosphere (unconsolidated regolith and consolidated bedrock) and the hydrosphere (water in, on and below the soil). The pedosphere is the foundation of terrestrial life on Earth. The pedosphere acts as the mediator of chemical and biogeochemical flux into and out of these respective systems and is made up of gaseous, mineral, fluid and biologic components. The pedosphere lies within the Critical Zone, a broader interface that includes vegetation, pedosphere, aquifer systems and regolith, and finally ends at some depth in the bedrock where the biosphere and hydrosphere cease to make significant changes to the chemistry at depth. As part of the larger global system, any particular environment in which soil forms is influenced by its geographic position on the globe, as climatic, geologic, biologic and anthropogenic changes occur with changes in longitude and latitude. The pedosphere lies below the vegetative cover of the biosphere and above the hydrosphere and lithosphere. The soil-forming process (pedogenesis) can begin without the aid of biology but is significantly quickened in the presence of biologic reactions, where it forms a soil carbon sponge. Soil formation begins with the chemical and/or physical breakdown of minerals to form the initial material that overlies the bedrock substrate. Biology quickens this by secreting acidic compounds that help break rock apart. Particular biologic pioneers are lichens, mosses and seed-bearing plants, but many other inorganic reactions take place that diversify the chemical makeup of the early soil layer. 
Once weathering and decomposition products accumulate, a coherent soil body allows the migration of fluids both vertically and laterally through the soil profile, causing ion exchange between solid, fluid and gaseous phases. As time progresses, the bulk geochemistry of the soil layer will deviate away from the initial composition of the bedrock and will evolve to a chemistry that reflects the type of reactions that take place in the soil. Lithosphere The primary conditions for soil development are controlled by the chemical composition of the rock on which the soil forms. Rock types that form the base of the soil profile are often either sedimentary (carbonate or siliceous), igneous or metaigneous (metamorphosed igneous rocks), or volcanic and metavolcanic rocks. The rock type and the processes that lead to its exposure at the surface are controlled by the regional geologic setting of the specific area under study, which revolves around the underlying theory of plate tectonics, subsequent deformation, uplift, subsidence and deposition. Metaigneous and metavolcanic rocks form the largest component of cratons and are high in silica. Igneous and volcanic rocks are also high in silica, but with non-metamorphosed rock, weathering is faster and the mobilization of ions is more widespread. Rocks high in silica produce silicic acid as a weathering product. Few rock types lead to localized enrichment of some of the biologically limiting elements like phosphorus (P) and nitrogen (N). Phosphatic shale (< 15% P2O5) and phosphorite (> 15% P2O5) form in anoxic deep-water basins that preserve organic material. Greenstone (metabasalt), phyllite, and schist release up to 30–50% of the nitrogen pool. Thick successions of carbonate rocks are often deposited on craton margins during sea level rise. The widespread dissolution of carbonate and evaporites leads to elevated levels of Mg2+, Ca2+, Sr2+, Na+, Cl− and SO42− ions in aqueous solution. 
Weathering and dissolution of minerals The process of soil formation is dominated by chemical weathering of silicate minerals, aided by acidic products of pioneering plants and organisms as well as carbonic acid inputs from the atmosphere. Carbonic acid is produced in the atmosphere and soil layers through the carbonation reaction: H2O + CO2 ⇌ H2CO3 ⇌ H+ + HCO3− This is the dominant form of chemical weathering and aids in the breakdown of carbonate minerals (such as calcite and dolomite) and silicate minerals (such as feldspar). The breakdown of the Na-feldspar, albite, by carbonic acid to form kaolinite clay is as follows: 2 NaAlSi3O8 + 2 H2CO3 + 9 H2O ⇌ 2 Na+ + 2 HCO3− + 4 H4SiO4 + Al2Si2O5(OH)4 Evidence of this reaction in the field would be elevated levels of bicarbonate (HCO3−), sodium and silica ions in the water runoff. The breakdown of carbonate minerals: CaCO3 + H2CO3 ⇌ Ca2+ + 2 HCO3− and CaCO3 ⇌ Ca2+ + CO32− The further dissolution of carbonic acid (H2CO3) and bicarbonate (HCO3−) produces CO2 gas. Oxidation is also a major contributor to the breakdown of many silicate minerals and the formation of secondary minerals (diagenesis) in the early soil profile. Oxidation of olivine ((Mg,Fe)2SiO4) releases Fe, Mg and Si ions. The Mg is soluble in water and is carried in the runoff, but the Fe often reacts with oxygen to precipitate Fe2O3 (hematite), the oxidized state of iron oxide. Sulfur, a byproduct of decaying organic material, will also react with iron to form pyrite (FeS2) in reducing environments. Pyrite dissolution leads to low pH levels due to elevated H+ ions, and further precipitation of Fe2O3 ultimately changes the redox conditions of the environment. Biosphere Inputs from the biosphere may begin with lichens and other microorganisms that secrete oxalic acid. These microorganisms, associated with the lichen community or independently inhabiting rocks, include blue-green algae, green algae, various fungi, and numerous bacteria. 
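The albite weathering reaction above is balanced, which can be checked mechanically. A minimal sketch (atom counts transcribed by hand from the formulas; not a general chemical-formula parser):

```python
from collections import Counter

# Atom counts per formula unit, written out by hand from the reaction
# 2 NaAlSi3O8 + 2 H2CO3 + 9 H2O -> 2 Na+ + 2 HCO3- + 4 H4SiO4 + Al2Si2O5(OH)4
albite    = Counter(Na=1, Al=1, Si=3, O=8)
carbonic  = Counter(H=2, C=1, O=3)
water     = Counter(H=2, O=1)
sodium    = Counter(Na=1)
bicarb    = Counter(H=1, C=1, O=3)
silicic   = Counter(H=4, Si=1, O=4)
kaolinite = Counter(Al=2, Si=2, O=9, H=4)   # Al2Si2O5(OH)4

def side(*terms):
    """Sum (coefficient, formula) pairs into one element tally."""
    total = Counter()
    for coeff, formula in terms:
        for element, n in formula.items():
            total[element] += coeff * n
    return total

reactants = side((2, albite), (2, carbonic), (9, water))
products  = side((2, sodium), (2, bicarb), (4, silicic), (1, kaolinite))
print(reactants == products)   # True: every element balances
```

The same tally can be reused for the carbonate-dissolution reactions in this section.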
Lichens have long been viewed as pioneers of soil development, as a 1997 statement by Isozaki suggests. However, lichens are not necessarily the only pioneering organisms nor the earliest form of soil formation, as it has been documented that seed-bearing plants may occupy an area and colonize more quickly than lichens. Also, eolian sedimentation (wind-generated) can produce high rates of sediment accumulation. Nonetheless, lichens can certainly withstand harsher conditions than most vascular plants, and although they have slower colonization rates, they do form the dominant group in alpine regions. Organic acids released from plant roots include acetic acid and citric acid. During the decay of organic matter, phenolic acids are released from plant matter, and humic and fulvic acids are released by soil microbes. These organic acids speed up chemical weathering by combining with some of the weathering products in a process known as chelation. In the soil profile, these organic acids are often concentrated at the top of the profile, while carbonic acid plays a larger role towards the bottom of the profile or below in the aquifer. As the soil column develops into thicker accumulations, larger animals come to inhabit the soil and continue to alter the chemical evolution of their respective niche. Earthworms aerate the soil and convert large amounts of organic matter into rich humus, improving soil fertility. Small burrowing mammals store food, raise young and may hibernate in the pedosphere, altering the course of soil evolution. Large mammalian herbivores above ground transport nutrients in the form of nitrogen-rich waste and phosphorus-rich antlers, while predators leave phosphorus-rich piles of bones on the soil surface, leading to localized enrichment of the soil. Redox conditions in wetland soils Nutrient cycling in lakes and freshwater wetlands depends heavily on redox conditions. 
Under a few millimeters of water, heterotrophic bacteria metabolize and consume oxygen. They therefore deplete the soil of oxygen and create the need for anaerobic respiration. Some anaerobic microbial processes include denitrification, sulfate reduction and methanogenesis, and are responsible for the release of N2 (nitrogen), H2S (hydrogen sulfide) and CH4 (methane). Other anaerobic microbial processes are linked to changes in the oxidation state of iron and manganese. As a result of anaerobic decomposition, the soil stores large amounts of organic carbon because the soil carbon sponge stays intact. The reduction potential describes which way chemical reactions will proceed in oxygen-deficient soils and controls the nutrient cycling in flooded systems. Reduction potential is used to express the likelihood of an environment to receive electrons and therefore become reduced. For example, a system that already has plenty of electrons (such as an anoxic, organic-rich shale) is reduced, and will likely donate electrons to a part of the system with a low concentration of electrons (an oxidized environment) to equilibrate the chemical gradient. An oxidized environment has a high redox potential, whereas a reduced environment has a low redox potential. The redox potential is controlled by the oxidation state of the chemical species, the pH and the amount of O2 in the system. The oxidizing environment accepts electrons because of the presence of O2, which acts as an electron acceptor: O2 + 4 e− + 4 H+ ⇌ 2 H2O This reaction will tend to move to the right in acidic conditions. Higher redox potentials are found at lower pH levels. Bacteria, heterotrophic organisms, consume oxygen while decomposing organic material. This depletes the soils of oxygen, thus decreasing the redox potential. At high redox potential, the oxidized form of iron, ferric iron (Fe3+), will be deposited, commonly as hematite. 
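The link between pH and redox potential for the O2/H2O couple above follows from the Nernst equation. A small illustrative calculation (the standard potential, temperature and atmospheric O2 pressure are assumed textbook values, not from the source):

```python
import math

def eh_o2_water(pH, p_O2=0.21, E0=1.229, T=298.15):
    """Nernst potential (volts vs. SHE) of the couple
       O2 + 4 H+ + 4 e- <=> 2 H2O, at 25 C by default."""
    R, F, n = 8.314, 96485.0, 4
    slope = math.log(10) * R * T / (n * F)        # ~0.0148 V per log10 unit
    return E0 + slope * (math.log10(p_O2) - 4 * pH)

# Higher redox potential at lower pH, as the text states:
print(f"{eh_o2_water(4):.2f}")   # 0.98 (V at pH 4)
print(f"{eh_o2_water(7):.2f}")   # 0.80 (V at pH 7)
```

Each unit drop in pH raises the potential by about 0.059 V, which is the quantitative version of "higher redox potentials are found at lower pH levels".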
In low redox conditions, decomposition rates decrease and the deposition of ferrous iron (Fe2+) increases. By using analytical geochemical tools such as X-ray fluorescence or inductively coupled plasma mass spectrometry, the two forms of Fe (Fe2+ and Fe3+) can be measured in ancient rocks, thereby determining the redox potential for ancient soils. Such a study was done on Permian through Triassic rocks (300–200 million years old) in Japan and British Columbia. The geologists found hematite throughout the early and middle Permian but began to find the reduced form of iron in pyrite within the ancient soils near the end of the Permian and into the Triassic. These results suggest that conditions became less oxygen rich, even anoxic, during the late Permian, which eventually led to the greatest extinction in Earth's history, the P-T extinction. Decomposition in anoxic or reduced soils is also carried out by sulfur-reducing bacteria which, instead of O2, use sulfate (SO42−) as an electron acceptor and produce hydrogen sulfide (H2S) and carbon dioxide in the process: 2 H+ + SO42− + 2 [CH2O] ⇌ 2 CO2 + H2S + 2 H2O The H2S gas percolates upwards and reacts with Fe2+ to precipitate pyrite, acting as a trap for the toxic H2S gas. However, H2S is still a large fraction of emissions from wetland soils. In most freshwater wetlands there is little sulfate (SO42−), so methanogenesis, carried out by methanogenic bacteria, becomes the dominant form of decomposition once sulfate is depleted. Acetate, a compound that is a byproduct of fermenting cellulose, is split by methanogenic bacteria to produce methane (CH4) and carbon dioxide (CO2), which are released to the atmosphere. Methane is also released during the reduction of CO2 by the same bacteria. Atmosphere In the pedosphere it is safe to assume that gases are in equilibrium with the atmosphere. 
Because plant roots and soil microbes release CO2 to the soil, the concentration of bicarbonate (HCO3−) in soil waters is much greater than that in equilibrium with the atmosphere; the high concentration of CO2 and the occurrence of metals in soil solutions result in lower pH levels in the soil. Gases that escape from the pedosphere to the atmosphere include the gaseous byproducts of carbonate dissolution, decomposition, redox reactions and microbial photosynthesis. The main inputs from the atmosphere are aeolian sedimentation, rainfall and gas diffusion. Aeolian sedimentation includes anything that can be entrained by wind or that stays suspended in air, and includes a wide variety of aerosol particles, biological particles like pollen, and dust particles. Nitrogen is the most abundant constituent in rain (after water), as water vapor utilizes aerosol particles to nucleate rain droplets. Soil in forests Soil is well developed in the forest, as suggested by the thick humus layers, rich diversity of large trees and animals that live there. Forest soils can form a thick soil carbon sponge. In forests, precipitation exceeds evapotranspiration, which results in an excess of water that percolates downward through the soil layers. Slow rates of decomposition lead to large amounts of fulvic acid, greatly enhancing chemical weathering. The downward percolation, in conjunction with chemical weathering, leaches Mg, Fe, and aluminium (Al) from the soil and transports them downward, a process known as podzolization. This process leads to marked contrasts in the appearance and chemistry of the soil layers. Soil in the tropics Tropical forests receive more insolation and rainfall over longer growing seasons than any other environment on Earth. With these elevated temperatures, insolation and rainfall, biomass is extremely productive, leading to the production of as much as 800 grams of carbon per square meter per year (8 tons of C/hectare/year). 
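The effect of root and microbial CO2 on soil-water pH can be illustrated with Henry's law and the first dissociation of carbonic acid. A rough sketch (the equilibrium constants and the soil CO2 partial pressure are assumed typical 25 °C values, not figures from the source):

```python
import math

K_H  = 0.034     # Henry's constant for CO2, mol/(L*atm) at 25 C (assumed)
K_a1 = 4.45e-7   # first dissociation constant of carbonic acid (assumed)

def carbonate_pH(p_CO2):
    """pH of pure water equilibrated with a given CO2 partial pressure (atm),
    ignoring further dissociation and other ions: [H+] ~ sqrt(Ka1*KH*pCO2)."""
    h = math.sqrt(K_a1 * K_H * p_CO2)
    return -math.log10(h)

print(f"{carbonate_pH(4.2e-4):.2f}")  # 5.60 -- water under atmospheric CO2
print(f"{carbonate_pH(1e-2):.2f}")    # 4.91 -- water under CO2-enriched soil air
```

Raising pCO2 from the atmospheric value to a few percent, as respiration does in soil air, lowers the equilibrium pH by roughly 0.7 units, consistent with the lower pH levels described above.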
Higher temperatures and larger amounts of water contribute to higher rates of chemical weathering. Increased rates of decomposition cause smaller amounts of fulvic acid to percolate and leach metals from the zone of active weathering. Thus, in stark contrast to soil in temperate forests, tropical forests have little to no podzolization and therefore lack marked visual and chemical contrasts between the soil layers. Instead, the mobile metals Mg, Fe and Al are precipitated as oxide minerals, giving the soil a rusty red color. Soil in grasslands and deserts Precipitation in grasslands is equal to or less than evapotranspiration, so soil development operates in relative drought. Leaching and migration of weathering products are therefore decreased. Large amounts of evaporation cause a buildup of calcium (Ca) and other large cations, which flocculate clay minerals and fulvic acids in the upper soil profile. Low amounts of precipitation and high levels of evapotranspiration limit the downward percolation of water and organic acids, reducing chemical weathering and soil development. The depth to the maximum concentration of clay increases in areas of increased precipitation and leaching. When leaching is decreased, the calcium precipitates as calcite (CaCO3) in the lower soil levels, a layer known as caliche. Deserts behave similarly to grasslands but operate in constant drought, as precipitation is less than evapotranspiration. Chemical weathering proceeds more slowly than in grasslands, and beneath the caliche layer there may be a layer of gypsum and halite. To study soils in deserts, pedologists have used the concept of chronosequences to relate the timing and development of the soil layers. It has been shown that phosphorus leaches very quickly from the system, and soil P-levels decrease with age. Furthermore, carbon buildup in the soils is decreased due to slower decomposition rates. As a result, the rates of carbon circulation in the biogeochemical cycle are decreased.
Chionoecetes
Chionoecetes is a genus of crabs that live in the northern Pacific and Atlantic Oceans. Common names for crabs in this genus include "queen crab" (in Canada) and "spider crab". The generic name Chionoecetes means snow (chiōn) inhabitant (oikētēs); opilio means shepherd, and C. opilio is the primary species referred to as snow crab. Marketing strategies, however, employ "snow crab" for any species in the genus Chionoecetes. The name "snow crab" refers to their being commonly found in cold northern oceans. General Snow crabs are caught as far north as the Arctic Ocean, from Newfoundland to Greenland and north of Norway in the Atlantic Ocean, and across the Pacific Ocean, including the Sea of Japan, the Bering Sea, the Gulf of Alaska, Norton Sound, and even as far south as California for Chionoecetes bairdi. In 2019, the Norwegian Supreme Court ruled that the species is considered a sedentary species living on the seabed, and thus governed by the United Nations Law of the Sea. Species Seven extant species are currently recognised in the genus: Chionoecetes angulatus Rathbun, 1924 – triangle tanner crab Chionoecetes bairdi Rathbun, 1924 – tanner crab, bairdi, or inshore tanner crab Chionoecetes elongatus Rathbun, 1924 Chionoecetes japonicus Rathbun, 1932 – beni-zuwai crab Chionoecetes opilio (Fabricius, 1788) – snow crab or opilio Chionoecetes pacificus Sakai, 1978 Chionoecetes tanneri Rathbun, 1893 – grooved tanner crab Cookery Crabs are prepared and eaten as a dish in many different ways all over the world. The legs are usually served in clusters and are steamed, boiled, or grilled. Snow crab can also be used as an ingredient in other dishes such as snow crab macaroni and cheese. Food web position and importance Snow crabs are an important part of the ecosystem throughout the Pacific and Atlantic Oceans. They eat other invertebrates on the benthic shelf like crustaceans, bivalves, brittle stars, polychaetes, phytobenthos, foraminiferans, annelid worms, and mollusks. 
They are also fed on by halibut, cod, larger snow crabs, seals, squid, and Alaskan king crabs. Snow crabs are also highly sought after for the commercial fishing industry. Life history stages and vulnerabilities Juvenile snow crabs mature in cold-water pools on the ocean floor that are sustained by melting sea ice. If waters warm above the 2 °C maximum necessary for juvenile development, their normal nursery habitat will be reduced significantly. Adults are similarly unlikely to tolerate conditions of more than 5 °C. With a gestation period of up to two years and an average spawn size of up to 100,000 eggs, their fecundity (i.e., fertility) is high, but recent trends have shown that these characteristics do not make them impervious to threats like a warming climate. Population decline in the Bering Sea 2018 was one of the warmest years coinciding with periods of the lowest sea ice extent on record in the Bering Sea. The driver of this trend was the northeast Pacific marine heatwave, which contributed to significant die-offs in a number of species. 2019 was yet another year of record-breaking temperatures, attributed to a weakened North Pacific High, which reduced evaporative cooling in the Northeast Pacific and saw a steep decline in the number of juvenile crabs. In 2021, crabs of all ages declined, and habitat range shrank substantially. 2022 saw the most drastic decline in Bering Sea snow crab populations, decreasing from 11.7 billion in 2018 to 1.9 billion in 2022 (a decline of approximately 84%). This decimation of the crustaceans’ population spurred the closing of the Alaska snow crab season for the first time in history, an industry worth approximately $160,000,000 annually. Theories regarding decline Though the cause is yet unknown, several theories behind this decimation have been put forward. Overfishing is likely the main driver, intertwined with the effects of climate change. 
Increased water temperatures also increase snow crabs’ metabolism, so one theory is that their increased metabolic rate – combined with fewer resources due to a shrinking habitat – left them to either starve or consume each other. Predator range expansion is another possibility; as waters warm, predators that normally inhabit warmer southern waters (such as the Pacific cod) can travel further north in search of prey. A third theory is that a reduction in habitat area could increase the spread of diseases like bitter crab syndrome. All these theories tie back to an altogether warmer ocean and are supported by the impacts of low ice delineated in Thoman et al. (2020). Bering Sea climatic impacts on snow crab The Bering Sea's southeastern shelf is composed of three biophysical domains: 1) a vertically well-mixed upper region (0–50 m); 2) a middle region that is well-mixed in winter and stratified in summer (50–100 m); and 3) an outer region with more gradual stratification (100–200 m). The Bering Sea shelf break (a zone where the shallower continental shelf drops off into the North Aleutians Basin) is the dominant driver of primary productivity in the Bering Sea – upwelling brings nutrients from the cold waters of the Aleutian Basin to mix in shallow waters. This area is home to many ecologically important species, including the snow crab. To assess trends and impacts of the warming climate in the Bering Sea, a recent study created a regional model of both physical and biological elements of the Bering Sea using three global climate simulations from the Intergovernmental Panel on Climate Change Fourth Assessment. This model detected overall trends of warmer temperatures and a retreat of sea ice in the southeastern Bering Sea. Primary drivers of these higher water column temperatures include increasing air temperature and northward wind stress. 
Warming trends on the outer Bering Sea shelf are concerning for a variety of reasons, one being that they may lead to decreased production of large crustacean zooplankton. On a broader spatial scale, sea surface temperatures (SSTs) that marked the start of summer in the North Pacific now come 11 days earlier, and SSTs that marked the end of summer now come around 27 days later. Additionally, summers are on average 1.5 °C warmer and winters are on average 0.5 °C warmer. Historically, the Bering Sea continental shelf has maintained between 40–100% ice cover at its annual winter maximum. In 2018, the maximum sea ice cover was only 47% of the 1979–2016 mean seasonal maximum extent. Southeastward advection of melting sea ice contributes to the latitudinal salinity gradient of the Bering Sea, so when sea ice formation is reduced, the salinity gradient is altered. Though these may not seem like significant changes, the high heat capacity of water (its ability to absorb heat) means that small changes like these are a big deal for marine organisms like the snow crab. It is yet unknown whether the Bering Sea snow crab population will recover, but scientists and policymakers will need to act quickly if improvement is to occur.
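The population decline quoted earlier (11.7 billion crabs in 2018 down to 1.9 billion in 2022, described as "approximately 84%") checks out with one line of arithmetic:

```python
# Estimated Bering Sea snow crab abundance, 2018 vs 2022 (figures from the text)
before, after = 11.7e9, 1.9e9
decline = (before - after) / before
print(f"{decline:.0%}")   # 84%
```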
Autopen
An autopen (or signing machine) is a device used for the automatic signing of a signature. Prominent individuals may be asked to provide their signatures many times a day, such as celebrities receiving requests for autographs, or politicians signing documents and correspondence in their official capacities. Consequently, many public figures employ autopens to allow their signature to be reproduced on demand and without their direct involvement. Though manual precursors of the modern autopen have existed since at least 1803, 21st-century autopens are machines that are programmed with a signature, which is then reproduced by a motorized, mechanical arm holding a pen. Given the exact verisimilitude to the real hand signature, the use of the autopen allows for a small degree of wishful thinking and plausible deniability as to whether a famous autograph is real or reproduced, thus increasing the perceived personal value of the signature to the lay recipient. However, known or suspected autopen signatures are also vastly less valuable as philographic collectibles; legitimate hand-signed documents from individuals known to also use an autopen usually require verification and provenance to be considered valid. Early autopens used a plastic matrix of the original signature: a channel cut into an engraved plate, sometimes in the shape of a wheel. A stylus driven by an electric motor follows the x- and y-axes of the profile engraved in the plate (which is why it is called a matrix). The stylus is mechanically connected to an arm which can hold almost any common writing instrument, so a favourite pen and ink can be used to suggest authenticity. The autopen signature is made with even pressure (and indentation in the paper), which is how these machines are distinguishable from original handwriting, where the pressure varies. History The first signature duplicating machines were developed by Englishman John Isaac Hawkins. 
Hawkins received a United States patent for his device in 1803, called a polygraph (an abstracted version of the pantograph), in which the user may write with one pen and have their writing simultaneously reproduced by an attached second pen. Thomas Jefferson used the device extensively during his presidency. This device bears little resemblance to today's autopens in design or operation. An autopen called the Robot Pen was developed in the 1930s and became commercially available in 1937; it recorded a signer's signature on a storage unit, similar in principle to how vinyl records store information. A small segment of the record could be removed and stored elsewhere to prevent misuse. The machine would then be able to mass-produce a template signature when needed. While the Robot Pen was commercially available, the first commercially successful autopen was developed by Robert M. De Shazo Jr., in 1942. De Shazo developed the technology that became the modern autopen in response to a request for quotation (RFQ) from the Navy, and in 1942 received an order for the machine from the Secretary of the Navy. This was the beginning of a significant market in government for the autopen, as the machines soon ended up in the offices of members of Congress and the executive branch. At one point, De Shazo estimated there were more than 500 autopens in use in Washington, D.C. Use Individuals who use autopens often do not disclose this publicly. Signatures generated by machines are valued less than those created manually, and perceived by their recipients as somewhat inauthentic. In 2004, Donald Rumsfeld, then the U.S. Secretary of Defense, incurred criticism after it was discovered that his office used an autopen to sign letters of condolence to families of American soldiers who were killed in war. 
Outside of politics, it was reported in November 2022 that some copies of The Philosophy of Modern Song, a book by singer-songwriter Bob Dylan that had been published earlier that month, had been signed with an autopen, resulting in criticism. Autographed editions had been marketed as "hand-signed" and priced at US$600 each. Both Dylan and the book's publisher, Simon & Schuster, issued apologies; refunds were also offered to customers who had bought autopen-signed editions. Dylan also said that some prints of his artwork sold after 2019 had been signed with an autopen, for which he likewise apologized, attributing his use of the machine to vertigo and the COVID-19 pandemic, the latter of which had prevented him from meeting with staff to facilitate signing the works in question. U.S. Presidents It has long been known that the president of the United States uses multiple autopen systems to sign many official documents (e.g., military, diplomatic, and judicial commissions; some Acts of Congress, executive directives, letters and other correspondence), due to the volume of such documents requiring their signature per the U.S. Constitution. Some say Harry Truman was the first president to use the autopen as a way of responding to mail and signing checks. Others credit Gerald Ford as the first president to openly acknowledge his use of the autopen, but Lyndon Johnson allowed photographs of his autopen to be taken while he was in office, and in 1968 the National Enquirer ran them along with the front-page headline "The Robot That Sits In For The President." While visiting France, Barack Obama authorized the use of an autopen to sign into law an extension of three provisions of the Patriot Act. On January 3, 2013, he signed an extension of the Bush tax cuts using the autopen while vacationing in Hawaii. His only alternative for signing it by the required deadline would have been to have the bill flown to him overnight.
Republican leaders questioned whether this use of the autopen met the constitutional requirement for signing a bill into law, but the validity of presidential use of an autopen has not actually been tested in court. In 2005, George W. Bush asked for and received a favorable opinion from the Department of Justice regarding the constitutionality of using the autopen, but did not use it himself. In May 2024, Joe Biden directed that an autopen be used to sign legislation providing a one-week funding extension for the Federal Aviation Administration. Biden was traveling in San Francisco at the time and wished to avoid any lapse in FAA operations while a five-year funding bill was being voted on by Congress. Similar devices Further developing the class of devices known as autopens, Canadian author Margaret Atwood created a device called the LongPen, which allows audio and video conversation between the fan and author while a book is being signed remotely.
https://en.wikipedia.org/wiki/Deathstalker
Deathstalker
The deathstalker (Leiurus quinquestriatus) is a species of scorpion, a member of the family Buthidae. It is also known as the Palestine yellow scorpion, Omdurman scorpion, and Naqab desert scorpion, as well as by many other colloquial names, which generally originate from the commercial captive trade of the animal. To eliminate confusion, especially important with potentially dangerous species, the scientific name is normally used to refer to them. The name Leiurus quinquestriatus roughly translates into English as "five-striped smooth-tail". In 2014, the subspecies L. q. hebraeus was separated from it and elevated to its own species Leiurus hebraeus. Other species of the genus Leiurus are also often referred to as "deathstalkers". Leiurus quinquestriatus is yellow, and measures 30–77 millimetres (1.2–3.0 in) long, with an average of 58 mm (2.3 in). Distribution and habitat Leiurus quinquestriatus can be found in desert and scrubland habitats ranging from North Africa through to the Middle East. Its range covers a wide sweep of territory in the Sahara, Arabian Desert, Thar Desert, and Central Asia, from Algeria and Mali in the west through to Egypt, Ethiopia, Asia Minor and the Arabian Peninsula, eastwards to Kazakhstan and western India in the northeast and southeast. Venom Neurotoxins in L. quinquestriatus venom include:
Chlorotoxin
Charybdotoxin, a blocker of calcium-activated potassium channels
Scyllatoxin
Agitoxins types one, two and three
Other components include Lq2, which gets its name from this scorpion. Hazards The deathstalker is one of the most dangerous species of scorpions. Its venom is a powerful mixture of neurotoxins, with a low lethal dose. While a sting from this scorpion is extraordinarily painful, it normally would not kill a healthy adult human. However, young children, the elderly, or the infirm (such as those with a heart condition and those who are allergic) are at much greater risk. Any envenomation runs the risk of anaphylaxis, a potentially life-threatening allergic reaction to the venom.
A study from Israel shows a high rate of pancreatitis following envenomation. If a sting from Leiurus quinquestriatus does prove deadly, the cause of death is usually pulmonary edema. Antivenom for the treatment of deathstalker envenomations is produced by pharmaceutical companies AbbVie and Sanofi Pasteur, and by the National Antivenom and Vaccine Production Center in Riyadh. Envenomation by the deathstalker is considered a medical emergency even with antivenom treatment, as its venom is unusually resistant to treatment and typically requires large doses of antivenom. In the United States and other countries outside of the typical range of the deathstalker, there is the additional complicating factor that none of the existing antivenoms are approved by the Food and Drug Administration (or equivalent agencies) and are available only as investigational new drugs (INDs). The US Armed Forces maintain an investigational drug application for the AVPC-Riyadh antivenom in the event of envenomation of soldiers in the Gulf War theater of operations, and the Florida Antivenin Bank, managed by the Miami-Dade Fire Rescue Department, maintains Sanofi Pasteur's Scorpifav antivenom for the deathstalker. Uses A component of the deathstalker's venom, the peptide chlorotoxin, has shown potential for treating human brain tumors. There has also been some evidence that other components of the venom may aid in the regulation of insulin and could be used to treat diabetes. In 2015, clinical trials began on the use of chlorotoxin with an attached fluorescent molecule as a brain tumour "paint" (BLZ-100) to mark cancerous cells in real time during an operation. This is important in brain cancer surgery, where it is vital to remove as many cancerous cells as possible without removing healthy tissue necessary for brain functioning.
In preclinical animal trials, the technique could highlight clusters of as few as 200 cancer cells, compared with standard MRI, whose lower detection limit exceeds 500,000 cells. Legality Possession of L. quinquestriatus may be illegal or regulated in countries with laws prohibiting the keeping of dangerous animals in general. Jurisdictions are increasingly and explicitly including L. quinquestriatus in laws requiring permits to keep animals which are not usual pets, or restricting possession of dangerous animals, and in some cases have prohibited the keeping of L. quinquestriatus save by licensed zoos and educational facilities. In several jurisdictions, departments of fish and wildlife require permits for many animals, and a number of cities and municipal governments have prohibited their possession in their bylaws.
https://en.wikipedia.org/wiki/Jet%20fuel
Jet fuel
Jet fuel or aviation turbine fuel (ATF, also abbreviated avtur) is a type of aviation fuel designed for use in aircraft powered by gas-turbine engines. It is colorless to straw-colored in appearance. The most commonly used fuels for commercial aviation are Jet A and Jet A-1, which are produced to a standardized international specification. The only other jet fuel commonly used in civilian turbine-engine powered aviation is Jet B, which is used for its enhanced cold-weather performance. Jet fuel is a mixture of a variety of hydrocarbons. Because the exact composition of jet fuel varies widely based on petroleum source, it is impossible to define jet fuel as a ratio of specific hydrocarbons. Jet fuel is therefore defined as a performance specification rather than a chemical compound. Furthermore, the range of molecular mass between hydrocarbons (or different carbon numbers) is defined by the requirements for the product, such as the freezing point or smoke point. Kerosene-type jet fuel (including Jet A and Jet A-1, JP-5, and JP-8) has a carbon number distribution between about 8 and 16 (carbon atoms per molecule); wide-cut or naphtha-type jet fuel (including Jet B and JP-4), between about 5 and 15. History Fuel for piston-engine powered aircraft (usually a high-octane gasoline known as avgas) has a high volatility to improve its carburetion characteristics and high autoignition temperature to prevent preignition in high compression aircraft engines. Turbine engines (as with diesel engines) can operate with a wide range of fuels because fuel is injected into the hot combustion chamber. Jet and gas turbine (turboprop, helicopter) aircraft engines typically use lower cost fuels with higher flash points, which are less flammable and therefore safer to transport and handle. 
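The carbon-number ranges quoted above (kerosene-type roughly C8–C16, wide-cut roughly C5–C15) can be illustrated with a small sketch. The classifier below is purely illustrative; real fuels are defined by performance specifications, not composition, and the function name is ours.

```python
# Rough classifier based on the carbon-number ranges given in the text:
# kerosene-type jet fuel spans roughly C8-C16 per molecule, wide-cut
# (naphtha-type) roughly C5-C15. Illustration only -- actual fuels are
# defined by performance specs such as freezing point and smoke point.

def fuel_type(carbon_numbers):
    lo, hi = min(carbon_numbers), max(carbon_numbers)
    if 8 <= lo and hi <= 16:
        return "kerosene-type (e.g. Jet A/A-1, JP-5, JP-8)"
    if 5 <= lo and hi <= 15:
        return "wide-cut / naphtha-type (e.g. Jet B, JP-4)"
    return "outside typical jet fuel ranges"

print(fuel_type(range(9, 16)))   # kerosene-type (e.g. Jet A/A-1, JP-5, JP-8)
print(fuel_type(range(5, 15)))   # wide-cut / naphtha-type (e.g. Jet B, JP-4)
```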
The first axial compressor jet engine in widespread production and combat service, the Junkers Jumo 004 used on the Messerschmitt Me 262A fighter and the Arado Ar 234B jet recon-bomber, burned either a special synthetic "J2" fuel or diesel fuel. Gasoline was a third option but unattractive due to high fuel consumption. Other fuels used were kerosene or kerosene and gasoline mixtures. Pressure to move from conventional jet fuel to sustainable aviation fuel (also known as aviation biofuel) has existed since before the 2016 Paris Agreement. Standards Most jet fuels in use since the end of World War II are kerosene-based. Both British and American standards for jet fuels were first established at the end of World War II. British standards derived from standards for kerosene used for lamps (known as paraffin in the UK), whereas American standards derived from aviation gasoline practices. Over the subsequent years, details of specifications were adjusted, such as minimum freezing point, to balance performance requirements and availability of fuels. Very low freezing points reduce the availability of fuel. Higher flash point products required for use on aircraft carriers are more expensive to produce. In the United States, ASTM International produces standards for civilian fuel types, and the U.S. Department of Defense produces standards for military use. The British Ministry of Defence establishes standards for both civil and military jet fuels. For reasons of interoperability, British and United States military standards are harmonized to a degree. In Russia and the CIS members, grades of jet fuels are covered by the State Standard (GOST) number, or a Technical Condition number, with the principal grade available being TS-1.
Types Jet A/A-1 Jet A specification fuel has been used in the United States since the 1950s and is usually not available outside the United States and a few Canadian airports such as Toronto, Montreal, and Vancouver, whereas Jet A-1 is the standard specification fuel used in most of the rest of the world, the main exceptions being Russia and the CIS members, where the TS-1 fuel type is the most common standard. Both Jet A and Jet A-1 have a flash point higher than 38 °C (100 °F), with an autoignition temperature of 210 °C (410 °F). Differences between Jet A and Jet A-1 The differences between Jet A and Jet A-1 are twofold. The primary difference is the lower freezing point of Jet A-1 fuel: Jet A's is −40 °C (−40 °F); Jet A-1's is −47 °C (−53 °F). The other difference is the mandatory addition of an antistatic additive to Jet A-1 fuel. Jet A and Jet A-1 fuel trucks and storage tanks, as well as plumbing that carries them, are all marked "Jet A" or "Jet A-1" in white italicized text within a black rectangle background, adjacent to one or two diagonal black stripes. Jet A-1 fuel must meet DEF STAN 91-91 (Jet A-1), ASTM specification D1655 (Jet A-1), and IATA Guidance Material (Kerosene Type), NATO Code F-35. Jet A fuel must meet ASTM specification D1655 (Jet A). Jet B Jet B is a naphtha-kerosene fuel that is used for its enhanced cold-weather performance. However, Jet B's lighter composition makes it more dangerous to handle. For this reason, it is rarely used, except in very cold climates. A blend of approximately 30% kerosene and 70% gasoline, it is known as wide-cut fuel. It has a very low freezing point of −60 °C (−76 °F), and a low flash point as well. It is primarily used in northern Canada and Alaska, where the extreme cold makes its low freezing point necessary and helps mitigate the danger of its low flash point. GOST standards The GOST standard 10227 specifies civilian fuels, among them TS-1, T-1, T-1S, T-2 and RT.
Military fuels such as T-1pp, T-8V (aka T-8B) and T-6 are specified by GOST 12308. Icing inhibitors are specified by GOST 8313. Some researchers refer to T-6 as "ram rocket fuel"; others have patented a method used to produce T-1pp from a mixture of T-6 and RT, the latter of which has been characterized as "unified Russian fuel for sub- and supersonic aircraft". TS-1 TS-1 is a jet fuel made to Russian standard GOST 10227 for enhanced cold-weather performance. It has somewhat higher volatility than Jet A-1 (flash point 28 °C or 82 °F minimum). It has a very low freezing point, below −50 °C (−58 °F). Additives The DEF STAN 91-091 (UK) and ASTM D1655 (international) specifications allow for certain additives to be added to jet fuel, including:
Antioxidants to prevent gumming, usually based on alkylated phenols, e.g., AO-30, AO-31, or AO-37;
Antistatic agents, to dissipate static electricity and prevent sparking; Stadis 450, with dinonylnaphthylsulfonic acid (DINNSA) as a component, is an example;
Corrosion inhibitors, e.g., DCI-4A used for civilian and military fuels, and DCI-6A used for military fuels;
Fuel system icing inhibitor (FSII) agents, e.g., 2-(2-methoxyethoxy)ethanol (Di-EGME); FSII is often mixed at the point of sale so that users with heated fuel lines do not have to pay the extra expense;
Biocides, to remediate microbial (i.e., bacterial and fungal) growth in aircraft fuel systems. Two biocides were previously approved for use by most aircraft and turbine engine original equipment manufacturers (OEMs): Kathon FP1.5 Microbiocide and Biobor JF. Biobor JF is currently the only biocide available for aviation use; Kathon was discontinued by the manufacturer due to several airworthiness incidents and is now banned from use in aviation fuel;
Metal deactivators, which can be added to reduce the negative effects of trace metals on the thermal stability of the fuel. The one allowable additive is the chelating agent salpn (N,N′-bis(salicylidene)-1,2-propanediamine).
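The permitted additive categories above can be collected into a simple lookup, useful for sketching how an additive package might be recorded. The data mirrors the text; the structure and names are ours, not any real specification's format.

```python
# Additive categories allowed under DEF STAN 91-091 / ASTM D1655, with the
# example compounds named in the text. Illustrative data structure only --
# not an official specification format.

JET_FUEL_ADDITIVES = {
    "antioxidant": ["AO-30", "AO-31", "AO-37"],
    "antistatic": ["Stadis 450"],
    "corrosion inhibitor": ["DCI-4A", "DCI-6A"],
    "fuel system icing inhibitor": ["Di-EGME"],
    "biocide": ["Biobor JF"],  # Kathon FP1.5 is now banned from aviation use
    "metal deactivator": ["salpn"],
}

def additives_in_category(category):
    """Look up the example additives named in the text for a category."""
    return JET_FUEL_ADDITIVES.get(category, [])

print(additives_in_category("antistatic"))  # ['Stadis 450']
```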
As the aviation industry's jet kerosene demands have increased to more than 5% of all refined products derived from crude, it has been necessary for the refiner to optimize the yield of jet kerosene, a high-value product, by varying process techniques. New processes have allowed flexibility in the choice of crudes, the use of coal tar sands as a source of molecules and the manufacture of synthetic blend stocks. Due to the number and severity of the processes used, it is often necessary and sometimes mandatory to use additives. These additives may, for example, prevent the formation of harmful chemical species or improve a property of a fuel to prevent further engine wear. Water in jet fuel It is very important that jet fuel be free from water contamination. During flight, the temperature of the fuel in the tanks decreases, due to the low temperatures in the upper atmosphere. This causes precipitation of the dissolved water from the fuel. The separated water then drops to the bottom of the tank, because it is denser than the fuel. Since the water is no longer in solution, it can form droplets which can supercool to below 0 °C (32 °F). If these supercooled droplets collide with a surface they can freeze and may result in blocked fuel inlet pipes. This was the cause of the British Airways Flight 38 accident. Removing all water from fuel is impractical; therefore, fuel heaters are usually used on commercial aircraft to prevent water in fuel from freezing. There are several methods for detecting water in jet fuel. A visual check may detect high concentrations of suspended water, as this will cause the fuel to become hazy in appearance. An industry standard chemical test for the detection of free water in jet fuel uses a water-sensitive filter pad that turns green if the fuel exceeds the specification limit of 30 ppm (parts per million) free water. 
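The precipitation mechanism described above can be illustrated numerically: cooler fuel holds less dissolved water, and the excess separates as free water. Only that mechanism and the 30 ppm free-water limit come from the text; the solubility figures below are invented round numbers for illustration, not measured data.

```python
# Sketch of dissolved water dropping out of jet fuel as it cools in flight.
# The solubility table is ILLUSTRATIVE ONLY (assumed round numbers); the
# mechanism -- cooling forces dissolved water out of solution as free
# water -- is what the text describes.

SOLUBILITY_PPM = {20: 60, 0: 40, -20: 25, -40: 15}  # temp (deg C) -> ppm, assumed

def free_water_released(dissolved_ppm, temp_c):
    """ppm of water forced out of solution when fuel cools to temp_c."""
    capacity = SOLUBILITY_PPM[temp_c]
    return max(0, dissolved_ppm - capacity)

# Fuel saturated at 20 deg C, then cooled to -40 deg C at cruise altitude:
released = free_water_released(60, -40)
print(released)  # 45 -- well above the 30 ppm free-water spec limit
```

The released droplets can supercool and freeze on contact with surfaces, which is why fuel heaters are fitted rather than attempting to remove all water.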
A critical test to rate the ability of jet fuel to release emulsified water when passed through coalescing filters is ASTM standard D3948 Standard Test Method for Determining Water Separation Characteristics of Aviation Turbine Fuels by Portable Separometer. Military jet fuels Military organizations around the world use a different classification system of JP (for "Jet Propellant") numbers. Some are almost identical to their civilian counterparts and differ only by the amounts of a few additives; Jet A-1 is similar to JP-8, Jet B is similar to JP-4. Other military fuels are highly specialized products and are developed for very specific applications. JP-1 was an early jet fuel specified in 1944 by the United States government (AN-F-32). It was a pure kerosene fuel with a high flash point (relative to aviation gasoline) and a freezing point of −60 °C (−76 °F). The low freezing point requirement limited availability of the fuel and it was soon superseded by other "wide cut" jet fuels which were kerosene-naphtha or kerosene-gasoline blends. It was also known as avtur. JP-2 was an obsolete type developed during World War II. JP-2 was intended to be easier to produce than JP-1 since it had a higher freezing point, but was never widely used. JP-3 was an attempt to improve availability of the fuel compared to JP-1 by widening the cut and loosening tolerances on impurities to ensure ready supply. In his book Ignition! An Informal History of Liquid Rocket Propellants, John D. Clark described the specification as, "remarkably liberal, with a wide cut (range of distillation temperatures) and with such permissive limits on olefins and aromatics that any refinery above the level of a Kentucky moonshiner's pot still could convert at least half of any crude to jet fuel". It was even more volatile than JP-2 and had high evaporation loss in service. JP-4 was a 50-50 kerosene-gasoline blend. It had a lower flash point than JP-1, but was preferred because of its greater availability.
It was the primary United States Air Force jet fuel between 1951 and 1995. Its NATO code is F-40. It is also known as avtag. JP-5 is a yellow kerosene-based jet fuel developed in 1952 for use in aircraft stationed aboard aircraft carriers, where the risk from fire is particularly great. JP-5 is a complex mixture of hydrocarbons, containing alkanes, naphthenes, and aromatic hydrocarbons, that weighs 6.8 pounds per U.S. gallon (0.81 kg/L) and has a high flash point (min. 60 °C or 140 °F). Because some US naval air stations, Marine Corps air stations and Coast Guard air stations host both sea and land based naval aircraft, these installations will also typically fuel their shore-based aircraft with JP-5, thus precluding the need to maintain separate fuel facilities for JP-5 and non-JP-5 fuel. The Chinese navy's equivalent fuel is named RP-5. JP-5's freezing point is −46 °C (−51 °F). It does not contain antistatic agents. JP-5 is also known as NCI-C54784. JP-5's NATO code is F-44. It is also called AVCAT fuel, for Aviation Carrier Turbine fuel. The JP-4 and JP-5 fuels, covered by MIL-DTL-5624 and meeting the British Specification DEF STAN 91-86 AVCAT/FSII (formerly DERD 2452), are intended for use in aircraft turbine engines. These fuels require unique additives that are necessary for military aircraft and engine fuel systems. JP-6 was developed for the General Electric YJ93 afterburning turbojet engines used in the North American XB-70 Valkyrie for sustained flight at Mach 3. It was similar to JP-5 but with a lower freezing point and improved thermal oxidative stability. When the XB-70 program was cancelled, the JP-6 specification, MIL-J-25656, was also cancelled. JP-7 was developed for the Pratt & Whitney J58 afterburning turbojet engines used in the Lockheed SR-71 Blackbird for sustained flight at Mach 3+. It had a high flash point required to prevent boiloff caused by aerodynamic heating.
Its thermal stability was high enough to prevent coke and varnish deposits when used as a heat-sink for aircraft air conditioning and hydraulic systems and engine accessories. JP-8 is a jet fuel specified and used widely by the U.S. military. It is specified by MIL-DTL-83133 and British Defence Standard 91-87. JP-8 is a kerosene-based fuel, projected to remain in use at least until 2025. The United States military uses JP-8 as a "universal fuel" in both turbine-powered aircraft and diesel-powered ground vehicles. It was first introduced at NATO bases in 1978. Its NATO code is F-34. JP-9 is a gas turbine fuel for missiles, specifically the Tomahawk cruise missile, containing the TH-dimer (tetrahydrodimethyldicyclopentadiene) produced by catalytic hydrogenation of methylpentadiene dimer. JP-10 is a gas turbine fuel for missiles, specifically the AGM-86 ALCM cruise missile. It contains a mixture of (in decreasing order) endo-tetrahydrodicyclopentadiene, exo-tetrahydrodicyclopentadiene (a synthetic fuel), and adamantane. It is produced by catalytic hydrogenation of dicyclopentadiene. It superseded JP-9 fuel, achieving a lower low-temperature service limit. It is also used by the Tomahawk jet-powered subsonic cruise missile. JPTS, officially known as "Thermally Stable Jet Fuel", was a combination of LF-1 charcoal lighter fluid and an additive to improve thermal oxidative stability. It was developed in 1956 for the Pratt & Whitney J57 engine which powered the Lockheed U-2 spy plane. Zip fuel designates a series of experimental boron-containing "high energy fuels" intended for long range aircraft. The toxicity and undesirable residues of the fuel made it difficult to use. The development of the ballistic missile removed the principal application of zip fuel. Syntroleum has been working with the USAF to develop a synthetic jet fuel blend that will help them reduce their dependence on imported petroleum.
The USAF, which is the United States military's largest user of fuel, began exploring alternative fuel sources in 1999. On December 15, 2006, a B-52 took off from Edwards Air Force Base for the first time powered solely by a 50–50 blend of JP-8 and Syntroleum's FT fuel. The seven-hour flight test was considered a success. The goal of the flight test program was to qualify the fuel blend for fleet use on the service's B-52s, and then flight test and qualification on other aircraft. Piston engine use Jet fuel is very similar to diesel fuel, and in some cases, may be used in diesel engines. The possibility of environmental legislation banning the use of leaded avgas (fuel for spark-ignition internal combustion engines, which usually contains tetraethyllead (TEL), a toxic substance added to prevent engine knocking), and the lack of a replacement fuel with similar performance, has left aircraft designers and pilots' organizations searching for alternative engines for use in small aircraft. As a result, a few aircraft engine manufacturers, most notably Thielert and Austro Engine, have begun offering aircraft diesel engines which run on jet fuel, which may simplify airport logistics by reducing the number of fuel types required. Jet fuel is available in most places in the world, whereas avgas is only widely available in a few countries which have a large number of general aviation aircraft. A diesel engine may be more fuel-efficient than an avgas engine. However, very few diesel aircraft engines have been certified by aviation authorities. Diesel aircraft engines are uncommon today, even though opposed-piston aviation diesel powerplants such as the Junkers Jumo 205 family had been used during the Second World War. Jet fuel is often used in diesel-powered ground-support vehicles at airports. However, jet fuel tends to have poor lubricating ability in comparison to diesel, which increases wear in fuel injection equipment. An additive may be required to restore its lubricity.
Jet fuel is more expensive than diesel fuel, but the logistical advantages of using one fuel can offset the extra expense of its use in certain circumstances. Jet fuel contains more sulfur, up to 1,000 ppm, which means it has better lubricity and does not currently require a lubricity additive as all pipeline diesel fuels do. The introduction of Ultra Low Sulfur Diesel, or ULSD, brought with it the need for lubricity modifiers. Pipeline diesels before ULSD were able to contain up to 500 ppm of sulfur and were called Low Sulfur Diesel or LSD. In the United States, LSD is now only available to the off-road construction, locomotive and marine markets. As more EPA regulations are introduced, more refineries are hydrotreating their jet fuel production, thus limiting the lubricating abilities of jet fuel, as determined by ASTM Standard D445. JP-8, which is similar to Jet A-1, is used in NATO diesel vehicles as part of the single-fuel policy. Synthetic jet fuel Fischer–Tropsch (FT) Synthesized Paraffinic Kerosene (SPK) synthetic fuels are certified for use in United States and international aviation fleets at up to 50% in a blend with conventional jet fuel. As of the end of 2017, four other pathways to SPK are certified, with their designations and maximum blend percentage in brackets: Hydroprocessed Esters and Fatty Acids (HEFA SPK, 50%); synthesized iso-paraffins from hydroprocessed fermented sugars (SIP, 10%); synthesized paraffinic kerosene plus aromatics (SPK/A, 50%); alcohol-to-jet SPK (ATJ-SPK, 30%). Both FT and HEFA based SPKs blended with JP-8 are specified in MIL-DTL-83133H. Some synthetic jet fuels show a reduction in pollutants such as SOx, NOx, particulate matter, and sometimes carbon emissions. It is envisaged that usage of synthetic jet fuels will improve air quality around airports, which will be particularly advantageous at inner city airports.
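The certified blend ceilings listed above lend themselves to a quick validation sketch. The percentages come straight from the text; the function and data structure are illustrative, not a real certification tool.

```python
# Maximum certified blend percentages for synthetic paraffinic kerosene (SPK)
# pathways, as listed in the text. Checking a proposed blend against the
# ceiling is then a one-line comparison.

MAX_BLEND_PCT = {
    "FT-SPK": 50,      # Fischer-Tropsch
    "HEFA-SPK": 50,    # hydroprocessed esters and fatty acids
    "SIP": 10,         # synthesized iso-paraffins
    "SPK/A": 50,       # SPK plus aromatics
    "ATJ-SPK": 30,     # alcohol-to-jet
}

def blend_is_certified(pathway, percent):
    """True if the proposed synthetic fraction is within the certified cap."""
    return percent <= MAX_BLEND_PCT[pathway]

print(blend_is_certified("FT-SPK", 50))  # True  (50:50 blends are allowed)
print(blend_is_certified("SIP", 25))     # False (SIP is capped at 10%)
```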
Qatar Airways became the first airline to operate a commercial flight on a 50:50 blend of synthetic Gas to Liquid (GTL) jet fuel and conventional jet fuel. The natural gas derived synthetic kerosene for the six-hour flight from London to Doha came from Shell's GTL plant in Bintulu, Malaysia. The world's first passenger aircraft flight to use only synthetic jet fuel was from Lanseria International Airport to Cape Town International Airport on September 22, 2010. The fuel was developed by Sasol. Chemist Heather Willauer is leading a team of researchers at the U.S. Naval Research Laboratory who are developing a process to make jet fuel from seawater. The technology requires an input of electrical energy to separate carbon dioxide (CO2) and hydrogen (H2) gas from seawater, followed by an oligomerization step wherein carbon monoxide (CO) and hydrogen are recombined into long-chain hydrocarbons over an iron-based catalyst, using zeolite as the oligomerization catalyst. The technology is expected to be deployed in the 2020s by U.S. Navy warships, especially nuclear-powered aircraft carriers. On February 8, 2021, the world's first scheduled passenger flight flew with some synthetic kerosene from a non-fossil fuel source. 500 liters of synthetic kerosene was mixed with regular jet fuel. The synthetic kerosene was produced by Shell and the flight was operated by KLM.
With the B-52 now approved to use the FT blend, the USAF will use the test protocols developed during the program to certify the Boeing C-17 Globemaster III and then the Rockwell B-1B Lancer to use the fuel. To test these two aircraft, the USAF has ordered a further supply of FT fuel. The USAF intends to test and certify every airframe in its inventory to use the fuel by 2011. It will also supply fuel to NASA for testing in various aircraft and engines. The USAF has certified the B-1B, B-52H, C-17, Lockheed Martin C-130J Super Hercules, McDonnell Douglas F-4 Phantom (as QF-4 target drones), McDonnell Douglas F-15 Eagle, Lockheed Martin F-22 Raptor, and Northrop T-38 Talon to use the synthetic fuel blend. The U.S. Air Force's C-17 Globemaster III, F-16 and F-15 are certified for use of hydrotreated renewable jet fuels. The USAF plans to certify over 40 models for fuels derived from waste oils and plants by 2013. The U.S. Army is considered one of the few customers of biofuels large enough to potentially bring biofuels up to the volume production needed to reduce costs. The U.S. Navy has also flown a Boeing F/A-18E/F Super Hornet dubbed the "Green Hornet" at 1.7 times the speed of sound using a biofuel blend. The Defense Advanced Research Projects Agency (DARPA) funded a $6.7 million project with Honeywell UOP to develop technologies to create jet fuels from biofeedstocks for use by the United States and NATO militaries. In April 2011, four USAF F-15E Strike Eagles flew over the Philadelphia Phillies opening ceremony using a blend of traditional jet fuel and synthetic biofuels. This made history as the first Department of Defense flyover to use biofuels. Jet biofuels The air transport industry is responsible for 2–3 percent of man-made carbon dioxide emitted. Boeing estimates that biofuels could reduce flight-related greenhouse-gas emissions by 60 to 80 percent.
One possible solution which has received more media coverage than others would be blending synthetic fuel derived from algae with existing jet fuel: Green Flight International became the first airline to fly jet aircraft on 100% biofuel. The flight from Reno Stead Airport in Stead, Nevada was in an Aero L-29 Delfín piloted by Carol Sugars and Douglas Rodante. Boeing and Air New Zealand are collaborating with Tecbio Aquaflow Bionomic and other jet biofuel developers around the world. Virgin Atlantic successfully tested a biofuel blend consisting of 20 percent babassu nuts and coconut and 80 percent conventional jet fuel, which was fed to a single engine on a 747 flight from London Heathrow to Amsterdam Schiphol. A consortium consisting of Boeing, NASA's Glenn Research Center, MTU Aero Engines (Germany), and the U.S. Air Force Research Laboratory is working on development of jet fuel blends containing a substantial percentage of biofuel. British Airways and Velocys have entered into a partnership in the UK to design a series of plants that convert household waste into jet fuel. 24 commercial and military biofuel flights have taken place using Honeywell “Green Jet Fuel,” including a Navy F/A-18 Hornet. In 2011, United Continental Holdings was the first United States airline to fly passengers on a commercial flight using a blend of sustainable, advanced biofuels and traditional petroleum-derived jet fuel. Solazyme developed the algae oil, which was refined utilizing Honeywell's UOP process technology, into jet fuel to power the commercial flight. Solazyme produced the world's first 100 percent algae-derived jet fuel, Solajet, for both commercial and military applications. Oil prices increased about fivefold from 2003 to 2008, raising fears that world petroleum production is becoming unable to keep up with demand. The fact that there are few alternatives to petroleum for aviation fuel adds urgency to the search for alternatives. 
Twenty-five airlines were bankrupted or stopped operations in the first six months of 2008, largely due to fuel costs. In 2015, ASTM approved a modification to Specification D1655 Standard Specification for Aviation Turbine Fuels to permit up to 50 ppm (50 mg/kg) of FAME (fatty acid methyl ester) in jet fuel, allowing higher cross-contamination from biofuel production. Worldwide consumption of jet fuel Worldwide demand for jet fuel has been steadily increasing since 1980. Consumption nearly tripled in 30 years, from 1,837,000 barrels/day in 1980 to 5,220,000 in 2010. Around 30% of the worldwide consumption of jet fuel is in the US (1,398,130 barrels/day in 2012). Taxation Article 24 of the Chicago Convention on International Civil Aviation of 7 December 1944 stipulates that when flying from one contracting state to another, the kerosene already on board an aircraft may not be taxed by the state where the aircraft lands, nor by a state through whose airspace the aircraft has flown. This is to prevent double taxation. It is sometimes suggested that the Chicago Convention precludes the taxation of aviation fuel. However, this is not correct: the Chicago Convention does not preclude a kerosene tax on domestic flights or on refuelling before international flights. Article 15 of the Chicago Convention is also sometimes said to ban fuel taxes. Article 15 states: "No fees, dues or other charges shall be imposed by any contracting State in respect solely of the right of transit over or entry into or exit from its territory of any aircraft of a contracting State or persons or property thereon." However, ICAO distinguishes between charges and taxes, and Article 15 does not prohibit the levying of taxes without a service provided. In the European Union, commercial aviation fuel is exempt from taxation under the 2003 Energy Taxation Directive. EU member states may tax jet fuel via bilateral agreements; however, no such agreements exist.
In the United States, most states tax jet fuel. Health effects General health hazards associated with exposure to jet fuel vary according to its components, exposure duration (acute vs. long-term), route of administration (dermal vs. respiratory vs. oral), and exposure phase (vapor vs. aerosol vs. raw fuel). Kerosene-based hydrocarbon fuels are complex mixtures that may contain more than 260 aliphatic and aromatic hydrocarbon compounds, including toxicants such as benzene, n-hexane, toluene, xylenes, trimethylpentane, methoxyethanol, and naphthalenes. While time-weighted average hydrocarbon fuel exposures can often be below recommended exposure limits, peak exposures can occur, and the health impact of occupational exposures is not fully understood. Evidence of the health effects of jet fuels comes from reports on both temporary and persisting biological effects from acute, subchronic, or chronic exposure of humans or animals to kerosene-based hydrocarbon fuels, the constituent chemicals of these fuels, or fuel combustion products. The effects studied include: cancer, skin conditions, respiratory disorders, immune and hematological disorders, neurological effects, visual and hearing disorders, renal and hepatic diseases, cardiovascular conditions, gastrointestinal disorders, and genotoxic and metabolic effects.
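The consumption figures quoted in the jet fuel section above can be sanity-checked with a few lines of arithmetic. This sketch uses only the numbers given in the text; the variable names are illustrative:

```python
# Sanity check of the jet fuel consumption figures cited in the text.
world_1980 = 1_837_000   # barrels/day, worldwide, 1980
world_2010 = 5_220_000   # barrels/day, worldwide, 2010
us_2012 = 1_398_130      # barrels/day, United States, 2012

growth = world_2010 / world_1980   # growth factor over 30 years
us_share = us_2012 / world_2010    # US share against the 2010 world figure

print(f"1980->2010 growth factor: {growth:.2f}")   # ~2.84x over 30 years
print(f"US share of consumption:  {us_share:.0%}") # ~27%, in line with "around 30%"
```

The US share is computed against the 2010 world figure because no 2012 world total is given in the text, so it slightly overstates the true 2012 share.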
Technology
Fuel
null
1205637
https://en.wikipedia.org/wiki/Pyridoxine
Pyridoxine
Pyridoxine (PN) is a form of vitamin B6 found commonly in food and used as a dietary supplement. As a supplement it is used to treat and prevent pyridoxine deficiency, sideroblastic anaemia, pyridoxine-dependent epilepsy, certain metabolic disorders, side effects or complications of isoniazid use, and certain types of mushroom poisoning. It is used by mouth or by injection. It is usually well tolerated. Occasionally side effects include headache, numbness, and sleepiness. Normal doses are safe during pregnancy and breastfeeding. Pyridoxine is in the vitamin B family of vitamins. It is required by the body to metabolise amino acids, carbohydrates, and lipids. Sources in the diet include meat, fish, fruit, vegetables, and grain. Medical uses As a treatment (oral or injection), it is used to treat or prevent pyridoxine deficiency, sideroblastic anaemia, pyridoxine-dependent epilepsy, certain metabolic disorders, side effects of isoniazid treatment and certain types of mushroom poisoning. Isoniazid is an antibiotic used for the treatment of tuberculosis. Common side effects include numbness in the hands and feet. Co-treatment with vitamin B6 alleviates the numbness. Pyridoxine-dependent epilepsy is a type of rare infant epilepsy that does not improve with typical anti-seizure medications. Pyridoxine in combination with doxylamine is used as a treatment for morning sickness in pregnant women. Side effects It is usually well tolerated, though overdose toxicity is possible. Occasionally side effects include headache, numbness, and sleepiness. Pyridoxine overdose can cause a peripheral sensory neuropathy characterized by poor coordination, numbness, and decreased sensation to touch, temperature, and vibration. Healthy human blood levels of pyridoxine are 2.1–21.7 ng/mL. Normal doses are safe during pregnancy and breastfeeding. Mechanism Pyridoxine is in the vitamin B family of vitamins. It is required by the body to metabolise amino acids, carbohydrates, and lipids.
Sources in the diet include fruit, vegetables, and grain. It is also required for muscle phosphorylase activity associated with glycogen metabolism. Metabolism The half-life of pyridoxine varies according to different sources: one source suggests that it is up to 20 days, while another indicates that the half-life of vitamin B6 is in the range of 25 to 33 days. Taken together, these sources suggest that the half-life of pyridoxine is typically measured in weeks. History Pyridoxine was discovered in 1934, isolated in 1938, and first made in 1939. It is on the World Health Organization's List of Essential Medicines. Pyridoxine is available both as a generic medication and as an over-the-counter product. In some countries, foods such as breakfast cereal have pyridoxine added.
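The half-life figures above imply simple first-order (exponential) elimination. A minimal sketch of what they mean in practice, using the two estimates cited in the text; the function is illustrative, not taken from any pharmacokinetics library:

```python
def fraction_remaining(t_days, half_life_days):
    """Fraction of an initial amount still present after t_days,
    assuming first-order (exponential) elimination."""
    return 0.5 ** (t_days / half_life_days)

# Compare the half-life estimates cited in the text after 60 days:
for hl in (20, 25, 33):
    print(f"half-life {hl:2d} d -> {fraction_remaining(60, hl):.3f} remaining after 60 days")
```

With a 20-day half-life, exactly three half-lives elapse in 60 days, leaving one eighth (0.125) of the initial amount; the longer estimates leave correspondingly more.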
Biology and health sciences
Vitamins
Health
1205877
https://en.wikipedia.org/wiki/Defoliant
Defoliant
A defoliant is any herbicidal chemical sprayed or dusted on plants to cause their leaves to fall off. Defoliants are widely used for the selective removal of weeds in managing croplands and lawns. Worldwide use of defoliants, along with the development of other herbicides and pesticides, allowed for the Green Revolution, an increase in agricultural production in the mid-20th century. Defoliants have also been used in warfare as a means to deprive an enemy of food crops and/or hiding cover, most notably by the United Kingdom during the Malayan Emergency and the United States in the Vietnam War. Defoliants were also used by Indonesian forces in various internal security operations. Use and application A primary application of defoliants is the selective killing of plants. Two of the oldest chemical herbicides used as defoliants are 2,4-Dichlorophenoxyacetic acid (2,4-D) and 2,4,5-Trichlorophenoxyacetic acid (2,4,5-T). 2,4-D and 2,4,5-T are absorbed by broad-leafed plants, killing them by causing excessive hormonal growth. These phenoxy herbicides were designed to selectively kill weeds and unwanted plants in croplands. They were first introduced at the beginning of World War II and became widely used in agriculture after the end of the war. Defoliants have a practical use in the harvesting of certain crops, particularly cotton, in the United States as well as a number of other cotton-producing countries. The use of defoliants aids in the effective harvesting of cotton and improves lint quality. The effectiveness of defoliant use in cotton harvesting depends on the type of defoliant(s) used, the number of applications, the amount applied, and environmental variables. Common harvest-aiding chemical defoliants include tribufos, dimethipin, and thidiazuron. According to a 1998 report by the U.S. 
Department of Agriculture National Agricultural Statistics Service (NASS), tribufos and thidiazuron accounted for 60% of crop area that was treated by defoliants during that crop year. Examples of defoliants In Southeast Asia during the Vietnam War, the Rainbow Herbicides were a group of tactical-use chemicals used by the United States military. The environmental destruction caused by this defoliation has been described by Swedish Prime Minister Olof Palme, lawyers, historians and other academics as an ecocide. Health and environmental effects In 1998, the U.S. Environmental Protection Agency (U.S. EPA) concluded that the use of agricultural defoliants led to increased risks of water contamination and dangers to freshwater and marine life. High doses of tribufos were labeled as a possible carcinogen and a toxin to freshwater and marine invertebrates. Dimethipin has also been labeled as a possible human carcinogen. A published study in the Journal of Agricultural and Food Chemistry reported that through successive surface runoff events in defoliated cotton fields, defoliant concentrations decreased exponentially within the test area and could negatively affect marine life in the runoff zones. Agent Orange, a defoliant used by the United Kingdom during the Malayan Emergency in the 1950s and the United States during the Vietnam War to defoliate regions of Vietnam from 1961 to 1971, has been linked to several long-term health issues. Agent Orange contains a mixture of 2,4-D and 2,4,5-T as well as dioxin contaminants. Members of the Air Force Ranch Hand and the Army Chemical Corps who served in the Vietnam War and were occupationally exposed to Agent Orange have a higher incidence of diabetes, heart disease, hypertension, and chronic respiratory diseases. Among other occupations, farmers are at a significantly higher risk of developing Alzheimer's disease due to a greater chance of defoliant exposure.
Technology
Pest and disease control
null
4639256
https://en.wikipedia.org/wiki/Tiktaalik
Tiktaalik
Tiktaalik (; Inuktitut ) is a monospecific genus of extinct sarcopterygian (lobe-finned fish) from the Late Devonian Period, about 375 Mya (million years ago), having many features akin to those of tetrapods (four-legged animals). Tiktaalik is estimated to have had a total length of on the basis of various specimens. Unearthed in Arctic Canada, Tiktaalik is a non-tetrapod member of Osteichthyes (bony fish), complete with scales and gills—but it has a triangular, flattened head and unusual, cleaver-shaped fins. Its fins have thin ray bones for paddling like most fish, but they also have sturdy interior bones that would have allowed Tiktaalik to prop itself up in shallow water and use its limbs for support as most four-legged animals do. Those fins and other mixed characteristics mark Tiktaalik as a crucial transition fossil, a link in evolution from swimming fish to four-legged vertebrates. This and similar animals might be the common ancestors of all vertebrate terrestrial fauna: amphibians, reptiles, birds and mammals. The first Tiktaalik fossils were found in 2004 on Ellesmere Island in Nunavut, Canada. The discovery, made by Edward B. Daeschler of the Academy of Natural Sciences, Neil H. Shubin from the University of Chicago, and Harvard University Professor Farish A. Jenkins Jr., was published in the April 6, 2006 issue of Nature and quickly recognized as a transitional form. Discovery In 2004, three fossilized Tiktaalik skeletons were discovered in the Late Devonian fluvial Fram Formation on Ellesmere Island, Nunavut, in northern Canada. Estimated ages were reported at 375 Ma, 379 Ma and 383 Ma. At the time of the species' existence, Ellesmere Island was part of the continent Laurentia (modern eastern North America and Greenland), which was centered on the equator and had a warm climate. When discovered, one of the skulls was found sticking out of a cliff. 
Upon further inspection, the fossil was found to be in excellent condition for a 375-million-year-old specimen. The discovery by Daeschler, Shubin and Jenkins was published in the April 6, 2006 issue of Nature and quickly recognized as a transitional form. Jennifer A. Clack, a Cambridge University expert on tetrapod evolution, said of Tiktaalik, "It's one of those things you can point to and say, 'I told you this would exist,' and there it is." Tiktaalik is an Inuktitut word meaning "large freshwater fish". The "fishapod" genus received this name after a suggestion by Inuit elders of Canada's Nunavut Territory, where the fossil was discovered. The specific name roseae honours an anonymous donor. Taking a detailed look at the internal head skeleton of Tiktaalik roseae, in the October 16, 2008, issue of Nature, researchers show how Tiktaalik was gaining structures that could allow it to support itself on solid ground and breathe air, a key intermediate step in the transformation of the skull that accompanied the shift to life on land by our distant ancestors. More than 60 specimens of Tiktaalik have been discovered, though the holotype remains the most complete and well-described fossil. Description Tiktaalik provides insights on the features of the extinct closest relatives of the tetrapods. Tiktaalik was a large fish: the largest known fossils have an estimated length of , with the longest lower jaws reaching a length of . Skull and neck The skull of Tiktaalik was low and flat, more similar in shape to that of a crocodile than most fish. The rear edge of the skull was excavated by a pair of indentations known as otic notches. These notches may have housed spiracles on the top of the head, which suggest the creature had primitive lungs as well as gills. Tiktaalik also lacked a characteristic most fishes have—bony plates in the gill area that restrict lateral head movement. 
This makes Tiktaalik the earliest-known fish to have a neck, with the pectoral (shoulder) girdle separate from the skull. This would give the creature more freedom in hunting prey on land or in the shallows. Forelimbs The "fins" of Tiktaalik have helped to contextualize the origin of weight-bearing limbs and digits. The fin has both a robust internal skeleton, like tetrapods, and a surrounding web of simple bony fin rays (lepidotrichia), like fish. The lepidotrichia are thickest and most extensive on the front edge and upper side of the fin, leaving more room for muscle and skin on the underside of the fin. The pectoral fin was clearly weight bearing, being attached to a massive shoulder girdle with expanded scapular and coracoid elements attached to the body armor. Moreover, there are large muscle scars on the underside of the forefin bones, and the distal joints of the wrist are highly mobile. Together, these suggest that the fin was both muscular and had the ability to flex like a wrist joint. These wrist-like features would have helped anchor the creature to the bottom in a fast current. One of the persistent questions facing paleontologists is the evolution of the tetrapod limb: specifically, how the internal bones of lobed fins evolved into the feet and toes of tetrapods. In many lobe-finned fish, including living coelacanths and the Australian lungfish, the fin skeleton is based around a straight string of midline bones, making up the metapterygial axis. The component bones of the axis are known as axials or mesomeres. The axis is flanked by one or two series of rod-like bones known as radials. Radials can be characterized as preaxial (in front of the axials) or postaxial (behind the axials). This semi-symmetrical structure is difficult to homologize with the more splayed lower limbs of tetrapods. Tiktaalik retains a metapterygial axis with distinctly enlarged axial bones, a very fish-like condition. 
Even Panderichthys, which is otherwise more fish-like, seems to be more advanced towards a tetrapod-like limb. Nevertheless, the internal skeleton of the pectoral fin can still be equated to the forelimb bones of tetrapods. The first axial, at the base of the fin, has developed into the humerus, the single large bone making up the stylopodium (upper arm). This is followed by the two bones of the zeugopodium (forearm): the radius (i.e., the first preaxial radial) and ulna (i.e., the second axial). The radius is much larger than the ulna, and its front edge thins into a sharp blade like that of Panderichthys. Further down, the internal skeleton transitions into the mesopodium, which in tetrapods contains the bones of the wrist. Tiktaalik has two large wrist bones: the narrow intermedium (i.e., the second preaxial radial) and the blocky ulnare (i.e., the third axial). In tetrapods, the wrist is followed by the hand and finger bones. The origin of these bones has long been a topic of contention. In the early 20th century, most paleontologists considered the digits to develop symmetrically from the distal fin radials. Another school of thought, popularized in the 1940s, is that the hand was neomorphic. This means that it was an entirely new structure that spontaneously evolved once the distal axials and radials were reduced. A third hypothesis, emphasized by Shubin and Alberch (1986), is that digits are homologous to postaxial radials in particular. This interpretation, better known as the digital arch model, is supported by numerous developmental studies. A consistent set of Hox genes are responsible for moderating both the rear edge of the fin (in several modern fish) and the digits of modern tetrapods as their embryos develop. The digital arch model posits that the metapterygial axis was bent forwards at a sharp angle near the origin of tetrapods. 
This allowed the axials to transform into wrist bones, while the narrower postaxial radials splay out and evolve into fingers. Tiktaalik presents a contradictory set of traits. As predicted by the digital arch model, there are multiple (at least eight) rectangular distal radials arranged in a dispersed pattern, similar to fingers. Some of the radials are even arranged sequentially, akin to finger joints. However, the metapterygial axis is straight and runs down the middle of the fin. Only three of the finger-like radials are postaxial, while the model predicts that most or all of the radials should be postaxial. It remains to be seen whether any of the distal radials of Tiktaalik are homologous to fingers. Finger-like distal radials are also known in other elpistostegalians: Panderichthys (which has at least four) and Elpistostege (which has 19). Hip and hindlimbs As with other regions of the body, the pelvis (hip) was intermediate in form between earlier lobe-finned fish (like Gooloogongia and Eusthenopteron) and tetrapods (like Acanthostega). The pelvis was much larger than in other fish, nearly the same size as the shoulder girdle, like tetrapods. In terms of shape, the pelvis is a single bone, much more similar to fish. There is a broad upper iliac blade continuous with a low semi-cartilaginous pubic process in front of the acetabulum (hip socket). This contrasts with the more complex pelvis of tetrapods, which have three separate bones (the ilium, pubis and ischium) making up the hip. In addition, in tetrapods the left and right pelvises often connect to each other or the spinal column, while in Tiktaalik each side of the pelvis is fully separate. The orientation of the hip socket is halfway between the rear-facing socket of other fish and the sideways-facing socket of tetrapods. The hindlimbs, also known as pelvic fins, appear to be almost as long as the forelimbs. This is yet another trait more similar to tetrapods than to other fish. 
Though not all bones are preserved in the fossil, it is clear that the hindlimbs of Tiktaalik had lepidotrichia and at least three large rod-like ankle bones. If fully preserved, the pelvic fins would probably have been internally and externally very similar to the pectoral fins. Torso The torso of Tiktaalik is elongated by the standards of most Devonian tetrapodomorphs. Although the vertebrae are not ossified, there are about 45 pairs of ribs between the skull and the hip region. The ribs are larger than in earlier fish, imbricating (overlapping) via blade-like flanges. Imbricating ribs are also known in Ichthyostega, though in that taxon the ribs are more diverse in shape. Tiktaalik most likely lacked dorsal fins, like other elpistostegalians as well as tetrapods. The shape of the tail and caudal fin are unknown, since that portion of the skeleton has not been preserved. Many lobe-finned fish have a single anal fin on the underside of the tail, behind the pelvic fins. While not reported in Tiktaalik, an anal fin can be observed in Elpistostege, a close relative. Tiktaalik was covered by rhombic (diamond-shaped) bony scales, most similar to Panderichthys among lobe-finned fish. The scales are roughly textured, slightly broader than long, and overlap from front-to-back. Strong lungs (as supported by the plausible presence of a spiracle) may have led to the evolution of a more robust ribcage, a key evolutionary trait of land-living creatures. The more robust ribcage of Tiktaalik would have helped support the animal's body any time it ventured outside a fully aquatic habitat. 
Tiktaalik is sometimes compared to gars (especially the alligator gar), with whom it shares a number of characteristics: Diamond-shaped scale patterns common to the Crossopterygii class (in both species scales are rhombic, overlapping and tuberculated); Teeth structured in two rows; Both internal and external nostrils; Tubular and streamlined body; Absence of anterior dorsal fin; Broad, dorsoventrally compressed skull; Paired frontal bones; Marginal nares; Subterminal mouth; Lung-like organ. Classification and evolution Tiktaalik roseae is the only species classified under the genus. Tiktaalik lived approximately 375 million years ago. It is representative of the transition between non-tetrapod vertebrates (fish) such as Panderichthys, known from fossils 380 million years old, and early tetrapods such as Acanthostega and Ichthyostega, known from fossils about 365 million years old. Its mixture of primitive fish and derived tetrapod characteristics led one of its discoverers, Neil Shubin, to characterize Tiktaalik as a "fishapod". Tiktaalik is a transitional fossil; it is to tetrapods what Archaeopteryx is to birds, troodonts and dromaeosaurids. While it may be that neither is ancestor to any living animal, they serve as evidence that intermediates between very different types of vertebrates did once exist. The mixture of both fish and tetrapod characteristics found in Tiktaalik include these traits: Fish Fish gills Fish scales Fish fins "Fishapod" Half-fish, half-tetrapod limb bones and joints, including a functional wrist joint and radiating, fish-like fins instead of toes Half-fish, half-tetrapod ear region Tetrapod Tetrapod rib bones Tetrapod mobile neck with separate pectoral girdle Tetrapod lungs Classification history 2006–2010: Elpistostegids as tetrapod ancestors The phylogenetic analysis of Daeschler et al. (2006) placed Tiktaalik as a sister taxon to Elpistostege and directly above Panderichthys, which was preceded by Eusthenopteron. 
Tiktaalik was thus inserted below Acanthostega and Ichthyostega, acting as a transitional form between limbless fish and limbed vertebrates ("tetrapods"). Some press coverage also used the term "missing link", implying that Tiktaalik filled an evolutionary gap between fish and tetrapods. Nevertheless, Tiktaalik has never been claimed to be a direct ancestor to tetrapods. Rather, its fossils help to illuminate evolutionary trends and approximate the hypothetical true ancestor to the tetrapod lineage, which would have been similar in form and ecology. In its original description, Tiktaalik was described as a member of Elpistostegalia, a name previously used to refer to particularly tetrapod-like fish such as Elpistostege and Panderichthys. Daeschler et al. (2006) recognized that this term referred to a paraphyletic grade of fish incrementally closer to tetrapods. Elpistostegalian fish have few unique traits which are not retained from earlier fish or inherited by later tetrapods. In response, Daeschler et al. (2006) redefined Elpistostegalia as a clade, including all vertebrates descended from the common ancestor of Panderichthys, Elpistostege and tetrapods. Nevertheless, they still retained the phrase "elpistostegalian fish" to refer to the grade of early elpistostegalians which had not acquired limbs, digits, or other specializations which define tetrapods. In this sense, Tiktaalik is an elpistostegalian fish. Later papers also use the term "elpistostegid" for the same category of Devonian fish. This order of the phylogenetic tree was initially adopted by other experts, most notably by Per Ahlberg and Jennifer Clack. However, it was questioned in a 2008 paper by Boisvert et al., who noted that Panderichthys, due to its more derived distal forelimb structure, might be closer to tetrapods than Tiktaalik or even that it was convergent with tetrapods. 
Ahlberg, co-author of the study, considered the possibility of Tiktaalik's fin having been "an evolutionary return to a more primitive form." 2010–present: Doubts over tetrapod ancestry The proposed origin of tetrapods among elpistostegalian fish was called into question by a discovery made in the Holy Cross Mountains of Poland. In January 2010, a group of paleontologists (including Ahlberg) published on a series of trackways from the Eifelian stage of the Middle Devonian, about 12 million years older than Tiktaalik. These trackways, discovered at the Zachełmie quarry, appear to have been created by fully terrestrial tetrapods with a quadrupedal gait. Tiktaalik's discoverers were skeptical about the Zachełmie trackways. Daeschler said that trace evidence was not enough for him to modify the theory of tetrapod evolution, while Shubin argued that Tiktaalik could have produced very similar footprints. In a later study, Shubin expressed a significantly modified opinion that some of the Zachełmie footprints, those which lacked digits, may have been made by walking fish. However, Ahlberg insisted that those tracks could not have possibly been formed either by natural processes or by transitional species such as Tiktaalik or Panderichthys. Instead, the authors of the publication suggested that "ichthyostegalian"-grade tetrapods were the responsible trackmakers, based on available pes morphology of those animals. Narkiewicz, co-author of the article on the Zachełmie trackways, claimed that the Polish "discovery has disproved the theory that elpistostegids were the ancestors of tetrapods", a notion partially shared by Philippe Janvier. To resolve the questions posed by the Zachełmie trackways, several hypotheses have been suggested. One approach maintains that the first pulse of elpistostegalian and tetrapod evolution occurred in the Middle Devonian, a time when body fossils showing this trend were too rare to be preserved. 
This maintains the elpistostegalian–tetrapod ancestor–descendant relationship apparent in fossils, but also introduces long ghost lineages required to explain the apparent delay in fossil appearances. Another approach is that elpistostegalian and tetrapod similarities are a case of convergent evolution. In this interpretation, tetrapods would originate in the Middle Devonian while elpistostegalians would originate independently in the Late Devonian, before going extinct near the end of the period. Estimates published after the discovery of Zachełmie tracks suggested that digited tetrapods may have appeared as early as 427.4 Mya and questioned attempts to read absolute timing of evolutionary events in early tetrapod evolution from stratigraphy. However, a reanalysis of the Zachełmie trackways in 2015 suggested that they do not constitute movement trackways, but should rather be interpreted as fish nests or feeding traces. Paleobiology Tiktaalik generally had the characteristics of a lobe-finned fish, but with front fins featuring arm-like skeletal structures more akin to those of a crocodile, including a shoulder, elbow and wrist. The fossil discovered in 2004 did not include the rear fins and tail, which were found in other specimens. It had rows of sharp teeth indicative of a predator fish, and its neck could move independently of its body, which is not common in other fish (Tarrasius, Mandageria, placoderms and extant seahorses being some exceptions; see also Lepidogalaxias and Channallabes apus). The animal had a flat skull resembling a crocodile's; eyes on top of its head; a neck and ribs similar to those of tetrapods, with the ribs being used to support its body and aid in breathing via lungs; well developed jaws suitable for catching prey; and a small gill slit called a spiracle that, in more derived animals, became an ear. Spiracles would have been useful in shallow water, where higher water temperature would lower oxygen content. 
The discoverers said that in all likelihood, Tiktaalik flexed its proto-limbs primarily on the floor of streams and may have pulled itself onto the shore for brief periods. In 2014, the discovery of the animal's pelvic girdle was announced; it was strongly built, indicating the animal could have used its hind fins for moving in shallow water and across mudflats. Neil Shubin and Daeschler, the leaders of the team, have been searching Ellesmere Island for fossils since 2000. Paleoecology The fossils of Tiktaalik were found in the Fram Formation, deposits of meandering stream systems near the Devonian equator, suggesting a benthic animal that lived on the bottom of shallow waters and perhaps even out of the water for short periods, with a skeleton indicating that it could support its body under the force of gravity whether in very shallow water or on land. At that period, for the first time, deciduous plants were flourishing and annually shedding leaves into the water, attracting small prey into warm oxygen-poor shallows that were difficult for larger fish to swim in. Cultural significance Tiktaalik has been used as the subject of various Internet memes. The images criticize Tiktaalik for its evolutionary adaptations, construing them as playing a critical role in the chain of events that would eventually lead to all human suffering.
Biology and health sciences
Prehistoric amphibians
Animals
4640505
https://en.wikipedia.org/wiki/Sole%20%28fish%29
Sole (fish)
Sole is a fish belonging to several families. Generally speaking, they are members of the family Soleidae, but, outside Europe, the name sole is also applied to various other similar flatfish, especially other members of the sole suborder Soleoidei as well as members of the flounder family. In European cookery, there are several species which may be considered true soles, but the common or Dover sole Solea solea, often simply called the sole, is the most esteemed and most widely available. Etymology of the word The word sole in English, French, and Italian comes from its resemblance to a sandal, Latin solea. In many other languages the fish is named for the tongue, for example in Greek, German, Dutch, Hungarian, Spanish, Cantonese ('dragon tongue'), Arabic ('tongue of the ox', in the Qosbawi accent, for the common sole), and Turkish. A partial list of common names for species referred to as sole include: In the sole suborder Soleoidei: The true soles, Soleidae, including the common or Dover sole, Solea solea. These are the only fishes called soles in Europe. The American soles, Achiridae, sometimes classified among the Soleidae. The tonguefishes or tongue soles, Cynoglossidae, whose common names usually include the word 'tongue'. Several species of righteye flounder in the family Pleuronectidae, including the lemon sole, the Pacific Dover sole, and the petrale sole. Threats The true sole, Solea solea, is sufficiently distributed that it is not considered a threatened species; however, overfishing in Europe has produced severely diminished populations, with declining catches in many regions. For example, the western English Channel and Irish Sea sole fisheries face potential collapse according to data in the UK Biodiversity Action Plan. Sole, along with the other major bottom-feeding fish in the North Sea such as cod, monkfish, and plaice, is listed by the ICES as "outside safe biological limits." 
Moreover, they are growing less quickly now and are rarely older than six years, although they can reach forty. World stocks of large predatory fish and large ground fish such as sole and flounder were estimated in 2003 to be only about 10% of pre-industrial levels. According to the World Wildlife Fund in 2006, "of the nine sole stocks, seven are overfished with the status of the remaining two unknown." In 2010, Greenpeace International added the common sole to its seafood red list, as they are primarily caught by beam trawlers, which have a very high bycatch rate. The Greenpeace International seafood red list is a list of fish that are commonly sold in supermarkets around the world, and which have a very high risk of being sourced from unsustainable fisheries.
Biology and health sciences
Acanthomorpha
Animals
4640562
https://en.wikipedia.org/wiki/Solar%20core
Solar core
The core of the Sun is considered to extend from the center to about 0.2 of the solar radius. It is the hottest part of the Sun and of the Solar System. It has a density of about 150 g/cm3 at the center, and a temperature of 15 million kelvins (15 million degrees Celsius; 27 million degrees Fahrenheit). The core is made of hot, dense plasma (ions and electrons), at a central pressure estimated at hundreds of billions of atmospheres. Due to fusion, the composition of the solar plasma drops from about 70% hydrogen by mass at the outer core, to 34% hydrogen at the center. The core contains 34% of the Sun's mass, but only 3% of the Sun's volume, and it generates 99% of the fusion power of the Sun. There are two distinct reactions in which four hydrogen nuclei may eventually result in one helium nucleus: the proton–proton chain reaction – which is responsible for most of the Sun's released energy – and the CNO cycle. Composition The composition of the Sun varies with depth. In the photosphere, it is about 73–74% hydrogen by mass, the rest being primarily helium, which is the same composition as the atmosphere of Jupiter, and the primordial composition of gases at the earliest star formation after the Big Bang. However, as depth into the Sun increases, fusion decreases the fraction of hydrogen. Traveling inward, the hydrogen mass fraction starts to decrease rapidly after the core radius has been reached (it is still about 70% at a radius equal to 25% of the Sun's radius), and inside this, the hydrogen fraction drops rapidly as the core is traversed, until it reaches a low of about 33% hydrogen at the Sun's center (radius zero). All but 2% of the remaining plasma mass (i.e. 65%) is helium. Energy conversion Approximately 3.7×10^38 protons (hydrogen nuclei), or roughly 600 million tonnes of hydrogen, are converted into helium nuclei every second, releasing energy at a rate of about 3.86×10^26 joules per second. 
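As an order-of-magnitude check, converting roughly 600 million tonnes of hydrogen per second should reproduce a luminosity near 4×10^26 W; the 0.7% mass-to-energy conversion fraction for hydrogen fusion is a standard value assumed here, not taken from the article:

```python
# Order-of-magnitude check of the figures above: mass consumed per
# second, times the fraction of rest mass released by H -> He fusion
# (~0.7%, a standard value assumed here), times c squared.
c = 2.998e8                      # speed of light, m/s
m_dot = 6.0e11                   # hydrogen consumed per second, kg
mass_fraction_released = 0.007   # fraction of rest mass freed by fusion
luminosity = m_dot * mass_fraction_released * c**2
print(f"{luminosity:.2e} W")     # ~3.8e26 W
```

The result agrees with the quoted energy release rate to within a few percent.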
The core produces almost all of the Sun's heat via fusion; the rest of the star is heated by the outward transfer of heat from the core. The energy produced by fusion in the core, except a small part carried out by neutrinos, must travel through many successive layers to the solar photosphere before it escapes into space as sunlight, or else as kinetic or thermal energy of massive particles. The energy conversion per unit time (power) of fusion in the core varies with distance from the solar center. At the center of the Sun, fusion power is estimated by models to be about 276.5 W/m3. Despite its intense temperature, the peak power-generating density of the core overall is similar to that of an active compost heap, and is lower than the power density produced by the metabolism of an adult human. The Sun is much hotter than a compost heap due to the Sun's enormous volume and limited thermal conductivity. The low power outputs occurring inside the fusion core of the Sun may also be surprising, considering the large power which might be predicted by a simple application of the Stefan–Boltzmann law for temperatures of 10–15 million kelvins. However, layers of the Sun are radiating to outer layers only slightly lower in temperature, and it is this difference in radiation powers between layers which determines net power generation and transfer in the solar core. At 19% of the solar radius, near the edge of the core, temperatures are about 10 million kelvins and fusion power density is 6.9 W/m3, which is about 2.5% of the maximum value at the solar center. The density here is about 40 g/cm3, or about 27% of that at the center. Some 91% of the solar energy is produced within this radius. Within 24% of the radius (the outer "core" by some definitions), 99% of the Sun's power is produced. Beyond 30% of the solar radius, where the temperature is 7 million K and the density has fallen to 10 g/cm3, the rate of fusion is almost nil. 
There are two distinct reactions in which four hydrogen nuclei may eventually result in one helium nucleus: the "proton–proton chain reaction" and the "CNO cycle". Proton–proton chain reaction The first reaction, in which four H nuclei may eventually result in one He nucleus, is known as the proton–proton chain reaction. This reaction sequence is thought to be the most important one in the solar core. The characteristic time for the first reaction is about one billion years even at the high densities and temperatures of the core, due to the necessity for the weak force to cause beta decay before the nucleons can adhere (which rarely happens in the brief time that tunneling brings them close enough to do so). The deuterium and helium-3 produced in the next reactions, by contrast, survive for only about 4 seconds and 400 years, respectively. These later reactions proceed via the nuclear force and are thus much faster. The total energy released by these reactions in turning 4 hydrogen atoms into 1 helium atom is 26.7 MeV. CNO cycle The second reaction sequence, in which 4 H nuclei may eventually result in one He nucleus, is called the CNO cycle and generates less than 10% of the total solar energy. This involves carbon atoms which are not consumed in the overall process, acting instead as catalysts. Equilibrium The rate of nuclear fusion depends strongly on density. Therefore, the fusion rate in the core is in a self-correcting equilibrium: a slightly higher rate of fusion would cause the core to heat up more and expand slightly against the weight of the outer layers. This would reduce the fusion rate and correct the perturbation; and a slightly lower rate would cause the core to cool and shrink slightly, increasing the fusion rate and again reverting it to its present level. 
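The 26.7 MeV released per helium nucleus can be turned into a consistency check on the hydrogen consumption rate; the solar luminosity value used below is a standard figure assumed for the calculation:

```python
# Working backwards from the 26.7 MeV released per helium nucleus formed:
# dividing the Sun's luminosity (~3.86e26 W, a standard value assumed
# here) by the energy released per proton gives protons burned per second.
MEV_J = 1.602e-13                     # joules per MeV
luminosity = 3.86e26                  # W, assumed solar luminosity
energy_per_proton = 26.7 * MEV_J / 4  # four protons per helium nucleus
protons_per_second = luminosity / energy_per_proton
print(f"{protons_per_second:.2e}")    # ~3.6e38 protons per second
```

That rate, times the proton mass, is roughly 600 million tonnes of hydrogen per second, matching the figure given in the article.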
However, the Sun gradually becomes hotter during its time on the main sequence, because the helium atoms in the core are denser than the hydrogen atoms they were fused from. This increases the gravitational pressure on the core, which is resisted by a gradual increase in the rate at which fusion occurs. This process speeds up over time as the core gradually becomes denser. It is estimated that the Sun has become 30% brighter in the last four and a half billion years and will continue to increase in brightness by 1% every 100 million years. Energy transfer The high-energy photons (gamma rays) released in fusion reactions take indirect paths to the Sun's surface. According to current models, random scattering from free electrons in the solar radiative zone (the zone within 75% of the solar radius, where heat transfer is by radiation) sets the photon diffusion time scale (or "photon travel time") from the core to the outer edge of the radiative zone at about 170,000 years. From there they cross into the convective zone (the remaining 25% of distance from the Sun's center), where the dominant transfer process changes to convection, and the speed at which heat moves outward becomes considerably faster. In the process of heat transfer from core to photosphere, each gamma photon in the Sun's core is converted during scattering into several million visible light photons before escaping into space. Neutrinos are also released by the fusion reactions in the core, but unlike photons they very rarely interact with matter, so almost all are able to escape the Sun immediately. For many years measurements of the number of neutrinos produced in the Sun were much lower than theories predicted, a problem which was recently resolved through a better understanding of neutrino oscillation.
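The quoted diffusion time can be reproduced with a simple random-walk estimate; the photon mean free path below is an assumed round value, not a figure from the article:

```python
# Random-walk estimate of the photon diffusion time: for steps of mean
# free path l over a distance R, the time is roughly t = R**2 / (l * c).
c = 2.998e8                  # speed of light, m/s
R = 0.75 * 6.96e8            # radiative-zone outer radius, m (75% of solar radius)
mean_free_path = 1.7e-4      # assumed photon mean free path, m (~0.2 mm)
t_seconds = R**2 / (mean_free_path * c)
t_years = t_seconds / 3.156e7
print(f"{t_years:.2e} years")  # on the order of the quoted 170,000 years
```

A sub-millimetre mean free path is enough to stretch a light-crossing time of seconds into a diffusion time of over a hundred thousand years.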
Physical sciences
Solar System
Astronomy
4641155
https://en.wikipedia.org/wiki/Degree%20of%20unsaturation
Degree of unsaturation
In the analysis of the molecular formula of organic molecules, the degree of unsaturation (DU) (also known as the index of hydrogen deficiency (IHD), double bond equivalents (DBE), or unsaturation index) is a calculation that determines the total number of rings and π bonds. The formula is used in organic chemistry to help draw chemical structures. It does not give any information about those components individually—the specific number of rings, or of double bonds (one π bond each), or of triple bonds (two π bonds each). The final structure is verified with use of NMR, mass spectrometry and IR spectroscopy, as well as qualitative inspection. It is based on comparing the actual molecular formula to what would be a possible formula if the structure were saturated—having no rings and containing only σ bonds—with all atoms having their standard valence. General formula The formula for degree of unsaturation is: DU = 1 + (1/2) Σi ni(vi − 2), where ni is the number of atoms with valence vi. That is, an atom that has a valence of x contributes a total of x − 2 to the sum; the sum is then halved and increased by 1. Simplified formulae For certain classes of molecules, the general formula can be simplified or rewritten more clearly. For example: DU = (2a + 2 + c − b − f)/2, where a = number of carbon atoms in the compound, b = number of hydrogen atoms in the compound, c = number of nitrogen atoms in the compound, f = number of halogen atoms in the compound; or DU = C + 1 + N/2 − H/2 − X/2, where C = number of carbons, H = number of hydrogens, X = number of halogens and N = number of nitrogens, which gives an equivalent result. In either case, oxygen and other divalent atoms do not contribute to the degree of unsaturation, as 2 − 2 = 0. 
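The general rule, each atom contributing its valence minus two, then halving and adding one, can be sketched as a short function; the valence table and function name are illustrative choices, not from the source:

```python
# Sketch of the general formula: DU = 1 + (1/2) * sum(n_i * (v_i - 2)),
# where n_i atoms have standard valence v_i. Divalent atoms such as
# oxygen contribute 2 - 2 = 0, so they drop out automatically.
VALENCE = {"C": 4, "H": 1, "N": 3, "O": 2, "S": 2, "F": 1, "Cl": 1, "Br": 1, "I": 1}

def degree_of_unsaturation(formula):
    """formula: dict mapping element symbol -> atom count."""
    return 1 + sum(n * (VALENCE[el] - 2) for el, n in formula.items()) / 2

print(degree_of_unsaturation({"C": 6, "H": 6}))          # benzene: 4.0
print(degree_of_unsaturation({"C": 2, "H": 6, "O": 1}))  # ethanol: 0.0
```

Benzene's value of 4 corresponds to its three π bonds plus one ring.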
Explanation For hydrocarbons, the DBE (or IHD) tells us the number of rings and/or extra bonds in a non-saturated structure, which equals the number of hydrogen pairs that are required to make the structure saturated, simply because joining two elements to form a ring or adding one extra bond (e.g., a single bond changed to a double bond) in a structure reduces the need for two H's. For non-hydrocarbons, the elements in a pair can include any elements in the lithium family and the fluorine family in the periodic table, not necessarily all H's. A popular form of the formula is as follows: DBE = C + 1 + N/2 − H/2 − X/2, where C, N, H and X represent the number of carbon, nitrogen, hydrogen and halogen atoms, respectively. Each of the terms on the RHS can be explained, respectively, as follows: Except for the terminal carbons, every carbon chained to the structure with two single bonds requires a pair of hydrogen atoms attached to it. The number of carbons in the formula actually represents the number of hydrogen pairs required for that number of carbons to form a saturated structure. (This is also true if a carbon is added to the structure, whether it is inserted into a backbone chain, attached to a terminal to replace a hydrogen, or branched out from a carbon to replace a hydrogen.) Each of the two terminal carbons in the backbone chain needs one extra hydrogen – that is why "1" is added to the formula. (A branch's terminal doesn't need an H added in the calculation because the H replaced by the branch can be counted as the H added to the branch terminal. This is also true for a branch terminated with any element.) Except for the terminal nitrogens, each nitrogen in the chain only requires one H attached to it, which is half a pair of hydrogens—that is why N/2 appears in the formula, which gives a value of 1 for every two nitrogens. 
(This is also true if nitrogen is added into the structure, whether it is inserted into a backbone chain, attached to a terminal to replace an H, or branched out from a C to replace an H.) The H/2 term represents the number of hydrogen pairs, because it gives a value of 1 for every two hydrogen atoms. It is subtracted in the formula to count how many pairs of hydrogen atoms are missing in the unsaturated structure, which tells us the degree of hydrogen deficiency. (No hydrogen pair is missing if H = 2C + 2 + N − X, which corresponds to no hydrogen deficiency.) The X/2 term is subtracted for a reason similar to H/2, since each halogen takes the place of a hydrogen. Adding an oxygen atom to the structure requires no hydrogen to be added, which is why the number of oxygen atoms does not appear in the formula. Furthermore, the formula can be generalised to include all elements of Group I (the hydrogen and lithium family), Group IV (the carbon family), Group V (the nitrogen family) and Group VII (the fluorine family) of the CAS group numbering of the periodic table, by replacing C, H, N and X with the total counts of atoms from the corresponding groups.
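The popular C/H/N/X form lends itself to a one-line check; the function name and examples are illustrative:

```python
# The popular form discussed above: DBE = C + 1 + N/2 - H/2 - X/2.
# Oxygen and other divalent atoms are simply omitted.
def dbe(c, h, n=0, x=0):
    return c + 1 + n / 2 - h / 2 - x / 2

print(dbe(8, 10, n=4))  # caffeine C8H10N4O2: 6.0 (oxygen does not count)
print(dbe(6, 14))       # hexane C6H14: 0.0 (saturated)
```

Caffeine's six degrees of unsaturation account for its two rings and four double bonds.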
Physical sciences
Concepts_2
Chemistry
20584918
https://en.wikipedia.org/wiki/Saturn%20V
Saturn V
The Saturn V is a retired American super heavy-lift launch vehicle developed by NASA under the Apollo program for human exploration of the Moon. The rocket was human-rated, had three stages, and was powered by liquid fuel. Flown from 1967 to 1973, it was used for nine crewed flights to the Moon, and to launch Skylab, the first American space station. The Saturn V remains the only launch vehicle to have carried humans beyond low Earth orbit (LEO). The Saturn V holds the record for the largest payload capacity to low Earth orbit, about 140,000 kg (310,000 lb), which included unburned propellant needed to send the Apollo command and service module and Lunar Module to the Moon. The largest production model of the Saturn family of rockets, the Saturn V was designed under the direction of Wernher von Braun at the Marshall Space Flight Center in Huntsville, Alabama; the lead contractors for construction of the rocket were Boeing, North American Aviation, Douglas Aircraft Company, and IBM. Fifteen flight-capable vehicles were built, not counting three used for ground testing. A total of thirteen missions were launched from Kennedy Space Center, nine of which carried 24 astronauts to the Moon from Apollo 8 (December 1968) to Apollo 17 (December 1972). History Background In September 1945, the U.S. government brought the German rocket technologist Wernher von Braun and over 1,500 German rocket engineers and technicians to the United States in Operation Paperclip, a program authorized by President Truman. Von Braun, who had helped create the German V-2 rocket, was assigned to the Army's rocket design division. Between 1945 and 1958, his work was restricted to conveying the ideas and methods behind the V-2 to American engineers, though he wrote books and articles in popular magazines. This approach changed in 1957, when the Soviets launched Sputnik 1 atop an R-7 ICBM, which could carry a thermonuclear warhead to the U.S. 
The Army and government began putting more effort towards sending Americans into space before the Soviets. They turned to von Braun's team, who had created the Jupiter series of rockets. The Juno I rocket launched the first American satellite in January 1958. Von Braun considered the Jupiter series of rockets to be a prototype of the upcoming Saturn series of rockets, and referred to it as "an infant Saturn". Saturn development Named after the sixth planet from the Sun, the design of the various Saturn rockets evolved from the Jupiter vehicles. Between 1960 and 1962, the Marshall Space Flight Center (MSFC) designed a series of Saturn rockets that could be deployed for Earth orbit and lunar missions. NASA planned to use the Saturn C-3 as part of the Earth orbit rendezvous (EOR) method for a lunar mission, with at least two or three launches needed for a single landing on the Moon. However, the MSFC planned an even bigger rocket, the C-4, which would use four F-1 engines in its first stage, an enlarged C-3 second stage, and the S-IVB, a stage with a single J-2 engine, as its third stage. The C-4 would need only two launches to carry out an EOR lunar mission. On January 10, 1962, NASA announced plans to build the C-5. The three-stage rocket would consist of the S-IC first stage, with five F-1 engines; the S-II second stage, with five J-2 engines; and the S-IVB third stage, with a single J-2 engine. The C-5 would undergo component testing even before the first model was constructed. The S-IVB third stage would be used as the second stage for the C-1B, which would serve both to demonstrate proof of concept and feasibility for the C-5 and to provide flight data critical to the development of the C-5. Rather than undergoing testing for each major component, the C-5 would be tested in an "all-up" fashion, meaning that the first test flight of the rocket would include complete versions of all three stages. 
By testing all components at once, far fewer test flights would be required before a crewed launch. The C-5 was confirmed as NASA's choice for the Apollo program in early 1962, and was named the Saturn V. The C-1 became the Saturn I, and the C-1B became the Saturn IB. Von Braun headed a team at the MSFC to build a vehicle capable of launching a crewed spacecraft to the Moon. During these revisions, the team rejected the single engine of the V-2's design and moved to a multiple-engine design. The Saturn V's final design had several key features. F-1 engines were chosen for the first stage, while a new liquid-hydrogen propulsion system, the J-2, was chosen for the second and third stages. NASA had finalized its plans to proceed with von Braun's Saturn designs, and the Apollo space program gained speed. The stages were designed by von Braun's Marshall Space Flight Center in Huntsville, and outside contractors were chosen for the construction: Boeing (S-IC), North American Aviation (S-II), Douglas Aircraft (S-IVB), and IBM (instrument unit). Selection for Apollo lunar landing Early in the planning process, NASA considered three methods for the Moon mission: Earth orbit rendezvous (EOR), direct ascent, and lunar orbit rendezvous (LOR). A direct ascent configuration would require an extremely large rocket to send a three-man spacecraft to land directly on the lunar surface. An EOR would launch the direct-landing spacecraft in two smaller parts which would combine in Earth orbit. A LOR mission would involve a single rocket launching two spacecraft: a mother ship, and a smaller, two-man landing module which would rendezvous back with the main spacecraft in lunar orbit. The lander would be discarded and the mother ship would return home. At first, NASA dismissed LOR as a riskier option, as a space rendezvous had yet to be performed in Earth orbit, much less in lunar orbit. 
Several NASA officials, including Langley Research Center engineer John Houbolt and George Low, argued that a lunar orbit rendezvous provided the simplest landing on the Moon with the most cost-efficient launch vehicle, and the best chance to accomplish the lunar landing within the decade. Other NASA officials became convinced, and LOR was then officially selected as the mission configuration for the Apollo program and announced by NASA administrator James E. Webb on November 7, 1962. Arthur Rudolph became the project director of the Saturn V rocket program in August 1963. He developed the requirements for the rocket system and the mission plan for the Apollo program. The first Saturn V launch lifted off from Kennedy Space Center and performed flawlessly on November 9, 1967, Rudolph's birthday. He was then assigned as the special assistant to the director of MSFC in May 1968 and subsequently retired from NASA on January 1, 1969. On July 16, 1969, the Saturn V launched Apollo 11, putting the first men on the Moon. Launch history Specifications The size and payload capacity of the Saturn V dwarfed those of all other previous rockets successfully flown at that time. With the Apollo spacecraft on top, it stood 363 feet (110.6 m) tall and, ignoring the fins, was 33 feet (10.1 m) in diameter. Fully fueled, the Saturn V weighed about 2,970,000 kg (6,540,000 lb) and had a low Earth orbit (LEO) payload capacity originally estimated at 118,000 kg (261,000 lb), but was designed to send at least 41,000 kg (90,000 lb) to the Moon. Later upgrades increased that capacity; on the final three Apollo lunar missions, it sent up to about 48,600 kg (107,100 lb) to the Moon. At a height of 111 metres (363 ft), the Saturn V stood taller than the Statue of Liberty from the ground to the torch, and taller than the Elizabeth Tower, which houses Big Ben at the Palace of Westminster. 
In contrast, the Mercury-Redstone Launch Vehicle used on Freedom 7, the first crewed American spaceflight, was comparable in length to the S-IVB stage and delivered less sea-level thrust than the Launch Escape System rocket mounted atop the Apollo command module. The Apollo LES fired for a much shorter time than the Mercury-Redstone (3.2 seconds vs. 143.5 seconds). The Saturn V was principally designed by the Marshall Space Flight Center in Huntsville, Alabama, although numerous major systems, including propulsion systems, were designed by subcontractors. The rocket used the powerful F-1 and J-2 rocket engines; during testing at Stennis Space Center, the force developed by the engines shattered the windows of nearby houses. Designers decided early on to attempt to use as much technology from the Saturn I program as possible for the Saturn V. Consequently, the S-IVB-500 third stage of the Saturn V was based on the S-IVB-200 second stage of the Saturn IB. The instrument unit that controlled the Saturn V shared characteristics with the one carried by the Saturn IB. The Saturn V was primarily constructed of aluminum. It was also made of titanium, polyurethane, cork and asbestos. Blueprints and other plans of the rocket are available on microfilm at the Marshall Space Flight Center. The Saturn V consisted of three stages—the S-IC first stage, S-II second stage, and S-IVB third stage—and the instrument unit. All three stages used liquid oxygen (LOX) as the oxidizer. The first stage used RP-1 for fuel, while the second and third stages used liquid hydrogen (LH2). LH2 has a higher specific energy (energy per unit mass) than RP-1, which makes it more suitable for higher-energy orbits, such as the trans-lunar injection required for Apollo missions. 
Conversely, RP-1 offers higher energy density (energy per unit volume) and higher thrust than LH2, which makes it more suitable for reducing aerodynamic drag and gravity losses in the early stages of launch. If the first stage had used LH2, the volume required would have been more than three times greater, which would have been aerodynamically infeasible at the time. The upper stages also used small solid-propellant ullage motors that helped to separate the stages during the launch, and to ensure that the liquid propellants were in a proper position to be drawn into the pumps. S-IC first stage The S-IC was built by the Boeing Company at the Michoud Assembly Facility, New Orleans, where the Space Shuttle external tanks would later be built by Lockheed Martin. Most of its mass at launch was propellant: RP-1 fuel with liquid oxygen as the oxidizer. The stage was 42 m (138 ft) tall and 10 m (33 ft) in diameter. It provided about 7.5 million pounds-force (33,000 kN) of thrust at sea level. The S-IC stage had a dry mass of about 130 tonnes; when fully fueled at launch, it had a total mass of about 2,300 tonnes. The S-IC was powered by five Rocketdyne F-1 engines arrayed in a quincunx. The center engine was held in a fixed position, while the four outer engines could be hydraulically turned with gimbals to steer the rocket. In flight, the center engine was turned off about 26 seconds earlier than the outboard engines to limit acceleration. During launch, the S-IC fired its engines for 168 seconds (ignition occurred about 8.9 seconds before liftoff) and at engine cutoff, the vehicle was at an altitude of about 67 km (42 mi), was downrange about 93 km (58 mi), and was moving at around 2,300 m/s (7,500 ft/s). While not put into production, a proposed replacement for the first stage was the AJ-260x. This solid rocket motor would have simplified the design by removing the five-engine configuration and, in turn, reduced launch costs. S-II second stage The S-II was built by North American Aviation at Seal Beach, California. 
Using liquid hydrogen and liquid oxygen, it had five Rocketdyne J-2 engines in a similar arrangement to the S-IC, and also used the four outer engines for control. The S-II was 24.8 m (81.5 ft) tall with a diameter of 10 m (33 ft), identical to the S-IC, and thus was the largest cryogenic stage until the launch of the Space Shuttle in 1981. The S-II had a dry mass of about 36 tonnes; when fully fueled, it weighed about 480 tonnes. The second stage accelerated the Saturn V through the upper atmosphere with about 1.1 million pounds-force (4,900 kN) of thrust in a vacuum. When loaded with fuel, more than 90 percent of the mass of the stage was propellant; however, the ultra-lightweight design had led to two failures in structural testing. Instead of having an intertank structure to separate the two fuel tanks as was done in the S-IC, the S-II used a common bulkhead that was constructed from both the top of the LOX tank and bottom of the LH2 tank. It consisted of two aluminum sheets separated by a honeycomb structure made of phenolic resin. This bulkhead had to be able to insulate against the temperature difference between the two tanks. The use of a common bulkhead saved both mass and length by eliminating one bulkhead and shortening the stage. Like the S-IC, the S-II was transported from its manufacturing plant to Cape Kennedy by sea. S-IVB third stage The S-IVB stage was built by the Douglas Aircraft Company at Huntington Beach, California. It had one Rocketdyne J-2 engine and used the same fuel as the S-II. The S-IVB used a common bulkhead to separate the two tanks. It was 17.9 m (58.6 ft) tall with a diameter of 6.6 m (21.7 ft) and was also designed with high mass efficiency, though not quite as aggressively as the S-II. The S-IVB had a dry mass of about 10 tonnes and, when fully fueled, weighed about 119 tonnes. The S-IVB was the only rocket stage of the Saturn V small enough to be transported by the cargo plane Aero Spacelines Pregnant Guppy. For lunar missions it was fired twice: first for Earth orbit insertion after second stage cutoff, and a second time for translunar injection (TLI). 
Instrument unit The Saturn V's instrument unit was built by IBM and was placed on top of the rocket's third stage. It was constructed at the Space Systems Center in Huntsville, Alabama. This unit controlled the operations of the rocket from just before liftoff until the S-IVB was discarded. It included guidance and telemetry systems for the rocket. By measuring the acceleration and vehicle attitude, it could calculate the position and velocity of the rocket and correct for any deviations. Assembly After the construction and ground testing of each stage was completed, the stages were shipped to the Kennedy Space Center. The first two stages were so massive that the only way to transport them was by barge. The S-IC, constructed in New Orleans, was transported down the Mississippi River to the Gulf of Mexico. After rounding Florida, the stages were transported up the Intracoastal Waterway to the Vehicle Assembly Building (originally called the Vertical Assembly Building). This was essentially the same route which would be used later to ship Space Shuttle external tanks. The S-II was constructed in California and traveled to Florida via the Panama Canal. The third stage and instrument unit were carried by the Aero Spacelines Pregnant Guppy and Super Guppy, but could also have been carried by barge if warranted. Upon arrival at the Vehicle Assembly Building, each stage was inspected in a horizontal position before being oriented vertically. NASA also constructed large spool-shaped structures that could be used in place of stages if a particular stage was delayed. These spools had the same height and mass and contained the same electrical connections as the actual stages. NASA stacked (assembled) the Saturn V on a Mobile Launcher, which consisted of a Launch Umbilical Tower with nine swing arms (including the crew access arm), a "hammerhead" crane, and a water suppression system which was activated prior to engine ignition during a launch. 
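The instrument unit's dead-reckoning principle described above, integrating measured acceleration to track velocity and position, can be sketched in a few lines; this illustrates the idea only, not the actual IU flight software:

```python
# Illustration of dead reckoning: integrate measured acceleration once
# for velocity and again for position. Not the actual IU flight code.
def integrate_motion(accels, dt, v0=0.0, x0=0.0):
    """accels: acceleration samples (m/s^2) at a fixed time step dt (s)."""
    v, x = v0, x0
    for a in accels:
        x += v * dt + 0.5 * a * dt * dt  # position over this step
        v += a * dt                      # velocity after this step
    return v, x

# A constant 1 m/s^2 for 10 s from rest gives v = 10 m/s, x = 50 m.
v, x = integrate_motion([1.0] * 100, dt=0.1)
print(round(v, 6), round(x, 6))
```

Comparing the integrated state against the planned trajectory is what allowed the unit to "correct for any deviations."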
After assembly was completed, the entire stack was moved from the Vehicle Assembly Building (VAB) to the launch pad using the Crawler Transporter (CT). Built by the Marion Power Shovel Company (and later used for transporting the smaller and lighter Space Shuttle, as well as the Space Launch System), the CT ran on four double-tracked treads, each with 57 "shoes", each shoe weighing about 900 kg (2,000 lb). This transporter was also required to keep the rocket level as it traveled to the launch site, especially at the 3 percent grade encountered at the launch pad. The CT also carried the Mobile Service Structure (MSS), which allowed technicians access to the rocket until eight hours before launch, when it was moved to the "halfway" point on the Crawlerway (the junction between the VAB and the two launch pads). Cost From 1964 until 1973, $6.417 billion was appropriated for research and development and for flights of the Saturn V, with the maximum in a single year coming in 1966 with $1.2 billion. That same year, NASA received its largest total budget of $4.5 billion, about 0.5 percent of the gross domestic product (GDP) of the United States at that time. Two main reasons for the cancellation of the last three Apollo missions were the heavy investments in Saturn V and the ever-increasing costs of the Vietnam War to the U.S. in money and resources. In the time frame from 1969 to 1971, the cost of launching a Saturn V Apollo mission was between $185,000,000 and $189,000,000, of which $110 million was used for production of the vehicle. Lunar mission launch sequence The Saturn V carried all Apollo lunar missions, which were launched from Launch Complex 39 at the John F. Kennedy Space Center in Florida. After the rocket cleared the launch tower, flight control transferred to Mission Control at the Johnson Space Center in Houston, Texas. An average mission used the rocket for a total of just 20 minutes. 
Although Apollo 6 experienced three engine failures, and Apollo 13 experienced one engine shutdown, the onboard computers were able to compensate by burning the remaining engines longer to achieve parking orbit. Range safety In the event of an abort requiring the destruction of the rocket, the range safety officer would remotely shut down the engines and, after several seconds, send another command for the shaped explosive charges attached to the outer surfaces of the rocket to detonate. These would make cuts in the fuel and oxidizer tanks to disperse the fuel quickly and to minimize mixing. The pause between these two actions would give time for the crew to escape via the Launch Escape Tower or (in the later stages of the flight) the propulsion system of the Service Module. A third command, "safe", was used after the S-IVB stage reached orbit to irreversibly deactivate the self-destruct system. The system was also held inactive as long as the rocket was still on the launch pad. Startup sequence The first stage burned for about 2 minutes and 41 seconds, lifting the rocket to an altitude of about 67 km (42 mi) and a speed of about 2,300 m/s (7,500 ft/s) while burning roughly 2,000 tonnes of propellant. At 8.9 seconds before launch, the first stage ignition sequence started. The center engine ignited first, followed by opposing outboard pairs at 300-millisecond intervals to reduce the structural loads on the rocket. When thrust had been confirmed by the onboard computers, the rocket was "soft-released" in two stages: first, the hold-down arms released the rocket, and second, as the rocket began to accelerate upwards, it was slowed by tapered metal pins pulled through holes for half a second. Once the rocket had lifted off, it could not safely settle back down onto the pad if the engines failed. The astronauts considered this one of the tensest moments in riding the Saturn V, for if the rocket did fail to lift off after release they had a low chance of survival given the large amounts of propellant. 
To improve safety, the Saturn Emergency Detection System (EDS) inhibited engine shutdown for the first 30 seconds of flight. (See Saturn V Instrument Unit.) If all three stages were to explode simultaneously on the launch pad, an unlikely event, the Saturn V had a total explosive yield of 543 tons of TNT, or 0.543 kilotons (2,271,912,000,000 J): 0.222 kt for the first stage, 0.263 kt for the second stage, and 0.068 kt for the third stage. Contrary to popular myth, the noise produced was not able to melt concrete. It took about 12 seconds for the rocket to clear the tower. During this time, it yawed 1.25 degrees away from the tower to ensure adequate clearance despite adverse winds; this yaw, although small, can be seen in launch photos taken from the east or west. At an altitude of the rocket rolled to the correct flight azimuth and then gradually pitched down until 38 seconds after second stage ignition. This pitch program was set according to the prevailing winds during the launch month. The four outboard engines also tilted toward the outside so that in the event of a premature outboard engine shutdown the remaining engines would thrust through the rocket's center of mass. The Saturn V reached at over in altitude. Much of the early portion of the flight was spent gaining altitude, with the required velocity coming later. The Saturn V broke the sound barrier at just over 1 minute, at an altitude of between . At this point, shock collars, or condensation clouds, would form around the bottom of the command module and around the top of the second stage.

Max Q sequence

At about 80 seconds, the rocket experienced maximum dynamic pressure (max q). The dynamic pressure on a rocket varies with air density and the square of relative velocity. Although velocity continues to increase, air density decreases so quickly with altitude that dynamic pressure falls below max q.
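As a rough illustration of why a max q point exists, the sketch below combines the dynamic-pressure relation q = ½ρv² with an assumed exponential atmosphere and a toy constant-acceleration ascent. The density model, ascent profile, and all constants are illustrative assumptions, not Saturn V flight data.

```python
import math

def dynamic_pressure(rho, v):
    """Dynamic pressure q = 0.5 * rho * v**2, in pascals."""
    return 0.5 * rho * v * v

def air_density(h, rho0=1.225, scale_height=8500.0):
    """Simple exponential-atmosphere model (kg/m^3); an assumption
    for illustration, not real flight data."""
    return rho0 * math.exp(-h / scale_height)

# Toy ascent profile (assumed): constant ~2 g net acceleration,
# so v = 20*t m/s and h = 10*t^2 m.
samples = [(dynamic_pressure(air_density(10.0 * t * t), 20.0 * t), 20.0 * t and t or t)
           for t in range(160)]
samples = [(dynamic_pressure(air_density(10.0 * t * t), 20.0 * t), t)
           for t in range(160)]
q_max, t_max = max(samples)
# q rises while velocity dominates, then falls as density thins;
# in this toy model the peak lands near t = 29 s.
```

Even in this crude model, q climbs, peaks, and then decays despite velocity increasing monotonically, which is the behavior described above.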
The propellant in just the S-IC made up about three-quarters of Saturn V's entire launch mass, and it was consumed at . Newton's second law of motion states that force is equal to mass multiplied by acceleration, or equivalently that acceleration is equal to force divided by mass, so as the mass decreased (and the force increased somewhat), acceleration rose. Including gravity, launch acceleration was only  g, i.e., the astronauts felt  g while the rocket accelerated vertically at  g. As the rocket rapidly lost mass, total acceleration including gravity increased to nearly 4 g at T+135 seconds. At this point, the inboard (center) engine was shut down to prevent acceleration from increasing beyond 4 g. When oxidizer or fuel depletion was sensed in the suction assemblies, the remaining four outboard engines were shut down. First stage separation occurred a little less than one second after this to allow for F-1 thrust tail-off. Eight small solid fuel separation motors backed the S-IC from the rest of the vehicle at an altitude of about . The first stage continued on a ballistic trajectory to an altitude of about and then fell in the Atlantic Ocean about downrange. The engine shutdown procedure was changed for the launch of Skylab to avoid damage to the Apollo Telescope Mount. Rather than shutting down all four outboard engines at once, they were shut down two at a time with a delay to reduce peak acceleration further.

S-II sequence

After S-IC separation, the S-II second stage burned for 6 minutes and propelled the craft to and , close to orbital velocity. For the first two uncrewed launches, eight solid-fuel ullage motors ignited for four seconds to accelerate the S-II stage, followed by the ignition of the five J-2 engines. For the first seven crewed Apollo missions, only four ullage motors were used on the S-II, and they were eliminated for the final four launches. About 30 seconds after first stage separation, the interstage ring dropped from the second stage.
This was done with an inertially fixed attitude—orientation around its center of gravity—so that the interstage, only from the outboard J-2 engines, would fall cleanly without hitting them, as the interstage could potentially have damaged two of the J-2 engines had it remained attached. Shortly after interstage separation the Launch Escape System was also jettisoned. About 38 seconds after the second stage ignition, the Saturn V switched from a preprogrammed trajectory to a "closed loop" or Iterative Guidance Mode. The instrument unit now computed in real time the most fuel-efficient trajectory toward its target orbit. If the instrument unit failed, the crew could switch control of the Saturn to the command module's computer, take manual control, or abort the flight. About 90 seconds before the second stage cutoff, the center engine shut down to reduce longitudinal pogo oscillations. At around this time, the LOX flow rate decreased, changing the mix ratio of the two propellants and ensuring that there would be as little propellant as possible left in the tanks at the end of second stage flight. This was done at a predetermined delta-v. Five level sensors in the bottom of each S-II propellant tank were armed during S-II flight, allowing any two to trigger S-II cutoff and staging when they were uncovered. One second after the second stage cut off, it separated and several seconds later the third stage ignited. Solid fuel retro-rockets mounted on the interstage at the top of the S-II fired to back it away from the S-IVB. The S-II impacted about from the launch site. On the Apollo 13 mission, the inboard engine suffered major pogo oscillation, resulting in an early automatic cutoff. To ensure sufficient velocity was reached, the remaining four engines were kept active for longer than planned. A pogo suppressor was fitted to later Apollo missions to avoid this, though the early center-engine cutoff was retained to reduce g-forces.
S-IVB sequence

Unlike the two-plane separation of the S-IC and S-II, the S-II and S-IVB stages separated in a single step. Although it was constructed as part of the third stage, the interstage remained attached to the second stage. The third stage did not use much fuel to get into low Earth orbit (LEO), because the second stage had done most of the work. During Apollo 11, a typical lunar mission, the third stage burned for about 2.5 minutes until first cutoff at 11 minutes 40 seconds. At this point it was downrange and in a parking orbit at an altitude of and velocity of . The third stage remained attached to the spacecraft while it orbited the Earth one and a half times while astronauts and mission controllers prepared for translunar injection (TLI). For the final three Apollo flights, the temporary parking orbit was even lower (approximately ), using the Oberth effect to increase payload capacity for these missions. The Apollo 9 Earth orbit mission was launched into the nominal orbit consistent with Apollo 11, but the spacecraft were able to use their own engines to raise the perigee high enough to sustain the 10-day mission. Skylab was launched into a quite different orbit, with a perigee which sustained it for six years, and also a higher inclination to the equator (50 degrees versus 32.5 degrees for Apollo).

Lunar Module sequence

On Apollo 11, TLI came at 2 hours and 44 minutes after launch. The S-IVB burned for almost six minutes, giving the spacecraft a velocity close to the Earth's escape velocity of . This gave an energy-efficient transfer to lunar orbit, with the Moon helping to capture the spacecraft with a minimum of CSM fuel consumption. About 40 minutes after TLI, the Apollo command and service module (CSM) separated from the third stage, turned 180 degrees, and docked with the Lunar Module (LM) that rode below the CSM during launch.
The CSM and LM separated from the spent third stage 50 minutes later, in a maneuver known as transposition, docking, and extraction. Because it would otherwise remain on the same trajectory as the spacecraft, the S-IVB could have presented a collision hazard, so its remaining propellants were vented and the auxiliary propulsion system fired to move it away. For lunar missions before Apollo 13, the S-IVB was directed toward the Moon's trailing edge in its orbit so that the Moon would slingshot it beyond Earth escape velocity and into solar orbit. From Apollo 13 onwards, controllers directed the S-IVB to hit the Moon. Seismometers left behind by previous missions detected the impacts, and the information helped map the internal structure of the Moon.

Skylab sequence

In 1965, the Apollo Applications Program (AAP) was created to look into science missions that could be performed using Apollo hardware. Much of the planning centered on the idea of a space station. Wernher von Braun's earlier (1964) plans employed a "wet workshop" concept, with a spent S-II Saturn V second stage being launched into orbit and outfitted in space. The next year AAP studied a smaller station using the Saturn IB second stage. By 1969, Apollo funding cuts eliminated the possibility of procuring more Apollo hardware and forced the cancellation of some later Moon landing flights. This freed up at least one Saturn V, allowing the wet workshop to be replaced with the "dry workshop" concept: the station (now known as Skylab) would be built on the ground from a surplus Saturn IB second stage and launched atop the first two live stages of a Saturn V. A backup station, constructed from a Saturn V third stage, was built and is now on display at the National Air and Space Museum. Skylab was the only Saturn V launch not directly related to the Apollo lunar landing program.
The only significant changes to the Saturn V from the Apollo configuration involved some modification to the S-II to act as the terminal stage for inserting the Skylab payload into Earth orbit, and to vent excess propellant after engine cutoff so the spent stage would not rupture in orbit. The S-II remained in orbit for almost two years, and made an uncontrolled re-entry on January 11, 1975. Three crews lived aboard Skylab from May 25, 1973, to February 8, 1974. Skylab remained in orbit until July 11, 1979.

Post-Apollo proposal

After Apollo, the Saturn V was planned to be the prime launch vehicle for Prospector, a proposed robotic lunar rover similar to the two Soviet Lunokhod rovers; for the Voyager Mars probes; and for a scaled-up version of the Voyager interplanetary probes. The Saturn V was also to have been the launch vehicle for the nuclear rocket stage RIFT test program and for some versions of the later NERVA project. All of these planned uses of the Saturn V were cancelled, with cost being a major factor. Edgar Cortright, who had been the director of NASA Langley, stated decades later that "JPL never liked the big approach. They always argued against it. I probably was the leading proponent in using the Saturn V, and I lost. Probably very wise that I lost." The canceled second production run of Saturn Vs would very likely have used the F-1A engine in its first stage, providing a substantial performance boost. Other likely changes would have been the removal of the fins (which turned out to provide little benefit compared to their weight), a stretched S-IC first stage to support the more powerful F-1As, and uprated J-2s or an M-1 for the upper stages.
A number of alternate Saturn vehicles were proposed based on the Saturn V, ranging from the Saturn INT-20, with an S-IVB stage and interstage mounted directly onto an S-IC stage, through to the Saturn V-23(L), which would have had not only five F-1 engines in the first stage, but also four strap-on boosters with two F-1 engines each, giving a total of thirteen F-1 engines firing at launch. The lack of a second Saturn V production run killed these plans and left the United States without a super heavy-lift launch vehicle. Some in the U.S. space community came to lament this situation, as continued production could have allowed the International Space Station, using a Skylab or Mir configuration with both U.S. and Russian docking ports, to be lifted with just a handful of launches. The Saturn-Shuttle concept also could have eliminated the Space Shuttle Solid Rocket Boosters that ultimately precipitated the Challenger accident in 1986.

Proposed successors

U.S. proposals for a rocket larger than the Saturn V, studied from the late 1950s through the early 1980s, were generally called Nova. Over thirty different large rocket proposals carried the Nova name, but none were developed. Wernher von Braun and others also had plans for a rocket that would have featured eight F-1 engines in its first stage, like the Saturn C-8, allowing a direct ascent flight to the Moon. Other plans for the Saturn V called for using a Centaur as an upper stage or adding strap-on boosters. These enhancements would have enabled the launch of large robotic spacecraft to the outer planets or the sending of astronauts to Mars. Other Saturn V derivatives analyzed included the Saturn MLV family of "Modified Launch Vehicles", which would have almost doubled the payload lift capability of the standard Saturn V and were intended for use in a proposed mission to Mars by 1980.
In 1968, Boeing studied another Saturn V derivative, the Saturn C-5N, which included a nuclear thermal rocket engine for the third stage of the vehicle. The Saturn C-5N would have carried a considerably greater payload for interplanetary spaceflight. Work on the nuclear engines, along with all Saturn V ELVs, ended in 1973. The Comet HLLV was a massive heavy-lift launch vehicle designed for the First Lunar Outpost program, which was in the design phase from 1992 to 1993 under the Space Exploration Initiative. It was a Saturn V-derived launch vehicle with over twice the payload capability that would have relied completely on existing technology. All of the Comet HLLV engines were modernized versions of their Apollo counterparts, and the fuel tanks would have been stretched. Its main goal was to support the First Lunar Outpost program and future crewed Mars missions. It was designed to be as cheap and easy to operate as possible.

Ares family

In 2006, as part of the proposed Constellation program, NASA unveiled plans to construct two Shuttle-derived launch vehicles, the Ares I and Ares V, which would use some existing Space Shuttle and Saturn V hardware and infrastructure. The two rockets were intended to increase safety by specializing each vehicle for a different task, Ares I for crew launches and Ares V for cargo launches. The original design of the heavy-lift Ares V, named in homage to the Saturn V, was in height and featured a core stage based on the Space Shuttle External Tank, with a diameter of . It was to be powered by five RS-25 engines and two five-segment Space Shuttle Solid Rocket Boosters (SRBs). As the design evolved, the RS-25 engines were replaced with five RS-68 engines, the same engines used on the Delta IV.
The switch from the RS-25 to the RS-68 was intended to reduce cost, as the latter was cheaper, simpler to manufacture, and more powerful than the RS-25, though the lower efficiency of the RS-68 required an increase in core stage diameter to , the same diameter as the Saturn V's S-IC and S-II stages. In 2008, NASA again redesigned the Ares V, lengthening the core stage, adding a sixth RS-68 engine, and increasing the SRBs to 5.5 segments each. This vehicle would have been tall and would have produced a total thrust of approximately at liftoff, more than the Saturn V or the Soviet Energia, but less than the Soviet N-1. Projected to place approximately into orbit, the Ares V would have surpassed the Saturn V in payload capability. An upper stage, the Earth Departure Stage, would have utilized a more advanced version of the J-2 engine, the J-2X. Ares V would have placed the Altair lunar landing vehicle into low Earth orbit. An Orion crew vehicle launched on Ares I would have docked with Altair, and the Earth Departure Stage would then send the combined stack to the Moon.

Space Launch System

After the cancellation of the Constellation program – and hence Ares I and Ares V – NASA announced the Space Launch System (SLS) heavy-lift launch vehicle for beyond-low-Earth-orbit space exploration. The SLS, similar to the original Ares V concept, is powered by four RS-25 engines and two five-segment SRBs. Its Block 1 configuration can lift approximately to LEO. The Block 1B configuration will add the Exploration Upper Stage, powered by four RL10 engines, to increase payload capacity. An eventual Block 2 variant will upgrade to advanced boosters, increasing LEO payload to at least . One proposal for advanced boosters would use a derivative of the Saturn V's F-1, the F-1B, and increase SLS payload to around to LEO.
The F-1B is to have better specific impulse and be cheaper than the F-1, with a simplified combustion chamber and fewer engine parts, while producing of thrust at sea level, an increase over the approximate achieved by the mature Apollo 15 F-1 engine.

Saturn V displays

There are two displays at the U.S. Space & Rocket Center in Huntsville:
SA-500D is on horizontal display, made up of S-IC-D, S-II-F/D and S-IVB-D. These were all test stages not meant for flight. This vehicle was displayed outdoors from 1969 to 2007, was restored, and is now displayed in the Davidson Center for Space Exploration.
A vertical display (replica), built in 1999, is located in an adjacent area.
There is one at the Johnson Space Center made up of the first stage from SA-514, the second stage from SA-515, and the third stage from SA-513 (replaced for flight by the Skylab workshop). With stages arriving between 1977 and 1979, this was displayed in the open until its 2005 restoration, when a structure was built around it for protection. This is the only display Saturn consisting entirely of stages intended to be launched.
Another, at the Kennedy Space Center Visitor Complex, is made up of S-IC-T (a test stage) and the second and third stages from SA-514. It was displayed outdoors for decades, then in 1996 was enclosed for protection from the elements in the Apollo/Saturn V Center.
The S-IC stage from SA-515, originally at the Michoud Assembly Facility, New Orleans, is now on display at the Infinity Science Center in Mississippi.
The S-IVB stage from SA-515 was converted for use as a backup for Skylab, and is on display at the National Air and Space Museum in Washington, D.C.

Discarded stages

On September 3, 2002, astronomer Bill Yeung discovered a suspected asteroid, which was given the discovery designation J002E3.
It appeared to be in orbit around the Earth, and was soon discovered from spectral analysis to be covered in white titanium dioxide, a major constituent of the paint used on the Saturn V. Calculation of orbital parameters led to its tentative identification as the Apollo 12 S-IVB stage. Mission controllers had planned to send Apollo 12's S-IVB into solar orbit after separation from the Apollo spacecraft, but it is believed the burn lasted too long and hence did not send it close enough to the Moon, so it remained in a barely stable orbit around the Earth and Moon. In 1971, through a series of gravitational perturbations, it is believed to have entered a solar orbit; it then returned to a weakly captured Earth orbit 31 years later. It left Earth orbit again in June 2003.
Wide area synchronous grid
A wide area synchronous grid (also called an "interconnection" in North America) is a three-phase electric power grid of regional scale or greater that operates at a synchronized utility frequency and is electrically tied together during normal system conditions. Such grids are also known as synchronous zones; the most powerful is the Northern Chinese State Grid with 1,700 gigawatts (GW) of generation capacity, while the widest region served is that of the IPS/UPS system serving most countries of the former Soviet Union. Synchronous grids with ample capacity facilitate electricity trading across wide areas. In the ENTSO-E area in 2008, over 350,000 megawatt hours were sold per day on the European Energy Exchange (EEX). Neighbouring interconnections with the same frequency and standards can be synchronized and directly connected to form a larger interconnection, or they may share power without synchronization via high-voltage direct current power transmission lines (DC ties), solid-state transformers or variable-frequency transformers (VFTs), which permit a controlled flow of energy while also functionally isolating the independent AC frequencies of each side. Each of the interconnections in North America is synchronized at a nominal 60 Hz, while those of Europe run at 50 Hz. The benefits of synchronous zones include pooling of generation, resulting in lower generation costs; pooling of load, resulting in significant equalizing effects; common provisioning of reserves, resulting in cheaper primary and secondary reserve power costs; opening of the market, resulting in the possibility of long-term contracts and short-term power exchanges; and mutual assistance in the event of disturbances. One disadvantage of a wide-area synchronous grid is that problems in one part can have repercussions across the whole grid.

Properties

Wide area synchronous networks improve reliability and permit the pooling of resources.
Also, they can level out the load, reducing the required generating capacity; allow more environmentally friendly and more diverse power generation schemes to be employed; and permit economies of scale. Wide area synchronous networks cannot be formed if the two networks to be linked are running at different frequencies or have significantly different standards. For example, in Japan, for historical reasons, the northern part of the country operates on 50 Hz, but the southern part uses 60 Hz. That makes it impossible to form a single synchronous network, which was problematic when the Fukushima Daiichi plant melted down. Also, even when the networks have compatible standards, failure modes can be problematic. Phase and current limitations can be reached, which can cause widespread outages. These issues are sometimes solved by adding HVDC links within the network to permit greater control during off-nominal events. As was discovered in the California electricity crisis, some market traders can have strong incentives to create deliberate congestion and poor management of generation capacity on an interconnection network to inflate prices. Increasing transmission capacity and expanding the market by uniting with neighbouring synchronous networks make such manipulations more difficult.

Frequency

In a synchronous grid, all the generators naturally lock together electrically, run at the same frequency, and stay very nearly in phase with each other. For rotating generators, a local governor regulates the driving torque and helps maintain a more or less constant speed as loading changes. Droop speed control ensures that multiple parallel generators share load changes in proportion to their rating. Generation and consumption must be balanced across the entire grid because energy is consumed as it is produced. Energy is stored in the immediate short term by the rotational kinetic energy of the generators.
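The proportional load sharing that droop speed control provides can be sketched with a simplified textbook droop model; the 5% droop figure and the generator ratings below are assumptions for illustration, not any specific grid code.

```python
def droop_response(p_rated, f_nominal, f_actual, droop=0.05):
    """Extra power (same units as p_rated) a governor commands under
    droop control.  A 5% droop means a 5% frequency drop drives the
    unit from no load to full rated output (simplified model)."""
    per_unit_error = (f_nominal - f_actual) / f_nominal
    return p_rated * per_unit_error / droop

# Two parallel generators respond to a 0.1 Hz sag on a 50 Hz grid;
# each unit's extra output is proportional to its rating.
extra_large = droop_response(300.0, 50.0, 49.9)  # 300 MW unit -> ~12 MW
extra_small = droop_response(100.0, 50.0, 49.9)  # 100 MW unit -> ~4 MW
```

Because both units see the same per-unit frequency error and the same droop setting, their contributions stay in a 3:1 ratio, matching their 3:1 ratings.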
Small deviations from the nominal system frequency are very important in regulating individual generators and assessing the equilibrium of the grid as a whole. When the grid is heavily loaded, the frequency slows, and governors adjust their generators so that more power is output (droop speed control). When the grid is lightly loaded, the grid frequency runs above the nominal frequency, and this is taken as an indication by Automatic Generation Control (AGC) systems across the network that generators should reduce their output. In addition, there is often central control, which can change the parameters of the AGC systems over timescales of a minute or longer to further adjust the regional network flows and the operating frequency of the grid. Where neighbouring grids operating at different frequencies need to be interconnected, a frequency converter is required. HVDC interconnectors, solid-state transformers or variable-frequency transformer links can connect two grids that operate at different frequencies or that are not maintaining synchronism.

Inertia

Inertia in a synchronous grid is stored energy that the grid has available to provide extra power for up to a few seconds to maintain the grid frequency. Historically, this was provided only by the angular momentum of the generators, and gave the control circuits time to adjust their output to variations in loads and sudden generator or distribution failures. Inverters connected to HVDC usually have no inertia, but wind power can provide inertia, and solar and battery systems can provide synthetic inertia.

Short circuit current

In short circuit situations, it is important for a grid to be able to provide sufficient current to keep the voltage and frequency reasonably stable until circuit breakers can resolve the fault. Many traditional generator systems had wires which could be overloaded for very short periods without damage, but inverters are not as able to deliver multiple times their rated load.
The short circuit ratio can be calculated for each point on the grid, and if it is found to be too low, steps can be taken to increase it to above 1, which is considered stable.

Timekeeping

For timekeeping purposes, over the course of a day the operating frequency will be varied so as to balance out deviations and to prevent line-operated clocks from gaining or losing significant time, by ensuring there are 4.32 million cycles on 50 Hz systems, and 5.184 million cycles on 60 Hz systems, each day. This can, rarely, lead to problems. In 2018 Kosovo used more power than it generated due to a row with Serbia, leading the phase of the whole synchronous grid of Continental Europe to lag behind where it should have been. The frequency dropped to 49.996 Hz. Over time, this caused synchronous electric clocks to become six minutes slow until the disagreement was resolved.

Deployed networks

A partial table of some of the larger interconnections. Historically, on the North American power transmission grid, the Eastern and Western Interconnections were directly connected, forming what was at the time the largest synchronous grid in the world, but this was found to be unstable, and they are now only DC interconnected.

Planned

China's electricity suppliers plan to complete by 2020 an ultra-high-voltage AC synchronous grid linking the current North, Central, and Eastern grids. When complete, its generation capacity will dwarf that of the UCTE interconnection.
Unified Smart Grid – a proposed unification of the US interconnections into a single grid with smart grid features.
SuperSmart Grid – a similar mega-grid proposal linking the UCTE, IPS/UPS, and Mediterranean grids.
ASEAN Power Grid – a plan to connect all ASEAN grids. The first step is connecting all mainland ASEAN countries with the Sumatra, Java, and Singapore grids, then Borneo Island and the Philippines.
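The timekeeping arithmetic described above (daily cycle counts, and the clock drift from the 2018 Kosovo episode) can be checked directly; the rough day count at the end is an illustrative estimate assuming the 49.996 Hz average held steadily.

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

# A 50 Hz or 60 Hz grid should deliver exactly this many cycles/day
# for line-operated clocks to keep time:
cycles_per_day_50hz = 50 * SECONDS_PER_DAY  # 4,320,000
cycles_per_day_60hz = 60 * SECONDS_PER_DAY  # 5,184,000

# A line-operated clock effectively counts cycles, so a sustained
# average of 49.996 Hz on a 50 Hz grid loses time at this rate:
lag_per_day = SECONDS_PER_DAY * (50 - 49.996) / 50  # ~6.9 s/day
# Roughly 50 days at this rate accumulates the ~6 minutes of clock
# error reported in 2018 (illustrative estimate only).
days_to_six_minutes = (6 * 60) / lag_per_day
```
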
DC interconnectors

Interconnectors such as high-voltage direct current lines, solid-state transformers or variable-frequency transformers can be used to connect two alternating current interconnection networks which are not necessarily synchronized with each other. This provides the benefit of interconnection without the need to synchronize an even wider area. For example, compare the wide area synchronous grid map of Europe with the map of HVDC lines. Solid-state transformers have larger losses than conventional transformers, but DC lines lack reactive impedance, and overall HVDC lines have lower losses when sending power over long distances, whether within a synchronous grid or between grids.

Planned non-synchronous connections

The Tres Amigas SuperStation aims to enable energy transfers and trading between the Eastern Interconnection and Western Interconnection using 30 GW HVDC interconnectors.
Cataclasite
Cataclasite is a cohesive granular fault rock. Comminution, also known as cataclasis, is an important process in forming cataclasites. They fall into the category of cataclastic rocks, which are formed through faulting or fracturing in the upper crust. Cataclasites are distinguished from fault gouge, which is incohesive, and fault breccia, which contains coarser fragments.

Types

Cataclasites are composed of fragments of the pre-existing wall rock as well as a matrix consisting of crushed microfragments, which cohesively holds the rock together. There are different classification schemes for cataclasites in the fault rock literature. The original classification scheme by Sibson classifies them by their proportion of fine-grained matrix to angular fragments. The term fault breccia is used for describing a cataclasite with coarser grains: a fault breccia is a cataclastic rock with clasts larger than two millimeters making up at least 30% of the rock. These are the varieties based on the classification scheme of cataclasites proposed by Sibson:
protocataclasite: a type of cataclasite in which the matrix takes up less than 50% of the total volume
mesocataclasite: a type of cataclasite in which the matrix occupies between 50 and 90 percent of the total volume
ultracataclasite: a type of cataclasite characterized by a matrix occupying greater than 90% of the total volume
This classification scheme separates distinct features of cataclasites, but any fault rock that has been formed through brittle deformation mechanisms and contains pieces of the fractured pre-existing rock type is normally referred to as a cataclasite. Cataclasites are different from mylonites, another type of fault rock, which are classified by the presence of a schistosity formed through ductile deformation. Although cataclasites often lack an oriented fabric, some cataclasites are foliated.
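The matrix-fraction boundaries of Sibson's scheme described above (50% and 90% of total volume) can be expressed as a small classifier; this is a sketch for illustration, and the function name is ours, not standard terminology.

```python
def classify_cataclasite(matrix_percent):
    """Classify a cataclasite by the percentage of fine-grained
    matrix in its total volume, following the 50% and 90% boundaries
    of Sibson's scheme as summarized above."""
    if not 0 <= matrix_percent <= 100:
        raise ValueError("matrix percentage must be between 0 and 100")
    if matrix_percent < 50:
        return "protocataclasite"
    if matrix_percent <= 90:
        return "mesocataclasite"
    return "ultracataclasite"
```

For example, a sample whose matrix occupies 70% of the volume falls in the 50–90% band and is classed as a mesocataclasite.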
According to Sibson's 1975 classification scheme, these would be classified as mylonites, although it was shown experimentally that some cataclastic mechanisms can form cataclasites with an oriented foliation solely through brittle deformation. In a modification to the original definitions, such a foliated fault rock would still be considered a cataclasite because it was created by cataclastic mechanisms.

Formation

Cataclasites form through the progressive fracturing of mineral grains and aggregates, a process known as comminution. Cataclasites are the result of comminution, along with frictional sliding and grain rotation, during faulting. This crushing, frictional sliding and rotation of grains is referred to as cataclasis. Comminution, along with frictional sliding and grain boundary rotation, can allow a rock to macroscopically flow over a wide brittle zone in the crust. This macroscopic flow due to the combination of brittle deformation mechanisms is referred to as cataclastic flow.

Setting

Many faults near the Earth's surface are brittle and show evidence of low temperature deformation. At low temperatures, there is not enough energy for the crystal grains to deform plastically, so each grain fractures rather than elongating or recrystallizing. In these systems, cataclasites are more likely to form than mylonites, which would require crystal plastic deformation. Because quartz is the main mineral in many rocks in the brittle regime of the crust, the brittle-ductile transition for quartz can be a good indication of where cataclasites would form before ductile deformation plays a role. This normally corresponds to the uppermost 10–12 km of the continental crust.
Medium-capacity rail system
A medium-capacity system (MCS), also known as light rapid transit or light metro, is a rail transport system with a capacity greater than light rail but less than typical heavy-rail rapid transit. MCS trains are usually one to four cars long. Most medium-capacity rail systems are automated or use light-rail-type vehicles. Since ridership determines the scale of a rapid transit system, statistical modeling allows planners to size the rail system for the needs of the area. When the predicted ridership falls between the service requirements of a light rail system and a heavy-rail rapid transit or metro system, an MCS project is indicated. An MCS may also result when a rapid transit service fails to achieve the requisite ridership due to network inadequacies (e.g. single-tracking) or changing demographics. In contrast with light rail systems, an MCS runs on a fully grade-separated exclusive right-of-way. In some cases, the distance between stations is much longer than is typical on heavy rail networks. An MCS may also be suitable for branch-line connections to another mode of a heavy-capacity transport system, such as an airport or a main route of a metro network. Definition The medium-capacity designation reflects a system's lower capacity, its shorter train configuration, or both, relative to heavy rail systems. For example, a train in an MCS may have a shorter configuration than on a standard metro system, with fewer cars than a heavy-capacity system, allowing shorter platforms to be built and used. Rather than using steel wheels, rubber-tyred metro technology, such as the VAL system used on the Taipei Metro, is sometimes recommended for its low running noise and its ability to climb steeper grades and turn tighter curves, allowing more flexible alignments. Full heavy rail or metro systems generally have train headways of 10 minutes or better during peak hours. Some systems that qualify as heavy rail/metro in every other way (e.g.
are fully grade separated), but which have network inadequacies (e.g. a section of single track) can only achieve longer headways (e.g. every 15 minutes), which result in lower passenger volume capacities; such systems would be more accurately defined as "light metro" or "medium-capacity" systems as a result. Capacity A report from the World Bank places the capacity of an MCS at 15,000 to 30,000 p/h/d. For comparison, ridership capacity of more than 30,000 p/h/d has been quoted as the standard for metro or "heavy rail" rapid transit systems, while light rail systems have passenger capacities of around 10,000 to 12,000 p/h/d or 12,000 to 18,000 p/h/d, depending on the source. VAL (Véhicule Automatique Léger) systems are categorised as medium-capacity rail systems because their manufacturer rates their passenger capacity at up to 30,000 p/h/d. However, the capacity boundaries for a line to be categorised as a medium-capacity system vary, as the term is not standardised. Inconsistencies in international definitions are even reflected within individual countries. For example, the Taiwan Ministry of Transportation and Communications states that an MCS can board around 6,000 to 20,000 passengers per hour per direction (p/h/d or PPHPD), while the Department of Rapid Transit Systems of the Taipei City Government (TCG) suggests an MCS can board around 20,000 to 30,000 p/h/d. In Hong Kong, MTR's Ma On Shan line was locally classified as a medium-capacity system (as it used shorter 4-car SP1950 trains, compared to 7- to 12-car trains on other MTR lines) but could attain up to 32,000 p/h/d, comparable to the passenger capacity of some full metro networks. However, it was built to full heavy rail standards because it was designed to be extended. Full-length, 8-car trains were deployed on the line in advance of its extension into the Tuen Ma line in June 2021.
Two other lines, the Disneyland Resort line shuttle service (since 2005) and the South Island line (since December 2016), are also classified as MCS because of their shorter trains and smaller capacity; however, they use the same technology as the full-capacity rapid transit lines. Terminology In addition to MCS, light metro is a common alternative term in European countries, India, and South Korea. In some countries, however, light metro systems are conflated with light rail. In South Korea, light rail is used as the translation of the original Korean term, "경전철" – its literal translation is "light metro", but it actually means "any railway transit other than heavy rail, with a capacity between heavy rail and bus transit". For example, the U Line in Uijeongbu utilises the VAL system, a variant of medium-capacity rail transport, and is therefore categorised as "light metro" by the LRTA and others, though the operator itself and South Korean sources refer to the U Line as "light rail". The Busan–Gimhae Light Rail Transit is also akin to a light metro in its appearance and features, though the operator refers to it as "light rail". Likewise, Malaysian officials and media commonly refer to the Kelana Jaya, Ampang and Sri Petaling lines as "light rail transit" systems; when they originally opened, the Malay abbreviations for the lines, PUTRA-LRT (Projek Usahasama Transit Ringan Automatik/Automatic Light Transit Joint Venture Project) and STAR-LRT (Sistem Transit Aliran Ringan/Light Flow Transit System), did not clearly distinguish between light rail and light rapid transit. Some articles in India also refer to "light metro"-type systems as "light rail". The Light Rail Transit Association (LRTA), a nonprofit organisation, also categorises several public transport systems as "light metro". Advantages and disadvantages The main reason to build a light metro instead of a regular metro is to reduce costs, mainly because the system employs shorter vehicles and shorter stations.
Light metros may operate faster than heavy-rail rapid transit systems due to shorter dwell times at stations and the faster acceleration and deceleration of lighter trains. For example, express trains on the New York City Subway are about as fast as the Vancouver SkyTrain, but those express trains skip most stops on the lines where they operate. Medium-capacity systems have restricted growth capacity as ridership increases. For example, it is difficult to extend station platforms once a system is in operation, especially on underground railways, since the work must be done without interfering with traffic. Some railway systems, such as those in Hong Kong and Wuhan, make advance provision for longer platforms so that they will be able to accommodate trains with more, or longer, cars in the future. Taipei Metro, for example, constructed extra space for two additional cars in all of its Wenhu Line stations. List of medium-capacity rail systems The following is a list of currently operating MCSs which are categorised as light metros by the Light Rail Transit Association (LRTA), unless otherwise indicated. The list does not include, for example, monorails and urban maglev systems, despite most of them also being "medium-capacity rail systems". Under construction Former MCSs The following is a list of former MCSs that either developed into full rapid transit systems or are no longer in operation: Guangzhou, China Line 3 – began with a 3-car configuration, changed to 6-car in 2010. Komaki, Japan Peachliner – closed on 30 September 2006. Seoul, South Korea Line 9 – trains lengthened from 4 cars to 6 cars in 2019. Sha Tin and Ma On Shan, Hong Kong Ma On Shan Rail – converted from 4- to 8-car configuration and became part of the Tuen Ma line. Toronto, Ontario Line 3 Scarborough – categorised by APTA as "intermediate rail" (i.e. between "heavy rail" and "light rail"), and categorised as a "light metro" by the LRTA.
Although it had been scheduled to cease operations in November 2023, service was suspended following a derailment in July 2023 and never resumed; it was instead replaced by an express bus service.
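As a rough illustration, the capacity bands quoted in the Capacity section above can be sketched as a classifier. The cut-offs here are assumptions: the article notes that the boundaries are not standardised, and the quoted light rail and MCS ranges overlap, so the 15,000 p/h/d dividing line is chosen only for the sake of the example.

```python
def capacity_band(pphpd: int) -> str:
    """Rough transit category for a peak passengers-per-hour-per-direction
    figure, using the World Bank bands quoted in the text (illustrative only)."""
    if pphpd < 15_000:
        return "light rail"                    # quoted at ~10,000-18,000 p/h/d
    if pphpd <= 30_000:
        return "medium-capacity system (MCS)"  # 15,000-30,000 p/h/d
    return "heavy-rail metro"                  # more than 30,000 p/h/d

# e.g. the Ma On Shan line's quoted 32,000 p/h/d falls in the metro band:
print(capacity_band(32_000))
```
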
https://en.wikipedia.org/wiki/Graupel
Graupel
Graupel (; ), also called soft hail or snow pellets, is precipitation that forms when supercooled water droplets in air are collected and freeze on falling snowflakes, forming balls of crisp, opaque rime. Graupel is distinct from hail and ice pellets in both formation and appearance. However, both hail and graupel are common in thunderstorms with cumulonimbus clouds, though graupel also falls in winter storms and at higher elevations. The METAR code for graupel is GS. Formation Under some atmospheric conditions, snow crystals may encounter supercooled water droplets. These droplets, which have a diameter of about on average, can exist in the liquid state at temperatures as low as , far below the normal freezing point, as long as the temperature remains above the homogeneous nucleation point of water. Contact between a snow crystal and the supercooled droplets results in freezing of the liquid droplets onto the surface of the crystal. This process of crystal growth is known as accretion. Crystals that exhibit frozen droplets on their surfaces are often referred to as rimed. When this process continues until the shape of the original snow crystal is no longer identifiable and has become ball-like, the resulting crystal is referred to as graupel. As graupel falls, it often deforms into a conical shape. This conical shape, in turn, determines which way up the particle falls and how far it travels as it falls. Small graupel particles with a base diameter of less than 1 mm generally fall with the conical base down; if the base diameter is between 1 mm and 3 mm, persistent oscillations around the center of the conical base appear; and if it is larger than 3 mm, the particle will start to tumble. As the base diameter increases, conical graupel particles generally travel further horizontally from where they began falling. Graupel was formerly referred to by meteorologists as "soft hail."
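The fall-behaviour thresholds described above can be summarised in a small function. This is a sketch; the handling of diameters of exactly 1 mm and 3 mm is an assumption, since the text gives open ranges.

```python
def graupel_fall_mode(base_diameter_mm: float) -> str:
    """Fall behaviour of a conical graupel particle as a function of its
    base diameter, per the thresholds given in the text."""
    if base_diameter_mm <= 0:
        raise ValueError("diameter must be positive")
    if base_diameter_mm < 1.0:
        return "falls with the conical base down"
    if base_diameter_mm <= 3.0:
        return "oscillates persistently around the center of the base"
    return "tumbles"
```
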
Graupel is distinguishable from true hail in both the shape and strength of the pellet and, in some cases, the circumstances in which it falls. Hail is formed in hard, relatively uniform layers of ice and usually falls only during thunderstorms. Graupel forms fragile, soft, oblong crystals and falls in place of typical snowflakes in wintry-mix situations, often together with ice pellets; however, graupel does also occur in thunderstorms. Graupel is also fragile enough that it will typically fall apart when pressed. Microscopic structure The frozen droplets on the surface of rimed crystals are difficult to resolve even under magnification, and the topography of a graupel particle is not easy to record with a light microscope because of the limited resolution and depth of field of the instrument. However, observations of snow crystals with a low-temperature scanning electron microscope (LT-SEM) clearly show frozen cloud droplets measuring up to on the surface of the crystals. Rime has been observed on all four basic forms of snow crystal: plates, dendrites, columns, and needles. As the riming process continues, the mass of frozen, accumulated cloud droplets eventually obscures the form of the original snow crystal, giving rise to graupel. Graupel and avalanches Graupel commonly forms in high-altitude climates and is both denser and more granular than ordinary snow, due to its rimed exterior. Macroscopically, graupel resembles small beads of polystyrene. The combination of density and low viscosity makes fresh layers of graupel unstable on slopes, and layers of or higher present a high risk of dangerous slab avalanches. In addition, thinner layers of graupel falling at low temperatures can act as ball bearings below subsequent falls of more naturally stable snow, rendering them also liable to avalanche or otherwise making surfaces slippery.
Graupel tends to compact and stabilise ("weld") approximately one or two days after falling, depending on the temperature and the properties of the graupel. Gallery
https://en.wikipedia.org/wiki/Neglected%20tropical%20diseases
Neglected tropical diseases
Neglected tropical diseases (NTDs) are a diverse group of tropical infections that are common in low-income populations in developing regions of Africa, Asia, and the Americas. They are caused by a variety of pathogens, such as viruses, bacteria, protozoa, and parasitic worms (helminths). These diseases are contrasted with the "big three" infectious diseases (HIV/AIDS, tuberculosis, and malaria), which generally receive greater treatment and research funding. In sub-Saharan Africa, the effect of neglected tropical diseases as a group is comparable to that of malaria and tuberculosis. NTD co-infection can also make HIV/AIDS and tuberculosis more deadly. Some treatments for NTDs are relatively inexpensive. For example, praziquantel for schistosomiasis costs about US $0.20 per child per year. Nevertheless, in 2010 it was estimated that control of neglected diseases would require funding of between US$2 billion and $3 billion over the subsequent five to seven years. Some pharmaceutical companies have committed to donating all the drug therapies required, and mass drug administration efforts (for example, mass deworming) have been successful in several countries. While preventive measures are often more accessible in the developed world, they are not universally available in poorer areas. Within developed countries, neglected tropical diseases affect the very poorest in society. In the United States, there are up to 1.46 million families, including 2.8 million children, living on less than two dollars per day. In developed countries, the burdens of neglected tropical diseases are often overshadowed by other public health issues. However, many of the same issues put populations at risk in developed as well as developing nations. For example, other problems stemming from poverty, such as lack of adequate housing, can expose individuals to the vectors of these diseases. 
Twenty neglected tropical diseases are prioritized by the World Health Organization (WHO), though other organizations define NTDs differently. Chromoblastomycosis and other deep mycoses, scabies and other ectoparasites, and snakebite envenomation were added to the WHO list in 2017. These diseases are common in 149 countries, affecting more than 1.4 billion people (including more than 500 million children) and costing developing economies billions of dollars every year. They resulted in 142,000 deaths in 2013, down from 204,000 deaths in 1990. Reasons for neglect The importance of neglected tropical diseases has been underestimated because many are asymptomatic and have long incubation periods. The connection between a death and a neglected tropical disease that has been latent for a long period is often not recognized. Areas of high endemicity are often geographically isolated, making treatment and prevention much more difficult. There are three other major reasons that these diseases have been overlooked: they mainly affect the poorest countries of the developing world; in recent years public health efforts have focused heavily on decreasing the prevalence of HIV/AIDS, tuberculosis, and malaria (far more resources are given to those three diseases because of their higher mortality rates and higher public awareness); and neglected tropical diseases do not currently have a prominent cultural figure to champion their elimination. Stigma Neglected tropical diseases are often associated with social stigma, making their treatment more complex. Public health research has only recently begun to focus on stigma as a component of the issue. From the 1960s onward, the literature contained approximately one citation a year relating to social stigma; in 2006 alone, there were 458. Stigma greatly affects disease control by decreasing help-seeking and treatment adherence. Since the 1980s, disease control programs have begun to integrate stigma mitigation into their offerings.
In India, a leprosy program prioritized the message that "leprosy is curable, not hereditary" in order to inspire optimism in highly affected communities. The goal was to make leprosy a disease "like any other", so as to reduce stigma. At the same time, medical resources were optimized to fulfill the promise that the disease could be cured. Economic incentives Treatment and prevention of neglected tropical diseases are not seen as profitable, so patents and profit play a reduced role in stimulating innovation compared to other diseases. Like all non-commercial areas, communities affected by these diseases are reliant on governments and philanthropy. Currently, the pharmaceutical industry views research and development as highly risky. For this reason, resources are not often put into the field of NTDs, and new chemical products are often expensive. A review of public and private initiatives found that of the 1,393 new chemical products that were marketed between 1975 and 1999, only 16 were related to tropical diseases or tuberculosis. The same review found that there was a 13-fold greater chance of a newly marketed drug being for central nervous system disorders or cancer than for an NTD. Because of a lack of economic incentives for the pharmaceutical industry, successful NTD treatment programs have often relied on donations. For instance, the Mectizan Donation Program has donated over 1.8 billion tablets of ivermectin. While developed countries often rely on government-run and private partnerships to fund such projects, developing nations frequently have significantly lower per-person spending on these diseases. A 2006 report found that the Gates Foundation funded most extra activities to counter these diseases. Neglected diseases in developed nations Since 2008, the concept of "neglected diseases of poverty" has been developed and explored. 
This group of diseases, which overlaps with neglected tropical diseases, also pose a threat to human health in developed nations. In the United States alone, there are at least 12 million people with neglected parasitic infections. They make up a hidden disease burden among the poorest people in wealthy societies. In developed nations, lack of knowledge in the healthcare industry and lack of conclusive diagnostic tests perpetuate the neglect of this group of diseases. In the United States, rates of parasitic infection can be distributed along geographic, racial, and socio-economic lines. Among African Americans, there may be up to 2.8 million cases of toxocariasis. Toxocariasis, trichomoniasis, and some other neglected infections occur in the United States at the same rate as in Nigeria. Within the Hispanic community, neglected infections are concentrated near the US–Mexico border. Vector-borne illnesses are especially high, with some rates approaching those of Latin America. Chagas disease was found in the US as early as the 1970s. However, in the developed world, diseases that are associated with poverty are often not addressed comprehensively. This may be due to a lack of economic incentives and public policy failings. A lack of awareness prevents effective policy generation and leaves healthcare services unequipped to address the issue. Additionally, little effort is put into creating and maintaining large data sets on neglected diseases in the United States and other developed nations. The first summit on the issue was held by the Adler Institute on Social Exclusion in the United States in 2009. In Europe, a similar trend is seen. Neglected tropical diseases are concentrated in eastern and southern Europe, where poverty levels are highest. The most prevalent diseases in this region are ascariasis, trichuriasis, zoonotic helminth infections, and visceral leishmaniasis. Migration paths to Europe, most notably to Spain, have brought diseases to Europe as well. 
As many as 6,000 cases of Chagas disease have been introduced in this way. In response to a growing awareness of the burden on these populations, the European Centre for Disease Prevention and Control has laid out ten public health guidelines. They cover a variety of topics, from health education and promotion to community partnerships and the development of a minority healthcare workforce. List of diseases There is some debate among the WHO, CDC, and infectious disease experts over which diseases are classified as neglected tropical diseases. Feasey, a researcher in neglected tropical diseases, notes 13 neglected tropical diseases: ascariasis, Buruli ulcer, Chagas disease, dracunculiasis, hookworm infection, human African trypanosomiasis, leishmaniasis, leprosy, lymphatic filariasis, onchocerciasis, schistosomiasis, trachoma, and trichuriasis. Fenwick recognizes 12 "core" neglected tropical diseases: the same as above, excluding hookworm. These diseases result from four classes of causative pathogens: (i) protozoa (Chagas disease, human African trypanosomiasis, and leishmaniasis); (ii) bacteria (Buruli ulcer, leprosy, trachoma, and yaws), (iii) helminths or metazoan worms (cysticercosis/taeniasis, dracunculiasis, echinococcosis, foodborne trematodiases, lymphatic filariasis, onchocerciasis, schistosomiasis, and soil-transmitted helminthiasis); and (iv) viruses (dengue, chikungunya, and rabies). The WHO recognizes the twenty diseases below as neglected tropical diseases. The World Health Organization's 2010 report on neglected tropical diseases offers an expanded list including dengue, rabies, yaws, cysticercosis, echinococcosis, and foodborne trematode infections. Buruli ulcer Buruli ulcer is caused by the bacterium Mycobacterium ulcerans. It is related to the bacteria that cause tuberculosis and leprosy. Mycobacterium ulcerans produces a toxin, mycolactone, that destroys tissue. The prevalence of Buruli ulcer is unknown. 
The risk of mortality is low, although secondary infections can be lethal. Morbidity takes the form of deformity, disability, and skin lesions, which can be prevented through early treatment with antibiotics and surgery. It is found in Africa, Asia, Australia, and Latin America. Chagas disease Chagas disease is also known as American trypanosomiasis. There are approximately 15 million people infected with Chagas disease. Morbidity rates are higher for immunocompromised individuals, children, and the elderly, but can be very low if the disease is treated early. Chagas disease does not kill victims rapidly, instead causing years of debilitating chronic symptoms. It is caused by the vector-borne protozoa Trypanosoma cruzi. It is spread by contact with Trypanosoma cruzi-infected feces of the triatomine (assassin bug). The protozoan can enter the body via the bug's bite, skin breaks, or mucous membranes. Infection can result from eating infected food or coming into contact with contaminated bodily fluids. There are two phases of Chagas disease. The acute phase is usually asymptomatic. The first symptoms are usually skin chancres, unilateral purplish orbital oedema, local lymphadenopathy, and fever, accompanied by a variety of other symptoms depending on the infection site. The chronic phase occurs in 30 percent of all infections and can take three forms: asymptomatic (most prevalent), cardiac, and digestive lesions. Chagas disease can be prevented by avoiding insect bites through insecticide spraying, home improvement, bed nets, hygienic food, medical care, laboratory practices, and testing. It can be diagnosed through a serological test, although the test is not very accurate. Treatment is with medication, which may have severe side effects. Dengue and chikungunya There are 50–100 million dengue virus infections annually. 
Dengue fever is usually not fatal, but infection with one of four serotypes can increase later susceptibility to other serotypes, resulting in a potentially fatal disease called severe dengue. Dengue fever is caused by a flavivirus which is spread mostly by the bite of the Aedes aegypti mosquito. No treatment for either dengue or severe dengue exists beyond palliative care. The symptoms are high fever and flu-like symptoms. It is found in Asia, Latin America, and Northern Australia. Chikungunya is an arboviral disease transmitted by A. albopictus and A. aegypti mosquitoes. The virus was first isolated from an outbreak in Tanzania in 1952. Chikungunya virus is a member of the genus Alphavirus and family Togaviridae. The word "chikungunya" is from the Makonde language and means "that which bends up", referring to the effect of debilitating joint pain on the patient. Symptoms, generally appearing 5–7 days after exposure, can be confused with dengue and include fever, rash, headache, joint pain, and swelling. The disease mainly occurs in Africa and Asia. Dracunculiasis Dracunculiasis is also known as Guinea-worm disease. In 2019, 53 cases were reported across four countries, a substantial decrease from 3,500,000 cases in 1986. It is not fatal, but can cause months of inactivity. It is caused by drinking water contaminated by water fleas infected with guinea-worm larvae. Approximately one year after infection, a painful blister forms and one or more worms emerge. Worms can be up to 1 metre long. It is usually treated by World Health Organization volunteers who clean and bandage wounds caused by worms and return daily to pull the worm out a few more inches. Dracunculiasis is preventable by water filtration, immediate case identification to prevent spread, health education, and treating ponds with larvicide. An eradication program has been able to reduce prevalence. , the four endemic countries are Chad, Ethiopia, Mali, and South Sudan. 
Echinococcosis The rate of echinococcosis is higher in rural areas, and there are more than one million people infected currently. It is caused by ingesting parasites in animal feces. There are two versions of the disease: cystic and alveolar. Both versions involve an asymptomatic incubation period of several years. In the cystic version, liver cysts cause abdominal pain, nausea, and vomiting, while cysts in the lungs cause chronic cough, chest pain, and shortness of breath. In alveolar echinococcosis, a primary cyst develops, usually in the liver, in addition to weight loss, abdominal pain, malaise, and signs of liver failure. Untreated alveolar echinococcosis is fatal. Surgery and drugs can be used to treat echinococcosis. It can be prevented by deworming dogs, sanitation, proper disposal of animal feces, health education, and livestock vaccination. Cystic echinococcosis is found in the eastern portion of the Mediterranean region, northern Africa, southern and eastern Europe, the southern portion of South America, and Central Asia. Alveolar echinococcosis is found in western and northern China, Russia, Europe, and northern North America. It can be diagnosed through imaging techniques and serological tests. Yaws There are limited data available on the prevalence of yaws, although it primarily affects children. The mortality risk is very low, but the disease causes disfigurement and disability if untreated. The most common symptom is skin lesions. It is a chronic bacterial infection, transmitted by skin contact, and caused by the spirochete bacterium Treponema pallidum pertenue. It is treated with antibiotics and can be prevented through hygiene and sanitation. Yaws is most prevalent in warm, moist tropical regions of the Americas, Africa, Asia, and the Pacific. Foodborne trematodiases Foodborne trematode infections include clonorchiasis, opisthorchiasis, fascioliasis, and paragonimiasis. 
These infections are all zoonotic, primarily affecting domestic or wild animals, but can also be transmitted to humans. They are acquired by eating food, such as raw fish, contaminated with the larval stages of the parasites. At least 40 million people are thought to be infected. Human African trypanosomiasis African trypanosomiasis (African sleeping sickness) is a somewhat rare protozoal disease, with fewer than 10,000 cases currently. Human African trypanosomiasis is vector-borne and spreads through the bite of the tsetse fly. The most common symptoms are fever, headache, lymphadenopathy, sleeping disturbances, personality changes, cognitive decline, and coma. The disease is always fatal if untreated. The current forms of treatment are highly toxic and ineffective, as resistance is spreading. It is diagnosed through an inexpensive serological test. Leishmaniasis The three forms of leishmaniasis, a protozoal disease, are visceral (Kala-azar), cutaneous, and mucocutaneous. There are an estimated 12 million people infected. It is fatal if untreated, and 20,000 deaths from visceral leishmaniasis occur annually. It is a vector-borne disease caused by the bite of sandflies. At least 90 percent of visceral leishmaniasis occurs in Bangladesh, Brazil, Ethiopia, India, South Sudan, and Sudan. Cutaneous leishmaniasis occurs in Afghanistan, Algeria, Brazil, Colombia, Iran, Pakistan, Peru, Saudi Arabia, and Syria. Around 90 percent of mucocutaneous leishmaniasis occurs in Bolivia, Brazil, and Peru. A vaccine is under development to prevent leishmaniasis. The only other method of prevention is avoidance of sandfly bites. Diagnosis can be made by clinical signs, serological tests, or parasitological tests. Leishmaniasis can be treated with expensive medications. Leprosy According to recent figures from the WHO, 208,619 new cases of leprosy were reported in 2018 from 127 countries. 
It is most prevalent in India (69% of cases), Brazil, Indonesia, Nigeria, the Democratic Republic of the Congo, Madagascar, and East Africa from Mozambique to Ethiopia, with the highest relative incidence in India, Brazil, and Nepal. There are one to two million individuals currently disabled or disfigured due to past or present leprosy. It is caused by bacteria and transmitted through droplets from the mouth and nose of infected individuals. Leprosy causes disfigurement and physical disabilities if untreated. It is curable if treated early. Treatment requires multidrug therapy. The BCG vaccine has some preventative effect against leprosy. Leprosy has a 5–20 year incubation period, and the symptoms are damage to the skin, nerves, eyes, and limbs. Lymphatic filariasis Lymphatic filariasis is also known as elephantiasis. There are approximately 120 million individuals infected and 40 million with deformities. Approximately two-thirds of cases are in Southwest Asia, and one-third are in Africa. Lymphatic filariasis is rarely fatal but has lifelong implications, such as lymphoedema of the limbs, genital disease, and painful recurrent attacks. Most people are asymptomatic but have lymphatic damage. Up to 40 percent of infected individuals have kidney damage. It is a vector-borne disease, caused by nematode worms that are transmitted by mosquitoes. It can be treated with cost-effective antihelminthic treatments, and washing skin can slow or even reverse damage. It is diagnosed with a finger-prick blood test. Noma Noma, an opportunistic bacterial infection causing gangrenous necrosis of the mouth, was added to the World Health Organization's list of neglected tropical diseases in December 2023. Onchocerciasis Onchocerciasis is also known as river blindness. There are 20.9 million people infected, and prevalence is higher in rural areas. Over 99 percent of cases are in sub-Saharan Africa. It causes blindness, skin rashes, lesions, intense itching, and skin depigmentation. 
It is a vector-borne disease, caused by blackflies infected with filarial worms. It can be treated with ivermectin and prevented by insecticide spraying or preventative dosing with ivermectin. Rabies There are two forms of rabies: furious and paralytic. It is mostly found in Asia and Africa. There is a higher prevalence in rural areas, and it disproportionately affects children. Rabies is fatal after symptoms develop. It is caused by a lyssavirus transmitted through wounds or bites from infected animals. The first symptoms are fever and pain near the infection site, which occur after a one- to three-month incubation period. Furious rabies (the more common type) causes hyperactivity, hydrophobia, and aerophobia; death by cardio-respiratory arrest occurs within days. Paralytic rabies causes a slow progression from paralysis to coma to death. There are 60,000 deaths from rabies annually. It can be prevented in dogs by vaccination and by cleaning and disinfecting bite wounds and post-exposure prophylaxis. Rabies is undiagnosable before symptoms develop. It can be detected through tissue testing after symptoms develop. Schistosomiasis There are over 200 million cases of schistosomiasis. Approximately 85 percent of cases are in sub-Saharan Africa. The disease can be fatal by causing bladder cancer and hematemesis. Schistosoma species have a complex life cycle that alternates between humans and freshwater snails. Infection occurs when the skin comes into contact with contaminated fresh water in which snails that carry the parasite are living. Symptoms for schistosomiasis are not caused by the worms but by the body's reaction to the eggs. The eggs that do not pass out of the body can become lodged in the intestine or bladder, causing inflammation or scarring. Children who are repeatedly infected can develop anemia, malnutrition, and learning difficulties. 
The symptoms are usually haematuria, bladder obstruction, renal failure, bladder cancer, periportal fibrosis, bladder fibrosis, liver fibrosis, portal hypertension, cervical lesions, ascites, and esophageal varices. Inexpensive praziquantel can be used to treat individuals with schistosomiasis, but it cannot prevent reinfection. The cost of prevention is US$0.32 per child per year. Mass deworming treatment with praziquantel, better access to safe water, sanitation, and health education can all be used to prevent schistosomiasis. Vaccines are under development. It can be diagnosed through a serological test, but the test often produces false negatives. Soil-transmitted helminthiasis Soil-transmitted helminthiasis is the most prevalent neglected tropical disease. The four major worm species responsible for soil-transmitted helminthiasis are Ascaris (roundworms), Trichuris (whipworm), the hookworms Necator americanus and Ancylostoma duodenale, and Strongyloides stercoralis. There are 1.5 billion people currently infected. Soil-transmitted helminthiasis occurs in sub-Saharan Africa, the Americas, China, and East Asia. The mortality risk is very low. The most common symptoms are anemia, stunted growth, intestinal problems, lack of energy, and compromised physical and cognitive development. Infected children often fall behind in schooling. The severity of symptoms depends on the number of worms in the body. Parasitic worms are generally transmitted via exposure to infected human feces and soil that are spread in the environment, for example, due to open defecation. The most common treatment is medicine. It can be prevented through hygienically prepared food and clean water, improved sanitation, periodic deworming, and health education. The World Health Organization recommends mass deworming without prior diagnosis. Taeniasis/cysticercosis Cysticercosis is a tapeworm larvae infection, while taeniasis is infection with adult tapeworms. 
Both are found in Asia, Africa, and Latin America, particularly on farms in which pigs are exposed to human excrement. Cysticercosis is the most common preventable cause of epilepsy in the developing world. Cysticercosis occurs after ingestion of contaminated food, water, or soil. Cysts and lesions can cause headaches, blindness, seizures, hydrocephalus, meningitis, and dementia. Neurocysticercosis, or the parasitic infection of the nervous system, can be fatal. Taeniasis is not fatal. It is usually contracted after eating undercooked contaminated pork. Taeniasis has mild symptoms, including abdominal pain, nausea, diarrhea, or constipation. Drugs are used to treat both diseases. Infection can be prevented through stricter meat-inspection standards, livestock confinement, improved hygiene and sanitation, health education, safe meat preparation, and identifying and treating human and pig carriers. Trachoma There are 21.4 million people infected with trachoma, of whom 2.2 million are partially blind and 1.2 million are blind. It is found in Africa, Asia, Central and South America, the Middle East, and Australia. The disease disproportionately affects women and children. The mortality risk is very low, although multiple re-infections eventually lead to blindness. The symptoms are internally scarred eyelids, followed by eyelids turning inward. Trachoma is caused by a micro-organism that spreads through eye discharges (on hands, cloth, etc.) and by "eye-seeking flies". It is treated with antibiotics. The only known prevention method is interpersonal hygiene. Chromoblastomycosis and other deep mycoses Other important endemic mycoses with common systemic involvement are histoplasmosis, paracoccidioidomycosis, coccidioidomycosis, blastomycosis and talaromycosis. 
These infections are also seldom seen in returning travelers in western countries. Scabies Snakebite envenoming Snakebite was added to the list in 2017, after years of criticism of the WHO by activists for not making it a priority. The greatest burden of snakebite morbidity is in India and Southeast Asia. Globally, there are an estimated 421,000 envenomings each year (about 1 in 4 snakebites) and 20,000 deaths, but snakebites often go unreported. A policy analysis, however, found that the placement of snakebite on the WHO's global health agenda is fragile, due to reluctant acceptance of the disease in the neglected tropical disease community and the perceived colonial nature of the network driving the agenda. Effects for patients Social effects Social stigma Several NTDs, such as leprosy, cause severe deformities that result in social stigma. Stigma is considered to be the "hidden burden" of NTDs and is not accounted for in measures such as disability-adjusted life years (DALYs). Other NTDs that carry heavy social stigma include onchocerciasis, lymphatic filariasis, plague, Buruli ulcer, leishmaniasis, and Chagas disease. Lymphatic filariasis, for example, causes severe deformities that can result in denial of marriage and inability to work. Studies in Ghana and Sri Lanka have demonstrated that support groups for patients with lymphatic filariasis can increase participants' self-esteem, quality of life, and social relations through social support and providing practical advice on how to manage their illness. The social effects of neglected tropical diseases have been shown to affect men and women in different ways. Men are socially stigmatized in a way that detrimentally affects their economic prospects. Women are more likely to be affected in the areas of marriage and family. Mental health A 2012 review found that infection with a neglected tropical disease predisposes individuals to poor mental health. 
This is partially due to the social stigma that surrounds NTDs, but is also likely caused by the subsequent lack of access to health and social services. Overall, being a member of the infected community was found to cut individuals off from multiple aspects of society, including civic rights, educational opportunities, and employment. A high prevalence of post-traumatic stress disorder (PTSD) and depression was found among people who had survived snakebites. More research needs to be directed to understanding psychological aspects of NTDs to understand their effects more fully and to direct strategies to manage them better in healthcare systems where mental health professionals are scarce. Gender NTDs disproportionately affect women and children. There is also an added risk of hookworm infection during pregnancy, and diseases such as Chagas disease can be transmitted from mother to child during pregnancy. A study in Uganda found that women were able to obtain treatment more easily than men because they had fewer occupational responsibilities and were more trusting of treatments, but ignorance of the effects of medicines during pregnancy prevented adequate care. The paper concludes that gender should be considered when designing treatment programs in Uganda. Additionally, women often bear a heavier social stigma in relation to the pressure to marry. Economic effects The cost of treatment of some of these diseases, such as Buruli ulcer, can be almost the average household income for families in the highest quarter of incomes, while for those in the lowest quarter it can be over twice the yearly income. These enormous financial costs often cause deferral of treatment and financial ruin. These diseases also cost the government in terms of healthcare provision and lost worker productivity through morbidity and shortened life spans. In Kenya, for example, deworming is estimated to increase average adult income by 40 percent, which is a benefit-to-cost ratio of 100. 
Each untreated case of trachoma is estimated to cost US$118 in lost productivity. Each case of schistosomiasis causes a loss of 45.4 days of work per year. Most of the diseases cost the economies of developing countries millions of dollars. Large-scale prevention campaigns are predicted to increase agricultural output and education levels. The low cost of treatment for NTDs can be attributed to the large scale of the programs, free provision of drugs by pharmaceutical companies, delivery modes of drugs, and unpaid volunteers who distribute the drugs. The economic burden of NTDs is undervalued and therefore the corresponding economic effect and cost-effectiveness of decreasing prevalence of NTDs is underestimated. The investment return on measures to control NTDs is estimated to be between 14 and 30 percent, depending on the disease and region. Health effects Coinfection Coinfection is a major concern with NTDs, making them more damaging than their mortality rates might suggest. Because factors such as poverty, inadequate healthcare and inadequate sanitation practices contribute to all NTDs, they are often found in overlapping distributions. Helminth infections, as the most common infection of humans, are often found to be in multi-infection systems. For example, in Brazil, low socioeconomic status contributes to overcrowded housing. In these same areas, coinfection by Necator americanus and Schistosoma mansoni is common. The effect of each worm weakens the immune system, making infection from the other more likely and more severe. For this reason, coinfection carries a higher risk of mortality. NTDs may also play a role in infection with other diseases, such as malaria, HIV/AIDS, and tuberculosis. The ability of helminths to manipulate the immune system may create a physiological environment that could exacerbate the progression of HIV/AIDS. Some evidence from Senegal, Malawi, and Thailand has shown that helminth infections raise the risk of malarial infection. 
Prevention, treatment and eradication Prevention and eradication are important because "of the appalling stigma, disfigurement, blindness and disabilities caused by NTDs." The principal aim of the London Declaration on Neglected Tropical Diseases was the elimination or eradication of dracunculiasis, leprosy, lymphatic filariasis, onchocerciasis, trachoma, sleeping sickness, visceral leishmaniasis, and canine rabies within ten years of its launch in January 2012. The declaration is a collaborative effort involving the WHO, the World Bank, the Bill & Melinda Gates Foundation, the world's 13 leading pharmaceutical companies, and government representatives from the US, UK, United Arab Emirates, Bangladesh, Brazil, Mozambique, and Tanzania. While there has been a noticeable uptick in biological research into NTDs, prevention may be supplemented by social and development outreach. Spiegel and coauthors advocated for "social offset", which reallocates some funding for biotechnological research to social programs. This attempts to alleviate some of the factors (such as poverty, poor sanitation, overcrowding and poor healthcare) that greatly exacerbate conditions brought on by NTDs. Projects such as these also strengthen the goal of sustained eliminations rather than quickly addressing symptoms. Policy initiatives There are many prevention and eradication campaigns funded by organizations such as the World Health Organization, US Agency for International Development, Bill & Melinda Gates Foundation, and UK Department for International Development. Sustainable Development Goal 3 has the target: "By 2030, [to] end the epidemics of AIDS, tuberculosis, malaria and neglected tropical diseases and combat hepatitis, water-borne diseases and other communicable diseases." WHO Roadmap of 2012 In 2012, WHO published an NTD "roadmap", which contained milestones for 2015 and 2020, and specified targets for eradication, elimination and intensified control of the different NTDs. 
For example: NTDs planned to be eradicated: dracunculiasis by the year 2015, endemic treponematoses (yaws) by 2020 NTDs planned to be eliminated globally by 2020: blinding trachoma, leprosy, human African trypanosomiasis, and lymphatic filariasis NTDs planned to be eliminated in certain regions: rabies (by 2015 in Latin America, by 2020 in Southeast Asia and the western Pacific), Chagas disease (transmission through blood transfusion by 2015, intra-domiciliary transmission by 2020 in the Americas), visceral leishmaniasis (by 2020 in the Indian subcontinent), onchocerciasis (by 2015 in Latin America), and schistosomiasis (by 2015 in the eastern Mediterranean region, the Caribbean, Indonesia, and the Mekong River basin, and by 2020 in the Americas and western Pacific) NTDs planned to be eliminated in certain countries: human African trypanosomiasis (by 2015 in 80 percent of areas in which it occurs), onchocerciasis (by 2015 in Yemen, by 2020 in selected countries in Africa), and schistosomiasis (by 2020 in selected countries in Africa) Intensified control with specific targets for 2015 and 2020 are provided for these NTDs: dengue, Buruli ulcer, cutaneous leishmaniasis, taeniasis/cysticercosis and echinococcosis/hydatidosis, foodborne trematode infections, and soil-transmitted helminthiases. In 2021, WHO updated their NTD roadmap "Together towards 2030", outlining their approach for 2021–2030. Others The U.S. Food and Drug Administration priority review voucher is an incentive for companies to invest in new drugs and vaccines for tropical diseases. A provision of the Food and Drug Administration Amendments Act of 2007 awards a transferable "priority review voucher" to any company that obtains approval for a treatment for one of the listed diseases. The voucher can later be used to accelerate the review of an unrelated drug. This program is for all tropical diseases and includes medicines for malaria and tuberculosis. 
The first voucher given was for Coartem, a malaria treatment. The prize was proposed by Duke University faculty Henry Grabowski, Jeffrey Moe, and David Ridley in their 2006 Health Affairs paper "Developing Drugs for Developing Countries". In 2007, United States Senators Sam Brownback (R-KS) and Sherrod Brown (D-OH) sponsored an amendment to the Food and Drug Administration Amendments Act of 2007. President George W. Bush signed the bill in September 2007. Deworming treatment Deworming treatments in infected children may have some nutritional benefit, as worms are often partially responsible for malnutrition. However, in areas where these infections are common, there is strong evidence that mass deworming campaigns do not have a positive effect on children's average nutritional status, levels of blood haemoglobin, cognitive abilities, performance at school, or survival. To achieve health gains in the longer term, improvements in sanitation and hygiene behaviours are also required, together with deworming treatments. The effect of mass deworming on school attendance is disputed. It has been argued that mass deworming has a positive effect on school attendance. The long-term benefits of deworming include a decrease in school absenteeism by 25 percent and an increase in adult earnings by 20 percent. A systematic review, however, found little or no difference in attendance between children who received mass deworming and children who did not. One study found that boys who received treatment were enrolled in primary school for more years than boys in schools that did not offer such programs. Girls in the same study were about a quarter more likely to attend secondary school if they received treatment. Both groups went on to participate in more skilled sectors of the labor market. The economic growth generated from school programs such as this may balance out the actual expenses of the program. 
However, the results of this study are disputed (due to a high risk of bias in the study), and the positive long-term outcomes of mass deworming remain unclear. Integration of treatment Inclusion of NTDs into initiatives for malaria, HIV/AIDS, and tuberculosis, as well as integration of NTD treatment programs, may have advantages given the strong link between these diseases and NTDs. Some neglected tropical diseases share common vectors (sandflies, black flies, and mosquitoes). Both medicinal and vector control efforts may be combined. A four-drug rapid-impact package has been proposed that targets multiple diseases together. This package is estimated to cost US$0.40 per patient, with estimated savings of 26–47% compared to treating the diseases separately. While more research must be done to understand how NTDs and other diseases interact in both the vector and the human stages, safety assessments have so far produced positive results. Many neglected tropical diseases and other prevalent diseases share common vectors, creating another opportunity for treatment and control integration. One such example of this is malaria and lymphatic filariasis, which are both transmitted by the same or related mosquito vectors. Vector control, through the distribution of insecticide-treated nets, reduces human contact with a wide variety of disease vectors. Integrated vector control may also alleviate pressure on mass drug administration, especially with respect to rapidly evolving drug resistance. Combining vector control and mass drug administration deemphasizes both, making each less susceptible to resistance evolution. Integration with water, sanitation and hygiene (WASH) programs Water, sanitation, and hygiene (WASH) interventions are essential in preventing many NTDs, such as soil-transmitted helminthiasis. Mass drug administration alone will not protect people from re-infection. 
A more holistic and integrated approach to NTDs and WASH efforts will benefit both sectors along with the communities they are aiming to serve. This is especially true in areas where more than one NTD is endemic. In August 2015, the World Health Organization unveiled a global strategy and action plan to integrate WASH with other public health interventions to accelerate the elimination of NTDs. The plan aimed to intensify control or eliminate certain NTDs in specific regions by 2020, and referred to the NTD "roadmap" milestones from 2012 that included eradication of dracunculiasis by 2015 and of yaws by 2020, elimination of trachoma and lymphatic filariasis as public health problems by 2020, and intensified control of dengue, schistosomiasis, and soil-transmitted helminthiases. Closer collaboration between WASH and NTD programmes can lead to synergies. They can be achieved through collaborative planning, delivery and evaluation of programmes, strengthening and sharing of evidence, and using monitoring tools to improve the equity of health services. Reasons why WASH plays an important role in NTD prevention and patient care include: NTDs affect more than one billion people in 149 countries. They occur mainly in regions with a lack of basic sanitation. About 2.4 billion people worldwide do not have adequate sanitation facilities. 663 million do not have access to improved drinking water sources. A leading cause of preventable blindness is trachoma. The bacterial infection is transmitted through contact with eye-seeking flies, fingers, and fomites. Prevention components are facial cleanliness, which requires water for face washing, and environmental improvement, which includes safe disposal of excreta to reduce fly populations. Improved sanitation prevents soil-transmitted helminthiases. 
It impedes fecal pathogens such as intestinal worm eggs from contaminating the environment and infecting people through contaminated food, water, dirty hands, and direct skin contact with the soil. Improved sanitation and water management can contribute to reduced proliferation of mosquitoes that transmit diseases, such as lymphatic filariasis, dengue, and chikungunya. Breeding of the Culex mosquito, which transmits filarial parasites, is facilitated through poorly constructed latrines. Breeding of the Aedes aegypti and Aedes albopictus mosquitoes, which transmit dengue and chikungunya, can be prevented through safe storage of water. Feces and urine that contain worm eggs can contaminate surface water and lead to transmission of schistosomiasis. This can be prevented through improved sanitation. Not only human but also animal (cow, buffalo) urine or feces can transmit some schistosome species. Therefore, it is important to protect freshwater from animals and animal waste. Treatment of many NTDs requires clean water and hygienic conditions for healthcare facilities and households. For Guinea-worm disease, Buruli ulcer, and cutaneous leishmaniasis, wound management is needed to speed up healing and reduce disability. Lymphatic filariasis causes chronic disabilities. People who have this disease need to maintain rigorous personal hygiene with water and soap to prevent secondary infections. NTDs that lead to permanent disabilities make tasks such as carrying water long distances or accessing toilets difficult. However, people affected by these diseases often face stigma and can be excluded from accessing water and sanitation facilities. This increases their risk of poverty and severe illness. Clean water and soap are essential for these groups to maintain personal hygiene and dignity. Therefore, additional efforts to reduce stigma and exclusion are needed. In this manner, WASH can improve the quality of life of people affected by NTDs. 
In a meta-analysis, safe water was associated with significantly reduced odds of Schistosoma infection, and adequate sanitation was associated with significantly lower odds of infection with both S. mansoni and S. haematobium. A systematic review and meta-analysis showed that better hygiene in children is associated with lower odds of trachoma. Access to sanitation was associated with 15 percent lower odds of active trachoma and 33 percent lower odds of C. trachomatis infection of the eyes. Another systematic review and meta-analysis found a correlation between WASH access and practices, and lower odds of soil-transmitted helminthiasis infections by 33 to 77 percent. Persons who washed their hands after defecating were less than half as likely to be infected as those who did not. Traditionally, preventive chemotherapy is used as a measure of control, although this measure does not stop the transmission cycle and cannot prevent reinfection. In contrast, improved sanitation can. Pharmaceutical market Biotechnology companies in the developing world have targeted neglected tropical diseases due to a need to improve global health. Mass drug administration is considered a possible method for eradication, especially for lymphatic filariasis, onchocerciasis, and trachoma, although drug resistance is a potential problem. According to Fenwick, Pfizer donated 70 million doses of drugs in 2011 to eliminate trachoma through the International Trachoma Initiative. Merck has helped The African Programme for the Control of Onchocerciasis (APOC) and Oncho Elimination Programme for the Americas to greatly diminish the effect of onchocerciasis by donating ivermectin. Merck KGaA pledged to give 200 million tablets of praziquantel, the only cure for schistosomiasis, over 10 years. GlaxoSmithKline has donated two billion tablets of medicine for lymphatic filariasis and pledged 400 million deworming tablets per year for five years in 2010. 
Johnson & Johnson has pledged 200 million deworming tablets per year. Novartis has pledged leprosy treatment, and Eisai pledged two billion tablets to help treat lymphatic filariasis. NGO initiatives Non-governmental organizations that focus exclusively on NTDs include the Schistosomiasis Control Initiative, Deworm the World, and the END Fund. Despite under-funding, treatment and prevention of many neglected diseases is cost-effective. The cost of treating a child for infection of soil-transmitted helminths and schistosomes (some of the main causes of neglected diseases) is less than US$0.50 per year when administered as part of school-based mass deworming by Deworm the World. This programme is recommended by Giving What We Can and the Copenhagen Consensus Centre as one of the most efficient and cost-effective solutions. The efforts of the Schistosomiasis Control Initiative to combat neglected diseases include the use of rapid-impact packages: supplying schools with packages including four or five drugs, and training teachers in how to administer them. Health Action International, based in Amsterdam, worked with the WHO to get snakebite envenoming on the list of neglected tropical diseases. Public-private initiatives An alternative to the profit-driven drug development model emerged in 2000 to address the needs of these neglected patients. Product development partnerships (PDPs) aim at implementing and accelerating the research and development (R&D) of safe and effective health tools (diagnostics, vaccines, drugs) to combat neglected diseases. The Drugs for Neglected Diseases initiative (DNDi) is one of these PDPs that has already developed new treatments for NTDs. The Sabin Vaccine Institute, founded in 1993, works to address the issues of vaccine-preventable diseases as well as NTDs. They run three main programs: Sabin Vaccine Development, Global Network for Neglected Tropical Diseases, and Vaccine Advocacy and Education. 
Their product development partnership affiliates them with the Texas Children's Hospital as well as the Baylor College of Medicine. Their major campaign, End7, aims to end seven of the most common NTDs (elephantiasis, river blindness, snail fever, trachoma, roundworm, whipworm, and hookworm) by 2020. Through End7, college campuses undertake fundraising and educational initiatives for the broader goals of the campaign. WIPO Re:Search was established in 2011 by the World Intellectual Property Organization in collaboration with BIO Ventures for Global Health (BVGH) and with the active participation of leading pharmaceutical companies and other private and public sector research organizations. It allows organizations to share their intellectual property, compounds, expertise, facilities, and know-how royalty-free with qualified researchers worldwide working on new solutions for NTDs, malaria, and tuberculosis. In 2013, the Government of Japan, five Japanese pharmaceutical companies, the Bill and Melinda Gates Foundation, and the UNDP established a new public–private partnership, the Global Health Innovative Technology Fund. They pledged over US$100 million to the fund over five years, to be awarded as grants to R&D partnerships across sectors in Japan and elsewhere, working to develop new drugs and vaccines for 17 neglected diseases, in addition to HIV, malaria, and tuberculosis. Affordability of the resulting drugs and vaccines is one of the key criteria for grant awards. London Declaration on Neglected Tropical Diseases The London Declaration on Neglected Tropical Diseases, initiated by the Bill and Melinda Gates Foundation, was launched on 30 January 2012 in London. Inspired by the WHO roadmap to eradicate or prevent transmission of neglected tropical diseases, it aimed to eradicate or reduce NTDs by the year 2020. 
It was endorsed by governments and organisations around the world, as well as major pharmaceutical companies including Abbott, AstraZeneca, Bayer HealthCare Pharmaceuticals, Becton Dickinson, Bristol-Myers Squibb, Eisai, Gilead Sciences, GlaxoSmithKline, Johnson & Johnson, Merck KGaA, Merck Sharp & Dohme (MSD), Novartis, Pfizer, and Sanofi. It was not a complete success, but millions of lives were saved, the burden of the infections was reduced, and 42 countries eliminated at least one disease. To commemorate the programme, WHO adopted 30 January as World NTD Day. Kigali Declaration on Neglected Tropical Diseases The Kigali Declaration on Neglected Tropical Diseases was launched at the Kigali Summit on Malaria and Neglected Tropical Diseases (NTDs) hosted by the Government of Rwanda at its capital city Kigali on 23 June 2022. It was signed in support of the World Health Organization's 2021–30 road map for NTDs and the target of Sustainable Development Goal 3 to end NTD epidemics, and as a follow-up to the London Declaration. Supported by WHO, governments of the Commonwealth of Nations pledged their endorsement, along with commitments from GSK plc, Novartis, and Pfizer. Others An open-access journal dedicated to neglected tropical diseases called PLoS Neglected Tropical Diseases first began publication in 2007. One of the first large-scale initiatives to address NTDs came from a collaboration between Kenneth Warren and the Rockefeller Foundation. Ken Warren is regarded as a pioneer in neglected tropical disease research. The Great Neglected Tropical Diseases Network was a consortium of scientists from all over the world, hand-picked by Warren, working to expand the research base in neglected diseases. Many of the scientists that he recruited had not been involved in NTD research before. The network ran from 1978 to 1988. Warren's vision was to establish units within biological labs across the world, dedicated to R&D. 
By forming a critical mass of scientists in NTD research, he hoped to attract new students into the field. The interdisciplinary group met annually to update the community on research progress. Much of the work done by this group focused on understanding the mechanisms behind infection. At these informally structured meetings, research partnerships were formed. Warren himself encouraged these partnerships, especially if they bridged the divide between developed and developing nations. Through the Great Neglected Tropical Disease Network, a great number of scientists were brought into the field of parasitology. Epidemiology Neglected tropical diseases disproportionately affect about one billion of the world's poorest people, causing mortality, disability, and morbidity. Lack of funding, resources, and attention can result in treatable and preventable diseases causing death. Factors like political dynamics, poverty, and geographical conditions can make the delivery of NTD control programs difficult. Combining poverty-reduction policies with NTD control creates cross-sector approaches that address both issues simultaneously. The six most common NTDs are the soil-transmitted helminths (STHs; roundworm Ascaris lumbricoides, whipworm Trichuris trichiura, and the hookworms Necator americanus and Ancylostoma duodenale), schistosomiasis, trachoma, and lymphatic filariasis (LF). These diseases affect one-sixth of the world's population, with 90 percent of the disease burden occurring in sub-Saharan Africa. Information on the frequency of neglected tropical diseases is of low quality. It is currently difficult to summarize all of the information on this family of diseases. One effort to do so is the Global Burden of Disease framework. It aims to create a standardized method of measurement. 
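The framework's central metric, the disability-adjusted life year (DALY), sums years of life lost to premature mortality (YLL) and years lived with disability (YLD). A minimal sketch in Python of that standard formulation, omitting the age-weighting and time-discounting used in some Global Burden of Disease revisions; all input figures below are hypothetical:

```python
# Simplified DALY computation: DALY = YLL + YLD.
# No age-weighting or time discounting; example figures are hypothetical.

def years_of_life_lost(deaths: float, mean_years_lost_per_death: float) -> float:
    """YLL: the premature-mortality component."""
    return deaths * mean_years_lost_per_death

def years_lived_with_disability(cases: float, disability_weight: float,
                                mean_duration_years: float) -> float:
    """YLD: the morbidity component; disability_weight lies in [0, 1]."""
    return cases * disability_weight * mean_duration_years

def daly(deaths, mean_years_lost, cases, disability_weight, duration):
    return (years_of_life_lost(deaths, mean_years_lost)
            + years_lived_with_disability(cases, disability_weight, duration))

# Hypothetical NTD: 1,000 deaths losing 30 years each, plus 500,000
# chronic cases with disability weight 0.125 lasting 4 years on average.
print(daly(1_000, 30, 500_000, 0.125, 4))  # 30_000 + 250_000 = 280000.0
```

The sketch also illustrates the criticism discussed below: chronic, low-weight, long-duration conditions enter only through the YLD term, and context such as poverty does not appear in the formula at all.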
The principal components of the approach involve 1) measuring premature mortality as well as disability, 2) the standardized use of DALYs (disability-adjusted life years), and 3) widespread inclusion of diseases and injury causes with the estimation of missing data. However, the DALY has been criticized as a "systematic undervaluation" of disease burden. King asserts that the DALY emphasizes the individual too much while ignoring the effects of the ecology of the disease. In order for the measure to become more valid, it may have to take the context of poverty more into account. King also emphasizes that DALYs may not capture the non-linear effects of poverty on the cost-utility analysis of disease control. The Socio-Demographic Index (SDI) and Healthy Life Expectancy (HALE) are other summary measures that can be used to take into account other factors. HALE is a metric that weights years lived and health loss before death to provide a summary of population health. SDI is a measurement that includes lag-distributed income per capita, average education, and fertility rate. Socioeconomic factors greatly influence the distribution of neglected tropical diseases, and not addressing these factors in models and measurements can lead to ineffective public health policy. Research and development NTD interventions include programs to address environmental and social determinants of health (e.g., vector control, water quality, sanitation) as well as programs offering mass drug administration for disease prevention and treatment. Drug treatments exist to confront many of the NTDs and represent some of the world's essential medicines. Despite significant health and economic improvements using available medicines, the low number of new compounds being researched and developed for NTDs is an ongoing and significant challenge. 
The dearth of candidates in pharmaceutical company drug pipelines is primarily attributed to the high costs of drug development and the fact that NTDs are concentrated among the world's poor. Other disincentives to investment include weak existing infrastructure for distribution and sales as well as concerns regarding intellectual property protection. However, the major stakeholders in NTD drug development—governments, foundations, pharmaceutical companies, academia, and NGOs—are involved in activities to help address the research and development shortfall and meet the many challenges presented by neglected tropical diseases. Initiatives include public-private partnerships, global R&D capacity building, priority vouchers to speed drug approval processes, open-source scientific collaborations, and harmonization of global governance structures concerning NTDs. The diseases considered neglected tropical diseases vary. Some researchers no longer consider malaria, HIV, and tuberculosis to be neglected due to the amount of public attention and increased funding they have received. Outside "The Big Three", the six most prevalent neglected tropical diseases in order of their global prevalence are ascariasis, trichuriasis, hookworm infection, schistosomiasis, lymphatic filariasis, and trachoma. These six are among a larger list of thirteen major NTDs that also includes onchocerciasis, leishmaniasis, Chagas disease, leprosy, human African trypanosomiasis (sleeping sickness), dracunculiasis, and Buruli ulcer. Deficient market In their 2002 review of the U.S. Food and Drug Administration (FDA) databases and the European Agency for the Evaluation of Medicinal Products, Trouiller et al. found that 16 of 1,393 new chemical entities approved between 1975 and 1999 were for NTDs (~1%). Cohen et al. revisited the data and, using the same methodology, found 32 new chemical entities during the same time period.
In a second analysis using an expanded list of NTDs based on the G-FINDER survey, the number was slightly higher, with 46 new drugs and vaccines approved (~3% of the total, including HIV drugs). Between 2000 and 2009, there was some increase, with an additional 26 newly approved drugs and vaccines for NTDs. A number of factors are recognized as contributing to the low number. The barrier most reported is the high cost of drug development. Estimates are that pharmaceutical companies' development costs to approval fall between $500 million and $2 billion. DiMasi, Hansen, and Grabowski calculated an average of $802 million in year-2000 dollars. Furthermore, of the twenty years a drug spends on patent, it is approved for use for an average of only seven, which pushes the market to focus on diseases of developed nations, where high prices can be used to recoup research and development costs and subsidize failed R&D efforts. In short, NTD research and development is considered a high investment risk, given that NTDs predominantly affect the poor in low- and middle-income countries. Additional barriers include drug safety regulatory requirements, intellectual property protection problems, and poor infrastructure for distribution and sales. Although drug companies have not invested heavily in NTDs, some have decided to donate key drugs rather than focus on profits. For example, Merck has had a program since the mid-1980s to donate ivermectin (Mectizan) indefinitely to support the global fight against onchocerciasis. GlaxoSmithKline and several other large pharmaceutical companies have donation programs as well. Drug donation, however, does not remedy the deficiency of new chemical entities being researched and developed. This is especially concerning given reports of emerging resistance to existing drugs.
Policy initiatives Public–private partnerships Governments, foundations, the non-profit sector, and the private sector have found new connections to help address market deficiencies by providing funding support and spreading both the costs and risks of NTD research and development. The proliferation of public–private partnerships (PPPs) has been recognized as a key innovation of the past decade, helping to unlock existing and new resources. Major PPPs for NTDs include the Sabin Vaccine Institute, the Novartis Vaccines Institute for Global Health, MSD Wellcome Trust Hilleman Laboratories, the Infectious Disease Research Institute, Institut Pasteur and INSERM, WIPO Re:Search, and the International Vaccine Institute. Likewise, a number of new academic drug development centers have been created in recent years, drawing in industry partners. Support for these centers is frequently traced to the Bill & Melinda Gates Foundation, the Sandler Foundation, and the Wellcome Trust. R&D capacity building in middle-income countries Growing NTD research and development capacity in middle-income countries is an area of policy interest. A 2009 study of biotechnology companies in India, China, Brazil, and South Africa revealed 62 NTD products in development and on the market out of approximately 500 products offered (~14%). When products to fight HIV, malaria, and TB were included in the analysis, the number increased to 123 products, approximately 25% of the total products offered. Researchers have argued that, unlike most multinationals, small and mid-sized "Global South" companies see significant business opportunities in the development of NTD-related diagnostics, biologics, pharmaceuticals, and services. Potential actions to improve and expand this R&D capacity have been recommended, including expansion of human capital, increased private investment, knowledge and patent sharing, infrastructure building for business incubation, and innovation support.
Innovation prizes and grants Competitive innovation prizes have been used to spur development in a range of fields such as aerospace engineering, clean technology, and genomics. The X-Prize Foundation is launching a competition for high-speed, point-of-care diagnostics for tuberculosis. A more widely defined annual "Global Health EnterPrize" for neglected tropical diseases has been proposed to reward health innovators, particularly those based in countries where NTDs represent a serious health burden. The Bill & Melinda Gates Foundation offers the Grand Challenges Explorations Opportunities on a rolling basis. This grant program allows individuals from any organization or background to apply to address priority global health issues. Each project award is $100,000 and is drawn from a Foundation funding pool of $100 million. Awardees have tended to offer research projects on topics that are highly speculative but offer potentially game-changing breakthroughs in global health. FDA priority review vouchers (PRV) In 2006, Ridley et al. recommended the development of a priority review voucher (PRV) in the journal Health Affairs. It gained interest from Senator Sam Brownback of Kansas, who championed its introduction in the FDA Amendments Act of 2007. Under the enacted law, a company that obtains FDA approval for a drug addressing an NTD earns a voucher entitling one of its other, non-NTD drugs to an accelerated review process. The potential economic benefit to a pharmaceutical company is estimated to be as high as $300 million per drug. Three drugs have earned NTD PRVs to date (December 2014): Coartem (by Novartis, for malaria); bedaquiline (by Janssen, for TB); and miltefosine (by Knight, for leishmaniasis). However, the success of the PRV system is now under much scrutiny, given that Knight benefitted by $125 million from the sale of a PRV earned from a drug (miltefosine) that was largely researched and developed by the WHO.
Médecins Sans Frontières are now pressuring Knight to guarantee to supply miltefosine at cost price, thus far without success. The PRV is not limited to the pairing of drugs within a single company; it can be transferred between companies. Companies with NTD drug candidates in their pipelines but without a blockbuster drug are able to sell their vouchers, producing financial returns. In the EU, similar priority review incentives are now under consideration to increase the speed of regulatory pricing and reimbursement decisions. However, PRVs have been criticized as being open to manipulation and possibly encouraging errors through too-rapid regulatory decision-making. Open source collaboration initiatives Several companies and scientific organizations are participating in open-source initiatives to share drug data and patent information over the web and to facilitate virtual collaboration on NTD research. One rich area to explore is the wealth of genomic data resulting from the sequencing of parasite genomes. These data offer opportunities for the exploration of new therapeutic products using computational and open-source collaboration methods for drug discovery. The Tropical Disease Initiative, for example, has used large amounts of computing power to generate the protein structures for ten parasite genomes. Compounds from an open-source drug bank were then algorithmically matched against these structures to find those with protein-interaction activity, and two candidates were identified. In general, such methods may hold important opportunities for off-label use of existing approved drugs. History In 1977, Kenneth S. Warren, an American researcher, invented the concept of what is now "neglected tropical diseases". In 2005, Lorenzo Savioli, a senior United Nations civil servant, was appointed director of the "Department of Control of Neglected Tropical Diseases".
The World Health Organization definition of neglected tropical disease has been criticised as restrictive and described as a form of epistemic injustice.
Biology and health sciences
Concepts
Health
10347359
https://en.wikipedia.org/wiki/NGC%204151
NGC 4151
NGC 4151 is an intermediate spiral Seyfert galaxy with a weak inner ring structure, located in the constellation Canes Venatici. The galaxy was first mentioned by William Herschel on March 17, 1787; it was one of the six Seyfert galaxies described in the paper which defined the term. It is one of the nearest galaxies to Earth to contain an actively growing supermassive black hole. The black hole's mass is estimated to be on the order of 2.5 million to 30 million solar masses. It has been speculated that the nucleus may host a binary black hole, with masses of about 40 million and about 10 million solar masses respectively, orbiting with a 15.8-year period; this, however, is still a matter of active debate. Some astronomers nickname it the "Eye of Sauron" for its appearance. One supernova has been observed in NGC 4151: SN 2018aoq (Type II-P, mag 15.3). X-ray source X-ray emission from NGC 4151 was apparently first detected on December 24, 1970, with the X-ray observatory satellite Uhuru, although the observation spanned an error box of 0.56 square degrees, and there is some controversy as to whether Uhuru might instead have detected the BL Lac object 1E 1207.9+3945, which lies inside that error box. The later HEAO 1 detected an X-ray source, 1H 1210+393, coincident with the optical position of the nucleus of NGC 4151 and outside the Uhuru error box. Two different possibilities have been proposed to explain the X-ray emission: (1) radiation from material falling onto the central black hole (which was growing much more quickly about 25,000 years ago) was so bright that it stripped electrons from the atoms in the gas in its path; when electrons recombined with these ionized atoms, X-rays were emitted; (2) the energy released by material flowing into the black hole through an accretion disk created a vigorous outflow of gas from the surface of the disk, which directly heated gas in its path to X-ray-emitting temperatures.
Physical sciences
Notable galaxies
Astronomy
10356246
https://en.wikipedia.org/wiki/Standard%20atomic%20weight
Standard atomic weight
The standard atomic weight of a chemical element (symbol Ar°(E) for element "E") is the weighted arithmetic mean of the relative isotopic masses of all isotopes of that element, weighted by each isotope's abundance on Earth. For example, isotope 63Cu (Ar = 62.929) constitutes 69% of the copper on Earth, the rest being 65Cu (Ar = 64.927), so Ar°(Cu) = 0.69 × 62.929 + 0.31 × 64.927 ≈ 63.55. Because relative isotopic masses are dimensionless quantities, this weighted mean is also dimensionless. It can be converted into a measure of mass (with the dimension of mass) by multiplying it by the dalton, also known as the atomic mass constant. Among the various variants of the notion of atomic weight (Ar, also known as relative atomic mass) used by scientists, the standard atomic weight is the most common and practical. The standard atomic weight of each chemical element is determined and published by the Commission on Isotopic Abundances and Atomic Weights (CIAAW) of the International Union of Pure and Applied Chemistry (IUPAC) based on natural, stable, terrestrial sources of the element. The definition specifies the use of samples from many representative sources from the Earth, so that the value can widely be used as the atomic weight for substances as they are encountered in reality—for example, in pharmaceuticals and scientific research. Non-standardized atomic weights of an element are specific to sources and samples, such as the atomic weight of carbon in a particular bone from a particular archaeological site. Standard atomic weight averages such values to the range of atomic weights that a chemist might expect to derive from many random samples from Earth. This range is the rationale for the interval notation given for some standard atomic weight values. Of the 118 known chemical elements, 80 have stable isotopes and 84 have this Earth-environment based value. Typically, such a value is given with an uncertainty, for example helium: Ar°(He) = 4.002602(2). The "(2)" indicates the uncertainty in the last digit shown, to be read as 4.002602 ± 0.000002.
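The copper example works out as a plain abundance-weighted mean. A short sketch, using only the masses and abundances quoted in the text:

```python
# Standard atomic weight as an abundance-weighted mean of relative
# isotopic masses, using the copper figures quoted in the text:
# 69% 63Cu (Ar = 62.929), 31% 65Cu (Ar = 64.927).
copper = [(62.929, 0.69), (64.927, 0.31)]
ar_cu = sum(mass * abundance for mass, abundance in copper)
print(round(ar_cu, 3))  # 63.548, a dimensionless value
```

The result is dimensionless, as the text notes; multiplying by the dalton converts it to a mass.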
IUPAC also publishes abridged values, rounded to five significant figures; for helium, this is 4.0026. For fourteen elements the samples diverge on this value, because their sample sources have had a different decay history. For example, thallium (Tl) in sedimentary rocks has a different isotopic composition than in igneous rocks and volcanic gases. For these elements, the standard atomic weight is noted as an interval. With such an interval, for less demanding situations, IUPAC also publishes a conventional value; for thallium, this is 204.38. Definition The standard atomic weight is a special value of the relative atomic mass. It is defined as the "recommended values" of relative atomic masses of sources in the local environment of the Earth's crust and atmosphere as determined by the Commission on Isotopic Abundances and Atomic Weights (CIAAW). In general, values from different sources are subject to natural variation due to a different radioactive history of sources. Thus, standard atomic weights are an expectation range of atomic weights from a range of samples or sources. By limiting the sources to terrestrial origin only, the CIAAW-determined values have less variance, and are a more precise value for relative atomic masses (atomic weights) actually found and used in worldly materials. The CIAAW-published values are used and sometimes lawfully required in mass calculations. The values have an uncertainty (noted in brackets), or are an expectation interval (see example in illustration immediately above). This uncertainty reflects natural variability in isotopic distribution for an element, rather than uncertainty in measurement (which is much smaller with quality instruments). Although there is an attempt to cover the range of variability on Earth with standard atomic weight figures, there are known cases of mineral samples which contain elements with atomic weights that are outliers from the standard atomic weight range.
For synthetic elements the isotope formed depends on the means of synthesis, so the concept of natural isotope abundance has no meaning. Therefore, for synthetic elements the total nucleon count of the most stable isotope (i.e., the isotope with the longest half-life) is listed in brackets, in place of the standard atomic weight. When the term "atomic weight" is used in chemistry, usually it is the more specific standard atomic weight that is implied. It is standard atomic weights that are used in periodic tables and many standard references in ordinary terrestrial chemistry. Lithium represents a unique case where the natural abundances of the isotopes have in some cases been found to have been perturbed by human isotopic separation activities to the point of affecting the uncertainty in its standard atomic weight, even in samples obtained from natural sources, such as rivers. Terrestrial definition An example of why "conventional terrestrial sources" must be specified in giving standard atomic weight values is the element argon. Between locations in the Solar System, the atomic weight of argon varies by as much as 10%, due to extreme variance in isotopic composition. Where the major source of argon is the decay of 40K in rocks, 40Ar will be the dominant isotope. Such locations include the planets Mercury and Mars, and Saturn's moon Titan. On Earth, the ratios of the three isotopes 36Ar : 38Ar : 40Ar are approximately 5 : 1 : 1600, giving terrestrial argon a standard atomic weight of 39.948(1). However, such is not the case in the rest of the universe. Argon produced directly by stellar nucleosynthesis is dominated by the alpha-process nuclide 36Ar. Correspondingly, solar argon contains 84.6% 36Ar (according to solar wind measurements), and the ratio of the three isotopes 36Ar : 38Ar : 40Ar in the atmospheres of the outer planets is 8400 : 1600 : 1. The atomic weight of argon in the Sun and most of the universe, therefore, would be only approximately 36.3.
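The argon comparison above can be checked directly from the quoted isotope ratios. The isotopic masses used below (35.9675, 37.9627 and 39.9624 for 36Ar, 38Ar and 40Ar) are standard values assumed for this sketch, not given in the text:

```python
# Atomic weight of argon from the 36Ar : 38Ar : 40Ar ratios in the text.
MASSES = (35.9675, 37.9627, 39.9624)  # 36Ar, 38Ar, 40Ar (assumed values)

def argon_weight(r36: float, r38: float, r40: float) -> float:
    """Abundance-weighted mean mass from a raw isotope ratio."""
    total = r36 + r38 + r40
    return (r36 * MASSES[0] + r38 * MASSES[1] + r40 * MASSES[2]) / total

print(round(argon_weight(5, 1, 1600), 3))     # terrestrial: ~39.949
print(round(argon_weight(8400, 1600, 1), 1))  # outer planets: ~36.3
```

The terrestrial ratio reproduces the quoted 39.948(1) to within the precision of the rough 5 : 1 : 1600 ratio, and the outer-planet ratio reproduces the quoted ~36.3.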
Causes of uncertainty on Earth The published atomic weight value comes with an uncertainty. This uncertainty (and the related precision) follows from its definition, the source being "terrestrial and stable". Systematic causes for uncertainty are: Measurement limits. A physical measurement is never exact; there is always more detail to be found. This applies to every single, pure isotope found. For example, today the mass of the main natural fluorine isotope (fluorine-19) can be measured to an accuracy of eleven decimal places, but a still more precise measurement system could become available, producing more decimals. Imperfect mixtures of isotopes. In the samples taken and measured, the mix (relative abundance) of the isotopes may vary. For example, copper: while in general its two isotopes make up 69.15% and 30.85% of all copper found, a particular natural sample may have had incomplete 'stirring', so that the percentages differ. The precision is improved by measuring more samples, of course, but this cause of uncertainty remains. (Example: lead samples vary so much that the value cannot be stated more precisely than four significant figures.) Earthly sources with a different history. A source is the greater area being researched, for example 'ocean water' or 'volcanic rock' (as opposed to a 'sample': the single heap of material being investigated). Some elements have a different isotopic mix per source. For example, thallium in igneous rock has more of the lighter isotopes, while in sedimentary rock it has more of the heavy isotopes. There is no Earthly mean number, so these elements are given in interval notation. For practical reasons, a simplified 'conventional' number is published too (for Tl: 204.38). These three uncertainties are cumulative. The published value is a result of them all.
Determination of relative atomic mass Modern relative atomic masses (a term specific to a given element sample) are calculated from measured values of atomic mass (for each nuclide) and isotopic composition of a sample. Highly accurate atomic masses are available for virtually all non-radioactive nuclides, but isotopic compositions are both harder to measure to high precision and more subject to variation between samples. For this reason, the relative atomic masses of the 22 mononuclidic elements (which are the same as the isotopic masses for each of the single naturally occurring nuclides of these elements) are known to especially high accuracy. The calculation is exemplified for silicon, whose relative atomic mass is especially important in metrology. Silicon exists in nature as a mixture of three isotopes: 28Si, 29Si and 30Si. The atomic masses of these nuclides are known to a precision of one part in 14 billion for 28Si and about one part in one billion for the others. However, the range of natural abundance for the isotopes is such that the standard abundance can only be given to about ±0.001% (see table). The calculation is Ar(Si) = (27.97693 × 0.922297) + (28.97649 × 0.046832) + (29.97377 × 0.030872) = 28.0854. The estimation of the uncertainty is complicated, especially as the sample distribution is not necessarily symmetrical: the IUPAC standard relative atomic masses are quoted with estimated symmetrical uncertainties, and the value for silicon is 28.0855(3). The relative standard uncertainty in this value is 1 × 10−5, or 10 ppm. To further reflect this natural variability, in 2010 IUPAC made the decision to list the relative atomic masses of 10 elements as an interval rather than a fixed number. Naming controversy The use of the name "atomic weight" has attracted a great deal of controversy among scientists. Objectors to the name usually prefer the term "relative atomic mass" (not to be confused with atomic mass).
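The silicon calculation quoted above can be reproduced directly, using only the masses and abundances given in the text:

```python
# Reproducing the silicon example from the text:
# Ar(Si) = sum over 28Si, 29Si, 30Si of (isotopic mass x abundance).
silicon = [
    (27.97693, 0.922297),  # 28Si
    (28.97649, 0.046832),  # 29Si
    (29.97377, 0.030872),  # 30Si
]
ar_si = sum(mass * abundance for mass, abundance in silicon)
print(round(ar_si, 4))  # 28.0854, matching the quoted result
```

Note that the dominant uncertainty comes from the abundances (±0.001%), not from the isotopic masses, which is why the IUPAC value 28.0855(3) carries an uncertainty far larger than the mass measurements alone would suggest.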
The basic objection is that atomic weight is not a weight, that is, the force exerted on an object in a gravitational field, measured in units of force such as the newton or poundal. In reply, supporters of the term "atomic weight" point out (among other arguments) that: the name has been in continuous use for the same quantity since it was first conceptualized in 1808; for most of that time, atomic weights really were measured by weighing (that is, by gravimetric analysis), and the name of a physical quantity should not change simply because the method of its determination has changed; the term "relative atomic mass" should be reserved for the mass of a specific nuclide (or isotope), while "atomic weight" be used for the weighted mean of the atomic masses over all the atoms in the sample; and it is not uncommon to have misleading names of physical quantities which are retained for historical reasons, such as electromotive force, which is not a force; resolving power, which is not a power quantity; and molar concentration, which is not a molar quantity (a quantity expressed per unit amount of substance). It could be added that atomic weight is often not truly "atomic" either, as it does not correspond to the property of any individual atom. The same argument could be made against "relative atomic mass" used in this sense. Published values IUPAC publishes one formal value for each stable chemical element, called the standard atomic weight. Any updates are published biennially (in odd-numbered years). In 2015, the atomic weight of ytterbium was updated. As of 2017, 14 atomic weights had been changed, including argon changing from a single number to an interval value. The value published can have an uncertainty, as for neon: 20.1797(6), or can be an interval, as for boron: [10.806, 10.821]. Next to these 84 values, IUPAC also publishes abridged values (up to five digits per number only), and for the twelve interval values, conventional values (single-number values).
Symbol Ar is a relative atomic mass, for example from a specific sample. To be specific, the standard atomic weight can be noted as Ar°(E), where (E) is the element symbol. Abridged atomic weight The abridged atomic weight, also published by CIAAW, is derived from the standard atomic weight by reducing the numbers to five digits (five significant figures). The name deliberately does not say 'rounded': interval borders are rounded downwards for the lower border and upwards for the upper border, so that the more precise original interval is fully covered. Examples: calcium: 40.078(4) → 40.078; helium: 4.002602(2) → 4.0026; hydrogen: [1.00784, 1.00811] → [1.0078, 1.0082]. Conventional atomic weight Fourteen chemical elements – hydrogen, lithium, boron, carbon, nitrogen, oxygen, magnesium, silicon, sulfur, chlorine, argon, bromine, thallium, and lead – have a standard atomic weight that is defined not as a single number, but as an interval. For example, hydrogen has Ar°(H) = [1.00784, 1.00811]. This notation states that the various sources on Earth have substantially different isotopic constitutions, and that the uncertainties in all of them are just covered by the two numbers. For these elements, there is no 'Earth average' constitution, and the 'right' value is not its middle (which would be 1.007975 for hydrogen, with an uncertainty of ±0.000135 that would make it just cover the interval). However, for situations where a less precise value is acceptable, for example in trade, CIAAW has published a single-number conventional atomic weight. For hydrogen, this is 1.008. A formal short atomic weight By using the abridged value, and the conventional value for the fourteen interval values, a short IUPAC-defined value (5 digits plus uncertainty) can be given for all stable elements. In many situations, and in periodic tables, this may be sufficiently detailed. List of atomic weights In the periodic table
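The outward-rounding rule for abridged intervals can be sketched as follows. The hydrogen interval [1.00784, 1.00811] used here follows from the midpoint 1.007975 ± 0.000135 stated above; the function names are illustrative:

```python
import math

# Abridging an interval value to five significant figures: the lower
# border is rounded down and the upper border up, so the abridged
# interval fully covers the original one.

def round_sig(x: float, n: int, down: bool) -> float:
    exp = math.floor(math.log10(abs(x)))  # decimal order of magnitude
    scale = 10 ** (n - 1 - exp)           # put n digits before the point
    f = math.floor if down else math.ceil
    return f(x * scale) / scale

def abridge(lower: float, upper: float, n: int = 5):
    return round_sig(lower, n, down=True), round_sig(upper, n, down=False)

print(abridge(1.00784, 1.00811))  # (1.0078, 1.0082), as for hydrogen
```

Rounding both borders toward the nearest value instead would shrink the interval and could exclude real samples, which is why the borders are rounded outwards.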
Physical sciences
Periodic table
Chemistry
19400075
https://en.wikipedia.org/wiki/Partial%20melting
Partial melting
Partial melting is the phenomenon that occurs when a rock is subjected to temperatures high enough to cause certain minerals to melt, but not all of them. Partial melting is an important part of the formation of all igneous rocks and some metamorphic rocks (e.g., migmatites), as evidenced by a multitude of geochemical, geophysical and petrological studies. The parameters that influence partial melting include the composition of the source rock, the pressure and temperature of the environment, and the availability of water or other fluids. The main mechanisms that govern partial melting are decompression melting and flux melting. Decompression melting occurs when rocks are brought from higher- to lower-pressure zones in the Earth's crust, lowering the melting point of their mineral components and thus generating a partial melt. Flux melting, on the other hand, occurs when water and other volatiles come into contact with hot rock, reducing the melting point of minerals and leading to partial melting. With a few exceptions (e.g., Yellowstone), conduction of heat is considered too slow and inefficient a mechanism to partially melt large bodies of rock. Partial melting is also linked to the formation of ores. Magmatic and hydrothermal ore deposits, such as chromite, Ni-Cu sulfides, rare-metal pegmatites, kimberlites, and volcanic-hosted massive sulfide deposits, are some examples of valuable natural resources closely related to the conditions of the origin, migration and emplacement of partial melts. Parameters Melting in the mantle depends on the following parameters: the composition of the rocks, pressure and temperature, and the presence of volatiles. Composition The chemical composition of rocks affects their melting points and the final product of partial melting. For example, the bulk chemistry of melts obtained experimentally from sedimentary rocks, such as shales and graywackes, reflects that of the source rocks.
Additionally, rocks containing minerals with lower melting points will undergo partial melting more easily under the same conditions of pressure and temperature if compared to minerals with higher melting points. Temperature and Pressure Temperature and pressure can have a significant impact on the amount of partial melting that occurs in rocks. When temperature is low, the pressure needs to be low as well for melting to occur, and when temperature is high, the pressure needs to be higher to prevent melting from taking place. Higher pressure can suppress melting, while higher temperature can promote it. The extent to which partial melting occurs depends on the balance between temperature and pressure, with both having a strong influence on the process. Addition of volatiles The presence of volatiles has the potential to significantly reduce solidus temperatures of a given system. This allows for melt to be generated at lower temperatures than otherwise predicted, eliminating the need for a change in pressure or temperature conditions of the system. Furthermore, some consider that volatiles control the stability of minerals and the chemical reactions that happen during partial melting, while others assign a more subordinate role to these components. Mechanisms The main mechanisms responsible for partial melting are decompression melting and flux melting. The first process happens when bodies of rock move from a higher to a lower pressure setting, causing melting of a part of its components, while the second is caused by the addition of fluids that lower the melting point of minerals, leading to their melting at lower temperatures. Although conduction of heat is a known mechanism capable of transferring heat from one body to another, it plays a subordinate role in causing partial melting. This is due to the ineffective heat flow in large rock bodies in the solid portion of the Earth and a lack of heat sources capable of inciting partial melting. 
Decompression melting Decompression melting is the main process responsible for the generation of basaltic melts in certain settings, such as rift zones in continents, back-arc basins, seafloor spreading zones and intraplate hotspots. Plate tectonics and mantle convection are responsible for the transportation of hot and less dense rock towards the surface. This causes a reduction in pressure without loss of heat, leading to partial melting. At seafloor spreading zones (mid-ocean ridges), hot peridotite ascending from the mantle undergoes partial melting due to a decrease in pressure, generating a basaltic melt and a solid phase. This melt, when extruded at the surface, is responsible for the creation of new oceanic crust. In continental rifts, where the lithosphere is colder and more rigid, decompression melting occurs when material from the hot and more plastic asthenosphere is transported to lower pressures. Flux melting Decompression melting does not explain how volcanoes form above subduction zones, since in this setting pressure increases as the oceanic plate subducts under a colder oceanic plate or a continental plate. The mechanism that explains melting in this setting is flux melting. In this case, when water, oceanic crustal material and metamorphosed mantle rocks are added into the system, minerals can melt at lower temperatures. There are arguments that the most efficient way of carrying material from the subducting slab to the volcanic arc on the surface is by melting the slab itself, while other views support that melting occurs between the lithosphere and the slab. Heat conduction Although decompression and flux melting are the main mechanisms causing partial melting, the generation of certain igneous systems, such as large felsic continental magma reservoirs (for example, Yellowstone), is not explained by them. In this case, heat conduction is the mechanism responsible.
When basaltic melt moves through the continental crust, it can accumulate and partially crystallize. If sufficient heat is released in this process, it can cause the melting of the surrounding rocks and the creation of felsic magma. The relevance of this phenomenon to the modification of the continental crust is a topic of discussion in the scientific community. Significance Partial melting is an important process in geology with respect to the chemical differentiation of crustal rocks. On the Earth, partial melting of the mantle at mid-ocean ridges produces oceanic crust, and partial melting of the mantle and oceanic crust at subduction zones creates continental crust. Furthermore, the process of partial melting is also associated with the development of a series of ore deposits, such as: light rare-earth element (LREE) deposits in carbonatites; chromite deposits; base-metal Ni-Cu sulfide deposits in mafic and ultramafic rocks; PGE sulfide deposits; rare-metal pegmatites; and diamond deposits in kimberlites and lamproites.
Physical sciences
Geochemistry
Earth science
18403271
https://en.wikipedia.org/wiki/Nursing
Nursing
Nursing is a health care profession that "integrates the art and science of caring and focuses on the protection, promotion, and optimization of health and human functioning; prevention of illness and injury; facilitation of healing; and alleviation of suffering through compassionate presence". Nurses practice in many specialties with varying levels of certification and responsibility. Nurses comprise the largest component of most healthcare environments. There are shortages of qualified nurses in many countries. Nurses develop a plan of care, working collaboratively with physicians, therapists, patients, patients' families, and other team members, that focuses on treating illness to improve quality of life. In the United Kingdom and the United States, clinical nurse specialists and nurse practitioners diagnose health problems and prescribe medications and other therapies, depending on regulations that vary by state. Nurses may help coordinate care performed by other providers or act independently as nursing professionals. In addition to providing care and support, nurses educate the public and promote health and wellness. In the U.S., nurse practitioners are nurses with a graduate degree in advanced practice nursing who are permitted to prescribe medications. They practice independently in a variety of settings in more than half of the United States. In the postwar period, nurse education has diversified, awarding advanced and specialized credentials, and many traditional regulations and roles are changing. History Premodern Nursing historians face the challenge of determining whether care provided to the sick or injured in antiquity should be considered nursing care. In the fifth century BC, for example, the Hippocratic Collection in places described skilled care and observation of patients by male "attendants," who may have provided care now provided by nurses. 
Around 600 BC in India, the Sushruta Samhita (Book 3, Chapter V) records of the role of the nurse: "the different parts or members of the body as mentioned before including the skin, cannot be correctly described by one who is not well versed in anatomy. Hence, anyone desirous of acquiring a thorough knowledge of anatomy should prepare a dead body and carefully, observe, by dissecting it, and examining its different parts." In the Middle Ages, members of religious orders such as nuns and monks often provided nursing-like care. Examples exist in Christian, Islamic, Buddhist, and other traditions. The biblical figure of Phoebe is described in many sources as "the first visiting nurse". These traditions were influential in the development of the ethos of modern nursing. Its religious roots remain in evidence in many countries. One example in the United Kingdom is the use of the historical title "sister" to refer to a senior nurse. During the Reformation, Protestant reformers shut down monasteries and convents, allowing a few hundred municipal hospices to remain in operation in northern Europe. Nuns who had been serving as nurses were given pensions or told to marry and stay home. Nursing care went to the inexperienced as traditional caretakers, rooted in the Roman Catholic Church, were removed from their positions. The nursing profession in Europe was extinguished for approximately 200 years. 19th century During the Crimean War, Grand Duchess Elena Pavlovna called for women to join the Order of Exaltation of the Cross (Krestodvizhenskaya Obshchina) for a year of service in military hospitals. The first section of twenty-eight "sisters", headed by Aleksandra Petrovna Stakhovich, the Directress of the Order, reached Crimea early in November 1854. 
Florence Nightingale laid the foundations of professional nursing after the Crimean War, in light of a comprehensive statistical study she made of sanitation in India, leading her to emphasize the importance of sanitation. "After 10 years of sanitary reform, in 1873, Nightingale reported that mortality among the soldiers in India had declined from 69 to 18 per 1,000". Nightingale believed that nursing was a social freedom and mission for women. She believed that any educated woman could help improve the care of the ill. Her
Biology and health sciences
Health, fitness, and medicine
null
1811906
https://en.wikipedia.org/wiki/Kapellbr%C3%BCcke
Kapellbrücke
The Kapellbrücke (literally, Chapel Bridge) is a covered wooden footbridge spanning the river Reuss diagonally in the city of Lucerne in central Switzerland. Named after the nearby St. Peter's Chapel, the bridge is unique in containing a number of interior paintings dating back to the 17th century, although many of them were destroyed along with a larger part of the centuries-old bridge in a 1993 fire. Subsequently restored, the Kapellbrücke is the oldest wooden covered bridge in Europe, as well as the world's oldest surviving truss bridge. It serves as the city's symbol and as one of Switzerland's main tourist attractions. History Part of the bridge complex is the octagonal "Wasserturm", which translates to "water tower," in the sense of "tower standing in the water." The tower pre-dated the bridge by about 30 years. Over the centuries, the tower has been used as a prison, torture chamber, and later a municipal archive as well as a local treasury. Today, the tower is closed to the public, although it houses a local artillery association and a tourist gift shop. The bridge itself was originally built in 1365 as part of Lucerne's fortifications. It linked the old town on the right bank of the Reuss to the new town on the left bank, securing the town from attack from the south (i.e. from the lake). The bridge was initially longer than it is today; numerous shortenings over the years and river bank replenishments have reduced its total length. It is the oldest surviving truss bridge in the world, consisting of strutted and triangulated trusses of moderate span, supported on piled trestles; as such, it is probably an evolution of the strutted bridge. The Kapellbrücke almost burned down on 18 August 1993; the fire destroyed two thirds of its interior paintings. The bridge was subsequently reconstructed at a cost of CHF 3.4 million and reopened to the public on 14 April 1994. 
Paintings Lucerne is unique in that its three wooden pedestrian bridges, the 14th-century Hofbrücke (now destroyed) and Kapellbrücke and the 16th-century Spreuerbrücke, all featured painted interior triangular frames. None of Europe's other wooden footbridges have this feature. The paintings, dating back to the 17th century and executed by local Catholic painter Hans Heinrich Wägmann, depict events from Lucerne's history. Of the original 158 paintings, 147 existed before the 1993 fire. After the fire, the remains of 47 paintings were collected, but ultimately only 30 were fully restored. The wooden boards that held the paintings varied in width and height. Most of the panels were made from spruce wood boards, and only a few were made from linden wood and maple. The paintings were created during the Counter-Reformation, featuring scenes promoting the Catholic Church. The paintings were sponsored by the city's council members, who, upon sponsoring a panel, were allowed to add their personal coat of arms to it. An explanation of each painting was printed below each scene. The paintings ran all along the bridge, ranging from the life and death of Lucerne's patron saint St. Leger to the legends of the city's other patron saint St. Maurice.
Technology
Bridges
null
1812811
https://en.wikipedia.org/wiki/Tappet
Tappet
A tappet or valve lifter is a valve train component which converts rotational motion into linear motion in activating a valve. It is most commonly found in internal combustion engines, where it converts the rotational motion of the camshaft into linear motion of intake and exhaust valves, either directly or indirectly. An earlier use of the term was for part of the valve gear in beam engines beginning in 1715. The term is also used for components in pneumatic cylinders and weaving looms. History The first recorded use of the term tappet is as part of the valve gear in the 1715 Newcomen engine, an early form of steam engine. Early versions of the Newcomen engine from 1712 had manually operated valves, but by 1715 this repetitive task had been automated through the use of tappets. The beam of the engine had a vertical 'plug rod' hung from it, alongside the cylinder. Adjustable blocks or 'tappets' were attached to this rod and as the beam moved up and down, the tappets pressed against long levers or 'horns' attached to the engine's valves, working the cycle of steam and injection water valves to operate the engine. This operation by tappets on a plug rod continued into the early twentieth century with the Cornish engine. From the 19th century onwards, most steam engines used slide valves or piston valves, which do not require the use of tappets. In an internal combustion engine, a tappet (also called a 'valve lifter' or 'cam follower') is the component which converts the rotation of the camshaft into vertical motion to open and close an intake or exhaust valve. The principal types of tappets used in automotive engines are solid, hydraulic, and roller. To reduce wear from the rotating camshaft, tappets are usually circular and allowed, or even encouraged, to rotate in place. This minimizes wear caused by the camshaft contacting the same point on the base of the tappet each valve cycle, which can result in grooving. 
However, in some relatively small engines with many cylinders (such as the Daimler '250' V8 engine), the tappets were small and non-rotating. The base of most plain tappets is given a slight convex profile to soften contact with the leading edge of the camshaft lobe. Alternatives An alternative to the tappet is the 'finger follower', a pivoting beam that is used to convert the camshaft rotation into opening and closing of a valve. Finger followers are used in some high-performance dual overhead camshaft (DOHC) engines, most commonly in motorcycles and sports cars. Adjusting valve clearance On most overhead valve (OHV) engines, proper clearance between the camshaft and tappet is achieved by turning a set screw in the end of the rocker arm that contacts the end of the pushrod until the desired gap, measured with a feeler gauge, is achieved. Too large a gap results in wear from misaligned parts and compromised engine performance, and too small a gap can lead to bent pushrods or burnt valves. A locknut secures the set screw in place. Loose set screws can cause catastrophic engine failure, which has led to fatal aircraft crashes. On some OHV engines in the 1960s, such as the Ford Taunus V4 engine and Opel CIH engine, the tappet adjustment was done by setting the height of the rocker pivot point (rather than the typical method of a rocker-end adjustment screw). On the 1965-1970 versions of the Opel CIH engine with solid tappets, the tappet adjustment was conducted with the engine running. Hydraulic tappets A hydraulic tappet, also known as a "hydraulic valve lifter" or "hydraulic lash adjuster", contains a small hydraulic piston that becomes filled with pressurised engine oil. The piston acts as a hydraulic spring that automatically adjusts the tappet clearance according to the oil pressure. 
Although the movements of the piston are small and infrequent, they are sufficient to make the valve actuation self-adjusting, so there is no need to manually adjust the clearance of the tappets. Hydraulic tappets depend on a supply of clean oil at the appropriate pressure. When starting a cold engine with low oil pressure, hydraulic tappets are often noisy for a few seconds, until they position themselves correctly. Roller tappets Early automotive engines used a roller at the contact point with the camshaft; however, as engine speeds increased, 'flat tappets' with plain ends became far more common than tappets with rollers. In recent times, roller tappets and rocker arms with roller tappet ends have made a resurgence, as their lower friction provides greater efficiency and reduces drag. Valvetrain layouts In a sidevalve engine, a common design for car engines until the 1950s, the valves are mounted at the sides of the cylinder and face upwards. This means that the camshaft can be placed directly beneath the valves, without the need for a rocker. In engines with lower cylinder blocks, the tappets could drive the valves directly, without needing even a push rod. Sidevalve engines also required regular adjustment of the tappet clearance, and in this case it was the tappets themselves that were adjusted directly. Small access plates were provided on the sides of the cylinder block, giving access to the gap between the valves and tappets. Some tappets had a threaded adjuster, but simpler engines could be adjusted by grinding down the ends of the valve stem directly. As the tappet adjustment always consisted of expanding the clearance (re-grinding valves into their valve seats during de-coking makes them sit lower, thus reducing the tappet clearance), adjustment by shortening the valve stems was a viable method. Eventually the valves would be replaced entirely, a relatively common operation for engines of this era. 
In a pushrod engine (OHV), the tappets are located down in the engine block and operate long, thin pushrods which transfer the motion (via the rocker arms) to the valves located at the top of the engine. In a single overhead camshaft (SOHC) engine, the tappets are integrated into the design of the rocker arms as one piece, since the camshaft interacts with the rocker arm directly. Mass-production of SOHC engines for passenger cars became more common in the 1970s, in the form of crossflow cylinder heads with overhead rockers located directly above a single overhead camshaft, as a more efficient design which could be cost-effectively manufactured. The 1970-2001 Ford Pinto engine was one of the first mass-production engines to use an SOHC design with a toothed cambelt. In this configuration, the rockers combine the function of sliding tappet, rocker and adjustment device. Adjustment of the valve clearance was usually by a threaded stud at the valve end of the rocker. The linear sliding tappet side often had a high rate of wear and demanded careful lubrication with oil containing zinc additives. A relatively uncommon design of an SOHC camshaft with four valves per cylinder was first used in the 1973-1980 Triumph Dolomite Sprint inline-four engine, which used a camshaft with 8 lobes that actuated the 16 valves via a clever arrangement of rocker arms. Double overhead camshaft (DOHC) engines were first developed as high performance aircraft and racing engines, with the camshafts mounted directly over the valves and driving them through a simple 'bucket tappet'. Most engines used a crossflow cylinder head with the valves in two rows in line with their corresponding camshaft. The tappet clearance adjustment is typically set using a small shim, located either above or below the tappet. Shims were made in a range of standard thicknesses and a mechanic would swap them to change the tappet gap. 
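The shim arithmetic is straightforward: the required shim thickness is the current shim thickness plus the difference between the measured and specified clearances, rounded to the nearest available shim size. A minimal sketch of that calculation follows; the function name, the 0.05 mm shim increment, and the example values are illustrative assumptions, not figures from any particular service manual.

```python
def required_shim(current_shim_mm, measured_gap_mm, specified_gap_mm,
                  increment_mm=0.05):
    """Return the shim thickness that restores the specified clearance.

    If the measured gap is too large, a thicker shim is needed; if too
    small, a thinner one. Shims come in discrete steps, so the ideal
    thickness is rounded to the nearest available increment.
    (Illustrative sketch; increment and names are assumptions.)
    """
    ideal = current_shim_mm + (measured_gap_mm - specified_gap_mm)
    steps = round(ideal / increment_mm)
    return round(steps * increment_mm, 2)

# Example: a 2.50 mm shim leaves a 0.33 mm gap where 0.20 mm is
# specified, so a thicker shim is called for.
print(required_shim(2.50, 0.33, 0.20))  # -> 2.65
```

The same function covers the opposite case: with a 2.60 mm shim and a gap of only 0.15 mm against a 0.20 mm specification, it returns the thinner 2.55 mm shim.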
In early DOHC engines, the engine would first be assembled with a default shim of known thickness, then the gap measured. This measurement would be used to calculate the thickness of shim that would result in the desired gap. After installation of the new shim, the gap would be measured again to verify that the clearance was correct. As the camshaft had to be removed to change the shims, this was a very time-consuming operation (especially since the position of the camshaft could vary slightly each time it was re-installed). Later engines used an improved design in which the shims were located above the tappets, which allowed each shim to be changed without removing either the tappet or camshaft. A drawback of this design is that the rubbing surface of the tappet becomes the surface of the shim, which poses a difficult metallurgical problem for mass production. The first mass-production engine to use this system was the 1966-2000 Fiat Twin Cam engine, followed by engines from Volvo and the water-cooled Volkswagens. Other uses The term 'tappet' is also used, more obscurely, for a component of valve systems in other machinery, particularly as part of a bash valve in pneumatic cylinders. Where a reciprocating action is produced, such as for a pneumatic drill or jackhammer, the valve may be actuated by inertia or by the movement of the working piston. As the piston hammers back and forth, it impacts a small tappet, which in turn moves the air valve and so reverses the flow of air to the piston. In weaving looms, a tappet is a mechanism which helps form the shed, or opening, in the warp threads (long direction) of the material through which the weft threads (side-to-side or short direction) are passed. The tappets form the basic patterns in the material such as plain weave, twill, denim, or satin weaves. Harris tweed is still woven on looms that use tappets.
Technology
Mechanisms
null
1814677
https://en.wikipedia.org/wiki/Plasmodesma
Plasmodesma
Plasmodesmata (singular: plasmodesma) are microscopic channels which traverse the cell walls of plant cells and some algal cells, enabling transport and communication between them. Plasmodesmata evolved independently in several lineages, and species that have these structures include members of the Charophyceae, Charales, Coleochaetales and Phaeophyceae (which are all algae), as well as all embryophytes, better known as land plants. Unlike animal cells, almost every plant cell is surrounded by a polysaccharide cell wall. Neighbouring plant cells are therefore separated by a pair of cell walls and the intervening middle lamella, forming an extracellular domain known as the apoplast. Although cell walls are permeable to small soluble proteins and other solutes, plasmodesmata enable direct, regulated, symplastic transport of substances between cells. There are two forms of plasmodesmata: primary plasmodesmata, which are formed during cell division, and secondary plasmodesmata, which can form between mature cells. Similar structures, called gap junctions and membrane nanotubes, interconnect animal cells, and stromules form between plastids in plant cells. Formation Primary plasmodesmata are formed when fractions of the endoplasmic reticulum are trapped across the middle lamella as new cell walls are synthesized between two newly divided plant cells. These eventually become the cytoplasmic connections between cells. At the formation site, the wall is not thickened further, and depressions or thin areas known as pits are formed in the walls. Pits normally pair up between adjacent cells. Plasmodesmata can also be inserted into existing cell walls between non-dividing cells (secondary plasmodesmata). Primary plasmodesmata The formation of primary plasmodesmata occurs during the part of the cellular division process where the endoplasmic reticulum and the new cell plate are fused together; this results in the formation of a cytoplasmic pore (or cytoplasmic sleeve). 
The desmotubule, also known as the appressed ER, forms alongside the cortical ER. Both the appressed ER and the cortical ER are packed tightly together, thus leaving no room for any luminal space. It is proposed that the appressed ER acts as a membrane transportation route in the plasmodesmata. When filaments of the cortical ER are entangled in the formation of a new cell plate, plasmodesmata formation occurs in land plants. It is hypothesized that the appressed ER forms due to a combination of pressure from a growing cell wall and interaction between ER and PM proteins. Primary plasmodesmata are often present in areas where the cell walls appear to be thinner. This is because as a cell wall expands, the abundance of primary plasmodesmata decreases. To further expand plasmodesmal density during cell wall growth, secondary plasmodesmata are produced. The process of secondary plasmodesmata formation is still to be fully understood, though various degrading enzymes and ER proteins are thought to stimulate the process. Structure Plasmodesmatal plasma membrane A typical plant cell may have between 1,000 and 100,000 plasmodesmata connecting it with adjacent cells, equating to between 1 and 10 per μm². Plasmodesmata are approximately 50–60 nm in diameter at the midpoint and are constructed of three main layers: the plasma membrane, the cytoplasmic sleeve, and the desmotubule. They can traverse cell walls that are up to 90 nm thick. The plasma membrane portion of the plasmodesma is a continuous extension of the cell membrane or plasmalemma and has a similar phospholipid bilayer structure. The cytoplasmic sleeve is a fluid-filled space enclosed by the plasmalemma and is a continuous extension of the cytosol. Trafficking of molecules and ions through plasmodesmata occurs through this space. Smaller molecules (e.g. sugars and amino acids) and ions can easily pass through plasmodesmata by diffusion without the need for additional chemical energy. 
Larger molecules, including proteins (for example green fluorescent protein) and RNA, can also pass through the cytoplasmic sleeve diffusively. Plasmodesmatal transport of some larger molecules is facilitated by mechanisms that are currently unknown. One mechanism of regulation of the permeability of plasmodesmata is the accumulation of the polysaccharide callose around the neck region to form a collar, thereby reducing the diameter of the pore available for transport of substances. Through dilation, active gating, or structural remodeling, the permeability of the plasmodesmata is increased. This increase in plasmodesmata pore permeability allows larger molecules, or macromolecules, such as signaling molecules, transcription factors and RNA-protein complexes, to be transported to various cellular compartments. Desmotubule The desmotubule is a tube of appressed (flattened) endoplasmic reticulum that runs between two adjacent cells. Some molecules are known to be transported through this channel, but it is not thought to be the main route for plasmodesmatal transport. Around the desmotubule and the plasma membrane, areas of an electron-dense material have been seen, often joined together by spoke-like structures that seem to split the plasmodesma into smaller channels. These structures may be composed of myosin and actin, which are part of the cell's cytoskeleton. If this is the case, these proteins could be used in the selective transport of large molecules between the two cells. Transport Plasmodesmata have been shown to transport proteins (including transcription factors), short interfering RNA, messenger RNA, viroids, and viral genomes from cell to cell. One example of a viral movement protein is the tobacco mosaic virus MP-30. MP-30 is thought to bind to the virus's own genome and shuttle it from infected cells to uninfected cells through plasmodesmata. 
Flowering Locus T protein moves from leaves to the shoot apical meristem through plasmodesmata to initiate flowering. Plasmodesmata are also used by cells in phloem, where companion cells use symplastic transport to regulate the sieve-tube cells. The size of molecules that can pass through plasmodesmata is determined by the size exclusion limit. This limit is highly variable and is subject to active modification. For example, MP-30 is able to increase the size exclusion limit from 700 daltons to 9,400 daltons, thereby aiding its movement through a plant. Also, increasing calcium concentrations in the cytoplasm, either by injection or by cold-induction, has been shown to constrict the opening of surrounding plasmodesmata and limit transport. Several models for possible active transport through plasmodesmata exist. It has been suggested that such transport is mediated by interactions with proteins localized on the desmotubule, and/or by chaperones partially unfolding proteins, allowing them to fit through the narrow passage. A similar mechanism may be involved in transporting viral nucleic acids through the plasmodesmata. A number of mathematical models have been suggested for estimating transport across plasmodesmata. These models have primarily treated transport as a diffusion problem with some added hindrance. Cytoskeletal components of plasmodesmata Plasmodesmata link almost every cell within a plant, which can cause negative effects such as the spread of viruses. Understanding this requires first looking at cytoskeletal components, such as actin microfilaments, microtubules, and myosin proteins, and how they are related to cell-to-cell transport. Actin microfilaments are linked to the transport of viral movement proteins to plasmodesmata, which allows for cell-to-cell transport through the plasmodesmata. 
Fluorescent tagging for co-expression in tobacco leaves showed that actin filaments are responsible for transporting viral movement proteins to the plasmodesmata. When actin polymerization was blocked, plasmodesmata targeting of the movement proteins in the tobacco decreased, and 10-kDa (rather than 126-kDa) components were able to move between tobacco mesophyll cells. This also impacted cell-to-cell movement of molecules within the tobacco plant. Viruses Viruses break down actin filaments within the plasmodesmata channel in order to move within the plant. For example, when the cucumber mosaic virus (CMV) gets into plants, it is able to travel through almost every cell by using viral movement proteins to transport itself through the plasmodesmata. When tobacco leaves are treated with phalloidin, a drug that stabilizes actin filaments, the cucumber mosaic virus movement proteins are unable to increase the plasmodesmata size exclusion limit (SEL). Myosin High amounts of myosin proteins are found at the sites of plasmodesmata. These proteins are involved in directing viral cargoes to plasmodesmata. When mutant forms of myosin were tested in tobacco plants, viral protein targeting to plasmodesmata was negatively affected. Permanent binding of myosin to actin, induced by a drug, caused a decrease in cell-to-cell movement. Viruses are also able to selectively bind to myosin proteins. Microtubules Microtubules have an important role in cell-to-cell transport of viral RNA. Viruses use many different methods of transporting themselves from cell to cell; one of these involves associating the N-terminal domain of the viral RNA with microtubules in order to localize to the plasmodesmata. In tobacco plants injected with tobacco mosaic virus and kept at high temperatures, there was a strong correlation between GFP-labelled TMV movement proteins and microtubules. This led to an increase in the spread of viral RNA through the tobacco. 
Plasmodesmata and callose The structure and regulation of plasmodesmata are governed by a beta 1,3-glucan polymer known as callose. Callose is found in cell plates during the process of cytokinesis, but as this process reaches completion the levels of callose decrease. The only callose-rich parts of the cell are the sections of the cell wall where plasmodesmata are present. In order to regulate what is transported through the plasmodesmata, callose must be present. Callose provides the mechanism by which plasmodesmata permeability is regulated. In order to control what is transported between different tissues, the plasmodesmata undergo several specialized conformational changes. The activity of plasmodesmata is linked to physiological and developmental processes within plants. There is a hormone signaling pathway that relays primary cellular signals via the plasmodesmata. There are also patterns of environmental, physiological, and developmental cues that show relation to plasmodesmata function. An important mechanism of plasmodesmata is the ability to gate their channels. Callose levels have been shown to be a means of changing plasmodesmata aperture size. Callose deposits are found at the neck of the plasmodesmata in newly formed cell walls. The level of deposits at the plasmodesmata can fluctuate, which shows that there are signals that trigger an accumulation of callose at the plasmodesmata and cause the plasmodesmata to become gated or more open. The enzyme activities of beta 1,3-glucan synthase and hydrolases are involved in changes in plasmodesmata callose levels. Some extracellular signals change the transcription of this synthase and these hydrolases. Arabidopsis thaliana has callose synthase genes that encode a catalytic subunit of beta 1,3-glucan synthase. Gain-of-function mutants in this gene pool show increased deposition of callose at plasmodesmata and a decrease in macromolecular trafficking, as well as a defective root system during development.
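The mathematical models mentioned in the Transport section, which treat plasmodesmatal transport as diffusion with an added hindrance, can be illustrated with a minimal sketch: Fick's first law across the pore, with the free diffusivity scaled down by a steric hindrance factor that depends on how large the solute is relative to the pore. The Renkin-style hindrance expression is a classical approximation for cylindrical pores; all parameter values and function names below are illustrative assumptions, not measurements from the plasmodesma literature.

```python
def hindrance_factor(solute_radius_nm, pore_radius_nm):
    """Renkin-style steric hindrance: partitioning times drag reduction.

    Returns a factor between 0 and 1 multiplying the free diffusivity.
    (Illustrative sketch; a classical cylindrical-pore approximation.)
    """
    lam = solute_radius_nm / pore_radius_nm
    if lam >= 1.0:
        return 0.0  # solute larger than pore: completely excluded
    partition = (1 - lam) ** 2
    drag = 1 - 2.104 * lam + 2.09 * lam**3 - 0.95 * lam**5
    return max(partition * drag, 0.0)

def diffusive_flux(d_free, conc_diff, wall_thickness_nm,
                   solute_radius_nm, pore_radius_nm):
    """Steady-state hindered diffusive flux per unit pore area
    (arbitrary units): Fick's first law with the hindrance applied."""
    h = hindrance_factor(solute_radius_nm, pore_radius_nm)
    gradient = conc_diff / wall_thickness_nm
    return h * d_free * gradient

# A small solute in a wide pore is barely hindered; a solute nearly as
# large as the pore is almost completely blocked, mimicking a size
# exclusion limit.
print(hindrance_factor(0.3, 10.0) > 0.8)   # small solute passes freely
print(hindrance_factor(9.0, 10.0) < 0.05)  # near-pore-sized solute blocked
```

Constricting the pore radius in this sketch (as callose deposition at the neck region does physically) sharply reduces the hindrance factor for a given solute, which is the qualitative behaviour the gating discussion above describes.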
Biology and health sciences
Plant cells
Biology
1815052
https://en.wikipedia.org/wiki/Pteridospermatophyta
Pteridospermatophyta
Pteridospermatophyta, also called "pteridosperms" or "seed ferns" are a polyphyletic grouping of extinct seed-producing plants. The earliest fossil evidence for plants of this type are the lyginopterids of late Devonian age. They flourished particularly during the Carboniferous and Permian periods. Pteridosperms declined during the Mesozoic Era and had mostly disappeared by the end of the Cretaceous Period, though Komlopteris seem to have survived into Eocene times, based on fossil finds in Tasmania. With regard to the enduring utility of this division, many palaeobotanists still use the pteridosperm grouping in an informal sense to refer to the seed plants that are not angiosperms, coniferoids (conifers or cordaites), ginkgophytes or cycadophytes (cycads or bennettites). This is particularly useful for extinct seed plant groups whose systematic relationships remain speculative, as they can be classified as pteridosperms with no valid implications being made as to their systematic affinities. Also, from a purely curatorial perspective the term pteridosperms is a useful shorthand for describing the fern-like fronds that were probably produced by seed plants, which are commonly found in many Palaeozoic and Mesozoic fossil floras. History of classification The concept of pteridosperms goes back to the late 19th century when palaeobotanists came to realise that many Carboniferous fossils resembling fern fronds had anatomical features more reminiscent of the modern-day seed plants, the cycads. In 1899 the German palaeobotanist Henry Potonié coined the term "Cycadofilices" ("cycad-ferns") for such fossils, suggesting that they were a group of non-seed plants intermediate between the ferns and cycads. 
Shortly afterwards, the British palaeobotanists Frank Oliver and Dukinfield Henry Scott (with the assistance of Oliver's student at the time, Marie Stopes) made the critical discovery that some of these fronds (genus Lyginopteris) were associated with seeds (genus Lagenostoma) that had identical and very distinctive glandular hairs, and concluded that both fronds and seeds belonged to the same plant. Soon, additional evidence came to light suggesting that seeds were also attached to the Carboniferous fern-like fronds Dicksonites, Neuropteris and Aneimites. Initially it was still thought that they were "transitional fossils" intermediate between the ferns and cycads, and especially in the English-speaking world they were referred to as "seed ferns" or "pteridosperms". Today, despite being regarded by most palaeobotanists as only distantly related to ferns, these spurious names have nonetheless established themselves. Nowadays, four orders of Palaeozoic seed plants tend to be referred to as pteridosperms: Lyginopteridales, Medullosales, Callistophytales and Peltaspermales, with "Mesozoic seed ferns" including the Petriellales, Corystospermales and Caytoniales. Their discovery attracted considerable attention at the time, as the pteridosperms were the first extinct group of vascular plants to be identified solely from the fossil record. In the 19th century the Carboniferous Period was often referred to as the "Age of Ferns" but these discoveries during the first decade of the 20th century made it clear that the "Age of Pteridosperms" was perhaps a better description. During the 20th century the concept of pteridosperms was expanded to include various Mesozoic groups of seed plants with fern-like fronds, such as the Corystospermaceae. Some palaeobotanists also included seed plant groups with entire leaves such as the Glossopteridales and Gigantopteridales, which was stretching the concept. 
In the context of modern phylogenetic models, the groups often referred to as pteridosperms appear to be liberally spread across a range of clades, and many palaeobotanists today would regard pteridosperms as little more than a paraphyletic 'grade-group' with no common lineage. One of the few characters that may unify the group is that the ovules were borne in a cupule, a group of enclosing branches, but this has not been confirmed for all "pteridosperm" groups. It has been speculated that some seed fern groups may be close to the ancestry of flowering plants (angiosperms). A 2009 study concluded that "phylogenetic analysis techniques have surpassed the hard data needed to formulate meaningful phylogenetic hypotheses" regarding the relationships of "seed ferns" to living plant groups.

Taxonomy

Major groups

Order †Calamopityales Němejc (1963)
Order †Corystospermales Petriella (1981) [= Umkomasiales Doweld (2001)]
Order †Callistophytales Rothwell (1981) emend. Anderson, Anderson & Cleal (2007) [Poroxylales Němejc (1968)]
Order †Petriellales Taylor et al. (1994)
Order †Peltaspermales Taylor (1981) [Lepidopteridales Němejc (1968)]
Order †Gigantopteridales Li & Yao (1983) [Gigantonomiales Meyen (1987)]
Order †Pentoxylales Pilger & Melchior (1954)
Order †Glossopteridales Plumstead (1956)
Order †Caytoniales Gothan (1932)
Order †Medullosales Corsin (1960)
Order †Lyginopteridales (Corsin (1960)) Havlena (1961) [Lagenostomatales Seward ex Long (1975); Lyginodendrales Němejc (1968); Sphenopteridales Schimper (1869)]
  Family †Angaranthaceae Naugolnykh (2012)
  Family †Heterangiaceae Němejc (1950) nom. nud.
  Family †Physostomataceae Long (1975)
  Family †Lyginopteridaceae Potonié (1900) emend. Anderson, Anderson & Cleal (2007) [Lagenostomataceae Long (1975); Pityaceae Scott (1909); Lyginodendraceae Scott (1909); Sphenopteridaceae Göpp. (1842); Pseudopecopteridaceae Lesquereux (1884); Megaloxylaceae Scott (1909), nom. rej.; Rhetinangiaceae Scott (1923), nom. rej.; Tetratmemaceae Němejc (1968)]
  Family †Moresnetiaceae Němejc (1963) emend. Anderson, Anderson & Cleal (2007) [Genomospermaceae Long (1975); Elkinsiaceae Rothwell, Scheckler & Gillespie (1989) ex Cleal; Hydraspermaceae]

Other minor groups

Class incertae sedis
  Order incertae sedis
    Family ?†Nystroemiaceae Wang & Pfefferkorn (2009)
      †Nystroemia Halle (1927)
    Family †Austrocalyxaceae Vega & Archangelsky (2001)
      †Austrocalyx
      †Polycalyx
      †Rinconadia
      †Jejenia
      †Fedekurtzia (Archangelsky) emend. Coturel et Césari (2017)
  Order ?†Alexiales Anderson & Anderson (2003)
    Family †Alexiaceae Anderson & Anderson (2003)
      †Alexia Anderson & Anderson (2003)
  Order †Buteoxylonales
    Family †Buteoxylonaceae Barnard & Long (1973)
      †Buteoxylon Barnard & Long (1973)
      †Triradioxylon Barnard & Long (1975)
  Order †Dicranophyllales Meyen (1984) emend. Anderson, Anderson & Cleal (2007)
    Family †Dicranophyllaceae Němejc (1959) ex Archangelsky & Cúneo (1990)
    Family †Trichopityaceae Němejc (1968) [Florin emend.]
      †Polyspermophyllum? Archangelsky and Cúneo (1990) (possibly a coniferophyte)
  Order †Erdtmanithecales Friis and Pedersen (1996)
  Order †Fredlindiales Anderson & Anderson (2003)
  Order †Hamshawviales Anderson & Anderson (2003)
  Order †Hlatimbiales Anderson & Anderson (2003)
    Family †Hlatimbiaceae Anderson & Anderson (2003)
      †Hlatimbia Anderson & Anderson (2003)
      †Batiopteris Anderson & Anderson (2003)
  Order †Matatiellales Anderson & Anderson (2003)
  Order †Nilssoniales Darrah (1960) (possibly cycadopsids)
  Order †Phasmatocycadales Doweld (2001) [Taeniopteridales]
    Family †Phasmatocycadaceae Doweld (2001) [Spermopteridaceae Doweld (2001)]
      †Lesleya Lesquereux (1879–80) (otherwise placed as incertae sedis regarding family and order)
Class †Axelrodiopsida Anderson & Anderson (2007)
  Order †Axelrodiales Anderson & Anderson (2007)
    Family †Axelrodiaceae Anderson & Anderson (2007)
      †Axelrodia Cornet (1986)
      †Sanmiguelia Brown (1956)
      †Synangispadixis Cornet (1986)
    Family †Zamiostrobaceae Anderson & Anderson (2007)
      †Zamiostrobus Endlicher (1836)
Incertae sedis to order and family:
  †Gnetopsis Renault et Zeiller (1884)
  †Pullaritheca Rothwell and Wight (1989)
  †Kegelidium Dolianiti (1954)
  †Ptilozamites
Biology and health sciences
Seed plants (except flowering plants)
Plants
12669091
https://en.wikipedia.org/wiki/Aphonopelma%20chalcodes
Aphonopelma chalcodes
Aphonopelma chalcodes, commonly known as the western desert tarantula, desert blonde tarantula, Arizona blonde tarantula or Mexican blonde tarantula, is a species of spider belonging to the family Theraphosidae. It has a limited distribution in the deserts of Arizona and adjacent parts of Mexico but can be very common within this range. The common name "blonde tarantula" refers to the carapace, which is densely covered in pale hairs and contrasts strongly with the all-dark legs and abdomen. These spiders also have low-toxicity venom, a long life expectancy, and numerous offspring. Description This 3 to 5 in (8 to 13 cm) large-bodied, burrowing spider is commonly seen during the summer rainy season in southwestern deserts. The female is usually a uniform tan color. The male has black legs, a copper-colored cephalothorax and a reddish abdomen. The female body length is up to 56 mm, males only reaching 44 mm. Their burrows can be as large as 1 to 2 in (25 to 51 mm) in diameter, with some strands of silk across the opening. Multiple lectins have been detected in the serum of Aphonopelma chalcodes. Lectins are proteins that bind to carbohydrates. Research studies illustrate that the lectins within the serum of A. chalcodes have the ability to bind to sialic acid. The functions of sialic acids are diverse, contributing significantly to protein folding, neural development, and metabolism. However, the implications of the lectins binding to sialic acid must be investigated further. Visual system The visual system of A. chalcodes is critical to its survival, as these spiders rely on their spectral sensitivity and visual acuity. These spiders have two sets of eyes, referred to as the primary and secondary sets. Spectral sensitivity within these eyes is critical, as it is essential for distinguishing different wavelengths. The peak response amplitudes of these spiders were directly correlated with the intensity of the light stimulus. 
However, it was also found that the period of depolarizations, pertaining to receptor potentials, was longer for longer flashes. Additionally, the spectral sensitivity of the species was assessed. The range of wavelength sensitivity in all ocular cells was between 350 and 640 nm. Sensitivity peaked around 500 nm and was lowest at 640 nm. Both the primary and secondary sets of eyes had very similar spectral sensitivities and waveforms. Research studies have demonstrated that the receptor potentials of the tarantula photoreceptors in response to light flashes were characterized by smooth depolarizations. Lastly, the secondary eyes in these spiders have tapeta, reflective layers that allow them to detect dim light more effectively than the primary eyes. The division of labor between the primary and secondary eyes in A. chalcodes resembles that between rods and cones in vertebrates. Molting Molting is a biological process that invertebrates often go through. Molting in spiders consists of shedding the exoskeleton and forming a new covering through different developmental stages. This process allows spiders to grow as they pass through different stages of development. The molting of A. chalcodes has been determined to occur through ten primary stages, with a total of twenty-five molts occurring over a two-year period. Each stage of molting corresponds to shedding of a different portion of the exoskeleton, ranging from the dorsum to the abdomen and ultimately the legs. The stages are not of equal duration, with the first stage being the most extensive. Tarantulas are able to molt at any time of the day; research has shown that molting is not restricted to any particular time. Although molting is not dependent on the time of day, it is seasonally dependent. In A. chalcodes, molting is especially apparent during March and April. 
The reasons why tarantulas tend to molt during spring are not currently known; however, it has been established that molting is seasonally dependent. Reproduction and development The spider undergoes sexual differentiation later in development, as it is born resembling a female. After several years, the spider may begin to display male traits after further differentiation. Male A. chalcodes develop palpal bulbs, intended to store sperm and insert it into the female's genital opening. Females possess abdominal pouches (spermathecae) that are utilized to store sperm until reproduction occurs through the laying of eggs. When reproduction occurs, females combine their eggs with the stored sperm as the eggs are laid. The average number of offspring is 600, with an average gestation period of about six to seven weeks. The life expectancy of an average A. chalcodes is about 24–30 years for females, and 5–10 years for males. This is highly dependent on the habitat and respective development of each spider; in general, however, one can expect a high life expectancy in this particular species. The significantly higher life expectancy of females in comparison to males can be attributed to differences in development and reproductive organs. Distribution, habitat and lifestyle Aphonopelma chalcodes, the western desert tarantula, occupies several states within the southwestern United States. Specifically, these spiders are known to be common in New Mexico and Arizona. This spider often lives in desert soil and is resistant to harsh weather. These spiders often reside in burrows which they create for themselves. These burrows are very deep, which helps the spider resist and adapt to fluctuations in environmental temperature. However, when temperatures are between 23 °C and 31 °C, these spiders leave their burrows and venture outside. A. 
chalcodes makes its residence in burrows, either by digging under a stone or by occupying isolated burrows that are not being used. The entrance to the burrow is surrounded by strands of silk, which allow the spider to detect that prey is present while it is hiding in the burrow. The nocturnal activity of this spider begins when the silk covering surrounding the burrow is broken. Potential explanations for the breaking of the silk covering include the spider's circadian rhythm, decreased environmental light intensity, and surface temperatures. During the night, tarantulas remain inside the burrow entrance awaiting the arrival of prey. At dawn, the tarantula retreats into the burrow. Although A. chalcodes is particularly active at night, it is not strictly nocturnal, because individuals are seen in the upper portion of the burrow early in the day. Toxicity In general, spider venoms contain several classes of neurotoxins that are relevant to the development of insecticides and other pharmaceuticals. Specifically, the venom of A. chalcodes contains two compounds referred to as Apc600 and Apc728. Analysis of these neurotoxins within the venom revealed the presence of spermine, a polyamine involved in cellular metabolism, and 1,3-diaminopropane. These toxins have not been investigated in depth; however, they are theorized to function in short-term paralysis or immobilization of the tarantula's prey. The venom of A. chalcodes is not highly dangerous to humans, and its effect is often compared to that of a bee sting. These spiders are among the least dangerous within the family Theraphosidae. As pets They are popular among beginner tarantula keepers due to their long lifespan (5–10 years for males, up to 30 years for females) and docile nature.
Biology and health sciences
Spiders
Animals
20596557
https://en.wikipedia.org/wiki/Watermelon
Watermelon
Watermelon (Citrullus lanatus) is a flowering plant species of the Cucurbitaceae family and the name of its edible fruit. A scrambling and trailing vine-like plant, it is a highly cultivated fruit worldwide, with more than 1,000 varieties. Watermelon is grown in favorable climates from tropical to temperate regions worldwide for its large edible fruit, which is a berry with a hard rind and no internal divisions, and is botanically called a pepo. The sweet, juicy flesh is usually deep red to pink, with many black seeds, although seedless varieties exist. The fruit can be eaten raw or pickled, and the rind is edible after cooking. It may also be consumed as a juice or an ingredient in mixed beverages. Kordofan melons from Sudan are the closest relatives and may be progenitors of modern, cultivated watermelons. Wild watermelon seeds were found in Uan Muhuggiag, a prehistoric site in Libya that dates to approximately 3500 BC. In 2022, a study was released that traced 6,000-year-old watermelon seeds found in the Libyan desert to the Egusi seeds of Nigeria, West Africa. Watermelons were domesticated in north-east Africa and cultivated in Egypt by 2000 BC, although they were not the sweet modern variety. Sweet dessert watermelons spread across the Mediterranean world during Roman times. Considerable breeding effort has developed disease-resistant varieties. Many cultivars are available that produce mature fruit within 100 days of planting. In 2017, China produced about two-thirds of the world's total of watermelons. Description The watermelon is an annual that has a prostrate or climbing habit. Stems are up to long and new growth has yellow or brown hairs. Leaves are long and wide. These usually have three lobes that are lobed or doubly lobed. Young growth is densely woolly with yellowish-brown hairs which disappear as the plant ages. Like all but one species in the genus Citrullus, watermelon has branching tendrils. 
Plants have unisexual male or female flowers that are white or yellow and borne on hairy stalks. Each flower grows singly in the leaf axils, and the species' sexual system, with male and female flowers produced on each plant, is monoecious. The male flowers predominate at the beginning of the season; the female flowers, which develop later, have inferior ovaries. The styles are united into a single column. The large fruit is a kind of modified berry called a pepo with a thick rind (exocarp) and fleshy center (mesocarp and endocarp). Wild plants have fruits up to in diameter, while cultivated varieties may exceed . The rind of the fruit is mid- to dark green and usually mottled or striped, and the flesh, containing numerous pips spread throughout the inside, can be red or pink (most commonly), orange, yellow, green or white. A bitter watermelon, C. amarus, has become naturalized in semiarid regions of several continents, and is designated as a "pest plant" in parts of Western Australia, where it is called "pig melon". Taxonomy The sweet watermelon was first described by Carl Linnaeus in 1753 and given the name Cucurbita citrullus. It was reassigned to the genus Citrullus in 1836, under the replacement name Citrullus vulgaris, by the German botanist Heinrich Adolf Schrader. (The International Code of Nomenclature for algae, fungi, and plants does not allow names like "Citrullus citrullus".) The species is further divided into several varieties, of which bitter wooly melon (Citrullus lanatus (Thunb.) Matsum. & Nakai var. lanatus), citron melons (Citrullus lanatus var. citroides (L. H. Bailey) Mansf.), and the edible var. vulgaris may be the most important. This taxonomy originated with the erroneous synonymization of the wooly melon Citrullus lanatus with the sweet watermelon Citrullus vulgaris by L.H. Bailey in 1930. 
Molecular data, including sequences from the original collection of Thunberg and other relevant type material, show that the sweet watermelon (Citrullus vulgaris Schrad.) and the bitter wooly melon Citrullus lanatus (Thunb.) Matsum. & Nakai are not closely related to each other. A proposal to conserve the name, Citrullus lanatus (Thunb.) Matsum. & Nakai, was accepted by the nomenclature committee and confirmed at the International Botanical Congress in 2017. Prior to 2015, the wild species closest to Citrullus lanatus was assumed to be the tendril-less melon Citrullus ecirrhosus Cogn. from South African arid regions based on an erroneously identified 18th-century specimen. However, after phylogenetic analysis, the closest relative to Citrullus lanatus is now thought to be Citrullus mucosospermus (Fursa) from West Africa (from Senegal to Nigeria), which is also sometimes considered a subspecies within C. lanatus. Watermelon populations from Sudan are also close to domesticated watermelons. The bitter wooly melon was formally described by Carl Peter Thunberg in 1794 and given the name Momordica lanata. It was reassigned to the genus Citrullus in 1916 by Japanese botanists Jinzō Matsumura and Takenoshin Nakai. History Watermelons were originally cultivated for their high water content and stored to be eaten during dry seasons, as a source of both food and water. Watermelon seeds were found in the Dead Sea region at the ancient settlements of Bab edh-Dhra and Tel Arad. Many 5000-year-old wild watermelon seeds (C. lanatus) were discovered at Uan Muhuggiag, a prehistoric archaeological site located in southwestern Libya. This archaeobotanical discovery may support the possibility that the plant was more widely distributed in the past. In the 7th century, watermelons were being cultivated in India, and by the 10th century had reached China. 
The Moors introduced the fruit into the Iberian Peninsula, and there is evidence of it being cultivated in Córdoba in 961 and also in Seville in 1158. It spread northwards through southern Europe, perhaps limited in its advance by summer temperatures being insufficient for good yields. The fruit had begun appearing in European herbals by 1600, and was widely planted in Europe in the 17th century as a minor garden crop. Early watermelons were not sweet, but bitter, with yellowish-white flesh. They were also difficult to open. The modern watermelon, which tastes sweeter and is easier to open, was developed over time through selective breeding. European colonists introduced the watermelon to the New World. Spanish settlers were growing it in Florida in 1576. It was being grown in Massachusetts by 1629, and by 1650 was being cultivated in Peru, Brazil and Panama. Around the same time, Native Americans were cultivating the crop in the Mississippi valley and Florida. Watermelons were rapidly accepted in Hawaii and other Pacific islands when they were introduced there by explorers such as Captain James Cook. In the Civil War-era United States, watermelons were commonly grown by free black people and became one symbol for the abolition of slavery. After the Civil War, black people were maligned for their association with watermelon. The sentiment evolved into a racist stereotype where black people shared a supposed voracious appetite for watermelon, a fruit long associated with laziness and uncleanliness. Seedless watermelons were first developed in 1939 by Japanese scientists who were able to create seedless triploid hybrids; these remained rare at first because they lacked sufficient disease resistance. Seedless watermelons became more popular in the 21st century, rising to nearly 85% of total watermelon sales in the United States in 2014. 
Systematics A melon from the Kordofan region of Sudan, the Kordofan melon, may be the progenitor of the modern, domesticated watermelon. The Kordofan melon shares with the domesticated watermelon the loss of the bitterness gene while maintaining a sweet taste, unlike wild African varieties from other regions, indicating a common origin, possibly cultivated in the Nile Valley by 2340 BC. Composition Nutrition Watermelon fruit is 91% water, contains 6% sugars, and is low in fat (table). In a serving, watermelon fruit supplies of food energy and low amounts of essential nutrients (see table). Only vitamin C is present in appreciable content at 10% of the Daily Value (table). Watermelon pulp contains carotenoids, including lycopene. The amino acid citrulline is produced in watermelon rind. Varieties A number of cultivar groups have been identified: Citroides group (syn. C. lanatus subsp. lanatus var. citroides; C. lanatus var. citroides; C. vulgaris var. citroides) DNA data reveal that C. lanatus var. citroides Bailey is the same as Thunberg's bitter wooly melon, C. lanatus and also the same as C. amarus Schrad. It is not a form of the sweet watermelon C. vulgaris nor closely related to that species. The citron melon or makataan – a variety with sweet yellow flesh that is cultivated around the world for fodder and the production of citron peel and pectin. Lanatus group (syn. C. lanatus var. caffer) C. caffer Schrad. is a synonym of C. amarus Schrad. The variety known as tsamma is grown for its juicy white flesh. The variety was an important food source for travellers in the Kalahari Desert. Another variety known as karkoer or bitterboela is unpalatable to humans, but the seeds may be eaten. A small-fruited form with a bumpy skin has caused poisoning in sheep. Vulgaris group This is Linnaeus's sweet watermelon; it has been grown for human consumption for thousands of years. C. 
lanatus mucosospermus (Fursa) Fursa This West African species is the closest wild relative of the watermelon. It is cultivated for cattle feed. Additionally, other wild species have bitter fruit containing cucurbitacin. C. colocynthis (L.) Schrad. ex Eckl. & Zeyh., C. rehmii De Winter, and C. naudinianus (Sond.) Hook.f. Varieties The more than 1,200 cultivars of watermelon range in weight from less than to more than ; the flesh can be red, pink, orange, yellow or white. The 'Carolina Cross' produced the current world record for heaviest watermelon, weighing . It has green skin, red flesh and commonly produces fruit between . It takes about 90 days from planting to harvest. The 'Golden Midget' has a golden rind and pink flesh when ripe, and takes 70 days from planting to harvest. The 'Orangeglo' has a very sweet orange flesh, and is a large, oblong fruit weighing . It has a light green rind with jagged dark green stripes. It takes about 90–100 days from planting to harvest. The 'Moon and Stars' variety was created in 1926. The rind is purple/black and has many small yellow circles (stars) and one or two large yellow circles (moon). The melon weighs . The flesh is pink or red and has brown seeds. The foliage is also spotted. The time from planting to harvest is about 90 days. The 'Cream of Saskatchewan' has small, round fruits about in diameter. It has a thin, light and dark green striped rind, and sweet white flesh with black seeds. It can grow well in cool climates. It was originally brought to Saskatchewan, Canada, by Russian immigrants. The melon takes 80–85 days from planting to harvest. The 'Melitopolski' has small, round fruits roughly in diameter. It is an early ripening variety that originated from the Astrakhan region of Russia, an area known for cultivation of watermelons. The Melitopolski watermelons are seen piled high by vendors in Moscow in the summer. This variety takes around 95 days from planting to harvest. 
The 'Densuke' watermelon has round fruit up to . The rind is black with no stripes or spots. It is grown only on the island of Hokkaido, Japan, where up to 10,000 watermelons are produced every year. In June 2008, one of the first harvested watermelons was sold at an auction for 650,000 yen (US$6,300), making it the most expensive watermelon ever sold. The average selling price is generally around 25,000 yen ($250). Many cultivars are no longer grown commercially because of their thick rind, but seeds may be available among home gardeners and specialty seed companies. This thick rind is desirable for making watermelon pickles, and some old cultivars favoured for this purpose include 'Tom Watson', 'Georgia Rattlesnake', and 'Black Diamond'. Variety improvement Charles Fredrick Andrus, a horticulturist at the USDA Vegetable Breeding Laboratory in Charleston, South Carolina, set out to produce a disease-resistant and wilt-resistant watermelon. The result, in 1954, was "that gray melon from Charleston". Its oblong shape and hard rind made it easy to stack and ship. Its adaptability meant it could be grown over a wide geographical area. It produced high yields and was resistant to the most serious watermelon diseases: anthracnose and fusarium wilt. Others were also working on disease-resistant cultivars; J. M. Crall at the University of Florida produced 'Jubilee' in 1963 and C. V. Hall of Kansas State University produced 'Crimson Sweet' the following year. These are no longer grown to any great extent, but their lineage has been further developed into hybrid varieties with higher yields, better flesh quality and attractive appearance. Another objective of plant breeders has been the elimination of the seeds which occur scattered throughout the flesh. This has been achieved through the use of triploid varieties, but these are sterile, and the cost of producing the seed by crossing a tetraploid parent with a normal diploid parent is high. 
As of 2017, farmers in approximately 44 states in the United States grew watermelon commercially, producing more than $500 million worth of the fruit annually. Georgia, Florida, Texas, California and Arizona are the United States' largest watermelon producers, with Florida producing more watermelon than any other state. This now-common fruit is large enough that grocery stores often sell half or quarter melons. Some smaller, spherical varieties of watermelon—both red- and yellow-fleshed—are sometimes called "icebox melons". The largest recorded fruit was grown in Tennessee in 2013 and weighed . Uses Culinary Watermelon is a sweet, commonly consumed fruit of summer, usually eaten as fresh slices, diced in mixed fruit salads, or as juice. Watermelon juice can be blended with other fruit juices or made into wine. The seeds have a nutty flavor and can be dried and roasted, or ground into flour. Watermelon rinds may be eaten, but their unappealing flavor may be overcome by pickling; they are sometimes eaten as a vegetable, stir-fried or stewed. Citrullus lanatus, variety caffer, grows wild in the Kalahari Desert, where it is known as tsamma. The fruits are used by the San people and wild animals for both water and nourishment, allowing survival on a diet of tsamma for six weeks. Symbolic The watermelon is used variously as a symbol of Palestinian resistance, of the Kherson region in Ukraine, and of eco-socialism, as in 'green on the outside, red on the inside'. Because it is mostly water, the watermelon has been used to symbolize abrosexuality, a "fluid" or changing sexual orientation. In the United States, the watermelon has also been used as a racist stereotype associated with African Americans. Cultivation Watermelons are plants grown from tropical to temperate climates, needing temperatures higher than about to thrive. On a garden scale, seeds are usually sown in pots under cover and transplanted into the ground. 
Ideal conditions are a well-drained sandy loam with a pH between 5.7 and 7.2. Major pests of the watermelon include aphids, fruit flies, and root-knot nematodes. In conditions of high humidity, the plants are prone to plant diseases such as powdery mildew and mosaic virus. Some varieties often grown in Japan and other parts of the Far East are susceptible to fusarium wilt. Grafting such varieties onto disease-resistant rootstocks offers protection. The US Department of Agriculture recommends using at least one beehive per acre ( per hive) for pollination of conventional, seeded varieties for commercial plantings. Seedless hybrids have sterile pollen. This requires planting pollinizer rows of varieties with viable pollen. Since the supply of viable pollen is reduced, and pollination is much more critical in producing the seedless variety, the recommended number of hives per acre increases to three hives per acre ( per hive). Watermelons have a longer growing period than other melons and can often take 85 days or more from the time of transplanting for the fruit to mature. Lack of pollen is thought to contribute to "hollow heart" which causes the flesh of the watermelon to develop a large hole, sometimes in an intricate, symmetric shape. Watermelons suffering from hollow heart are safe to consume. Farmers of the Zentsuji region of Japan found a way to grow cubic watermelons by growing the fruits in metal and glass boxes and making them assume the shape of the receptacle. The cubic shape was originally designed to make the melons easier to stack and store, but these "square watermelons" may be triple the price of normal ones, so appeal mainly to wealthy urban consumers. Pyramid-shaped watermelons have also been developed, and any polyhedral shape may potentially be used. Watermelons, which are called in Khoisan language and in Tswana language, are important water sources in South Africa, the Kalahari Desert, and East Africa for both humans and animals. 
Production In 2020, global production of watermelons was 101.6 million tonnes, with China (mainland) accounting for 60% of the total (60.1 million tonnes). Secondary producers included Turkey, India, Iran, Algeria and Brazil all having annual production of 2–3 million tonnes in 2020. Gallery
Biology and health sciences
Cucurbitales
20597793
https://en.wikipedia.org/wiki/Diplodocus
Diplodocus
Diplodocus (, , or ) is an extinct genus of diplodocid sauropod dinosaurs known from the Late Jurassic of North America. The first fossils of Diplodocus were discovered in 1877 by S. W. Williston. The generic name, coined by Othniel Charles Marsh in 1878, is a Neo-Latin term derived from Greek διπλός (diplos) "double" and δοκός (dokos) "beam", in reference to the double-beamed chevron bones located in the underside of the tail, which were then considered unique. The genus lived in what is now mid-western North America, at the end of the Jurassic period. It is one of the more common dinosaur fossils found in the middle to upper Morrison Formation, between about 154 and 152 million years ago, during the late Kimmeridgian Age, although it may have made it into the Tithonian. The Morrison Formation records an environment and time dominated by gigantic sauropod dinosaurs, such as Apatosaurus, Barosaurus, Brachiosaurus, Brontosaurus, and Camarasaurus. Its great size may have been a deterrent to the predators Allosaurus and Ceratosaurus: their remains have been found in the same strata, which suggests that they coexisted with Diplodocus. Diplodocus is among the most easily identifiable dinosaurs, with its typical sauropod shape, long neck and tail, and four sturdy legs. For many years, it was the longest dinosaur known. Description Among the best-known sauropods, Diplodocus were very large, long-necked, quadrupedal animals, with long, whip-like tails. Their forelimbs were slightly shorter than their hind limbs, resulting in a largely horizontal posture. The skeletal structure of these long-necked, long-tailed animals supported by four sturdy legs has been compared with cantilever bridges. In fact, D. carnegii is currently one of the longest dinosaurs known from a complete skeleton, with a total length of . Modern mass estimates for D. carnegii have tended to be in the range. 
No skull has ever been found that can be confidently said to belong to Diplodocus, though skulls of other diplodocids closely related to Diplodocus (such as Galeamopus) are well known. The skulls of diplodocids were very small compared with the size of these animals. Diplodocus had small, 'peg'-like teeth that pointed forward and were only present in the anterior sections of the jaws. Its braincase was small, and the neck was composed of at least 15 vertebrae. Postcranial skeleton D. hallorum, known from partial remains, was even larger, and is estimated to have been the size of four elephants. When first described in 1991, discoverer David Gillette calculated it to be 33 m (110 ft) long based on isometric scaling with D. carnegii. However, he later stated that this was unlikely and estimated it to be 39–45 meters (130–150 ft) long, suggesting that some individuals may have been up to 52 m (171 ft) long and weighed 80 to 100 metric tons, making it the longest known dinosaur (excluding those known from exceedingly poor remains, such as Amphicoelias or Maraapunisaurus). The estimated length was later revised downward to and later on to based on findings that show that Gillette had originally misplaced vertebrae 12–19 as vertebrae 20–27. Weight estimates based on the revised length are as high as although more recently, and according to Gregory S. Paul, a long D. hallorum was estimated to weigh in body mass. A study in 2024 later found the mass of a D. hallorum to be only , though the study suggested this only represents the average adult size and not the above average or maximum body size. The nearly complete D. carnegii skeleton at the Carnegie Museum of Natural History in Pittsburgh, Pennsylvania, on which size estimates of D. hallorum are mainly based, also was found to have had its 13th tail vertebra come from another dinosaur, throwing off size estimates for D. hallorum even further. 
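The isometric scaling behind such estimates can be sketched numerically. Under isometry (shape-preserving scaling), linear dimensions grow in proportion to the scale factor while mass, like volume, grows with its cube. The reference values below are hypothetical placeholders for illustration, not figures from the text:

```python
def isometric_estimate(ref_length_m: float, ref_mass_t: float, scale: float):
    """Scale a reference skeleton isometrically.

    Linear dimensions scale with `scale`; mass, like volume,
    scales with `scale` cubed.
    """
    return ref_length_m * scale, ref_mass_t * scale ** 3

# Hypothetical reference animal, 25 m long and 15 t, scaled up 20%:
length, mass = isometric_estimate(25.0, 15.0, 1.2)
print(f"{length:.1f} m, {mass:.2f} t")
```

This is why a modest revision in inferred vertebral proportions can shift mass estimates dramatically: a 20% change in linear scale implies roughly a 73% change in mass.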
While dinosaurs such as Supersaurus were probably longer, fossil remains of these animals are only fragmentary, and D. hallorum remains among the longest known dinosaurs. Diplodocus had an extremely long tail, composed of about 80 caudal vertebrae, almost double the number some earlier sauropods had in their tails (such as Shunosaurus with 43), and far more than contemporaneous macronarians had (such as Camarasaurus with 53). Some speculation exists as to whether the tail had a defensive function, a noisemaking function (cracking it like a coachwhip), or, as more recently suggested, a tactile function. The tail may have served as a counterbalance for the neck. The middle part of the tail had "double beams" (oddly shaped chevron bones on the underside, which gave Diplodocus its name). They may have provided support for the vertebrae, or perhaps prevented the blood vessels from being crushed if the animal's heavy tail pressed against the ground. These "double beams" are also seen in some related dinosaurs. Chevron bones of this particular form were initially believed to be unique to Diplodocus; since then, they have been discovered in other members of the diplodocid family as well as in non-diplodocid sauropods, such as Mamenchisaurus. As in other sauropods, the manus (front "feet") of Diplodocus were highly modified, with the finger and hand bones arranged into a vertical column, horseshoe-shaped in cross section. Diplodocus lacked claws on all but one digit of the front limb, and this claw was unusually large relative to other sauropods, flattened from side to side, and detached from the bones of the hand. The function of this unusually specialized claw is unknown. Skin impressions The discovery of partial diplodocid skin impressions in 1990 showed that some species had narrow, pointed, keratinous spines, much like those on an iguana. 
The spines could be up to long, on the "whiplash" portion of their tails, and possibly along the back and neck as well, similarly to hadrosaurids. The spines have been incorporated into many recent reconstructions of Diplodocus, notably Walking with Dinosaurs. The original description of the spines noted that the specimens in the Howe Quarry near Shell, Wyoming, were associated with skeletal remains of an undescribed diplodocid "resembling Diplodocus and Barosaurus." Specimens from this quarry have since been referred to Kaatedocus siberi and Barosaurus sp., rather than Diplodocus. Fossilized skin of Diplodocus sp., discovered at the Mother's Day Quarry, exhibits several different scale shapes, including rectangular, polygonal, pebble, ovoid, dome, and globular. These scales range in size and shape depending upon their location on the integument, the smallest reaching about 1 mm and the largest about 10 mm. Some of these scales show orientations that may indicate where they belonged on the body. For instance, the ovoid scales are closely clustered together and look similar to scales in modern reptiles that are located dorsally. Another orientation on the fossil consists of arching rows of square scales that interrupt nearby polygonal scale patterning. The arching scale rows look similar to the scale orientations seen around crocodilian limbs, suggesting that this area may also have come from around a limb of the Diplodocus. The skin fossil itself is small, less than 70 cm in length. Given the diversity of scales within such a small area, the scales being smaller than those of other diplodocid scale fossils, and the presence of small and potentially "juvenile" material at the Mother's Day Quarry, it is hypothesized that the skin originated from a small or even "juvenile" Diplodocus. 
Discovery and history Bone Wars and Diplodocus longus The first record of Diplodocus comes from Marshall P. Felch's quarry at Garden Park near Cañon City, Colorado, where several fossils were collected by Benjamin Mudge and Samuel Wendell Williston in 1877. The first specimen (YPM VP 1920) was very incomplete, consisting only of two complete caudal vertebrae, a chevron, and several other fragmentary caudal vertebrae. The specimen was sent to the Yale Peabody Museum and was named Diplodocus longus ('long double-beam') by paleontologist Othniel Charles Marsh in 1878. Marsh named Diplodocus during the Bone Wars, his competition with Philadelphian paleontologist Edward Drinker Cope to collect and describe as many fossil taxa as possible. Though several more complete specimens have been attributed to D. longus, detailed analysis has found this type specimen to be dubious, which is not an ideal situation for the type species of a well-known genus like Diplodocus. A petition to the International Commission on Zoological Nomenclature proposed making D. carnegii the new type species. This proposal was rejected by the ICZN, and D. longus has been maintained as the type species, because Hatcher did not demonstrate why the specimen he called Diplodocus carnegii was not simply a more complete specimen of Diplodocus longus. Although the type specimen was very fragmentary, several additional diplodocid fossils were collected at Felch's quarry from 1877 to 1884 and sent to Marsh, who then referred them to D. longus. One specimen (USNM V 2672), an articulated complete skull, mandibles, and partial atlas, was collected in 1883 and was the first complete diplodocid skull to be reported. Tschopp et al.'s analysis placed it as an indeterminate diplodocine in 2015 due to the lack of overlap with any diagnostic Diplodocus postcranial material, as was the case with all skulls assigned to Diplodocus. 
Second Dinosaur Rush and Diplodocus carnegii After the end of the Bone Wars, many major institutions in the eastern United States were inspired by the depictions and finds of Marsh and Cope to assemble their own dinosaur fossil collections. The competition to mount the first sauropod skeleton was especially intense, with the American Museum of Natural History, Carnegie Museum of Natural History, and Field Museum of Natural History all sending expeditions west to find the most complete sauropod specimen, bring it back to the home institution, and mount it in their fossil halls. The American Museum of Natural History was the first to launch an expedition, finding a semi-articulated partial postcranial skeleton of Diplodocus containing many vertebrae at Como Bluff in 1897. The skeleton (AMNH FR 223) was collected by Barnum Brown and Henry Osborn, who shipped the specimen to the AMNH; it was briefly described in 1899 by Osborn, who referred it to D. longus. It was later mounted—the first Diplodocus mount made—and was the first well preserved individual skeleton of Diplodocus discovered. In Emmanuel Tschopp et al.'s phylogenetic analysis of Diplodocidae, AMNH FR 223 was found to be not a skeleton of D. longus but of the later-named species D. hallorum. As seen in the supplementary work by Suzannah Maidment (2024), AMNH FR 223 also appears to be the geologically youngest specimen of D. hallorum, as the quarry it was found in lies within systems tract 6 (C6), which contains the youngest deposits in the Morrison Formation, whereas the other specimens of the taxon were found in the older systems tract 4 (B4). The most notable Diplodocus find also came in 1899, when crew members from the Carnegie Museum of Natural History, collecting fossils in the Morrison Formation of Sheep Creek, Wyoming, with funding from Scottish-American steel tycoon Andrew Carnegie, discovered a massive and well preserved skeleton of Diplodocus. 
The skeleton was collected that year by Jacob L. Wortman and several other crewmen under his direction, along with several specimens of Stegosaurus, Brontosaurus parvus, and Camarasaurus preserved alongside it. The skeleton (CM 84) was preserved in semi-articulation and was very complete, including 41 well preserved vertebrae from the mid caudals to the anterior cervicals, 18 ribs, 2 sternal ribs, a partial pelvis, the right scapulocoracoid, and the right femur. In 1900, Carnegie crews returned to Sheep Creek, this expedition led by John Bell Hatcher, William Jacob Holland, and Charles Gilmore, and discovered another well preserved skeleton of Diplodocus adjacent to the specimen collected in 1899. The second skeleton (CM 94) was from a smaller individual and preserved fewer vertebrae overall, but more caudal vertebrae and appendicular remains than CM 84. Both skeletons were named and described in great detail by John Bell Hatcher in 1901, with Hatcher making CM 84 the type specimen of a new species of Diplodocus, Diplodocus carnegii ("Andrew Carnegie's double beam"), and CM 94 the paratype. The reasons for naming the first dinosaur collected by the Carnegie Museum after their patron, Andrew Carnegie, were political rather than scientific. It was not until 1907 that the Carnegie Museum of Natural History created a composite mount of Diplodocus carnegii incorporating CM 84 and CM 94; several other specimens, and even other taxa, were used to complete the mount, including a skull molded on USNM 2673, a skull assigned to Galeamopus pabsti. The Carnegie Museum mount became very popular, being nicknamed "Dippy" by the populace, and was eventually cast and sent to museums in London, Berlin, Paris, Vienna, Bologna, St. Petersburg, Buenos Aires, Madrid, and Mexico City from 1905 to 1928. 
The London cast specifically became very popular; its casting was requested by King Edward VII, and it was the first sauropod mount put on display outside of the United States. Carnegie's goal in sending these casts overseas was apparently to build international unity and mutual interest around the discovery of the dinosaur. Dinosaur National Monument The Carnegie Museum of Natural History made another landmark discovery in 1909, when Earl Douglass unearthed several caudal vertebrae of Apatosaurus in what is now Dinosaur National Monument, in the border region between Colorado and Utah, with the sandstone dating to the Kimmeridgian of the Morrison Formation. From 1909 to 1922, the Carnegie Museum excavated the quarry, eventually unearthing over 120 dinosaur individuals and more than 1,600 bones; many of the associated skeletons are very complete and are on display in several American museums. In 1912, Douglass found a semi-articulated skull of a diplodocine with mandibles (CM 11161) in the Monument. Another skull (CM 3452) was found by Carnegie crews in 1915, bearing 6 articulated cervical vertebrae and mandibles, and another skull with mandibles (CM 1155) was found in 1923. All of the skulls found at Dinosaur National Monument were shipped back to Pittsburgh and described in detail by William Jacob Holland in 1924, who referred the specimens to D. longus. This assignment was also questioned by Tschopp, who stated that none of the aforementioned skulls could be referred to any specific diplodocine. Hundreds of assorted postcranial elements found in the Monument have been referred to Diplodocus, but few have been properly described. A nearly complete skull of a juvenile Diplodocus was collected by Douglass in 1921, the first juvenile skull known from the genus. Another Diplodocus skeleton was collected at the Carnegie Quarry in Dinosaur National Monument, Utah, by the National Museum of Natural History in 1923. 
The skeleton (USNM V 10865) is one of the most complete known from Diplodocus, consisting of a semi-articulated partial postcranial skeleton, including a well preserved dorsal column. The skeleton was briefly described by Charles Gilmore in 1932, who also referred it to D. longus, and it was mounted in the fossil hall at the National Museum of Natural History the same year. In Emmanuel Tschopp et al.'s phylogenetic analysis of Diplodocidae, USNM V 10865 was also found to be an individual of D. hallorum. The Denver Museum of Nature and Science obtained a Diplodocus specimen through exchange with the Carnegie Museum that had been collected at Dinosaur National Monument. The specimen (DMNH 1494) was nearly as complete as the Smithsonian specimen, consisting of the vertebral column complete from cervical 8 to caudal 20, the right scapulocoracoid, a complete pelvis, and both hind limbs without feet. It was mounted in the museum during the late 1930s and remounted in the early 1990s. Although not described in detail, Tschopp and colleagues determined that this skeleton also belonged to D. hallorum. Later discoveries and D. hallorum Few Diplodocus finds came for many years, until 1979, when three hikers came across several vertebrae stuck in elevated stone next to several petroglyphs in a canyon west of San Ysidro, New Mexico. The find was reported to the New Mexico Museum of Natural History, which dispatched an expedition led by David D. Gillette in 1985 that collected the specimen over several visits from 1985 to 1990. The specimen was preserved in semi-articulation, comprising several vertebrae, a partial pelvis, and the right femur, along with 230 gastroliths, and was prepared and deposited at the New Mexico Museum of Natural History under NMMNH P-3690. The specimen was not described until 1991, in the Journal of Paleontology, where Gillette named it Seismosaurus halli (Jim and Ruth Hall's seismic lizard), though in 1994, Gillette published an amendment changing the name to S. 
hallorum. In 2004, and in more detail in 2006, Seismosaurus was synonymized with Diplodocus and even suggested to be synonymous with the dubious D. longus; Tschopp et al.'s phylogenetic analysis in 2015 later supported the idea that many specimens referred to D. longus actually belonged to D. hallorum. In 1994, the Museum of the Rockies discovered a very productive fossil site, the Mother's Day Quarry in Carbon County, Montana, in the Salt Wash member of the Morrison Formation; it was later excavated by the Cincinnati Museum of Natural History and Science in 1996, and after that by the Bighorn Basin Paleontological Institute in 2017. The quarry yielded mostly isolated Diplodocus bones, from juveniles to adults, in pristine preservation. The quarry is notable for the great disparity between the numbers of juveniles and adults, as well as for the frequent preservation of skin impressions, pathologies, and some articulated Diplodocus specimens. One specimen, a nearly complete skull of a juvenile Diplodocus found at the quarry, is one of the few known and has highlighted ontogenetic dietary changes in the genus. Classification and species Phylogeny Diplodocus is both the type genus of, and gives its name to, the Diplodocidae, the family to which it belongs. Members of this family, while still massive, have a markedly more slender build than other sauropods, such as the titanosaurs and brachiosaurs. All are characterized by long necks and tails and a horizontal posture, with forelimbs shorter than hind limbs. Diplodocids flourished in the Late Jurassic of North America and possibly Africa. A subfamily, the Diplodocinae, was erected to include Diplodocus and its closest relatives, including Barosaurus. More distantly related is the contemporaneous Apatosaurus, which is still considered a diplodocid, although not a diplodocine, as it is a member of the sister subfamily Apatosaurinae. 
The Portuguese Dinheirosaurus and the African Tornieria have also been identified as close relatives of Diplodocus by some authors. Diplodocoidea comprises the diplodocids, as well as the dicraeosaurids, rebbachisaurids, Suuwassea, Amphicoelias, possibly Haplocanthosaurus, and/or the nemegtosaurids. The clade is the sister group to Macronaria (camarasaurids, brachiosaurids, and titanosaurians). A cladogram of the Diplodocidae was presented by Tschopp, Mateus, and Benson (2015). Valid species Diplodocus carnegii (also incorrectly spelled D. carnegiei), named after Andrew Carnegie, is the best known species, mainly due to a near-complete skeleton known as Dippy (specimen CM 84) collected by Jacob Wortman of the Carnegie Museum of Natural History in Pittsburgh, Pennsylvania, and described and named by John Bell Hatcher in 1901. Diplodocus hallorum was first described in 1991 by Gillette as Seismosaurus halli from a partial skeleton comprising vertebrae, pelvis, and ribs (specimen NMMNH P-3690). As the specific name honors two people, Jim and Ruth Hall (of Ghost Ranch), George Olshevsky later suggested emending the name to S. hallorum, using the mandatory genitive plural; Gillette then emended the name, a usage that has been followed by others, including Carpenter (2006). In 2004, a presentation at the annual conference of the Geological Society of America made a case for Seismosaurus being a junior synonym of Diplodocus. This was followed by a much more detailed publication in 2006, which not only renamed the species Diplodocus hallorum, but also speculated that it could prove to be the same as D. longus. The position that D. hallorum should be regarded as a specimen of D. longus was also taken by the authors of a redescription of Supersaurus, refuting a previous hypothesis that Seismosaurus and Supersaurus were the same. A 2015 analysis of diplodocid relationships noted that these opinions are based on the more complete referred specimens of Diplodocus longus. 
The authors of this analysis concluded that those specimens were indeed the same species as D. hallorum, and that D. longus itself was a nomen dubium, a position that was rejected by the International Commission on Zoological Nomenclature, as discussed above. Nomina dubia (doubtful species) Diplodocus longus, the type species, is known from two complete and several fragmentary caudal vertebrae from the Morrison Formation (Felch Quarry) of Colorado. Though several more complete specimens have been attributed to D. longus, detailed analysis has suggested that the original fossil lacks the features necessary for comparison with other specimens. For this reason, it has been considered a nomen dubium, which Tschopp et al. regarded as not an ideal situation for the type species of a well-known genus like Diplodocus. A petition to the International Commission on Zoological Nomenclature (ICZN) proposed making D. carnegii the new type species. The proposal was rejected by the ICZN, and D. longus has been maintained as the type species. However, in comments responding to the petition, some authors regarded D. longus as potentially valid after all. Diplodocus lacustris ("of the lake") is a nomen dubium named by Marsh in 1884 based on specimen YPM 1922, found by Arthur Lakes, consisting of the snout and upper jaw of a smaller animal from Morrison, Colorado. The remains are now believed to have been from an immature animal, rather than from a separate species. Mossbrucker et al. (2013) surmised that the dentary and teeth of Diplodocus lacustris were actually from Apatosaurus ajax. Later, in 2015, it was concluded that the snout of the specimen actually belonged to Camarasaurus. Formerly assigned species Diplodocus hayi was named by William Jacob Holland in 1924 based on a braincase and partial postcranial skeleton (HMNS 175), including a nearly complete vertebral column, found in Morrison Formation strata near Sheridan, Wyoming. D. 
hayi remained a species of Diplodocus until a reassessment by Emmanuel Tschopp and colleagues determined in 2015 that it was its own genus, Galeamopus. The reassessment also found that the skulls AMNH 969 and USNM 2673 did not belong to Diplodocus either, and were actually referable to Galeamopus. Paleobiology Due to a wealth of skeletal remains, Diplodocus is one of the best-studied dinosaurs. Many aspects of its lifestyle have been subjects of various theories over the years. Comparisons between the scleral rings of diplodocines and modern birds and reptiles suggest that they may have been cathemeral, active throughout the day at short intervals. Marsh and then Hatcher assumed that the animal was aquatic, because of the position of its nasal openings at the apex of the cranium. Similar aquatic behavior was commonly depicted for other large sauropods, such as Brachiosaurus and Apatosaurus. A 1951 study by Kenneth A. Kermack indicated that sauropods probably could not have breathed through their nostrils when the rest of the body was submerged, as the water pressure on the chest wall would be too great. Since the 1970s, the general consensus has been that sauropods were firmly terrestrial animals, browsing on trees, ferns, and bushes. Scientists have debated how sauropods were able to breathe with their large body sizes and long necks, which would have increased the amount of dead space. They likely had an avian-style respiratory system, which is more efficient than mammalian and reptilian systems. Reconstructions of the neck and thorax of Diplodocus show great pneumaticity, which could have played a role in respiration as it does in birds. Posture The depiction of Diplodocus posture has changed considerably over the years. For instance, a classic 1910 reconstruction by Oliver P. Hay depicts two Diplodocus with splayed, lizard-like limbs on the banks of a river. Hay argued that Diplodocus had a sprawling, lizard-like gait with widely splayed legs, a view supported by Gustav Tornier. 
This hypothesis was contested by William Jacob Holland, who demonstrated that a sprawling Diplodocus would have needed a trench through which to pull its belly. Finds of sauropod footprints in the 1930s eventually put Hay's theory to rest. Later, diplodocids were often portrayed with their necks held high up in the air, allowing them to graze from tall trees. Studies looking at the morphology of sauropod necks have concluded that the neutral posture of the Diplodocus neck was close to horizontal rather than vertical, and scientists such as Kent Stevens have used this to argue that sauropods, including Diplodocus, did not raise their heads much above shoulder level. A nuchal ligament may have held the neck in this position. One approach to understanding the possible ligament structure in ancient sauropods is to study the ligaments and their attachments to bones in extant animals, to see whether they resemble any bony structures in sauropods or other dinosaurs such as Parasaurolophus. If Diplodocus relied on a mammal-like nuchal ligament, it would have served to passively sustain the weight of its head and neck. This ligament is found in many hoofed mammals, such as bison and horses. In mammals, it typically consists of a funiculus cord that runs from the external occipital crest of the skull to elongate vertebral neural spines, or "withers", in the shoulder region, while sheet-like extensions called laminae run from the cord to the neural spines on some or all of the cervical vertebrae. However, most sauropods do not have withers in the shoulders, so if they possessed a similar ligament, it would differ substantially, perhaps anchoring in the hip region. Another hypothesized neck-supporting ligament is an avian-like elastic ligament, such as that seen in Struthio camelus. 
This ligament acts similarly to the mammal-like nuchal ligament, but comprises short segments of ligament that connect the bases of the neural spines, and therefore does not need a robust attachment zone like those seen in mammals. A 2009 study found that all tetrapods appear to hold the base of their necks at the maximum possible vertical extension when in a normal, alert posture, and argued that the same would hold true for sauropods barring any unknown, unique characteristics that set the soft tissue anatomy of their necks apart from that of other animals. The study found faults with Stevens' assumptions regarding the potential range of motion in sauropod necks, and, based on comparisons of skeletons with living animals, also argued that soft tissues could have increased flexibility more than the bones alone suggest. For these reasons, the authors argued that Diplodocus would have held its neck at a more elevated angle than previous studies had concluded. As with the related genus Barosaurus, the very long neck of Diplodocus is the source of much controversy among scientists. A 1992 Columbia University study of diplodocid neck structure indicated that the longest necks would have required a 1.6-ton heart – a tenth of the animal's body weight. The study proposed that animals like these would have had rudimentary auxiliary "hearts" in their necks, whose only purpose was to pump blood up to the next "heart". Some argue that the near-horizontal posture of the head and neck would have eliminated the problem of supplying blood to the brain, as it would not be elevated. Diet and feeding Diplodocines have highly unusual teeth compared to other sauropods. The crowns are long and slender, and elliptical in cross-section, while the apex forms a blunt, triangular point. The most prominent wear facet is on the apex, though unlike all other wear patterns observed within sauropods, diplodocine wear patterns are on the labial (cheek) side of both the upper and lower teeth. 
This implies that the feeding mechanism of Diplodocus and other diplodocids was radically different from that of other sauropods. Unilateral branch stripping is the most likely feeding behavior of Diplodocus, as it explains the unusual wear patterns of the teeth (coming from tooth–food contact). In unilateral branch stripping, one tooth row would have been used to strip foliage from the stem, while the other would act as a guide and stabilizer. With the elongated preorbital (in front of the eyes) region of the skull, longer portions of stems could be stripped in a single action. Also, the palinal (backwards) motion of the lower jaws could have contributed two significant roles to feeding behavior: (1) increasing the gape, and (2) allowing fine adjustments of the relative positions of the tooth rows, creating a smooth stripping action. Young et al. (2012) used biomechanical modeling to examine the performance of the diplodocine skull. They concluded that the proposal that its dentition was used for bark-stripping was not supported by the data, which showed that under that scenario, the skull and teeth would undergo extreme stresses. The hypotheses of branch-stripping and/or precision biting were both shown to be biomechanically plausible feeding behaviors. Diplodocine teeth were also continually replaced throughout the animals' lives, usually in less than 35 days, as was discovered by Michael D'Emic et al. Within each tooth socket, as many as five replacement teeth were developing to replace the next one. Studies of the teeth also reveal that Diplodocus preferred different vegetation from the other sauropods of the Morrison, such as Camarasaurus. This may have better allowed the various species of sauropods to coexist without competition. The flexibility of the Diplodocus neck is debated, but the animal should have been able to browse from low levels up to about 4 m (13 ft) when on all fours. 
However, studies have shown that the center of mass of Diplodocus was very close to the hip socket; this means that Diplodocus could rear up into a bipedal posture with relatively little effort. It also had the advantage of using its large tail as a 'prop', which would allow for a very stable tripodal posture. In a tripodal posture, Diplodocus could potentially increase its feeding height up to about . The neck's range of movement would also have allowed the head to graze below the level of the body, leading some scientists to speculate on whether Diplodocus grazed on submerged water plants from riverbanks. This concept of the feeding posture is supported by the relative lengths of its front and hind limbs. Furthermore, its peg-like teeth may have been used for eating soft water plants. Matthew Cobley et al. (2013) disputed this, finding that large muscles and cartilage would have limited neck movements. They stated that the feeding ranges for sauropods like Diplodocus were smaller than previously believed, and that the animals may have had to move their whole bodies around to better access areas where they could browse vegetation. As such, they might have spent more time foraging to meet their minimum energy needs. The conclusions of Cobley et al. were disputed in 2013 and 2014 by Mike Taylor, who analyzed the amount and positioning of intervertebral cartilage to determine the flexibility of the necks of Diplodocus and Apatosaurus. Taylor found that the neck of Diplodocus was very flexible, and that Cobley et al. were incorrect, in that the flexibility implied by the bones alone is less than that in life. In 2010, Whitlock et al. described a juvenile skull at the time referred to Diplodocus (CM 11255) that differed greatly from adult skulls of the same genus: its snout was not blunt, and the teeth were not confined to the front of the snout. These differences suggest that adults and juveniles were feeding differently. 
Such an ecological difference between adults and juveniles had not been previously observed in sauropodomorphs. Reproduction and growth While the long neck has traditionally been interpreted as a feeding adaptation, it has also been suggested that the oversized neck of Diplodocus and its relatives may have been primarily a sexual display, with any feeding benefits coming second. A 2011 study refuted this idea in detail. While no evidence indicates Diplodocus nesting habits, other sauropods, such as the titanosaurian Saltasaurus, have been associated with nesting sites. The titanosaurian nesting sites indicate that they may have laid their eggs communally over a large area in many shallow pits, each covered with vegetation. Diplodocus may have done the same. The documentary Walking with Dinosaurs portrayed a mother Diplodocus using an ovipositor to lay eggs, but this was pure speculation on the part of the documentary's author. For Diplodocus and other sauropods, the sizes of clutches and individual eggs were surprisingly small for such large animals. This appears to have been an adaptation to predation pressures, as large eggs would require greater incubation time and thus would be at greater risk. Based on bone histology studies in the early 2000s, it was suggested that Diplodocus and other sauropods grew at a very fast rate, reaching sexual maturity at just over a decade, and continuing to grow throughout their lives. However, a 2024 study estimated that the holotype of D. hallorum was around 60 years old at death, over 20 years older than the oldest previously known sauropod specimens, and that it "had 'recently' reached skeletal maturity before death". This would make it one of the oldest known dinosaur specimens. The study also suggested that D. hallorum may have had a relatively slower and more prolonged rate of growth than D. carnegii, as the latter reached maturity at just 24 to 34 years of age. 
Paleoenvironment The Morrison Formation is a sequence of shallow marine and alluvial sediments which, according to radiometric dating, ranges between 156.3 million years old (Ma) at its base and 146.8 million years old at the top, placing it in the late Oxfordian, Kimmeridgian, and early Tithonian ages of the Late Jurassic epoch. The formation is interpreted as recording a semi-arid environment with distinct wet and dry seasons. The Morrison Basin, where many dinosaurs lived, stretched from New Mexico to Alberta and Saskatchewan, and was formed when the precursors to the Front Range of the Rocky Mountains started pushing up to the west. The deposits from their east-facing drainage basins were carried by streams and rivers and deposited in swampy lowlands, lakes, river channels, and floodplains. The formation is similar in age to the Lourinhã Formation in Portugal and the Tendaguru Formation in Tanzania. The Morrison Formation records an environment and time dominated by gigantic sauropod dinosaurs. Dinosaurs known from the Morrison include the theropods Ceratosaurus, Koparion, Stokesosaurus, Ornitholestes, Allosaurus, and Torvosaurus; the sauropods Brontosaurus, Apatosaurus, Brachiosaurus, and Camarasaurus; and the ornithischians Camptosaurus, Dryosaurus, Othnielia, Gargoyleosaurus, and Stegosaurus. Diplodocus is commonly found at the same sites as Apatosaurus, Allosaurus, Camarasaurus, and Stegosaurus. Allosaurus accounted for 70 to 75% of theropod specimens and was at the top trophic level of the Morrison food web. Many of the dinosaurs of the Morrison Formation are the same genera as those seen in Portuguese rocks of the Lourinhã Formation (mainly Allosaurus, Ceratosaurus, Torvosaurus, and Stegosaurus), or have a close counterpart (Brachiosaurus and Lusotitan; Camptosaurus and Draconyx). 
Other vertebrates that shared the same paleoenvironment included ray-finned fishes, frogs, salamanders, turtles like Dorsetochelys, sphenodonts, lizards, terrestrial and aquatic crocodylomorphs such as Hoplosuchus, and several species of pterosaur like Harpactognathus and Mesadactylus. Shells of bivalves and aquatic snails are also common. The flora of the period included green algae, fungi, mosses, horsetails, cycads, ginkgoes, and several families of conifers. Vegetation varied from river-lining forests of tree ferns and ferns (gallery forests), to fern savannas with occasional trees such as the Araucaria-like conifer Brachyphyllum. Cultural significance Diplodocus has been a famous and much-depicted dinosaur as it has been on display in more places than any other sauropod dinosaur. Much of this has probably been due to its wealth of skeletal remains and former status as the longest dinosaur. The donation of many mounted skeletal casts of "Dippy" by industrialist Andrew Carnegie to potentates around the world at the beginning of the 20th century did much to familiarize it to people worldwide. Casts of Diplodocus skeletons are still displayed in many museums worldwide, including D. carnegii in a number of institutions. The project, along with its association with 'big science', philanthropy, and capitalism, drew much public attention in Europe. The German satirical weekly Kladderadatsch devoted a poem to the dinosaur. "Le diplodocus" became a generic term for sauropods in French, much as "brontosaur" is in English. D. longus is displayed at the Senckenberg Museum in Frankfurt, Germany (a skeleton made up of several specimens, donated in 1907 by the American Museum of Natural History). A mounted and more complete skeleton of D. longus is at the Smithsonian National Museum of Natural History in Washington, DC, while a mounted skeleton of D. hallorum (formerly Seismosaurus), which may be the same as D. 
longus, can be found at the New Mexico Museum of Natural History and Science. A World War I landship, the Boirault machine, was designed in 1915 but later deemed impractical and hence given the nickname "Diplodocus militaris".
African bush elephant
The African bush elephant (Loxodonta africana), also known as the African savanna elephant, is a species of elephant native to sub-Saharan Africa. It is one of three extant elephant species and, along with the African forest elephant, one of two extant species of African elephant. It is the largest living terrestrial animal, with fully grown bulls reaching an average shoulder height of and a body mass of ; the largest recorded specimen had a shoulder height of and an estimated body mass of . The African bush elephant is characterised by its long prehensile trunk with two finger-like processes; a convex back; large ears which help reduce body heat; and sturdy tusks that are noticeably curved. The skin is grey with scanty hairs, and bending cracks which support thermoregulation by retaining water. The African bush elephant inhabits a variety of habitats such as forests, grasslands, woodlands, wetlands and agricultural land. It is a mixed herbivore feeding mostly on grasses, creepers, herbs, leaves, and bark. The average adult consumes about of vegetation and of water each day. A social animal, the African bush elephant often travels in herds composed of cows and their offspring. Adult bulls usually live alone or in small bachelor groups. During the mating season, males go through a process called musth, a period of high testosterone levels and heightened aggression. For females, the oestrous cycle lasts three to four months, and gestation around 22 months, the longest of any mammal. Since 2021, the African bush elephant has been listed as Endangered on the IUCN Red List. It is threatened foremost by habitat destruction, and in parts of its range also by poaching for meat and ivory. Between 2003 and 2015, the illegal killing of 14,606 African bush elephants was reported by rangers across 29 range countries. Chad is a major transit country for smuggling of ivory in West Africa. This trend was curtailed by raising penalties for poaching and improving law enforcement. 
Poaching of the elephant dates back to the 1970s and 1980s, decades that saw the largest killings in history. In human culture, elephants have been extensively featured in literature, folklore and media, and are most valued for their large tusks in many places. Taxonomy and evolution In the 19th and 20th centuries, several zoological specimens were described by naturalists and curators of natural history museums from various parts of Africa, including: Elephas (Loxodonta) oxyotis and Elephas (Loxodonta) knochenhaueri by Paul Matschie in 1900. The first was a specimen from the upper Atbara River in northern Ethiopia, and the second a specimen from the Kilwa area in Tanzania. Elephas africanus toxotis, selousi, peeli, cavendishi, orleansi and rothschildi by Richard Lydekker in 1907, who assumed that ear size is a distinguishing character for a race. These specimens were shot in South Africa, Mashonaland in Zimbabwe, Aberdare Mountains and Lake Turkana area in Kenya, Somaliland, and western Sudan, respectively. North African elephant (L. a. pharaohensis) by Paulus Edward Pieris Deraniyagala in 1948 was a specimen from Fayum in Egypt. Today, these names are all considered synonyms. A genetic study based on mitogenomic analysis revealed that the African and Asian elephant genetically diverged about 7.6 million years ago. Phylogenetic analysis of nuclear DNA of African bush and forest elephants, Asian elephant, woolly mammoth, and American mastodon revealed that the African bush elephant and the African forest elephant form a sister group that genetically diverged at least 1.9 million years ago. They are therefore considered distinct species. Gene flow between the two species, however, might have occurred after the split. Some authors have suggested that L. africana evolved from Loxodonta atlantica. The fossil record for L. africana is sparse. 
The earliest possible records of the species are from the Shungura Formation around Omo in Ethiopia, which are dated to the Early Pleistocene, around 2.44-2.27 million years ago. Another possible early record is from the Kanjera site in Kenya, dating to the Middle Pleistocene, around 500,000 years ago. Genetic analysis suggests a major population expansion between 500,000 and 100,000 years ago. Records become more common during the Late Pleistocene, following the extinction of the last African Palaeoloxodon elephant species, Palaeoloxodon jolensis. Description The African bush elephant has grey skin with scanty hairs. Its large ears cover the whole shoulder, and can grow as large as . Its large ears help to reduce body heat; flapping them creates air currents and exposes large blood vessels on the inner sides to increase heat loss during hot weather. The African bush elephant's ears are pointed and triangular shaped. Its occipital plane slopes forward. Its back is shaped markedly concave. Its sturdy tusks are curved out and point forward. Its long trunk or proboscis ends with two finger-like tips. Size The African bush elephant is the largest and heaviest living land animal. Under optimal conditions where individuals are capable of reaching full growth potential, fully grown mature males are about tall at the shoulder and weigh on average (with 90% of fully grown males under optimal conditions being between and ). Mature fully grown females are smaller at about tall at the shoulder and in weight on average under optimal growth conditions (with 90% of fully grown females ranging between and in optimal conditions). The maximum recorded shoulder height of an adult bull is , with this individual having an estimated weight of . Elephants attain their maximum stature when they complete the fusion of long-bone epiphyses, occurring in males around the age of 40 and females around 25 years of age. Dentition The dental formula of the African bush elephant is . 
They develop six molars in each jaw quadrant that erupt at different ages and differ in size. The first molars grow to a size of wide by long, are worn by the age of one year and lost by the age of about 2.5 years. The second molars start protruding at the age of about six months, and grow to a size of wide by long and are lost by the age of 6–7 years. The third molars protrude at the age of about one year, grow to a size of wide by long, and are lost by the age of 8–10 years. The fourth molars show by the age of 6–7 years, grow to a size of wide by long and are lost by the age of 22–23 years. The dental alveoli of the fifth molars are visible by the age of 10–11 years. They grow to a size of wide by long and are worn by the age of 45–48 years. The dental alveoli of the last molars are visible by the age of 26–28 years. They grow to a size of wide by long and are well worn by the age of 65 years. Both sexes have large, curved, maxillary incisors known as tusks that continue growing throughout their lives. In the wild, a large percentage of elephants experience a tusk fracture, although this is more prevalent in captivity. A tusk fracture of any sort usually results in serious infections, as the pulp is exposed to the elements. The tusks erupt when they are 1–3 years old. Tusks grow from deciduous teeth known as tushes that develop in the upper jaw and consist of a crown, root and pulpal cavity, which are completely formed soon after birth. Tushes reach a length of . They are composed of dentin and coated with a thin layer of cementum. Their tips bear a conical layer of enamel that is usually worn off when the elephant is five years old. Tusks of bulls grow faster than tusks of cows. Mean weight of tusks at the age of 60 years is in bulls and in cows. The longest known tusk of an African bush elephant measured and weighed . 
Distribution and habitat The African bush elephant occurs in sub-Saharan Africa which includes Uganda, Kenya, Tanzania, Botswana, Zimbabwe, Namibia, Zambia, Angola, Malawi, Mali, Rwanda, Mozambique and South Africa. It moves between a variety of habitats, including subtropical and temperate forests, dry and seasonally flooded grasslands, woodlands, wetlands, and agricultural land from sea level to mountain slopes. In Mali and Namibia, it also inhabits desert and semi-desert areas. Populations of African bush elephants are increasing in some areas such as the Kruger National Park, where an annual growth of 4.2% was recorded between 2003 and 2015. There are estimated to be at least 17,000 elephants in the park's vicinity, as of 2015–the most of any area in South Africa. The increase in population occurred after the discontinuation of culling in the mid-1990s. This large elephant population is considered a problem to both the environment and its creatures. As such, with the use of natural processes, conservationists aim to control the ever-growing population. In other places in southern Africa, the elephant population continues to increase. Botswana in particular hosts more African bush elephants than any other country, at 130,000. In a 2019 study, populations were found to be steady, though the authors also noted an unusual increase in carcasses, possibly due to a new wave of poaching which was uncommon at the time. In East Africa there are roughly 137,000 elephants distributed across six countries in a wide array of habitats, such as grasslands and woodlands. They are most threatened by illegal hunting activities, such as poaching. In one instance, between 2006 and 2013, the population in East Africa fell by 62% due to high poaching pressures. Tanzania (where 80% of the East African population reside) lost the most elephants, while the resident population in Somalia went locally extinct. South Sudan, on the other hand, experienced an increase in elephants. 
Following successful conservation and governmental actions, Kenya also saw an increase in its elephant numbers. In Ethiopia, the African bush elephant has historically been recorded up to an elevation of . By the late 1970s, the population had declined to one herd in the Dawa River valley and one close to the Kenyan border. As of 2015, there are estimated to be 1,900–2,151 elephants in the country, a decrease from 6,000–10,000 in the 1970s. It is estimated that between the 1980s and 2010s, elephants in Ethiopia experienced a decline of around 90%, hence the endangered assessment. In West and Central Africa, the population of elephants is threatened, in large part due to habitat loss and fragmentation, and rapid growth in human populations. Elephants occur in isolated pockets throughout the region and are for the most part decreasing in number. Behavior and ecology Social behavior The core of elephant society is the family unit, which mostly comprises several adult cows, their daughters, and their prepubertal sons. Iain Douglas-Hamilton, who observed African bush elephants for 4.5 years in Lake Manyara National Park, coined the term 'kinship group' for two or more family units that have close ties. The family unit is led by a matriarch who at times also leads the kinship group. Groups cooperate in locating food and water, in self-defense, and in caring for offspring (termed allomothering). Group size varies seasonally and between locations. In Tsavo East and Tsavo West National Parks, groups are bigger in the rainy season and areas with open vegetation. Aerial surveys in the late 1960s to early 1970s revealed an average group size of 6.3 individuals in Uganda's Rwenzori National Park and 28.8 individuals in Chambura Game Reserve. In both sites, elephants aggregated during the wet season, whereas groups were smaller in the dry season. Young bulls gradually separate from the family unit when they are between 10 and 19 years old. 
They range alone for some time or form all-male groups. A 2020 study highlighted the importance of old bulls for the navigation and survival of herds and raised concerns over the removal of old bulls as "currently occur[ring] in both legal trophy hunting and illegal poaching". Temperature regulation The African bush elephant has curved skin with bending cracks, which support thermoregulation by retaining water. These bending cracks contribute to an evaporative cooling process which helps to maintain body temperature via homeothermy regardless of air temperature. Diet The African bush elephant is herbivorous. It is a mixed feeder, consuming both grasses and woody vegetation (browse), with the proportions varying widely depending on the habitat and time of year, ranging from almost exclusively grazing to near-total browsing. African bush elephants' consumption of woody plants, particularly their habit of uprooting trees, has the ability to alter the local environment, transforming woodlands into grasslands. African bush elephants also at times consume fruit and serve as seed dispersers. Adults can consume up to of food per day. To supplement their diet with minerals, they congregate at mineral-rich water holes, termite mounds, and mineral licks. Salt licks visited by elephants in the Kalahari contain high concentrations of water-soluble sodium. Elephants drink of water daily, and seem to prefer sites where water and soil contain sodium. In Kruger National Park and on the shore of Lake Kariba, elephants were observed to ingest wood ash, which also contains sodium. Communication African bush elephants use their trunks for tactile communication. When greeting, a lower ranking individual will insert the tip of its trunk into its superior's mouth. Elephants will also stretch out their trunk toward an approaching individual they intend to greet. Mother elephants reassure their young with touches, embraces, and rubbings with the foot, while slapping disciplines them. 
During courtship, a couple will caress and intertwine their trunks, while playing and fighting individuals wrestle with them. Elephant vocals are variations of rumbles, trumpets, squeals, and screams. Rumbles are mainly produced for long-distance communication and cover a broad range of frequencies which are mostly below what a human can hear. Infrasonic rumbles can travel vast distances and are important for attracting mates and scaring off rivals. Growls are audible rumbles and happen during greetings. When in pain or fear, an elephant makes an open-mouthed growl known as a bellow. A drawn-out growl is known as a moan. Growling can escalate into roaring when the elephant is issuing a threat. Trumpeting is made by blowing through the trunk and signals excitement, distress, or aggression. Juvenile elephants squeal in distress, while screaming is done by adults for intimidation. Musth Bulls in musth experience swelling of the temporal glands and secretion of fluid, the musth fluid, which flows down their cheeks. They begin to dribble urine, initially as discrete drops and later in a regular stream. These manifestations of musth last from a few days to months, depending on the age and condition of the bull. When a bull has been urinating for a long time, the proximal part of the penis and the distal end of the sheath show a greenish coloration, termed the 'green penis syndrome' by Joyce Poole and Cynthia Moss. Males in musth become more aggressive. They guard and mate with females in estrus, who stay closer to bulls in musth than to non-musth bulls. Urinary testosterone increases during musth. Bulls begin to experience musth by the age of 24 years. Periods of musth are short and sporadic in young bulls up to 35 years old, lasting a few days to weeks. Older bulls are in musth for 2–5 months every year. Musth occurs mainly during and following the rainy season when females are in estrus. 
Bulls in musth often chase each other and are aggressive towards other bulls in musth. When old and high-ranking bulls in musth threaten and chase young musth bulls, either the latter leave the group or their musth ceases. Young bulls in musth killed about 49 white rhinoceros in Pilanesberg National Park between 1992 and 1997. This unusual behavior was attributed to their young age and inadequate socialisation; they were 17–25-year-old orphans from culled families that grew up without the guidance of dominant bulls. When six adult bulls were introduced into the park, the young bulls did not attack rhinos anymore, indicating older bulls suppress the musth and aggressiveness of younger bulls. Similar incidents were recorded in Hluhluwe-Umfolozi Park, where young bulls killed five black and 58 white rhinoceros between 1991 and 2001. After the introduction of ten bulls, each up to 45 years old, the number of rhinos killed by elephants decreased considerably. Reproduction Spermatogenesis starts when bulls are about 15 years old. However, males do not begin sexual cycles, and do not experience their first musth period, until they are 25 or 30 years of age. Cows ovulate for the first time at the age of 11 years. They are in estrus for 2–6 days. In captivity, cows have an oestrous cycle lasting 14–15 weeks. Foetal gonads enlarge during the second half of pregnancy. African bush elephants mate during the rainy season. Bulls in musth cover long distances in search of cows and associate with large family units. They listen for the cows' loud, very low frequency calls and attract cows by calling and by leaving trails of strong-smelling urine. Cows search for bulls in musth, listen for their calls, and follow their urine trails. Bulls in musth are more successful at obtaining mating opportunities than those who are not. A cow may move away from bulls that attempt to test her estrous condition. If pursued by several bulls, she will run away. 
Once she chooses a mating partner, she will stay away from other bulls, which are threatened and chased away by the favoured bull. Competition between bulls sometimes overrides the cow's choice of mating partner. After the mating period, females will undergo a gestation of 22 months. The interval between births was estimated at 3.9 to 4.7 years in Hwange National Park. Where hunting pressure on adult elephants was high in the 1970s, cows gave birth once in 2.9 to 3.8 years. Cows in Amboseli National Park gave birth once in 5 years on average. The birth of a calf was observed in Tsavo East National Park in October 1990. A group of 80 elephants including eight bulls had gathered in the morning in a radius around the birth site. A small group of calves and cows stood near the pregnant cow, rumbling and flapping their ears. One cow seemed to assist her. While she was in labour, fluid streamed from her temporal and ear canals. She remained standing while giving birth. The newborn calf struggled to its feet within 30 minutes and walked 20 minutes later. The mother expelled the placenta about 100 minutes after birth and covered it with soil immediately. Captive-born calves weigh between at birth and gain about weight per day. Cows lactate for about 4.8 years. Calves exclusively suckle their mother's milk during the first three months. Thereafter, they start feeding independently and slowly increase the time spent feeding until they are two years old. During the first three years, male calves spend more time suckling and grow faster than female calves. After this period, cows reject male calves more frequently from nursing than female calves. The maximum lifespan of the African bush elephant is between 70 and 75 years. Its generation length is 25 years. Predators Adult elephants are considered invulnerable to predation. Calves, usually under two years, are sometimes preyed on by lions and spotted hyenas. 
Adult elephants often chase off predators, especially lions, by mobbing behavior. Juveniles are usually well defended by protective adults though serious drought makes them vulnerable to lion predation. In Botswana's Chobe National Park, lions attacked and fed on juvenile and subadult elephants during the drought when smaller prey species were scarce. Between 1993 and 1996, lions successfully attacked 74 elephants; 26 were older than nine, and one was a bull of over 15 years. Most were killed at night, and hunts occurred more often during waning moon nights than during bright moon nights. In the same park, lions killed eight elephants in October 2005 that were aged between 1 and 11 years, two of them older than 8 years. Successful hunts took place after dark when prides exceeded 27 lions and herds were smaller than 5 elephants. Pathogens Observations at Etosha National Park indicate that African bush elephants die due to anthrax foremost in November at the end of the dry season. Anthrax spores spread through the intestinal tracts of vultures, jackals and hyaenas that feed on the carcasses. Anthrax killed over 100 elephants in Botswana in 2019. It is thought that wild bush elephants can contract fatal tuberculosis from humans. Infection of the vital organs by Citrobacter freundii bacteria caused the death of an otherwise healthy bush elephant after capture and translocation. From April to June 2020, over 400 bush elephants died in Botswana's Okavango Delta region after drinking from desiccating waterholes that were infested with cyanobacteria. Neurotoxins produced by the cyanobacteria caused calves and adult elephants to wander around confused, emaciated and in distress. The elephants collapsed when the toxin impaired their motor functions and their legs became paralysed. Poaching, intentional poisoning, and anthrax were excluded as potential causes. 
Elephants may also be host to a variety of parasites and bacteria such as Pasteurella, Salmonella, Clostridium, coccidians, nematodes, and trematodes. The elephant endotheliotropic herpesvirus (EEHV) is a member of the Proboscivirus genus, a novel clade most closely related to the mammalian betaherpesviruses. In benign infections found in some wild and captive African elephants, these viruses can affect either the skin or the pulmonary system. Intelligence Both African and Asian elephants have a very large and highly complex neocortex, a trait also shared by humans, apes and certain dolphin species. Elephants manifest a wide variety of behaviors, including those associated with grief, learning, mimicry, playing, altruism, tool use, compassion, cooperation, self-awareness, memory, and communication. In a 2013 study, it was suggested that elephants may understand pointing, the ability to nonverbally communicate an object by extending a finger, or equivalent. The intelligence of elephants is described as being on a par with that of cetaceans and various primates. Threats The African bush elephant is threatened primarily by habitat loss and fragmentation following conversion of natural habitat for livestock farming, plantations of non-timber crops, and building of urban and industrial areas. As a result, human-elephant conflict has increased. Poaching Poachers target foremost elephant bulls for their tusks, which leads to a skewed sex ratio and affects the survival chances of a population. Access of poachers to unregulated black markets is facilitated by corruption and periods of civil war in some elephant range countries. During the 20th century, the African bush elephant population was decimated. Poaching of the elephant dates back to the 1970s and 1980s, decades that saw the largest killings in history. The species is placed in harm's way due to the limited conservation areas provided in Africa. 
In most cases, the killings of the African bush elephant have occurred near the outskirts of the protected areas. Between 2003 and 2015, the illegal killing of 14,606 African bush elephants was reported by rangers across 29 range countries. Chad is a major transit country for smuggling of ivory in West Africa. This trend was curtailed by raising penalties for poaching and improving law enforcement. Before this in June 2002, a container packed with more than ivory was confiscated in Singapore. It contained 42,120 hanko stamps and 532 tusks of African bush elephants that originated in Southern Africa, centered in Zambia and neighboring countries. Between 2005 and 2006, a total of ivory plus 91 unweighed tusks of African bush elephants were confiscated in 12 major consignments being shipped to Asia. When the international ivory trade reopened in 2006, the demand and price for ivory increased in Asia. The African bush elephant population in Chad's Zakouma National Park numbered 3,900 individuals in 2005. Within five years, more than 3,200 elephants were killed. The park did not have sufficient guards to combat poaching, and their weapons were outdated. Well-organized networks facilitated smuggling the ivory through Sudan. Poaching also increased in Kenya in those years. In Samburu National Reserve, 41 bulls were illegally killed between 2008 and 2012, equivalent to 31% of the reserve's elephant population. These killings were linked to confiscations of ivory and increased prices for ivory on the local black market. About 10,370 tusks were confiscated in Singapore, Hong Kong, Taiwan, Philippines, Thailand, Malaysia, Kenya and Uganda between 2007 and 2013. Genetic analysis of tusk samples showed that they originated from African bush elephants killed in Tanzania, Mozambique, Zambia, Kenya, and Uganda. Most of the ivory was smuggled through East African countries. 
In addition to elephants being poached, their carcasses may be poisoned by the poachers to avoid detection by vultures, which help rangers detect poaching activity by circling dead animals. This poses a threat to those vultures or birds that scavenge the carcasses. On 20 June 2019, the carcasses of two tawny eagles and 537 endangered Old World vultures including 468 white-backed vultures, 17 white-headed vultures, 28 hooded vultures, 14 lappet-faced vultures and 10 Cape vultures found dead in northern Botswana were suspected to have died after eating the poisoned carcasses of three elephants. Intensive poaching leads to strong selection on tusk attributes; African elephants in areas with heavy poaching often have smaller tusks and a higher frequency of congenitally tuskless females, whereas congenital tusklessness is rarely if ever observed in males. A study in Mozambique's Gorongosa National Park revealed that poaching during the Mozambican Civil War led to the increasing birth of tuskless females when the population recovered. Habitat changes Vast areas in Sub-Saharan Africa were transformed for agricultural use and the building of infrastructure. This disturbance leaves the elephants without a stable habitat and limits their ability to roam freely. Large corporations associated with commercial logging and mining have fragmented the land, giving poachers easy access to the African bush elephant. As human development grows, the human population faces the trouble of contact with the elephants more frequently, due to the species need for food and water. Farmers residing in nearby areas come into conflict with the African bush elephants rummaging through their crops. In many cases, they kill the elephants as soon as they disturb a village or forage upon its crops. Deaths caused by browsing on rubber vine, an invasive plant, have also been reported. 
Conservation Both African elephant species have been listed on Appendix I of the Convention on International Trade in Endangered Species of Wild Fauna and Flora since 1989. In 1997, populations of Botswana, Namibia, and Zimbabwe were placed on CITES Appendix II, as were populations of South Africa in 2000. Community-based conservation programmes have been initiated in several range countries, which contributed to reducing human-elephant conflict and increasing local people's tolerance towards elephants. Researchers discovered that playing back the recorded sounds of African bees is an effective method to drive elephants away from settlements. In 1986, the African Elephant Database was initiated to collate and update information on the distribution and status of elephant populations in Africa. The database includes results from aerial surveys, dung counts, interviews with local people, and data on poaching. Status In 2008, the IUCN Red List assessed the African elephant (then considered as a single species) as vulnerable. Since 2021, the African bush elephant has individually been assessed Endangered, after the global population was found to have decreased by more than 50% over 3 generations. More than 50% of its range is located outside protected areas. In 2016, the global population was estimated at 415,428 ± 20,111 individuals distributed in a total area of , of which 30% is protected. Approximately 42% of the total population lives in nine southern African countries comprising 293,447 ± 16,682 individuals; Africa's largest population lives in Botswana with 131,626 ± 12,508 individuals. In captivity The social behavior of elephants in captivity mimics that of those in the wild. Cows are kept with other cows, in groups, while bulls tend to be separated from their mothers at a young age and are kept apart. According to Schulte, in the 1990s, in North America, a few facilities allowed bull interaction. Elsewhere, bulls were only allowed to smell each other. 
Bulls and cows were allowed to interact for specific purposes such as breeding. In that event, cows were more often moved to the bull than the bull to the cow. Cows are more often kept in captivity because they are easier and less expensive to house. Cultural significance In Africa, elephants have found a prominent role in human culture since ancient times and were most prized for their ivory tusks, which were considered valuable commercial goods. In Kenya, the Maasai people have been known to use elephants for their tusks and often regard them as akin to humans. They feature extensively in Maasai culture, going by the local name of Arkanjowe (a being that is large and/or powerful). According to a Maasai legend, the elephant came to be when a woman, who was on her way to her partner's place for marriage, turned her back before reaching the destination. This event caused the woman to shape-shift into an elephant. Prehistoric North Africans depicted the elephant in Paleolithic age rock art. For example, the Libyan Tadrart Acacus, a UNESCO World Heritage Site, features a rock carving of an elephant from the last phase of the Pleistocene epoch (12,000–8000 BC) rendered with remarkable realism. There are many other prehistoric examples, including Neolithic rock art of south Oran (Algeria), and a white elephant rock painting in 'Phillip's Cave' by the San in the Erongo region of Namibia. From the Bovidian period (3550–3070 BCE), elephant images by the San bushmen in the South African Cederberg Wilderness Area suggest to researchers that they had "a symbolic association with elephants" and "had a deep understanding of the communication, behaviour and social structure of elephant family units" and "possibly developed a symbiotic relationship with elephants that goes back thousands of years."
Brachiosaurus
Brachiosaurus () is a genus of sauropod dinosaur that lived in North America during the Late Jurassic, about 154 to 150 million years ago. It was first described by American paleontologist Elmer S. Riggs in 1903 from fossils found in the Colorado River valley in western Colorado, United States. Riggs named the dinosaur Brachiosaurus altithorax; the generic name is Greek for "arm lizard", in reference to its proportionately long arms, and the specific name means "deep chest". Brachiosaurus is estimated to have been between long; body mass estimates of the subadult holotype specimen range from . It had a disproportionately long neck, small skull, and large overall size, all of which are typical for sauropods. Atypically, Brachiosaurus had longer forelimbs than hindlimbs, which resulted in a steeply inclined trunk, and a proportionally shorter tail. Brachiosaurus is the namesake genus of the family Brachiosauridae, which includes a handful of other similar sauropods. Most popular depictions of Brachiosaurus are in fact based on Giraffatitan, a genus of brachiosaurid dinosaur from the Tendaguru Formation of Tanzania. Giraffatitan was originally described by German paleontologist Werner Janensch in 1914 as a species of Brachiosaurus, B. brancai, but was moved to its own genus in 2009. Three other species of Brachiosaurus have been named based on fossils found in Africa and Europe; two are no longer considered valid, and a third has become a separate genus, Lusotitan. The type specimen of B. altithorax is still the most complete specimen, and only a few other specimens are thought to belong to the genus, making it one of the rarer sauropods of the Morrison Formation. It is regarded as a high browser, possibly cropping or nipping vegetation as high as off the ground. Unlike other sauropods, it was unsuited for rearing on its hindlimbs.
It has been used as an example of a dinosaur that was most likely ectothermic because of its large size and the corresponding need for sufficient forage, but more recent research suggests it was warm-blooded. One of the most iconic dinosaurs, and initially thought to be one of the largest, Brachiosaurus has appeared in popular culture, notably in the 1993 film Jurassic Park.

History of discovery

Holotype specimen
The genus Brachiosaurus is based on a partial postcranial skeleton discovered in 1900 in the valley of the Colorado River near Fruita, Colorado. This specimen, which was later declared the holotype, comes from rocks of the Brushy Basin Member of the Morrison Formation, and is therefore late Kimmeridgian in age, about 154 to 153 million years old. Discovered by American paleontologist Elmer S. Riggs and his crew from the Field Columbian Museum (now the Field Museum of Natural History) of Chicago, it is currently cataloged as FMNH P25107. Riggs and company were working in the area as a result of favorable correspondence between Riggs and Stanton Merill Bradbury, a dentist in nearby Grand Junction. In the spring of 1899 Riggs had sent letters to mayors in western Colorado, inquiring after possible trails leading from railway heads into northeastern Utah, where he hoped to find fossils of Eocene mammals. To his surprise, he was informed by Bradbury, an amateur collector himself and president of the Western Colorado Academy of Science, that dinosaur bones had been collected near Grand Junction since 1885. Riggs was skeptical of this claim, but his superior, curator of geology Oliver Cummings Farrington, was very eager to add a large sauropod skeleton to the collection to outdo other institutions, and convinced the museum management to invest five hundred dollars in an expedition. Arriving on June 20, 1900, they set camp at the abandoned Goat Ranch.
During a prospecting trip on horseback, Riggs's field assistant Harold William Menke found the humerus of FMNH P25107 on July 4, exclaiming it was "the biggest thing yet!". Riggs at first took the find for a badly preserved Brontosaurus specimen and gave priority to excavating Quarry 12, which held a more promising Morosaurus skeleton. Having secured that, on July 26 he returned to the humerus in Quarry 13, which soon proved to be of enormous size, convincing a puzzled Riggs that he had discovered the largest land animal ever. The site, Riggs Quarry 13, is located on a small hill later known as Riggs Hill; it is today marked by a plaque. More Brachiosaurus fossils are reported on Riggs Hill, but other fossil finds on the hill have been vandalized. During excavation of the specimen, Riggs misidentified the humerus as a deformed femur due to its great length, and this seemed to be confirmed when an equally sized, well-preserved real femur of the same skeleton was discovered. In 1904 Riggs noted: "Had it not been for the unusual size of the ribs found associated with it, the specimen would have been discarded as an Apatosaur, too poorly preserved to be of value." It was only after preparation of the fossil material in the laboratory that the bone was recognized as a humerus. The excavation attracted large numbers of visitors, delaying the work and forcing Menke to guard the site to prevent bones from being looted. On August 17, the last bone was jacketed in plaster. After a concluding ten-day prospecting trip, the expedition returned to Grand Junction and hired a team and wagon to transport all the fossils to the railway station over five days; another week was spent packing them into thirty-eight crates with a weight of . On September 10, Riggs left for Chicago by train, arriving on the 15th; the railroad companies let both passengers and cargo travel for free, as a public relations gesture.
The holotype skeleton consists of the right humerus (upper arm bone), the right femur (thigh bone), the right ilium (a hip bone), the right coracoid (a shoulder bone), the sacrum (fused vertebrae of the hip), the last seven thoracic (trunk) and two caudal (tail) vertebrae, and several ribs. Riggs described the coracoid as from the left side of the body, but restudy has shown it to be a right coracoid. At the time of discovery, the lower end of the humerus, the underside of the sacrum, the ilium and the preserved caudal vertebrae were exposed to the air and thus partly damaged by weathering. The vertebrae were only slightly shifted out of their original anatomical position; they were found with their top sides directed downward. The ribs, humerus, and coracoid, however, were displaced to the left side of the vertebral column, indicating transportation by a water current. This is further evidenced by an isolated ilium of Diplodocus that apparently had drifted against the vertebral column, as well as by a change in composition of the surrounding rocks. While the specimen itself was embedded in fine-grained clay, indicating low-energy conditions at the time of deposition, it was cut off at the seventh presacral vertebra by a thick layer of much coarser sediments consisting of pebbles at its base and sandstone further up, indicating deposition under stronger currents. Based on this evidence, Riggs in 1904 suggested that the missing front part of the skeleton was washed away by a water current, while the hind part was already covered by sediment and thus was preserved. Riggs published a short report of the new find in 1901, noting the unusual length of the humerus compared to the femur, the extreme overall size and resulting giraffe-like proportions, and the lesser development of the tail, but did not publish a name for the new dinosaur. In 1903, he named the type species Brachiosaurus altithorax.
Riggs derived the genus name from the Greek brachion/βραχίων meaning "arm" and sauros/σαῦρος meaning "lizard", because he realized that the length of the arms was unusual for a sauropod. The specific epithet was chosen because of the unusually deep and wide chest cavity, from Latin altus "deep" and Greek thorax/θώραξ, "breastplate, cuirass, corslet". Latin thorax was derived from the Greek and had become a usual scientific designation for the chest of the body. The titles of Riggs's 1901 and 1903 articles emphasized that the specimen was the "largest-known dinosaur". Riggs followed his 1903 publication with a more detailed description in a monograph in 1904. Preparation of the holotype began in the fall of 1900, shortly after it was collected by Riggs for the Field Museum. First the limb elements were processed. In the winter of 1904, the badly weathered vertebrae of the back and hip were prepared by James B. Abbott and C.T. Kline. As the preparation of each bone was finished, it was put on display in a glass case in Hall 35 of the Fine Arts Palace of the World's Columbian Exposition, the Field Museum's first location. All the bones were still on display individually in Hall 35 by 1908, when the Field Museum's newly mounted Apatosaurus was unveiled; this was the very specimen Riggs had found in Quarry 12, today catalogued as FMNH P25112 and identified as a specimen of Brontosaurus. No mount of Brachiosaurus was attempted because only twenty percent of the skeleton had been recovered. In 1993, the holotype bones were molded and cast, and the missing bones were sculpted based on material of the related Brachiosaurus brancai (now Giraffatitan) in the Museum für Naturkunde, Berlin. This plastic skeleton was mounted and, in 1994, put on display at the north end of Stanley Field Hall, the main exhibit hall of the Field Museum's current building. The real bones of the holotype were put on exhibit in two large glass cases at either end of the mounted cast.
The mount stood until 1999, when it was moved to the B Concourse of United Airlines' Terminal One in O'Hare International Airport to make room for the museum's newly acquired Tyrannosaurus skeleton, "Sue". At the same time, the Field Museum mounted a second plastic cast of the skeleton (designed for outside use), which is on display outside the museum on the northwest terrace. Another outdoor cast was sent to Disney's Animal Kingdom to serve as a gateway icon for the "DinoLand, U.S.A." area, known as the "Oldengate Bridge" that connects the two halves of the fossil quarry-themed Boneyard play area.

Assigned material
Further discoveries of Brachiosaurus material in North America have been uncommon and consist of a few bones. To date, material can be unambiguously ascribed to the genus only when it overlaps with the holotype material, and any referrals of elements from the skull, neck, anterior dorsal region, or distal limbs or feet remain tentative. Nevertheless, material has been described from Colorado, Oklahoma, Utah, and Wyoming, and undescribed material has been mentioned from several other sites. In 1883, farmer Marshall Parker Felch, a fossil collector for the American paleontologist Othniel Charles Marsh, reported the discovery of a sauropod skull in Felch Quarry 1, near Garden Park, Colorado. The skull was found in yellowish white sandstone, near a cervical vertebra, which was destroyed during an attempt to collect it. The skull was cataloged as YPM 1986 and sent to Marsh at the Peabody Museum of Natural History, who incorporated it into his 1891 skeletal restoration of Brontosaurus (perhaps because Felch had identified it as belonging to that dinosaur). The Felch Quarry skull consists of the cranium, the maxillae, the right postorbital, part of the left maxilla, the left squamosal, the dentaries, and a possible partial pterygoid. The bones were roughly prepared for Marsh, which led to some damage.
Felch also collected several postcranial fossils, including a partial cervical vertebra and partial forelimb. Most of the specimens collected by Felch were sent to the National Museum of Natural History in 1899 after Marsh's death, including the skull, which was then cataloged as USNM 5730. In 1975 the American paleontologists Jack McIntosh and David Berman investigated the historical issue of whether Marsh had assigned an incorrect skull to Brontosaurus (at the time thought to be a junior synonym of Apatosaurus), and found the Felch Quarry skull to be of "the general Camarasaurus type", while suggesting that the vertebra found near it belonged to Brachiosaurus. They concluded that if Marsh had not arbitrarily assigned the Felch quarry skull and another Camarasaurus-like skull to Brontosaurus, it would have been recognized earlier that the actual skull of Brontosaurus and Apatosaurus was more similar to that of Diplodocus. McIntosh later tentatively recognized the Felch Quarry skull as belonging to Brachiosaurus, and brought it to the attention of the American paleontologists Kenneth Carpenter and Virginia Tidwell, while urging them to describe it. They brought the skull to the Denver Museum of Natural History, where they further prepared it and made a reconstruction of it based on casts of the individual bones, with the skulls of Giraffatitan and Camarasaurus acting as templates for the missing bones. In 1998 Carpenter and Tidwell described the Felch Quarry skull, and formally assigned it to Brachiosaurus sp. (of uncertain species), since it is impossible to determine whether it belonged to the species B. altithorax itself (as there is no overlapping material between the two specimens). They based the skull's assignment to Brachiosaurus on its similarity to that of B. brancai, later known as Giraffatitan. In 2019, American paleontologists Michael D. D'Emic and Matthew T. 
Carrano re-examined the Felch Quarry skull after having it further prepared and CT-scanned (while consulting historical illustrations that showed earlier states of the bones), and concluded that a quadrate bone and dentary tooth considered part of the skull by Carpenter and Tidwell did not belong to it. The quadrate is too large to articulate with the squamosal, is preserved differently from the other bones, and was found several meters away. The tooth does not resemble those within the jaws (as revealed by CT data), is larger, and was therefore assigned to Camarasaurus sp. (other teeth assignable to that genus are known from the quarry). They also found it most parsimonious to assign the skull to B. altithorax itself rather than an unspecified species, as there is no evidence of other brachiosaurid taxa in the Morrison Formation (and adding this and other possible elements to a phylogenetic analysis did not change the position of B. altithorax). A shoulder blade with coracoid from Dry Mesa Quarry, Colorado, is one of the specimens at the center of the Supersaurus/Ultrasauros issue of the 1980s and 1990s. In 1985 James A. Jensen described disarticulated sauropod remains from the quarry as belonging to several exceptionally large taxa, including the new genera Supersaurus and Ultrasaurus, the latter renamed Ultrasauros shortly thereafter because another sauropod had already received the name. Later study showed that the "ultrasaur" material mostly belonged to Supersaurus, though the shoulder blade did not. Because the holotype of Ultrasauros, a dorsal vertebra, was one of the specimens that was actually from Supersaurus, the name Ultrasauros is a synonym of Supersaurus. The shoulder blade, specimen BYU 9462 (previously BYU 5001), was in 1996 assigned to a Brachiosaurus sp. (of uncertain species) by Brian Curtice and colleagues; in 2009 Michael P. Taylor concluded that it could not be referred to B. altithorax. 
The Dry Mesa "ultrasaur" was not as large as had been thought; the dimensions of the shoulder's coracoid bone indicate that the animal was smaller than Riggs's original specimen of Brachiosaurus. Several additional specimens were briefly described by Jensen in 1987. One of these finds, the humerus USNM 21903, was discovered around 1943 by uranium prospectors Vivian and Daniel Jones in the Potter Creek Quarry in western Colorado, and donated to the Smithsonian Institution. Originally, this humerus was part of a poorly preserved partial skeleton that was not collected. According to Taylor in 2009, it is not clearly referable to Brachiosaurus despite its large size of . Jensen himself worked at the Potter Creek site in 1971 and 1975, excavating the disarticulated specimen BYU 4744, which contains a mid-dorsal vertebra, an incomplete left ilium, a left radius and a right metacarpal. According to Taylor in 2009, this specimen can be confidently referred to B. altithorax, insofar as it overlaps with the type specimen. Jensen further mentioned a specimen discovered near Jensen, Utah, that includes a rib in length, an anterior cervical vertebra, part of a scapula, and a coracoid, although he did not provide a description. In 2001, Curtice and Stadtman ascribed two articulated dorsal vertebrae (specimen BYU 13023) from Dry Mesa Quarry to Brachiosaurus. Taylor, in 2009, noted that these vertebrae are markedly shorter than those of the B. altithorax holotype, though otherwise similar. In 2012, José Carballido and colleagues reported a nearly complete postcranial skeleton of a small juvenile approximately in length. This specimen, nicknamed "Toni" and cataloged as SMA 0009, stems from the Morrison Formation of the Bighorn Basin in north-central Wyoming. Although originally thought to belong to a diplodocid, it was later reinterpreted as a brachiosaurid, probably belonging to B. altithorax.
In 2018, the largest sauropod foot ever found was reported from the Black Hills of Weston County, Wyoming. The femur is not preserved, but comparisons suggest that it was about two percent longer than that of the B. altithorax holotype. Though possibly belonging to Brachiosaurus, the authors cautiously classified it as an indeterminate brachiosaurid. However, the assignment of these two specimens to their respective clades was later questioned by D'Emic and Carrano in 2019. They considered the referral of "Toni" to B. altithorax to be based on mistaken interpretations of the species' unique features or of the specimen itself, and deemed it worthy of further study. Analyzing photos of the large foot, D'Emic and Carrano noted that the only feature allowing referral to Brachiosauridae may have been influenced by damage to the bone on which it was found, but stated that "general similarities" with Sonorasaurus and Giraffatitan suggested brachiosaurid affinities, which would be confirmed only through further study.

Formerly assigned species

Brachiosaurus brancai and Brachiosaurus fraasi
Between 1909 and 1912, large-scale paleontological expeditions in German East Africa unearthed a considerable amount of brachiosaurid material from the Tendaguru Formation. In 1914, German paleontologist Werner Janensch listed differences and commonalities between these fossils and B. altithorax, concluding they could be referred to the genus Brachiosaurus. From this material Janensch named two species: Brachiosaurus brancai for the larger and more complete taxon, and Brachiosaurus fraasi for the smaller and more poorly known species. In three further publications, in 1929, 1950 and 1961, Janensch compared the species in more detail, listing thirteen shared characters between Brachiosaurus brancai (which he now considered to include B. fraasi) and B. altithorax.
Taylor, in 2009, considered only four of these characters as valid; six pertain to groups more inclusive than the Brachiosauridae, and the rest are either difficult to assess or refer to material that is not Brachiosaurus. There was ample material referred to B. brancai in the collections of the Museum für Naturkunde in Berlin, some of which was destroyed during World War II. Other material was transferred to other institutions throughout Germany, some of which was also destroyed. Additional material was collected by the British Museum of Natural History's Tendaguru expedition, including a nearly complete skeleton (BMNH R5937) collected by F.W.H. Migeod in 1930. This specimen is now believed to represent a new species, awaiting description. Janensch based his description of B. brancai on "Skelett S" (skeleton S) from Tendaguru, but later realized that it comprised two partial individuals: SI and SII. He at first did not designate them as a syntype series, but in 1935 made SI (presently MB.R.2180) the lectotype. Taylor in 2009, unaware of this action, proposed the larger and more complete SII (MB.R.2181) as the lectotype. It includes, among other bones, several dorsal vertebrae, the left scapula, both coracoids, the breastbones, both humeri, both ulnae and radii (lower arm bones), a right hand, a partial left hand, both hip bones and the right femur, tibia and fibula (shank bones). Later in 2011, Taylor realized that Janensch had designated the smaller skeleton SI as the lectotype in 1935. In 1988 Gregory S. Paul published a new reconstruction of the skeleton of B. brancai, highlighting differences in proportion between it and B. altithorax. Chief among them was a distinction in the way the trunk vertebrae vary: they are fairly uniform in length in the African material, but vary widely in B. altithorax. 
Paul believed that the limb and girdle elements of both species were very similar, and therefore suggested they be separated not at genus level but only at subgenus level, as Brachiosaurus (Brachiosaurus) altithorax and Brachiosaurus (Giraffatitan) brancai. Giraffatitan was raised to full genus level by George Olshevsky in 1991, with reference to the vertebral variation. Between 1991 and 2009, the name Giraffatitan was almost completely disregarded by other researchers. A detailed 2009 study by Taylor of all the material, including the limb and girdle bones, found that there are significant divergences between B. altithorax and the Tendaguru material in all elements known from both species. Taylor found twenty-six distinct osteological (bone-based) characters, a larger difference than that between Diplodocus and Barosaurus, and therefore argued that the African material should indeed be placed in its own genus, as Giraffatitan brancai. An important contrast between the two genera is their overall body shape, with Brachiosaurus having a 23 percent longer dorsal vertebral series and a 20 to 25 percent longer and also taller tail. The split was rejected by Daniel Chure in 2010, but from 2012 onward most studies recognized the name Giraffatitan.

Brachiosaurus atalaiensis
In 1947, at Atalaia in Portugal, brachiosaurid remains were found in layers dating from the Tithonian. Albert-Félix de Lapparent and Georges Zbyszewski named them as the species Brachiosaurus atalaiensis in 1957. Its referral to Brachiosaurus was doubted in the 2004 edition of The Dinosauria by Paul Upchurch, Paul Barrett, and Peter Dodson, who listed it as an as yet unnamed brachiosaurid genus. Shortly before the publication of the 2004 book, the species had been placed in its own genus, Lusotitan, by Miguel Telles Antunes and Octávio Mateus in 2003. De Lapparent and Zbyszewski had described a series of remains but did not designate a type specimen.
Antunes and Mateus selected a partial postcranial skeleton (MIGM4978, 4798, 4801–4810, 4938, 4944, 4950, 4952, 4958, 4964–4966, 4981–4982, 4985, 8807, 8793–87934) as the lectotype; this specimen includes twenty-eight vertebrae, chevrons, ribs, a possible shoulder blade, humeri, forearm bones, a partial left pelvis, lower leg bones, and part of the right ankle. The low neural spines, the prominent deltopectoral crest of the humerus (a muscle attachment site on the upper arm bone), the very long and slender humerus, and the long axis of the ilium tilting upward indicate that Lusotitan is a brachiosaurid, which was confirmed by some later studies, such as an analysis in 2013.

Brachiosaurus nougaredi
In 1958 French petroleum geologist F. Nougarède reported the discovery of fragmentary brachiosaurid remains in eastern Algeria, in the Sahara Desert. Based on these, Albert-Félix de Lapparent described and named the species Brachiosaurus nougaredi in 1960. He indicated the discovery locality as being in the Late Jurassic-age Taouratine Series, assigning the rocks to this age in part because of the presumed presence of Brachiosaurus. A more recent review placed it in the "Continental intercalaire", which is considered to belong to the Albian age of the late Early Cretaceous, significantly younger. The type material, moved to Paris, consisted of a sacrum, weathered out at the desert surface, and some of the left metacarpals and phalanges. Found at the discovery site but not collected were partial bones of the left forearm, wrist bones, a right shin bone, and fragments that may have come from metatarsals. B. nougaredi was in 2004 considered to represent a distinct, unnamed brachiosaurid genus, but a 2013 analysis by Philip D. Mannion and colleagues found that the remains possibly belong to more than one species, as they were collected far apart. The metacarpals were concluded to belong to some indeterminate titanosauriform. The sacrum was reported lost in 2013.
It was not analyzed and was provisionally considered to represent an indeterminate sauropod, until such time as it can be relocated in the collections of the Muséum national d'histoire naturelle. Only four of the five sacral vertebrae are preserved. The total original length was estimated in 1960 at , compared to for B. altithorax. This would make it larger than any other sauropod sacrum ever found, except those of Argentinosaurus and Apatosaurus.

Description

Size
Most estimates of the size of Brachiosaurus altithorax are based on the related brachiosaurid Giraffatitan (formerly known as B. brancai), which is known from much more complete material than Brachiosaurus. The two species are the largest brachiosaurids of which relatively extensive remains have been discovered. There is another element of uncertainty for the North American Brachiosaurus because the type (and most complete) specimen appears to represent a subadult, as indicated by the unfused suture between the coracoid, a bone of the shoulder girdle that forms part of the shoulder joint, and the scapula (shoulder blade). Over the years, the mass of the holotype specimen has been estimated within the range of . Benson et al. suggested a maximum body mass of , but these estimates were questioned due to a very large error range and lack of precision. The length of Brachiosaurus has been estimated at 20–22 meters (66–72 ft) and , and its height at and . While the limb bones of the most complete Giraffatitan skeleton (MB.R.2181) were very similar in size to those of the Brachiosaurus type specimen, the former was somewhat lighter than the Brachiosaurus specimen given its proportional differences. In studies including estimates for both genera, Giraffatitan was estimated at , , , , and . As with the main Brachiosaurus specimen, Giraffatitan specimen MB.R.2181 likely does not reflect the maximum size of the genus, as a fibula (specimen HMXV2) is thirteen percent longer than that of MB.R.2181.
General build
Like all sauropod dinosaurs, Brachiosaurus was a quadruped with a small skull, a long neck, a large trunk with a high-ellipsoid cross section, a long, muscular tail and slender, columnar limbs. Large air sacs connected to the lung system were present in the neck and trunk, invading the vertebrae and ribs by bone resorption, greatly reducing the overall density of the body. The neck is not preserved in the holotype specimen, but was very long even by sauropod standards in the closely related Giraffatitan, consisting of thirteen elongated cervical (neck) vertebrae. The neck was held in a slight S-curve, with the lower and upper sections bent and a straight middle section. Brachiosaurus likely shared with Giraffatitan the very elongated neck ribs, which ran down the underside of the neck, overlapping several preceding vertebrae. These bony rods were attached to neck muscles at their ends, allowing these muscles to operate distal portions of the neck while themselves being located closer to the trunk, lightening the distal neck portions. Brachiosaurus and Giraffatitan probably had a small shoulder hump between the third and fifth dorsal (back) vertebra, where the sideward- and upward-directed vertebral processes were longer, providing additional surface for neck muscle attachment. The ribcage was deep compared to other sauropods. Though the humerus (upper arm bone) and femur (thigh bone) were roughly equal in length, the entire forelimb would have been longer than the hindlimb, as can be inferred from the elongated forearm and metacarpus of other brachiosaurids. This resulted in an inclined trunk with the shoulder much higher than the hips, and the neck exiting the trunk at a steep angle. The overall build of Brachiosaurus resembles a giraffe more than any other living animal. In contrast, most other sauropods had a shorter forelimb than hindlimb; the forelimb is especially short in contemporaneous diplodocoids.
Brachiosaurus differed in its body proportions from the closely related Giraffatitan. The trunk was about 25 to 30 percent longer, resulting in a dorsal vertebral column longer than the humerus. Only a single complete caudal (tail) vertebra has been discovered, but its great height suggests that the tail was larger than in Giraffatitan. This vertebra had a much greater area for ligament attachment due to a broadened neural spine, indicating that the tail was also longer than in Giraffatitan, possibly by 20 to 25 percent. In 1988, paleontologist Gregory S. Paul suggested that the neck of Brachiosaurus was shorter than that of Giraffatitan, but in 2009, paleontologist Michael P. Taylor pointed out that two cervical vertebrae likely belonging to Brachiosaurus had identical proportions. Unlike Giraffatitan and other sauropods, which had vertically oriented forelimbs, the arms of Brachiosaurus appear to have been slightly sprawled at the shoulder joints, as indicated by the sideward orientation of the joint surfaces of the coracoids. The humerus was less slender than that of Giraffatitan, while the femur had similar proportions. This might indicate that the forelimbs of Brachiosaurus supported a greater fraction of the body weight than is the case for Giraffatitan.

Postcranial skeleton
Though the vertebral column of the trunk or torso is incompletely known, the back of Brachiosaurus most likely comprised twelve dorsal vertebrae; this can be inferred from the complete dorsal vertebral column preserved in an unnamed brachiosaurid specimen, BMNH R5937. Vertebrae of the front part of the dorsal column were slightly taller but much longer than those of the back part. This is in contrast to Giraffatitan, where the vertebrae at the front part were much taller but only slightly longer. The centra (vertebral bodies), the lower part of the vertebrae, were more elongated and roughly circular in cross section, while those of Giraffatitan were broader than tall.
The foramina (small openings) on the sides of the centra, which allowed for the intrusion of air sacs, were larger than in Giraffatitan. The diapophyses (large projections extending sideways from the neural arch of the vertebrae) were horizontal, while those of Giraffatitan were inclined upward. At their ends, these projections articulated with the ribs; the articular surface was not distinctly triangular as in Giraffatitan. In side view, the upward-projecting neural spines stood vertically and were twice as wide at the base as at the top; those of Giraffatitan tilted backward and did not broaden at their base. When seen in front or back view, the neural spines widened toward their tops. In Brachiosaurus, this widening occurred gradually, resulting in a paddle-like shape, while in Giraffatitan the widening occurred abruptly and only in the uppermost portion. At both their front and back sides, the neural spines featured large, triangular and rugose surfaces, which in Giraffatitan were semicircular and much smaller. The various vertebral processes were connected by thin sheets or ridges of bone, which are called laminae. Brachiosaurus lacked postspinal laminae, which were present in Giraffatitan, running down the back side of the neural spines. The spinodiapophyseal laminae, which stretched from the neural spines to the diapophyses, were conflated with the spinopostzygapophyseal laminae, which stretched between the neural spines and the articular processes at the back of the vertebrae, and therefore terminated at mid-height of the neural spines. In Giraffatitan, the two laminae were not conflated, and the spinodiapophyseal laminae reached up to the top of the neural spines. Brachiosaurus is further distinguished from Giraffatitan in lacking three details in the laminae of the dorsal vertebrae that are unique to the latter genus. Air sacs invaded not only the vertebrae but also the ribs.
In Brachiosaurus, the air sacs invaded through a small opening on the front side of the rib shafts, while in Giraffatitan openings were present on both the front and back sides of the tuberculum, a bony projection articulating with the diapophyses of the vertebrae. Paul, in 1988, stated that the ribs of Brachiosaurus were longer than in Giraffatitan, which was questioned by Taylor in 2009. Behind the dorsal vertebral column, the sacrum consisted of five co-ossified sacral vertebrae. As in Giraffatitan, the sacrum was proportionally broad and featured very short neural spines. Poor preservation of the sacral material in Giraffatitan precludes detailed comparisons between both genera. Of the tail, only the second caudal vertebra is well preserved. As in Giraffatitan, this vertebra was slightly amphicoelous (concave on both ends), lacked openings on the sides, and had a short neural spine that was rectangular and tilted backward. In contrast to the second caudal vertebra of Giraffatitan, that of Brachiosaurus had a proportionally taller neural arch, making the vertebra about thirty percent taller. The centrum lacked depressions on its sides, in contrast to Giraffatitan. In front or back view, the neural spine broadened toward its tip to approximately three times its minimum width, but no broadening is apparent in Giraffatitan. The neural spines were also inclined backward by about 30 degrees, more than in Giraffatitan (20 degrees). The caudal ribs projected laterally and were not tilted backward as in Giraffatitan. The articular facets of the articular processes at the back of the vertebra were directed downward, while those of Giraffatitan faced more toward the sides. Besides the articular processes, the hyposphene-hypantrum articulation formed an additional articulation between vertebrae, making the vertebral column more rigid; in Brachiosaurus, the hyposphene was much more pronounced than in Giraffatitan. The coracoid was semicircular and taller than broad. 
Differences from Giraffatitan relate to its shape in side view, including the straighter suture with the scapula. Moreover, the articular surface that forms part of the shoulder joint was thicker and directed more sideward than in Giraffatitan and other sauropods, possibly indicating a more sprawled forelimb. The humerus, as preserved, measures in length, though part of its lower end was lost to erosion; its original length is estimated at . This bone was more slender in Brachiosaurus than in most other sauropods, measuring only in width at its narrowest part. It was, however, more robust than that of Giraffatitan, being about ten percent broader at the upper and lower ends. At its upper end, it featured a low bulge visible in side view, which is absent in Giraffatitan. Distinguishing features can also be found in the ilium of the pelvis. In Brachiosaurus, the ischiadic peduncle, a downward projecting extension connecting to the ischium, reaches farther downward than in Giraffatitan. While the latter genus had a sharp notch between the ischiadic peduncle and the back portion of the ilium, this notch is more rounded in Brachiosaurus. On the upper surface of the hind part of the ilium, Brachiosaurus had a pronounced tubercle that is absent in other sauropods. Of the hindlimb, the femur was very similar to that of Giraffatitan although slightly more robust, and measured long. As in Giraffatitan, it was strongly elliptical in cross section, being more than twice as wide in front or back view as in side view. The fourth trochanter, a prominent bulge on the back side of the femoral shaft, was more prominent and located further downward. This bulge served as the anchor point for the most important locomotory muscle, the caudofemoralis, which was situated in the tail and pulled the upper thigh backward when contracted. 
At the lower end of the femur, the pair of condyles did not extend backward as strongly as in Giraffatitan; the two condyles were similar in width in Brachiosaurus but unequal in Giraffatitan. Skull As reconstructed by Carpenter and Tidwell, the assigned Felch Quarry skull was about long from the occipital condyle at the back of the skull to the front of the premaxillae (the front bones of the upper jaw), making it the largest sauropod skull from the Morrison Formation. D'Emic and Carrano instead estimated the skull to have been long, and if proportionally similar to that of Giraffatitan, about tall, and wide. Overall, the skull was tall as in Giraffatitan, with a snout that was long (about 36 percent of the skull length according to Carpenter and Tidwell) in front of the nasal bar between the nostrils, typical of brachiosaurids. The snout was somewhat blunt when seen from above (as in Giraffatitan), and since it was set at an angle relative to the rest of the skull, gave the impression of pointing downward. The dorsal and lateral temporal fenestrae (openings at the upper rear and sides of the skull) were large, perhaps due to the force imparted there by the massive jaw adductor musculature. The frontal bones on top of the skull were short and wide (similar to Giraffatitan), fused and connected by a suture to the parietal bones, which were also fused together. The surface of the parietals between the dorsal fenestrae was wider than that of Giraffatitan, but narrower than that of Camarasaurus. The skull differed from that of Giraffatitan in its U-shaped (instead of W-shaped) suture between frontal and nasal bones, a shape which appears more pronounced by the frontal bones extending forward over the orbits (eye sockets). Similar to Giraffatitan, the neck of the occipital condyle was very long. The premaxilla appears to have been longer than that of Camarasaurus, sloping more gradually toward the nasal bar, which created the very long snout. 
Brachiosaurus had a long and deep maxilla (the main bone of the upper jaw), which was thick along the margin where the alveoli (tooth sockets) were placed, thinning upward. The interdental plates of the maxilla were thin, fused, porous, and triangular. There were triangular nutrient foramina between the plates, each containing the tip of an erupting tooth. The narial fossa (depression) in front of the bony nostril was long, relatively shallow, and less developed than that of Giraffatitan. It contained a subnarial fenestra, which was much larger than those of Giraffatitan and Camarasaurus. The dentaries (the bones of the lower jaws that contained the teeth) were robust, though less than in Camarasaurus. The upper margin of the dentary was arched in profile, but not as much as in Camarasaurus. The interdental plates of the dentary were somewhat oval, with diamond shaped openings between them. The dentary had a Meckelian groove that was open until below the ninth alveolus, continuing thereafter as a shallow trough. Each maxilla had space for fourteen or fifteen teeth, whereas Giraffatitan had eleven and Camarasaurus eight to ten. The maxillae contained replacement teeth that had rugose enamel, similar to Camarasaurus, but lacked the small denticles (serrations) along the edges. Since the maxilla was wider than that of Camarasaurus, Brachiosaurus would have had larger teeth. The replacement teeth in the premaxilla had crinkled enamel, and the most complete of these teeth did not have denticles. It was somewhat spatulate (spoon-shaped), and had a longitudinal ridge. Each dentary had space for about fourteen teeth. The maxillary tooth rows of Brachiosaurus and Giraffatitan ended well in front of the antorbital fenestra (the opening in front of the orbit), whereas they ended just in front of and below the fenestra in Camarasaurus and Shunosaurus. 
Classification Riggs, in his preliminary 1903 description of the not yet fully prepared holotype specimen, considered Brachiosaurus to be an obvious member of the Sauropoda. To determine the validity of the genus, he compared it to the previously named genera Camarasaurus, Apatosaurus, Atlantosaurus, and Amphicoelias, whose validity he questioned given the lack of overlapping fossil material. Because of the uncertain relationships of these genera, little could be said about the relationships of Brachiosaurus itself. In 1904, Riggs described the holotype material of Brachiosaurus in more detail, especially the vertebrae. He admitted that he had originally assumed a close affinity with Camarasaurus, but now decided that Brachiosaurus was more closely related to Haplocanthosaurus. Both genera shared a single line of neural spines on the back and had wide hips. Riggs considered the differences from other taxa significant enough to name a separate family, Brachiosauridae, of which Brachiosaurus is the namesake genus. According to Riggs, Haplocanthosaurus was the more primitive genus of the family while Brachiosaurus was a specialized form. When describing Brachiosaurus brancai and B. fraasi in 1914, Janensch observed that the unique elongation of the humerus was shared by all three Brachiosaurus species as well as the British Pelorosaurus. He also noted this feature in Cetiosaurus, where it was not as strongly pronounced as in Brachiosaurus and Pelorosaurus. Janensch concluded that the four genera must have been closely related to each other, and in 1929 assigned them to a subfamily Brachiosaurinae within the family Bothrosauropodidae. During the twentieth century, several sauropods were assigned to Brachiosauridae, including Astrodon, Bothriospondylus, Pelorosaurus, Pleurocoelus, and Ultrasauros. These assignments were often based on broad similarities rather than unambiguous synapomorphies (shared derived traits), and most of these genera are currently regarded as dubious. 
In 1969, in a study by R.F. Kingham, B. altithorax, B. brancai and B. atalaiensis, along with many species now assigned to other genera, were placed in the genus Astrodon, creating an Astrodon altithorax. Kingham's views of brachiosaurid taxonomy have not been accepted by many other authors. Since the 1990s, computer-based cladistic analyses allow for postulating detailed hypotheses on the relationships between species, by calculating those trees that require the fewest evolutionary changes and thus are the most likely to be correct. Such cladistic analyses have cast doubt on the validity of the Brachiosauridae. In 1993, Leonardo Salgado suggested that they were an unnatural group into which all kinds of unrelated sauropods had been combined. In 1997, he published an analysis in which species traditionally considered brachiosaurids were subsequent offshoots of the stem of a larger grouping, the Titanosauriformes, and not a separate branch of their own. This study also pointed out that B. altithorax and B. brancai did not have any synapomorphies, so that there was no evidence to assume they were particularly closely related. Many cladistic analyses have since suggested that at least some genera can be assigned to the Brachiosauridae, and that this group is a basal branch within the Titanosauriformes. The exact status of each potential brachiosaurid varies from study to study. For example, a 2010 study by Chure and colleagues recognized Abydosaurus as a brachiosaurid together with Brachiosaurus, which in this study included B. brancai. In 2009, Taylor noted multiple anatomical differences between the two Brachiosaurus species, and consequently moved B. brancai into its own genus, Giraffatitan. In contrast to earlier studies, Taylor treated both genera as distinct units in a cladistic analysis, finding them to be sister groups. 
Another 2010 analysis focusing on possible Asian brachiosaurid material found a clade including Abydosaurus, Brachiosaurus, Cedarosaurus, Giraffatitan, and Paluxysaurus, but not Qiaowanlong, the putative Asian brachiosaurid. Several subsequent analyses have found Brachiosaurus and Giraffatitan not to be sister groups, but instead located at different positions on the evolutionary tree. A 2012 study by D'Emic placed Giraffatitan in a more basal position, in an earlier branch, than Brachiosaurus, while a 2013 study by Philip Mannion and colleagues had it the other way around. A cladogram illustrating one such arrangement was published by Michael D. D'Emic in 2012. Cladistic analyses also allow scientists to determine which new traits the members of a group have in common, their synapomorphies. According to the 2009 study by Taylor, B. altithorax shares with other brachiosaurids the classic trait of having an upper arm bone that is at least nearly as long as the femur (ratio of humerus length to femur length of at least 0.9). Another shared character is the very flattened femur shaft, its transverse width being at least 1.85 times the width measured from front to rear. Paleobiology Habits It was believed throughout the nineteenth and early twentieth centuries that sauropods like Brachiosaurus were too massive to support their own weight on dry land, and instead lived partly submerged in water. Riggs, affirming observations by John Bell Hatcher, was the first to defend at length the view that most sauropods were fully terrestrial animals, in his 1904 account of Brachiosaurus, pointing out that their hollow vertebrae have no analogue in living aquatic or semiaquatic animals, and that their long limbs and compact feet indicate specialization for terrestrial locomotion. Brachiosaurus would have been better adapted than other sauropods to a fully terrestrial lifestyle through its slender limbs, high chest, wide hips, high ilia and short tail. 
In its dorsal vertebrae the zygapophyses were very reduced while the hyposphene-hypantrum complex was extremely developed, resulting in a stiff torso incapable of bending sideways. The body was fit only for quadrupedal movement on land. Though Riggs's ideas were gradually forgotten during the first half of the twentieth century, the notion of sauropods as terrestrial animals has gained support since the 1950s, and is now universally accepted among paleontologists. In 1990 the paleontologist Stephen Czerkas stated that Brachiosaurus could have entered water occasionally to cool off (thermoregulate). Neck posture Ongoing debate revolves around the neck posture of brachiosaurids, with estimates ranging from near-vertical to horizontal orientations. The idea of near-vertical postures in sauropods in general was popular until 1999, when Stevens and Parrish argued that the sauropod neck was not flexible enough to be held in an upright, S-curved pose, and instead was held horizontally. Reflecting this research, various newspapers ran stories criticizing the Field Museum Brachiosaurus mount for having an upward curving neck. Museum paleontologists Olivier Rieppel and Christopher Brochu defended the posture in 1999, noting the long forelimbs and upward sloping backbone. They also stated that, because the neural spines most developed for muscle attachment were positioned in the region of the shoulder girdle, the neck could have been raised in a giraffe-like posture. Furthermore, such a pose would have required less energy than lowering the neck, and the intervertebral discs would not have been able to counter the pressure caused by a lowered head for extended periods of time (though lowering the neck to drink must have been possible). Some recent studies also advocated a more upward directed neck. 
Christian and Dzemski (2007) estimated that the middle part of the neck in Giraffatitan was inclined by sixty to seventy degrees; a horizontal posture could be maintained only for short periods of time. With their heads held high above the heart, brachiosaurids would have had stressed cardiovascular systems. It is estimated that the heart of Brachiosaurus would have had to pump blood at about double the pressure of a giraffe's for it to reach the brain, and possibly weighed . The distance between head and heart would have been reduced by the S-curvature of the neck by more than in comparison to a totally vertical posture. The neck may also have been lowered during locomotion by twenty degrees. In studying the inner ear of Giraffatitan, Gunga & Kirsch (2001) concluded that brachiosaurids would have moved their necks in lateral directions more often than in dorsal-ventral directions while feeding. Feeding and diet Brachiosaurus is thought to have been a high browser, feeding on foliage well above the ground. Even if it did not hold its neck near vertical, and instead had a less inclined neck, its head height may still have been over above the ground. It probably fed mostly on foliage above . This does not preclude the possibility that it also fed lower at times, between up. Its diet likely consisted of ginkgos, conifers, tree ferns, and large cycads, with intake estimated at of plant matter daily in a 2007 study. Brachiosaurid feeding involved simple up-and-down jaw motion. As in other sauropods, animals would have swallowed plant matter without further oral processing, and relied on hindgut fermentation for food processing. The teeth were somewhat spoon-shaped and chisel-like. Such teeth are optimized for non-selective nipping, and the relatively broad jaws could crop large amounts of plant material. Even though a Brachiosaurus of forty tonnes would have needed about half a tonne of fodder daily, its dietary needs could have been met by the normal cropping action of the head. 
If it fed sixteen hours per day, biting off between a tenth and two-thirds of a kilogram, taking between one and six bites per minute, its daily food intake would have equaled roughly 1.5 percent of its body mass, comparable to the requirement of a modern elephant. As Brachiosaurus shared its habitat, the Morrison, with many other sauropod species, its specialization for feeding at greater heights would have been part of a system of niche partitioning, the various taxa thus avoiding direct competition with each other. A typical food tree might have resembled Sequoiadendron. The fact that such tall conifers were relatively rare in the Morrison might explain why Brachiosaurus was much less common in its ecosystem than the related Giraffatitan, which seems to have been one of the most abundant sauropods in the Tendaguru. Brachiosaurus, with its shorter arms and lower shoulders, was not as well-adapted to high-browsing as Giraffatitan. It has been suggested that Brachiosaurus could rear on its hind legs to feed, using its tail for extra ground support. A detailed physical modelling-based analysis of sauropod rearing capabilities by Heinrich Mallison showed that while many sauropods could rear, the unusual body shape and limb length ratio of brachiosaurids made them exceptionally ill-suited for rearing. The forward position of its center of mass would have led to problems with stability, and required unreasonably large forces in the hips to obtain an upright posture. Brachiosaurus would also have gained only 33 percent more feeding height, compared to other sauropods, for which rearing may have tripled the feeding height. A bipedal stance might have been adopted by Brachiosaurus in exceptional situations, like male dominance fights. The downward mobility of the neck of Brachiosaurus would have allowed it to reach open water at the level of its feet, while standing upright. 
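The feeding-rate figures quoted above can be cross-checked with simple arithmetic. The following sketch is illustrative only, using the values given in this section (sixteen hours of feeding per day, one to six bites per minute, a tenth to two-thirds of a kilogram per bite, and a forty-tonne body mass); it is not taken from the cited studies.

```python
# Rough cross-check of the feeding-rate estimate quoted in the text.
# All input values come from the figures given in this section.
feeding_minutes = 16 * 60  # sixteen hours of feeding per day, in minutes

low_intake = feeding_minutes * 1 * 0.1    # kg/day at the slowest quoted rate
high_intake = feeding_minutes * 6 * 0.67  # kg/day at the fastest quoted rate

body_mass_kg = 40_000                # ~40 tonnes
daily_need = 0.015 * body_mass_kg    # 1.5 percent of body mass, i.e. 600 kg

print(f"possible intake: {low_intake:.0f}-{high_intake:.0f} kg/day")
print(f"estimated need:  {daily_need:.0f} kg/day")
```

The roughly half-tonne daily requirement falls well within the possible-intake range, which is consistent with the text's conclusion that a normal cropping action of the head would have sufficed.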
Modern giraffes spread their forelimbs to lower the mouth in a relatively horizontal position, to more easily gulp down the water. It is unlikely that Brachiosaurus could have attained a stable posture this way, forcing the animal to plunge the snout almost vertically into the surface of a lake or stream. This would have submerged its fleshy nostrils if they were located at the tip of the snout as Witmer hypothesized. Hallett and Wedel therefore rejected his interpretation in 2016 and suggested that they were in fact placed at the top of the head, above the bony nostrils, as traditionally thought. The nostrils might have evolved their retracted position to allow the animal to breathe while drinking. Nostril function The bony nasal openings of neosauropods like Brachiosaurus were large and placed on the top of their skulls. Traditionally, the fleshy nostrils of sauropods were thought to have been placed likewise on top of the head, roughly at the rear of the bony nostril opening, because these animals were erroneously thought to have been amphibious, using their large nasal openings as snorkels when submerged. The American paleontologist Lawrence M. Witmer rejected this reconstruction in 2001, pointing out that all living vertebrate land animals have their external fleshy nostrils placed at the front of the bony nostril. The fleshy nostrils of such sauropods would have been placed in an even more forward position, at the front of the narial fossa, the depression which extended far in front of the bony nostril toward the snout tip. Czerkas speculated on the function of the peculiar brachiosaurid nose, and pointed out that there was no conclusive way to determine where the nostrils were located, unless a head with skin impressions was found. He suggested that the expanded nasal opening would have made room for tissue related to the animal's sense of smell, which would have helped it find suitable vegetation. 
He also noted that in modern reptiles, the presence of bulbous, enlarged, and uplifted nasal bones can be correlated with fleshy horns and knobby protuberances, and that Brachiosaurus and other sauropods with large noses could have had ornamental nasal crests. It has been proposed that sauropods, including Brachiosaurus, may have had proboscises (trunks) based on the position of the bony narial orifice, to increase their upward reach. Fabien Knoll and colleagues disputed this for Diplodocus and Camarasaurus in 2006, finding that the opening for the facial nerve in the braincase was small. The facial nerve was thus not enlarged as in elephants, where it is involved in operating the sophisticated musculature of the proboscis. However, Knoll and colleagues also noted that the facial nerve of Giraffatitan was larger, and could therefore not rule out the possibility of a proboscis in this genus. Metabolism Like other sauropods, Brachiosaurus was probably homeothermic (maintaining a stable internal temperature) and endothermic (controlling body temperature through internal means) at least while growing, meaning that it could actively control its body temperature ("warm-blooded"), producing the necessary heat through a high basal metabolic rate of its cells. Russell (1989) used Brachiosaurus as an example of a dinosaur for which endothermy is unlikely, because of the combination of great size (leading to overheating) and the great caloric needs required to fuel endothermy. Sander (2010) found that these calculations were based on incorrect body mass estimates and faulty assumptions about the available cooling surfaces, as the presence of large air sacs was unknown at the time of the study. These inaccuracies resulted in the overestimation of heat production and the underestimation of heat loss. The large nasal arch has been postulated as an adaptation for cooling the brain, as a surface for evaporative cooling of the blood. 
Air sacs The respiration system of sauropods, like that of birds, made use of air sacs. There was not a bidirectional airflow as with mammals, in which the lungs function as bellows, first inhaling and then exhaling air. Instead the air was sucked from the trachea into an abdominal air sac in the belly which then pumped it forward through the parabronchi, air loops, of the stiff lung. Valves prevented the air from flowing backward when the abdominal air sac filled itself again; at the same time a cervical air sac at the neck base sucked out the spent air from the lung. Both air sacs contracted simultaneously to pump the used air out of the trachea. This procedure guaranteed a unidirectional airflow, the air always moving in a single forward direction in the lung itself. This significantly improved the oxygen intake and the release of carbon dioxide. Not only was dead air removed quickly but also the blood flow in the lung was counterdirectional in relation to the airflow, leading to a far more effective gas exchange. In sauropods, the air sacs did not simply function as an aid for respiration; by means of air channels they were connected to much of the skeleton. These branches, the diverticula, via pneumatic openings invaded many bones and strongly hollowed them out. It is not entirely clear what the evolutionary benefit of this phenomenon was but in any case it considerably lightened the skeleton. They might also have removed excess heat to aid thermoregulation. In 2016, Mark Hallett and Mathew Wedel for the first time reconstructed the entire air sac system of a sauropod, using B. altithorax as an example of how such a structure might have been formed. In their reconstruction a large abdominal air sac was located between the pelvis and the outer lung side. 
As with birds, three smaller sacs assisted the pumping process from the underside of the breast cavity: at the rear the posterior thoracic air sac, in the middle the anterior thoracic air sac, and in front the clavicular air sac, in that order gradually diminishing in size. The cervical air sac was positioned under the shoulder blade, on top of the front lung. The air sacs were connected to the vertebrae via tubes. Diverticula filled the various fossae and pleurocoels that formed depressions in the vertebral bone walls. These were in turn connected with inflexible air cells inside the bones. Growth The ontogeny of Brachiosaurus has been reconstructed by Carballido and colleagues in 2012 based on Toni (SMA 0009), a postcranial skeleton of a young juvenile with an estimated total body length of just . This skeleton shares some unique traits with the B. altithorax holotype, indicating it is referable to this species. These commonalities include an elevation on the rear blade of the ilium; the lack of a postspinal lamina; vertical neural spines on the back; an ilium with a subtle notch between the appendage for the ischium and the rear blade; and the lack of a side bulge on the upper thighbone. There are also differences; these might indicate that the juvenile is not a B. altithorax individual after all, but belongs to a new species. Alternatively, they might be explained as juvenile traits that would have changed when the animal matured. Such ontogenetic changes are especially to be expected in the proportions of an organism. The middle neck vertebrae of SMA 0009 are remarkably short for a sauropod, being just 1.8 times longer than high, compared with a ratio of 4.5 in Giraffatitan. This suggests that the necks of brachiosaurids became proportionally much longer while their backs, to the contrary, experienced relative negative growth. 
The humerus of SMA 0009 is relatively robust: it is more slender than that of most basal titanosauriforms but thicker than the upper arm bone of B. altithorax. This suggests that it was already lengthening in an early juvenile stage and became even more slender during growth. This is in contrast to diplodocoids and basal macronarians, whose slender humeri are not due to such allometric growth. Brachiosaurus also appears to have experienced an elongation of the metacarpals, which in juveniles were shorter compared to the length of the radius; SMA 0009 had a ratio of just 0.33, the lowest known in the entire Neosauropoda. Another plausible ontogenetic change is the increased pneumatization of the vertebrae. During growth, the diverticula of the air sacs invaded the bones and hollowed them out. SMA 0009 already has pleurocoels, pneumatic excavations, at the sides of its neck vertebrae. These are divided by a ridge but are otherwise still very simple in structure, compared with the extremely complex ridge systems typically shown by adult derived sauropods. Its dorsal vertebrae still completely lack these. Two traits are not so obviously linked to ontogeny. The neural spines of the rear dorsal vertebrae and the front sacral vertebrae are extremely compressed transversely, being eight times longer from front to rear than wide from side to side. The spinodiapophyseal lamina or "SPOL", the ridge normally running from each side of the neural spine toward each diapophysis, the transverse process bearing the contact facet for the upper rib head, is totally lacking. Both traits could be autapomorphies, unique derived characters proving that SMA 0009 represents a distinct species, but there are indications that these traits are growth-related as well. Of the basal sauropod Tazoudasaurus a young juvenile is known that also lacks the spinodiapophyseal lamina, whereas the adult form has an incipient ridge. 
Furthermore, a very young juvenile of Europasaurus had a weak SPOL but it is well developed in mature individuals. These two cases represent the only finds in which the condition can be checked; they suggest that the SPOL developed during growth. As this very ridge widens the neural spine, its transverse compression is not an independent trait, and the development of the SPOL plausibly precedes the thickening of the neural spine in more mature animals. Sauropods were likely able to reproduce sexually before they attained their maximum individual size. The maturation rate differed between species. The bone structure of Brachiosaurus indicates that it was able to reproduce when it reached forty percent of its maximal size. Paleoecology Brachiosaurus is known only from the Morrison Formation of western North America (following the reassignment of the African species). The Morrison Formation is interpreted as a semiarid environment with distinct wet and dry seasons, and flat floodplains. Several other sauropod genera were present in the Morrison Formation, with differing body proportions and feeding adaptations. Among these were Apatosaurus, Barosaurus, Camarasaurus, Diplodocus, Haplocanthosaurus, and Supersaurus. Brachiosaurus was one of the less abundant Morrison Formation sauropods. In a 2003 survey of more than two hundred fossil localities, John Foster reported 12 specimens of the genus, comparable to Barosaurus (13) and Haplocanthosaurus (12), but far fewer than Apatosaurus (112), Camarasaurus (179), and Diplodocus (98). Brachiosaurus fossils are found only in the lower-middle part of the expansive Morrison Formation (stratigraphic zones 2–4), dated to about 154 to 153 million years ago, unlike many other types of sauropod, which have been found throughout the formation. If the large foot reported from Wyoming (the northernmost occurrence of a brachiosaurid in North America) did belong to Brachiosaurus, the genus would have covered a wide range of latitudes. 
Brachiosaurids could process tough vegetation with their broad-crowned teeth, and might therefore have covered a wider range of vegetational zones than, for example, diplodocids. Camarasaurids, which were similar in tooth morphology to brachiosaurids, were also widespread and are known to have migrated seasonally, so this might have also been true for brachiosaurids. Other dinosaurs known from the Morrison Formation include the predatory theropods Koparion, Stokesosaurus, Ornitholestes, Ceratosaurus, Allosaurus, Torvosaurus and Saurophaganax, as well as the herbivorous ornithischians Camptosaurus, Dryosaurus, Othnielia, Gargoyleosaurus and Stegosaurus. Allosaurus accounted for 70 to 75 percent of theropod specimens and was at the top trophic level of the Morrison food web. Ceratosaurus might have specialized in attacking large sauropods, including smaller individuals of Brachiosaurus. Other vertebrates that shared this paleoenvironment included ray-finned fish, frogs, salamanders, turtles like Dorsetochelys, sphenodonts, lizards, terrestrial and aquatic crocodylomorphs such as Hoplosuchus, and several species of pterosaur like Harpactognathus and Mesadactylus. Shells of bivalves and aquatic snails are also common. The flora of the period has been revealed by fossils of green algae, fungi, mosses, horsetails, cycads, ginkgoes, and several families of conifers. Vegetation varied from river-lining forests in otherwise treeless settings (gallery forests) with tree ferns and ferns, to fern savannas with occasional trees such as the Araucaria-like conifer Brachyphyllum. Cultural significance Riggs at first tried to limit public awareness of the find. When giving a lecture, illustrated by lantern slides, to the inhabitants of Grand Junction on July 27, 1900, he explained the general evolution of dinosaurs and the exploration methods of museum field crews but did not mention that he had just found a spectacular specimen. 
He feared that teams of other institutions might soon learn of the discovery and take away the best of the remaining fossils. A week later, his host Bradbury published an article in the local Grand Junction News announcing the find of one of the largest dinosaurs ever. On August 14, The New York Times picked up the story. At the time, sauropod dinosaurs appealed to the public because of their great size, often exaggerated by sensationalist newspapers. In his publications, Riggs played into this by emphasizing the enormous magnitude of Brachiosaurus. Replica skeletons of Brachiosaurus can be seen in Chicago, Illinois, one outside the Field Museum and another inside O'Hare International Airport. Brachiosaurus has been called one of the most iconic dinosaurs, but most popular depictions are based on the African species B. brancai, which has since been moved to its own genus, Giraffatitan. A main belt asteroid, , was named 9954 Brachiosaurus in honor of the genus in 1991. Brachiosaurus was featured in the 1993 movie Jurassic Park, as the first computer-generated dinosaur shown. These effects were considered ground-breaking at the time, and the awe of the movie's characters upon seeing the dinosaur for the first time was mirrored by audiences. The movements of the movie's Brachiosaurus were based on the gait of a giraffe combined with the mass of an elephant. A scene later in the movie used an animatronic head and neck for when a Brachiosaurus interacts with human characters. The digital model of Brachiosaurus used in Jurassic Park later became the starting point for the ronto models in the 1997 special edition of the film Star Wars Episode IV: A New Hope.
https://en.wikipedia.org/wiki/Pando%20%28tree%29
Pando (tree)
Pando () is the world's largest tree, a quaking aspen (Populus tremuloides) located in Sevier County, Utah, United States, in the Fishlake National Forest. A male clonal organism, Pando has an estimated 47,000 stems (ramets) that appear to be individual trees but are not, because those stems are connected by a root system that spans . Pando is the largest tree by weight and landmass and the largest known aspen clone. Pando was identified as a single living organism because each of its stems possesses identical genetic markers. The massive interconnected root system coordinates energy production, defense and regeneration across the tree's landmass. Pando spans at its widest expanse along of the southwestern edge of the Fishlake Basin and lies to the west of Fish Lake, the largest natural mountain freshwater lake in Utah. Pando's landmass spreads from above sea level to approximately above sea level along the western side of a steep basin wall. Pando is estimated to weigh collectively , or 13.2 million pounds, making it the heaviest known organism. Pando's expanse also makes it the largest tree of any kind by landmass. Systems of classification used to define large trees vary considerably, leading to some confusion about Pando's status. Within the United States, the Official Register of Champion Trees defines the largest trees in a species-specific way; in this case, Pando is the largest aspen tree (Populus tremuloides). In forestry, the largest trees are measured by the greatest volume of a single stem, regardless of species. In that case, the General Sherman Tree is the largest unitary (single-stem) tree. While many emphasize that Pando is the largest clonal organism, other large trees, including redwoods, can also reproduce via cloning. Being the heaviest tree, the largest tree by landmass, and the largest aspen clone places Pando in a class of its own. 
Since the early 2000s, little information has been adequately corroborated about Pando's origins and how its genetic integrity has been sustained over a long period of time, conservatively between 9,000 and 16,000 years by the latest (2024) estimate. Researchers have argued that Pando's future is uncertain due to a combination of factors including drought, cattle grazing, and fire suppression. In terms of drought, Pando's long-lived nature suggests it has survived droughts that have driven out human societies for centuries at a time. In terms of grazing, a majority of Pando's land mass is fenced for permanent protection and management as a unique tree. Cattle grazing is permitted only on a volume basis for 10 days a year in October, weather permitting, in a small edge of Pando's expanse. Additionally, in 2023, local grazers group 7-Mile Grazers signed off on a plan that would bring remaining portions of Pando into protective care under the "Pando Protection Plan", covering nearly of the tree. In terms of fire suppression, research indicates Pando has survived fires that would have likely leveled the tree many times, after which Pando regenerated itself from the root system. The same research also indicates large-scale fire events are infrequent, which may be owed to the fact that aspen are water-heavy trees and thus naturally fire-resistant, earning them the name "asbestos forest" from Canadian forest ecologist Lori Daniels. There is broad consensus that wildlife controls and protection from deer and elk, which feed on new growth faster than it can reach maturity, are critical to Pando's future and care. Protection systems coupled with ongoing monitoring and restoration efforts have been shown to be the most effective, dating back to the first projects to protect and care for the tree in the late 1980s and early 1990s, with new projects under way. 
Friends of Pando and the Fishlake National Forest partner to study and protect the Pando Tree, working alongside the Utah Division of Wildlife Resources. Notable organizations that also study and advocate for Pando's care include the Western Aspen Alliance and the Grand Canyon Trust. Discovery, naming and verification The Pando tree was identified in 1976 by Jerry Kemperman and Burton V. Barnes. A posthumous biography by Barnes' colleague, Daniel Kashian, details Pando's discovery: Work by Fishlake National Forest to understand and protect the tree began in 1987, according to interviews and articles written by Fishlake Forest as well as accounts gathered by Friends of Pando. Based on Barnes and Kemperman's 1976 paper noting Pando's discovery, Michael Grant, Jeffrey Mitton, and Yan Linhart of the University of Colorado at Boulder re-examined the clone in 1992 and described Pando as a single male aspen clone based on its morphological characteristics such as pollen production, leaves, and root structure. Michael Grant named the tree "Pando", Latin for "I spread", in an editorial later published in Discover Magazine. A large-scale genetic sampling and analysis was published in 2008 by Jennifer DeWoody, Karen Mock, Valerie Hipkins and Carol Rowe. The research team's genetic study confirmed the morphological analyses by Barnes and Kemperman as well as Mitton, Grant and Linhart, thus verifying Pando's size and scale of operation. Research and protection In late 1987, Fishlake National Forest began work to remove diseased trees and promote new growth using coppicing (a form of mechanical stimulation), which works by simultaneously removing diseased stems while also stimulating the hormone response that spurs new growth. 
In 1993, Fishlake National Forest began work on the "Aspen Regeneration Project", installing fences to help control deer and elk that threatened to destroy the productive results of work to spur and protect new growth. Today, 53 acres of Pando are protected by 8-foot fences to control populations of mule deer (Odocoileus hemionus) and elk (Cervus canadensis), and to control human uses, such as permitted grazing by domestic cattle (Bos taurus). Additional fencing protections are to be added in 2025, bringing approximately 84 acres of Pando's landmass into protective care, or around 80% of the tree's landmass. Regeneration rates in portions of the "Aspen Regeneration Project", which started in the 1990s, showed promise based on photographic evidence and repeated survey plots by land managers, scientists and conservation groups between 1993 and today. Despite this, many have argued that more work needs to be done to control wildlife, as the Pando Tree is surrounded by 700 square miles of de facto wildlife preserve managed by people, groups and agencies who do not have Pando's sustainability as a central concern in their land management policies. Paul Rogers and Darren McAvoy of Utah State University completed an assessment of Pando's status in 2018 and stressed the importance of reducing herbivory by mule deer and elk as critical to conserving Pando. In 2019, Rogers and Jan Šebesta surveyed other vegetation within Pando besides aspen, finding additional support for the 2018 conclusions, and found that interactions between browsing and management strategy may have had adverse effects on Pando's long-term resilience to change. In 2023, a team of researchers, land managers, wildlife biologists and citizen scientist groups began long-term programs to monitor deer and elk using GPS collars and wildlife cameras to better understand wildlife movements, as well as deer and elk browsing on the tree. 
In 2022, Executive Order 14072 directed the US Forest Service to inventory old-growth and mature forest as part of a plan to protect mature and old-growth forest. Data submitted by Fishlake National Forest defined Pando's landmass as mature, meaning it could be eligible for special care and protections. Size and age Most agree, based on Barnes' work and later work, that Pando encompasses , weighs an estimated or 13.2 million pounds, and features an estimated 47,000 stems, which die individually and are replaced by genetically identical stems that are sent up from the tree's vast root system, a process known as "suckering". The root system is estimated to be several thousand years old, with habitat modeling suggesting a maximum age of 14,000 years, or 16,000 years by the latest (2024) estimate. Individual stems do not typically live more than 100–130 years. Mitton and Grant summarize the development of stems in aspen clones: Range of age estimates Due to the progressive replacement of stems and roots, the overall age of an aspen clone cannot be determined from tree rings. Speculations on Pando's age have ranged from 80,000 years to 1 million years. Many news sources list Pando's age as 80,000 years old, but this claim derives from a now-removed National Park Service web page, which redacted that claim in 2023 and was inconsistent with the Forest Service's post-ice-age estimate. Glaciers repeatedly formed on the Fish Lake Plateau over the past several hundred thousand years, and the mountains above Pando's landmass were crowned by glaciers as recently as the last glacial maximum. Ages greater than approximately 16,000 years therefore require Pando to have survived climate conditions during the Pinedale glaciation, something that appears unlikely under current estimates of Pando's age and modeling of variations in local climate. A 2017 paper by Chen Ding et al. 
seems to support US Forest Service claims that Pando could not be older than the last glacial maximum in the area, based on paleo-climate models and genetic traits of aspen sites throughout North America. A 2024 paper, based on the first somatic mutation model of the tree, indicates the age could be between 16,000 and 80,000 years old, but that research has not finished peer review and relies on older material and testing methods. Thus, charcoal studies published in 2022 provide the lower end of Pando's potential age range, around 9,000 years old, while the somatic mutation model's most conservative estimate of 16,000 years awaits replication using new material and methods, and will also require climate models to show that conditions were favorable for the Pando seed to germinate and establish itself during this period. Estimates of Pando's age have also been influenced by changes in the understanding of establishment of aspen clones in western North America. Earlier sources argued that germination and successful establishment of aspen on new sites were rare in the last 10,000 years and that therefore Pando's root system was likely over 10,000 years old. More recent observations, however, have shown seedling establishment of new aspen clones is a regular occurrence and can be abundant on sites exposed by wildfire. These findings are summarized in the U.S. Forest Service's Fire Effects Information System: In popular culture In 2006, the United States Postal Service published a stamp commemorating Pando, designed by artist Lonnie Busch, calling it one of the forty "Wonders of America". In 2013, Pando featured as the backdrop and subject of a music video for a successful campaign, led by 4th graders of nearby Monroe, Utah, to have Utah's state tree changed from the Colorado spruce to the quaking aspen. The song the children sang was written by Utah folk artist and songwriter Clive Romney. 
In 2018, the Pando Aspen Clone figured centrally in the life of the character Patricia Westerford in Richard Powers' novel The Overstory. In 2021, children's book author Kate Allen Fox published "Pando: A Living Wonder of Trees", earning her a School Library Journal Award for the work. In 2022, an episode of the NBC show The Blacklist entitled "The Trembling Giant" (a nickname for Pando) featured a scene in which central character Raymond 'Red' Reddington details the tree's operation. In 2022, Friends of Pando published audio works by sound conservationist and artist Jeff Rice documenting the tree's subterranean workings for the first time. In 2022, Pando was the subject of an issue of the webcomic xkcd published on December 23, which facetiously suggests adding to Pando's many world records that of world's largest Christmas tree by running a 9,300-foot-long string of Christmas lights through the branches along its perimeter. In 2024, the Hallmark Channel aired a movie titled Branching Out, set in Utah, in which a young girl searches for the paternal side of her family, as her mom had used a donor for an IVF conception. The girl's mom and the donor fall in love. At the film's conclusion, as the girl presents her "family tree" project to her class, she explains that her family tree is a Pando, as she and her newfound family are all interconnected, like the Pando tree. In 2024, Intel named a novel quantum computing control chip "Pando Tree". While many technology organizations have used "Pando" for its Latin meaning of "spread", the naming is the first to explicitly honor the Pando Tree itself.
https://en.wikipedia.org/wiki/Hilbert%20space
Hilbert space
In mathematics, Hilbert spaces (named after David Hilbert) allow the methods of linear algebra and calculus to be generalized from (finite-dimensional) Euclidean vector spaces to spaces that may be infinite-dimensional. Hilbert spaces arise naturally and frequently in mathematics and physics, typically as function spaces. Formally, a Hilbert space is a vector space equipped with an inner product that induces a distance function for which the space is a complete metric space. A Hilbert space is a special case of a Banach space. Hilbert spaces were studied beginning in the first decade of the 20th century by David Hilbert, Erhard Schmidt, and Frigyes Riesz. They are indispensable tools in the theories of partial differential equations, quantum mechanics, Fourier analysis (which includes applications to signal processing and heat transfer), and ergodic theory (which forms the mathematical underpinning of thermodynamics). John von Neumann coined the term Hilbert space for the abstract concept that underlies many of these diverse applications. The success of Hilbert space methods ushered in a very fruitful era for functional analysis. Apart from the classical Euclidean vector spaces, examples of Hilbert spaces include spaces of square-integrable functions, spaces of sequences, Sobolev spaces consisting of generalized functions, and Hardy spaces of holomorphic functions. Geometric intuition plays an important role in many aspects of Hilbert space theory. Exact analogs of the Pythagorean theorem and parallelogram law hold in a Hilbert space. At a deeper level, perpendicular projection onto a linear subspace plays a significant role in optimization problems and other aspects of the theory. An element of a Hilbert space can be uniquely specified by its coordinates with respect to an orthonormal basis, in analogy with Cartesian coordinates in classical geometry. 
When this basis is countably infinite, it allows identifying the Hilbert space with the space of the infinite sequences that are square-summable. In the older literature, the latter space is often referred to as the Hilbert space. Definition and illustration Motivating example: Euclidean vector space One of the most familiar examples of a Hilbert space is the Euclidean vector space consisting of three-dimensional vectors, denoted by , and equipped with the dot product. The dot product takes two vectors and , and produces a real number . If and are represented in Cartesian coordinates, then the dot product is defined by The dot product satisfies the properties It is symmetric in and : . It is linear in its first argument: for any scalars , , and vectors , , and . It is positive definite: for all vectors , , with equality if and only if . An operation on pairs of vectors that, like the dot product, satisfies these three properties is known as a (real) inner product. A vector space equipped with such an inner product is known as a (real) inner product space. Every finite-dimensional inner product space is also a Hilbert space. The basic feature of the dot product that connects it with Euclidean geometry is that it is related to both the length (or norm) of a vector, denoted , and to the angle between two vectors and by means of the formula Multivariable calculus in Euclidean space relies on the ability to compute limits, and to have useful criteria for concluding that limits exist. A mathematical series consisting of vectors in is absolutely convergent provided that the sum of the lengths converges as an ordinary series of real numbers: Just as with a series of scalars, a series of vectors that converges absolutely also converges to some limit vector in the Euclidean space, in the sense that This property expresses the completeness of Euclidean space: that a series that converges absolutely also converges in the ordinary sense. 
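In standard notation, the coordinate formula for the dot product, the length–angle relation, and the completeness property described above read (here the limit vector is written L for illustration):

```latex
\mathbf{x} \cdot \mathbf{y} = x_1 y_1 + x_2 y_2 + x_3 y_3 ,
\qquad
\mathbf{x} \cdot \mathbf{y} = \|\mathbf{x}\| \, \|\mathbf{y}\| \cos\theta ,
\qquad
\sum_{k=0}^{\infty} \|\mathbf{x}_k\| < \infty
\;\Longrightarrow\;
\Bigl\| \mathbf{L} - \sum_{k=0}^{N} \mathbf{x}_k \Bigr\| \to 0
\quad (N \to \infty).
```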
Hilbert spaces are often taken over the complex numbers. The complex plane denoted by is equipped with a notion of magnitude, the complex modulus , which is defined as the square root of the product of with its complex conjugate: If is a decomposition of into its real and imaginary parts, then the modulus is the usual Euclidean two-dimensional length: The inner product of a pair of complex numbers and is the product of with the complex conjugate of : This is complex-valued. The real part of gives the usual two-dimensional Euclidean dot product. A second example is the space whose elements are pairs of complex numbers . Then an inner product of with another such vector is given by The real part of is then the four-dimensional Euclidean dot product. This inner product is Hermitian symmetric, which means that the result of interchanging and is the complex conjugate: Definition A Hilbert space is a real or complex inner product space that is also a complete metric space with respect to the distance function induced by the inner product. To say that a complex vector space is a complex inner product space means that there is an inner product associating a complex number to each pair of elements of that satisfies the following properties: The inner product is conjugate symmetric; that is, the inner product of a pair of elements is equal to the complex conjugate of the inner product of the swapped elements: Importantly, this implies that is a real number. The inner product is linear in its first argument. For all complex numbers and The inner product of an element with itself is positive definite: It follows from properties 1 and 2 that a complex inner product is antilinear, also called conjugate linear, in its second argument, meaning that A real inner product space is defined in the same way, except that it is a real vector space and the inner product takes real values. Such an inner product will be a bilinear map and will form a dual system. 
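Written out for a complex number z = x + iy and a second complex number w, the modulus and the inner products described above take the form:

```latex
|z| = \sqrt{z \bar z} = \sqrt{x^2 + y^2},
\qquad
\langle z, w \rangle = z \, \overline{w},
\qquad
\langle z, w \rangle_{\mathbb{C}^2} = z_1 \overline{w_1} + z_2 \overline{w_2},
\qquad
\langle w, z \rangle = \overline{\langle z, w \rangle}.
```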
The norm is the real-valued function and the distance between two points in is defined in terms of the norm by That this function is a distance function means firstly that it is symmetric in and secondly that the distance between and itself is zero, and otherwise the distance between and must be positive, and lastly that the triangle inequality holds, meaning that the length of one leg of a triangle cannot exceed the sum of the lengths of the other two legs: This last property is ultimately a consequence of the more fundamental Cauchy–Schwarz inequality, which asserts with equality if and only if and are linearly dependent. With a distance function defined in this way, any inner product space is a metric space, and is sometimes known as a pre-Hilbert space. Any pre-Hilbert space that is additionally also a complete space is a Hilbert space. The completeness of the space is expressed using a form of the Cauchy criterion for sequences in it: a pre-Hilbert space is complete if every Cauchy sequence converges with respect to this norm to an element in the space. Completeness can be characterized by the following equivalent condition: if a series of vectors converges absolutely in the sense that then the series converges in , in the sense that the partial sums converge to an element of . As a complete normed space, Hilbert spaces are by definition also Banach spaces. As such they are topological vector spaces, in which topological notions like the openness and closedness of subsets are well defined. Of special importance is the notion of a closed linear subspace of a Hilbert space that, with the inner product induced by restriction, is also complete (being a closed set in a complete metric space) and therefore a Hilbert space in its own right. 
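The norm, the induced distance, and the two key inequalities mentioned above can be written as:

```latex
\|x\| = \sqrt{\langle x, x \rangle},
\qquad
d(x, y) = \|x - y\|,
\qquad
|\langle x, y \rangle| \le \|x\| \, \|y\|
\quad \text{(Cauchy–Schwarz)},
\qquad
d(x, z) \le d(x, y) + d(y, z)
\quad \text{(triangle inequality)}.
```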
Second example: sequence spaces The sequence space consists of all infinite sequences of complex numbers such that the following series converges: The inner product on is defined by: This second series converges as a consequence of the Cauchy–Schwarz inequality and the convergence of the previous series. Completeness of the space holds provided that whenever a series of elements from converges absolutely (in norm), then it converges to an element of . The proof is basic in mathematical analysis, and permits mathematical series of elements of the space to be manipulated with the same ease as series of complex numbers (or vectors in a finite-dimensional Euclidean space). History Prior to the development of Hilbert spaces, other generalizations of Euclidean spaces were known to mathematicians and physicists. In particular, the idea of an abstract linear space (vector space) had gained some traction towards the end of the 19th century: this is a space whose elements can be added together and multiplied by scalars (such as real or complex numbers) without necessarily identifying these elements with "geometric" vectors, such as position and momentum vectors in physical systems. Other objects studied by mathematicians at the turn of the 20th century, in particular spaces of sequences (including series) and spaces of functions, can naturally be thought of as linear spaces. Functions, for instance, can be added together or multiplied by constant scalars, and these operations obey the algebraic laws satisfied by addition and scalar multiplication of spatial vectors. In the first decade of the 20th century, parallel developments led to the introduction of Hilbert spaces. The first of these was the observation, which arose during David Hilbert and Erhard Schmidt's study of integral equations, that two square-integrable real-valued functions and on an interval have an inner product that has many of the familiar properties of the Euclidean dot product. 
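In symbols, the sequence space and its inner product are:

```latex
\ell^2 = \Bigl\{ (z_n)_{n \ge 1} \subset \mathbb{C} \;:\; \sum_{n=1}^{\infty} |z_n|^2 < \infty \Bigr\},
\qquad
\langle \mathbf{z}, \mathbf{w} \rangle = \sum_{n=1}^{\infty} z_n \overline{w_n}.
```

The second series converges absolutely by the Cauchy–Schwarz inequality applied to the square-summable sequences (|z_n|) and (|w_n|).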
In particular, the idea of an orthogonal family of functions has meaning. Schmidt exploited the similarity of this inner product with the usual dot product to prove an analog of the spectral decomposition for an operator of the form where is a continuous function symmetric in and . The resulting eigenfunction expansion expresses the function as a series of the form where the functions are orthogonal in the sense that for all . The individual terms in this series are sometimes referred to as elementary product solutions. However, there are eigenfunction expansions that fail to converge in a suitable sense to a square-integrable function: the missing ingredient, which ensures convergence, is completeness. The second development was the Lebesgue integral, an alternative to the Riemann integral introduced by Henri Lebesgue in 1904. The Lebesgue integral made it possible to integrate a much broader class of functions. In 1907, Frigyes Riesz and Ernst Sigismund Fischer independently proved that the space of square Lebesgue-integrable functions is a complete metric space. As a consequence of the interplay between geometry and completeness, the 19th century results of Joseph Fourier, Friedrich Bessel and Marc-Antoine Parseval on trigonometric series easily carried over to these more general spaces, resulting in a geometrical and analytical apparatus now usually known as the Riesz–Fischer theorem. Further basic results were proved in the early 20th century. For example, the Riesz representation theorem was independently established by Maurice Fréchet and Frigyes Riesz in 1907. John von Neumann coined the term abstract Hilbert space in his work on unbounded Hermitian operators. Although other mathematicians such as Hermann Weyl and Norbert Wiener had already studied particular Hilbert spaces in great detail, often from a physically motivated point of view, von Neumann gave the first complete and axiomatic treatment of them. 
Von Neumann later used them in his seminal work on the foundations of quantum mechanics, and in his continued work with Eugene Wigner. The name "Hilbert space" was soon adopted by others, for example by Hermann Weyl in his book on quantum mechanics and the theory of groups. The significance of the concept of a Hilbert space was underlined with the realization that it offers one of the best mathematical formulations of quantum mechanics. In short, the states of a quantum mechanical system are vectors in a certain Hilbert space, the observables are hermitian operators on that space, the symmetries of the system are unitary operators, and measurements are orthogonal projections. The relation between quantum mechanical symmetries and unitary operators provided an impetus for the development of the unitary representation theory of groups, initiated in the 1928 work of Hermann Weyl. On the other hand, in the early 1930s it became clear that classical mechanics can be described in terms of Hilbert space (Koopman–von Neumann classical mechanics) and that certain properties of classical dynamical systems can be analyzed using Hilbert space techniques in the framework of ergodic theory. The algebra of observables in quantum mechanics is naturally an algebra of operators defined on a Hilbert space, according to Werner Heisenberg's matrix mechanics formulation of quantum theory. Von Neumann began investigating operator algebras in the 1930s, as rings of operators on a Hilbert space. The kind of algebras studied by von Neumann and his contemporaries are now known as von Neumann algebras. In the 1940s, Israel Gelfand, Mark Naimark and Irving Segal gave a definition of a kind of operator algebras called C*-algebras that on the one hand made no reference to an underlying Hilbert space, and on the other extrapolated many of the useful features of the operator algebras that had previously been studied. 
The spectral theorem for self-adjoint operators in particular that underlies much of the existing Hilbert space theory was generalized to C*-algebras. These techniques are now basic in abstract harmonic analysis and representation theory. Examples Lebesgue spaces Lebesgue spaces are function spaces associated to measure spaces , where is a set, is a σ-algebra of subsets of , and is a countably additive measure on . Let be the space of those complex-valued measurable functions on for which the Lebesgue integral of the square of the absolute value of the function is finite, i.e., for a function in , and where functions are identified if and only if they differ only on a set of measure zero. The inner product of functions and in is then defined as or where the second form (conjugation of the first element) is commonly found in the theoretical physics literature. For and in , the integral exists because of the Cauchy–Schwarz inequality, and defines an inner product on the space. Equipped with this inner product, is in fact complete. The Lebesgue integral is essential to ensure completeness: on domains of real numbers, for instance, not enough functions are Riemann integrable. The Lebesgue spaces appear in many natural settings. The spaces and of square-integrable functions with respect to the Lebesgue measure on the real line and unit interval, respectively, are natural domains on which to define the Fourier transform and Fourier series. In other situations, the measure may be something other than the ordinary Lebesgue measure on the real line. For instance, if is any positive measurable function, the space of all measurable functions on the interval satisfying is called the weighted space , and is called the weight function. 
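The defining condition and inner product of the Lebesgue space, and the membership condition for the weighted space, can be written as follows (the interval endpoints a and b are generic, standing in for whichever interval the weight w is defined on):

```latex
\int_X |f|^2 \, \mathrm{d}\mu < \infty,
\qquad
\langle f, g \rangle = \int_X f \, \overline{g} \, \mathrm{d}\mu,
\qquad
\int_a^b |f(t)|^2 \, w(t) \, \mathrm{d}t < \infty
\quad \text{(weighted space)}.
```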
The inner product is defined by The weighted space is identical with the Hilbert space where the measure of a Lebesgue-measurable set is defined by Weighted spaces like this are frequently used to study orthogonal polynomials, because different families of orthogonal polynomials are orthogonal with respect to different weighting functions. Sobolev spaces Sobolev spaces, denoted by or , are Hilbert spaces. These are a special kind of function space in which differentiation may be performed, but that (unlike other Banach spaces such as the Hölder spaces) support the structure of an inner product. Because differentiation is permitted, Sobolev spaces are a convenient setting for the theory of partial differential equations. They also form the basis of the theory of direct methods in the calculus of variations. For a non-negative integer and , the Sobolev space contains functions whose weak derivatives of order up to are also . The inner product in is where the dot indicates the dot product in the Euclidean space of partial derivatives of each order. Sobolev spaces can also be defined when is not an integer. Sobolev spaces are also studied from the point of view of spectral theory, relying more specifically on the Hilbert space structure. If is a suitable domain, then one can define the Sobolev space as the space of Bessel potentials; roughly, Here is the Laplacian and is understood in terms of the spectral mapping theorem. Apart from providing a workable definition of Sobolev spaces for non-integer , this definition also has particularly desirable properties under the Fourier transform that make it ideal for the study of pseudodifferential operators. Using these methods on a compact Riemannian manifold, one can obtain for instance the Hodge decomposition, which is the basis of Hodge theory. 
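For a non-negative integer k, the Sobolev inner product described above, and the rough Bessel-potential description of the space for general s, can be sketched as (a reconstruction in standard notation; Ω denotes the underlying domain):

```latex
\langle f, g \rangle_{H^k(\Omega)}
= \sum_{|\alpha| \le k} \int_{\Omega} D^{\alpha} f \cdot \overline{D^{\alpha} g} \, \mathrm{d}x,
\qquad
H^s(\Omega) = \bigl\{ (1 - \Delta)^{-s/2} f \;:\; f \in L^2(\Omega) \bigr\}.
```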
Spaces of holomorphic functions Hardy spaces The Hardy spaces are function spaces, arising in complex analysis and harmonic analysis, whose elements are certain holomorphic functions in a complex domain. Let denote the unit disc in the complex plane. Then the Hardy space is defined as the space of holomorphic functions on such that the means remain bounded for . The norm on this Hardy space is defined by Hardy spaces in the disc are related to Fourier series. A function is in if and only if where Thus consists of those functions that are L2 on the circle, and whose negative frequency Fourier coefficients vanish. Bergman spaces The Bergman spaces are another family of Hilbert spaces of holomorphic functions. Let be a bounded open set in the complex plane (or a higher-dimensional complex space) and let be the space of holomorphic functions in that are also in in the sense that where the integral is taken with respect to the Lebesgue measure in . Clearly is a subspace of ; in fact, it is a closed subspace, and so a Hilbert space in its own right. This is a consequence of the estimate, valid on compact subsets of , that which in turn follows from Cauchy's integral formula. Thus convergence of a sequence of holomorphic functions in implies also compact convergence, and so the limit function is also holomorphic. Another consequence of this inequality is that the linear functional that evaluates a function at a point of is actually continuous on . The Riesz representation theorem implies that the evaluation functional can be represented as an element of . Thus, for every , there is a function such that for all . The integrand is known as the Bergman kernel of . This integral kernel satisfies a reproducing property A Bergman space is an example of a reproducing kernel Hilbert space, which is a Hilbert space of functions along with a kernel that verifies a reproducing property analogous to this one. 
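The defining bound for the Hardy space on the unit disc, and the Bergman reproducing property, take the standard forms (K denotes the Bergman kernel):

```latex
\sup_{0 < r < 1} \frac{1}{2\pi} \int_0^{2\pi} \bigl| f(r e^{i\theta}) \bigr|^2 \, \mathrm{d}\theta < \infty
\quad (f \in H^2),
\qquad
f(z) = \int_D K(z, \zeta) \, f(\zeta) \, \mathrm{d}\mu(\zeta).
```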
The Hardy space also admits a reproducing kernel, known as the Szegő kernel. Reproducing kernels are common in other areas of mathematics as well. For instance, in harmonic analysis the Poisson kernel is a reproducing kernel for the Hilbert space of square-integrable harmonic functions in the unit ball. That the latter is a Hilbert space at all is a consequence of the mean value theorem for harmonic functions. Applications Many of the applications of Hilbert spaces exploit the fact that Hilbert spaces support generalizations of simple geometric concepts like projection and change of basis from their usual finite dimensional setting. In particular, the spectral theory of continuous self-adjoint linear operators on a Hilbert space generalizes the usual spectral decomposition of a matrix, and this often plays a major role in applications of the theory to other areas of mathematics and physics. Sturm–Liouville theory In the theory of ordinary differential equations, spectral methods on a suitable Hilbert space are used to study the behavior of eigenvalues and eigenfunctions of differential equations. For example, the Sturm–Liouville problem arises in the study of the harmonics of waves in a violin string or a drum, and is a central problem in ordinary differential equations. The problem is a differential equation of the form for an unknown function on an interval , satisfying general homogeneous Robin boundary conditions The functions , , and are given in advance, and the problem is to find the function and constants for which the equation has a solution. The problem only has solutions for certain values of , called eigenvalues of the system, and this is a consequence of the spectral theorem for compact operators applied to the integral operator defined by the Green's function for the system. Furthermore, another consequence of this general result is that the eigenvalues of the system can be arranged in an increasing sequence tending to infinity. 
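The Sturm–Liouville problem described above is usually written in the following standard form, with $p$, $q$, $w$ the given coefficient functions and $\lambda$ the eigenvalue parameter:

```latex
% Sturm–Liouville equation on an interval [a, b]:
-\frac{d}{dx}\!\left[p(x)\frac{dy}{dx}\right] + q(x)\,y = \lambda\, w(x)\, y
% with homogeneous Robin boundary conditions, e.g.
\alpha\, y(a) + \alpha'\, y'(a) = 0, \qquad \beta\, y(b) + \beta'\, y'(b) = 0
```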
Partial differential equations Hilbert spaces form a basic tool in the study of partial differential equations. For many classes of partial differential equations, such as linear elliptic equations, it is possible to consider a generalized solution (known as a weak solution) by enlarging the class of functions. Many weak formulations involve the class of Sobolev functions, which is a Hilbert space. A suitable weak formulation reduces the analytic problem of finding a solution, or, often more importantly, showing that a solution exists and is unique for given boundary data, to a geometrical problem. For linear elliptic equations, one geometrical result that ensures unique solvability for a large class of problems is the Lax–Milgram theorem. This strategy forms the rudiment of the Galerkin method (a finite element method) for numerical solution of partial differential equations. A typical example is the Poisson equation with Dirichlet boundary conditions in a bounded domain in . The weak formulation consists of finding a function such that, for all continuously differentiable functions in vanishing on the boundary: This can be recast in terms of the Hilbert space consisting of functions such that , along with its weak partial derivatives, are square integrable on , and vanish on the boundary. The question then reduces to finding in this space such that for all in this space where is a continuous bilinear form, and is a continuous linear functional, given respectively by Since the Poisson equation is elliptic, it follows from Poincaré's inequality that the bilinear form is coercive. The Lax–Milgram theorem then ensures the existence and uniqueness of solutions of this equation. Hilbert spaces allow for many elliptic partial differential equations to be formulated in a similar way, and the Lax–Milgram theorem is then a basic tool in their analysis.
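For the Poisson example, the weak formulation sketched above reads, in standard notation:

```latex
% Poisson problem with Dirichlet boundary conditions on a bounded domain \Omega:
-\Delta u = f \ \text{in } \Omega, \qquad u = 0 \ \text{on } \partial\Omega
% Weak formulation: find u \in H_0^1(\Omega) with b(u, v) = L(v)
% for all v \in H_0^1(\Omega), where
b(u, v) = \int_\Omega \nabla u \cdot \nabla v\,dx, \qquad L(v) = \int_\Omega f\,v\,dx
```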
With suitable modifications, similar techniques can be applied to parabolic partial differential equations and certain hyperbolic partial differential equations. Ergodic theory The field of ergodic theory is the study of the long-term behavior of chaotic dynamical systems. The prototypical case of a field that ergodic theory applies to is thermodynamics, in which—though the microscopic state of a system is extremely complicated (it is impossible to understand the ensemble of individual collisions between particles of matter)—the average behavior over sufficiently long time intervals is tractable. The laws of thermodynamics are assertions about such average behavior. In particular, one formulation of the zeroth law of thermodynamics asserts that over sufficiently long timescales, the only functionally independent measurement that one can make of a thermodynamic system in equilibrium is its total energy, in the form of temperature. An ergodic dynamical system is one for which, apart from the energy—measured by the Hamiltonian—there are no other functionally independent conserved quantities on the phase space. More explicitly, suppose that the energy is fixed, and let be the subset of the phase space consisting of all states of energy (an energy surface), and let denote the evolution operator on the phase space. The dynamical system is ergodic if every invariant measurable function on is constant almost everywhere. An invariant function is one for which for all on and all time . Liouville's theorem implies that there exists a measure on the energy surface that is invariant under the time translation.
As a result, time translation is a unitary transformation of the Hilbert space consisting of square-integrable functions on the energy surface with respect to the inner product The von Neumann mean ergodic theorem states the following: If is a (strongly continuous) one-parameter semigroup of unitary operators on a Hilbert space , and is the orthogonal projection onto the space of common fixed points of , , then For an ergodic system, the fixed set of the time evolution consists only of the constant functions, so the ergodic theorem implies the following: for any function , That is, the long time average of an observable is equal to its expectation value over an energy surface. Fourier analysis One of the basic goals of Fourier analysis is to decompose a function into a (possibly infinite) linear combination of given basis functions: the associated Fourier series. The classical Fourier series associated to a function defined on the interval is a series of the form where The example of adding up the first few terms in a Fourier series for a sawtooth function is shown in the figure. The basis functions are sine waves with wavelengths (for integer ) shorter than the wavelength of the sawtooth itself (except for , the fundamental wave). A significant problem in classical Fourier series asks in what sense the Fourier series converges, if at all, to the function . Hilbert space methods provide one possible answer to this question. The functions form an orthogonal basis of the Hilbert space . Consequently, any square-integrable function can be expressed as a series and, moreover, this series converges in the Hilbert space sense (that is, in the mean). The problem can also be studied from the abstract point of view: every Hilbert space has an orthonormal basis, and every element of the Hilbert space can be written in a unique way as a sum of multiples of these basis elements. 
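The convergence in the mean described above can be checked numerically. The sketch below (an illustration, not part of the article's own development) sums the classical Fourier series of the sawtooth $f(x) = x$ on $(-\pi, \pi)$ and estimates the $L^2$ error of the partial sums by a Riemann sum; the error shrinks as terms are added, even though pointwise convergence fails at the jump.

```python
import math

def sawtooth(x):
    # f(x) = x on (-pi, pi), extended periodically
    return x

def partial_sum(x, N):
    # Classical Fourier partial sum of f(x) = x: 2 * sum (-1)^(n+1) sin(nx)/n
    return 2.0 * sum((-1) ** (n + 1) * math.sin(n * x) / n
                     for n in range(1, N + 1))

def l2_error(N, samples=2000):
    # Midpoint Riemann-sum approximation of the mean-square error on (-pi, pi)
    h = 2 * math.pi / samples
    total = 0.0
    for k in range(samples):
        x = -math.pi + (k + 0.5) * h
        total += (sawtooth(x) - partial_sum(x, N)) ** 2 * h
    return total

# Convergence in the mean: the L2 error shrinks as more terms are added
print([l2_error(N) for N in (1, 5, 25)])
```

By Parseval's identity the exact mean-square error after $N$ terms is $4\pi \sum_{n>N} n^{-2}$, so it decays on the order of $1/N$.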
The coefficients appearing on these basis elements are sometimes known abstractly as the Fourier coefficients of the element of the space. The abstraction is especially useful when it is more natural to use different basis functions for a space such as . In many circumstances, it is desirable not to decompose a function into trigonometric functions, but rather into orthogonal polynomials or wavelets for instance, and in higher dimensions into spherical harmonics. For instance, if are any orthonormal basis functions of , then a given function in can be approximated as a finite linear combination The coefficients are selected to make the magnitude of the difference as small as possible. Geometrically, the best approximation is the orthogonal projection of onto the subspace consisting of all linear combinations of the , and can be calculated by That this formula minimizes the difference is a consequence of Bessel's inequality and Parseval's formula. In various applications to physical problems, a function can be decomposed into physically meaningful eigenfunctions of a differential operator (typically the Laplace operator): this forms the foundation for the spectral study of functions, in reference to the spectrum of the differential operator. A concrete physical application involves the problem of hearing the shape of a drum: given the fundamental modes of vibration that a drumhead is capable of producing, can one infer the shape of the drum itself? The mathematical formulation of this question involves the Dirichlet eigenvalues of the Laplace equation in the plane, that represent the fundamental modes of vibration in direct analogy with the integers that represent the fundamental modes of vibration of the violin string. Spectral theory also underlies certain aspects of the Fourier transform of a function. 
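The best-approximation formula above can be illustrated numerically (the vectors below are an arbitrary example, not from the article): projecting $f$ onto the span of an orthonormal pair in $\mathbb{R}^3$ uses the inner products as coefficients, and the residual is orthogonal to the subspace.

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

s = 1 / math.sqrt(2)
e1 = (s, s, 0.0)          # orthonormal pair spanning a plane in R^3
e2 = (s, -s, 0.0)
f = (3.0, -2.0, 5.0)

# Best approximation in the span: the coefficients are the inner products <f, e_j>
coeffs = [dot(f, e) for e in (e1, e2)]
proj = tuple(sum(c * e[i] for c, e in zip(coeffs, (e1, e2))) for i in range(3))
residual = tuple(f[i] - proj[i] for i in range(3))

print(proj)      # the orthogonal projection of f onto span{e1, e2}
print(residual)  # orthogonal to both e1 and e2
```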
Whereas Fourier analysis decomposes a function defined on a compact set into the discrete spectrum of the Laplacian (which corresponds to the vibrations of a violin string or drum), the Fourier transform of a function is the decomposition of a function defined on all of Euclidean space into its components in the continuous spectrum of the Laplacian. The Fourier transformation is also geometrical, in a sense made precise by the Plancherel theorem, which asserts that it is an isometry of one Hilbert space (the "time domain") with another (the "frequency domain"). This isometry property of the Fourier transformation is a recurring theme in abstract harmonic analysis (since it reflects the conservation of energy for the continuous Fourier transform), as evidenced for instance by the Plancherel theorem for spherical functions occurring in noncommutative harmonic analysis. Quantum mechanics In the mathematically rigorous formulation of quantum mechanics, developed by John von Neumann, the possible states (more precisely, the pure states) of a quantum mechanical system are represented by unit vectors (called state vectors) residing in a complex separable Hilbert space, known as the state space, well defined up to a complex number of norm 1 (the phase factor). In other words, the possible states are points in the projectivization of a Hilbert space, usually called the complex projective space. The exact nature of this Hilbert space is dependent on the system; for example, the space of position and momentum states for a single non-relativistic spin-zero particle is the space of all square-integrable functions, while the states for the spin of a single proton are unit elements of the two-dimensional complex Hilbert space of spinors. Each observable is represented by a self-adjoint linear operator acting on the state space.
Each eigenstate of an observable corresponds to an eigenvector of the operator, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. The inner product between two state vectors is a complex number known as a probability amplitude. During an ideal measurement of a quantum mechanical system, the probability that a system collapses from a given initial state to a particular eigenstate is given by the square of the absolute value of the probability amplitude between the initial and final states. The possible results of a measurement are the eigenvalues of the operator—which explains the choice of self-adjoint operators, for all the eigenvalues must be real. The probability distribution of an observable in a given state can be found by computing the spectral decomposition of the corresponding operator. For a general system, states are typically not pure, but instead are represented as statistical mixtures of pure states, or mixed states, given by density matrices: self-adjoint operators of trace one on a Hilbert space. Moreover, for general quantum mechanical systems, the effects of a single measurement can influence other parts of a system in a manner that is described instead by a positive operator valued measure. Thus the structure both of the states and observables in the general theory is considerably more complicated than the idealization for pure states. Probability theory In probability theory, Hilbert spaces also have diverse applications. Here a fundamental Hilbert space is the space of random variables on a given probability space, having class (finite first and second moments). A common operation in statistics is that of centering a random variable by subtracting its expectation. Thus if is a random variable, then is its centering.
In the Hilbert space view, this is the orthogonal projection of onto the kernel of the expectation operator, which is a continuous linear functional on the Hilbert space (in fact, the inner product with the constant random variable 1), and so this kernel is a closed subspace. The conditional expectation has a natural interpretation in the Hilbert space. Suppose that a probability space is given, where is a sigma algebra on the set , and is a probability measure on the measure space . If is a sigma subalgebra of , then the conditional expectation is the orthogonal projection of onto the subspace of consisting of the -measurable functions. If the random variable in is independent of the sigma algebra then conditional expectation , i.e., its projection onto the -measurable functions is constant. Equivalently, the projection of its centering is zero. In particular, if two random variables and (in ) are independent, then the centered random variables and are orthogonal. (This means that the two variables have zero covariance: they are uncorrelated.) In that case, the Pythagorean theorem in the kernel of the expectation operator implies that the variances of and satisfy the identity: sometimes called the Pythagorean theorem of statistics, and is of importance in linear regression. As puts it, "the analysis of variance may be viewed as the decomposition of the squared length of a vector into the sum of the squared lengths of several vectors, using the Pythagorean Theorem." The theory of martingales can be formulated in Hilbert spaces. A martingale in a Hilbert space is a sequence of elements of a Hilbert space such that, for each , is the orthogonal projection of onto the linear hull of . If the are random variables, this reproduces the usual definition of a (discrete) martingale: the expectation of , conditioned on , is equal to . Hilbert spaces are also used throughout the foundations of the Itô calculus.
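The variance identity referred to earlier in this paragraph, written out in standard notation:

```latex
% For independent X and Y, the centered variables X - E[X] and Y - E[Y]
% are orthogonal in L^2, so the Pythagorean identity gives
\operatorname{Var}(X + Y) = \|(X - \mathbf{E}X) + (Y - \mathbf{E}Y)\|^2
  = \|X - \mathbf{E}X\|^2 + \|Y - \mathbf{E}Y\|^2
  = \operatorname{Var}(X) + \operatorname{Var}(Y)
```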
To any square-integrable martingale, it is possible to associate a Hilbert norm on the space of equivalence classes of progressively measurable processes with respect to the martingale (using the quadratic variation of the martingale as the measure). The Itô integral can be constructed by first defining it for simple processes, and then exploiting their density in the Hilbert space. A noteworthy result is then the Itô isometry, which attests that for any martingale M having quadratic variation measure , and any progressively measurable process H: whenever the expectation on the right-hand side is finite. A deeper application of Hilbert spaces that is especially important in the theory of Gaussian processes is an attempt, due to Leonard Gross and others, to make sense of certain formal integrals over infinite dimensional spaces like the Feynman path integral from quantum field theory. The problem with integrals like these is that there is no infinite dimensional Lebesgue measure. The notion of an abstract Wiener space allows one to construct a measure on a Banach space that contains a Hilbert space , called the Cameron–Martin space, as a dense subset, out of a finitely additive cylinder set measure on . The resulting measure on is countably additive and invariant under translation by elements of , and this provides a mathematically rigorous way of thinking of the Wiener measure as a Gaussian measure on the Sobolev space .
The many-to-one linear mapping from the Hilbert space of physical colors to the Euclidean space of human perceivable colors explains why many distinct physical colors may be perceived by humans to be identical (e.g., pure yellow light versus a mix of red and green light, see Metamerism). Properties Pythagorean identity Two vectors and in a Hilbert space are orthogonal when . The notation for this is . More generally, when is a subset in , the notation means that is orthogonal to every element from . When and are orthogonal, one has By induction on , this is extended to any family of orthogonal vectors, Whereas the Pythagorean identity as stated is valid in any inner product space, completeness is required for the extension of the Pythagorean identity to series. A series of orthogonal vectors converges in if and only if the series of squares of norms converges, and Furthermore, the sum of a series of orthogonal vectors is independent of the order in which it is taken. Parallelogram identity and polarization By definition, every Hilbert space is also a Banach space. Furthermore, in every Hilbert space the following parallelogram identity holds: Conversely, every Banach space in which the parallelogram identity holds is a Hilbert space, and the inner product is uniquely determined by the norm by the polarization identity. For real Hilbert spaces, the polarization identity is For complex Hilbert spaces, it is The parallelogram law implies that any Hilbert space is a uniformly convex Banach space. Best approximation This subsection employs the Hilbert projection theorem. If is a non-empty closed convex subset of a Hilbert space and a point in , there exists a unique point that minimizes the distance between and points in , This is equivalent to saying that there is a point with minimal norm in the translated convex set . 
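The parallelogram law and the polarization identity stated above can be verified numerically. The sketch below checks both in $\mathbb{C}^2$ for an inner product taken linear in the first argument (the sign convention for the imaginary terms of the polarization identity depends on this choice); the vectors are arbitrary examples.

```python
def inner(x, y):
    # complex inner product, linear in the first argument
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm_sq(x):
    return inner(x, x).real

def add(x, y, s=1):
    # x + s*y, componentwise
    return [a + s * b for a, b in zip(x, y)]

x = [1 + 2j, -1j]
y = [0.5 - 1j, 3 + 0j]

# Parallelogram identity: ||x+y||^2 + ||x-y||^2 = 2||x||^2 + 2||y||^2
parallelogram_lhs = norm_sq(add(x, y)) + norm_sq(add(x, y, -1))
parallelogram_rhs = 2 * norm_sq(x) + 2 * norm_sq(y)

# Polarization identity: the inner product recovered from the norm alone
polarized = 0.25 * (norm_sq(add(x, y)) - norm_sq(add(x, y, -1))
                    + 1j * norm_sq(add(x, y, 1j)) - 1j * norm_sq(add(x, y, -1j)))

print(parallelogram_lhs, parallelogram_rhs)
print(polarized, inner(x, y))
```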
The proof consists in showing that every minimizing sequence is Cauchy (using the parallelogram identity) hence converges (using completeness) to a point in that has minimal norm. More generally, this holds in any uniformly convex Banach space. When this result is applied to a closed subspace of , it can be shown that the point closest to is characterized by This point is the orthogonal projection of onto , and the mapping is linear (see ). This result is especially significant in applied mathematics, particularly numerical analysis, where it forms the basis of least squares methods. In particular, when is not equal to , one can find a nonzero vector orthogonal to (select and ). A very useful criterion is obtained by applying this observation to the closed subspace generated by a subset of . A subset of spans a dense vector subspace if (and only if) the vector 0 is the sole vector orthogonal to .
The inner product on the dual space satisfies The reversal of order on the right-hand side restores linearity in from the antilinearity of . In the real case, the antilinear isomorphism from to its dual is actually an isomorphism, and so real Hilbert spaces are naturally isomorphic to their own duals. The representing vector is obtained in the following way. When , the kernel is a closed vector subspace of , not equal to , hence there exists a nonzero vector orthogonal to . The vector is a suitable scalar multiple of . The requirement that yields This correspondence is exploited by the bra–ket notation popular in physics. It is common in physics to assume that the inner product, denoted by , is linear on the right, The result can be seen as the action of the linear functional (the bra) on the vector (the ket). The Riesz representation theorem relies fundamentally not just on the presence of an inner product, but also on the completeness of the space. In fact, the theorem implies that the topological dual of any inner product space can be identified with its completion. An immediate consequence of the Riesz representation theorem is also that a Hilbert space is reflexive, meaning that the natural map from into its double dual space is an isomorphism. Weakly convergent sequences In a Hilbert space , a sequence is weakly convergent to a vector when for every . For example, any orthonormal sequence converges weakly to 0, as a consequence of Bessel's inequality. Every weakly convergent sequence is bounded, by the uniform boundedness principle. Conversely, every bounded sequence in a Hilbert space admits weakly convergent subsequences (Alaoglu's theorem). This fact may be used to prove minimization results for continuous convex functionals, in the same way that the Bolzano–Weierstrass theorem is used for continuous functions on . 
Among several variants, one simple statement is as follows: If is a convex continuous function such that tends to when tends to , then admits a minimum at some point . This fact (and its various generalizations) is fundamental for direct methods in the calculus of variations. Minimization results for convex functionals are also a direct consequence of the slightly more abstract fact that closed bounded convex subsets in a Hilbert space are weakly compact, since is reflexive. The existence of weakly convergent subsequences is a special case of the Eberlein–Šmulian theorem. Banach space properties Any general property of Banach spaces continues to hold for Hilbert spaces. The open mapping theorem states that a continuous surjective linear transformation from one Banach space to another is an open mapping, meaning that it sends open sets to open sets. A corollary is the bounded inverse theorem, that a continuous and bijective linear function from one Banach space to another is an isomorphism (that is, a continuous linear map whose inverse is also continuous). This theorem is considerably simpler to prove in the case of Hilbert spaces than in general Banach spaces. The open mapping theorem is equivalent to the closed graph theorem, which asserts that a linear function from one Banach space to another is continuous if and only if its graph is a closed set. In the case of Hilbert spaces, this is basic in the study of unbounded operators (see Closed operator). The (geometrical) Hahn–Banach theorem asserts that a closed convex set can be separated from any point outside it by means of a hyperplane of the Hilbert space. This is an immediate consequence of the best approximation property: if is the element of a closed convex set closest to , then the separating hyperplane is the plane perpendicular to the segment passing through its midpoint.
Operators on Hilbert spaces Bounded operators The continuous linear operators from a Hilbert space to a second Hilbert space are bounded in the sense that they map bounded sets to bounded sets. Conversely, if an operator is bounded, then it is continuous. The space of such bounded linear operators has a norm, the operator norm given by The sum and the composite of two bounded linear operators are again bounded and linear. For y in H2, the map that sends to is linear and continuous, and according to the Riesz representation theorem can therefore be represented in the form for some vector in . This defines another bounded linear operator , the adjoint of . The adjoint satisfies . When the Riesz representation theorem is used to identify each Hilbert space with its continuous dual space, the adjoint of can be shown to be identical to the transpose of , which by definition sends to the functional The set of all bounded linear operators on (meaning operators ), together with the addition and composition operations, the norm and the adjoint operation, is a C*-algebra, which is a type of operator algebra. An element of is called 'self-adjoint' or 'Hermitian' if . If is Hermitian and for every , then is called 'nonnegative', written ; if equality holds only when , then is called 'positive'. The set of self-adjoint operators admits a partial order, in which if . If has the form for some , then is nonnegative; if is invertible, then is positive. A converse is also true in the sense that, for a non-negative operator , there exists a unique non-negative square root such that In a sense made precise by the spectral theorem, self-adjoint operators can usefully be thought of as operators that are "real". An element of is called normal if . Normal operators decompose into the sum of a self-adjoint operator and an imaginary multiple of a self-adjoint operator that commute with each other. Normal operators can also usefully be thought of in terms of their real and imaginary parts.
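In finite dimensions the adjoint is the conjugate transpose, and the defining relation $\langle Ax, y\rangle = \langle x, A^*y\rangle$ can be checked directly. A small sketch (the matrix and vectors are arbitrary examples):

```python
def inner(x, y):
    # inner product on C^n, linear in the first argument
    return sum(a * b.conjugate() for a, b in zip(x, y))

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def adjoint(A):
    # conjugate transpose: the adjoint of a matrix acting on C^n
    return [[A[i][j].conjugate() for i in range(len(A))] for j in range(len(A[0]))]

A = [[1 + 1j, 2 - 1j],
     [3j, -1 + 0j]]
x = [1 - 1j, 2 + 0j]
y = [1j, 1 + 1j]

# Defining property of the adjoint: <Ax, y> = <x, A*y>
lhs = inner(matvec(A, x), y)
rhs = inner(x, matvec(adjoint(A), y))
print(lhs, rhs)
```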
An element of is called unitary if is invertible and its inverse is given by . This can also be expressed by requiring that be onto and for all . The unitary operators form a group under composition, which is the isometry group of . An element of is compact if it sends bounded sets to relatively compact sets. Equivalently, a bounded operator is compact if, for any bounded sequence , the sequence has a convergent subsequence. Many integral operators are compact, and in fact define a special class of operators known as Hilbert–Schmidt operators that are especially important in the study of integral equations. Fredholm operators differ from a compact operator by a multiple of the identity, and are equivalently characterized as operators with a finite dimensional kernel and cokernel. The index of a Fredholm operator is defined by The index is homotopy invariant, and plays a deep role in differential geometry via the Atiyah–Singer index theorem. Unbounded operators Unbounded operators are also tractable in Hilbert spaces, and have important applications to quantum mechanics. An unbounded operator on a Hilbert space is defined as a linear operator whose domain is a linear subspace of . Often the domain is a dense subspace of , in which case is known as a densely defined operator. The adjoint of a densely defined unbounded operator is defined in essentially the same manner as for bounded operators. Self-adjoint unbounded operators play the role of the observables in the mathematical formulation of quantum mechanics. Examples of self-adjoint unbounded operators on the Hilbert space are: A suitable extension of the differential operator where is the imaginary unit and is a differentiable function of compact support. The multiplication-by- operator: These correspond to the momentum and position observables, respectively. 
Neither nor is defined on all of , since in the case of the derivative need not exist, and in the case of the product function need not be square integrable. In both cases, the set of possible arguments form dense subspaces of . Constructions Direct sums Two Hilbert spaces and can be combined into another Hilbert space, called the (orthogonal) direct sum, and denoted consisting of the set of all ordered pairs where , , and inner product defined by More generally, if is a family of Hilbert spaces indexed by , then the direct sum of the , denoted consists of the set of all indexed families in the Cartesian product of the such that The inner product is defined by Each of the is included as a closed subspace in the direct sum of all of the . Moreover, the are pairwise orthogonal. Conversely, if there is a system of closed subspaces, , , in a Hilbert space , that are pairwise orthogonal and whose union is dense in , then is canonically isomorphic to the direct sum of . In this case, is called the internal direct sum of the . A direct sum (internal or external) is also equipped with a family of orthogonal projections onto the th direct summand . These projections are bounded, self-adjoint, idempotent operators that satisfy the orthogonality condition The spectral theorem for compact self-adjoint operators on a Hilbert space states that splits into an orthogonal direct sum of the eigenspaces of an operator, and also gives an explicit decomposition of the operator as a sum of projections onto the eigenspaces. The direct sum of Hilbert spaces also appears in quantum mechanics as the Fock space of a system containing a variable number of particles, where each Hilbert space in the direct sum corresponds to an additional degree of freedom for the quantum mechanical system. In representation theory, the Peter–Weyl theorem guarantees that any unitary representation of a compact group on a Hilbert space splits as the direct sum of finite-dimensional representations. 
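In symbols, the inner product on a direct sum of two spaces and the orthogonality relation satisfied by the projections onto the summands are (standard formulas):

```latex
% Inner product on the direct sum H_1 \oplus H_2:
\langle (x_1, x_2), (y_1, y_2)\rangle
  = \langle x_1, y_1\rangle_{H_1} + \langle x_2, y_2\rangle_{H_2}
% Orthogonal projections E_i onto the summands satisfy
E_i E_j = \delta_{ij} E_i
```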
Tensor products If and , then one defines an inner product on the (ordinary) tensor product as follows. On simple tensors, let This formula then extends by sesquilinearity to an inner product on . The Hilbertian tensor product of and , sometimes denoted by , is the Hilbert space obtained by completing for the metric associated to this inner product. An example is provided by the Hilbert space . The Hilbertian tensor product of two copies of is isometrically and linearly isomorphic to the space of square-integrable functions on the square . This isomorphism sends a simple tensor to the function on the square. This example is typical in the following sense. Associated to every simple tensor product is the rank one operator from to that maps a given as This mapping defined on simple tensors extends to a linear identification between and the space of finite rank operators from to . This extends to a linear isometry of the Hilbertian tensor product with the Hilbert space of Hilbert–Schmidt operators from to . Orthonormal bases The notion of an orthonormal basis from linear algebra generalizes over to the case of Hilbert spaces. In a Hilbert space , an orthonormal basis is a family of elements of satisfying the conditions: Orthogonality: Every two different elements of are orthogonal: for all with . Normalization: Every element of the family has norm 1: for all . Completeness: The linear span of the family , , is dense in H. A system of vectors satisfying the first two conditions is called an orthonormal system or an orthonormal set (or an orthonormal sequence if is countable). Such a system is always linearly independent. Despite the name, an orthonormal basis is not, in general, a basis in the sense of linear algebra (Hamel basis). More precisely, an orthonormal basis is a Hamel basis if and only if the Hilbert space is a finite-dimensional vector space.
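On simple tensors the inner product mentioned above, and the identification with functions on the square in the $L^2$ example, are given by the following standard formulas (the interval $[0,1]$ is chosen here for concreteness):

```latex
% Inner product on simple tensors:
\langle x_1 \otimes x_2,\; y_1 \otimes y_2\rangle
  = \langle x_1, y_1\rangle\,\langle x_2, y_2\rangle
% The isomorphism L^2[0,1] \otimes L^2[0,1] \cong L^2([0,1]^2) on simple tensors:
(f \otimes g)(s, t) = f(s)\,g(t)
```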
Completeness of an orthonormal system of vectors of a Hilbert space can be equivalently restated as: for every , if for all , then . This is related to the fact that the only vector orthogonal to a dense linear subspace is the zero vector, for if is any orthonormal set and is orthogonal to , then is orthogonal to the closure of the linear span of , which is the whole space. Examples of orthonormal bases include: the set forms an orthonormal basis of with the dot product; the sequence with forms an orthonormal basis of the complex space ; In the infinite-dimensional case, an orthonormal basis will not be a basis in the sense of linear algebra; to distinguish the two, the latter basis is also called a Hamel basis. That the span of the basis vectors is dense implies that every vector in the space can be written as the sum of an infinite series, and the orthogonality implies that this decomposition is unique. Sequence spaces The space of square-summable sequences of complex numbers is the set of infinite sequences of complex numbers such that This space has an orthonormal basis: This space is the infinite-dimensional generalization of the space of finite-dimensional vectors. It is usually the first example used to show that in infinite-dimensional spaces, a set that is closed and bounded is not necessarily (sequentially) compact (as is the case in all finite dimensional spaces). Indeed, the set of orthonormal vectors above shows this: It is an infinite sequence of vectors in the unit ball (i.e., the ball of points with norm less than or equal to one). This set is clearly bounded and closed; yet, no subsequence of these vectors converges to anything and consequently the unit ball in is not compact. Intuitively, this is because "there is always another coordinate direction" into which the next elements of the sequence can evade. One can generalize the space in many ways.
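The failure of compactness noted above follows from a one-line computation: the standard orthonormal vectors stay a fixed distance apart, so no subsequence can be Cauchy.

```latex
% For the standard basis vectors e_n of \ell^2, the Pythagorean identity gives
\|e_n - e_m\|^2 = \|e_n\|^2 + \|e_m\|^2 = 2 \qquad (n \neq m)
```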
For example, if is any set, then one can form a Hilbert space of sequences with index set , defined by The summation over B is here defined by the supremum being taken over all finite subsets of . It follows that, for this sum to be finite, every element of has only countably many nonzero terms. This space becomes a Hilbert space with the inner product for all . Here the sum also has only countably many nonzero terms, and is unconditionally convergent by the Cauchy–Schwarz inequality. An orthonormal basis of is indexed by the set , given by Bessel's inequality and Parseval's formula Let be a finite orthonormal system in . For an arbitrary vector , let Then for every . It follows that is orthogonal to each , hence is orthogonal to . Using the Pythagorean identity twice, it follows that Let , be an arbitrary orthonormal system in . Applying the preceding inequality to every finite subset of gives Bessel's inequality: (according to the definition of the sum of an arbitrary family of non-negative real numbers). Geometrically, Bessel's inequality implies that the orthogonal projection of onto the linear subspace spanned by the has norm that does not exceed that of . In two dimensions, this is the assertion that the length of the leg of a right triangle may not exceed the length of the hypotenuse. Bessel's inequality is a stepping stone to the stronger result called Parseval's identity, which governs the case when Bessel's inequality is actually an equality. By definition, if is an orthonormal basis of , then every element of may be written as Even if is uncountable, Bessel's inequality guarantees that the expression is well-defined and consists only of countably many nonzero terms. This sum is called the Fourier expansion of , and the individual coefficients are the Fourier coefficients of . Parseval's identity then asserts that Conversely, if is an orthonormal set such that Parseval's identity holds for every , then is an orthonormal basis. 
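In standard notation, the two statements just discussed read as follows, writing $(e_k)$ for the orthonormal system and $\langle x, e_k\rangle$ for the Fourier coefficients (symbol names chosen here, since the original notation is not shown):

```latex
% Bessel's inequality, for any orthonormal system (e_k):
\sum_{k} \bigl|\langle x, e_k \rangle\bigr|^{2} \;\le\; \|x\|^{2}

% Fourier expansion and Parseval's identity, when (e_k) is an
% orthonormal basis:
x = \sum_{k} \langle x, e_k \rangle\, e_k,
\qquad
\|x\|^{2} = \sum_{k} \bigl|\langle x, e_k \rangle\bigr|^{2}
```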
Hilbert dimension As a consequence of Zorn's lemma, every Hilbert space admits an orthonormal basis; furthermore, any two orthonormal bases of the same space have the same cardinality, called the Hilbert dimension of the space. For instance, since has an orthonormal basis indexed by , its Hilbert dimension is the cardinality of (which may be a finite integer, or a countable or uncountable cardinal number). The Hilbert dimension is not greater than the Hamel dimension (the usual dimension of a vector space). The two dimensions are equal if and only if one of them is finite. As a consequence of Parseval's identity, if is an orthonormal basis of , then the map defined by is an isometric isomorphism of Hilbert spaces: it is a bijective linear mapping such that for all . The cardinal number of is the Hilbert dimension of . Thus every Hilbert space is isometrically isomorphic to a sequence space for some set . Separable spaces By definition, a Hilbert space is separable provided it contains a dense countable subset. Along with Zorn's lemma, this means a Hilbert space is separable if and only if it admits a countable orthonormal basis. All infinite-dimensional separable Hilbert spaces are therefore isometrically isomorphic to the square-summable sequence space In the past, Hilbert spaces were often required to be separable as part of the definition. In quantum field theory Most spaces used in physics are separable, and since these are all isomorphic to each other, one often refers to any infinite-dimensional separable Hilbert space as "the Hilbert space" or just "Hilbert space". Even in quantum field theory, most of the Hilbert spaces are in fact separable, as stipulated by the Wightman axioms. 
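The isometric isomorphism given by Parseval's identity can be checked numerically in finite dimensions; this is a sketch in which the dimension (5), the random seed, and the use of a QR factorization to produce an orthonormal basis are all illustration choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal basis of R^5, obtained from the QR factorization of a
# random matrix (columns of Q are orthonormal).
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)))
basis = Q.T                      # rows are the basis vectors e_k

x = rng.normal(size=5)
coeffs = basis @ x               # Fourier coefficients <x, e_k>

# Parseval: the map x -> (<x, e_k>)_k preserves the (squared) norm.
lhs = np.sum(coeffs ** 2)
rhs = np.dot(x, x)

# The Fourier expansion reconstructs x from its coefficients.
x_rebuilt = basis.T @ coeffs
```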
However, it is sometimes argued that non-separable Hilbert spaces are also important in quantum field theory, roughly because the systems in the theory possess an infinite number of degrees of freedom and any infinite Hilbert tensor product (of spaces of dimension greater than one) is non-separable. For instance, a bosonic field can be naturally thought of as an element of a tensor product whose factors represent harmonic oscillators at each point of space. From this perspective, the natural state space of a boson might seem to be a non-separable space. However, it is only a small separable subspace of the full tensor product that can contain physically meaningful fields (on which the observables can be defined). Another non-separable Hilbert space models the state of an infinite collection of particles in an unbounded region of space. An orthonormal basis of the space is indexed by the density of the particles, a continuous parameter, and since the set of possible densities is uncountable, the basis is not countable. Orthogonal complements and projections If is a subset of a Hilbert space , the set of vectors orthogonal to is defined by The set is a closed subspace of (as can be proved easily using the linearity and continuity of the inner product) and so forms itself a Hilbert space. If is a closed subspace of , then is called the orthogonal complement of . In fact, every can then be written uniquely as , with and . Therefore, is the internal Hilbert direct sum of and . The linear operator that maps to is called the orthogonal projection onto . There is a natural one-to-one correspondence between the set of all closed subspaces of and the set of all bounded self-adjoint operators such that . Specifically, This provides the geometrical interpretation of : it is the best approximation to x by elements of V. Projections and are called mutually orthogonal if . This is equivalent to and being orthogonal as subspaces of .
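The defining properties of an orthogonal projection (self-adjoint, idempotent, residual orthogonal to the subspace) can be verified numerically; in this sketch the ambient dimension 4, the subspace dimension 2, and the random seed are arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# Orthogonal projection onto a 2-dimensional subspace V of R^4,
# spanned by the columns of a random matrix A.
A = rng.normal(size=(4, 2))
Q, _ = np.linalg.qr(A)        # columns of Q: orthonormal basis of V
P = Q @ Q.T                   # the orthogonal projection onto V

x = rng.normal(size=4)

# P is self-adjoint and satisfies P^2 = P, and x - Px is orthogonal
# to V, which is why Px is the best approximation to x within V.
residual_in_V = Q.T @ (x - P @ x)
```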
The sum of the two projections and is a projection only if and are orthogonal to each other, and in that case . The composite is generally not a projection; in fact, the composite is a projection if and only if the two projections commute, and in that case . By restricting the codomain to the Hilbert space , the orthogonal projection gives rise to a projection mapping ; it is the adjoint of the inclusion mapping meaning that for all and . The operator norm of the orthogonal projection onto a nonzero closed subspace is equal to 1: Every closed subspace V of a Hilbert space is therefore the image of an operator of norm one such that . The property of possessing appropriate projection operators characterizes Hilbert spaces: A Banach space of dimension higher than 2 is (isometrically) a Hilbert space if and only if, for every closed subspace , there is an operator of norm one whose image is such that . While this result characterizes the metric structure of a Hilbert space, the structure of a Hilbert space as a topological vector space can itself be characterized in terms of the presence of complementary subspaces: A Banach space is topologically and linearly isomorphic to a Hilbert space if and only if, to every closed subspace , there is a closed subspace such that is equal to the internal direct sum . The orthogonal complement satisfies some more elementary results. It is a monotone function in the sense that if , then with equality holding if and only if is contained in the closure of . This result is a special case of the Hahn–Banach theorem. The closure of a subspace can be completely characterized in terms of the orthogonal complement: if is a subspace of , then the closure of is equal to . The orthogonal complement is thus a Galois connection on the partial order of subspaces of a Hilbert space. 
In general, the orthogonal complement of a sum of subspaces is the intersection of the orthogonal complements: If the are in addition closed, then Spectral theory There is a well-developed spectral theory for self-adjoint operators in a Hilbert space, that is roughly analogous to the study of symmetric matrices over the reals or self-adjoint matrices over the complex numbers. In the same sense, one can obtain a "diagonalization" of a self-adjoint operator as a suitable sum (actually an integral) of orthogonal projection operators. The spectrum of an operator , denoted , is the set of complex numbers such that lacks a continuous inverse. If is bounded, then the spectrum is always a compact set in the complex plane, and lies inside the disc . If is self-adjoint, then the spectrum is real. In fact, it is contained in the interval where Moreover, and are both actually contained within the spectrum. The eigenspaces of an operator are given by Unlike with finite matrices, not every element of the spectrum of must be an eigenvalue: the linear operator may only lack an inverse because it is not surjective. Elements of the spectrum of an operator in the general sense are known as spectral values. Since spectral values need not be eigenvalues, the spectral decomposition is often more subtle than in finite dimensions. However, the spectral theorem of a self-adjoint operator takes a particularly simple form if, in addition, is assumed to be a compact operator. The spectral theorem for compact self-adjoint operators states: A compact self-adjoint operator has only countably (or finitely) many spectral values. The spectrum of has no limit point in the complex plane except possibly zero. The eigenspaces of decompose into an orthogonal direct sum: Moreover, if denotes the orthogonal projection onto the eigenspace , then where the sum converges with respect to the norm on . 
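The decomposition in the spectral theorem for compact self-adjoint operators has an exact finite-dimensional analogue: a real symmetric matrix is a sum of eigenvalues times orthogonal projections onto its eigenspaces. The particular 3×3 matrix below is an arbitrary example chosen for illustration.

```python
import numpy as np

# A real symmetric (self-adjoint) matrix.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

# eigh returns the real spectrum in ascending order and an orthonormal
# set of eigenvectors (as the columns of U).
w, U = np.linalg.eigh(A)

# Rebuild A as sum_k w_k * P_k, where P_k projects onto the k-th
# eigenvector; this is the "diagonalization" described in the text.
A_rebuilt = sum(w[k] * np.outer(U[:, k], U[:, k]) for k in range(len(w)))
```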
This theorem plays a fundamental role in the theory of integral equations, as many integral operators are compact, in particular those that arise from Hilbert–Schmidt operators. The general spectral theorem for self-adjoint operators involves a kind of operator-valued Riemann–Stieltjes integral, rather than an infinite summation. The spectral family associated to associates to each real number λ an operator , which is the projection onto the nullspace of the operator , where the positive part of a self-adjoint operator is defined by The operators are monotone increasing relative to the partial order defined on self-adjoint operators; the eigenvalues correspond precisely to the jump discontinuities. One has the spectral theorem, which asserts The integral is understood as a Riemann–Stieltjes integral, convergent with respect to the norm on . In particular, one has the ordinary scalar-valued integral representation A somewhat similar spectral decomposition holds for normal operators, although because the spectrum may now contain non-real complex numbers, the operator-valued Stieltjes measure must instead be replaced by a resolution of the identity. A major application of spectral methods is the spectral mapping theorem, which allows one to apply to a self-adjoint operator any continuous complex function defined on the spectrum of by forming the integral The resulting continuous functional calculus has applications in particular to pseudodifferential operators. The spectral theory of unbounded self-adjoint operators is only marginally more difficult than for bounded operators. The spectrum of an unbounded operator is defined in precisely the same way as for bounded operators: is a spectral value if the resolvent operator fails to be a well-defined continuous operator. The self-adjointness of still guarantees that the spectrum is real. Thus the essential idea of working with unbounded operators is to look instead at the resolvent where is nonreal. 
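The continuous functional calculus mentioned above also has a simple finite-dimensional sketch: applying a continuous function to a self-adjoint matrix by applying it to the spectrum. The matrix, the choice of the square-root function, and the helper name `apply_to_spectrum` are illustration choices, not a standard API.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, U = np.linalg.eigh(A)            # the spectrum of A is {1, 3}

def apply_to_spectrum(f, w, U):
    # f acts on the eigenvalues; conjugating by U transports the
    # result back to an operator: f(A) = U f(Lambda) U^T.
    return U @ np.diag(f(w)) @ U.T

sqrt_A = apply_to_spectrum(np.sqrt, w, U)
# sqrt_A is the positive square root of A: sqrt_A @ sqrt_A recovers A.
```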
This is a bounded normal operator, which admits a spectral representation that can then be transferred to a spectral representation of itself. A similar strategy is used, for instance, to study the spectrum of the Laplace operator: rather than address the operator directly, one instead looks at an associated resolvent such as a Riesz potential or Bessel potential. A precise version of the spectral theorem in this case is: There is also a version of the spectral theorem that applies to unbounded normal operators. In popular culture In Gravity's Rainbow (1973), a novel by Thomas Pynchon, one of the characters is called "Sammy Hilbert-Spaess", a pun on "Hilbert Space". The novel also refers to Gödel's incompleteness theorems.
Mathematics
Calculus and analysis
https://en.wikipedia.org/wiki/Bryopsida
Bryopsida
The Bryopsida constitute the largest class of mosses, containing 95% of all moss species. It consists of approximately 11,500 species, common throughout the whole world. The group is distinguished by having spore capsules with teeth that are arthrodontous; the teeth are separate from each other and jointed at the base where they attach to the opening of the capsule. Consequently, mosses of the class Bryopsida are commonly known as the “joint-toothed” or “arthrodontous” mosses. These teeth are exposed when the covering operculum falls off. In other groups of mosses, the capsule is either nematodontous with an attached operculum, or else splits open without operculum or teeth. Morphological groups The Bryopsida can be simplified into three groups: the acrocarpous (top-fruited), the pleurocarpous (side-fruited), and the cladocarpous (branching) mosses, based on the position of the perichaetia and sporophytes. Acrocarps are generally characterized by an upright growth habit that is unbranched or only sparingly branched. Branching is usually sympodial, with the branches similar to the main shoot from which they originate. Branches below the perichaetium are called subfloral innovations. Pleurocarps are generally characterized by creeping shoot systems and extensive lateral branching. The main stem is indeterminate, and offshooting branches may be dissimilar. The perichaetia in pleurocarps are produced at the tips of extremely reduced, basally swollen lateral branches that are morphologically distinct from the vegetative branches. Cladocarps are mosses that produce perichaetia at the tips of unspecialized lateral branches. Such branches are themselves capable of branching. Although acrocarps, pleurocarps, and cladocarps generally have different branching habits, it is the morphology of the perichaetia that defines the groups.
Capsule structure Among the Bryopsida, the structure of the capsule (sporangium) and its pattern of development is very useful both for classifying and for identifying moss families. Most Bryopsida produce a capsule with a lid (the operculum) which falls off when the spores inside are mature and thus ready to be dispersed. The opening thus revealed is called the stoma (meaning "mouth") and is surrounded by one or two peristomes. A peristome is a ring of triangular "teeth" formed from the remnants of specially thickened cell walls. There are usually 16 such teeth in a single peristome, and in the Bryopsida the teeth are separate from each other and able to both fold in to cover the stoma as well as fold back to open the stoma. This articulation of the teeth is termed arthrodontous. There are two basic arthrodontous peristome types. The first type is termed haplolepidous and consists of a single circle of 16 peristome teeth. This type of peristome is characteristic of subclass Dicranidae. The second type is the diplolepidous peristome found in subclasses Bryidae, Funariidae, and Timmiidae. In this type, there are two rings of peristome teeth—an inner endostome (short for endoperistome) and an exostome. The endostome is a more delicate membrane, and its teeth are aligned between the teeth of the exostome. There are a few mosses in the Bryopsida that have no peristome in their capsules. These mosses still undergo the same cell division patterns in capsule development, but the teeth do not fully develop. Classification In the past, the group Bryopsida included all mosses. Current circumscriptions of the group are more limited. Phylogeny A detailed phylogeny to the level of order, based on the work by Novíkov & Barabaš-Krasni 2015; Cole, Hilger & Goffinet 2021; Fedosov et al. 2016; Ignatov, Fedosov & Fedorova 2016; Bechteler et al. 2023. Unassigned Dicranidae: Pseudoditrichales
Biology and health sciences
Bryophytes
Plants
https://en.wikipedia.org/wiki/Carbon-based%20life
Carbon-based life
Carbon is a primary component of all known life on Earth, and represents approximately 45–50% of all dry biomass. Carbon compounds occur naturally in great abundance on Earth. Complex biological molecules consist of carbon atoms bonded with other elements, especially oxygen and hydrogen and frequently also nitrogen, phosphorus, and sulfur (collectively known as CHNOPS). Because carbon is lightweight and relatively small in size, carbon molecules are easy for enzymes to manipulate. The enzyme carbonic anhydrase is part of this process. Carbon has an atomic number of 6 on the periodic table. The carbon cycle is a biogeochemical cycle that is important in maintaining life on Earth over a long time span. The cycle includes carbon sequestration and carbon sinks. Plate tectonics are needed for life over a long time span, and carbon-based life is important in the plate tectonics process. Iron- and sulfur-based anoxygenic photosynthetic life forms that lived from 3.80 to 3.85 billion years ago on Earth produced an abundance of black shale deposits. These shale deposits increase heat flow and crust buoyancy, especially on the sea floor, helping to increase plate tectonics. Talc is another mineral that helps drive plate tectonics. Inorganic processes also help drive plate tectonics. Carbon-based photosynthetic life caused a rise in oxygen on Earth. This increase of oxygen helped plate tectonics form the first continents. It is frequently assumed in astrobiology that if life exists elsewhere in the Universe, it will also be carbon-based. Critics, like Carl Sagan in 1973, refer to this assumption as carbon chauvinism. Characteristics Carbon is capable of forming a vast number of compounds, more than any other element, with almost ten million compounds described to date, and yet that is but a fraction of the number of compounds that are theoretically possible under standard conditions.
The enormous diversity of carbon compounds, known as organic compounds, has led to a distinction between them and the inorganic compounds that do not contain carbon. The branch of chemistry that studies organic compounds is known as organic chemistry. Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in the universe by mass, after hydrogen, helium, and oxygen. Carbon's widespread abundance, its ability to form stable bonds with numerous other elements, and its unusual ability to form polymers at the temperatures commonly encountered on Earth enable it to serve as a common element of all known living organisms. In a 2018 study, carbon was found to compose approximately 550 billion tons of all life on Earth. It is the second most abundant element in the human body by mass (about 18.5%) after oxygen. The most important characteristics of carbon as a basis for the chemistry of cellular life are that each carbon atom is capable of forming up to four valence bonds with other atoms simultaneously, and that the energy required to make or break a bond with a carbon atom is at an appropriate level for building large and complex molecules which may be both stable and reactive. Carbon atoms bond readily to other carbon atoms; this allows the building of arbitrarily long macromolecules and polymers in a process known as catenation (Oxford English Dictionary, 1st edition (1889), s.v. 'chain', definition 4g). "What we normally think of as 'life' is based on chains of carbon atoms, with a few other atoms, such as nitrogen or phosphorus", per Stephen Hawking in a 2008 lecture; "carbon [...] has the richest chemistry." Norman Horowitz was the head of the Jet Propulsion Laboratory's bioscience section for the first U.S. mission, the Viking Lander of 1976, to successfully land an unmanned probe on the surface of Mars.
He considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival on other planets. However, the results of this mission indicated that Mars was presently extremely hostile to carbon-based life. He also considered that, in general, there was only a remote possibility that non-carbon life forms would be able to evolve with genetic information systems capable of self-replication and adaptation. Key molecules The most notable classes of biological macromolecules used in the fundamental processes of living organisms include: Proteins, which are the building blocks from which the structures of living organisms are constructed (this includes almost all enzymes, which catalyse organic chemical reactions). Amino acids, which make up proteins and are specified by the genetic code of life. Nucleic acids, which carry genetic information. Ribonucleic acid (RNA), involved in the production of proteins. Deoxyribonucleic acid (DNA), the nucleic acid that stores genetic information. Peptides, building blocks of proteins. Lipids, which also store energy, but in a more concentrated form, and which may be stored for extended periods in the bodies of animals. Phospholipids, used in cell membranes. Carbohydrates, which store energy in a form that can be used by living cells. Lectins, carbohydrate-binding proteins. Monosaccharides, simple sugars, including glucose and fructose. Disaccharides, sugars soluble in water, including lactose, maltose, and sucrose. Starch, made of amylose and amylopectin, the energy storage of plants. Glycogen, the energy storage of animals. Cellulose, a biopolymer found in the cell walls of plants. Fatty acids, of two types, saturated fats and unsaturated fats (oils), which are stored energy. Essential fatty acids, needed but not synthesized by the human body. Steroids, hormones, also used in cell membranes. Neurotransmitters, which are signaling molecules. Cholesterol, used in the brain and spinal cord of animals. Waxes, found in beeswax and lanolin.
Plant waxes are used for protection. Water Liquid water is essential for carbon-based life. Chemical bonding of carbon molecules requires liquid water. Water has the chemical property of forming compound–solvent pairings. Water provides the reversible hydration of carbon dioxide. Hydration of carbon dioxide is needed in carbon-based life. All life on Earth uses the same biochemistry of carbon. Water is important to carbonic anhydrase, which mediates the interaction between carbon dioxide and water. Carbonic anhydrase is a family of carbon-based enzymes responsible for the hydration of carbon dioxide and for acid–base homeostasis, which regulates pH levels in life. In plant life, liquid water is needed for photosynthesis, the biological process plants use to convert light energy and carbon dioxide into chemical energy. Water makes up 55% to 60% of the human body by weight. Other candidates A few other elements have been proposed as candidates for supporting biological systems and processes as fundamentally as carbon does, for example, processes such as metabolism. The most frequently suggested alternative is silicon. Silicon, with an atomic number of 14 and more than twice the size of carbon, shares a group in the periodic table with carbon, can also form four valence bonds, and also bonds to itself readily, though generally in the form of crystal lattices rather than long chains. Despite these similarities, silicon is considerably more electropositive than carbon, and silicon compounds do not readily recombine into different permutations in a manner that would plausibly support lifelike processes. Silicon is abundant on Earth, but as it is more electropositive, in a water-based environment it forms Si–O bonds rather than Si–Si bonds. Boron does not react with acids and does not form chains naturally. Thus boron is not a candidate for life. Arsenic is toxic to life, and its possible candidacy has been rejected.
In the past (1960s–1970s), other candidates for life were considered plausible, but with time and more research, only carbon has been shown to have the complexity and stability to make the large molecules and polymers essential for life. Fiction Speculations about the chemical structure and properties of hypothetical non-carbon-based life have been a recurring theme in science fiction. Silicon is often used as a substitute for carbon in fictional lifeforms because of its chemical similarities. In cinematic and literary science fiction, when man-made machines cross from non-living to living, this new form is often presented as an example of non-carbon-based life. Since the advent of the microprocessor in the late 1960s, such machines are often classed as "silicon-based life". Other examples of fictional "silicon-based life" can be seen in the 1967 episode "The Devil in the Dark" from Star Trek: The Original Series, in which a living rock creature's biochemistry is based on silicon, and in the 1994 The X-Files episode "Firewalker", in which a silicon-based organism is discovered in a volcano. In the 1984 film adaptation of Arthur C. Clarke's 1982 novel 2010: Odyssey Two, a character argues, "Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect." In JoJolion, the eighth part of the larger JoJo's Bizarre Adventure series, a mysterious race of silicon-based lifeforms, the "Rock Humans", serve as the primary antagonists.
Biology and health sciences
Biology basics
Biology
https://en.wikipedia.org/wiki/Dinocaridida
Dinocaridida
Dinocaridida is a proposed fossil taxon of basal arthropods, which flourished during the Cambrian period and survived up to the Early Devonian. Characterized by a pair of frontal appendages and a series of body flaps, the name Dinocaridida (from Greek deinos, "terrible", and Latin caris, "crab") refers to the suggested role of some of its members as the largest marine predators of their time. Dinocaridids are occasionally referred to as the 'AOPK group' in some literature, as the group is composed of Radiodonta (Anomalocaris and relatives), Opabiniidae (Opabinia and relatives), and the "gilled lobopodians" Pambdelurion and Kerygmachelidae. It is most likely paraphyletic, with Kerygmachelidae and Pambdelurion more basal than the clade composed of Opabiniidae, Radiodonta, and other arthropods. Anatomy Dinocaridids were bilaterally symmetrical, with a mostly non-mineralized cuticle and a body divided into two major groupings of tagmata (body sections): head and trunk. The head was apparently unsegmented and had a pair of specialized frontal appendages just in front of the mouth and eyes. The frontal appendages are either lobopodous (soft, as in the gilled lobopodians) or arthropodized (hardened and segmented, as in Radiodonta) and usually paired, but highly fused into a nozzle-like structure in Opabiniidae. Based on their preocular position and putative protocerebral origin, the frontal appendages are generally thought to be homologous to the labrum of euarthropods and the primary antennae of onychophorans, while subsequent evidence also suggests a deutocerebral origin (homologous to the jaws of onychophorans and the great appendages/antennae/chelicerae of euarthropods). The trunk possessed multiple segments, each with its own gill branch and swimming flaps (lobes). It is thought that these flaps moved in an up-and-down motion, in order to propel the animal forward in a fashion similar to the cuttlefish.
In gilled lobopodian genera, the trunk may have borne a lobopodous limb (lobopod) underneath each of the flaps. The midgut of dinocaridids had paired digestive glands similar to those of siberiid lobopodians and Cambrian euarthropods. The dinocaridid brain is simpler than that of a euarthropod (which is 3-segmented); it is thought to have comprised either one cerebral ganglion (the protocerebrum only) or two (protocerebrum and deutocerebrum). Classification Although some authors suggest different taxonomic affinities (e.g. as cycloneuralian relatives), most phylogenetic studies suggest that dinocaridids are stem group arthropods. Under this scenario, Dinocaridida is a paraphyletic grade leading to the arthropod crown group (Euarthropoda or Deuteropoda), which also suggests a lobopodian origin of the arthropod lineage. In general, the gilled lobopodian genera Pambdelurion and Kerygmachela, which have lobopodian traits (e.g. lobopodous appendages, annulation), occupy the basal position, while Opabiniidae and Radiodonta are more derived and closely related to the arthropod crown group, with the latter even having significant arthropod affinities such as arthropodization and head sclerites. In the original description, Dinocaridida was composed of only Opabiniidae and Radiodonta. With the exclusion of questionable taxa (e.g. the putative opabiniid Myoscolex), the former was known only from Opabinia, while all radiodont species were grouped under a single family, Anomalocarididae (hence the previous common name 'anomalocaridids'). In later studies, the gilled lobopodians Pambdelurion and Kerygmachela were also regarded as dinocaridids; two new opabiniid genera, Utaurora and Mieridduryn, were described, as were other strange dinocaridids like Parvibellus (which might actually be a juvenile siberiid lobopodian); many radiodonts were reassigned to other new families (Amplectobeluidae, Tamisiocarididae and Hurdiidae); and a new family, Kerygmachelidae, was named.
Distribution The group was geographically widespread, and has been reported from Cambrian strata in Canada, United States, Greenland, China, Australia and Russia, as well as the Early to Middle Ordovician of Morocco and Wales and the Early Devonian of Germany.
Biology and health sciences
Fossil arthropods
Animals
https://en.wikipedia.org/wiki/Retarded%20time
Retarded time
In electromagnetism, an electromagnetic wave (light) in vacuum travels at a finite speed (the speed of light c). The retarded time is the propagation delay between emission and observation, since it takes time for information to travel between emitter and observer. This arises due to causality. Retarded and advanced times Retarded time tr or t′ is calculated with a "speed-distance-time" calculation for EM fields. If the EM field is radiated at position vector r′ (within the source charge distribution), and an observer at position r measures the EM field at time t, the time delay for the field to travel from the charge distribution to the observer is |r − r′|/c. Subtracting this delay from the observer's time t then gives the time when the field began to propagate, i.e. the retarded time t′. The retarded time is tr = t − |r − r′|/c (which can be rearranged to c(t − tr) = |r − r′|, showing how the positions and times of source and observer are causally linked). A related concept is the advanced time ta, which takes the same mathematical form as above, but with a "+" instead of a "−": ta = t + |r − r′|/c. This is the time it would take a field originating at the present time t to propagate over the distance |r − r′|. Corresponding to retarded and advanced times are retarded and advanced potentials. Retarded position The retarded position can be obtained from the current position of a particle by subtracting the distance it has travelled in the lapse from the retarded time to the current time. For an inertial particle, this position can be obtained by solving this equation: , where rc is the current position of the source charge distribution and v its velocity. Application Perhaps surprisingly, electromagnetic fields and forces acting on charges depend on their history, not their mutual separation. The calculation of the electromagnetic fields at a present time includes integrals of charge density ρ(r′, tr) and current density J(r′, tr) using the retarded times and source positions.
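Because the retarded time appears on both sides of the relation when the source moves (tr = t − |r − rs(tr)|/c), it is often computed by fixed-point iteration. The sketch below illustrates this; the helper `retarded_time` and the `source_pos` callback are names invented here for illustration, not a standard API.

```python
import numpy as np

C = 299_792_458.0  # speed of light in vacuum, m/s

def retarded_time(t, r_obs, source_pos, tol=1e-15, max_iter=100):
    """Solve t_r = t - |r_obs - source_pos(t_r)| / c by iteration.

    source_pos(t) must return the source position (3-vector) at time t.
    """
    t_r = t
    for _ in range(max_iter):
        delay = np.linalg.norm(r_obs - source_pos(t_r)) / C
        t_new = t - delay
        if abs(t_new - t_r) < tol:
            return t_new
        t_r = t_new
    return t_r

# Static source at the origin: the delay is simply |r| / c, so a field
# observed at t = 1 s from one light-second away left at t = 0.
static = lambda t: np.zeros(3)
t_r = retarded_time(1.0, np.array([C, 0.0, 0.0]), static)
print(t_r)  # 0.0
```

For a static source the iteration converges in one step; for a moving source it converges as long as the source speed stays below c.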
The quantity is prominent in electrodynamics, electromagnetic radiation theory, and in Wheeler–Feynman absorber theory, since the history of the charge distribution affects the fields at later times.
Physical sciences
Electrodynamics
Physics
https://en.wikipedia.org/wiki/Trans-Australian%20Railway
Trans-Australian Railway
The Trans-Australian Railway, opened in 1917, runs from Port Augusta in South Australia to Kalgoorlie in Western Australia, crossing the Nullarbor Plain in the process. As the only rail freight corridor between Western Australia and the eastern states, the line is economically and strategically important. The railway includes the world's longest section of completely straight track. The inaugural passenger train service was known as the Great Western Express; later, it became the Trans-Australian. , two passenger services use the line, both of them experiential tourism services: the Indian Pacific for the entire length of the railway, and The Ghan between Port Augusta and Tarcoola, where it leaves the line to proceed north to Darwin. History In 1901, the six Australian colonies federated to form the Commonwealth of Australia. At that time, Perth, the capital of Western Australia, was isolated from the remaining Australian states by thousands of miles of desert terrain and the only practicable method of transport was by sea. The voyage across the notoriously rough Great Australian Bight was time-consuming, inconvenient and often uncomfortable. One of the inducements held out to Western Australians to join the new federation was the promise of a federally funded railway line linking Western Australia with the rest of the continent. In 1907, legislation was passed, allowing for the route to be surveyed. The survey, completed in 1909, endorsed a route from Port Augusta (the existing railhead at the head of Spencer Gulf in South Australia's wheatfields) via Tarcoola to the gold mining centre of Kalgoorlie in Western Australia, a distance of . The line was to be built to the standard gauge of , even though the state railway systems at both ends were narrow gauge at the time. Its cost was estimated at £4,045,000, equivalent to in . Legislation authorising the construction was passed in December 1911 by the Fisher Commonwealth Government. 
Work commenced in September 1912 at Port Augusta, and construction proceeded eastwards from Kalgoorlie and westwards from Port Augusta through the years of the First World War. Tracklaying proceeded briskly when materials were available; at its peak, the quantities of track laid in a single day and in one calendar year both set Australian tracklaying records. By 1915, the two ends of the line were steadily converging, with materials being delivered daily. Construction progressed steadily as the line was extended through mainly dry and desolate regions until the two halves of the line met at Ooldea on 17 October 1917. Under the aegis of the federal department of transport, the Commonwealth Railways was established in 1917 to operate the line. Once passenger trains started to run, maintenance staff found that the high mineral salt content of the bore water available along the route was playing havoc with steam locomotive boilers: repairs to boilers at one time accounted for an extraordinary 87 per cent of all locomotive maintenance. The problem was only arrested with the introduction of barium carbonate water treatment plants at watering points. In 1937, the eastern end was extended south to Port Pirie. Soon afterwards, the South Australian Railways extended its broad-gauge line north from Redhill to Port Pirie. These two projects made redundant the indirect narrow-gauge connection from Terowie via Peterborough and Quorn, resulting in a much shorter and more comfortable journey to Adelaide. They eliminated one break of gauge in the journey across Australia, but turned the Port Pirie Junction yard into a complex three-gauge facility. Railway engineers, however, were able to construct the new yards with minimal dual-gauge track, avoiding the complex signalling that more extensive dual-gauge trackage would have incurred. The long-anticipated conversion of the entire line between Sydney and Perth to standard gauge occurred in 1970. 
In 2004, the gap in standard-gauge connections between the mainland state capitals was finally closed by a connection between Port Pirie and Adelaide (thence Melbourne), and by completion of the northern component of the Adelaide–Darwin railway line, which diverges from the Trans-Australian Railway at Tarcoola. In 2008, the engineering heritage of the railway was recognised by the Engineering Heritage Recognition Program of Engineers Australia, when markers were installed on the platform at the Port Augusta station in South Australia and at the ticket office at Kalgoorlie station in Western Australia. On 17 October 2017, centenary celebrations were held at Ooldea. Named services When the line was inaugurated, the passenger service was named the Great Western Express. Later, the train became known as the Trans-Australian or, colloquially, "The Trans". After the Sydney–Perth route was converted to standard gauge in 1970, the railway was no longer flanked at both ends by narrow-gauge lines, and an all-through service, the Indian Pacific, was started. Although passengers no longer had to move to different carriages at change-of-gauge localities, Commonwealth Railways remained responsible for the service between Port Pirie and Kalgoorlie, with its crews and locomotives taking over at those stations. In 1975, Commonwealth Railways was absorbed into an enlarged federal government corporation, the Australian National Railways Commission, branded as Australian National Railways and later as "Australian National", which continued to operate the Trans-Australian. In 1993, Australian National took over operation of the entire coast-to-coast service following agreement with the governments of Western Australia and New South Wales. In 1997, following the privatisation of Australian National, the Indian Pacific was sold to a private company, Great Southern Rail (as of 2020 trading as "Journey Beyond"). 
The Indian Pacific is now a weekly, all-through, experiential tourism service. From the start of construction until 1996, the Tea and Sugar supply train carried vital provisions to the work sites and localities, all of them isolated, along the route: a butcher and banking and postal services were among the facilities provided. Terrain The length of the line as constructed was slightly less than the original survey. Although there are several hundred curves and gradients on the line, the route includes the longest length of straight track in the world. A Commonwealth Railways map marked the western end of the straight, between Loongana and Nurina, and states: "The 'Long Straight' extends from this point for a distance of 297 miles and terminates at the 496 miles [sic] between Ooldea and Watson." According to South Australian astronaut Andy Thomas, the line is identifiable from space because of its unnatural straightness: "It's a very fine line, it's like someone has drawn a very fine pencil line across the desert". At no point along the route does the line cross a permanent fresh watercourse. Bores and reservoirs were established at intervals, but the water was often brackish and unsuitable for steam locomotive use, let alone human consumption, so water supplies had to be carried on the train. In the days of steam locomotion, about half the total load was water for the engine. In later years, condenser plants were built at several major stations. Names of stopping places Reflecting the line's ownership by the Commonwealth Government, eight of the localities were named (or renamed) after Australian Prime Ministers. Other prominent people's names were also allocated, as shown on the adjacent map. 
Operations Because of the inevitable problems of finding suitable water for steam locomotives in a desert, the original engineer, Henry Deane, envisaged diesel locomotives for the line and made inquiries with potential manufacturers, although the technology was not well developed at the time. However, a scandal involving the supply of sleepers led to Deane's resignation before the proposal had advanced. Initially trains were hauled by G class locomotives and, from 1938, by C class locomotives, both steam. From 1951, passenger services were hauled by the new GM class diesel-electric locomotives. The railway was originally built with crossing loops (passing sidings) at regular intervals. As traffic increased, the number of crossing loops increased, and to handle longer trains they were progressively lengthened and respaced. Most crossing loops are unattended, and train crews operate the turnouts as required. Crossing loops have self-restoring points, so that the points are reset to the straight route when a train departs from a loop. The loops are fitted with radio controls so that train crews can set the points as they approach, and locomotive cabs are fitted with an in-cab activated points system (ICAPS) to set the required route without having to stop the train. Safeworking is by train orders, using verbal communication. Disruptions Throughout the railway's operation, washaways and flooding have at times rendered it inoperable at various locations. Washaways typically stripped away the ballast, leaving rails suspended above the floodwaters; the lost ballast then had to be replaced. Significant flood events occurred in 1921, 1930, 1937 and 2022; the 1930 flooding stranded two trains on sections of the line.
Technology
Railway lines
null
4645621
https://en.wikipedia.org/wiki/Biozone
Biozone
In biostratigraphy, biostratigraphic units or biozones are intervals of geological strata that are defined on the basis of their characteristic fossil taxa, as opposed to a lithostratigraphic unit which is defined by the lithological properties of the surrounding rock. A biostratigraphic unit is defined by the zone fossils it contains. These may be a single taxon or combinations of taxa if the taxa are relatively abundant, or variations in features related to the distribution of fossils. The same strata may be zoned differently depending on the diagnostic criteria or fossil group chosen, so there may be several, sometimes overlapping, biostratigraphic units in the same interval. Like lithostratigraphic units, biozones must have a type section designated as a stratotype. These stratotypes are named according to the typical taxon (or taxa) that are found in that particular biozone. The boundary of two distinct biostratigraphic units is called a biohorizon. Biozones can be further subdivided into subbiozones, and multiple biozones can be grouped together in a superbiozone in which the grouped biozones usually have a related characteristic. A succession of biozones is called biozonation. The length of time represented by a biostratigraphic zone is called a biochron. History The concept of a biozone was first established by the 19th century paleontologist Albert Oppel, who characterized rock strata by the species of the fossilized animals found in them, which he called zone fossils. Oppel's biozonation was mainly based on Jurassic ammonites he found throughout Europe, which he used to classify the period into 33 zones (now 60). Alcide d'Orbigny would further reinforce the concept in his Prodrome de Paléontologie Stratigraphique, in which he established comparisons between geological stages and their biostratigraphy. 
Types of biozone The International Commission on Stratigraphy defines the following types of biozones: Range zones Range zones are biozones defined by the geographic and stratigraphic range of occurrence of a taxon (or taxa). There are two types of range zones: Taxon-range zones A taxon-range zone is simply the biozone defined by the first (first appearance datum, or FAD) and last (last appearance datum, or LAD) occurrence of a single taxon. The boundaries are defined by the lowest and highest stratigraphic occurrence of that particular taxon. A taxon-range zone is named after the taxon it contains. Concurrent-range zones A concurrent-range zone uses the overlapping ranges of two taxa, with the lower boundary defined by the appearance of one taxon and the upper boundary defined by the disappearance of the other. Concurrent-range zones are named after both of the taxa they contain. Interval zones An interval zone is defined as the body of strata between two arbitrarily chosen biohorizons. For example, a highest-occurrence zone is a biozone whose upper boundary is the appearance of one taxon and whose lower boundary is the appearance of another. Lineage zones A lineage zone, also called a consecutive range zone, is a biozone defined as a specific segment of an evolutionary lineage. For example, a zone can be bounded by the highest occurrence of the ancestor of a particular taxon and the lowest occurrence of its descendant, or between the lowest occurrence of a taxon and the lowest occurrence of its descendant. Lineage zones differ from most other biozones in that the segments bounding them must be successive segments of an evolutionary lineage. This makes them similar to chronostratigraphical units; however, lineage zones, being biozones, are restricted by the actual spatial range of fossils. Lineage zones are named for the specific taxon they represent. 
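Range-zone boundaries follow mechanically from occurrence data: the FAD and LAD of a taxon are simply the lowest and highest stratigraphic positions at which it is found, and a concurrent-range zone is the overlap of two such ranges. As an illustrative sketch (the taxa and heights below are hypothetical, not taken from any published zonation):

```python
def taxon_range_zone(occurrences):
    """Taxon-range zone: the interval between the lowest (FAD) and
    highest (LAD) stratigraphic occurrence of a single taxon."""
    return (min(occurrences), max(occurrences))

def concurrent_range_zone(occ_a, occ_b):
    """Concurrent-range zone: the overlap of two taxon ranges, bounded
    below by the appearance of the later-appearing taxon and above by
    the disappearance of the earlier-disappearing one.
    Returns None if the ranges do not overlap."""
    fad_a, lad_a = taxon_range_zone(occ_a)
    fad_b, lad_b = taxon_range_zone(occ_b)
    lower, upper = max(fad_a, fad_b), min(lad_a, lad_b)
    return (lower, upper) if lower <= upper else None

# Hypothetical occurrence heights (metres above the base of a measured section)
ammonite_x = [12.0, 15.5, 21.0]   # FAD 12.0 m, LAD 21.0 m
ammonite_y = [18.0, 25.0, 30.5]   # FAD 18.0 m, LAD 30.5 m

print(taxon_range_zone(ammonite_x))                   # (12.0, 21.0)
print(concurrent_range_zone(ammonite_x, ammonite_y))  # (18.0, 21.0)
```

In practice the computed interval only approximates the taxon's true range, since (as noted below under zone fossils) preservation is incomplete.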
Assemblage zones An assemblage zone is a biozone defined by three or more different taxa, which may or may not be related. The boundaries of an assemblage zone are defined by the occurrence of the typical, specified fossil assemblage: this can include the appearance, but also the disappearance, of certain taxa. Assemblage zones are named for the most characteristic or diagnostic fossils in the assemblage. Abundance zones An abundance zone, or acme zone, is a biozone defined by the range in which the abundance of a particular taxon is highest. Because an abundance zone requires a statistically high proportion of a particular taxon, the only way to define one is to trace the abundance of the taxon through time. As local environmental factors influence abundance, this can be an unreliable way of defining a biozone. Abundance zones are named after the taxon that is most abundant within their range. Zone fossils used for biozonation A great variety of species can be used in establishing biozonation. Graptolites and ammonites are among the most useful zone fossils, as they preserve well and often have relatively short biozones. Microfossils, such as dinoflagellates, foraminiferans, or plant pollen, are also good candidates because they tend to be present even in very small samples and evolve relatively rapidly. Fossils of pigs and cannabis can be used for biozonation of Quaternary rocks as they were used by hominids. As only a small portion of fossils are preserved, a biozone does not represent the true range of that species in time. Moreover, ranges can be influenced by the Signor-Lipps effect, meaning that the last "disappearance" of a species tends to be observed further back in time than was actually the case.
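The abundance (acme) zone described above is found by tracing a taxon's relative abundance through a section. A minimal sketch of that procedure (hypothetical sample counts; the 25% cut-off is an arbitrary assumption, as no standard threshold for a "statistically high" proportion is given in the text):

```python
# Each sample: (height in metres, count of the taxon of interest, total specimens)
samples = [
    (10, 2, 100),
    (20, 5, 100),
    (30, 40, 100),
    (40, 55, 100),
    (50, 8, 100),
]

threshold = 0.25  # assumed cut-off for "high" relative abundance

# Heights at which the taxon's relative abundance meets the cut-off
acme = [h for h, n, total in samples if n / total >= threshold]

# The acme zone spans the interval covered by those samples
print(min(acme), max(acme))  # 30 40
```

As the article notes, such a zone is sensitive to local environmental control on abundance, so the same taxon may yield different acme intervals in different sections.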
Physical sciences
Stratigraphy
Earth science
4648679
https://en.wikipedia.org/wiki/Culex
Culex
Culex or typical mosquitoes are a genus of mosquitoes, several species of which serve as vectors of one or more important diseases of birds, humans, and other animals. The diseases they vector include arbovirus infections such as West Nile virus, Japanese encephalitis, or St. Louis encephalitis, but also filariasis and avian malaria. They occur worldwide except for the extreme northern parts of the temperate zone, and are the most common form of mosquito encountered in some major U.S. cities, such as Los Angeles. Etymology In naming this genus, Carl Linnaeus used the nonspecific Latin term for a midge or gnat: culex. Description Adult Culex mosquitoes vary in size depending on the species. The adult morphology is typical of flies in the suborder Nematocera, with the head, thorax, and abdomen clearly defined and the two forewings held horizontally over the abdomen when at rest. As in all Diptera capable of flight, the second pair of wings is reduced and modified into tiny, inconspicuous halteres. Formal identification is important in mosquito control, but it is demanding, requiring careful measurement of bodily proportions and noting the presence or absence of various bristles or other bodily features. In the field, informal identification is more often important, and the first question as a rule is whether the mosquito is anopheline or culicine. Given a specimen in good condition, one of the first things to notice is the length of the maxillary palps. Especially in the female, palps as long as the proboscis are characteristic of anopheline mosquitoes; culicine females have short palps. Anopheline mosquitoes tend to have dappled or spotted wings, while culicine wings tend to be clear. Anopheline mosquitoes tend to sit with their heads low and their rear ends raised high, especially when feeding, while culicine females keep their bodies horizontal. 
Anopheline larvae tend to float horizontally at the surface of the water when not in motion, whereas culicine larvae float with the head low and only the siphon at the tail held at the surface. Life cycle The developmental cycle of most species takes about two weeks in warm weather. The metamorphosis is typical of holometabolism in an insect: the female lays eggs in rafts of as many as 300 on the water's surface. Suitable habitats for egg-laying are small bodies of standing fresh water: puddles, pools, ditches, tin cans, buckets, bottles, unmounted tires, and water storage tanks (tree boles are suitable for only a few species). The tiny, cigar-shaped, dark brown eggs adhere to each other through adhesion forces, not any kind of cement, and are easily separated. Eggs hatch only in the presence of water, and the larvae are obligately aquatic, linear in form, and maintain their position and mostly vertical attitude in water by movements of their bristly mouthparts. To swim, they lash their bodies back and forth through the water. During the larval stage, the insect lives submerged in water and feeds on particles of organic matter, microscopic organisms or plant material; after several instars it then develops into a pupa. Unlike the larva, the pupa is comma-shaped. It does not feed, but can swim in rapid jerking motions to avoid potential predators. It must remain in regular contact with the surface to breathe, but it must not become desiccated. After 24–48 hours, the pupa ruptures and the adult emerges from the shed exoskeleton. Vector of disease Diseases borne by one or more species of Culex mosquitoes vary in their dependence on the species of vector. Some are rarely and only incidentally transmitted by Culex species, but Culex and closely related genera of culicine mosquitoes readily support perennial epidemics of certain major diseases if they become established in a particular region. 
Cat Que virus (CQV) has been reported largely in Culex mosquitoes in China and in pigs in Vietnam. For CQV, domestic pigs are considered to be the primary mammalian hosts, and antibodies against the virus have been reported in swine reared locally in China. Arbovirus infections transmitted by various species of Culex include West Nile virus, Japanese encephalitis, St. Louis encephalitis, and Western and Eastern equine encephalitis. Brazilian scientists are investigating whether Culex species transmit Zika virus. Nematode infections, mainly forms of filariasis, may be borne by Culex species, as well as by other mosquitoes and bloodsucking flies. Protist parasites in the phylum Apicomplexa, such as the agents of various forms of avian malaria, are also transmitted by some Culex species. Nonanal has been identified as a compound that attracts Culex mosquitoes, perhaps pheromonally; it acts synergistically with carbon dioxide. Diversity Culex is a diverse genus. It comprises over 20 subgenera that include a total of well over 1,000 species, and publications of newly described species are frequent.
Biology and health sciences
Flies (Diptera)
Animals
4649165
https://en.wikipedia.org/wiki/Maple
Maple
Acer is a genus of trees and shrubs commonly known as maples. The genus is placed in the soapberry family Sapindaceae. There are approximately 132 species, most of which are native to Asia, with a number also appearing in Europe, northern Africa, and North America. Only one species, Acer laurinum, extends to the Southern Hemisphere. The type species of the genus is the sycamore maple, Acer pseudoplatanus, one of the most common maple species in Europe. Most maples have easily identifiable palmate leaves (with a few exceptions, such as Acer carpinifolium, Acer laurinum, and Acer negundo) and all share distinctive winged fruits. The closest relative of the maples is the small east Asian genus Dipteronia, followed by the more widespread genus Aesculus (buckeyes and horse-chestnuts). Maple syrup is made from the sap of some maple species. Acer is one of the most common genera of trees in Asia. Many maple species are grown in gardens, where they are valued for their autumn colour and often decorative foliage, some also for their attractive flowers, fruit, or bark. Evolutionary history The closest relative of Acer is Dipteronia, which has only two living species in China but a fossil record extending back to the middle Paleocene in North America. The oldest known fossils of Acer are from the late Paleocene of Northeast Asia and northern North America, around 60 million years old. The oldest fossils of Acer in Europe are from Svalbard, dating to the late Eocene (Priabonian, ~38–34 million years ago). Morphology Most maples are trees; others are shrubs less than 10 meters tall with a number of small trunks originating at about ground level. Most species are deciduous, and many are renowned for their autumn leaf colours, but a few in southern Asia and the Mediterranean region are mostly evergreen. Most are shade-tolerant when young and are often riparian, understory, or pioneer species rather than climax overstory trees. 
There are a few exceptions such as sugar maple. Many of the root systems are typically dense and fibrous, inhibiting the growth of other vegetation underneath them. A few species, notably Acer cappadocicum, frequently produce root sprouts, which can develop into clonal colonies. Maples are distinguished by opposite leaf arrangement. The leaves in most species are palmate veined and lobed, with 3 to 9 (rarely to 13) veins each leading to a lobe, one of which is central or apical. A small number of species differ in having palmate compound, pinnate compound, pinnate veined or unlobed leaves. Several species, including Acer griseum (paperbark maple), Acer mandshuricum (Manchurian maple), Acer maximowiczianum (Nikko maple) and Acer triflorum (three-flowered maple), have trifoliate leaves. One species, Acer negundo (box-elder or Manitoba maple), has pinnately compound leaves that may be simply trifoliate or may have five, seven, or rarely nine leaflets. A few, such as Acer laevigatum (Nepal maple) and Acer carpinifolium (hornbeam maple), have pinnately veined simple leaves. Maple species, such as Acer rubrum, may be monoecious, dioecious or polygamodioecious. The flowers are regular, pentamerous, and borne in racemes, corymbs, or umbels. They have four or five sepals, four or five petals about 1–6 mm long (absent in some species), four to ten stamens about 6–10 mm long, and two pistils or a pistil with two styles. The ovary is superior and has two carpels, whose wings elongate the flowers, making it easy to tell which flowers are female. Maples flower in late winter or early spring, in most species with or just after the appearance of the leaves, but in some before the trees leaf out. Maple flowers are green, yellow, orange or red. Though individually small, the effect of an entire tree in flower can be striking in several species. Some maples are an early spring source of pollen and nectar for bees. 
The distinctive fruits are called samaras, "maple keys", "helicopters", "whirlybirds" or "polynoses". These seeds occur in distinctive pairs, each containing one seed enclosed in a "nutlet" attached to a flattened wing of fibrous, papery tissue. They are shaped to spin as they fall and to carry the seeds a considerable distance on the wind; this spinning is why they are popularly called "helicopters". During World War II, the US Army developed a special airdrop supply carrier based on the maple seed. Seed maturation is usually a few weeks to six months after flowering, with seed dispersal shortly after maturity. One tree can release hundreds of thousands of seeds at a time. Depending on the species, the seeds can be small and green to orange and large with thicker seed pods. The green seeds are released in pairs, sometimes with the stems still connected. The yellow seeds are released individually and almost always without the stems. Most species require stratification in order to germinate, and some seeds can remain dormant in the soil for several years before germinating. The genus Acer, together with the genus Dipteronia, was formerly often classified in a family of its own, the Aceraceae, but recent botanical consensus, including the Angiosperm Phylogeny Group system, includes both genera in the family Sapindaceae; their exclusion from Sapindaceae would leave that family paraphyletic. Within Sapindaceae, Acer is placed in the subfamily Hippocastanoideae. The genus is subdivided by its morphology into a multitude of sections and subsections. Molecular studies incorporating DNA sequence data from both chloroplast and nuclear genomes, aiming to resolve the internal relationships and reconstruct the evolutionary history of the group, suggest a Late Paleocene origin for the group, appearing first in the northeastern Palearctic. 
Rapid lineage divergence was followed by several independent dispersals to the Nearctic and Western Palearctic regions. Fifty-four species of maples meet the International Union for Conservation of Nature criteria for being under threat of extinction in their native habitat. Pests and diseases The leaves are used as a food plant by the larvae of a number of species in the order Lepidoptera (see List of Lepidoptera that feed on maples). In large numbers, caterpillars such as the greenstriped mapleworm (Dryocampa rubicunda) can feed on the leaves so heavily that they cause temporary defoliation of host maple trees. Aphids are also very common sap-feeders on maples; in horticultural settings, aphid infestations may be controlled with a dimethoate spray. Infestations of the Asian long-horned beetle (Anoplophora glabripennis) have resulted in the destruction of thousands of maples and other tree species in Illinois, Massachusetts, New Jersey, New York, and Ohio in the United States and in Ontario, Canada. Maples are affected by a number of fungal diseases. Several are susceptible to Verticillium wilt, caused by Verticillium species, which can cause significant local mortality. Sooty bark disease, caused by Cryptostroma species, can kill trees that are under stress due to drought. In rare cases, maples may be killed by Phytophthora root rot or Ganoderma root decay. Maple leaves in late summer and autumn are commonly disfigured by "tar spot", caused by Rhytisma species, and mildew caused by Uncinula species, though these diseases do not usually have an adverse effect on the trees' long-term health. Cultural significance A maple leaf is on the coat of arms of Canada and on the Canadian flag. The maple is a common symbol of strength and endurance and has been chosen as the national tree of Canada. Maple leaves are traditionally an important part of Canadian Forces military regalia; for example, the military rank insignia for generals use maple leaf symbols. 
There are 10 species naturally growing in the country, with at least one in each province. Although the idea of the tree as a national symbol originally hailed from the province of Quebec where the sugar maple is significant, today's arboreal emblem of Canada rather refers to a generic maple. The design on the flag is an eleven-point stylization modeled after a sugar maple leaf (which normally bears 23 points). It is also in the name of the Canadian ice hockey team, the Toronto Maple Leafs. The first attested use of the word was in 1260 as "mapole", and it also appears a century later in Geoffrey Chaucer's Canterbury Tales, spelled as "mapul". The maple is also a symbol of Hiroshima, ubiquitous in the local meibutsu. The maple leaf appears in the coat of arms of Sammatti, a former municipality of Uusimaa, Finland. Uses Horticulture Some species of maple are extensively planted as ornamental trees by homeowners, businesses, and municipalities due to their fall colour, relatively fast growth, ease of transplanting, and lack of hard seeds that would pose a problem for mowing lawns. Particularly popular are Norway maple (although it is considered invasive in North America), silver maple, Japanese maple, and red maple. The vine maple is also occasionally used as an ornamental tree. Other maples, especially smaller or more unusual species, are popular as specimen trees. Cultivars Numerous maple cultivars that have been selected for particular characteristics can be propagated only by asexual reproduction such as cuttings, tissue culture, budding or grafting. Acer palmatum (Japanese maple) alone has over 1,000 cultivars, most selected in Japan, and many of them no longer propagated or not in cultivation in the Western world. Some delicate cultivars are usually grown in pots and rarely reach heights of more than 50–100 cm. Bonsai Maples are a popular choice for the art of bonsai. Japanese maple (Acer palmatum), trident maple (A. buergerianum), Amur maple (A. 
ginnala), field maple (A. campestre) and Montpellier maple (A. monspessulanum) are popular choices and respond well to techniques that encourage leaf reduction and ramification, but most species can be used. Collections Maple collections, sometimes called aceretums, occupy space in many gardens and arboreta around the world, including the "five great W's" in England: Wakehurst Place Garden, Westonbirt Arboretum, Windsor Great Park, Winkworth Arboretum and Wisley Garden. In the United States, the aceretum at the Harvard-owned Arnold Arboretum in Boston is especially notable. In the number of species and cultivars, the Esveld Aceretum in Boskoop, Netherlands, is the largest in the world. Commercial uses Maples are important as sources of syrup and wood. Dried wood is often used for the smoking of food. Charcoal from maples is an integral part of the Lincoln County Process used to make Tennessee whiskey. They are also cultivated as ornamental plants and have benefits for tourism and agriculture. Timber Some of the larger maple species have valuable timber, particularly sugar maple in North America and sycamore maple in Europe. Sugar maple wood—often known as "hard maple"—is the wood of choice for bowling pins, bowling alley lanes, pool and snooker cue shafts, and butcher's blocks. Maple wood is also used for the manufacture of wooden baseball bats, though less often than ash or hickory due to the tendency of maple bats to shatter when they break. The maple bat was introduced to Major League Baseball (MLB) in 1998 by Sam Bat founder Sam Holman; today maple is the bat wood most used in professional baseball. Maple is also commonly used in archery as the core material in the limbs of a recurve bow due to its stiffness and strength. Maple wood is often graded based on physical and aesthetic characteristics. 
The most common terminology includes a grading scale from common #2, which is unselected and often used for craft woods; common #1, used for commercial and residential buildings; clear; and select grade, which is sought for fine woodworking. Some maple wood has a highly decorative wood grain, known as flame maple, quilt maple, birdseye maple and burl wood. This condition occurs randomly in individual trees of several species and often cannot be detected until the wood has been sawn, though it is sometimes visible in the standing tree as a rippled pattern in the bark. These select decorative wood pieces also have subcategories that further describe their appearance: crotch wood, bees wing, cats paw, old growth and mottled are some terms used to describe the look of these decorative woods. Maples have a long history of use for furniture production in the United States. The Cherokee people would produce a purple dye from maple bark, which they used to dye cloth. Tonewood Maple is considered a tonewood, or a wood that carries sound waves well, and is used in numerous musical instruments. Maple is harder and has a brighter sound than mahogany, which is another major tonewood used in instrument manufacturing. The back, sides, and neck of most violins, violas, cellos, and double basses are made from maple. Electric guitar necks are commonly made from maple, which has good dimensional stability. The necks of the Fender Stratocaster and Telecaster were originally one-piece maple, but were later also available with rosewood fingerboards. Les Paul desired an all-maple guitar, but due to the weight of maple, only the tops of Gibson's Les Paul guitars are made from carved maple, often using quilted or flamed maple tops. Due to its weight, very few solid-body guitars are made entirely from maple, but many guitars have maple necks, tops or veneers. Maple is also often used to make bassoons and sometimes other woodwind instruments, such as maple recorders. 
Many drums are made from maple. From the 1970s to the 1990s, maple drum kits made up the vast majority of all drum kits made, but in recent years birch has become popular for drums once again. Some of the best drum-building companies use maple extensively throughout their mid-pro range. Maple drums are favored for their bright resonant sound. Certain types of drum sticks are also made from maple. Agriculture During late winter to early spring in northeastern North America, when the night-to-day temperatures change from freezing to thawing, maple trees may be tapped for sap to manufacture maple syrup. The sap is sent via tubing to a sugar house, where it is boiled to produce syrup or made into maple sugar or maple taffy. It takes about of sugar maple sap to make of syrup. While any Acer species may be tapped for syrup, many do not have sufficient quantities of sugar to be commercially useful; sugar maples (A. saccharum) are most commonly used to produce maple syrup. Québec, Canada, is a major producer of maple syrup, an industry worth about 500 million Canadian dollars annually. As these trees are a major source of pollen in early spring before many other plants have flowered, maple flowers are also a source of foraging for honeybees, which play a commercially important role in general agriculture and in natural habitats. Pulpwood Maple is used as pulpwood. The fibers have relatively thick walls that prevent collapsing upon drying, giving paper good bulk and opacity. Maple also gives paper good printing properties. Tourism Many maples have bright autumn foliage, and many countries have leaf-watching traditions. The sugar maple (Acer saccharum) is the primary contributor to fall "foliage season" in north-eastern North America. In Japan, the custom of viewing the changing colour of maples in the autumn is called momijigari. Nikkō and Kyoto are particularly favoured destinations for this activity. 
In Korea, the same viewing activity is called danpung-nori, and the Seoraksan and Naejang-san mountains are among the best-known destinations.
Biology and health sciences
Sapindales
null
4649267
https://en.wikipedia.org/wiki/Methoxychlor
Methoxychlor
Methoxychlor is a synthetic organochloride insecticide, now obsolete. Trade names for methoxychlor include Chemform, Maralate, Methoxo, Methoxcide, Metox, and Moxie. Usage Methoxychlor was used to protect crops, ornamentals, livestock, and pets against fleas, mosquitoes, cockroaches, and other insects. It was intended to be a replacement for DDT, but has since been banned for use as a pesticide because of its acute toxicity, bioaccumulation, and endocrine-disrupting activity. The amount of methoxychlor in the environment changes seasonally due to its use in farming and forestry. It does not dissolve readily in water, so it is mixed with a petroleum-based fluid and sprayed, or used as a dust. Sprayed methoxychlor settles on the ground or in aquatic ecosystems, where it can be detected in sediments. Its degradation may take many months. Methoxychlor is ingested and absorbed by living organisms, and it accumulates in the food chain. Some metabolites may have unwanted side effects. Banned The use of methoxychlor as a pesticide was banned in the United States in 2003 and in the European Union in 2002. Health and Environmental Impacts The EPA Toxics Release Inventory (TRI) program lists methoxychlor as "a persistent, bioaccumulative, and toxic (PBT) chemical", and as such it is a waste minimization priority chemical. The 2023 Conference of the Parties of the United Nations Stockholm Convention on Persistent Organic Pollutants decided to eliminate the use of methoxychlor by listing the chemical in Annex A to the Convention. Human exposure Human exposure to methoxychlor occurs via air, soil, and water, primarily in people who work with the substance or who are exposed to air, soil, or water that has been contaminated. It is not known how quickly and efficiently the substance is absorbed by humans who have been exposed through contaminated air or skin contact. In animal models, high doses can lead to neurotoxicity.
Some of methoxychlor's metabolites have estrogenic effects in adult and developing animals before and after birth. One studied metabolite is 2,2-bis(p-hydroxyphenyl)-1,1,1-trichloroethane (HPTE), which shows reproductive toxicity in an animal model by reducing testosterone biosynthesis. Such effects adversely affect both the male and female reproductive systems. It is expected that this "could occur in humans", but it has not been proven. While one study has linked methoxychlor to the development of leukemia in humans, most studies in animals and humans have been negative; thus the EPA has determined that it is not classifiable as a carcinogen. The EPA indicates that levels above the Maximum Contaminant Level of 40 ppb cause central nervous system depression, diarrhea, and damage to the liver, kidney, and heart, and, with chronic exposure, growth retardation. Little information is available regarding effects on human pregnancy and children, but it is assumed from animal studies that methoxychlor crosses the placenta, and it has been detected in human milk. Exposure in children may differ from that in adults because they tend to play on the ground; furthermore, their reproductive systems may be more sensitive to the effects of methoxychlor as an endocrine disruptor. Food contamination may occur at low levels, and washing all foods is recommended. A number of hazardous waste sites are known to contain methoxychlor. Maximum pesticide residue limits for the EU/UK are set at 0.01 mg/kg for both oranges and apples.
Technology
Pest and disease control
null
4652664
https://en.wikipedia.org/wiki/Eye%20%28cyclone%29
Eye (cyclone)
The eye is a region of mostly calm weather at the center of a tropical cyclone. The eye of a storm is a roughly circular area, typically in diameter. It is surrounded by the eyewall, a ring of towering thunderstorms where the most severe weather and highest winds of the cyclone occur. The cyclone's lowest barometric pressure occurs in the eye and can be as much as 15 percent lower than the pressure outside the storm. In strong tropical cyclones, the eye is characterized by light winds and clear skies, surrounded on all sides by a towering, symmetric eyewall. In weaker tropical cyclones, the eye is less well defined and can be covered by the central dense overcast, an area of high, thick clouds that show up brightly on satellite imagery. Weaker or disorganized storms may also feature an eyewall that does not completely encircle the eye, or have an eye that features heavy rain. In all storms, however, the eye is where the barometer reading is lowest. Structure A typical tropical cyclone has an eye approximately 30–65 km (20–40 mi) across at the geometric center of the storm. The eye may be clear or have spotty low clouds (a clear eye), it may be filled with low- and mid-level clouds (a filled eye), or it may be obscured by the central dense overcast. There is, however, very little wind and rain, especially near the center. This is in stark contrast to conditions in the eyewall, which contains the storm's strongest winds. Due to the mechanics of a tropical cyclone, the eye and the air directly above it are warmer than their surroundings. While normally quite symmetric, eyes can be oblong and irregular, especially in weakening storms. A large ragged eye is a non-circular eye which appears fragmented, and is an indicator of a weak or weakening tropical cyclone. An open eye is an eye which can be circular, but whose eyewall does not completely encircle the eye, also indicating a weakening, moisture-deprived cyclone or a weak but strengthening one.
Both of these observations are used to estimate the intensity of tropical cyclones via Dvorak analysis. Eyewalls are typically circular; however, distinctly polygonal shapes ranging from triangles to hexagons occasionally occur. While typical mature storms have eyes that are a few dozen miles across, rapidly intensifying storms can develop an extremely small, clear, and circular eye, sometimes referred to as a pinhole eye. Storms with pinhole eyes are prone to large fluctuations in intensity, and pose difficulties and frustrations for forecasters. Small eyes, those less than ten nautical miles (19 km; 12 mi) across, often trigger eyewall replacement cycles, in which a new eyewall begins to form outside the original eyewall. This can take place anywhere from fifteen to hundreds of kilometers (ten to a few hundred miles) outside the inner eye. The storm then develops two concentric eyewalls, or an "eye within an eye". In most cases, the outer eyewall begins to contract soon after its formation, which chokes off the inner eye and leaves a much larger but more stable eye. While the replacement cycle tends to weaken storms as it occurs, the new eyewall can contract fairly quickly after the old eyewall dissipates, allowing the storm to re-strengthen. This may trigger another cycle of eyewall replacement. Eyes can range in size from (Typhoon Carmen) to a mere (Hurricane Wilma) across. While it is uncommon for storms with large eyes to become very intense, it does occur, especially in annular hurricanes. Hurricane Isabel was the eleventh most powerful North Atlantic hurricane in recorded history, and sustained a wide eye of 65–80 km (40–50 mi) for a period of several days. Formation and detection Tropical cyclones typically form from large, disorganized areas of disturbed weather in tropical regions. As more thunderstorms form and gather, the storm develops rainbands which start rotating around a common center.
As the storm gains strength, a ring of stronger convection forms at a certain distance from the rotational center of the developing storm. Since stronger thunderstorms and heavier rain mark areas of stronger updrafts, the barometric pressure at the surface begins to drop, and air begins to build up in the upper levels of the cyclone. This results in the formation of an upper-level anticyclone, or an area of high atmospheric pressure above the central dense overcast. Consequently, most of this built-up air flows outward anticyclonically above the tropical cyclone. Outside the forming eye, the anticyclone at the upper levels of the atmosphere enhances the flow towards the center of the cyclone, pushing air towards the eyewall and causing a positive feedback loop. However, a small portion of the built-up air, instead of flowing outward, flows inward towards the center of the storm. This causes air pressure to build even further, to the point where the weight of the air counteracts the strength of the updrafts in the center of the storm. Air begins to descend in the center of the storm, creating a mostly rain-free area: a newly formed eye. Many aspects of this process remain a mystery. Scientists do not know why a ring of convection forms around the center of circulation instead of on top of it, or why the upper-level anticyclone ejects only a portion of the excess air above the storm. Many theories exist as to the exact process by which the eye forms: all that is known for sure is that the eye is necessary for tropical cyclones to achieve high wind speeds. The formation of an eye is almost always an indicator of increasing tropical cyclone organization and strength. Because of this, forecasters watch developing storms closely for signs of eye formation. For storms with a clear eye, detection of the eye is as simple as looking at pictures from a weather satellite.
However, for storms with a filled eye, or an eye completely covered by the central dense overcast, other detection methods must be used. Observations from ships and hurricane hunters can pinpoint an eye visually, by looking for a drop in wind speed or lack of rainfall in the storm's center. In the United States, South Korea, and a few other countries, a network of NEXRAD Doppler weather radar stations can detect eyes near the coast. Weather satellites also carry equipment for measuring atmospheric water vapor and cloud temperatures, which can be used to spot a forming eye. In addition, scientists have recently discovered that the amount of ozone in the eye is much higher than the amount in the eyewall, due to air sinking from the ozone-rich stratosphere. Instruments sensitive to ozone can measure these rising and sinking columns of air and provide an indication that an eye is forming, even before satellite imagery can determine its formation. One satellite study found that eyes were detectable for an average of 30 hours per storm. Associated phenomena Eyewall replacement cycles Eyewall replacement cycles, also called concentric eyewall cycles, naturally occur in intense tropical cyclones, generally those with winds greater than 185 km/h (115 mph), or major hurricanes (Category 3 or higher on the Saffir–Simpson hurricane scale). When tropical cyclones reach this intensity, and the eyewall contracts or is already sufficiently small (see above), some of the outer rainbands may strengthen and organize into a ring of thunderstorms (an outer eyewall) that slowly moves inward and robs the inner eyewall of its needed moisture and angular momentum. Since the strongest winds are located in a cyclone's eyewall, the tropical cyclone usually weakens during this phase, as the inner wall is "choked" by the outer wall. Eventually the outer eyewall replaces the inner one completely, and the storm can re-intensify.
The discovery of this process was partially responsible for the end of the U.S. government's hurricane modification experiment, Project Stormfury. This project set out to seed clouds outside the eyewall, causing a new eyewall to form and weakening the storm. When it was discovered that this was a natural process due to hurricane dynamics, the project was quickly abandoned. Research shows that 53 percent of intense hurricanes undergo at least one of these cycles during their existence. Hurricane Allen in 1980 went through repeated eyewall replacement cycles, fluctuating between Category 5 and Category 4 status on the Saffir–Simpson scale several times, while Hurricane Juliette (2001) is a documented case of triple eyewalls. Moats A moat in a tropical cyclone is a clear ring outside the eyewall, or between concentric eyewalls, characterized by subsidence (slowly sinking air) and little or no precipitation. The air flow in the moat is dominated by the cumulative effects of stretching and shearing. The moat between eyewalls is an area in the storm where the rotational speed of the air changes greatly in proportion to the distance from the storm's center; these areas are also known as rapid filamentation zones. Such areas can potentially be found near any vortex of sufficient strength, but are most pronounced in strong tropical cyclones. Eyewall mesovortices Eyewall mesovortices are small-scale rotational features found in the eyewalls of intense tropical cyclones. They are similar, in principle, to the small "suction vortices" often observed in multiple-vortex tornadoes. In these vortices, wind speeds may be greater than anywhere else in the eyewall. Eyewall mesovortices are most common during periods of intensification in tropical cyclones. Eyewall mesovortices often exhibit unusual behavior in tropical cyclones. They usually revolve around the low-pressure center, but sometimes they remain stationary. Eyewall mesovortices have even been documented to cross the eye of a storm.
These phenomena have been documented observationally, experimentally, and theoretically. Eyewall mesovortices are a significant factor in the formation of tornadoes after tropical cyclone landfall. Mesovortices can spawn rotation in individual convective cells or updrafts (a mesocyclone), which leads to tornadic activity. At landfall, friction is generated between the circulation of the tropical cyclone and land. This can allow the mesovortices to descend to the surface, causing tornadoes. These tornadic circulations in the boundary layer may be prevalent in the inner eyewalls of intense tropical cyclones, but because of their short duration and small size they are not frequently observed. Stadium effect The stadium effect is a phenomenon observed in strong tropical cyclones. It is a fairly common event, in which the clouds of the eyewall curve outward from the surface with height. This gives the eye an appearance resembling a sports stadium from the air. An eye is always largest at the top of the storm and smallest at the bottom, because the rising air in the eyewall follows isolines of equal angular momentum, which also slope outward with height. Eye-like features An eye-like structure is often found in intensifying tropical cyclones. Similar to the eye seen in hurricanes or typhoons, it is a circular area at the circulation center of the storm in which convection is absent. These eye-like features are most often found in intensifying tropical storms and hurricanes of Category 1 strength on the Saffir–Simpson scale. For example, an eye-like feature was found in Hurricane Beta when the storm had maximum wind speeds of only 80 km/h (50 mph), well below hurricane force. The features are typically not visible in visible or infrared wavelengths from space, although they are easily seen on microwave satellite imagery.
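The outward slope of the eyewall follows from a standard angular-momentum argument; a minimal sketch of that reasoning (not spelled out in the article itself) is:

```latex
% Absolute angular momentum per unit mass about the storm axis,
% for air at radius $r$ with tangential wind $v$ and Coriolis parameter $f$:
M = r\,v + \tfrac{1}{2} f r^{2}
% Rising eyewall air approximately conserves $M$. Because the tangential
% wind $v$ weakens with height, holding $M$ constant forces $r$ to grow,
% so surfaces of constant $M$ (and the eyewall cloud ascending along
% them) slope outward with height, producing the stadium effect.
```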
Their development at the middle levels of the atmosphere is similar to the formation of a complete eye, but the features might be horizontally displaced due to vertical wind shear. Hazards Though the eye is by far the calmest and quietest part of the storm (at least on land), with no wind at the center and typically clear skies, it is possibly the most hazardous area on the ocean. In the eyewall, wind-driven waves all travel in the same direction. In the center of the eye, however, the waves converge from all directions, creating erratic crests that can build on each other to become rogue waves. The maximum height of hurricane waves is unknown, but measurements during Hurricane Ivan when it was a Category 4 hurricane estimated that waves near the eyewall exceeded 40 m (130 ft) from peak to trough. A common mistake, especially in areas where hurricanes are uncommon, is for residents to exit their homes to inspect the damage while the calm eye passes over, only to be caught off guard by the violent winds in the opposite eyewall. Other cyclones Though only tropical cyclones have structures officially termed "eyes", there are other weather systems that can exhibit eye-like features. Polar lows Polar lows are mesoscale weather systems, typically smaller than 1,000 km (600 mi) across, found near the poles. Like tropical cyclones, they form over relatively warm water and can feature deep convection and winds of gale force or greater. Unlike storms of tropical nature, however, they thrive in much colder temperatures and at much higher latitudes. They are also smaller and last for shorter durations, with few lasting longer than a day or so. Despite these differences, they can be very similar in structure to tropical cyclones, featuring a clear eye surrounded by an eyewall and bands of rain and snow. Extratropical cyclones Extratropical cyclones are areas of low pressure which exist at the boundary of different air masses.
Almost all storms found at mid-latitudes are extratropical in nature, including classic North American nor'easters and European windstorms. The most severe of these can have a clear "eye" at the site of lowest barometric pressure, though it is usually surrounded by lower, non-convective clouds and is found near the back end of the storm. Subtropical cyclones Subtropical cyclones are low-pressure systems with some extratropical characteristics and some tropical characteristics. As such, they may have an eye while not being truly tropical in nature. Subtropical cyclones can be very hazardous, generating high winds and seas, and often evolve into fully tropical cyclones. For this reason, the National Hurricane Center began including subtropical storms in its naming scheme in 2002. Tornadoes Tornadoes are destructive, small-scale storms, which produce the fastest winds on Earth. There are two main types: single-vortex tornadoes, which consist of a single spinning column of air, and multiple-vortex tornadoes, which consist of small "suction vortices", resembling mini-tornadoes themselves, all rotating around a common center. Both types of vortex are theorized to contain calm eyes. These theories are supported by Doppler velocity observations by weather radar and eyewitness accounts. Certain single-vortex tornadoes have also been shown to be relatively clear near the center vortex, visible by weak dBZ (reflectivity) returns seen on mobile radar, as well as containing slower wind speeds. Extraterrestrial vortices NASA reported in November 2006 that the Cassini spacecraft observed a "hurricane-like" storm locked to the south pole of Saturn with a clearly defined eyewall. The observation was particularly notable as eyewall clouds had not previously been seen on any planet other than Earth (including a failure to observe an eyewall in the Great Red Spot of Jupiter by the Galileo spacecraft).
In 2007, very large vortices on both poles of Venus were observed by the Venus Express mission of the European Space Agency to have a dipole eye structure.
Physical sciences
Storms
Earth science
18413531
https://en.wikipedia.org/wiki/Sustainability
Sustainability
Sustainability is a social goal for people to co-exist on Earth over a long period of time. Definitions of this term are disputed and have varied with literature, context, and time. Sustainability usually has three dimensions (or pillars): environmental, economic, and social. Many definitions emphasize the environmental dimension. This can include addressing key environmental problems, including climate change and biodiversity loss. The idea of sustainability can guide decisions at the global, national, organizational, and individual levels. A related concept is that of sustainable development, and the terms are often used to mean the same thing. UNESCO distinguishes the two like this: "Sustainability is often thought of as a long-term goal (i.e. a more sustainable world), while sustainable development refers to the many processes and pathways to achieve it." Details around the economic dimension of sustainability are controversial. Scholars have discussed this under the concept of weak and strong sustainability. For example, there will always be tension between the ideas of "welfare and prosperity for all" and environmental conservation, so trade-offs are necessary. It would be desirable to find ways to decouple economic growth from environmental harm. This means using fewer resources per unit of output even while growing the economy. Such decoupling reduces the environmental impact of economic growth, such as pollution. Doing this is difficult. Some experts say there is no evidence that such a decoupling is happening at the required scale. It is challenging to measure sustainability, as the concept is complex, contextual, and dynamic. Indicators have been developed to cover the environment, society, or the economy, but there is no fixed definition of sustainability indicators. The metrics are evolving and include indicators, benchmarks and audits. They include sustainability standards and certification systems like Fairtrade and Organic.
They also involve indices and accounting systems, such as corporate sustainability reporting and Triple Bottom Line accounting. It is necessary to address many barriers to sustainability to achieve a sustainability transition or sustainability transformation. Some barriers arise from nature and its complexity, while others are extrinsic to the concept of sustainability. For example, they can result from the dominant institutional frameworks in countries. Global issues of sustainability are difficult to tackle, as they need global solutions. Existing global organizations such as the UN and WTO are seen as inefficient in enforcing current global regulations. One reason for this is the lack of suitable sanctioning mechanisms. Governments are not the only sources of action for sustainability. For example, business groups have tried to integrate ecological concerns with economic activity, seeking sustainable business. Religious leaders have stressed the need for caring for nature and environmental stability. Individuals can also live more sustainably. Some people have criticized the idea of sustainability. One point of criticism is that the concept is vague and only a buzzword. Another is that sustainability might be an impossible goal. Some experts have pointed out that "no country is delivering what its citizens need without transgressing the biophysical planetary boundaries". Definitions Current usage Sustainability is regarded as a "normative concept". This means it is based on what people value or find desirable: "The quest for sustainability involves connecting what is known through scientific study to applications in pursuit of what people want for the future." The 1983 UN Commission on Environment and Development (Brundtland Commission) had a major influence on the use of the term sustainability today. The commission's 1987 Brundtland Report provided a definition of sustainable development.
The report, Our Common Future, defines it as development that "meets the needs of the present without compromising the ability of future generations to meet their own needs". The report helped bring sustainability into the mainstream of policy discussions. It also popularized the concept of sustainable development. Some other key concepts to illustrate the meaning of sustainability include: It may be a fuzzy concept but in a positive sense: the goals are more important than the approaches or means applied; It connects with other essential concepts such as resilience, adaptive capacity, and vulnerability. Choices matter: "it is not possible to sustain everything, everywhere, forever"; Scale matters in both space and time, and place matters; Limits exist (see planetary boundaries). In everyday usage, sustainability often focuses on the environmental dimension. Specific definitions Scholars say that a single specific definition of sustainability may never be possible. But the concept is still useful. There have been attempts to define it, for example: "Sustainability can be defined as the capacity to maintain or improve the state and availability of desirable materials or conditions over the long term." "Sustainability [is] the long-term viability of a community, set of social institutions, or societal practice. In general, sustainability is understood as a form of intergenerational ethics in which the environmental and economic actions taken by present persons do not diminish the opportunities of future persons to enjoy similar levels of wealth, utility, or welfare." "Sustainability means meeting our own needs without compromising the ability of future generations to meet their own needs. In addition to natural resources, we also need social and economic resources. Sustainability is not just environmentalism. Embedded in most definitions of sustainability we also find concerns for social equity and economic development." 
Some definitions focus on the environmental dimension. The Oxford Dictionary of English defines sustainability as: "the property of being environmentally sustainable; the degree to which a process or enterprise is able to be maintained or continued while avoiding the long-term depletion of natural resources". Historical usage The term sustainability is derived from the Latin word sustinere. "To sustain" can mean to maintain, support, uphold, or endure. So sustainability is the ability to continue over a long period of time. In the past, sustainability referred to environmental sustainability. It meant using natural resources so that people in the future could continue to rely on them in the long term. The concept of sustainability, or Nachhaltigkeit in German, goes back to Hans Carl von Carlowitz (1645–1714), who applied it to forestry. The term for this now would be sustainable forest management. He used the term to mean the long-term responsible use of a natural resource. In his 1713 work Silvicultura oeconomica, he wrote that "the highest art/science/industriousness [...] will consist in such a conservation and replanting of timber that there can be a continuous, ongoing and sustainable use". The shift in use of "sustainability" from the preservation of forests (for future wood production) to the broader preservation of environmental resources (to sustain the world for future generations) traces to a 1972 book by Ernst Basler, based on a series of lectures at M.I.T. The idea itself goes back a very long time: communities have always worried about the capacity of their environment to sustain them in the long term. Many ancient cultures, traditional societies, and indigenous peoples have restricted the use of natural resources. Comparison to sustainable development The terms sustainability and sustainable development are closely related. In fact, they are often used to mean the same thing. Both terms are linked with the "three dimensions of sustainability" concept.
One distinction is that sustainability is a general concept, while sustainable development can be a policy or organizing principle. Scholars say sustainability is a broader concept because sustainable development focuses mainly on human well-being. Sustainable development has two linked goals. It aims to meet human development goals. It also aims to enable natural systems to provide the natural resources and ecosystem services needed for economies and society. The concept of sustainable development has come to focus on economic development, social development and environmental protection for future generations. Dimensions Development of three dimensions Scholars usually distinguish three different areas of sustainability. These are the environmental, the social, and the economic. Several terms are in use for this concept. Authors may speak of three pillars, dimensions, components, aspects, perspectives, factors, or goals. All mean the same thing in this context. The three-dimensions paradigm has few theoretical foundations. The popular three intersecting circles, or Venn diagram, representing sustainability first appeared in a 1987 article by the economist Edward Barbier. Scholars rarely question the distinction itself. The idea of sustainability with three dimensions is a dominant interpretation in the literature. In the Brundtland Report, the environment and development are inseparable and go together in the search for sustainability. It described sustainable development as a global concept linking environmental and social issues. It added that sustainable development is important for both developing countries and industrialized countries. The Rio Declaration from 1992 is seen as "the foundational instrument in the move towards sustainability". It includes specific references to ecosystem integrity. The plan associated with carrying out the Rio Declaration also discusses sustainability in this way.
The plan, Agenda 21, talks about economic, social, and environmental dimensions. Agenda 2030 from 2015 also viewed sustainability in this way. It sees the 17 Sustainable Development Goals (SDGs) with their 169 targets as balancing "the three dimensions of sustainable development, the economic, social and environmental". Hierarchy Scholars have discussed how to rank the three dimensions of sustainability. Many publications state that the environmental dimension is the most important. (Planetary integrity or ecological integrity are other terms for the environmental dimension.) Protecting ecological integrity is the core of sustainability according to many experts. If this is the case, then the environmental dimension sets limits to economic and social development. The diagram with three nested ellipses is one way of showing the three dimensions of sustainability together with a hierarchy: it gives the environmental dimension a special status. In this diagram, the environment includes society, and society includes economic conditions. Thus it stresses a hierarchy. Another model shows the three dimensions in a similar way: in this SDG wedding cake model, the economy is a smaller subset of the societal system, and the societal system in turn is a smaller subset of the biosphere system. In 2022 an assessment examined the political impacts of the Sustainable Development Goals. The assessment found that the "integrity of the earth's life-support systems" was essential for sustainability. The authors said that "the SDGs fail to recognize that planetary, people and prosperity concerns are all part of one earth system, and that the protection of planetary integrity should not be a means to an end, but an end in itself". The aspect of environmental protection is not an explicit priority for the SDGs. This causes problems, as it could encourage countries to give the environment less weight in their developmental plans.
The authors state that "sustainability on a planetary scale is only achievable under an overarching Planetary Integrity Goal that recognizes the biophysical limits of the planet". Other frameworks bypass the compartmentalization of sustainability into separate dimensions completely. Environmental sustainability The environmental dimension is central to the overall concept of sustainability. People became more and more aware of environmental pollution in the 1960s and 1970s. This led to discussions on sustainability and sustainable development. This process began in the 1970s with concern for environmental issues. These included natural ecosystems or natural resources and the human environment. It later extended to all systems that support life on Earth, including human society. Reducing these negative impacts on the environment would improve environmental sustainability. Environmental pollution is not a new phenomenon. But it was only a local or regional concern for most of human history. Awareness of global environmental issues increased in the 20th century. The harmful effects and global spread of pesticides like DDT came under scrutiny in the 1960s. In the 1970s it emerged that chlorofluorocarbons (CFCs) were depleting the ozone layer. This led to the de facto ban of CFCs with the Montreal Protocol in 1987. In the early 20th century, Arrhenius discussed the effect of greenhouse gases on the climate (see also: history of climate change science). Climate change due to human activity became an academic and political topic several decades later. This led to the establishment of the IPCC in 1988 and the UNFCCC in 1992. In 1972, the UN Conference on the Human Environment took place. It was the first UN conference on environmental issues. It stated it was important to protect and improve the human environment. It emphasized the need to protect wildlife and natural habitats. In 2000, the UN launched eight Millennium Development Goals.
The aim was for the global community to achieve them by 2015. Goal 7 was to "ensure environmental sustainability". But this goal did not mention the concepts of social or economic sustainability. Specific problems often dominate public discussion of the environmental dimension of sustainability. In the 21st century these problems have included climate change, biodiversity loss and pollution. Other global problems are loss of ecosystem services, land degradation, environmental impacts of animal agriculture and air and water pollution, including marine plastic pollution and ocean acidification. Many people worry about human impacts on the environment. These include impacts on the atmosphere, land, and water resources. Human activities now have an impact on Earth's geology and ecosystems. This led Paul Crutzen to call the current geological epoch the Anthropocene.

Economic sustainability

The economic dimension of sustainability is controversial. This is because the term development within sustainable development can be interpreted in different ways. Some may take it to mean only economic development and growth. This can promote an economic system that is bad for the environment. Others focus more on the trade-offs between environmental conservation and achieving welfare goals for basic needs (food, water, health, and shelter). Economic development can indeed reduce hunger or energy poverty. This is especially the case in the least developed countries. That is why Sustainable Development Goal 8 calls for economic growth to drive social progress and well-being. Its first target is for "at least 7 per cent GDP growth per annum in the least developed countries". However, the challenge is to expand economic activities while reducing their environmental impact. In other words, humanity will have to find ways to achieve societal progress (potentially through economic development) without placing excess strain on the environment.
The Brundtland report says poverty causes environmental problems. Poverty also results from them. So addressing environmental problems requires understanding the factors behind world poverty and inequality. The report demands a new development path for sustained human progress. It highlights that this is a goal for both developing and industrialized nations. UNEP and UNDP launched the Poverty-Environment Initiative in 2005, which has three goals. These are reducing extreme poverty, greenhouse gas emissions, and net natural asset loss. This guide to structural reform should enable countries to achieve the SDGs. It should also show how to address the trade-offs between ecological footprint and economic development.

Social sustainability

The social dimension of sustainability is not well defined. One definition states that a society is sustainable in social terms if people do not face structural obstacles in key areas. These key areas are health, influence, competence, impartiality and meaning-making. Some scholars place social issues at the very center of discussions. They suggest that all the domains of sustainability are social. These include ecological, economic, political, and cultural sustainability. These domains all depend on the relationship between the social and the natural. The ecological domain is defined as human embeddedness in the environment. From this perspective, social sustainability encompasses all human activities. It goes beyond the intersection of economics, the environment, and the social. There are many broad strategies for more sustainable social systems. They include improved education and the political empowerment of women. This is especially the case in developing countries. They include greater regard for social justice. This involves equity between rich and poor both within and between countries. And it includes intergenerational equity. Providing more social safety nets to vulnerable populations would contribute to social sustainability.
A society with a high degree of social sustainability would lead to livable communities with a good quality of life (being fair, diverse, connected and democratic). Indigenous communities might have a focus on particular aspects of sustainability, for example spiritual aspects, community-based governance and an emphasis on place and locality.

Proposed additional dimensions

Some experts have proposed further dimensions. These could cover institutional, cultural, political, and technical dimensions.

Cultural sustainability

Some scholars have argued for a fourth dimension. They say the traditional three dimensions do not reflect the complexity of contemporary society. For example, Agenda 21 for culture and the United Cities and Local Governments argue that sustainable development should include a solid cultural policy. They also advocate for a cultural dimension in all public policies. Another example was the Circles of Sustainability approach, which included cultural sustainability.

Interactions between dimensions

Environmental and economic dimensions

People often debate the relationship between the environmental and economic dimensions of sustainability. In academia, this is discussed under the term weak and strong sustainability. In that model, the weak sustainability concept states that capital made by humans could replace most of the natural capital. Natural capital is a way of describing environmental resources. People may refer to it as nature. An example of this is the use of environmental technologies to reduce pollution. The opposite concept in that model is strong sustainability. This assumes that nature provides functions that technology cannot replace. Thus, strong sustainability acknowledges the need to preserve ecological integrity. The loss of those functions makes it impossible to recover or repair many resources and ecosystem services. Biodiversity, along with pollination and fertile soils, are examples.
Others are clean air, clean water, and regulation of climate systems. Weak sustainability has come under criticism. It may be popular with governments and business but does not ensure the preservation of the earth's ecological integrity. This is why the environmental dimension is so important. The World Economic Forum illustrated this in 2020. It found that $44 trillion of economic value generation depends on nature. This value, more than half of the world's GDP, is thus vulnerable to nature loss. Three large economic sectors are highly dependent on nature: construction, agriculture, and food and beverages. Nature loss results from many factors. They include land use change, sea use change and climate change. Other examples are natural resource use, pollution, and invasive alien species.

Trade-offs

Trade-offs between different dimensions of sustainability are a common topic for debate. Balancing the environmental, social, and economic dimensions of sustainability is difficult. This is because there is often disagreement about the relative importance of each. To resolve this, there is a need to integrate, balance, and reconcile the dimensions. For example, humans can choose to make ecological integrity a priority or to compromise it. Some even argue the Sustainable Development Goals are unrealistic. Their aim of universal human well-being conflicts with the physical limits of Earth and its ecosystems.

Measurement tools

Environmental impacts of humans

There are several methods to measure or describe human impacts on Earth. They include the ecological footprint, ecological debt, carrying capacity, and sustainable yield. The idea of planetary boundaries is that there are limits to the carrying capacity of the Earth. It is important not to cross these thresholds to prevent irreversible harm to the Earth. These planetary boundaries involve several environmental issues. These include climate change and biodiversity loss. They also include types of pollution.
These are biogeochemical (nitrogen and phosphorus), ocean acidification, land use, freshwater, ozone depletion, atmospheric aerosols, and chemical pollution. (Since 2015 some experts refer to biodiversity loss as change in biosphere integrity. They refer to chemical pollution as introduction of novel entities.) The IPAT formula measures the environmental impact of humans. It emerged in the 1970s. It states this impact is proportional to human population, affluence and technology (often written as I = P × A × T). This implies various ways to increase environmental sustainability. One would be human population control. Another would be to reduce consumption and affluence such as energy consumption. Another would be to develop innovative or green technologies such as renewable energy. In other words, there are two broad aims. The first would be to have fewer consumers. The second would be to have less environmental footprint per consumer. The Millennium Ecosystem Assessment from 2005 measured 24 ecosystem services. It concluded that only four have improved over the last 50 years. It found 15 are in serious decline and five are in a precarious condition.

Economic costs

Experts in environmental economics have calculated the cost of using public natural resources. One project calculated the damage to ecosystems and biodiversity loss. This was the Economics of Ecosystems and Biodiversity project from 2007 to 2011. An entity that creates environmental and social costs often does not pay for them. The market price also does not reflect those costs. In the end, government policy is usually required to resolve this problem. Decision-making can take future costs and benefits into account. The tool for this is the social discount rate. The bigger the concern for future generations, the lower the social discount rate should be. Another approach is to put an economic value on ecosystem services. This allows us to assess environmental damage against perceived short-term welfare benefits.
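The effect of the social discount rate can be shown with a short calculation. This is a minimal sketch with hypothetical figures (a benefit of 100 arriving in 50 years), not data from any of the studies mentioned above:

```python
def present_value(future_benefit, discount_rate, years):
    """Discount a future benefit back to today: PV = B / (1 + r)^t."""
    return future_benefit / (1 + discount_rate) ** years

# Hypothetical benefit of 100 received 50 years from now.
pv_low_rate = present_value(100, 0.01, 50)   # strong concern for future generations
pv_high_rate = present_value(100, 0.05, 50)  # weak concern for future generations

print(round(pv_low_rate, 1), round(pv_high_rate, 1))
```

With a 1% rate the future benefit is still worth about 61 today; with 5% it shrinks to under 9. This is why a lower social discount rate makes long-term environmental protection look more worthwhile in present-day terms.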
One calculation is that, "for every dollar spent on ecosystem restoration, between three and 75 dollars of economic benefits from ecosystem goods and services can be expected". In recent years, economist Kate Raworth has developed the concept of doughnut economics. This aims to integrate social and environmental sustainability into economic thinking. The social dimension acts as a minimum standard to which a society should aspire. The carrying capacity of the planet acts as an outer limit.

Barriers

There are many reasons why sustainability is so difficult to achieve. These reasons are called sustainability barriers. Before addressing these barriers it is important to analyze and understand them. Some barriers arise from nature and its complexity ("everything is related"). Others arise from the human condition. One example is the value-action gap. This reflects the fact that people often do not act according to their convictions. Experts describe these barriers as intrinsic to the concept of sustainability. Other barriers are extrinsic to the concept of sustainability. This means it is possible to overcome them. One way would be to put a price tag on the consumption of public goods. Some extrinsic barriers relate to the nature of dominant institutional frameworks. Examples would be where market mechanisms fail for public goods. Existing societies, economies, and cultures encourage increased consumption. There is a structural imperative for growth in competitive market economies. This inhibits necessary societal change. Furthermore, there are several barriers related to the difficulties of implementing sustainability policies. There are trade-offs between the goals of environmental policies and economic development. Environmental goals include nature conservation. Development may focus on poverty reduction. There are also trade-offs between short-term profit and long-term viability. Political pressures generally favor the short term over the long term.
So they form a barrier to actions oriented toward improving sustainability. Barriers to sustainability may also reflect current trends. These could include consumerism and short-termism.

Transition

Characteristics

While no consensus definition exists, sustainability transformation (or transition) can be understood as "a fundamental, system-wide reorganization across technological, economic and social factors, including paradigms, goals and values". Sustainability transformation is a process which is complex, multi-dimensional and politically contested. It needs to occur at scales ranging from households and communities to states and regional and global governance institutions. Societal transformations are politically contested because different stakeholders may disagree over both the ends that transformation should achieve and the means of achieving those ends. Another reason is that transformations may involve or require disrupting existing configurations of power and resources. There are long-standing debates in research and policy about whether democratic practices are capable of fostering timely, large-scale transformations towards sustainability. While a few scholars argue that large-scale transformation to sustainability will require the rollback of democratic safeguards or the imposition of technocratic or authoritarian rule, a majority of researchers on the democracy-environment nexus argue that democratization and sustainability transformation are mutually supportive. A sustainability transition requires major change in societies. They must change their fundamental values and organizing principles. These new values would emphasize "the quality of life and material sufficiency, human solidarity and global equity, and affinity with nature and environmental sustainability". A transition may only work if far-reaching lifestyle changes accompany technological advances.
Scientists have pointed out that: "Sustainability transitions come about in diverse ways, and all require civil-society pressure and evidence-based advocacy, political leadership, and a solid understanding of policy instruments, markets, and other drivers." There are four possible overlapping processes of transformation. They each have different political dynamics. Technology, markets, government, or citizens can lead these processes. The European Environment Agency defines a sustainability transition as "a fundamental and wide-ranging transformation of a socio-technical system towards a more sustainable configuration that helps alleviate persistent problems such as climate change, pollution, biodiversity loss or resource scarcities." The concept of sustainability transitions is similar to the concept of energy transitions. One expert argues a sustainability transition must be "supported by a new kind of culture, a new kind of collaboration, [and] a new kind of leadership". It requires a large investment in "new and greener capital goods, while simultaneously shifting capital away from unsustainable systems". In 2024 an interdisciplinary group of experts including Chip Fletcher, William J. Ripple, Phoebe Barnard, Kamanamaikalani Beamer, Christopher Field, David Karl, David King, Michael E. Mann and Naomi Oreskes advocated for a paradigm shift toward genuine sustainability and resource regeneration. They said that "such a transformation is imperative to reverse the tide of biodiversity loss due to overconsumption and to reinstate the security of food and water supplies, which are foundational for the survival of global populations."

Principles

It is possible to divide action principles to make societies more sustainable into four types. These are nature-related, personal, society-related and systems-related principles.
Nature-related principles: decarbonize; reduce human environmental impact by efficiency, sufficiency and consistency; be net-positive – build up environmental and societal capital; prefer local, seasonal, plant-based and labor-intensive; polluter-pays principle; precautionary principle; and appreciate and celebrate the beauty of nature.
Personal principles: practise contemplation, apply policies with caution, celebrate frugality.
Society-related principles: grant the least privileged the greatest support; seek mutual understanding, trust and many wins; strengthen social cohesion and collaboration; engage stakeholders; foster education – share knowledge and collaborate.
Systems-related principles: apply systems thinking; foster diversity; make what is relevant to the public more transparent; maintain or increase option diversity.

Example steps

There are many approaches that people can take to transition to environmental sustainability. These include maintaining ecosystem services, protecting and co-creating common resources, reducing food waste, and promoting dietary shifts towards plant-based foods. Another is reducing population growth by cutting fertility rates. Others are promoting new green technologies, and adopting renewable energy sources while phasing out subsidies to fossil fuels. In 2017 scientists published an update to the 1992 World Scientists' Warning to Humanity. It showed how to move towards environmental sustainability. It proposed steps in three areas:
Reduced consumption: reducing food waste, promoting dietary shifts towards mostly plant-based foods.
Reducing the number of consumers: further reducing fertility rates and thus population growth.
Technology and nature conservation: there are several related approaches. One is to maintain nature's ecosystem services. Another is to promote new green technologies. Another is changing energy use. One aspect of this is to adopt renewable energy sources.
At the same time it is necessary to end subsidies to energy production through fossil fuels.

Agenda 2030 for the Sustainable Development Goals

In 2015, the United Nations agreed the Sustainable Development Goals (SDGs). Their official name is Agenda 2030 for the Sustainable Development Goals. The UN described this programme as a very ambitious and transformational vision. It said the SDGs were of unprecedented scope and significance. The UN said: "We are determined to take the bold and transformative steps which are urgently needed to shift the world on to a sustainable and resilient path." The 17 goals and targets lay out transformative steps. For example, the SDGs aim to protect the future of planet Earth. The UN pledged to "protect the planet from degradation, including through sustainable consumption and production, sustainably managing its natural resources and taking urgent action on climate change, so that it can support the needs of the present and future generations".

Options for overcoming barriers

Issues around economic growth

Eco-economic decoupling is an idea to resolve tradeoffs between economic growth and environmental conservation. The idea is to "decouple environmental bads from economic goods as a path towards sustainability". This would mean "using less resources per unit of economic output and reducing the environmental impact of any resources that are used or economic activities that are undertaken". The intensity of pollutants emitted makes it possible to measure pressure on the environment. This in turn makes it possible to measure decoupling. This involves following changes in the emission intensity associated with economic output. Examples of absolute long-term decoupling are rare. But some industrialized countries have decoupled GDP growth from production- and consumption-based emissions. Yet, even in this example, decoupling alone is not enough.
It is necessary to accompany it with "sufficiency-oriented strategies and strict enforcement of absolute reduction targets". One study in 2020 found no evidence of the necessary decoupling. This was a meta-analysis of 180 scientific studies. It found that there is "no evidence of the kind of decoupling needed for ecological sustainability" and that "in the absence of robust evidence, the goal of decoupling rests partly on faith". Some experts have questioned the possibilities for decoupling and thus the feasibility of green growth. Some have argued that decoupling on its own will not be enough to reduce environmental pressures. They say it would need to include the issue of economic growth. There are several reasons why adequate decoupling is currently not taking place. These are rising energy expenditure, rebound effects, problem shifting, the underestimated impact of services, the limited potential of recycling, insufficient and inappropriate technological change, and cost-shifting. The decoupling of economic growth from environmental deterioration is difficult. This is because the entity that causes environmental and social costs does not generally pay for them. So the market price does not express such costs. For example, the price of a product may factor in the cost of packaging. But it may omit the cost of disposing of that packaging. Economics describes such factors as externalities, in this case a negative externality. Usually, it is up to government action or local governance to deal with externalities. There are various ways to incorporate environmental and social costs and benefits into economic activities. Examples include: taxing the activity (the polluter pays); subsidizing activities with positive effects (rewarding stewardship); and outlawing particular levels of damaging practices (legal limits on pollution).
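The emission-intensity measure of decoupling described above can be sketched in a few lines. This is an illustrative example with hypothetical GDP and emission series, not real national statistics:

```python
def classify_decoupling(gdp, emissions):
    """Compare growth over a period.
    Relative decoupling: emission intensity (emissions/GDP) falls,
    but emissions still grow.
    Absolute decoupling: emissions fall while GDP grows."""
    gdp_growth = gdp[-1] / gdp[0] - 1
    emission_growth = emissions[-1] / emissions[0] - 1
    intensity_falls = (emissions[-1] / gdp[-1]) < (emissions[0] / gdp[0])
    if gdp_growth > 0 and emission_growth < 0:
        return "absolute decoupling"
    if intensity_falls:
        return "relative decoupling"
    return "no decoupling"

# Hypothetical series: GDP grows 30%, emissions grow 10% -> intensity falls.
print(classify_decoupling([100, 115, 130], [50, 53, 55]))   # relative decoupling
# GDP grows while emissions decline in absolute terms.
print(classify_decoupling([100, 115, 130], [50, 47, 44]))   # absolute decoupling
```

The distinction matters because, as the studies cited above note, only absolute long-term decoupling would reduce total environmental pressure, and examples of it are rare.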
Government action and local governance

A textbook on natural resources and environmental economics stated in 2011: "Nobody who has seriously studied the issues believes that the economy's relationship to the natural environment can be left entirely to market forces." This means natural resources will be over-exploited and destroyed in the long run without government action. Elinor Ostrom (winner of the 2009 Nobel economics prize) expanded on this. She stated that local governance (or self-governance) can be a third option besides the market or the national government. She studied how people in small, local communities manage shared natural resources. She showed that communities using natural resources can establish rules for their use and maintenance. These are resources such as pastures, fishing waters, and forests. This leads to both economic and ecological sustainability. Successful self-governance needs groups with frequent communication among participants. In this case, groups can manage the usage of common goods without overexploitation. Based on Ostrom's work, some have argued that: "Common-pool resources today are overcultivated because the different agents do not know each other and cannot directly communicate with one another."

Global governance

Questions of global concern are difficult to tackle. That is because global issues need global solutions. But existing global organizations (UN, WTO, and others) do not have sufficient means. For example, they lack sanctioning mechanisms to enforce existing global regulations. Some institutions do not enjoy universal acceptance. An example is the International Criminal Court. Their agendas are not aligned (for example UNEP, UNDP, and WTO). And some accuse them of nepotism and mismanagement. Multilateral international agreements, treaties, and intergovernmental organizations (IGOs) face further challenges. These result in barriers to sustainability. Often these arrangements rely on voluntary commitments.
An example is Nationally Determined Contributions for climate action. There can be a lack of enforcement of existing national or international regulation. And there can be gaps in regulation for international actors such as multi-national enterprises. Critics of some global organizations say they lack legitimacy and democracy. Institutions facing such criticism include the WTO, IMF, World Bank, UNFCCC, G7, G8 and OECD.

Responses by nongovernmental stakeholders

Businesses

Sustainable business practices integrate ecological concerns with social and economic ones. One accounting framework for this approach uses the phrase "people, planet, and profit". The name of this approach is the triple bottom line. The circular economy is a related concept. Its goal is to decouple environmental pressure from economic growth. Growing attention towards sustainability has led to the formation of many organizations. These include the Sustainability Consortium of the Society for Organizational Learning, the Sustainable Business Institute, and the World Business Council for Sustainable Development. Supply chain sustainability looks at the environmental and human impacts of products in the supply chain. It considers how they move from raw materials sourcing to production, storage, and delivery, and every transportation link on the way.

Religious communities

Religious leaders have stressed the importance of caring for nature and environmental sustainability. In 2015 over 150 leaders from various faiths issued a joint statement to the UN Climate Summit in Paris. They reiterated a statement made at the Interfaith Summit in New York in 2014:

As representatives from different faith and religious traditions, we stand together to express deep concern for the consequences of climate change on the earth and its people, all entrusted, as our faiths reveal, to our common care. Climate change is indeed a threat to life, a precious gift we have received and that we need to care for.
Individuals

Individuals can also live in a more sustainable way. They can change their lifestyles, practise ethical consumerism, and embrace frugality. These sustainable living approaches can also make cities more sustainable. They do this by altering the built environment. Such approaches include sustainable transport, sustainable architecture, and zero emission housing. Research can identify the main issues to focus on. These include flying, meat and dairy products, car driving, and household sufficiency. Research can show how to create cultures of sufficiency, care, solidarity, and simplicity. Some young people are using activism, litigation, and on-the-ground efforts to advance sustainability. This is particularly the case in the area of climate action.

Assessments and reactions

Impossible to reach

Scholars have criticized the concepts of sustainability and sustainable development from different angles. One was Dennis Meadows, one of the authors of the first report to the Club of Rome, called "The Limits to Growth". He argued many people deceive themselves by using the Brundtland definition of sustainability. This is because the needs of the present generation are actually not met today. Instead, economic activities to meet present needs will shrink the options of future generations. Another criticism is that the paradigm of sustainability is no longer suitable as a guide for transformation. This is because societies are "socially and ecologically self-destructive consumer societies". Some scholars have even proclaimed the end of the concept of sustainability. This is because humans now have a significant impact on Earth's climate system and ecosystems. It might become impossible to pursue sustainability because of these complex, radical, and dynamic issues. Others have called sustainability a utopian ideal: "We need to keep sustainability as an ideal; an ideal which we might never reach, which might be utopian, but still a necessary one."
Vagueness

The term is often hijacked and thus can lose its meaning. People use it for all sorts of things, from saving the planet to recycling your rubbish. A specific definition may never be possible. This is because sustainability is a concept that provides a normative structure: it describes what human society regards as good or desirable. But some argue that while sustainability is vague and contested it is not meaningless. Although lacking in a singular definition, this concept is still useful. Scholars have argued that its fuzziness can actually be liberating. This is because it means that "the basic goal of sustainability (maintaining or improving desirable conditions [...]) can be pursued with more flexibility".

Confusion and greenwashing

Sustainability has a reputation as a buzzword. People may use the terms sustainability and sustainable development in ways that are different to how they are usually understood. This can result in confusion and mistrust. So a clear explanation of how the terms are being used in a particular situation is important. Greenwashing is a practice of deceptive marketing. It occurs when a company or organization provides misleading information about the sustainability of a product, policy, or other activity. Investors are wary of this issue as it exposes them to risk. The reliability of eco-labels is also doubtful in some cases. Ecolabelling is a voluntary method of environmental performance certification and labelling for food and consumer products. The most credible eco-labels are those developed with close participation from all relevant stakeholders.
Norfolk Black
The Norfolk Black, also known as the Black Spanish or Black Turkey, is a British breed of domestic turkey. It is thought to derive from birds taken to Britain from Spain, where they had arrived with Spanish explorers returning from the New World. It is generally considered the oldest turkey breed in the UK.

History

Turkeys were brought to Europe by early conquistadors returning from the New World, and were introduced to Britain – probably from Spain – in the early sixteenth century. According to the Chronicle of the Kings of England of Richard Baker of 1643, this was in the fifteenth year of the reign of Henry VIII, or about 1524. William Strickland is often credited with bringing them. Black birds had occasionally been seen among New World flocks of wild birds; European breeders selectively bred for this colour. In England, turkey farming was carried out mainly in East Anglia, particularly in Norfolk. In the seventeenth or eighteenth century, early colonists travelling to the New World took black-coloured turkeys with them. Cross-breeding of some of these with Meleagris gallopavo silvestris, the Eastern sub-species of the wild turkey, led to the later development of the Bronze, Narragansett and Slate breeds. They remained a commercially farmed variety in the U.S. until the early 20th century, but fell out of favour after the development of the Broad Breasted Bronze and Broad Breasted White. Reasonably common in Europe, they are considered an endangered variety of heritage turkey today by the American Livestock Breeds Conservancy, and are also included in Slow Food USA's Ark of Taste, a catalogue of heritage foods in danger of extinction. A 1998 census conducted by the American Livestock Breeds Conservancy found that only 200 Black Spanish turkeys remained in the United States, raised by just 15 different breeders.
To help with conservation efforts, the Accokeek Foundation helped reintroduce this bird to the Potomac River tidewater region by sharing breeding stock with other historical museums and local farmers. A rafter of Black Spanish turkeys is currently being preserved by the Heritage Breed Livestock Conservation Program within the National Colonial Farm at Piscataway Park to increase public awareness of this threatened breed.
Pharmaceutical engineering
Pharmaceutical engineering is a branch of engineering focused on discovering, formulating, and manufacturing medication, analytical and quality control processes, and on designing, building, and improving manufacturing sites that produce drugs. It utilizes the fields of chemical engineering, biomedical engineering, pharmaceutical sciences, and industrial engineering.

History

Humans have a long history of using derivatives of natural resources, such as plants, as medication. However, it was not until the late 19th century when the technological advancements of chemical companies were combined with medical research that scientists began to manipulate and engineer new medications, drug delivery techniques, and methods of mass production.

Synthesizing new medications

One of the first prominent examples of an engineered, synthetic medication was made by Paul Ehrlich. Ehrlich had found that Atoxyl, an arsenic-containing compound which is harmful to humans, was very effective at killing Treponema pallidum, the bacteria which causes syphilis. He hypothesized that if the structure of Atoxyl was altered, a "magic bullet" could potentially be identified which would kill the parasitic bacteria without having any adverse effects on human health. He developed many compounds stemming from the chemical structure of Atoxyl and eventually identified one compound which was the most effective against syphilis while being the least harmful to humans, which became known as Salvarsan. Salvarsan was widely used to treat syphilis within years of its discovery.

Beginning of mass production

In 1928, Alexander Fleming discovered a mold named Penicillium chrysogenum which prevented many types of bacteria from growing. Scientists identified the potential of this mold to provide treatment in humans against bacteria which cause infections.
During World War II, the United Kingdom and the United States worked together to find a method of mass-producing penicillin, a derivative of the Penicillium mold, which had the potential to save many lives during the war since it could treat infections common in injured soldiers. Although penicillin could be isolated from the mold in a laboratory setting, there was no known way to obtain the amount of medication needed to treat the quantity of people who needed it. Scientists with major chemical companies such as Pfizer were able to develop a deep-fermentation process which could produce a high yield of penicillin. In 1944, Pfizer opened the first penicillin factory, and its products were exported to aid the war efforts overseas. Controlled drug release Tablets for oral consumption of medication have been utilized since approximately 1500 B.C.; however, for a long time the only method of drug release was immediate release, meaning all of the medication is released in the body at once. In the 1950s, sustained release technology was developed. Through mechanisms such as osmosis and diffusion, pills were designed that could release the medication over a 12-hour to 24-hour period. Smith, Kline & French developed one of the first major successful sustained release technologies. Their formulation consisted of a collection of small tablets taken at the same time, with varying amounts of wax coating that allowed some tablets to dissolve in the body faster than others. The result was a continuous release of the drug as it travelled through the intestinal tract. Although modern-day research focuses on extending the controlled release timescale to the order of months, once-a-day and twice-a-day pills are still the most widely utilized controlled drug release method. 
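The staggered wax-coating mechanism described above can be sketched numerically. The following is a minimal illustrative model, not the actual historical formulation: it assumes (hypothetically) four equal pellet groups, coating-dissolution lag times of 0, 3, 6 and 9 hours, and a first-order release constant of 1 per hour.

```python
import math

def cumulative_release(t_hours, lag_times, rate_per_hour=1.0):
    """Fraction of the total dose released by time t for a formulation of
    equal pellet groups, each beginning first-order release once its wax
    coating dissolves at its lag time (illustrative model only)."""
    released = 0.0
    for lag in lag_times:
        if t_hours > lag:
            # First-order release from this pellet group after its lag.
            released += 1.0 - math.exp(-rate_per_hour * (t_hours - lag))
    return released / len(lag_times)

# Hypothetical pellet groups whose coatings dissolve at 0, 3, 6 and 9 hours,
# spreading release over a 12-hour window instead of delivering it all at once.
for t in (0, 3, 6, 9, 12):
    print(f"t = {t:2d} h: {cumulative_release(t, [0, 3, 6, 9]):.0%} released")
```

Each newly dissolved coating adds a fresh burst of release, so the summed curve rises roughly steadily across the dosing window rather than spiking at time zero as an immediate-release tablet would.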
Formation of the ISPE In 1980, the International Society for Pharmaceutical Engineering was formed to support and guide professionals in the pharmaceutical industry through all parts of the process of bringing new medications to the market. The ISPE writes standards and guidelines for individuals and companies to use and to model their practices after. The ISPE also hosts training sessions and conferences for professionals to attend, learn, and collaborate with others in the field.
Technology
Disciplines
null
19420000
https://en.wikipedia.org/wiki/Oryza%20glaberrima
Oryza glaberrima
Oryza glaberrima, commonly known as African rice, is one of the two domesticated rice species. It was first domesticated and grown in West Africa around 3,000 years ago. In agriculture, it has largely been replaced by higher-yielding Asian rice (O. sativa), and the number of varieties grown is declining. It still persists, making up an estimated 20% of rice grown in West Africa. It is now rarely sold in West African markets, having been replaced by Asian strains. In comparison to Asian rice, African rice is hardy, pest-resistant, low-labour, and suited to a larger variety of African conditions. It is described as filling, with a distinct nutty flavour. It is also grown for cultural reasons; for instance, it is sacred to followers of Awasena (a traditional African religion) among the Jola people, and is a heritage variety in the United States. Crossbreeding between African and Asian rice is difficult, but there exist some crosses. Jones et al. 1997 and Gridley et al. 2002 provide hybrids combining glaberrima's disease resistance and sativa's yield potential. History It is highly likely that humans have independently domesticated two different rice species. African rice is very genetically similar to wild African rice (O. barthii), as Asian rice (O. sativa) is to wild Asian rice (O. rufipogon), and these two divisions have wide genetic differences between them. O. barthii still grows wild in Africa, in a wide variety of open habitats. The Sahara was formerly wetter, with massive paleolakes in what is now the Western Sahara. As the climate dried, the wild rice retreated and probably became increasingly domesticated as it relied on humans for irrigation. Rice growing in deeper, more permanent water became floating rice. It was domesticated about 3000 years ago in the inland delta of the Upper Niger River, in what is now Mali. It then spread through West Africa. It has also been recorded off the east coast of Africa, in the Zanzibar Archipelago. O. 
barthii seedheads shatter, while O. glaberrima does not shatter as much. In the late fifteenth and sixteenth centuries, the Portuguese sailed to the Southern Rivers area in West Africa and wrote that the land was rich in rice. "They said they found the country covered by vast crops, with many cotton trees and large fields planted in rice ... the country looked to them as having the aspect of a pond (i.e., a marais)". The Portuguese accounts speak of the Falupo Jola, Landuma, Biafada, and Bainik growing rice. André Álvares de Almada wrote about the dike systems used for rice cultivation, from which modern West African rice dike systems are descended. African rice was brought to the Americas with the transatlantic slave trade, arriving in Brazil probably by the 1550s and in the U.S. in 1784. The seed was carried as provisions on slave ships, and the technology and skills needed to grow it were brought by enslaved rice farmers. Newly imported African slaves were marketed for their rice-growing skills, as the high price of rice made it a major cash crop. Not all Africans came to the Americas with knowledge of rice growing, due to the vast variabilities in cultures and ethnicities, but the practice of cultivation was shared throughout the Carolina plantations, which allowed the enslaved people to develop a new sense of culture and made African rice the primary source of nutrition. The tolerance of African rice for brackish water meant it could be grown on coastal deltas, as it was in West Africa. There are numerous stories about how the rice came to North America, including a slave smuggling grains in her hair and a ship driven in to trade by a storm. African rice is a rare crop in Brazil, Guyana, El Salvador and Panama, but it is still occasionally grown there. There are also native South American rices, which makes it hard to trace the arrival of African rice in historical accounts. 
Asian rice came to West Africa in the late 1800s, and by the late twentieth century had substantially supplanted native African rice. However, African rice was still used in specific, often marginal habitats, and preferred for its taste. Farmers may grow African rice to eat and Asian rice to sell, as African rice is not exported. The 2007 food price shocks drove efforts to raise rice production. Rice-growing regions of Africa are generally net rice importers (partly due to a lack of good local rice-processing capacity) so price increases hurt. Among the efforts to increase yield was the adoption of nerica cultivars, crossbred to local farmers' specifications using African rice varieties they provided. These were bred during the 1990s and released in the early 21st century. Results so far have been mixed; the nerica varieties are less hardy and more labour-intensive, and effects on real-world yields vary. Subsidies of nerica seeds have also been criticized for encouraging the loss of native varieties and reducing the independence of farmers. Uses Multiple varieties of African rice are often grown so that the harvest is staggered. In this way, the harvest can be eaten fresh. Freshly harvested rice is moist, and can be puffed in fire, and eaten. The rice takes on a brownish color when fried; this is because of the husk, which is green and turns brown when heated. African rice can be prepared in much the same way as Asian rice, but has a distinct nutty flavor, for which it is favored in West Africa. African rice grains are often reddish in colour; some varieties are strongly aromatic, others, like Carolina Gold, are not aromatic at all. African rice is also used in local traditional medicine. Traits Overall, O. glaberrima is considered a much more desirable and healthier choice in places like Nigeria by West African farmers, where it is used to make Ofada rice because of its high nutrition content, despite being less popular than O. 
sativa cultivars (). Appearance African rice is a tall rice plant, usually under but up to for floating varieties, which may also branch and root from higher stem nodes. Generally, African rice has small, pear-shaped grain, reddish bran and green to black hulls, straight, simply-branched panicles, and short, rounded ligules. There are, however, exceptions, and it can be hard to distinguish from Asian rice. For complete certainty, a genetic test can be used. Grain qualities Grains are brittle. Hardiness African rice is well adapted to the West African environment. It is drought- and deep-water-resistant, and tolerates fluctuations in water depth, iron toxicity, infertile soils, severe climatic conditions, and human neglect better than Asian rice. Some varieties also mature more quickly, and may be sown directly on higher ground, eliminating the need to transplant seedlings. Most are rain-watered, and the soil is often not cultivated. African rice has profuse vegetative growth, which smothers weeds. It exhibits better resistance to various rice pests and diseases, such as blast disease, African rice gall midge (Orseolia oryzivora), parasitic nematodes (Heterodera sacchari and Meloidogyne spp.), rice stripe necrosis virus (a Benyvirus), rice yellow mottle virus, and the parasitic plant Striga. Yield and processing African rices shatter more than Asian rices, possibly because they have not been domesticated for as long. A few varieties of African rice are as resistant to shattering as shatter-resistant Asian varieties, but most are not; on average, about half of the grains are scattered and lost. This is why yield is lower; when the heads of African rice are bagged before they become ripe, so that the shattered grains are caught in paper bags, the yield of African rice is the same as the yield of Asian rice. Like other grains, rice may lodge, or fall over, when grain heads are full. 
African rice's greater height and weaker stems make it more likely to lodge, although they also let it survive in deep water and make it easier to harvest. African rice tends to elongate rapidly if completely submerged, which is not advantageous in regions prone to short floods, as it weakens the plant. The grains of African rice are more brittle than those of Asian rice. The grains are more likely to break during industrial polishing. Broken rice is widely used in West Africa, and some cookbooks from the region will suggest manually breaking the grains for certain recipes, but most broken rice eaten is from Asian rice, about 16% of which is broken in processing. The genome of O. glaberrima has been sequenced, and was published in 2014. This allowed genomic as well as physiological comparison with related species, and identified some effects of some genes. Breeding African and Asian rice do not readily cross-pollinate, even under controlled conditions, and when they do, the offspring are very rarely fertile. Even the fertile crossbred offspring have low fertility. Crossbreeding seems to have succeeded in at least one area of Maritime Guinea, as some varieties there show crossbred genes. More recently, the nerica cultivars (new rice for Africa) have been developed using green revolution techniques like embryo rescue. Over 3000 crosses were made as part of the NERICA program. Breeding within the species is easier, and there are uncounted numbers of African rice varieties, although the majority may have been lost. A similar crossed variety was bred in the United States in 2011, and work is being done on crosses with Indian rice varieties. Cultivars African cultivars There are a great many varieties of African rice. In the 1960s, older women in Jipalom (Ziguinchor Region, Senegal) could unhesitatingly name more than ten varieties of African rice that were no longer planted, besides the half-dozen that were then still being planted. 
Each woman would plant multiple different varieties, to suit varying microhabitats and to stagger the harvest. A 2006 survey showed that a village typically cultivated 25 varieties of rice; an individual household would on average have 14 varieties and grow four per year; this, however, is down from the seven to nine varieties per woman that was average in previous decades. Women, who are traditionally responsible for the seeds, trade them often over long-distance networks. Varieties, each with subtypes, include: aspera (Latin: "rough") ebenicolorata ("ebony-colored") evoluta ("unfolded") rigida ("rigid") rustica ("coarse") The cultivars the Africa Rice Center calls and have low shattering, and thus yields comparable with low-shattering Asian rice varieties. Scientists from the Africa Rice Center managed to cross-breed African rice with Asian rice varieties to produce a group of interspecific cultivars called New Rice for Africa (NERICA). American cultivars Carolina Gold is an heirloom cultivar grown in the early United States, sometimes known as golden-seed rice for the colour of its grains. Long-grain gold-seed rice boasted grains long (up 11% from ), and was brought to market by planter Joshua John Ward in the 1840s. Despite its popularity, the variety was lost in the American Civil War. Charleston Gold was released in 2011 and is a crossbreed of Carolina Gold and two breeding lines of O. s. indica called and (a dwarf, fragrant breeding line), which raised the yield, shortened the stem, and added an aromatic quality to the rice.
Biology and health sciences
Grains
Plants
6092793
https://en.wikipedia.org/wiki/Odobenocetops
Odobenocetops
Odobenocetops () is an extinct genus of small toothed whale known from Chile and Peru. Its fossils are found in Miocene-aged marine strata of the Bahía Inglesa Formation and Pisco Formation. Two species of Odobenocetops are currently recognized, O. peruvianus and the slightly younger O. leptodon. Odobenocetops is mostly known for its large asymmetric tusks, which emerge from pronounced processes formed by the premaxillae, known as the alveolar sheaths. These tusks are thought to be sexually dimorphic and are only strongly pronounced in male individuals, while females appear to possess two similarly sized tusks. In the holotype of O. peruvianus the elongated right tusk is broken, leaving its precise length ambiguous. O. leptodon on the other hand preserves complete tusks, showing that at least in this species the longer tusk reached a total length of long, of which is located outside of the alveolar sheath. While these tusks are reminiscent of the tusk seen in the closely related narwhals, they evolved independently. Their purpose remains unknown, but the most common interpretation is that they served a non-violent social role, as they are too fragile for combat. The alveolar sheaths on the other hand may have been used as orientation guides during foraging. Besides the two tusks in the upper jaw, Odobenocetops is thought to have been toothless. Another difference between this genus and other whales is that the melon, an organ important for echolocation, is reduced in O. leptodon and vestigial or entirely absent in O. peruvianus. At least the older species compensated for this by having large, dorsally located eyes giving it binocular vision. The fact that only the older species lost its melon has been taken as evidence that they were sister taxa, rather than one species evolving directly from the other. 
In addition to their vision or echolocation, Christian de Muizon argues that they may have possessed tactile hairs, which are also found in walruses and to a lesser extent in Amazon river dolphins. Odobenocetops is among the cetaceans with the greatest range of head motion, exceeding even the values of the beluga whale. This may have helped while foraging, extending the neck in a way that keeps their tusks roughly parallel to the rest of the body. Due to the anatomy of the palate and other similarities to the walrus, it is thought that this whale was a suction-feeding molluscivore, searching for bivalves on the ocean floor, uncovering them with precise jets of water, grasping the uncovered molluscs with a powerful upper lip and using its tongue like a piston to suck out the soft parts of their prey, leaving the shell intact. History and naming The first fossil material, a single skull missing much of its left side, was recovered in 1990 from the Sud Sacaco horizon of the Pisco Formation in Peru. Although initially thought to correlate with the earliest Pliocene, later studies have found that these sediments were deposited during the Miocene. This skull, designated USNM 460306 initially and later USNM 488252, was described by Christian de Muizon in 1993, establishing the genus Odobenocetops with O. peruvianus serving as the type species. Due to this genus' strange anatomy, Muizon also coined the family Odobenocetopsidae. Several additional fragmentary fossils, namely periotic and tympanic bones, were later referred to the genus as well. More substantial material was found in the form of three additional specimens, one of which is thought to represent a female O. peruvianus while the other two were described as a second, younger species named O. leptodon. The holotype specimen of O. leptodon is a nearly complete skull with the associated atlas, the topmost of the neck vertebrae. The other specimen of O. 
leptodon is a much less complete skull, badly weathered and missing the right tusk, but preserving an assortment of postcranial elements such as ribs, vertebrae and a partial forelimb. While the referred O. peruvianus skull was found in the SAS horizon like the type specimen, the new species stems from the SAO horizon, which is slightly younger. The generic name Odobenocetops comes from the Greek odon for "tooth", baino which means "walk", the Latin word cetus for "whale" and ops, "like". In combination, the name means "cetacean that seems to walk on its teeth", a name chosen both to reflect the animal's potential feeding position and to refer to the similarity with the extant walrus (Odobenus). The species name of O. peruvianus refers to Peru, the country it was found in. Description Size The body length has been estimated to range from . It is possible that Odobenocetops reached a mass similar to that of modern narwhals, between . Skull The skull of O. peruvianus is large, measuring throughout its preserved length. The skull has a characteristic profile, appearing strongly concave between the elevated snout and skullroof. When viewed from above, it is also clearly separated into two large portions. The anteriormost portion, which includes the premaxillae, tusks and nares, is separated from the back of the head by a strong constriction, giving the skull somewhat of an hourglass shape. The skulls of modern whales show a great variety of adaptations towards aquatic life, clearly setting them apart from all other mammals. Among these adaptations is what is commonly referred to as "telescoping", a term that generally describes the fact that bones typically far apart are very closely spaced in cetaceans and largely overlap. However, Odobenocetops is unique due to how its skull appears to reverse the telescoped condition of the cetacean skull. 
This is achieved through the maxilla and frontal bones regressing towards the tip of the snout and the bony nares being moved forward. Subsequently, this gives the rostrum its characteristic short and round appearance, in contrast to the elongated skulls found in other cetaceans. Related to this, the type species O. peruvianus is thought to have lacked a melon (an important sensory organ), or at most to have had a vestigial one. The bony nares are now located near the tip of the skull, in contrast to the blowholes of whales and dolphins located on the top of the skull. In other odontocetes, parts of the frontal and maxillae cover the temporal fossae. In Odobenocetops, these bones are reduced and narrowed so that the temporal fossae are open dorsally. Additionally, the parietal bones are well exposed dorsally, which corresponds with a well-developed temporalis muscle. The periotic and tympanic bones are similar to those in other dolphins. The eye-sockets are oriented upwards and sideways, and not fully laterally like in other dolphins. The palate is arched, large and deep like in walruses and besides the two tusks in the premaxilla, Odobenocetops was toothless. The tip of the snout, specifically the premaxilla, is covered in important insertion points for facial musculature while also housing a great number of neurovascular foramina. This has been interpreted as supporting a strong upper lip and potentially even vibrissae similar to those in a walrus. The skull of O. leptodon differs from that of O. peruvianus in several ways. The palate is much deeper, longer and wider and the anterior border is curved more gently, giving it a U-shape rather than a V-shape as in O. peruvianus. The palate itself is also asymmetrical and was likely positioned parallel to the seafloor, but not at a right angle with the sagittal plane. 
The apex of the snout in general is more massive than in the type species and at the tip of the rostrum, between the premaxillae, there is a unique pair of supplementary bones not present in the older species at all. These bones, which may have been the rostral or prenasal bones, are similar to what is seen in mammals such as elephant shrews, moles, tapirs, saigas and pigs. A unifying factor of these groups is the presence of a strong upper lip, further supporting what was already inferred for O. peruvianus. The premaxillary foramina are missing in O. leptodon and a dorsal fossa (shallow depression) is present on the premaxilla. This fossa suggests the presence of a melon in O. leptodon, an organ either absent or strongly reduced in O. peruvianus. On the other hand, the orbit in O. leptodon has an anterior edge that is only slightly concave, whereas it is deeply notched in O. peruvianus, which suggests that the binocular vision of O. leptodon was not as well developed as in the older form. Although no mandibles are known for either species, it has been inferred that the mandible was short and toothless based on the well-developed pterygoid and temporalis muscles. Like the rest of the skull, the periotic bone surrounding the inner ear is highly derived and does not closely resemble the typical anatomy expected from a cetacean. Still, the cochlear canal follows the general anatomy also present in its closest relatives, the beluga and narwhal. The inner ear also has large semicircular canals, a large vestibular aqueduct and a large number of facial and vestibular nerve fibres. Other toothed whales have semicircular canals that are notably smaller than the cochlear canal. Again, Odobenocetops most closely resembles the related beluga. It is possible that this correlates with increased mobility of the neck and head. 
Tusks In addition to the unique shape of the skull, Odobenocetops is most easily distinguished from other cetaceans by the presence of a pair of asymmetric tusks composed entirely of dentine. These tusks are housed by large processes, known as the alveolar sheaths, formed by the premaxilla, which are directed back- and downwards at a 60° angle from the horizontal plane of the skull. Such a tusk is only known in a single other cetacean, the extant narwhal, in which they are also asymmetric. However, in narwhals the tusk is implanted in the left maxilla, whereas the tusk in Odobenocetops originates in the right premaxilla. The tusks in these two genera are therefore not homologous, and the occurrence of tusks in Odobenocetops is a convergence with narwhals. In the holotype of Odobenocetops peruvianus both tusks are incomplete. Based on the preserved elements, it is estimated that the longer, right tusk measured between in length. The left tusk was notably smaller, being estimated at no more than long and possibly entirely contained in the premaxillary process. Following the discovery of O. leptodon, Muizon speculated that both tusks could have attained larger sizes in other individuals. This, however, would require additional specimens to corroborate the hypothesis. The second skull, described by Muizon in 1999, differs significantly from the 1993 skull despite being found in the same horizon. Unlike the drastic size difference between the left and right tusk of the type specimen, the tusks of the second specimen were both mostly symmetrical and short. Rather than taking this as evidence for an additional species, Muizon suggests that this skull may have belonged to a female individual displaying sexual dimorphism. This is supported by the dimorphism seen in modern narwhals, in which only males possess the iconic tusk while most females lack them. The pulp cavity of the elongated tusk was long, indicating that it grew continuously. 
The short tusk also appeared to still be growing, even if at a much slower speed. The holotype of the younger species O. leptodon was found with both tusks in situ: the right one was needle-like and long, of which was located outside of the premaxillary process, while the left tusk was only long, similar in length to the small tusk of O. peruvianus. Despite the small tusk only being slightly longer than that of the type species, its wear facet indicates that it was erupted and not confined within the bony process of the premaxilla. The larger tusk also preserves a clear wear facet at its tip, giving the apex of the tusk a strongly oblique shape due to how the tusk was used. Given the tusk's length and slenderness, it was likely very fragile, which is considered to be an argument against it being held at a 45° angle from the body. This is corroborated by the anatomy of the neck. The way the atlas and the occipital condyle articulate suggests that in neutral position, the neck would have been angled slightly downward, which inclines the tusk slightly upward into a position parallel to the rest of the body. Because the tusk additionally projects slightly towards the side, Odobenocetops could lower its head even further than that without having its tooth get in the way of its front flippers. The atlas itself, like the skull, is asymmetrical, with a stronger left side to compensate for the enlarged tusk. Phylogeny Odobenocetops was an early member of the dolphin superfamily, more closely related to narwhals than dolphins but with tusks projecting towards the rear of its body. Muizon placed Odobenocetopsidae as a sister group to the Monodontidae (the family including the narwhal and beluga whale). Murakami and colleagues placed Odobenocetopsidae in a large clade together with Phocoenidae (porpoises), Monodontidae, and Albireonidae (an extinct group similar to porpoises). 
This clade originated in the Pacific Ocean in the Langhian (15–13 Ma) and diversified from there during the Serravallian and Tortonian (13–7 Ma). The relation between the two species may be that of sister taxa rather than successive species. Muizon points out that Odobenocetops leptodon clearly has a more derived palate than its older relative, yet retains the basal melon, which is heavily reduced in O. peruvianus as a derived trait. This suggests that they are two different branches of the same genus, rather than one species having evolved from the other. Despite their very limited range, no transitional form between Odobenocetops and other whales is known, leaving their precise origin a mystery. Paleobiology Senses The melon, an important fatty sensory organ present towards the front of the head and associated with echolocation in toothed whales, appears to be either heavily reduced or entirely absent in Odobenocetops due to the highly specialised skull shape of the animal. The anatomy of the inner ear as seen in O. peruvianus indicates that Odobenocetops was capable of ultrasonic hearing. Specifically, the cochlear anatomy resembles that of belugas and narwhals, which generally allows for the peak perception of sounds below 80 kHz. The anatomy points towards Odobenocetops being at the lower end of this range, its peak sensitivity likely ranging from 35 to 50 kHz. Despite this, O. peruvianus displays a series of characters suggesting that it was generally less capable of producing sounds itself. This includes not just the absent melon, but also the lack or the reduction of premaxillary sacs, nasal plugs and the diverticula in the nasal passage. Furthermore, the extremely derived skull of Odobenocetops likely means that many of the nasolabialis muscles were reduced or entirely absent. 
Overall, this suggests that Odobenocetops peruvianus was most likely incapable of producing the beamed, gated signals that define echolocation and could only passively listen to ultrasonic sound. While this could still provide valuable information about the animal's surroundings, it is not nearly as complex as the biosonar seen in other toothed whales. However, Muizon suggests that this may not have been a hindrance to the animal and that if positioned oblique to the seafloor, the enlarged and dorsally located eyes of the whale may have provided good binocular vision. This would mean that while losing its biosonar, O. peruvianus instead developed much better vision to compensate for this. Odobenocetops leptodon differed significantly in this regard. The eyes were still oriented more dorsally, but the shape of the orbits does not support the idea that this species also had well-developed binocular vision. The precise state of this species' vision is uncertain, but it may have ranged from only having reduced binocular vision to no binocular vision at all. Unlike the older species though, O. leptodon preserves small depressions on the premaxillae that indicate that the animal was equipped with premaxillary sacs. This would mean that although the melon is absent in the type species, it was at least somewhat developed in O. leptodon. Such a small melon would further be supported by the width of the apex of the rostrum. In O. peruvianus the rostrum is simply too narrow and tapering to have room for a melon. Regardless, even with a melon present in O. leptodon, this organ would still be reduced relative to other toothed whales. All this suggests that the two species varied in their approach to foraging. O. peruvianus, with its vestigial or non-existent melon, relied on its vision, which was exceptionally well developed compared to that of other cetaceans. O. 
leptodon, on the other hand, appears to have had much poorer binocular vision, if any, and instead possessed a small melon, likely hunting primarily through the use of echolocation. Another sense Odobenocetops may have used was touch through the presence of vibrissae, sensitive hairs as seen in the modern walrus and other pinnipeds, which Muizon speculates may have been present. The later discovery of small foramina along the sheaths of the tusks may correspond with this idea. However, the presence of fully formed vibrissae is not confirmed and would be a unique adaptation among toothed whales, as tactile hairs are typically vestigial structures in the group, only found in adult Amazon river dolphins, some mysticetes and the calves of a few other toothed whales. If they were present, they and the strong upper lip may have formed a structure similar to the rostral disc of the modern dugong. Tusk function Generally, the enlarged tusk's length and slenderness both make it rather fragile, which is supported by the fact that the tooth of the holotype was broken while the animal was still alive. The function of the tusks themselves is not entirely clear. Initially, Muizon proposed that they were merely a social instrument and not used in foraging, a hypothesis favored by later discoveries. The fragile nature of the enlarged tusks indicates that they were not used in any way that would require them to apply force, for instance digging or fighting. The idea that they were possibly used in a non-violent social way may explain why female individuals lacked these enlarged tusks. One way the tusks could have been used in such a fashion would be to establish hierarchy without actually having to fight. However, they might still have served some unknown role in feeding that would not require the tusk to be endangered by breakage. Studies conducted on the second species show that in O. leptodon, both tusks have noticeable wear facets. 
In the case of the enlarged right tusk, this facet runs parallel to the crest of the palate and the seafloor, indicating that the wear of the tooth may have been caused by it being dragged along the bottom of the ocean during foraging. Muizon and colleagues suggest that the tusks could have been held parallel to the seafloor, serving as a sort of orientation guide for the animal during foraging. In 2002 Muizon and colleagues considered the function of the tusks in greater detail, writing on the pros and cons of various potential uses. Many of these proposed functions were, however, quickly dismissed due to their strange nature. Use as ballast or as a forceful feeding adaptation is considered unlikely given the asymmetry of the tusks, nor would such a use be supported by modern relatives or analogues. Using the tusks for climbing on land is quickly dismissed because such behavior would be out of place for anything but the basalmost cetaceans, while any use that would involve sea ice (such as creating breathing holes) is dismissed due to the climate of the region, which was much too warm. The possibility that they are simply an evolutionary leftover is also discarded, as such a trait would quickly be lost if it served no purpose. Again, a social function appears to have been most likely, even though the exact details of how the tusks would be used remain mysterious. Muizon and colleagues argue that although the tusks were positioned in a way that would allow them to slash at the flanks of other individuals after approaching head on, their fragile nature seems to preclude use in actual combat. A purely visual display would be more consistent with the strength of the structure, but is not favored by the orientation of the tusk and the fact that it is only visible from one side. The very limited sample size only serves to deepen the mystery.
Regardless of their function, Muizon and colleagues propose that they were a secondary sexual character that was subject to rapid sexual selection in a very narrow timeframe. The sheaths of the tusks themselves may have been an important feature in their own right. Muizon and colleagues speculate that they may have served as orientation guides and stabilizers to the mouth and the speculative set of vibrissae. Muizon compares this to the sled runners used in underwater photography, which keep the camera stable and pointed in the right direction. Their function is no less disputed than that of the tusks, even though the fact that they are nearly symmetrical and found in both sexes suggests that it was unrelated to sexually dimorphic behavior. The minor asymmetry is thought to be merely a compromise necessitated by the enlarged tusk. Regardless, the fact that they are of generally similar size indicates that they had a function beyond housing the tusks and were likely subject to their own selective pressure. Their possible functions were explored in greater detail in the same publication as the tusks, again exploring different ideas and assessing the advantages and disadvantages. For instance, although their function to support the tusks seems natural, this would not explain why the sheaths are much more symmetrical despite only one of the two tusks being enlarged. Although Odobenocetops may have profited from the presence of hydrofoils, the sheaths are considered to be too small to serve this function effectively. Furthermore, hydrofoils might not have been very useful for the slow-swimming Odobenocetops: the sheaths would have generated little lift, would if anything have been counterproductive when the animal tried to feed, and their stiff attachment to the skull makes them less effective than flippers.
The sheaths are not dense enough to serve as ballast and are not angled correctly to form an effective plough during foraging (which would further clash with the interpretation that they might have been covered in tactile hairs). They may not have been points for muscle attachment, as the back of the skull already serves this purpose, and they appear to have been overdeveloped for simply restricting the area affected by the animal's suction force. They could have expanded the surface area for tactile hairs; however, only parts of the sheath contain the foramina used to infer these hairs, whose presence is not confirmed to begin with. Although Muizon and colleagues find flaws with these last two hypothetical functions, they cannot rule them out entirely and suggest that they may even have been factors early in the evolution of the sheaths, before their length reached the size seen in the known fossils of Odobenocetops. It is possible that they could have also served social functions in display or combat, making the animal appear larger or serving as a shield against attacks. The idea that they are a retained primitive feature, on the other hand, is questioned as it does not explain what caused them to attain their size in the first place, given that they likely did not serve to support the tusks. Two of the hypotheses regarding the function of the sheaths were, however, found to lack any direct evidence to the contrary. Skin attached to the sheaths could have been an adaptation for feeding, protecting the eyes from mud and sediment, while the use as orientation guides is an idea Muizon had already suggested in previous publications. Although there may have been several possible advantages to the alveolar sheaths, most are thought to have been secondary and not the reason for their evolution. Instead, Muizon suggests that it was primarily the function as hydrofoils that caused Odobenocetops to develop these elongated structures.
Range of motion
Research conducted on the atlas of Odobenocetops suggests that the head, when held in its neutral position, would be positioned at a 133° angle relative to the axis of the body. This means that the tusk, which is angled downward relative to the skull, would be held in a raised position at a 13° angle relative to the torso. Due to the flexibility of the neck, Odobenocetops could have easily changed the angle at which it held its head, allowing it to change the position of the tusk as needed. Muizon and colleagues suggest that it may have angled its head down while swimming, which would effectively bring the tusk into a position roughly parallel with the rest of the body and reduce drag. When looking at the anatomy of the occipital condyle, however, these values change. Here the tusk would diverge from the body at an angle of only 6°, running effectively parallel to the rest of the body. This difference may be caused by the difficulty in determining the neutral position between maximum flexion and maximum extension of the neck. Whatever the case, Odobenocetops shows pronounced points for muscle attachment on the basioccipital bone, which corroborates the presence of the strong neck musculature that would be needed to compensate for the weight of the large tusks. While the neck would flex to bring the tusks up while swimming, during foraging the densely built skull and tusk would keep the head down, essentially pulling the head to the seafloor while the buoyant body would be held oblique to the ground. In this position the neck would be hyperextended and the tusks may have been held at a 45° angle relative to the torso. Here too Odobenocetops shows clear convergence with the walrus. Muizon and colleagues contrast this with the position taken by sirenians like the dugong, which are capable of swimming parallel to the seafloor thanks to their denser bone structure relative to cetaceans and pinnipeds.
The hyperextension performed during feeding is the result of the incredibly mobile neck, which allows for a range of motion of up to 83°. This includes the hyperextension of 7° during feeding and hyperflexion of up to 90°. This far exceeds the 50° range of motion seen in the beluga whale, the living odontocete with the greatest range of motion. Further support for this can be found in the anatomy of the atlantooccipital joint. In accordance with this, Odobenocetops had a range of motion at least 29% greater than that of belugas. While the hyperextension of the neck would be used in feeding, the precise purpose of the great possible range of flexion is not known. Regardless of purpose, such a position would have effectively allowed Odobenocetops to bring the tips of the tusks into a position above their origin without hindering the movement of the flippers, due to the angle at which the tusks protrude outwards. The great range of motion estimated from the articulation of the bones is further supported by the numerous strongly developed muscle attachments seen on the skull of Odobenocetops. Foraging and feeding
In the morphology of the skull, Odobenocetops peruvianus shares many characteristics with the modern walrus. Due to this it is believed that the two animals, although unrelated, likely shared a very similar lifestyle. The deep palate, rounded snout supporting a strong upper lip, tusks and reduced dentition are all traits shared between this cetacean and walruses, both extant and extinct. The powerful musculature associated with the movement of the lower jaw particularly stands out. As the lower jaw was likely toothless, like the upper jaw, the strong musculature could not have functioned to allow for chewing and grinding. Instead the musculature is thought to have enabled Odobenocetops to suction feed.
Like the walrus, Odobenocetops might have used its upper lip to grab various marine bivalves and sucked out the foot and siphon with the help of a large piston-like tongue. The entire mouth would essentially function like a vacuum pump. Such a feeding mechanism is further supported by the musculature connecting the upper and lower jaw. The glenoid fossa allows for forward and backward movement of the mandible, while the temporalis muscle, masseter, tongue and throat musculature may have all contributed to moving the lower jaw back. The pterygoid muscles would have been responsible for forward movement. After having sucked out the soft parts of the bivalves it fed on, Odobenocetops could have simply ejected the remains of the shell. The same is also applicable to O. leptodon, although the modified and more pronounced anatomy of the palate may indicate that its ability to suck out molluscs was even greater than that of the older species. Regardless, both species are considered to have been bottom feeders like the modern walrus. The asymmetrical palate was inclined more towards the left to compensate for the massive right tusk. Muizon and colleagues also mention the possibility that it was the other way around: the asymmetry was not responsible for the preference for the left side, but rather this already established preference was responsible for the development of the tusk. A preference for one particular side is not unheard of in modern cetaceans, as bottlenose dolphins occasionally, and gray whales consistently, show a preference for feeding using their right side. All this combined indicates that Odobenocetops was a bottom-feeding molluscivore, detecting various bivalves or crustaceans through either echolocation or exceptional vision, depending on the species, and possibly with the assistance of tactile hairs.
While foraging the animal would keep its head down and the tusks parallel to the sediment, while the rest of the body would be held oblique due to its greater buoyancy. The tail fluke would help keep this position while also providing propulsion, whereas the forelimbs may have been used as stabilizers. Once a suitable prey item was detected, Odobenocetops could have created a powerful jet of water using its mouth (an ability also seen in belugas and orcas), excavating the target from the sediment. It would then likely have used its powerful upper lip to grasp and hold the invertebrate in place before utilizing the complex suction-feeding mechanism created by the palate and tongue to suck out the soft parts. Once these were out, the shell could easily be discarded. Paleoenvironment
Odobenocetops is mainly known from the Miocene Pisco Formation of Peru, which is thought to represent a coastal environment with calm, shallow waters. The rock units preserve a great cetacean diversity, including cetotheriids, rorquals, the pontoporiid Pliopontos, the beaked whale Ninoziphius as well as the porpoise Piscolithax and multiple sperm whales including the giant Livyatan. Other marine animals include the marine sloth Thalassocnus, the giant shark Megalodon, two species of marine gharials, and various seals and penguins. Bivalves that could have served as prey for Odobenocetops have also been found in the area, including the genera Anadara, Trachycardium, Hybolophus, Panopea and Miltha. Odobenocetops is also known from the Late Miocene-aged Cerro Ballena locality of the Bahía Inglesa Formation of Chile. It is composed of silty sandstones and sands that were deposited in a supratidal flat (flattened beach or berm zone). Contemporaneous vertebrates from this locality include the seal Acrophoca, balaenopterid and sperm whales, billfish, the shark Carcharodon, and the marine sloth Thalassocnus. Other faunal components include a variety of trace fossils left by invertebrates.
G-factor (physics)
A g-factor (also called g value) is a dimensionless quantity that characterizes the magnetic moment and angular momentum of an atom, a particle or the nucleus. It is the ratio of the magnetic moment (or, equivalently, the gyromagnetic ratio) of a particle to that expected of a classical particle of the same charge and angular momentum. In nuclear physics, the nuclear magneton replaces the classically expected magnetic moment (or gyromagnetic ratio) in the definition. The two definitions coincide for the proton. Definition
Dirac particle
The spin magnetic moment of a charged, spin-1/2 particle that does not possess any internal structure (a Dirac particle) is given by μ = g(e/2m)S, where μ is the spin magnetic moment of the particle, g is the g-factor of the particle, e is the elementary charge, m is the mass of the particle, and S is the spin angular momentum of the particle (with magnitude ħ/2 for Dirac particles). Baryon or nucleus
Protons, neutrons, nuclei, and other composite baryonic particles have magnetic moments arising from their spin (both the spin and magnetic moment may be zero, in which case the g-factor is undefined). Conventionally, the associated g-factors are defined using the nuclear magneton, and thus implicitly using the proton's mass rather than the particle's mass as for a Dirac particle. The formula used under this convention is μ = g(μN/ħ)I, where μ is the magnetic moment of the nucleon or nucleus resulting from its spin, g is the effective g-factor, I is its spin angular momentum, μN = eħ/2mp is the nuclear magneton, e is the elementary charge, and mp is the proton rest mass. Calculation
Electron g-factors
There are three magnetic moments associated with an electron: one from its spin angular momentum, one from its orbital angular momentum, and one from its total angular momentum (the quantum-mechanical sum of those two components).
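The two defining conventions above can be compared numerically. The following is a minimal sketch (function names are mine; the constants are CODATA-style values and the quoted proton g-factor is the commonly cited ≈5.5857) checking that the Dirac-style and nuclear-magneton-style definitions give the same moment for the proton:

```python
# Compare the two g-factor conventions for the proton.
HBAR = 1.054571817e-34   # reduced Planck constant, J*s
E = 1.602176634e-19      # elementary charge, C
M_P = 1.67262192369e-27  # proton mass, kg

MU_N = E * HBAR / (2 * M_P)  # nuclear magneton, J/T

def dirac_moment(g, m, spin=HBAR / 2):
    """Dirac-particle convention: mu = g * (e / 2m) * S."""
    return g * (E / (2 * m)) * spin

def nuclear_moment(g, spin):
    """Nuclear-physics convention: mu = g * mu_N * (I / hbar)."""
    return g * MU_N * (spin / HBAR)

g_proton = 5.5856946893  # measured proton g-factor
mu_dirac = dirac_moment(g_proton, M_P)
mu_nucl = nuclear_moment(g_proton, HBAR / 2)
print(mu_dirac, mu_nucl)  # equal: the conventions coincide for the proton
```

Both expressions reduce to g·eħ/4mp for the proton, which is why the article can use either convention interchangeably in that one case.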
Corresponding to these three moments are three different g-factors: Electron spin g-factor
The most known of these is the electron spin g-factor (more often called simply the electron g-factor), ge, defined by μs = ge(μB/ħ)S, where μs is the magnetic moment resulting from the spin of an electron, S is its spin angular momentum, and μB = eħ/2me is the Bohr magneton. In atomic physics, the electron spin g-factor is often defined as the absolute value of ge: gs = |ge|. The z-component of the magnetic moment then becomes μz = −gs μB ms, where msħ are the eigenvalues of the Sz operator, meaning ms can take on values ±1/2. The value gs is roughly equal to 2.002319 and is known to extraordinary precision – one part in 10^13. The reason it is not precisely two is explained by the quantum electrodynamics calculation of the anomalous magnetic dipole moment. The spin g-factor is related to the spin precession frequency for a free electron in a magnetic field of a cyclotron: νs = (gs/2)νc. Electron orbital g-factor
Secondly, the electron orbital g-factor, gL, is defined by μL = −gL(μB/ħ)L, where μL is the magnetic moment resulting from the orbital angular momentum of an electron, L is its orbital angular momentum, and μB is the Bohr magneton. For an infinite-mass nucleus, the value of gL is exactly equal to one, by a quantum-mechanical argument analogous to the derivation of the classical magnetogyric ratio. For an electron in an orbital with a magnetic quantum number ml, the z-component of the orbital magnetic moment is μz = −gL μB ml, which, since gL = 1, is −μB ml. For a finite-mass nucleus, there is an effective g value gL = 1 − 1/M, where M is the ratio of the nuclear mass to the electron mass. Total angular momentum (Landé) g-factor
Thirdly, the Landé g-factor, gJ, is defined by |μJ| = gJ(μB/ħ)|J|, where μJ is the total magnetic moment resulting from both spin and orbital angular momentum of an electron, J is its total angular momentum, and μB is the Bohr magneton. The value of gJ is related to gL and gs by a quantum-mechanical argument; see the article Landé g-factor.
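The relation between gJ, gL and gs mentioned above is the standard Landé formula, which weights gL and gs by the share of orbital and spin angular momentum in J. A brief sketch (the function name is mine) showing that it reduces to gs for a pure spin state and to gL for a pure orbital state:

```python
def lande_g(J, L, S, gL=1.0, gS=2.002319):
    """Lande g-factor: interpolates between gL and gS according to
    how much of the total angular momentum J comes from L versus S."""
    jj, ll, ss = J * (J + 1), L * (L + 1), S * (S + 1)
    return (gL * (jj - ss + ll) + gS * (jj + ss - ll)) / (2 * jj)

print(lande_g(J=0.5, L=0, S=0.5))  # pure spin: recovers gS = 2.002319
print(lande_g(J=1.0, L=1, S=0))    # pure orbit: recovers gL = 1.0
print(lande_g(J=1.5, L=1, S=0.5))  # a 2P(3/2)-like state, about 1.334
```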
μJ and J vectors are not collinear, so only their magnitudes can be compared. Muon g-factor
The muon, like the electron, has a g-factor associated with its spin, given by the equation μ = g(e/2mμ)S, where μ is the magnetic moment resulting from the muon's spin, S is the spin angular momentum, and mμ is the muon mass. That the muon g-factor is not quite the same as the electron g-factor is mostly explained by quantum electrodynamics and its calculation of the anomalous magnetic dipole moment. Almost all of the small difference between the two values (99.96% of it) is due to a well-understood lack of heavy-particle diagrams contributing to the probability for emission of a photon representing the magnetic dipole field, which are present for muons, but not electrons, in QED theory. These are entirely a result of the mass difference between the particles. However, not all of the difference between the g-factors for electrons and muons is exactly explained by the Standard Model. The muon g-factor can, in theory, be affected by physics beyond the Standard Model, so it has been measured very precisely, in particular at the Brookhaven National Laboratory. In the E821 collaboration final report in November 2006, the experimentally measured value differed from the theoretical prediction by 3.4 standard deviations, suggesting that beyond-the-Standard-Model physics may be a contributory factor. The Brookhaven muon storage ring was transported to Fermilab, where the Muon g−2 experiment used it to make more precise measurements of the muon g-factor. On April 7, 2021, the Fermilab Muon g−2 collaboration presented and published a new measurement of the muon magnetic anomaly. When the Brookhaven and Fermilab measurements are combined, the new world average differs from the theory prediction by 4.2 standard deviations. Measured g-factor values
The electron g-factor is one of the most precisely measured values in physics.
Linux
Linux (, ) is a family of open-source Unix-like operating systems based on the Linux kernel, an operating system kernel first released on September 17, 1991, by Linus Torvalds. Linux is typically packaged as a Linux distribution (distro), which includes the kernel and supporting system software and libraries—most of which are provided by third parties—to create a complete operating system, designed as a clone of Unix and released under the copyleft GPL license. Thousands of Linux distributions exist, many based directly or indirectly on other distributions; popular Linux distributions include Debian, Fedora Linux, Linux Mint, Arch Linux, and Ubuntu, while commercial distributions include Red Hat Enterprise Linux, SUSE Linux Enterprise, and ChromeOS. Linux distributions are frequently used in server platforms. Many Linux distributions use the word "Linux" in their name, but the Free Software Foundation uses and recommends the name "GNU/Linux" to emphasize the use and importance of GNU software in many distributions, causing some controversy. Other than the Linux kernel, key components that make up a distribution may include a display server (windowing system), a package manager, a bootloader and a Unix shell. Linux is one of the most prominent examples of free and open-source software collaboration. While originally developed for x86 based personal computers, it has since been ported to more platforms than any other operating system, and is used on a wide variety of devices including PCs, workstations, mainframes and embedded systems. Linux is the predominant operating system for servers and is also used on all of the world's 500 fastest supercomputers. When combined with Android, which is Linux-based and designed for smartphones, they have the largest installed base of all general-purpose operating systems. 
Overview
The Linux kernel was designed by Linus Torvalds, following the lack of a working kernel for GNU, a Unix-compatible operating system made entirely of free software that had been undergoing development since 1983 by Richard Stallman. A working Unix system called Minix was later released, but its license was not entirely free at the time and it was made for educational purposes. The first entirely free Unix for personal computers, 386BSD, did not appear until 1992, by which time Torvalds had already built and publicly released the first version of the Linux kernel on the Internet. Like GNU and 386BSD, Linux did not have any Unix code, being a fresh reimplementation, and therefore avoided the legal issues of the time. Linux distributions became popular in the 1990s and effectively made Unix technologies accessible to home users on personal computers, whereas previously they had been confined to sophisticated workstations. Desktop Linux distributions include a windowing system such as X11 or Wayland and a desktop environment such as GNOME, KDE Plasma or Xfce. Distributions intended for servers may not have a graphical user interface at all or include a solution stack such as LAMP. The source code of Linux may be used, modified, and distributed commercially or non-commercially by anyone under the terms of its respective licenses, such as the GNU General Public License (GPL). This licensing means that anyone may create novel distributions, and doing so is easier than it would be for an operating system such as macOS or Microsoft Windows. The Linux kernel, for example, is licensed under the GPLv2, with an exception for system calls that allows code that calls the kernel via system calls not to be licensed under the GPL. Because of the dominance of Linux-based Android on smartphones, Linux, including Android, has the largest installed base of all general-purpose operating systems. Linux is used by around 4 percent of desktop computers.
The Chromebook, which runs the Linux kernel-based ChromeOS, dominates the US K–12 education market and represents nearly 20 percent of sub-$300 notebook sales in the US. Linux is the leading operating system on servers (over 96.4% of the top one million web servers' operating systems are Linux), leads other big iron systems such as mainframe computers, and is used on all of the world's 500 fastest supercomputers (having gradually displaced all competitors). Linux also runs on embedded systems, i.e., devices whose operating system is typically built into the firmware and is highly tailored to the system. This includes routers, automation controls, smart home devices, video game consoles, televisions (Samsung and LG smart TVs), automobiles (Tesla, Audi, Mercedes-Benz, Hyundai, and Toyota), and spacecraft (Falcon 9 rocket, Dragon crew capsule, and the Ingenuity Mars helicopter). History
Precursors
The Unix operating system was conceived of and implemented in 1969, at AT&T's Bell Labs in the United States, by Ken Thompson, Dennis Ritchie, Douglas McIlroy, and Joe Ossanna. First released in 1971, Unix was written entirely in assembly language, as was common practice at the time. In 1973, in a key pioneering approach, it was rewritten in the C programming language by Dennis Ritchie (except for some hardware and I/O routines). The availability of a high-level language implementation of Unix made its porting to different computer platforms easier. As a 1956 antitrust case forbade AT&T from entering the computer business, AT&T provided the operating system's source code to anyone who asked. As a result, Unix use grew quickly and it became widely adopted by academic institutions and businesses. In 1984, AT&T divested itself of its regional operating companies, and was released from its obligation not to enter the computer business; freed of that obligation, Bell Labs began selling Unix as a proprietary product, where users were not legally allowed to modify it.
Onyx Systems began selling early microcomputer-based Unix workstations in 1980. Later, Sun Microsystems, founded as a spin-off of a student project at Stanford University, also began selling Unix-based desktop workstations in 1982. While Sun workstations did not use commodity PC hardware, for which Linux was later originally developed, they represented the first successful commercial attempt at distributing a primarily single-user microcomputer that ran a Unix operating system. With Unix increasingly "locked in" as a proprietary product, the GNU Project, started in 1983 by Richard Stallman, had the goal of creating a "complete Unix-compatible software system" composed entirely of free software. Work began in 1984. Later, in 1985, Stallman started the Free Software Foundation and wrote the GNU General Public License (GNU GPL) in 1989. By the early 1990s, many of the programs required in an operating system (such as libraries, compilers, text editors, a command-line shell, and a windowing system) were completed, although low-level elements such as device drivers, daemons, and the kernel, called GNU Hurd, were stalled and incomplete. Minix was created by Andrew S. Tanenbaum, a computer science professor, and released in 1987 as a minimal Unix-like operating system targeted at students and others who wanted to learn operating system principles. Although the complete source code of Minix was freely available, the licensing terms prevented it from being free software until the licensing changed in April 2000. Creation
While attending the University of Helsinki in the fall of 1990, Torvalds enrolled in a Unix course. The course used a MicroVAX minicomputer running Ultrix, and one of the required texts was Operating Systems: Design and Implementation by Andrew S. Tanenbaum. This textbook included a copy of Tanenbaum's Minix operating system. It was with this course that Torvalds first became exposed to Unix. In 1991, he became curious about operating systems.
Frustrated by the licensing of Minix, which at the time limited it to educational use only, he began to work on his own operating system kernel, which eventually became the Linux kernel. On July 3, 1991, to implement Unix system calls, Linus Torvalds attempted unsuccessfully to obtain a digital copy of the POSIX standards documentation with a request to the comp.os.minix newsgroup. After not finding the POSIX documentation, Torvalds initially resorted to determining system calls from SunOS documentation owned by the university for use in operating its Sun Microsystems server. He also learned some system calls from Tanenbaum's Minix text. Torvalds began the development of the Linux kernel on Minix, and applications written for Minix were also used on Linux. Later, Linux matured and further Linux kernel development took place on Linux systems. GNU applications also replaced all Minix components, because it was advantageous to use the freely available code from the GNU Project with the fledgling operating system; code licensed under the GNU GPL can be reused in other computer programs as long as they also are released under the same or a compatible license. Torvalds initiated a switch from his original license, which prohibited commercial redistribution, to the GNU GPL. Developers worked to integrate GNU components with the Linux kernel, creating a fully functional and free operating system. Although not released until 1992, due to legal complications, the development of 386BSD, from which NetBSD, OpenBSD and FreeBSD descended, predated that of Linux. Linus Torvalds has stated that if the GNU kernel or 386BSD had been available in 1991, he probably would not have created Linux. Naming
Linus Torvalds had wanted to call his invention "Freax", a portmanteau of "free", "freak", and "x" (as an allusion to Unix). During the start of his work on the system, some of the project's makefiles included the name "Freax" for about half a year.
Torvalds considered the name "Linux" but dismissed it as too egotistical. To facilitate development, the files were uploaded to the FTP server of FUNET in September 1991. Ari Lemmke, Torvalds' coworker at the Helsinki University of Technology (HUT) who was one of the volunteer administrators for the FTP server at the time, did not think that "Freax" was a good name, so he named the project "Linux" on the server without consulting Torvalds. Later, however, Torvalds consented to "Linux". According to a newsgroup post by Torvalds, the word "Linux" should be pronounced with a short 'i' as in 'print' and 'u' as in 'put'. To further demonstrate how the word "Linux" should be pronounced, he included an audio guide with the kernel source code. However, in this recording, he pronounces Linux with a short but close front unrounded vowel, instead of a near-close near-front unrounded vowel as in his newsgroup post. Commercial and popular uptake
The adoption of Linux in production environments, rather than being used only by hobbyists, started to take off first in the mid-1990s in the supercomputing community, where organizations such as NASA started to replace their increasingly expensive machines with clusters of inexpensive commodity computers running Linux. Commercial use began when Dell and IBM, followed by Hewlett-Packard, started offering Linux support to escape Microsoft's monopoly in the desktop operating system market. Today, Linux systems are used throughout computing, from embedded systems to virtually all supercomputers, and have secured a place in server installations such as the popular LAMP application stack. The use of Linux distributions in home and enterprise desktops has been growing. Linux distributions have also become popular in the netbook market, with many devices shipping with customized Linux distributions installed, and Google releasing their own ChromeOS designed for netbooks.
Linux's greatest success in the consumer market is perhaps the mobile device market, with Android being the dominant operating system on smartphones and very popular on tablets and, more recently, on wearables and vehicles. Linux gaming is also on the rise, with Valve showing its support for Linux and rolling out SteamOS, its own gaming-oriented Linux distribution, which was later implemented in their Steam Deck platform. Linux distributions have also gained popularity with various local and national governments, such as the federal government of Brazil. Development
Linus Torvalds is the lead maintainer for the Linux kernel and guides its development, while Greg Kroah-Hartman is the lead maintainer for the stable branch. Zoë Kooyman is the executive director of the Free Software Foundation, which in turn supports the GNU components. Finally, individuals and corporations develop third-party non-GNU components. These third-party components comprise a vast body of work and may include both kernel modules and user applications and libraries. Linux vendors and communities combine and distribute the kernel, GNU components, and non-GNU components, with additional package management software in the form of Linux distributions. Design
Many developers of open-source software agree that the Linux kernel was not designed but rather evolved through natural selection. Torvalds considers that although the design of Unix served as a scaffolding, "Linux grew with a lot of mutations – and because the mutations were less than random, they were faster and more directed than alpha-particles in DNA." Eric S. Raymond considers Linux's revolutionary aspects to be social, not technical: before Linux, complex software was designed carefully by small groups, but "Linux evolved in a completely different way. From nearly the beginning, it was rather casually hacked on by huge numbers of volunteers coordinating only through the Internet.
Quality was maintained not by rigid standards or autocracy but by the naively simple strategy of releasing every week and getting feedback from hundreds of users within days, creating a sort of rapid Darwinian selection on the mutations introduced by developers." Bryan Cantrill, an engineer of a competing OS, agrees that "Linux wasn't designed, it evolved", but considers this to be a limitation, proposing that some features, especially those related to security, cannot be evolved into: "this is not a biological system at the end of the day, it's a software system." A Linux-based system is a modular Unix-like operating system, deriving much of its basic design from principles established in Unix during the 1970s and 1980s. Such a system uses a monolithic kernel, the Linux kernel, which handles process control, networking, access to the peripherals, and file systems. Device drivers are either integrated directly with the kernel or added as modules that are loaded while the system is running. The GNU userland is a key part of most systems based on the Linux kernel, with Android being the notable exception. The GNU C library, an implementation of the C standard library, works as a wrapper for the system calls of the Linux kernel necessary to the kernel-userspace interface, the toolchain is a broad collection of programming tools vital to Linux development (including the compilers used to build the Linux kernel itself), and the coreutils implement many basic Unix tools. The GNU Project also develops Bash, a popular CLI shell. The graphical user interface (or GUI) used by most Linux systems is built on top of an implementation of the X Window System. More recently, some of the Linux community has sought to move to using Wayland as the display server protocol, replacing X11. Many other open-source software projects contribute to Linux systems. Installed components of a Linux system include the following: A bootloader, for example GNU GRUB, LILO, SYSLINUX or systemd-boot. 
This is a program that is executed by the computer when it is turned on, after firmware initialization, to load the Linux kernel into the computer's main memory. An init program, such as the traditional sysvinit and the newer systemd, OpenRC and Upstart. This is the first process launched by the Linux kernel, and is at the root of the process tree. It starts processes such as system services and login prompts (whether graphical or in terminal mode). Software libraries, which contain code that can be used by running processes. On Linux systems using ELF-format executable files, the dynamic linker that manages the use of dynamic libraries is known as ld-linux.so. If the system is set up for the user to compile software themselves, header files will also be included to describe the programming interface of installed libraries. Besides the most commonly used software library on Linux systems, the GNU C Library (glibc), there are numerous other libraries, such as SDL and Mesa. The C standard library is the library necessary to run programs written in C on a computer system, with the GNU C Library being the standard. It provides an implementation of the POSIX API, as well as extensions to that API. For embedded systems, alternatives such as musl, EGLIBC (a glibc fork once used by Debian) and uClibc (which was designed for uClinux) have been developed, although the last two are no longer maintained. Android uses its own C library, Bionic. However, musl can additionally be used as a replacement for glibc on desktop and laptop systems, as seen on certain Linux distributions like Void Linux. Basic Unix commands, with GNU coreutils being the standard implementation. Alternatives exist for embedded systems, such as the copyleft BusyBox, and the BSD-licensed Toybox. Widget toolkits are the libraries used to build graphical user interfaces (GUIs) for software applications. 
Numerous widget toolkits are available, including GTK and Clutter developed by the GNOME Project, Qt developed by the Qt Project and led by The Qt Company, and Enlightenment Foundation Libraries (EFL) developed primarily by the Enlightenment team. A package management system, such as dpkg and RPM. Alternatively, packages can be installed from binary tarballs or compiled from source tarballs. User interface programs such as command shells or windowing environments. User interface The user interface, also known as the shell, is either a command-line interface (CLI), a graphical user interface (GUI), or controls attached to the associated hardware, which is common for embedded systems. For desktop systems, the default user interface is usually graphical, although the CLI is commonly available through terminal emulator windows or on a separate virtual console. CLI shells are text-based user interfaces, which use text for both input and output. The dominant shell used in Linux is the Bourne-Again Shell (bash), originally developed for the GNU Project; other shells such as Zsh are also used. Most low-level Linux components, including various parts of the userland, use the CLI exclusively. The CLI is particularly suited for automation of repetitive or delayed tasks and provides very simple inter-process communication. On desktop systems, the most popular user interfaces are the GUI shells, packaged together with extensive desktop environments, such as KDE Plasma, GNOME, MATE, Cinnamon, LXDE, Pantheon, and Xfce, though a variety of additional user interfaces exist. Most popular user interfaces are based on the X Window System, often simply called "X" or "X11". It provides network transparency and permits a graphical application running on one system to be displayed on another where a user may interact with the application; however, certain extensions of the X Window System are not capable of working over the network. 
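Returning to the CLI side for a moment: the "very simple inter-process communication" mentioned above is built on the kernel's pipe primitive, in which bytes written to one file descriptor become readable from another. A minimal sketch using Python's standard os module (the message bytes are an arbitrary example):

```python
import os

# A pipe is a unidirectional byte channel provided by the kernel:
# whatever is written to the write end can be read from the read end.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello via a kernel pipe\n")
os.close(write_fd)  # closing the write end signals EOF to the reader

data = os.read(read_fd, 1024)
os.close(read_fd)

print(data.decode(), end="")
```

This is the same mechanism a shell uses when it connects two commands with `|`, except that each end of the pipe is then handed to a different process.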
Several X display servers exist, with the reference implementation, X.Org Server, being the most popular. Several types of window managers exist for X11, including tiling, dynamic, stacking, and compositing. Window managers provide means to control the placement and appearance of individual application windows, and interact with the X Window System. Simpler X window managers such as dwm, ratpoison, or i3wm provide a minimalist functionality, while more elaborate window managers such as FVWM, Enlightenment, or Window Maker provide more features such as a built-in taskbar and themes, but are still lightweight when compared to desktop environments. Desktop environments include window managers as part of their standard installations, such as Mutter (GNOME), KWin (KDE), or Xfwm (Xfce), although users may choose to use a different window manager if preferred. Wayland is a display server protocol intended as a replacement for the X11 protocol; it has received relatively wide adoption. Unlike X11, Wayland does not need an external window manager and compositing manager. Therefore, a Wayland compositor takes the role of the display server, window manager, and compositing manager. Weston is the reference implementation of Wayland, while GNOME's Mutter and KDE's KWin are being ported to Wayland as standalone display servers. Enlightenment has already been successfully ported since version 19. Additionally, many window managers have been made for Wayland, such as Sway or Hyprland, as well as other graphical utilities such as Waybar or Rofi. Video input infrastructure Linux currently has two modern kernel-userspace APIs for handling video input devices: V4L2 API for video streams and radio, and DVB API for digital TV reception. Due to the complexity and diversity of different devices, and due to the large number of formats and standards handled by those APIs, this infrastructure needs to evolve to better fit other devices. 
Also, a good userspace device library is key to enabling userspace applications to work with all the formats supported by those devices. Development The primary difference between Linux and many other popular contemporary operating systems is that the Linux kernel and other components are free and open-source software. Linux is not the only such operating system, although it is by far the most widely used. Some free and open-source software licenses are based on the principle of copyleft, a kind of reciprocity: any work derived from a copyleft piece of software must also be copyleft itself. The most common free software license, the GNU General Public License (GPL), is a form of copyleft and is used for the Linux kernel and many of the components from the GNU Project. Linux-based distributions are intended by developers for interoperability with other operating systems and established computing standards. Linux systems adhere to POSIX, Single UNIX Specification (SUS), Linux Standard Base (LSB), ISO, and ANSI standards where possible, although to date only one Linux distribution has been POSIX.1 certified, Linux-FT. Free software projects, although developed through collaboration, are often produced independently of each other. The fact that the software licenses explicitly permit redistribution, however, provides a basis for larger-scale projects that collect the software produced by stand-alone projects and make it available all at once in the form of a Linux distribution. Many Linux distributions manage a remote collection of system software and application software packages available for download and installation through a network connection. This allows users to adapt the operating system to their specific needs. Distributions are maintained by individuals, loose-knit teams, volunteer organizations, and commercial entities. 
A distribution is responsible for the default configuration of the installed Linux kernel, general system security, and more generally integration of the different software packages into a coherent whole. Distributions typically use a package manager such as apt, yum, zypper, pacman or portage to install, remove, and update all of a system's software from one central location. Community A distribution is largely driven by its developer and user communities. Some vendors develop and fund their distributions on a volunteer basis, Debian being a well-known example. Others maintain a community version of their commercial distributions, as Red Hat does with Fedora, and SUSE does with openSUSE. In many cities and regions, local associations known as Linux User Groups (LUGs) seek to promote their preferred distribution and by extension free software. They hold meetings and provide free demonstrations, training, technical support, and operating system installation to new users. Many Internet communities also provide support to Linux users and developers. Most distributions and free software / open-source projects have IRC chatrooms or newsgroups. Online forums are another means of support, with notable examples being Unix & Linux Stack Exchange, LinuxQuestions.org and the various distribution-specific support and community forums, such as ones for Ubuntu, Fedora, Arch Linux, Gentoo, etc. Linux distributions host mailing lists; commonly there will be a specific topic such as usage or development for a given list. There are several technology websites with a Linux focus. Print magazines on Linux often bundle cover disks that carry software or even complete Linux distributions. Although Linux distributions are generally available without charge, several large corporations sell, support, and contribute to the development of the components of the system and free software. 
An analysis of the Linux kernel in 2017 showed that well over 85% of the code was developed by programmers who are being paid for their work, leaving about 8.2% to unpaid developers and 4.1% unclassified. Some of the major corporations that provide contributions include Intel, Samsung, Google, AMD, Oracle, and Facebook. Several corporations, notably Red Hat, Canonical, and SUSE have built a significant business around Linux distributions. The free software licenses, on which the various software packages of a distribution built on the Linux kernel are based, explicitly accommodate and encourage commercialization; the relationship between a Linux distribution as a whole and individual vendors may be seen as symbiotic. One common business model of commercial suppliers is charging for support, especially for business users. A number of companies also offer a specialized business version of their distribution, which adds proprietary support packages and tools to administer higher numbers of installations or to simplify administrative tasks. Another business model is to give away the software to sell hardware. This used to be the norm in the computer industry, with operating systems such as CP/M, Apple DOS, and versions of the classic Mac OS before 7.6 freely copyable (but not modifiable). As computer hardware standardized throughout the 1980s, it became more difficult for hardware manufacturers to profit from this tactic, as the OS would run on any manufacturer's computer that shared the same architecture. Programming on Linux Most programming languages support Linux either directly or through third-party community based ports. The original development tools used for building both Linux applications and operating system programs are found within the GNU toolchain, which includes the GNU Compiler Collection (GCC) and the GNU Build System. Amongst others, GCC provides compilers for Ada, C, C++, Go and Fortran. 
Many programming languages have a cross-platform reference implementation that supports Linux, for example PHP, Perl, Ruby, Python, Java, Go, Rust and Haskell. First released in 2003, the LLVM project provides an alternative cross-platform open-source compiler for many languages. Proprietary compilers for Linux include the Intel C++ Compiler, Sun Studio, and IBM XL C/C++ Compiler. BASIC is available in procedural form from QB64, PureBasic, Yabasic, GLBasic, Basic4GL, XBasic, wxBasic, SdlBasic, and Basic-256, as well as object-oriented through Gambas, FreeBASIC, B4X, Basic for Qt, Phoenix Object Basic, NS Basic, ProvideX, Chipmunk Basic, RapidQ and Xojo. Pascal is implemented through GNU Pascal, Free Pascal, and Virtual Pascal, as well as graphically via Lazarus, PascalABC.NET, or Delphi using FireMonkey (previously through Borland Kylix). A common feature of Unix-like systems, Linux includes traditional specific-purpose programming languages targeted at scripting, text processing and system configuration and management in general. Linux distributions support shell scripts, awk, sed and make. Many programs also have an embedded programming language to support configuring or programming themselves. For example, regular expressions are supported in programs like grep and locate, the traditional Unix message transfer agent Sendmail contains its own Turing-complete scripting system, and the advanced text editor GNU Emacs is built around a general-purpose Lisp interpreter. Most distributions also include support for PHP, Perl, Ruby, Python and other dynamic languages. While not as common, Linux also supports C# and other CLI languages (via Mono), Vala, and Scheme. 
Guile Scheme acts as an extension language targeting the GNU system utilities, seeking to make the conventionally small, static, compiled C programs of Unix design rapidly and dynamically extensible via an elegant, functional high-level scripting system; many GNU programs can be compiled with optional Guile bindings to this end. A number of Java virtual machines and development kits run on Linux, including the original Sun Microsystems JVM (HotSpot), and IBM's J2SE RE, as well as many open-source projects like Kaffe and Jikes RVM; Kotlin, Scala, Groovy and other JVM languages are also available. GNOME and KDE are popular desktop environments and provide a framework for developing applications. These projects are based on the GTK and Qt widget toolkits, respectively, which can also be used independently of the larger framework. Both support a wide variety of languages. There are a number of integrated development environments available including Anjuta, Code::Blocks, CodeLite, Eclipse, Geany, ActiveState Komodo, KDevelop, Lazarus, MonoDevelop, NetBeans, and Qt Creator, while the long-established editors Vim, nano and Emacs remain popular. 
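The embedded regular-expression support noted above (as in grep) can be sketched with Python's re module; the log lines below are invented for illustration:

```python
import re

# Hypothetical log lines, standing in for input that grep might filter.
lines = [
    "kernel: usb 1-1: new high-speed USB device number 2",
    "systemd[1]: Started Session 2 of user alice.",
    "kernel: EXT4-fs (sda1): mounted filesystem with ordered data mode",
]

# Equivalent in spirit to: grep '^kernel:' logfile
pattern = re.compile(r"^kernel:")
matches = [line for line in lines if pattern.search(line)]

for line in matches:
    print(line)  # the two lines beginning with "kernel:"
```

The same pattern syntax (with dialectal differences) recurs across grep, sed, awk, and the scripting languages shipped by most distributions.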
Linux has a reputation for supporting old hardware very well by maintaining standardized drivers for a long time. There are several industry associations and hardware conferences devoted to maintaining and improving support for diverse hardware under Linux, such as FreedomHEC. Over time, support for different hardware has improved in Linux, resulting in any off-the-shelf purchase having a "good chance" of being compatible. In 2014, a new initiative was launched to automatically collect a database of all tested hardware configurations. Uses Market share and uptake Many quantitative studies of free/open-source software focus on topics including market share and reliability, with numerous studies specifically examining Linux. The Linux market is growing, and the Linux operating system market size is expected to see a growth of 19.2% by 2027, reaching $15.64 billion, compared to $3.89 billion in 2019. Analysts project a Compound Annual Growth Rate (CAGR) of 13.7% between 2024 and 2032, culminating in a market size of US$34.90 billion by the latter year. Analysts and proponents attribute the relative success of Linux to its security, reliability, low cost, and freedom from vendor lock-in. Desktops and laptops According to web server statistics (that is, based on the numbers recorded from visits to websites by client devices), in October 2024, the estimated market share of Linux on desktop computers was around 4.3%. In comparison, Microsoft Windows had a market share of around 73.4%, while macOS covered around 15.5%. Web servers W3Cook publishes stats that use the top 1,000,000 Alexa domains, which estimate that 96.55% of web servers run Linux, 1.73% run Windows, and 1.72% run FreeBSD. W3Techs publishes stats that use the top 10,000,000 Alexa domains and the top 1,000,000 Tranco domains, updated monthly, and estimates that Linux is used by 39% of the web servers, versus 21.9% being used by Microsoft Windows. 40.1% used other types of Unix. 
IDC's Q1 2007 report indicated that Linux held 12.7% of the overall server market at that time; this estimate was based on the number of Linux servers sold by various companies, and did not include server hardware purchased separately that had Linux installed on it later. As of 2024, estimates suggest Linux accounts for at least 80% of the public cloud workload, partly thanks to its widespread use in platforms like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform. ZDNet reports that 96.3% of the top one million web servers are running Linux. W3Techs states that Linux powers at least 39.2% of websites whose operating system is known, with other estimates saying 55%. Mobile devices Android, which is based on the Linux kernel, has become the dominant operating system for smartphones. In April 2023, according to StatCounter, 68.61% of mobile devices accessing websites were running Android. Android is also a popular operating system for tablets, being responsible for more than 60% of tablet sales. According to web server statistics, Android has a market share of about 71%, with iOS holding 28%, and the remaining 1% attributed to various niche platforms. Film production For years, Linux has been the platform of choice in the film industry. The first major film produced on Linux servers was 1997's Titanic. Since then major studios including DreamWorks Animation, Pixar, Weta Digital, and Industrial Light & Magic have migrated to Linux. According to the Linux Movies Group, more than 95% of the servers and desktops at large animation and visual effects companies use Linux. Use in government Linux distributions have also gained popularity with various local and national governments. News of the Russian military creating its own Linux distribution has also surfaced, and has come to fruition as the G.H.ost Project. The Indian state of Kerala has gone to the extent of mandating that all state high schools run Linux on their computers. 
China uses Linux exclusively as the operating system for its Loongson processor family to achieve technology independence. In Spain, some regions have developed their own Linux distributions, which are widely used in education and official institutions, like gnuLinEx in Extremadura and Guadalinex in Andalusia. France and Germany have also taken steps toward the adoption of Linux. North Korea's Red Star OS is based on a version of Fedora Linux. Copyright, trademark, and naming The Linux kernel is licensed under the GNU General Public License (GPL), version 2. The GPL requires that anyone who distributes software based on source code under this license must make the originating source code (and any modifications) available to the recipient under the same terms. Other key components of a typical Linux distribution are also mainly licensed under the GPL, but they may use other licenses; many libraries use the GNU Lesser General Public License (LGPL), a more permissive variant of the GPL, and the X.Org implementation of the X Window System uses the MIT License. Torvalds states that the Linux kernel will not move from version 2 of the GPL to version 3. He specifically dislikes some provisions in the new license which prohibit the use of the software in digital rights management. It would also be impractical to obtain permission from all the copyright holders, who number in the thousands. A 2001 study of Red Hat Linux 7.1 found that this distribution contained 30 million source lines of code. Using the Constructive Cost Model, the study estimated that this distribution required about eight thousand person-years of development time. According to the study, if all this software had been developed by conventional proprietary means, it would have cost about US$1.08 billion (in year-2000 dollars) to develop in the United States. 
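The effort figure above comes from the Constructive Cost Model (COCOMO). As a rough cross-check, the textbook basic "organic-mode" formula (effort in person-months = 2.4 × KSLOC^1.05) gives the same order of magnitude for 30 million lines; the study used its own calibration, which is why it arrived at roughly eight thousand person-years rather than the ten thousand computed here:

```python
def cocomo_basic_organic(ksloc: float) -> float:
    """Basic COCOMO, organic mode: estimated effort in person-months."""
    return 2.4 * ksloc ** 1.05

# Red Hat Linux 7.1 was measured at about 30 million SLOC = 30,000 KSLOC.
person_months = cocomo_basic_organic(30_000)
person_years = person_months / 12

print(f"{person_years:,.0f} person-years")  # on the order of 10,000
```

The superlinear exponent (1.05) encodes COCOMO's assumption that coordination overhead grows slightly faster than code size.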
Most of the source code (71%) was written in the C programming language, but many other languages were used, including C++, Lisp, assembly language, Perl, Python, Fortran, and various shell scripting languages. Slightly over half of all lines of code were licensed under the GPL. The Linux kernel itself was 2.4 million lines of code, or 8% of the total. In a later study, the same analysis was performed for Debian version 4.0 (etch, which was released in 2007). This distribution contained close to 283 million source lines of code, and the study estimated that it would have required about seventy-three thousand man-years and cost (in dollars) to develop by conventional means. In the United States, the name Linux is a trademark registered to Linus Torvalds. Initially, nobody registered it. However, on August 15, 1994, William R. Della Croce Jr. filed for the trademark Linux, and then demanded royalties from Linux distributors. In 1996, Torvalds and some affected organizations sued him to have the trademark assigned to Torvalds, and, in 1997, the case was settled. The licensing of the trademark has since been handled by the Linux Mark Institute (LMI). Torvalds has stated that he trademarked the name only to prevent someone else from using it. LMI originally charged a nominal sublicensing fee for use of the Linux name as part of trademarks, but later changed this in favor of offering a free, perpetual worldwide sublicense. The Free Software Foundation (FSF) prefers GNU/Linux as the name when referring to the operating system as a whole, because it considers Linux distributions to be variants of the GNU operating system initiated in 1983 by Richard Stallman, president of the FSF. The foundation explicitly takes no issue with the name Android for the Android OS, which is also an operating system based on the Linux kernel, as GNU is not a part of it. 
A minority of public figures and software projects other than Stallman and the FSF, notably distributions consisting of only free software, such as Debian (which had been sponsored by the FSF up to 1996), also use GNU/Linux when referring to the operating system as a whole. Most media and common usage, however, refers to this family of operating systems simply as Linux, as do many large Linux distributions (for example, SUSE Linux and Red Hat Enterprise Linux). About 8% to 13% of the lines of code of the Linux distribution Ubuntu (version "Natty") are made of GNU components (the range depending on whether GNOME is considered part of GNU); meanwhile, 6% is taken by the Linux kernel, increased to 9% when including its direct dependencies.
https://en.wikipedia.org/wiki/Spinal%20nerve
Spinal nerve
A spinal nerve is a mixed nerve, which carries motor, sensory, and autonomic signals between the spinal cord and the body. In the human body there are 31 pairs of spinal nerves, one on each side of the vertebral column. These are grouped into the corresponding cervical, thoracic, lumbar, sacral and coccygeal regions of the spine. There are eight pairs of cervical nerves, twelve pairs of thoracic nerves, five pairs of lumbar nerves, five pairs of sacral nerves, and one pair of coccygeal nerves. The spinal nerves are part of the peripheral nervous system. Structure Each spinal nerve is a mixed nerve, formed from the combination of nerve root fibers from its dorsal and ventral roots. The dorsal root is the afferent sensory root and carries sensory information to the brain. The ventral root is the efferent motor root and carries motor information from the brain. The spinal nerve emerges from the spinal column through an opening (intervertebral foramen) between adjacent vertebrae. This is true for all spinal nerves except for the first spinal nerve pair (C1), which emerges between the occipital bone and the atlas (the first vertebra). Thus the cervical nerves are numbered by the vertebra below, except spinal nerve C8, which exits below vertebra C7 and above vertebra T1. The thoracic, lumbar, and sacral nerves are then numbered by the vertebra above. In the case of a lumbarized S1 vertebra (also known as L6) or a sacralized L5 vertebra, the nerves are typically still counted to L5 and the next nerve is S1. Outside the vertebral column, the nerve divides into branches. The dorsal ramus contains nerves that serve the posterior portions of the trunk carrying visceral motor, somatic motor, and somatic sensory information to and from the skin and muscles of the back (epaxial muscles). 
The ventral ramus contains nerves that serve the remaining anterior parts of the trunk and the upper and lower limbs (hypaxial muscles) carrying visceral motor, somatic motor, and sensory information to and from the ventrolateral body surface, structures in the body wall, and the limbs. The meningeal branches (recurrent meningeal or sinuvertebral nerves) branch from the spinal nerve and re-enter the intervertebral foramen to serve the ligaments, dura, blood vessels, intervertebral discs, facet joints, and periosteum of the vertebrae. The rami communicantes contain autonomic nerves that serve visceral functions carrying visceral motor and sensory information to and from the visceral organs. Some anterior rami merge with adjacent anterior rami to form a nerve plexus, a network of interconnecting nerves. Nerves emerging from a plexus contain fibers from various spinal nerves, which are now carried together to some target location. The spinal plexuses are the cervical plexus, brachial plexus, lumbar plexus, the sacral plexus and the much smaller coccygeal plexus. Regional nerves Cervical nerves The cervical nerves are the spinal nerves from the cervical vertebrae in the cervical segment of the spinal cord. Although there are seven cervical vertebrae (C1–C7), there are eight cervical nerves C1–C8. C1–C7 emerge above their corresponding vertebrae, while C8 emerges below the C7 vertebra. Everywhere else in the spine, the nerve emerges below the vertebra with the same name. The posterior distribution includes the suboccipital nerve (C1), the greater occipital nerve (C2) and the third occipital nerve (C3). The anterior distribution includes the cervical plexus (C1–C4) and brachial plexus (C5–T1). The cervical nerves innervate the sternohyoid, sternothyroid and omohyoid muscles. A loop of nerves called ansa cervicalis is part of the cervical plexus. Thoracic nerves The thoracic nerves are the twelve spinal nerves emerging from the thoracic vertebrae. 
Each thoracic nerve T1–T12 originates from below each corresponding thoracic vertebra. Branches also exit the spine and go directly to the paravertebral ganglia of the autonomic nervous system where they are involved in the functions of organs and glands in the head, neck, thorax and abdomen. Anterior divisions The intercostal nerves come from thoracic nerves T1–T11, and run between the ribs. At T2 and T3, further branches form the intercostobrachial nerve. The subcostal nerve comes from nerve T12, and runs below the twelfth rib. Posterior divisions The medial branches (ramus medialis) of the posterior branches of the upper six thoracic nerves run between the semispinalis dorsi and multifidus, which they supply; they then pierce the rhomboid and trapezius muscles, and reach the skin by the sides of the spinous processes. This sensory branch is called the medial cutaneous ramus. The medial branches of the lower six are distributed chiefly to the multifidus and longissimus dorsi; occasionally they give off filaments to the skin near the middle line. This sensory branch is called the posterior cutaneous ramus. Lumbar nerves The lumbar nerves are the five spinal nerves emerging from the lumbar vertebrae. They are divided into posterior and anterior divisions. Posterior divisions The medial branches of the posterior divisions of the lumbar nerves run close to the articular processes of the vertebrae and end in the multifidus muscle. The lateral branches supply the erector spinae muscles. The upper three give off cutaneous nerves which pierce the aponeurosis of the latissimus dorsi at the lateral border of the erector spinae muscles, and descend across the posterior part of the iliac crest to the skin of the buttock, some of their twigs running as far as the level of the greater trochanter. Anterior divisions The anterior divisions of the lumbar nerves (rami anteriores) increase in size from above downward. 
They are joined, near their origins, by gray rami communicantes from the lumbar ganglia of the sympathetic trunk. These rami consist of long, slender branches which accompany the lumbar arteries around the sides of the vertebral bodies, beneath the psoas major. Their arrangement is somewhat irregular: one ganglion may give rami to two lumbar nerves, or one lumbar nerve may receive rami from two ganglia. The first and second, and sometimes the third and fourth lumbar nerves are each connected with the lumbar part of the sympathetic trunk by a white ramus communicans. The nerves pass obliquely outward behind the psoas major, or between its fasciculi, distributing filaments to it and the quadratus lumborum. The first three and the greater part of the fourth are connected together in this situation by anastomotic loops, and form the lumbar plexus. The smaller part of the fourth joins with the fifth to form the lumbosacral trunk, which assists in the formation of the sacral plexus. The fourth nerve is named the furcal nerve, from the fact that it is subdivided between the two plexuses. Sacral nerves The sacral nerves are the five pairs of spinal nerves which exit the sacrum at the lower end of the vertebral column. The roots of these nerves begin inside the vertebral column at the level of the L1 vertebra, where the cauda equina begins, and then descend into the sacrum. There are five paired sacral nerves, half of them arising through the sacrum on the left side and the other half on the right side. Each nerve emerges in two divisions: one division through the anterior sacral foramina and the other division through the posterior sacral foramina. The nerves divide into branches and the branches from different nerves join with one another, some of them also joining with lumbar or coccygeal nerve branches. These anastomoses of nerves form the sacral plexus and the lumbosacral plexus. 
The branches of these plexuses give rise to nerves that supply much of the hip, thigh, leg and foot. The sacral nerves have both afferent and efferent fibers, thus they are responsible for part of the sensory perception and the movements of the lower extremities of the human body. From S2, S3 and S4 arise the pudendal nerve and parasympathetic fibers that supply the descending colon and rectum, urinary bladder and genital organs. These pathways have both afferent and efferent fibers and, in this way, they are responsible for conduction of sensory information from these pelvic organs to the central nervous system (CNS) and motor impulses from the CNS to the pelvis that control the movements of these pelvic organs. Coccygeal nerves The bilateral coccygeal nerves, Co, are the 31st pair of spinal nerves. Each arises from the conus medullaris, and its ventral ramus helps form the coccygeal plexus. It does not divide into a medial and lateral branch. Its fibers are distributed to the skin superficial and posterior to the coccyx bone via the anococcygeal nerve of the coccygeal nerve plexus. Function Spinal plexuses A spinal plexus is a weblike nerve plexus formed by the anterior divisions of the spinal nerves, which branch and merge repeatedly. The only region that does not have a plexus is the thoracic region. The small cervical plexus is in the neck, the brachial plexus is in the shoulder, the lumbar plexus is in the lower back, beneath this is the sacral plexus, and next to the lower sacrum and coccyx is the very small coccygeal plexus. Clinical significance The muscles that one particular spinal root supplies are that nerve's myotome, and the dermatomes are the areas of sensory innervation on the skin for each spinal nerve. Lesions of one or more nerve roots result in typical patterns of neurologic deficits (muscle weakness, abnormal sensation, changes in reflexes) that allow localization of the responsible lesion.
There are several procedures used in sacral nerve stimulation for the treatment of various related disorders. Sciatica is generally caused by the compression of lumbar nerves L4 or L5 or sacral nerves S1, S2, or S3, or by compression of the sciatic nerve itself.
Biology and health sciences
Nervous system
Biology
240496
https://en.wikipedia.org/wiki/Bus%20stop
Bus stop
A bus stop is a place where buses stop for passengers to get on and off the bus. The construction of bus stops tends to reflect the level of usage, where stops at busy locations may have shelters, seating, and possibly electronic passenger information systems; less busy stops may use a simple pole and flag to mark the location. Bus stops are, in some locations, clustered together into transport hubs allowing interchange between routes from nearby stops and with other public transport modes to maximise convenience. Types of service For operational purposes, there are three main kinds of stops: scheduled stops, at which the bus should stop irrespective of demand; request stops (or flag stops), at which the vehicle will stop only on request; and hail and ride stops, at which a vehicle will stop anywhere along the designated section of road on request. Certain stops may be restricted to "discharge/set-down only" or "pick-up only". Some stops may be designated as "timing points", and if the vehicle is ahead of schedule it will wait there to ensure correct synchronization with the timetable. In dense urban areas where bus volumes are high, skip-stops are sometimes used to increase efficiency and reduce delays at bus stops. Fare stages may also be defined by the location of certain stops in distance or zone-based fare collection systems. Sunday stops are close to a church and used only on Sundays. History From the 17th to the 19th century, horse-drawn stage coaches ran regular services between many European towns, starting and stopping at designated coaching inns where the horses could be changed and passengers board or alight, in effect constituting the earliest form of bus stop. The Angel Inn, Islington, the first stop on the route from London to York, was a noted example of such an inn. A seat in a stage coach usually had to be booked in advance.
John Greenwood opened the first bus line in Britain in Manchester in 1824, running a fixed route and allowing passengers to board on request along the way without a reservation. Landmarks such as public houses, rail stations and road junctions became customary stopping points. Regular horse-drawn buses started in Paris in 1828. George Shillibeer started his London horse Omnibus service in 1829, running between stops at Paddington (at the Yorkshire Stingo pub) and the Bank of England along a designated route and timetable. By the mid-19th century, guides were available to London bus routes, including maps with routes and the main stops. Design Bus stop infrastructure ranges from a simple pole and sign, to a rudimentary shelter, to sophisticated structures. The usual minimum is a pole mounted flag with suitable name/symbol. Bus stop shelters may have a full or partial roof, supported by a two, three or four sided construction. Modern stops are typically steel and glass/perspex constructions, although in other places, such as rural Britain, stops may be wooden, brick or concrete built. The construction may include small inbuilt seats. The construction may feature advertising, from simple posters, to complex illuminated, changeable or animated displays. Some installations have also included interactive advertising. Advertising may be the primary reason for the shelter, with the advertising revenue paying for it. Design and construction may be uniform to reflect a large corporate or local authority provider, or installations may be more personal or distinctive where a small local authority such as a parish council is responsible for the stop. The stop may include separate street furniture such as a bench, lighting and a trash receptacle (dustbin). Individual bus stops may simply be placed on the sidewalk/pavement next to the roadway, although they can also be placed to facilitate use of a busway.
More complex installations can include construction of a bus turnout or a bus bulb, for traffic management reasons, although use of a bus lane can make these unnecessary. A 'floating bus stop' or 'bus stop bypass' is located between a road and a cycle lane, so that passengers must cross the cycle lane in order to reach it. They are "ubiquitous in the Netherlands, and common across Europe". Several bus stops may be grouped together to facilitate easy transfer between routes. These may be arranged in a simple row along the street, or in parallel or diagonal rows of multiple stops. Groups of bus stops may be integral to transportation hubs. With extra facilities such as a waiting room or ticket office, outside groupings of bus stops can be classed as a rudimentary bus station. Convention is usually for the bus to draw level with the 'flag', although in areas of mixed front and rear entrance buses, such as London, a head stop, and more rarely a tail stop, indicates to the driver whether they should stop the bus with either the rear platform or the driver's cab level with the flag. In certain areas, the area of road next to the bus stop may be specially marked, and protected in law. Car drivers are often unaware of the legal implications of stopping or parking at a bus stop. In bus rapid transit systems, bus stops may be more elaborate than street bus stops, and can be termed "stations" to reflect this difference. These may have enclosed areas to allow off-bus fare collection for rapid boarding, and be spaced further apart, like tram stops. Bus stops on a bus rapid transit line may also have a more complex construction allowing level boarding platforms, and doors separating the enclosure from the bus until ready to board. Traffic signs The bus stop flag (bus stop pole) is usually not only a carrier of information for passengers, but it also fulfills the role of a road sign that indicates the beginning (front) of the stop.
In some places the flag may not indicate exactly the front of the stop, but is placed anywhere within the stop area. In some countries (e.g. Czechia and Slovakia), there is also a different road sign that is intended to mark the end of the stop and thus indicate its length. The use of such a sign may be limited to only certain types of stops, for example only to stops located in a continuous traffic lane, or only to stops that can be used by more than one vehicle at the same time, or if the stop is located in an interruption of the parking lane. There are also various types of horizontal traffic markings of bus stops on the road. Some consist only of writings that draw attention to a stop or a dedicated stop lane; some can precisely define the space and length of the stop, including the space designated for entering and exiting the stop. In dangerous places, an additional warning sign can be placed in front of the bus stop, or a sign prohibiting vehicles from overtaking a bus halted at the stop. In rare cases, traffic signals may also be placed to allow the bus to exit the stop lane or to stop traffic while the bus is at the stop. The relative position of stops on opposite sides of the road, and their position in relation to any pedestrian crossing, should be designed in such a way that the danger to pedestrians is minimized. Information Public-facing information Most bus stops are identified with a metal sign attached to a pole or light standard. Some stops are plastic strips strapped on to poles and others involve a sign attached to a bus shelter. The signs are often identified with a picture of a bus and/or with the words "bus stop" in the local language. The bus stop "flag" (a panel usually projecting from the top of a bus stop pole) will often show the route numbers of all the buses calling at the stop, perhaps distinguishing frequent, infrequent, 24-hour, and night services.
The flag may also show the logo of the dominant bus operator, or the logo of a local transit authority with responsibility for bus services in the area. Additional information may include an unambiguous, unique name for the stop, and the direction/common destination(s) of most calling routes. Bus stops will often show timetable information: either the full timetable, or for busier routes, the times or frequency that a bus will call at the specific stop. Route maps and tariff information may also be provided, and telephone numbers for relevant travel information services. The stop may also incorporate, or have nearby, real time information displays with the arrival times of the next buses. Increasingly, mobile phone technology is being used at more remote stops, allowing the next bus times to be sent to a passenger's handset based on the stop location and the real time information. Automated ticket machines may be provided at busy stops. Data model Modern passenger information systems and journey planners require a detailed digital representation of stops and stations. The CEN Transmodel data model, and the related IFOPT data interchange standard, define how transport systems, including bus stops, should be described for use in computer models. In Transmodel, a single bus stop is modeled as a "Stop Point", and a grouping of nearby bus stops as a "Stop Area" or "Stop Place". The General Transit Feed Specification (GTFS) standard, originally developed by Google and TriMet, defines a simple and widely used data interchange standard for public transport schedules. GTFS also includes a table of stop locations which for each stop gives a name, identifier, location, and identification with any larger station that the stop may be a part of. OpenStreetMap also has a modelling standard for bus stops.
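The GTFS stop model described above can be made concrete with a short sketch. The following Python snippet parses a miniature stops.txt and groups boardable stop points under their parent station; the column names (stop_id, stop_name, stop_lat, stop_lon, location_type, parent_station) are standard GTFS fields, but the sample rows themselves are invented for illustration.

```python
import csv
import io
from collections import defaultdict

# A miniature, invented GTFS stops.txt: two street-level stops sharing
# one parent station (location_type 1 = station, 0 = stop/platform).
STOPS_TXT = """\
stop_id,stop_name,stop_lat,stop_lon,location_type,parent_station
STN1,Central Interchange,51.5010,-0.1400,1,
A,Central Interchange Stop A,51.5011,-0.1402,0,STN1
B,Central Interchange Stop B,51.5009,-0.1398,0,STN1
C,High Street,51.5100,-0.1300,0,
"""

def group_stops_by_station(stops_txt):
    """Return {parent_station_or_None: [stop_ids]} from a GTFS stops.txt string."""
    grouped = defaultdict(list)
    for row in csv.DictReader(io.StringIO(stops_txt)):
        if row["location_type"] == "1":
            continue  # the station record itself, not a boardable stop point
        grouped[row["parent_station"] or None].append(row["stop_id"])
    return dict(grouped)

print(group_stops_by_station(STOPS_TXT))
# {'STN1': ['A', 'B'], None: ['C']}
```

The station/stop grouping here mirrors the Transmodel distinction between a "Stop Place" and its constituent "Stop Points".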
The United Kingdom has collected a complete database of its public transport access points, including bus stops, into the National Public Transport Access Nodes (NaPTAN) database, with details of 350,000 nodes, available as open data from data.gov.uk. In this database, developed by the Department for Transport in 2001, stops are classified as "marked" or "custom and usage" (i.e. unmarked stops where the driver will stop the vehicle on request). Use of marked stops varies: either the bus will always stop, or will stop by request only. Safety Bus stops enhance passenger safety in a number of ways:
Bus stops prevent passengers from trying to board or alight in hazardous situations such as at intersections or where a bus is turning and is not using the curb lane.
A bus driver cannot be expected to continuously look for intending passengers. A bus stop means that the driver only needs to look for intending passengers at the approach to each bus stop.
Having bus stops requires passengers to group themselves prior to boarding, which reduces time spent at boarding.
At night, when passenger numbers are lower, restrictions are sometimes relaxed and passengers may be allowed to exit the bus anywhere within reason. Bus turnouts, or lay-bys, allow buses to stop without impeding the flow of traffic on the main roadway. Bus stop shelters Cooling In countries with hot climates, air-conditioned bus stop shelters are sometimes used, for example in Dubai in United Arab Emirates, Hyderabad in India, Eilat in Israel, Ashgabat in Turkmenistan. As an alternative to air conditioning, passive daytime radiative cooling has been used to cool bus stop shelters. Bus stops at Arizona State University and the surrounding areas of Tempe, Arizona used a 3M film to lower shelter temperatures by 4 °C. A bus shelter in a mid-rise area of Tehran used passive cooling to cool a bus shelter by up to 10 °C.
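The marked versus custom-and-usage classification above can be filtered mechanically from the published CSV. The sketch below assumes NaPTAN-style column names (ATCOCode, CommonName, BusStopType) and the schema's short codes (MKD for marked, CUS for custom and usage); these field names are an assumption about the export format, and the sample rows are invented.

```python
import csv
import io

# Invented rows in a NaPTAN-like CSV layout. ATCOCode, CommonName and
# BusStopType are assumed column names following the NaPTAN schema;
# MKD = marked stop, CUS = custom and usage (unmarked) stop.
NAPTAN_CSV = """\
ATCOCode,CommonName,BusStopType
490000001A,High Street,MKD
490000002B,Mill Lane,CUS
490000003C,Station Road,MKD
"""

def stops_of_type(naptan_csv, bus_stop_type):
    """List the common names of stops matching a given BusStopType code."""
    return [
        row["CommonName"]
        for row in csv.DictReader(io.StringIO(naptan_csv))
        if row["BusStopType"] == bus_stop_type
    ]

print(stops_of_type(NAPTAN_CSV, "MKD"))  # ['High Street', 'Station Road']
print(stops_of_type(NAPTAN_CSV, "CUS"))  # ['Mill Lane']
```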
Regulation Some jurisdictions have introduced particularised legislative controls to foster safer bus stop design and management. The State of Victoria, Australia, for example, has enacted a Bus Safety Act which contains performance-based duties of care which apply to all industry participants who are in a position to influence the safety of bus operations - what is called the "chain of responsibility". The safety duties apply to all bus services, both commercial and non-commercial, and to all buses regardless of seating capacity. Breach of the duty is a serious criminal offence which carries a heavy penalty. The primary duty holder under the Bus Safety Act is the operator of the bus service, as the person who has effective responsibility and control over the whole operation. However, the Act also contains a safety duty covering "people with responsibility for bus stops", including people who design, build, or maintain the stop, plus those who decide on its location. This duty was introduced in response to research showing that the most serious hazard associated with bus travel occurs when passengers, especially children, are crossing the road after alighting from the bus. The location and layout of a bus stop is therefore a factor in the level of risk. Safety duties are also imposed by the Bus Safety Act on a range of other people, including:
"bus safety workers", including drivers, schedulers who set bus timetables, and mechanics and testers who repair or assess vehicle safety
"procurers" - people who procure the bus service, known as the "customer" in the commercial charter sector.
All of these persons can clearly affect bus safety. They are required by the Bus Safety Act to ensure that, in carrying out their activities, they eliminate risks to health and safety if 'practicable' - or work to reduce those risks 'so far as is reasonably practicable'.
This familiar practicability formula is borrowed from Victoria's Rail Safety Act (and a subsequent national model Rail Safety Bill) and the Occupational Health and Safety Act 2004. In Europe, as a rule, the design of roads and the placement of road signs are subject to detailed technical standards, the requirements of which should ensure the safety of local traffic regulation, and are subject to official approval. As a rule, it is permissible to place a stop of a bus line only in a place that is approved and marked as a bus stop. Research Bus stop capacity is often an important consideration in the planning of bus stops serving multiple routes within urban centers. Limited capacity may mean buses queue up behind each other at the bus stop, which can cause traffic blockages or delays. Bus stop capacity is typically measured in terms of buses/hour that can reliably use the bus stop. The main factors that affect bus stop capacity are:
Number of loading areas (or number of buses that can stop at one time)
Average dwell time (how much time it takes a bus to load/unload passengers)
G/C ratio of nearby traffic signal (green time / cycle length)
Clearance time (time it takes a bus to re-enter the traffic stream)
Detailed procedures for calculating bus stop capacity and bus lane capacity using skip stops are outlined in Part 4 of the Transit Capacity and Quality of Service Manual, published by the US Transportation Research Board. Transit agencies are increasingly looking at consolidating bus stops, which may previously have been placed haphazardly, as a cheap and easy way to improve service. Bus stop consolidation evaluates the bus stops along an established bus route and develops a new pattern for optimal bus stop placement. Bus stop consolidation has been proven to improve operating efficiency and ridership on bus routes. Fakes Some nursing homes and hospitals have built fake, imitation bus stops for their residents who have dementia.
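The factors listed above combine into a capacity estimate for a single loading area. The Python sketch below implements a simplified TCQSM-style relationship; the parameter values, the assumed coefficient of variation of dwell times, and the failure-rate variate are illustrative assumptions, not figures from the manual.

```python
def loading_area_capacity(g_over_c, dwell_s, clearance_s,
                          cv_dwell=0.6, z=1.28):
    """Estimate buses/hour one loading area can serve (simplified TCQSM-style).

    g_over_c    -- green time / cycle length at the nearby traffic signal
    dwell_s     -- average dwell time, seconds
    clearance_s -- time for a bus to re-enter the traffic stream, seconds
    cv_dwell    -- coefficient of variation of dwell times (assumed value)
    z           -- one-tail normal variate for the design failure rate
                   (1.28 corresponds to roughly a 10% failure rate)
    """
    # The operating margin absorbs dwell-time variability so that a
    # following bus rarely finds the loading area still occupied.
    operating_margin = z * cv_dwell * dwell_s
    return 3600 * g_over_c / (clearance_s + dwell_s * g_over_c + operating_margin)

# Illustrative numbers: 50% green ratio, 30 s average dwell, 10 s clearance.
b = loading_area_capacity(g_over_c=0.5, dwell_s=30, clearance_s=10)
print(round(b, 1))  # about 37.5 buses/hour
```

Multiplying this per-loading-area figure by the number of effective loading areas gives an overall bus stop capacity, which is why the number of loading areas heads the factor list.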
Some of these bus stops are even fitted with old advertisements and timetables to give a sense of familiarity. The residents will sit at the bus stop waiting for a bus to take them to their imagined destination. After some time, a staff member comes to escort the residents back to the home. In popular culture Bus stops are common tropes in popular culture. In 1956, there was a Marilyn Monroe film called Bus Stop. A famous scene in the movie Forrest Gump takes place at a bus stop, and almost all episodes of the South Park series start by showing the main characters at a bus stop. In Japanese culture, the movie My Neighbor Totoro featured a bus stop, both for ordinary buses and a cat bus. The opening scene of the anime Air shows the main character getting off at a bus stop. The Japanese movie Summer Wars features a rural bus stop. Renowned rabbis have taught lessons in Judaism from their interaction and experience with bus stops.
Technology
Concepts of ground transport
null
240512
https://en.wikipedia.org/wiki/Duiker
Duiker
A duiker is a small to medium-sized brown antelope native to sub-Saharan Africa, found in heavily wooded areas. The 22 extant species, including three sometimes considered to be subspecies of other species, form the subfamily Cephalophinae or the tribe Cephalophini. Taxonomy and phylogeny The tribe Cephalophini (formerly the subfamily Cephalophinae) comprises three genera and 22 species, three of which are sometimes considered to be subspecies of other species. The three genera include Cephalophus (15 species and three disputed taxa), Philantomba (three species), and Sylvicapra (one species). The subfamily was first described by British zoologist John Edward Gray in 1871 in Proceedings of the Zoological Society of London. The scientific name "Cephalophinae" probably comes from the combination of the New Latin word cephal, meaning head, and the Greek word lophos, meaning crest. The three disputed species in Cephalophus are Brooke's duiker (C. brookei), Ruwenzori duiker (C. rubidus), and the white-legged duiker (C. crusalbum). Considered to be a subspecies of Ogilby's duiker (C. ogilbyi), Brooke's duiker was elevated to species status by British ecologist Peter Grubb in 1998. Its status as a species was further seconded in a 2002 publication by Grubb and colleague Colin Groves. However, zoologists such as Jonathan Kingdon continue to treat it as a subspecies. The Ruwenzori duiker is generally considered to be a subspecies of the black-fronted duiker (C. nigrifrons). However, significant differences from another race of the same species, C. n. kivuensis, with which it is sympatric on the Ruwenzori mountain range, led Kingdon to suggest that it might be a different species altogether. Grubb treated the white-legged duiker as a subspecies of Ogilby's duiker in 1978, but it was regarded as an independent species by him and Groves after a revision in 2011. This was supported by a 2003 study.
A 2001 phylogenetic study divided Cephalophus into three distinct lineages: the giant duikers, east African red duikers, and west African red duikers. Abbott's duiker (C. spadix), the bay duiker (C. dorsalis), Jentink's duiker (C. jentinki) and the yellow-backed duiker (C. silvicultor) were classified as the giant duikers. The east African red duikers include the black-fronted duiker (C. nigrifrons), Harvey's duiker (C. harveyi), red-flanked duiker (C. rufilatus), red forest duiker (C. natalensis), Ruwenzori duiker, and white-bellied duiker (C. leucogaster). The third group, the west African red duikers, comprises the black duiker (C. niger), Ogilby's duiker, Peters' duiker (C. callipygus), and Weyns's duiker (C. weynsi). However, the status of two species, Aders's duiker and zebra duiker, remained dubious. In 2012, Anne R. Johnston (of the University of Orleans) and colleagues constructed a cladogram of the subfamily Cephalophinae (duiker) based on mitochondrial analysis. Etymology The common name "duiker" comes from the Afrikaans word duik, or Dutch duiken - both mean "to dive", which refers to the animals' habit of frequently diving into vegetation for cover. Description Duikers are split into two groups based on their habitat – forest and bush duikers. All forest species inhabit the rainforests of sub-Saharan Africa, while the only known bush duiker, the common grey duiker, occupies savannas. Duikers are very shy, elusive creatures with a fondness for dense cover; those that tend to live in more open areas, for example, are quick to disappear into thickets for protection. Because of their rarity and scattered populations, not much is known about duikers; thus, further generalizations are widely based on the most commonly studied red forest, blue, yellow-backed, and common grey duikers. In tropical rainforest zones of Africa, people nonselectively hunt duikers for their hide, meat, and horns at highly unsustainable rates.
Population trends for all species of duikers, excluding the common duiker and the smallest blue duiker, are significantly decreasing; Aders' and particularly the larger duiker species such as the Jentink's and Abbott's duikers, are now considered endangered by the IUCN Red List of Threatened Species. Anatomy and physiology Duikers range in size from the small blue duiker to the large yellow-backed duiker. With their bodies low to the ground and with very short horns, forest duikers are built to navigate effectively through dense rainforests and quickly dive into bushes when threatened. Since the common grey duiker lives in more open areas, such as savannas, it has longer legs and vertical horns, which allow it to run faster and for longer distances; only the males, which are more confrontational and territorial, exhibit horns. Also, duikers have well-developed preorbital glands, which resemble slits under their eyes, or in the case of blue duikers, pedal glands on their hooves. Males use secretions from these glands to mark their territories. Outside of reproduction, duikers behave in a highly independent manner and prefer to act alone. This may, in part, explain the limited sexual size dimorphism shown by most duiker species, excluding the common duiker, in which the females are distinctly larger than the males. Also, body size is proportional to the amount of food intake and the size of food. Anatomical features such as the head and neck shape also limit the amount and size of food intake. “Anatomical variations... impose further constraints on ingestion”, causing differences in the food sources among different species of duiker. Behaviour Interactions In 2001, Helen Newing's study in West Africa on the interactions of duikers found that body size, habitat preference, and activity patterns were the main differentiating factors among the seven species of duikers. These differences specific to each species of duiker allow them to coexist by limiting niche overlap.
However, although some species are yet to be considered endangered, because of repeated damage to and fragmentation of their habitat by human activities, such niche specialization is gradually becoming impaired and is contributing to the significant decrease in population. Due to their relative size and reserved nature, duikers' primary defense mechanism is to hide from predators. Duikers are known for their extreme shyness, freezing at the slightest sign of a threat and diving into the nearest bush. Duikers' social behavior involves maintaining sufficient distance between individuals. However, in contrast to their reserved nature, duikers are more aggressive when dealing with territories; they mark their territory and their mates with secretions from their preorbital glands and fight other duikers that challenge their authority. Male common duikers, especially the younger males, also mark their territories by defecation. Duikers that travel alone interact with other duikers only once or twice a year, solely for the purpose of mating. Although duikers occasionally form temporary groups to gather fallen fruit, because so little is known about how they interact and affect one another, determining which factors contribute the most to their endangerment is difficult. Duikers prefer to live alone or as pairs to avoid the competition that comes from living in a large group. They have also evolved to become highly selective feeders, feeding only on specific parts of plants. In fact, in his study regarding the relationship between group size and feeding style, P.J. Jarman found that the more selective an organism's diet is, the more dispersed its food will be, and consequently, the smaller the group becomes. Diet Duikers are primarily browsers rather than grazers, eating leaves, shoots, seeds, fruit, buds, and bark, and often following flocks of birds or troops of monkeys to take advantage of the fruit they drop.
They supplement their diets with meat: duikers consume insects and carrion from time to time and even manage to capture rodents or small birds. Since food is the deciding factor, various locations of food sources often dictate the distribution of duikers. While they feed on a wide range of plants, they choose to eat specific parts of the plant that are most nutritious. Therefore, to feed efficiently, they must be familiar with their territory and be thoroughly acquainted with the geography and distribution of specific plants. For such reasons, it is not easy for duikers to adjust to the novel environments created by human settlements and deforestation. The smaller species, for example the blue duiker, generally tend to eat various seeds, while larger ones tend to feast more on larger fruits. Since blue duikers are very small, they are more efficient in digesting small, high-quality items. Receiving most of their water from the foods they eat, duikers do not rely on drinking water and can be found in waterless areas. Activity patterns Duikers can be diurnal, nocturnal, or both. Since the majority of their food is available in the daytime, most duikers have evolved to be diurnal. A correlation exists between body size and sleep pattern in duikers. While small to medium-sized duikers show increased activity and scavenge for food during the daytime, larger duikers are most active at night. An exception to this is the yellow-backed duiker, the largest species, which is active during both day and night. Distribution and abundance Duikers are found sympatrically in many different regions. Most species dwell in the tropical rainforests of Central and West Africa, creating overlapping regions among different species of forest duikers.
Although "body size is the primary factor in defining the fundamental niches of each species", often dictating the distribution and abundance of duikers in a given habitat, distinguishing between the numerous species of duikers based purely on distribution and abundance is often difficult. For example, the blue duiker and red forest duiker coexist within a small area of Mossapoula (Central African Republic). While blue duikers are seen more frequently than red forest duikers in the heavily hunted area of Mossapoula, red forest duikers are more often observed in less exploited regions such as the western Dja Reserve of Cameroon. Ecology Conservation of duikers has a direct and critical relationship with their ecology. Disruption of balance in the system leads to unprecedented competition, both interspecific and intraspecific. Before intervention, the system of specialized resources, in which larger duikers exploit one type of food and smaller duikers another, is functional, as modeled in the diurnal and nocturnal nature of the duikers; this allows the niche to be shared by others without distinct interspecific competition. Similarly, they decrease intraspecific competition by being solitary, independent, and selective in eating habits. In consequence, disruption of the competitive balance in one habitat often cascades its effect onto the competitive balance in another habitat. Also, a correlation exists between body size and diet. Larger animals have more robust digestive systems, stronger jaws, and wider necks, which allow them to consume lower-quality foods and larger fruits and seeds. Similarly, bay and Peters' duikers can coexist because of their different sleep patterns. This allows Peters' duikers to eat fruits by day, and the bay duikers to eat what is left by night. In consequence of such a life pattern, the bay duiker's digestive system has evolved to consume remaining, rather poor-quality foods.
Another critical influence that duikers have on the environment is acting as “seed dispersers for some plants”. They maintain a mutualistic relationship with certain plants; the plants serve as a nutritious and abundant food source for the duikers, and simultaneously benefit from the extensive dispersal of their seeds by the duikers. Conservation Duikers live in an environment where even a subtle change in their life patterns can greatly impact the surrounding ecosystem. Two of the main factors that directly lead to duiker extinction are habitat loss and overexploitation. Constant urbanization and the process of “shifting agriculture” are gradually taking over many of duikers' habitats; at the same time, overexploitation is also permitting the overgrowth of other interacting species, resulting in an inevitable disruption of coexistence. Overexploitation of duikers affects their population and organisms that rely on them for survival. For instance, plants that depend on duikers for seed dispersal may lose their primary method of reproduction, and other organisms that depend on these particular plants as their resources would also have their major source of food reduced. Duikers are often captured for bushmeat. In fact, duikers are one of the most hunted animals both in terms of number and biomass in Central Africa. For example, in areas near the African rainforests, because people do not raise their own livestock, many people of all classes rely on bushmeat as their source of protein. For these people, if the trend of overexploitation continues at such a high rate, the effects of the population decrease in duikers will be too severe for these organisms to serve as a reliable food source. In addition to the unnaturally high demand for bushmeat, unenforced hunting laws are a perpetual threat to many species, including the duiker. Most hunters believe that the diminishing number of animals is due to overexploitation.
The direct effects of hunting include overexploitation of target species and incidental hunting of nontargeted or rare species (because hunting is largely nonselective). To avoid this outcome, viable methods of conserving duikers are access restriction and captive breeding. Access restriction involves imposing temporal or spatial restrictions on hunting duikers. Temporal restrictions include closing off certain seasons, such as the main birth season, to hunting; spatial restrictions include closing off certain regions where endangered duikers are found. Captive breeding has been used and is often looked to as a solution to ensuring the survival of the duiker population; however, due to the duikers' low reproductive rate, even with the protection provided by the conservationists, captive breeding would not increase the overall population's growth rate. The greatest challenge facing the conservation of duikers is the lack of sufficient knowledge regarding these organisms, coupled with their unique population dynamics. The need is not only to thoroughly understand their population dynamics, but also to establish methods to differentiate among the various species. Bushmeat industry The World Health Organization (WHO) has identified the sale of duiker bushmeat as contributing to the spread of filoviruses such as Ebola, citing Georges et al., 1999. The WHO notes that the risk of infection predominantly arises from the slaughter and preparation of meat, and that consumption of properly cooked meat does not pose a risk. Species Tribe Cephalophini Genus Cephalophus Abbott's duiker, C. spadix Aders's duiker, C. adersi Bay duiker, C. dorsalis Black duiker, C. niger Black-fronted duiker, C. nigrifrons Brooke's duiker, C. brookei Harvey's duiker, C. harveyi Jentink's duiker, C. jentinki Ogilby's duiker, C. ogilbyi Peters' duiker, C. callipygus Red-flanked duiker, C. rufilatus Red forest duiker, C. natalensis Ruwenzori duiker, C. rubidus (may be a subspecies of the black-fronted duiker or the red-flanked duiker) Weyns's duiker, C. weynsi White-bellied duiker, C. leucogaster White-legged duiker, C. crusalbum (may be a subspecies of Ogilby's duiker) Yellow-backed duiker, C. silvicultor Zebra duiker, C. zebra Genus Philantomba Blue duiker, P. monticola Maxwell's duiker, P. maxwellii Walter's duiker, P. walteri Genus Sylvicapra Common duiker, S. grimmia
Biology and health sciences
Bovidae
Animals
240616
https://en.wikipedia.org/wiki/Sea%20cucumber
Sea cucumber
Sea cucumbers are echinoderms from the class Holothuroidea. They are marine animals with a leathery skin and an elongated body containing a single, branched gonad. They are found on the sea floor worldwide. The number of known holothurian species worldwide is about 1,786, with the greatest number being in the Asia–Pacific region. Many of these are gathered for human consumption, and some species are cultivated in aquaculture systems. The harvested product is variously referred to as trepang, namako, bêche-de-mer, or balate. Sea cucumbers serve a useful role in the marine ecosystem as they help recycle nutrients, breaking down detritus and other organic matter, after which bacteria can continue the decomposition process. Like all echinoderms, sea cucumbers have an endoskeleton just below the skin, calcified structures that are usually reduced to isolated microscopic ossicles (or sclerites) joined by connective tissue. In some species these can sometimes be enlarged to flattened plates, forming an armour. In pelagic species such as Pelagothuria natatrix (order Elasipodida, family Pelagothuriidae), the skeleton is absent and there is no calcareous ring. Sea cucumbers are named for their resemblance to the fruit of the cucumber plant. Overview Most sea cucumbers have a soft and cylindrical body, rounded off and occasionally fat in the extremities, and generally without solid appendages. Their shape ranges from almost spherical for "sea apples" (genus Pseudocolochirus) to serpent-like for Apodida or the classic sausage-shape, while others resemble caterpillars. The mouth is surrounded by tentacles, which can be pulled back inside the animal. Holothurians measure generally between long, with extremes of some millimetres for Rhabdomolgus ruber and up to more than for Synapta maculata. The largest American species, Holothuria floridana, which abounds just below low-water mark on the Florida reefs, has a volume of well over , and long.
Most possess five rows of tube feet (called "podia"), but Apodida lacks these and moves by crawling; the podia can be of smooth aspect or provided with fleshy appendages (like Thelenota ananas). The podia on the dorsal surface generally have no locomotive role, and are transformed into papillae. At one of the extremities opens a rounded mouth, generally surrounded with a crown of tentacles which can be very complex in some species (they are in fact modified podia); the anus is postero-dorsal. Holothurians do not look like other echinoderms at first glance, because of their tubular body, with no visible skeleton or hard appendages. Furthermore, the fivefold symmetry, classical for echinoderms, although preserved structurally, is doubled here by a bilateral symmetry which makes them look like chordates. However, a central symmetry is still visible in some species through five 'radii', which extend from the mouth to the anus (just like for sea urchins), on which the tube feet are attached. There is thus no "oral" or "aboral" face as for sea stars and other echinoderms, but the animal stands on one of its sides, and this face is called the trivium (with three rows of tube feet), while the dorsal face is named the bivium. A remarkable feature of these animals is the "catch" collagen that forms their body wall. This can be loosened and tightened at will, and if the animal wants to squeeze through a small gap, it can essentially liquefy its body and pour into the space. To keep itself safe in these crevices and cracks, the sea cucumber will hook up all its collagen fibers to make its body firm again. The most common way to separate the subclasses is by looking at their oral tentacles. The Apodida have a slender and elongate body lacking tube feet, with up to 25 simple or pinnate oral tentacles. Aspidochirotida are the most common sea cucumbers encountered, with a strong body and 10 to 30 leaflike or shield-like oral tentacles.
Dendrochirotida are filter-feeders, with plump bodies and eight to 30 branched oral tentacles (which can be extremely long and complex). Anatomy Sea cucumbers are typically in length, although the smallest known species are just long, and the largest can reach . The body ranges from almost spherical to worm-like, and lacks the arms found in many other echinoderms, such as starfish. The anterior end of the animal, containing the mouth, corresponds to the oral pole of other echinoderms (which, in most cases, is the underside), while the posterior end, containing the anus, corresponds to the aboral pole. Thus, compared with other echinoderms, sea cucumbers can be said to be lying on their side. Body plan The body of a holothurian is roughly cylindrical. It is radially symmetrical along its longitudinal axis, and has weak bilateral symmetry transversely with a dorsal and a ventral surface. As in other echinozoans, there are five ambulacra separated by five ambulacral grooves, the interambulacra. The ambulacral grooves bear four rows of tube feet but these are diminished in size or absent in some holothurians, especially on the dorsal surface. The two dorsal ambulacra make up the bivium while the three ventral ones are known as the trivium. At the anterior end, the mouth is surrounded by a ring of tentacles which are usually retractable into the mouth. These are called the primary tentacles and were present in the common ancestor of echinoderms, but have been lost in all the other classes of the phylum; they may be simple, branched or arborescent. This anterior region, mouth and tentacles together, is known as the introvert, and posterior to it there is an internal ring of large calcareous ossicles. Attached to this are five bands of muscle running internally and longitudinally along the ambulacra. There are also circular muscles, contraction of which causes the animal to elongate and the introvert to extend. Anterior to the ossicles lie further muscles, contraction of which causes the introvert to retract.
The body wall consists of an epidermis and a dermis and contains smaller calcareous ossicles, the types of which are characteristics that help to identify different species. Inside the body wall is the coelom, which is divided by three longitudinal mesenteries which surround and support the internal organs. Digestive system A pharynx lies behind the mouth and is surrounded by a ring of ten calcareous plates. In most sea cucumbers, this is the only substantial part of the skeleton, and it forms the point of attachment for muscles that can retract the tentacles into the body for safety, as well as for the main muscles of the body wall. Many species possess an oesophagus and stomach, but in some the pharynx opens directly into the intestine. The intestine is typically long and coiled, and loops through the body three times before terminating in a cloacal chamber, or directly as the anus. Nervous system Sea cucumbers have no true brain. A ring of neural tissue surrounds the oral cavity, and sends nerves to the tentacles and the pharynx. The animal is, however, quite capable of functioning and moving about if the nerve ring is surgically removed, demonstrating that it does not have a central role in nervous coordination. In addition, five major nerves run from the nerve ring down the length of the body beneath each of the ambulacral areas. Most sea cucumbers have no distinct sensory organs, although there are various nerve endings scattered through the skin, giving the animal a sense of touch and a sensitivity to the presence of light. There are, however, a few exceptions: members of the Apodida order are known to possess statocysts, while some species possess small eye-spots near the bases of their tentacles. Respiratory system Sea cucumbers extract oxygen from water in a pair of "respiratory trees" that branch in the cloaca just inside the anus, so that they "breathe" by drawing water in through the anus and then expelling it.
The trees consist of a series of narrow tubules branching from a common duct, and lie on either side of the digestive tract. Gas exchange occurs across the thin walls of the tubules, to and from the fluid of the main body cavity. Together with the intestine, the respiratory trees also act as excretory organs, with nitrogenous waste diffusing across the tubule walls in the form of ammonia and phagocytic coelomocytes depositing particulate waste. Circulatory systems Like all echinoderms, sea cucumbers possess both a water vascular system that provides hydraulic pressure to the tentacles and tube feet, allowing them to move, and a haemal system. The latter is more complex than that in other echinoderms, and consists of well-developed vessels as well as open sinuses. A central haemal ring surrounds the pharynx next to the ring canal of the water vascular system, and sends off additional vessels along the radial canals beneath the ambulacral areas. In the larger species, additional vessels run above and below the intestine and are connected by over a hundred small muscular ampullae, acting as miniature hearts to pump blood around the haemal system. Additional vessels surround the respiratory trees, although they contact them only indirectly, via the coelomic fluid. Indeed, the blood itself is essentially identical with the coelomic fluid that bathes the organs directly, and also fills the water vascular system. Phagocytic coelomocytes, somewhat similar in function to the white blood cells of vertebrates, are formed within the haemal vessels, and travel throughout the body cavity as well as both circulatory systems. An additional form of coelomocyte, not found in other echinoderms, has a flattened discoid shape, and contains hemoglobin. As a result, in many (though not all) species, both the blood and the coelomic fluid are red in colour. 
Vanadium has been reported in high concentrations in holothurian blood; however, researchers have been unable to reproduce these results. Locomotive organs Like all echinoderms, sea cucumbers possess pentaradial symmetry, with their bodies divided into five nearly identical parts around a central axis. However, because of their posture, they have secondarily evolved a degree of bilateral symmetry. For example, because one side of the body is typically pressed against the substratum, and the other is not, there is usually some difference between the two surfaces (except for Apodida). Like sea urchins, most sea cucumbers have five strip-like ambulacral areas running along the length of the body from the mouth to the anus. The three on the lower surface have numerous tube feet, often with suckers, that allow the animal to crawl along; they are called the trivium. The two on the upper surface have under-developed or vestigial tube feet, and some species lack tube feet altogether; this face is called the bivium. In some species, the ambulacral areas can no longer be distinguished, with tube feet spread over a much wider area of the body. Those of the order Apodida have no tube feet or ambulacral areas at all, and burrow through sediment with muscular contractions of their body similar to that of worms; however, five radial lines are generally still obvious along their body. Even in those sea cucumbers that lack regular tube feet, those that are immediately around the mouth are always present. These are highly modified into retractile tentacles, much larger than the locomotive tube feet. Depending on the species, sea cucumbers have between 10 and 30 such tentacles and these can have a wide variety of shapes depending on the diet of the animal and other conditions. Many sea cucumbers have papillae, conical fleshy projections of the body wall with sensory tube feet at their apices. These can even evolve into long antennae-like structures, especially in the abyssal genus Scotoplanes.
Endoskeleton Echinoderms typically possess an internal skeleton composed of plates of calcium carbonate. In most sea cucumbers, however, these have become reduced to microscopic ossicles embedded beneath the skin. A few genera, such as Sphaerothuria, retain relatively large plates, giving them a scaly armour. Life history and behaviour Habitat Sea cucumbers can be found in great numbers on the deep seafloor, where they often make up the majority of the animal biomass. At depths deeper than , sea cucumbers comprise 90% of the total mass of the macrofauna. Sea cucumbers form large herds that move across the bathygraphic features of the ocean, hunting food. The body of some deep water holothurians, such as Enypniastes eximia, Peniagone leander and Paelopatides confundens, is made of a tough gelatinous tissue with unique properties that make the animals able to control their own buoyancy, making it possible for them to either live on the ocean floor or to actively swim or float over it in order to move to new locations, in a manner similar to how the group Torquaratoridae floats through water. Holothurians appear to be the echinoderms best adapted to extreme depths, and are still very diversified beyond deep: several species from the family Elpidiidae ("sea pigs") can be found deeper than , and the record seems to be some species of the genus Myriotrochus (in particular Myriotrochus bruuni), identified down to deep. In more shallow waters, sea cucumbers can form dense populations. The strawberry sea cucumber (Squamocnus brevidentis) of New Zealand lives on rocky walls around the southern coast of the South Island, where populations sometimes reach densities of . For this reason, one such area in Fiordland is called the strawberry fields.
Locomotion Some species in the abyssal order Elasipodida have evolved a "benthopelagic" behaviour: their body is nearly the same density as the water around them, so they can make long jumps (up to high), before falling slowly back to the ocean floor. Most of them have specific swimming appendages, such as some kind of umbrella (like Enypniastes), or a long lobe on top of the body (Psychropotes). Only one species is known to be truly and completely pelagic, never coming close to the bottom: Pelagothuria natatrix. Diet Holothuroidea are generally scavengers, feeding on debris in the benthic zone of the ocean. Exceptions include some pelagic cucumbers and the species Rynkatorpa pawsoni, which has a commensal relationship with deep-sea anglerfish. The diet of most cucumbers consists of plankton and decaying organic matter found in the sea. Some sea cucumbers position themselves in currents and catch food that flows by with their open tentacles. They also sift through the bottom sediments using their tentacles. Other species can dig into bottom silt or sand until they are completely buried. They then extrude their feeding tentacles, ready to withdraw at any hint of danger. In the South Pacific, sea cucumbers may be found in densities of . These populations can process of sediment per year. The shape of the tentacles is generally adapted to the diet, and to the size of the particles to be ingested: the filter-feeding species mostly have complex arborescent tentacles, intended to maximize the surface area available for filtering, while the species feeding on the substratum will more often need digitate tentacles to sort out the nutritional material; the detritivore species living on fine sand or mud more often need shorter "peltate" tentacles, shaped like shovels. A single specimen can swallow more than of sediment a year, and its excellent digestive capacities allow it to reject a finer, purer and more homogeneous sediment.
Therefore, sea cucumbers play a major role in the biological processing of the sea bed (bioturbation, purge, homogenization of the substratum etc.). Communication and sociability Reproduction Most sea cucumbers reproduce by releasing sperm and ova into the ocean water. Depending on conditions, one organism can produce thousands of gametes. Sea cucumbers are typically dioecious, with separate male and female individuals, but some species are protandric. The reproductive system consists of a single gonad, consisting of a cluster of tubules emptying into a single duct that opens on the upper surface of the animal, close to the tentacles. At least 30 species, including the red-chested sea cucumber (Pseudocnella insolens), fertilize their eggs internally and then pick up the fertilized zygote with one of their feeding tentacles. The egg is then inserted into a pouch on the adult's body, where it develops and eventually hatches from the pouch as a juvenile sea cucumber. A few species are known to brood their young inside the body cavity, giving birth through a small rupture in the body wall close to the anus. Development In all other species, the egg develops into a free-swimming larva, typically after around three days of development. The first stage of larval development is known as an auricularia, and is only around in length. This larva swims by means of a long band of cilia wrapped around its body, and somewhat resembles the bipinnaria larva of starfish. As the larva grows it transforms into the doliolaria, with a barrel-shaped body and three to five separate rings of cilia. The pentacularia is the third larval stage of sea cucumber, where the tentacles appear. The tentacles are usually the first adult features to appear, before the regular tube feet. Symbiosis and commensalism Numerous small animals can live in symbiosis or commensalism with sea cucumbers, as well as some parasites. 
Some cleaner shrimps can live on the tegument of holothurians, in particular several species of the genus Periclimenes (a genus specialized in echinoderms), notably Periclimenes imperator. A variety of fish, most commonly pearl fish, have evolved a commensalistic symbiotic relationship with sea cucumbers in which the pearl fish will live in the sea cucumber's cloaca, using it for protection from predation, a source of food (the nutrients passing in and out of the anus from the water), and to develop into their adult stage of life. Many polychaete worms (family Polynoidae) and crabs (like Lissocarcinus orbicularis) have also specialized to use the mouth or the cloacal respiratory trees for protection by living inside the sea cucumber. Nevertheless, holothurian species of the genus Actinopyga have anal teeth that prevent visitors from penetrating their anus. Sea cucumbers can also shelter bivalves as endocommensals, such as Entovalva sp. Predators and defensive systems Sea cucumbers are often ignored by most marine predators because of the toxins they contain (in particular, holothurin) and because of their often spectacular defensive systems. However, they remain a prey for some highly specialized predators which are not affected by their toxins, such as the large mollusks Tonna galea and Tonna perdix, which paralyze them with a powerful poison before swallowing them completely. Some other less specialized and opportunist predators can also prey on sea cucumbers sometimes, when they cannot find any better food, such as certain species of fish (triggerfish, pufferfish) and crustaceans (crabs, lobsters, hermit crabs). Some species of coral-reef sea cucumbers within the order Aspidochirotida can defend themselves by expelling their sticky Cuvierian tubules (enlargements of the respiratory tree that float freely in the coelom) to entangle potential predators.
When startled, these cucumbers may expel some of them through a tear in the wall of the cloaca in an autotomic process known as evisceration. Replacement tubules grow back in one and a half to five weeks, depending on the species. The release of these tubules can also be accompanied by the discharge of a toxic chemical known as holothurin, which has similar properties to soap. This chemical can kill animals in the vicinity and is one more method by which these sedentary animals can defend themselves. Aestivation If the water temperature becomes too high, some species of sea cucumber from temperate seas can aestivate. While they are in this state of dormancy, they stop feeding, their gut atrophies, their metabolism slows down and they lose weight. The body returns to its normal state when conditions improve. Phylogeny and classification Holothuroidea (sea cucumbers) are one of five extant classes that make up the phylum Echinodermata. This is one of the most distinctive and diverse phyla, ranging from starfish to urchins to sea cucumbers and many other organisms. The echinoderms are mainly distinguished from other phyla by their body plan and organization. The earliest sea cucumbers are known from the middle Ordovician, over 450 million years ago. The Apodida are the sister group to the other orders of sea cucumbers. All echinoderms share three main characteristics. When mature, echinoderms have a pentamerous radial symmetry. While this can easily be seen in a sea star or brittle star, in the sea cucumber it is less distinct and seen in their five primary tentacles. The pentamerous radial symmetry can also be seen in their five ambulacral canals. The ambulacral canals are used in their water vascular system, which is another characteristic that binds this phylum together. The water vascular system develops from their middle coelom or hydrocoel. Echinoderms use this system for many things, including movement by pushing water in and out of their podia or "tube feet".
The tube feet of echinoderms (including sea cucumbers) are aligned along their axes. While echinoderms are invertebrates, meaning they do not have a spine, they do all have an endoskeleton that is secreted by the mesenchyme. This endoskeleton is composed of plates called ossicles. They are always internal but may only be covered by a thin epidermal layer, as in sea urchins' spines. In the sea cucumber, the ossicles are only found in the dermis, making them very supple organisms. For most echinoderms, their ossicles are found in units making up a three-dimensional structure. However, in sea cucumbers, the ossicles are found in a two-dimensional network. All echinoderms also possess anatomical features called mutable collagenous tissues, or MCTs. Such tissues can rapidly change their passive mechanical properties from soft to stiff under the control of the nervous system and coordinated with muscle activity. Different echinoderm classes use MCTs in different ways. The asteroids, sea stars, can detach limbs for self-defense and then regenerate them. The Crinoidea, sea lilies, can go from stiff to limp depending on the current for optimal filter feeding. The Echinoidea, sand dollars, use MCTs to grow and replace their rows of teeth when they need new ones. The Holothuroidea, sea cucumbers, use MCTs to eviscerate their gut as a self-defense response. MCTs can be used in many ways but are all similar at the cellular level and in mechanics of function. A common trend in the uses of MCTs is that they are generally used for self-defense mechanisms and in regeneration. Holothurian classification is complex and their paleontological phylogeny relies on a limited number of well-preserved specimens. The modern taxonomy is based first of all on the presence or the shape of certain soft parts (podia, lungs, tentacles, peripharyngeal crown) to determine the main orders, and secondarily on the microscopic examination of ossicles to determine the genus and the species.
Contemporary genetic methods have been helpful in clarifying their classification. Taxonomic classification according to World Register of Marine Species: subclass Actinopoda Ludwig, 1891 order Dendrochirotida Grube, 1840 family Cucumariidae Ludwig, 1894 family Cucumellidae Thandar & Arumugam, 2011 family Heterothyonidae Pawson, 1970 family †Monilipsolidae Smith & Gallemí, 1991 family Paracucumidae Pawson & Fell, 1965 family Phyllophoridae Östergren, 1907 family Placothuriidae Pawson & Fell, 1965 family Psolidae Burmeister, 1837 family Rhopalodinidae Théel, 1886 family Sclerodactylidae Panning, 1949 family Vaneyellidae Pawson & Fell, 1965 family Ypsilothuriidae Heding, 1942 order Elasipodida Théel, 1882 family Elpidiidae Théel, 1882 family Laetmogonidae Ekman, 1926 family †Palaeolaetmogonidae Reich, 2012 family Pelagothuriidae Ludwig, 1893 family Psychropotidae Théel, 1882 order Holothuriida Miller, Kerr, Paulay, Reich, Wilson, Carvajal & Rouse, 2017 family Holothuriidae Burmeister, 1837 family Mesothuriidae Smirnov, 2012 order Molpadida Haeckel, 1896 family Caudinidae Heding, 1931 family Eupyrgidae Semper, 1867 family Gephyrothuriidae Koehler & Vaney, 1905 family Molpadiidae Müller, 1850 order Persiculida Miller, Kerr, Paulay, Reich, Wilson, Carvajal & Rouse, 2017 family Gephyrothuriidae Koehler & Vaney, 1905 family Molpadiodemidae Miller, Kerr, Paulay, Reich, Wilson, Carvajal & Rouse, 2017 family Pseudostichopodidae Miller, Kerr, Paulay, Reich, Wilson, Carvajal & Rouse, 2017 order Synallactida Miller, Kerr, Paulay, Reich, Wilson, Carvajal & Rouse, 2017 family Deimatidae Théel, 1882 family Stichopodidae Haeckel, 1896 family Synallactidae Ludwig, 1894 subclass †Arthrochirotacea Smirnov, 2012 order †Arthrochirotida Brandt, 1835 family †Palaeocucumariidae Frizzell & Exline, 1966 subclass Paractinopoda Ludwig, 1891 order Apodida Brandt, 1835 family Chiridotidae Östergren, 1898 family Myriotrochidae Théel, 1877 family Synaptidae Burmeister, 1837 Relation to humans Food
To supply the markets of Southern China, Makassar trepangers traded with the Indigenous Australians of Arnhem Land from at least the 18th century and probably earlier. This is the first recorded example of trade between the inhabitants of the Australian continent and their Asian neighbours. There are many commercially important species of sea cucumber that are harvested and dried for export for use in Chinese cuisine as hoisam. Some of the more commonly found species in markets include: Acaudina molpadioides Actinopyga echinites Actinopyga mauritiana Actinopyga palauensis Apostichopus californicus Apostichopus japonicus Holothuria nobilis Holothuria scabra Holothuria fuscogilva Isostichopus fuscus Thelenota ananas Medicine According to the American Cancer Society, although it has been used in traditional Asian folk medicine for a variety of ailments, "there is little reliable scientific evidence to support claims that sea cucumber is effective in treating cancer, arthritis, and other diseases" but research is examining "whether some compounds made by sea cucumbers may be helpful against cancer". Various pharmaceutical companies emphasize gamat, the Malay traditional medicinal usage of this animal. Extracts are prepared and made into oil, cream or cosmetics. Some products are intended to be taken internally. A review article found that chondroitin sulfate and related compounds found in sea cucumbers can help in treating joint pain, and that dried sea cucumber is "medicinally effective in suppressing arthralgia". Another study suggested that sea cucumbers contain all the fatty acids necessary to play a potentially active role in tissue repair. Sea cucumbers are under investigation for use in treating ailments including colorectal cancer. Surgical probes made of nanocomposite material based on the sea cucumber have been shown to reduce brain scarring.
One study found that a lectin from Cucumaria echinata impaired the development of the malaria parasite when produced by transgenic mosquitoes. Procurement Sea cucumbers are harvested from the environment, both legally and illegally, and are increasingly farmed via aquaculture. The harvested animals are normally dried for resale. In 2016, prices on Alibaba ranged up to . Commercial harvest In recent years, the sea cucumber industry in Alaska has grown due to increased demand from China for the skins and muscles. Wild sea cucumbers are caught by divers. Wild Alaskan sea cucumbers have higher nutritional value and are larger than farmed Chinese sea cucumbers. Larger size and higher nutritional value have allowed the Alaskan fisheries to continue to compete for market share. One of Australia's oldest fisheries is the collection of sea cucumber, harvested by divers from throughout the Coral Sea in far North Queensland, Torres Straits and Western Australia. In the late 1800s, as many as 400 divers operated from Cooktown, Queensland. Overfishing of sea cucumbers in the Great Barrier Reef is threatening their population. Their popularity as luxury seafood in East Asian countries poses a serious threat. Black market As of 2013, a thriving black market was driven by demand in China where at its peak might have sold for the equivalent of and a single sea cucumber for about . A crackdown by governments both in and out of China reduced both prices and consumption, particularly among government officials who had been known to eat (and were able to afford purchasing) the most expensive and rare species. In the Caribbean Sea off the shores of the Yucatán Peninsula near fishing ports such as Dzilam de Bravo, illegal harvesting had devastated the population and resulted in conflict as rival gangs struggled to control the harvest.
Aquaculture Overexploitation of sea cucumber stocks in many parts of the world provided motivation for the development of sea cucumber aquaculture in the early 1980s. The Chinese and Japanese were the first to develop successful hatchery technology for Apostichopus japonicus, prized for its high meat content and success in commercial hatcheries. Using techniques pioneered by the Chinese and Japanese, a second species, Holothuria scabra, was cultured for the first time in India in 1988. In recent years, Australia, Indonesia, New Caledonia, the Maldives, the Solomon Islands and Vietnam have successfully cultured H. scabra using the same technology, and now culture other species. Conservation In India, the commercial harvest and transportation of sea cucumbers has been strictly banned under Schedule I of the Wild Life (Protection) Act, 1972 (WLPA) since 2001. In 2020, the Indian government created the world's first sea cucumber conservation area, the Dr. K.K. Mohammed Koya Sea Cucumber Conservation Reserve, to protect the sea cucumber species. In popular culture Sea cucumbers have inspired thousands of haiku in Japan, where they are called namako (海鼠), written with characters that can be translated as "sea mice" (an example of gikun). In English translations of these haiku, they are usually called "sea slugs". According to the Oxford English Dictionary, the English term "sea slug" was originally applied to holothurians during the 18th century. The term is now applied to several groups of sea snails, marine gastropod mollusks that have no shell or only a very reduced shell, including the nudibranchs. Almost 1,000 Japanese holothurian haiku translated into English appear in the book Rise, Ye Sea Slugs! by Robin D. Gill.
Biology and health sciences
Echinoderms
null
240717
https://en.wikipedia.org/wiki/Tangerine
Tangerine
The tangerine is an orange-colored citrus fruit that is considered either a variety of Citrus reticulata, the mandarin orange; a closely related species, under the name Citrus tangerina; or a hybrid (Citrus × tangerina) of mandarin orange varieties, with some pomelo contribution. Etymology The word "tangerine" was originally an adjective meaning "of or pertaining to, or native of Tangier, a seaport in Morocco, on the Strait of Gibraltar", and "a native of Tangier". The name was first used for fruit coming from Tangier, Morocco, described as a mandarin variety. The OED cites this usage from Addison's The Tatler in 1710, with similar uses from the 1800s. The adjective was applied to the fruit, once known scientifically as "Citrus nobilis var. tangeriana", which grew in the region of Tangier. This usage appears in the 1800s. Taxonomy Under the Tanaka classification system, Citrus tangerina is considered a separate species. Under the Swingle system, tangerines are considered a group of mandarin (C. reticulata) varieties; some differ only in disease resistance. The term is also currently applied to any reddish-orange mandarin (and, in some jurisdictions, mandarin-like hybrids, including some tangors). Description Tangerines are smaller and less rounded than oranges. They taste less sour, as well as sweeter and stronger, than oranges do. A ripe tangerine is firm to slightly soft, pebbly-skinned with no deep grooves, and orange in color. The peel is thin, with little bitter white mesocarp. All of these traits are shared by mandarins generally. The peak tangerine season lasts from autumn to spring. Tangerines are most commonly peeled and eaten by hand. The fresh fruit is also used in salads, desserts and main dishes. The peel is used fresh or dried as a spice or zest for baking and drinks. Fresh tangerine juice and frozen juice concentrate are commonly available in the United States.
Production In 2021, world production of tangerines (including mandarins and clementines) was 42 million tonnes, led by China with 60% of the total (table). Nomenclature and varieties Tangerines were first grown and cultivated as a distinct crop in the Americas by a Major Atway in Palatka, Florida. Atway was said to have imported them from Morocco (more specifically its third-largest city, the port of Tangier), which was the origin of the name. Major Atway sold his groves to N. H. Moragne in 1843, giving the Moragne tangerine the other part of its name. The Moragne tangerine produced a seedling which became one of the oldest and most popular American varieties, the Dancy tangerine (zipper-skin tangerine, kid-glove orange). Genetic analysis has shown the parents of the Dancy to have been two mandarin orange hybrids each with a small pomelo contribution, a Ponkan mandarin orange and a second unidentified mandarin. The Dancy is no longer widely commercially grown; it is too delicate to handle and ship well, it is susceptible to Alternaria fungus, and it bears more heavily in alternate years. Dancys are still grown for personal consumption, and many hybrids of the Dancy are grown commercially. Until the 1970s, the Dancy was the most widely grown tangerine in the United States; the popularity of the fruit led to the term "tangerine" being broadly applied as a marketing name. Florida classifies tangerine-like hybrid fruits as tangerines for the purposes of sale and regulation; this classification is widely used but regarded as technically inaccurate in the industry. Among the most important tangerine hybrids of Florida are murcotts (a late-fruiting type of tangor marketed as "honey tangerine") and Sunbursts (an early-fruiting complex tangerine-orange-grapefruit hybrid). The fallglo, also a three-way hybrid ( tangerine, orange and grapefruit), is also grown. Nutrition Tangerines contain 85% water, 13% carbohydrates, and negligible amounts of fat and protein (table). 
Among micronutrients, only vitamin C is in significant content (30% of the Daily Value) in a reference serving, with all other micronutrients in low amounts.
Biology and health sciences
Citrus fruits
Plants
240808
https://en.wikipedia.org/wiki/Eastern%20copperhead
Eastern copperhead
The eastern copperhead (Agkistrodon contortrix), also known simply as the copperhead, is a species of venomous snake, a pit viper, endemic to eastern North America; it is a member of the subfamily Crotalinae in the family Viperidae. The eastern copperhead has distinctive, dark brown, hourglass-shaped markings, overlaid on a light reddish-brown or brown/gray background. The body type is heavy, rather than slender. Neonates are born with green or yellow tail tips, which progress to a darker brown or black within one year. Adults grow to a typical length (including tail) of . In most of North America, the eastern copperhead favors deciduous forest and mixed woodlands. It may occupy rock outcroppings and ledges, but is also found in low-lying, swampy regions. During the winter, it hibernates in dens or limestone crevices, often together with timber rattlesnakes and black rat snakes. The eastern copperhead is known to feed on a wide variety of prey, including invertebrates (primarily arthropods) and vertebrates. Like most pit vipers, the eastern copperhead is generally an ambush predator; it takes up a promising position and waits for suitable prey to arrive. As a common species within its range, it may be encountered by humans. Unlike other viperids, it often "freezes" instead of slithering away, relying on its excellent camouflage. Bites often occur when people unknowingly step on or near it. Copperhead bites account for half of the treated snake bites in the United States. Five subspecies have been recognized in the past, but recent genetic analysis has yielded new species information. Etymology Its generic name is derived from the Greek words ankistron "hook, fishhook" and odon, variant of odous "tooth".
The trivial name, or specific epithet, comes from the Latin contortus (twisted, intricate, complex), which is usually interpreted to reference the distorted pattern of darker bands across the snake's back, which are broad at the lateral base, but "pinched" into narrow hourglass shapes in the middle at the vertebral area. Description Adults grow to a typical length (including tail) of . Some may exceed , although that is exceptional for this species. Males do not typically exceed and weigh from , with a mean of roughly . Females do not typically exceed , and have a mean body mass of . The maximum length reported for this species is for A. c. mokasen (Ditmars, 1931). Brimley (1944) mentions a specimen of A. c. mokasen from Chapel Hill, North Carolina, that was "four feet, six inches" (137.2 cm), but this may have been an approximation. The maximum length for A. c. contortrix is (Conant, 1958). The body is relatively stout and the head is broad and distinct from the neck. Because the snout slopes down and back, it appears less blunt than that of the cottonmouth, A. piscivorus. Consequently, the top of the head extends further forward than the mouth. The scalation includes 21–25 (usually 23) rows of dorsal scales at midbody, 138–157 ventral scales in both sexes, and 38–62 and 37–57 subcaudal scales in males and females, respectively. The subcaudals are usually single, but the percentage thereof decreases clinally from the northeast, where about 80% are undivided, to the southwest of the geographic range, where as few as 50% may be undivided. On the head are usually 9 large symmetrical plates, 6–10 (usually 8) supralabial scales, and 8–13 (usually 10) sublabial scales. The color pattern consists of a pale tan to pinkish-tan ground color that becomes darker towards the foreline, overlaid with a series of 10–18 (average 13.4) crossbands. Characteristically, both the ground color and crossband pattern are pale in A. c. contortrix.
These crossbands are light tan to pinkish-tan to pale brown in the center, but darker towards the edges. They are about two scales wide or less at the midline of the back, but expand to a width of 6–10 scales on the sides of the body. They do not extend down to the ventral scales. Often, the crossbands are divided at the midline and alternate on either side of the body, with some individuals even having more half bands than complete ones. A series of dark brown spots is also present on the flanks, next to the belly, and are largest and darkest in the spaces between the crossbands. The belly is the same color as the ground color, but may be a little whitish in part. At the base of the tail are one to three (usually two) brown crossbands followed by a gray area. In juveniles, the pattern on the tail is more distinct: 7–9 crossbands are visible, while the tip is yellow. On the head, the crown is usually unmarked, except for a pair of small dark spots, one near the midline of each parietal scale. A faint postocular stripe is also present; diffuse above and bordered below by a narrow brown edge. Several aberrant color patterns for A. c. contortrix, or populations that intergrade with it, have also been reported. In a specimen described by Livezey (1949) from Walker County, Texas, 11 of 17 crossbands were not joined middorsally, while on one side, three of the crossbands were fused together longitudinally to form a continuous, undulating band, surmounted above by a dark stripe that was 2.0–2.5 scales wide. In another specimen, from Lowndes County, Alabama, the first three crossbands were complete, followed by a dark stripe that ran down either side of the body, with points of pigment reaching up to the midline in six places, but never getting there, after which the last four crossbands on the tail were also complete. A specimen found in Terrebonne Parish, Louisiana had a similar striped pattern, with only the first and last two crossbands being normal. 
Distribution and habitat The eastern copperhead is found in North America; its range within the United States is in Alabama, Arkansas, Connecticut, Delaware, Florida, Georgia, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Maryland, Massachusetts, Mississippi, Missouri, Nebraska, New Jersey, New York, North Carolina, Ohio, Oklahoma, Pennsylvania, South Carolina, Tennessee, Texas, Virginia, and West Virginia. In Mexico, it occurs in Chihuahua and Coahuila. The type locality is "Carolina". Schmidt (1953) proposed the type locality be restricted to "Charleston, South Carolina". Unlike some other species of North American pit vipers, such as the timber rattlesnake and massasauga, the copperhead has mostly not re-established itself north of the terminal moraine after the last glacial period (the Wisconsin glaciation), though it is found in southeastern New York and southern New England, north of the Wisconsin glaciation's terminal moraine on Long Island. Eastern copperheads are habitat generalists, species able to survive in a range of habitats, both fragmented and unfragmented. Within its range, the species occupies a variety of different habitats. In most of North America, it favors deciduous forest and mixed woodlands. It is often associated with rock outcroppings and ledges, but is also found in low-lying, swampy regions. During the winter, it hibernates in dens or limestone crevices, often together with timber rattlesnakes and black rat snakes. In the states around the Gulf of Mexico, however, this species is also found in coniferous forest. In the Chihuahuan Desert of West Texas and northern Mexico, it occurs in riparian habitats, usually near permanent or semipermanent water and sometimes in dry arroyos (brooks). Conservation status This species is classified as least concern on the IUCN Red List of Threatened Species (v3.1, 2001). This means that relative to many other species, it is not at risk of extinction in the near future.
The population trend was stable when assessed in 2007. Their venom has potential medicinal value to humans. Behavior In the Southern United States, copperheads are nocturnal during the hot summer, but are commonly active during the day in spring and fall. Unlike other viperids, they often "freeze" instead of slithering away, and as a result, many bites occur due to people unknowingly stepping on or near them. This tendency to freeze most likely evolved because of the extreme effectiveness of their camouflage. When lying on dead leaves or red clay, they can be almost impossible to notice. They frequently stay still even when approached closely, and generally strike only if physical contact is made. Like most other New World vipers, copperheads exhibit defensive tail vibration behavior when closely approached. This species is capable of vibrating its tail in excess of 40 times per second, faster than almost any other non-rattlesnake species. Diet and feeding behavior The eastern copperhead is a diet generalist and is known to feed on a wide variety of prey, including invertebrates (primarily arthropods) and vertebrates. A generalized ontogenetic shift in the diet occurs, with juveniles feeding on higher percentages of invertebrates and ectotherms, and adults feeding on a higher percentage of vertebrate endotherms. Both juveniles and adults, though, feed on invertebrates and vertebrates opportunistically. The diet is also known to vary among geographic populations. Studies conducted at various locations within the range of the eastern copperhead (A. contortrix), including Tennessee, Kentucky, Kansas, and Texas, identified consistently significant prey items, including cicadas (Tibicen), caterpillars (Lepidoptera), lizards (Sceloporus and Scincella), voles (Microtus), and mice (Peromyscus).
Accounts of finding large numbers of copperheads in bushes, vines, and trees seeking newly emerged cicadas, some as high as 40 feet above ground, have been reported from Texas by various herpetologists. Other items documented in the diet include various invertebrates, e.g. millipedes (Diplopoda), spiders (Arachnida), beetles (Coleoptera), dragonflies (Odonata), grasshoppers (Orthoptera), and mantids (Mantidae), as well as numerous species of vertebrates, including salamanders, frogs, lizards, snakes, small turtles, small birds, young opossums, squirrels, chipmunks, rabbits, bats, shrews, moles, rats, and mice. Like most pit vipers, the eastern copperhead is generally an ambush predator; it takes up a promising position and waits for suitable prey to arrive. One exception to ambush foraging occurs when copperheads feed on insects such as caterpillars and freshly molted cicadas. When hunting insects, copperheads actively pursue their prey. They possess facial pit organs, which form a complex infrared-imaging system that allows accurate and precise strikes on potential prey. Juveniles use a brightly colored tail to attract frogs and perhaps lizards, a behavior termed caudal luring. Sight, odor, and heat detection are used in locating prey, although after the prey has been envenomated, odor and taste become the primary means of tracking. Smaller prey items and birds are often seized and held in the mouth until dead, while larger prey items are typically bitten, released, and then tracked until dead. Copperheads occasionally feed on carrion. Gravid females typically fast, although some individuals occasionally take small volumes of food. An individual may eat up to twice its body mass in a year. One study found an individual that ate eight times during an annual activity period, totaling 1.25 times its body mass. Predators of the eastern copperhead are not well known, but may include owls, hawks, opossums, bullfrogs, and other snakes.
They use anti-predatory behaviors to discourage predators, including moving away or fleeing, musking, tail vibrating, mouth gaping, and curling up into a camouflaged pile. Reproduction Eastern copperheads breed in late summer, but not every year; sometimes, females produce young for several years running, then do not breed at all for a time. Mating is sometimes preceded by male combat. Females give birth to live young, each of which is about in total length. The typical litter size is four to seven, but as few as one, or as many as 20, may be seen. Females are capable of storing sperm for up to a year. Their size apart, the young are similar to the adults, but lighter in color, and with a yellowish-green-marked tip to the tail, which is used to lure lizards and frogs. A. contortrix males have longer tongue tine lengths than females during the breeding season, which may aid in chemoreception as males search for females. Facultative parthenogenesis Parthenogenesis is a natural form of reproduction in which growth and development of embryos occur without fertilization. A. contortrix can reproduce by facultative parthenogenesis, that is, they are capable of switching from a sexual mode of reproduction to an asexual mode. The type of parthenogenesis that likely occurs is automixis with terminal fusion, a process in which two terminal products from the same meiosis fuse to form a diploid zygote. This process leads to genome-wide homozygosity, expression of deleterious recessive alleles, and often to developmental failure (inbreeding depression). Both captive-born and wild-born A. contortrix snakes appear to be capable of this form of parthenogenesis. Venom Although venomous, eastern copperheads are generally not aggressive and bites are rarely fatal. Copperhead venom has an estimated lethal dose of around 100 mg, and tests on mice show its potency is among the lowest of all pit vipers, and slightly weaker than that of its close relative, the cottonmouth.
Copperheads often employ a "warning bite" when stepped on or agitated and inject a relatively small amount of venom, if any at all. "Dry bites" involving no venom are particularly common with the copperhead, though all pit vipers are capable of a dry bite. The fangs of dead pit vipers are capable of delivering venom in amounts that necessitate the use of antivenom. Bite symptoms include extreme pain, tingling, throbbing, swelling, and severe nausea. Damage can occur to muscle and bone tissue, especially when the bite occurs in the outer extremities such as the hands and feet, areas in which a large muscle mass is not available to absorb the venom. A bite from any venomous snake should be taken very seriously and immediate medical attention sought, as an allergic reaction and secondary infection are always possible. The venom of the southern copperhead has been found to hold the protein contortrostatin that halts the growth of cancer cells in mice and also stops the migration of the tumors to other sites. However, this is an animal model, and further testing is required to verify safety and efficacy in humans. The antivenom CroFab is used to treat copperhead envenomations that demonstrate localized or systemic reactions to the venom. As many copperhead bites can be dry (no envenomation), CroFab is not given in the absence of a reaction (such as swelling) due to the risk of complications of an allergic reaction to the treatment. The antivenom can cause an immune reaction called serum sickness. Pain management, tetanus immunization, laboratory evaluation, and medical supervision in the case of complications are additional courses of action. In 2002, an Illinois poison control center report on the availability of antivenom stated it used 1 Acp to 5 Acp depending on the symptoms and circumstances. 
Antivenom use, however, may not be necessary in the majority of cases; a study that analyzed 88 copperhead bite victims reported that all the victims survived and none required antivenom. Subspecies This species was long considered to contain the five subspecies listed below, but genetic analysis suggests that A. c. laticinctus represents its own distinct species, Agkistrodon laticinctus, while A. c. mokasen and A. c. phaeogaster are regional variants of A. c. contortrix, and A. c. pictigaster is a regional variant of A. laticinctus.
Biology and health sciences
Reptiles
null
240844
https://en.wikipedia.org/wiki/Reef
Reef
A reef is a ridge or shoal of rock, coral, or similar relatively stable material lying beneath the surface of a natural body of water. Many reefs result from natural, abiotic (non-living) processes such as deposition of sand or wave erosion planing down rock outcrops. However, reefs such as the coral reefs of tropical waters are formed by biotic (living) processes, dominated by corals and coralline algae. Artificial reefs, such as shipwrecks and other man-made underwater structures, may occur intentionally or as the result of an accident. These are sometimes designed to increase the physical complexity of featureless sand bottoms to attract a more diverse range of organisms. They provide shelter to various aquatic animals, which helps prevent extinctions. Another reason reefs are put in place is for aquaculture; fish farmers looking to improve their businesses sometimes invest in them. Reefs are often quite near to the surface, but not all definitions require this. Earth's largest coral reef system is the Great Barrier Reef in Australia, at a length of over . Etymology The word "reef" traces its origins back to the Old Norse word rif, meaning "rib" or "reef". Rif comes from the Proto-Germanic term ribją, meaning "rib". Classification Reefs may be classified in terms of their origin, geographical location, depth, and topography: for example, a tropical coral fringing reef, or a temperate rocky intertidal reef. Biotic A variety of biotic reef types exists, including oyster reefs and sponge reefs, but the most massive and widely distributed are tropical coral reefs. Although corals are major contributors to the framework and bulk material comprising a coral reef, the organisms most responsible for reef growth against the constant assault from ocean waves are calcareous algae, especially, although not entirely, coralline algae. Oyster larvae prefer to settle on adult oysters and thereby develop layers building upwards.
These eventually form a fairly massive, hard, stony calcium carbonate structure on which other reef organisms like sponges and seaweeds can grow, and provide a habitat for mobile benthic organisms. These biotic reef types take on additional names depending upon how the reef lies in relation to the land, if any. Reef types include fringing reefs, barrier reefs, and atolls. A fringing reef is a reef that is attached to an island, whereas a barrier reef forms a calcareous barrier around an island, resulting in a lagoon between the shore and the reef. An atoll, by contrast, is a ring reef with no land present. The reef front, facing the ocean, is a high-energy locale, whereas the internal lagoon is a lower-energy environment with fine-grained sediments. Mounds Both mounds and reefs are considered to be varieties of organosedimentary buildups: sedimentary features built by the interaction of organisms and their environment, which have synoptic relief and whose biotic composition differs from that found on and beneath the surrounding sea floor. However, reefs are held up by a macroscopic skeletal framework, as is seen on coral reefs. Corals and calcareous algae grow on top of one another, forming a three-dimensional framework that is modified in various ways by other organisms and inorganic processes. Conversely, mounds lack a macroscopic skeletal framework. Instead, they are built by microorganisms or by organisms that also lack a skeletal framework. A microbial mound might be built exclusively or primarily by cyanobacteria. Examples of biostromes formed by cyanobacteria occur in the Great Salt Lake in Utah, United States, and in Shark Bay on the coast of Western Australia. Cyanobacteria do not have skeletons, and individual organisms are microscopic. However, they can encourage the precipitation or accumulation of calcium carbonate to produce compositionally distinct sediment bodies that have relief on the seafloor.
Cyanobacterial mounds were most abundant before the evolution of shelly macroscopic organisms, but they still exist today. Stromatolites, for instance, are microbial mounds with a laminated internal structure. Whereas, bryozoans and crinoids, common contributors to marine sediments during the Mississippian period, produce a different kind of mound. Although bryozoans are small and crinoid skeletons disintegrate, bryozoan and crinoid meadows can persist over time and produce compositionally distinct bodies of sediment with depositional relief. The Proterozoic Belt Supergroup contains evidence of possible microbial mat and dome structures similar to stromatolite and chicken reef complexes. Geologic Rocky reefs are underwater outcrops of rock projecting above the adjacent unconsolidated surface with varying relief. They can be found in depth ranges from intertidal to deep water and provide a substrate for a large range of sessile benthic organisms, and shelter for a large range of mobile organisms. They are often located in sub-tropical, temperate, and sub-polar latitudes. Structures Ancient reefs buried within stratigraphic sections are of considerable interest to geologists because they provide paleo-environmental information about the location in Earth's history. In addition, reef structures within a sequence of sedimentary rocks provide a discontinuity which may serve as a trap or conduit for fossil fuels or mineralizing fluids to form petroleum or ore deposits. Corals, including some major extinct groups Rugosa and Tabulata, have been important reef builders through much of the Phanerozoic since the Ordovician Period. However, other organism groups, such as calcifying algae, especially members of the red algae (Rhodophyta), and molluscs (especially the rudist bivalves during the Cretaceous Period) have created massive structures at various times. 
During the Cambrian Period, the conical or tubular skeletons of Archaeocyatha, an extinct group of uncertain affinities (possibly sponges), built reefs. Other groups, such as the Bryozoa, have been important interstitial organisms, living between the framework builders. The corals which build reefs today, the Scleractinia, arose after the Permian–Triassic extinction event that wiped out the earlier rugose corals (as well as many other groups). They became increasingly important reef builders throughout the Mesozoic Era. They may have arisen from a rugose coral ancestor. Rugose corals built their skeletons of calcite and have a different symmetry from that of the scleractinian corals, whose skeletons are aragonite. However, there are some unusual examples of well-preserved aragonitic rugose corals in the Late Permian. In addition, calcite has been reported in the initial post-larval calcification in a few scleractinian corals. Nevertheless, scleractinian corals (which arose in the middle Triassic) may have arisen from a non-calcifying ancestor independent of the rugosan corals (which disappeared in the late Permian). Artificial An artificial reef is a human-created underwater structure, typically built to promote marine life in areas with a generally featureless bottom, to control erosion, block ship passage, block the use of trawling nets, or improve surfing. Many reefs are built using objects that were built for other purposes, for example by sinking oil rigs (through the Rigs-to-Reefs program), scuttling ships, or by deploying rubble or construction debris. Other artificial reefs are purpose built (e.g. the reef balls) from PVC or concrete. Shipwrecks become artificial reefs on the seafloor. 
Regardless of construction method, artificial reefs generally provide stable hard surfaces where algae and invertebrates such as barnacles, corals, and oysters attach; the accumulation of attached marine life in turn provides intricate structure and food for assemblages of fish.
Physical sciences
Oceanic and coastal landforms
Earth science
240849
https://en.wikipedia.org/wiki/Islet
Islet
An islet is generally a small island. Definitions vary and are not precise, but some suggest that an islet is a very small, often unnamed, island with little or no vegetation that cannot support human habitation. It may be made of rock, sand and/or hard coral; may be permanent or tidal (i.e. a surfaced reef or seamount); and may exist in the sea, lakes, rivers or any other sizeable bodies of water. Definition As suggested by its origin islette, an Old French diminutive of "isle", use of the term implies small size, but little attention is given to drawing an upper limit on its applicability. The World Landforms website says, "An islet landform is generally considered to be a rock or small island that has little vegetation and cannot sustain human habitation", and further that size may vary from a few square feet to several square miles, with no specific rule pertaining to size. Other terms Ait (/eɪt/, like eight) or eyot (/aɪ(ə)t, eɪt/), a small island. It is especially used to refer to river islands found on the River Thames and its tributaries in England. Cay or key, an islet formed by the accumulation of fine sand deposits atop a reef, especially in the Caribbean and West Atlantic. Rum Cay in the Bahamas and the Florida Keys off Florida are examples of islets. The French suffix -hou, from the Scandinavian -holm, is used for the names of some islets in the Channel Islands, such as Écréhous, Burhou, Lihou and Les Houmets, and off Normandy, such as Tatihou. Inch, a term used especially in Scotland, from the Gaelic innis, which originally meant island, but has come to refer to smaller islands, such as the islet of Inch, off St Mary's Isle Priory, Inch Kenneth, Inchkeith, Keith Inch (no longer an island) and Inchcailloch. Motu, a reef islet formed by broken coral and sand, surrounding an atoll, especially in Polynesia, such as Motu One, Motu Nao and Motu Paahi. River island, an islet within the current of a river, such as the Île de la Cité in Paris.
Rock, in the sense of a type of islet, is an uninhabited landform composed of exposed rocks, lying offshore and having at most minimal vegetation, such as Albino Rock in the Palm Island group off Queensland, Australia. Sandbar or shoal, an exposed sandbar. Sea stack, a thin, vertical landform jutting out of a body of water. Skerry, a small rocky island, usually defined to be too small for habitation, especially in Ireland. Subsidiary islet, a more technical term, applied to small land features isolated by water, lying off the shore of a larger island. Similarly, any emergent land in an atoll is also called an islet. Tidal island, a small island (not always an islet) which lies closely off the coast of a mainland or a much larger island, being connected to it (thus becoming a peninsula/promontory) at low tide and isolated by a channel at high tide. In international law Whether an islet is considered a rock or not can have significant economic consequences under Article 121 of the UN Convention on the Law of the Sea, which stipulates that "Rocks which cannot sustain human habitation or economic life of their own shall have no exclusive economic zone or continental shelf." One long-term dispute over the status of such an islet was that of Snake Island (Black Sea). The International Court of Justice jurisprudence, however, sometimes ignores islets, regardless of inhabitation status, in deciding territorial disputes; it did so in 2009 in adjudicating the Romania–Ukraine dispute, and previously in the dispute between Libya and Malta involving the islet of Filfla. List of islets There are thousands of islets on Earth: approximately 24,000 islands and islets in the Stockholm archipelago alone. The following is a list of example islets from around the world.
Águila Islet, the southernmost point of The Americas Aplin Islet (Queensland) Apia Auster Lake Islet Ball's Pyramid, South Pacific Bay Islet or See Chau, Hong Kong Benggala Island, Indonesia Bikirrin, in Majuro, Marshall Islands Black Rock, South Atlantic Boundary Islet, Australia Bogskär, Finland Briggs Islet, southeastern Australia Bushy Islet (Queensland) Capitancillo Islet, in Bogo City, Cebu, Philippines Chão, in the Madeira Islands, Portugal Cholmondeley Islet (Queensland) Clubes Island, Brasília, Brazil Columbretes Islands, Spain Cone Islet, southeastern Australia Douglas Islet (Queensland) Dry Tortugas, Florida Keys, USA Dugay Islet, southeastern Australia Edwards Islet, southeastern Australia Enekalamur in Majuro, Marshall Islands Enemanit in Majuro, Marshall Islands Fairway Rock, Bering Strait Fastnet Rock, Ireland Filfla, southern Malta Formigas, in the Azores islands Gáshólmur, Faroe Islands Granite Island (South Australia), Victor Harbor, South Australia. Galatasaray Islet, Istanbul, Turkey Halfway Islet, Queensland, Australia Herald Island, Arctic Ocean Île Vierge, France Isles of Scilly, United Kingdom Kid Island, Lake of the Ozarks, Missouri United States Islets of Caroline Island, in Kiribati Islets of Mauritius Isla de Alborán, Spain, Western Mediterranean Jardine Islet (Queensland) Jethou, Bailiwick of Guernsey Keelung Islet, off the northern shore of Taiwan Klein Bonaire, Netherlands Kolbeinsey, Iceland Liancourt Rocks, South Korea Lihou, Bailiwick of Guernsey Magra Islet (Queensland) Mañagaha, Saipan Martin Islet (New South Wales) Mid Woody Islet, southeastern Australia Milman Islet (Queensland) Monchique Islet, Europe's westernmost point, in the Azores, Portugal Na Mokulua, Oahu, Hawaii, United States Noorderhaaks, off the coast of the Netherlands Oodaaq, Greenland Pula Ulor, Beserah, Kuantan District, Pahang, Malaysia Parece Vela, West Pacific Perejil Island, Strait of Gibraltar Penguin Islet (Tasmania) Pigeon Island, Sri Lanka Pokonji Dol, 
Croatia Velika Sestrica, Croatia Rockall, North Atlantic Saint Peter and Saint Paul rocks, equatorial Atlantic Salas y Gómez, Northeast from Easter Island San Juan Islet, Puerto Rico Saunders Islet (Queensland) Seacrow Islet, southeastern Australia Shag Rocks, South Atlantic Silver Islet, Ontario Sinclair Islet (Queensland) Skull Islet, in British Columbia, Canada Star Keys/Motuhope, New Zealand Saltholm, small Islet in Oresund west of Copenhagen Sue Islet (Queensland) Sumbiarholmur, Faroe Islands Sunday Islet (Queensland) Taprobane Island, Sri Lanka Thomson Islet (Queensland) Tindhólmur, Faroe Islands Vilkitsky Island, Arctic Ocean Wallace Islet (Queensland) Wachusett Reef, Ernest Legouve Reef, and Maria Theresa Reef, South Pacific Ocean Westward Islet, in the Pitcairn Islands Ynys Lawd, Wales
Physical sciences
Oceanic and coastal landforms
Earth science
240920
https://en.wikipedia.org/wiki/Blue%20shark
Blue shark
The blue shark (Prionace glauca), also known as the great blue shark, is a species of requiem shark in the family Carcharhinidae and the only member of its genus, which inhabits deep waters in the world's temperate and tropical oceans. Averaging around and preferring cooler waters, the blue shark migrates long distances, such as from New England to South America. It is listed as Near Threatened by the IUCN. Although generally lethargic, they can move very quickly. Blue sharks are viviparous and are noted for large litters of 25 to over 100 pups. They feed primarily on small fish and squid, although they can take larger prey. Some of the blue shark's predators include the killer whale and larger sharks such as tiger sharks and the great white shark. Maximum lifespan is still unknown, but it is believed that they can live up to 20 years. They are one of the most abundant pelagic sharks, with large numbers caught by fisheries as bycatch on longlines and in nets. Anatomy and appearance Blue sharks are light-bodied with long pectoral fins. Like many other sharks, blue sharks are countershaded: the top of the body is deep blue, lighter on the sides, and the underside is white. The male blue shark commonly grows to at maturity, whereas the larger females commonly grow to at maturity. Large specimens can grow to long. Occasionally, an outsized blue shark is reported, with one widely printed claim of a length of , but no shark even approaching this size has been scientifically documented. The blue shark is fairly elongated and slender in build and typically weighs from in males and from in large females. Occasionally, a female in excess of will weigh over . The heaviest reported weight for the species was . However, anecdotal claims exist of the species exceptionally reaching in weight, though these are not verified. The blue shark is ectothermic and has a unique sense of smell. 
Sensory The five senses that blue sharks share with other members of the family Carcharhinidae are vision, hearing, the lateral line, chemoreception, and electroreception. These senses allow them to perceive and react to a variety of biotic and abiotic stimuli in their immediate environment and across a range of spatial scales. The well-developed eyes of blue sharks exhibit interspecific variations in structure, characteristic of adaptations for vision in a variety of light environments, from the brightly lit surface waters to the darkness of the deep sea. The lateral position of the eyes in the head allows a cyclopean visual field of 360° in the vertical plane and between 308° and 338° in the horizontal plane. The morphology of the inner ears of blue sharks is similar to that of other gnathostomes. It consists of a membranous labyrinth made up of three fluid-filled semicircular canals arranged orthogonally, as well as three otolithic organs: the sacculus, utriculus, and lagena. These sharks are most sensitive to frequencies below roughly 100 Hz, but they can hear sounds up to roughly 1000 Hz. The blue shark's lateral line is a mechanosensory structure that can detect particle motion. As such, it can react to mechanical disturbances caused by hydrodynamic stimuli that are not auditory. It is used to determine the direction and speed of water currents as well as the vibrations produced by prey, predators, and similar species moving through the water. Blue sharks' chemosensory system is made up of gustation (taste) and olfaction (smell), together with a common chemical sense. Functions such as intraspecific social interactions, communication, reproduction, and food detection are all linked to smell. Gustation is mainly related to feeding and involves using taste buds to process food and assess its palatability through direct contact, which usually results in a decision to swallow or reject it. 
Blue sharks can detect weak electrical potentials generated by inanimate objects and other animals through specialized receptors. These sharks use their electrical sense to locate and capture prey, as well as to avoid predators. As they move through the Earth's magnetic field, they may also sense the weak electrical fields produced by nearby water currents or their own bodies, which can aid in navigation and orientation. The electroreceptors, known as the ampullae of Lorenzini, are of the ampullary type and develop from the lateral line placodes. Each ampulla consists of a pore on the surface of the skin, connected to a narrow dermal chamber called an ampullary bulb by a small canal with a diameter of about one millimetre. These sharks are most responsive to fluctuating electrical fields between 0.1 and 10 Hz, with peak sensitivity around 1 Hz. Although the receptors primarily detect low-frequency alternating currents (AC), the sharks are particularly attracted to steady direct current (DC) electric fields. However, for the shark to detect a continuous DC voltage, it must move relative to the voltage source. Off the northeastern coast of the United States, blue sharks have been found to maintain straight courses for hundreds of kilometres over many days. The only continuously available cue that could be used to accomplish this appears to be the geomagnetic field. Reproduction Maturity in Male and Female Blue Sharks Maturity is assessed by observing sexual products and the developmental stage of reproductive organs. Five reproductive variables are examined for their relationship to body growth, including the presence or absence of semen in the ductus deferens ampullae, the length and wet weight of the testis, and the size and rigidity of the claspers. To assess maturity, the clasper's inner length and degree of calcification are recorded: mature males have fully calcified claspers that extend beyond the inner margin of their pelvic fins. 
Immature males have claspers that are either shorter or longer than the inner border but not fully calcified. Mating Behaviour Male blue sharks appear to court primarily non-pregnant mature females, since mating marks, appearing as several tiny incisions arranged in a semicircle on the dorsal fins, are common on such females. These marks are the result of non-feeding bites during courtship and mating. Female blue sharks have evolved skin three times thicker than that of males to withstand the rigors of mating. Female blue sharks are classified as immature, subadult, or mature based on the size and development of their ovary, oviducal gland, and uterus. Mature females have enlarged uterine walls, a fully differentiated oviducal gland, visible and enlarged ovarian follicles, and a right ovary separated and developed from the epigonal organ. Immature females have an undifferentiated oviducal gland and uterus, a small right ovary lodged within the epigonal organ, and no visible follicles. Reproductive Strategy and Lifecycle Blue sharks are viviparous, with a yolk-sac placenta, giving birth to 4 to 135 pups per litter after a gestation period of 9 to 12 months. After birth, young sharks are left in specific nursery areas outside adult regions to develop independently. These nurseries offer a safe environment for newborns during their early months. Females mature at five to six years of age, while males mature at four to five years. Research suggests that females may exhibit natal and reproductive philopatry, meaning they return to specific sites to give birth. Ecology Range and habitat The blue shark is an oceanic and epipelagic shark found worldwide in deep temperate and tropical waters from the surface to about . In temperate seas it may approach shore, where it can be observed by divers, while in tropical waters it inhabits greater depths. It lives as far north as Norway and as far south as Chile. Blue sharks are found off the coasts of every continent except Antarctica. 
Its greatest Pacific concentrations occur between 20° and 50° North, but with strong seasonal fluctuations. In the tropics, it spreads evenly between 20° N and 20° S. It prefers water temperatures between , but can be seen in water ranging from . Records from the Atlantic show a regular clockwise migration within the prevailing currents. Migration Blue sharks are a highly migratory species, travelling vast distances across temperate and tropical waters. Their migrations are influenced by seasonal changes, prey availability, and the need for optimal environmental conditions. These sharks move both horizontally and vertically. Their swimming behaviour varies depending on the time of day. During the day, blue sharks move at a mean rate of 1.2 kilometres per hour, with a mean swimming speed of 1.3 kilometres per hour. At night, their activity increases, with a mean movement rate of 1.8 kilometres per hour and a swimming speed of 2.8 kilometres per hour. These increases in speed often occur during brief dives, particularly at night, when sharks exhibit more vertical movement, ranging from shallow waters to depths exceeding 100 metres. Blue sharks are most active at night, particularly in the early evening, with their lowest activity occurring during the early morning hours. During the day, they tend to remain around a depth of 30 metres, while at night they venture slightly deeper, around 40 metres. Most of their time is spent within a depth range of 18 to 42 metres, although they sometimes dive deeper. Their behaviour is also influenced by water temperature: they prefer a narrow range of 14 to 16 °C, though they are found in waters between 8.5 and 17.5 °C. Blue sharks often swim near the surface in cooler months, but this behaviour decreases during the coldest and warmest months, likely due to surface temperature changes. 
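Taken at face value, the mean movement rates above imply a substantial daily displacement. A back-of-envelope sketch (the even 12-hour day/night split is an assumption made only for illustration):

```python
# Rough estimate of daily horizontal distance from the mean movement rates
# quoted above (1.2 km/h by day, 1.8 km/h by night).
# The 12 h day / 12 h night split is a simplifying assumption.
day_rate, night_rate = 1.2, 1.8   # km/h
hours_day = hours_night = 12

daily_km = day_rate * hours_day + night_rate * hours_night
print(daily_km)        # 36.0 km per day
print(daily_km * 365)  # ~13,000 km per year, consistent with long migrations
```

Even this crude estimate is consistent with migrations on the scale of the New England to South America journeys mentioned earlier in the article.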
Feeding Squid are the most important prey for blue sharks, but their diet includes other invertebrates, such as cuttlefish, blanket octopuses, and pelagic octopuses, as well as lobster, shrimp, crab, a large number of bony fishes (such as long-snouted lancetfish, snake mackerel and oilfish), small sharks, mammalian carrion and occasional sea birds (such as great shearwaters). Whale and porpoise blubber and meat have been retrieved from the stomachs of captured specimens, and they are known to take cod from trawl nets. Sharks have been observed and documented working together as a "pack" to herd prey into a concentrated group from which they can easily feed. Blue sharks may eat tuna, which have been observed taking advantage of the herding behaviour to opportunistically feed on escaping prey. The observed herding behaviour was undisturbed by different species of shark in the vicinity that normally would pursue the common prey. The blue shark can swim at high speeds, allowing it to catch up with prey easily. Its triangular teeth allow it to easily catch hold of slippery prey. Predators Younger and smaller individuals may be eaten by larger sharks, such as the great white shark and the tiger shark. Orcas have been reported to hunt blue sharks. This shark may host several species of parasites. For example, the blue shark is a definitive host of the tetraphyllidean tapeworm Pelichnibothrium speciosum (Prionacestus bipartitus). It becomes infected by eating intermediate hosts, probably opah (Lampris guttatus) and/or longnose lancetfish (Alepisaurus ferox). California sea lions (Zalophus californianus), northern elephant seals (Mirounga angustirostris) and Cape fur seals (Arctocephalus pusillus pusillus) have been observed to feed on blue sharks. Although blue sharks have excellent binocular vision and the capacity to see ahead when pursuing prey, research indicates that they are not always adept at spotting predators approaching from behind. 
According to an experiment, a large predator's best attack angle when pursuing a blue shark is probably from the caudal direction. This puts the predator in a position to strike the caudal fin of the shark and immobilize it. Blue sharks are not totally helpless against a tail-on approach, though, as they can adjust their escape performance based on the reaction distance. Rather than reacting at a greater distance and trying to swim away at a high sustained speed, blue sharks likely concentrate their energy on outmaneuvering predators with sharp turns and brief bursts of acceleration. Relationship to humans Blue shark meat is edible, but not widely sought after; it is consumed fresh, dried, smoked and salted, and diverted for fishmeal. There is a report of high concentrations of heavy metals (mercury and lead) in the edible flesh. The skin is used for leather, the fins for shark-fin soup and the liver for oil. Blue sharks are occasionally sought as game fish for their beauty and speed. Blue sharks rarely bite humans. From 1580 up until 2013, the blue shark was implicated in only 13 biting incidents, four of which ended fatally. In captivity Blue sharks, like most pelagic sharks, tend to fare poorly in captivity. The first attempt at keeping blue sharks in captivity was at Sea World San Diego in 1968, and since then a small number of other public aquaria in North America, Europe and Asia have attempted it. Most of these sharks survived in captivity for about three months or less, and some were released back to the wild afterwards. Notable captivity durations include 246 and 224 days for two individuals at Tokyo Sea Life Park, 210 days for an individual at the New Jersey Aquarium, 194 days for one at the Lisbon Oceanarium, and 252 and 873 days for two individuals at the Sendai Umino-Mori Aquarium, the last being the record. The blue shark that survived the longest in captivity was captured in Shizugawa Bay on July 27, 2018, and taken to the Sendai Umino-Mori Aquarium. 
The total length at the time of delivery was , the estimated weight was , and the age was about 1 year. It then lived for 873 days, but died of causes including disordered swimming brought on by dehydration. At the time of death, the total length was and the weight was . This growth rate is said to be the same as that of wild blue sharks. Blue sharks are relatively easy to feed and keep in captivity, and the three primary issues appear to be transport, predation by larger sharks, and trouble avoiding smooth surfaces in tanks. Small blue sharks, up to long, are relatively easy to transport to aquaria, but it is much more complicated to transport larger individuals. However, this typically small size on introduction to aquaria means that they are highly vulnerable to predation by other sharks that are often kept, such as bull, grey reef, sandbar and sand tiger sharks. For example, several blue sharks kept at Sea World San Diego initially did fairly well, but were eaten when bull sharks were added to their exhibit. Attempts at keeping blue sharks in tanks of various sizes, shapes and depths have shown that they have trouble avoiding walls, aquarium windows and other smooth surfaces, eventually leading to abrasions of the fins or snout, which may result in serious infections. Keeping blue sharks therefore requires tanks that allow for relatively long, optimal swimming paths, where potential contact with smooth surfaces is kept to a minimum. It has been suggested that prominent rockwork may be easier for blue sharks to avoid than smooth surfaces, as has been shown in captive tiger sharks. Conservation status Blue sharks make up approximately 85–90% of the total elasmobranchs caught by oceanic fisheries as bycatch. In June 2018 the New Zealand Department of Conservation classified the blue shark as "Not Threatened" with the qualifier "Secure Overseas" under the New Zealand Threat Classification System. The species is listed as Near Threatened by the IUCN.
Biology and health sciences
Sharks
Animals
240935
https://en.wikipedia.org/wiki/SN1%20reaction
SN1 reaction
The unimolecular nucleophilic substitution (SN1) reaction is a substitution reaction in organic chemistry. The Hughes-Ingold symbol of the mechanism expresses two properties: "SN" stands for "nucleophilic substitution", and the "1" says that the rate-determining step is unimolecular. Thus, the rate equation is often shown as having first-order dependence on the substrate and zero-order dependence on the nucleophile. This relationship holds for situations where the amount of nucleophile is much greater than that of the intermediate. Otherwise, the rate equation may be more accurately described using steady-state kinetics. The reaction involves a carbocation intermediate and is commonly seen in reactions of secondary or tertiary alkyl halides under strongly basic conditions or, under strongly acidic conditions, with secondary or tertiary alcohols. With primary and secondary alkyl halides, the alternative SN2 reaction occurs. In inorganic chemistry, the SN1 reaction is often known as dissociative substitution. This dissociation pathway is well described by the cis effect. The reaction mechanism was first introduced by Christopher Ingold et al. in 1940. This reaction does not depend much on the strength of the nucleophile, unlike the SN2 mechanism. The mechanism involves two steps. The first step is the ionization of the alkyl halide in the presence of aqueous acetone or ethyl alcohol. This step provides a carbocation as an intermediate. Because the carbocation formed in the first step is planar, attack of the nucleophile (second step) may occur from either side, giving a racemic product, but in practice complete racemization does not take place. This is because the nucleophilic species attacks the carbocation even before the departing halide ion has moved sufficiently far away from the carbocation. 
The negatively charged halide ion shields the carbocation from attack on the front side, so backside attack, which leads to inversion of configuration, is preferred. Thus the actual product consists of a mixture of enantiomers, but the enantiomers with inverted configuration predominate, and complete racemization does not occur. Mechanism An example of a reaction taking place with an SN1 reaction mechanism is the hydrolysis of tert-butyl bromide forming tert-butanol: This SN1 reaction takes place in three steps: Formation of a tert-butyl carbocation by separation of a leaving group (a bromide anion) from the carbon atom: this step is slow. Nucleophilic attack: the carbocation reacts with the nucleophile. If the nucleophile is a neutral molecule (i.e. a solvent) a third step is required to complete the reaction. When the solvent is water, the intermediate is an oxonium ion. This reaction step is fast. Deprotonation: removal of a proton on the protonated nucleophile by water acting as a base, forming the alcohol and a hydronium ion. This reaction step is fast. Rate law Although the rate law of the SN1 reaction is often regarded as being first order in alkyl halide and zero order in nucleophile, this is a simplification that holds true only under certain conditions. While it, too, is an approximation, the rate law derived from the steady state approximation (SSA) provides more insight into the kinetic behavior of the SN1 reaction. Consider the following reaction scheme for the mechanism shown above:

$$\mathrm{(CH_3)_3CBr} \;\overset{k_1}{\underset{k_{-1}}{\rightleftharpoons}}\; \mathrm{(CH_3)_3C^+} + \mathrm{Br^-} \qquad \mathrm{(CH_3)_3C^+} + \mathrm{H_2O} \;\xrightarrow{k_2}\; \text{products}$$

Though a relatively stable tertiary carbocation, the tert-butyl cation is a high-energy species that is present only at very low concentration and cannot be directly observed under normal conditions. 
Thus, the SSA can be applied to this species:

(1) Steady state assumption:
$$\frac{d[\mathrm{(CH_3)_3C^+}]}{dt} = k_1[\mathrm{(CH_3)_3CBr}] - k_{-1}[\mathrm{(CH_3)_3C^+}][\mathrm{Br^-}] - k_2[\mathrm{(CH_3)_3C^+}][\mathrm{H_2O}] = 0$$

(2) Concentration of t-butyl cation, based on the steady state assumption:
$$[\mathrm{(CH_3)_3C^+}] = \frac{k_1[\mathrm{(CH_3)_3CBr}]}{k_{-1}[\mathrm{Br^-}] + k_2[\mathrm{H_2O}]}$$

(3) Overall reaction rate, assuming a rapid final step:
$$\text{rate} = k_2[\mathrm{(CH_3)_3C^+}][\mathrm{H_2O}]$$

(4) Steady state rate law, by plugging (2) into (3):
$$\text{rate} = \frac{k_1 k_2[\mathrm{(CH_3)_3CBr}][\mathrm{H_2O}]}{k_{-1}[\mathrm{Br^-}] + k_2[\mathrm{H_2O}]}$$

Under normal synthetic conditions, the entering nucleophile is more nucleophilic than the leaving group and is present in excess. Moreover, kinetic experiments are often conducted under initial rate conditions (5 to 10% conversion) and without the addition of bromide, so $[\mathrm{Br^-}]$ is negligible. For these reasons, $k_{-1}[\mathrm{Br^-}] \ll k_2[\mathrm{H_2O}]$ often holds. Under these conditions, the SSA rate law reduces to
$$\text{rate} = k_1[\mathrm{(CH_3)_3CBr}],$$
the simple first-order rate law described in introductory textbooks. Under these conditions, the concentration of the nucleophile does not affect the rate of the reaction, and changing the nucleophile (e.g. from H2O to MeOH) does not affect the reaction rate, though the product is, of course, different. In this regime, the first step (ionization of the alkyl bromide) is slow, rate-determining, and irreversible, while the second step (nucleophilic addition) is fast and kinetically invisible. However, under certain conditions, non-first-order reaction kinetics can be observed. In particular, when a large concentration of bromide is present while the concentration of water is limited, the reverse of the first step becomes important kinetically. As the SSA rate law indicates, under these conditions there is a fractional (between zeroth and first order) dependence on [H2O], while there is a negative fractional order dependence on [Br–]. Thus, SN1 reactions are often observed to slow down when an exogenous source of the leaving group (in this case, bromide) is added to the reaction mixture. This is known as the common ion effect, and the observation of this effect is evidence for an SN1 mechanism (although the absence of a common ion effect does not rule it out). 
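The limiting behaviour of the steady-state rate law can be checked numerically. A minimal sketch, using hypothetical rate constants chosen only for illustration (not measured values for tert-butyl bromide):

```python
# Numerical check of the SN1 steady-state (SSA) rate law:
#   rate = k1*k2*[RBr][H2O] / (k_1*[Br-] + k2*[H2O])
# Rate constants are hypothetical, chosen to illustrate the limiting behaviour.
k1 = 1e-4    # ionization, s^-1
k_1 = 1e6    # recombination with Br-, M^-1 s^-1
k2 = 1e4     # capture by water, M^-1 s^-1

def ssa_rate(rbr, h2o, br):
    """Full SSA rate law for the two-step mechanism."""
    return k1 * k2 * rbr * h2o / (k_1 * br + k2 * h2o)

rbr, h2o = 0.01, 55.5   # M; water as the solvent

# Without added bromide the law collapses to the first-order form k1*[RBr]:
assert abs(ssa_rate(rbr, h2o, 0.0) - k1 * rbr) < 1e-12

# Common ion effect: exogenous bromide makes the reverse of step 1 compete
# with capture by water, so the observed rate drops.
print(ssa_rate(rbr, h2o, 0.0))  # ≈1e-6 M/s
print(ssa_rate(rbr, h2o, 1.0))  # smaller: k_1*[Br-] now competes with k2*[H2O]
```

With no bromide the rate is exactly first order in the alkyl halide; adding 1 M bromide slows it, reproducing the common ion effect described above.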
Scope The SN1 mechanism tends to dominate when the central carbon atom is surrounded by bulky groups, because such groups sterically hinder the SN2 reaction. Additionally, bulky substituents on the central carbon increase the rate of carbocation formation because of the relief of steric strain that occurs. The resultant carbocation is also stabilized by both inductive stabilization and hyperconjugation from attached alkyl groups. The Hammond–Leffler postulate suggests that this, too, will increase the rate of carbocation formation. The SN1 mechanism therefore dominates in reactions at tertiary alkyl centers. An example of a reaction proceeding in an SN1 fashion is the synthesis of 2,5-dichloro-2,5-dimethylhexane from the corresponding diol with concentrated hydrochloric acid: As the number of alpha and beta substituents adjacent to the leaving group increases, the reaction is diverted from SN2 to SN1. Stereochemistry The carbocation intermediate formed in the reaction's rate-determining step (RDS) is an sp2 hybridized carbon with trigonal planar molecular geometry. This allows two different ways for the nucleophilic attack, one on either side of the planar molecule. If neither approach is favored, these two ways occur equally, yielding a racemic mixture of enantiomers if the reaction takes place at a stereocenter. This is illustrated below in the SN1 reaction of (S)-3-chloro-3-methylhexane with an iodide ion, which yields a racemic mixture of 3-iodo-3-methylhexane: However, an excess of one stereoisomer can be observed, as the leaving group can remain in proximity to the carbocation intermediate for a short time and block nucleophilic attack. This stands in contrast to the SN2 mechanism, which is a stereospecific mechanism whose stereochemistry is always inverted, as the nucleophile comes in from the rear side of the leaving group. Side reactions Two common side reactions are elimination reactions and carbocation rearrangement. 
If the reaction is performed under warm or hot conditions (which favor an increase in entropy), E1 elimination is likely to predominate, leading to formation of an alkene. At lower temperatures, SN1 and E1 reactions are competitive, and it becomes difficult to favor one over the other. Even if the reaction is performed cold, some alkene may be formed. If an attempt is made to perform an SN1 reaction using a strongly basic nucleophile such as hydroxide or methoxide ion, the alkene will again be formed, this time via an E2 elimination. This will be especially true if the reaction is heated. Finally, if the carbocation intermediate can rearrange to a more stable carbocation, it will give a product derived from the more stable carbocation rather than the simple substitution product. Solvent effects Since the SN1 reaction involves formation of an unstable carbocation intermediate in the rate-determining step (RDS), anything that can facilitate this process will speed up the reaction. The normal solvents of choice are both polar (to stabilize ionic intermediates in general) and protic (to solvate the leaving group in particular). Typical polar protic solvents include water and alcohols, which will also act as nucleophiles, and the process is then known as solvolysis. The Y scale correlates the solvolysis reaction rate in any solvent (k) with that in a standard solvent (80% v/v ethanol/water) (k0) through $\log_{10}(k/k_0) = mY$, with m a reactant constant (m = 1 for tert-butyl chloride) and Y a solvent parameter. For example, 100% ethanol gives Y = −2.3, 50% ethanol in water Y = +1.65, and 15% ethanol in water Y = +3.2.
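Plugging the quoted Y values into the Grunwald–Winstein relation gives the expected relative solvolysis rates. A small sketch (the equation and Y values are from the text; the helper name is arbitrary):

```python
# Grunwald-Winstein relation: log10(k/k0) = m*Y, so k/k0 = 10**(m*Y).
# Y values are those quoted in the text; m = 1 for tert-butyl chloride.
def relative_rate(m, Y):
    """Rate relative to the 80% v/v ethanol/water standard solvent."""
    return 10 ** (m * Y)

m = 1.0
for solvent, Y in [("100% ethanol", -2.3),
                   ("50% ethanol/water", 1.65),
                   ("15% ethanol/water", 3.2)]:
    print(f"{solvent}: k/k0 = {relative_rate(m, Y):.3g}")
# More aqueous (more ionizing) solvents give dramatically faster solvolysis:
# 100% ethanol is roughly 200x slower than the 80% ethanol standard, while
# 15% ethanol is over a thousand times faster.
```

The steep dependence on Y reflects how strongly carbocation formation in the rate-determining step responds to the ionizing power of the solvent.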
Physical sciences
Organic reactions
Chemistry