Dataset schema: id (string, 2–8 chars), url (string, 31–117 chars), title (string, 1–71 chars), text (string, 153–118k chars), topic (string, 4 classes), section (string, 4–49 chars), sublist (string, 9 classes).
495192
https://en.wikipedia.org/wiki/Nothofagus
Nothofagus
Nothofagus, also known as the southern beeches, is a genus of 43 species of trees and shrubs native to the Southern Hemisphere in southern South America (Chile, Argentina) and east and southeast Australia, New Zealand, New Guinea, and New Caledonia. The species are ecological dominants in many temperate forests in these regions. Some species are reportedly naturalised in Germany and Great Britain. The genus has a rich fossil record of leaves, cupules, and pollen, with fossils extending into the late Cretaceous period and occurring in Australia, New Zealand, Antarctica, and South America. Description The leaves are toothed or entire, evergreen or deciduous. The fruit is a small, flattened or triangular nut, borne in cupules containing one to seven nuts. Reproduction Many individual trees are extremely old, and at one time, some populations were thought to be unable to reproduce in present-day conditions where they were growing, except by suckering (clonal reproduction), being remnant forest from a cooler time. Sexual reproduction has since been shown to be possible. Taxonomy The genus Nothofagus was first formally described in 1850 by Carl Ludwig Blume who published the description in his book Museum botanicum Lugduno-Batavum, sive, Stirpium exoticarum novarum vel minus cognitarum ex vivis aut siccis brevis expositio et descriptio. Nothofagus means "false beech", which Blume chose to indicate that Nothofagus species were different from beeches in the Northern Hemisphere. In the past, they were included in the family Fagaceae, but genetic tests revealed them to be genetically distinct, and they are now included in their own family, Nothofagaceae. Species list The following is a list of species, hybrids and varieties accepted by the Plants of the World Online as of April 2023: Nothofagus aequilateralis (Baum.-Bod.) Steenis (New Caledonia) Nothofagus alessandrii Espinosa (Central Chile) Nothofagus alpina (Poepp. & Endl.) Oerst. (Argentina South, Chile Central, Chile South) Nothofagus antarctica (G.Forst.) Oerst. (Argentina South, Chile Central, Chile South) Nothofagus balansae (Baill.) Steenis (New Caledonia) Nothofagus baumanniae (Baum.-Bod.) Steenis (New Caledonia) Nothofagus betuloides (Mirb.) Oerst. (Argentina South, Chile South) Nothofagus brassii Steenis (New Guinea) Nothofagus carrii Steenis (New Guinea) Nothofagus cliffortioides (Hook.f.) Oerst. (New Zealand North, New Zealand South) Nothofagus codonandra (Hook.f.) Oerst. (New Caledonia) Nothofagus crenata Steenis (New Guinea) Nothofagus cunninghamii (Hook.f.) Oerst. (Tasmania, Victoria) Nothofagus discoidea (Baum.-Bod.) Steenis (New Caledonia) Nothofagus dombeyi (Mirb.) Oerst. (Argentina South, Chile Central, Chile South) Nothofagus flaviramea Steenis (New Guinea) Nothofagus fusca (Hook.f.) Oerst. (New Zealand North, New Zealand South) Nothofagus glauca (Phil.) Krasser (Chile Central) Nothofagus grandis Steenis (New Guinea) Nothofagus gunnii (Hook.f.) Oerst. (Tasmania) Nothofagus macrocarpa (A.DC.) F.M.Vázquez & R.A.Rodr. (Chile Central) Nothofagus menziesii (Hook.f.) Oerst. (New Zealand North, New Zealand South) Nothofagus moorei (F.Muell.) Krasser (New South Wales, Queensland) Nothofagus nitida (Phil.) Krasser (Chile South) Nothofagus nuda Steenis (New Guinea) Nothofagus obliqua (Mirb.) Oerst. (Argentina South, Chile Central, Chile South) Nothofagus perryi Steenis (New Guinea) Nothofagus pseudoresinosa Steenis (New Guinea) Nothofagus pullei Steenis (New Guinea) Nothofagus pumilio (Poepp. & Endl.) 
Krasser (Argentina South, Chile Central, Chile South) Nothofagus resinosa Steenis (New Guinea) Nothofagus rubra Steenis (New Guinea) Nothofagus rutila Ravenna (Chile Central) Nothofagus solandri (Hook.f.) Oerst. (New Zealand North, New Zealand South) Nothofagus starkenborghiorum Steenis (Bismarck Archipelago, New Guinea) Nothofagus stylosa Steenis (New Guinea) Nothofagus truncata (Colenso) Cockayne (New Zealand North, New Zealand South) Nothofagus womersleyi Steenis (New Guinea) Nothofagus × apiculata (Colenso) Cockayne (New Zealand North, New Zealand South) Nothofagus × blairii Kirk (New Zealand North, New Zealand South) Nothofagus × dodecaphleps Mike L.Grant & E.J.Clement (artificial hybrid) Nothofagus × eugenananus Gilland. (artificial hybrid) Nothofagus × leoni Espinosa (Chile Central) Nothofagus × solfusca Allan (New Zealand North) Subgenera Four subgenera are recognized, based on morphology and DNA analysis: Subgenus Fuscospora, six species (N. alessandrii, N. cliffortioides, N. fusca, N. gunnii, N. solandri, and N. truncata) in New Zealand, Tasmania, and southern South America. Subgenus Lophozonia, seven species (N. alpina, N. cunninghamii, N. glauca, N. macrocarpa, N. menziesii, N. moorei, and N. obliqua) in New Zealand, Australia, and southern South America. Subgenus Nothofagus, five species (N. antarctica, N. betuloides, N. dombeyi, N. nitida, and N. pumilio) in southern South America. Subgenus Brassospora (or Trisyngyne), 20 accepted species (N. aequilateralis, N. balansae, N. baumanniae, N. brassii, N. carrii, N. codonandra, N. crenata, N. discoidea, N. flaviramea, N. grandis, N. nuda, N. perryi, N. pseudoresinosa, N. pullei, N. recurva, N. resinosa, N. rubra, N. starkenborghiorum, N. stylosa, and N. womersleyi) in New Guinea and New Caledonia. In 2013, Peter Brian Heenan and Rob D. Smissen proposed splitting the genus into four, turning the four recognized subgenera into the new genera Fuscospora, Lophozonia and Trisyngyne, with the five South American species of subgenus Nothofagus remaining in genus Nothofagus. The proposed new genera are not accepted by the World Checklist of Selected Plant Families.
Extinct species The following additional species are listed as extinct: †Nothofagus australis (Argentina, Early Oligocene-Early Miocene) †Nothofagus balfourensis (Tasmania, Late Oligocene-Early Miocene) †Nothofagus beardmorensis (Antarctica, Late Pliocene) †Nothofagus bulbosa (Tasmania, Early Oligocene) †Nothofagus cethanica (Tasmania, Early Oligocene) †Nothofagus cooksoniae (Tasmania, Early Oligocene) †Nothofagus crenulata (Argentina, Mid Oligocene-Early Miocene) †Nothofagus cretacea (Antarctica, Late Cretaceous) †Nothofagus densinervosa (Argentina, Mid Oligocene-Early Miocene) †Nothofagus elongata (Argentina, Early Oligocene-Early Miocene) †Nothofagus glandularis (Tasmania, Mid Oligocene-Early Miocene) †Nothofagus glaucifolia (Antarctica, Late Cretaceous) †Nothofagus lanceolata (Argentina, Late Oligocene-Early Miocene) †Nothofagus lobata (Tasmania, Early Oligocene) †Nothofagus magelhaenica (Argentina, Early Oligocene-Early Miocene) †Nothofagus magellanica (Argentina, Late Oligocene-Mid Miocene) †Nothofagus maideni (Tasmania, Early Oligocene-Mid Miocene) †Nothofagus microphylla (Tasmania, Late Oligocene-Mid Miocene) †Nothofagus mucronata (Tasmania, Early Oligocene) †Nothofagus muelleri (New South Wales, Late Eocene) †Nothofagus novae-zealandiae (New Zealand, Mid-Late Miocene) †Nothofagus pachyphylla (Tasmania, Early Pleistocene) †Nothofagus palustris (New Zealand, Late Oligocene-Early Miocene) †Nothofagus peduncularis (Tasmania, Early Oligocene) †Nothofagus robusta (Tasmania, Early Oligocene) †Nothofagus serrata (Tasmania, Early Oligocene) †Nothofagus serrulata (Argentina, Mid Oligocene-Early Miocene) †Nothofagus simplicidens (Argentina, Mid Oligocene-Early Miocene) †Nothofagus smithtonensis (Tasmania, Early Oligocene) †Nothofagus tasmanica (Tasmania, Eocene-Early Oligocene) †Nothofagus ulmifolia (Antarctica, Late Cretaceous) †Nothofagus variabilis (Argentina, Oligocene) †Nothofagus zastawniakiae (Antarctica, Late Cretaceous) Distribution The pattern of distribution around the southern Pacific Rim suggests the dissemination of the genus dates to the time when Antarctica, Australia, and South America were connected in a common land-mass or supercontinent referred to as Gondwana. However, genetic evidence using molecular dating methods has been used to argue that the species in New Zealand and New Caledonia evolved from species that arrived in these landmasses by dispersal across oceans. Uncertainty exists in molecular dates and controversy rages as to whether the distribution of Nothofagus derives from the break-up of Gondwana (i.e. vicariance), or if long-distance dispersal has occurred across oceans. In South America, the northern limit of the genus can be construed as La Campana National Park and the Vizcachas Mountains in the central part of Chile. Evolutionary history Nothofagus first appeared in Antarctica during the early Campanian stage (83.6 to 72.1 million years ago) of the Late Cretaceous. During the Campanian Nothofagus diversified and became dominant within Antarctic ecosystems, with the appearance of all four modern subgenera by the end of the stage. Nothofagus shows a progressive decline in the Antarctic pollen record through the Maastrichtian, before substantially recovering after the Cretaceous-Paleogene boundary. 
Nothofagus persisted in Antarctica deep into the Cenozoic, despite the increasingly inhospitable conditions, with the final records from the late Neogene, around 15–5 million years old, which were small tundra-adapted prostrate shrubs, similar to Salix arctica (Arctic willow). Nothofagus first appeared in southern South America during the late Campanian. During the Paleocene and Eocene they were mostly restricted to southern Patagonia, before reaching a peak abundance during the Miocene. Their distribution contracted westwards during the late Miocene due to the aridification of Patagonia. Although the genus now mostly occurs in cool, isolated, high-altitude environments at temperate and tropical latitudes, the fossil record shows that it survived in climates that appear to be much warmer than those that Nothofagus now occupies. Ecology Nothofagus species are used as food plants by the larvae of hepialid moths of the genus Aenetus, including A. eximia and A. virescens. Zelopsis nothofagi is a leaf hopper, endemic to New Zealand, which is found on Nothofagus. Cyttaria is a genus of ascomycete fungi found on or associated with Nothofagus in Australia and South America. Misodendrum species are specialist parasitic plants found on various species of Nothofagus in South America. Additionally, the beetle Brachysternus prasinus has been known to live in Nothofagus in Chile and in parts of Argentina. The geographic range of B. prasinus is highly dependent on the availability and distribution of Nothofagus, on which B. prasinus is believed to feed. B. prasinus has been observed in the Nothofagus forests near the cities of Coquimbo and Llanquihue in Chile as well as the areas of Neuquén and Chubut in Western Argentina. The species of subgenus Brassospora are evergreen, and distributed in the tropics of New Guinea, New Britain, and New Caledonia. In New Guinea and New Britain Nothofagus is characteristic of lower montane rain forests between 1000 and 2500 meters elevation, occurring infrequently at elevations as low as 600 meters, and in upper montane forests between 2500 and 3150 meters elevation. Nothofagus is most commonly found above the Castanopsis-Lithocarpus zone in the lower montane forests, and below the conifer-dominated upper montane forests. Nothofagus grows in mixed stands with trees of other species or in pure stands, particularly on ridge crests and upper slopes. The Central Range has the greatest diversity of species, with fewer species distributed among the mountains of western and northern New Guinea, New Britain, and Goodenough and Normanby islands. The New Caledonian species are endemic to the main island (Grande Terre), most commonly on soils derived from ultramafic rocks between 150 and 1350 meters elevation. They occur in isolated stands, forming a low or stunted and irregular and fairly open canopy. The conifers Agathis and Araucaria are sometimes present as emergents, rising 10 to 20 meters above the Nothofagus canopy. Beech mast Every four to six years or so, Nothofagus produces a heavier crop of seeds, an event known as the beech mast. In New Zealand, the beech mast causes an increase in the population of introduced mammals such as mice, rats, and stoats. When the rodent population collapses, the stoats begin to prey on native bird species, many of which are threatened with extinction. This phenomenon is covered in more detail in the article on stoats in New Zealand.
Biology and health sciences
Fagales
Plants
495884
https://en.wikipedia.org/wiki/Sound%20pressure
Sound pressure
Sound pressure or acoustic pressure is the local pressure deviation from the ambient (average or equilibrium) atmospheric pressure, caused by a sound wave. In air, sound pressure can be measured using a microphone, and in water with a hydrophone. The SI unit of sound pressure is the pascal (Pa). Mathematical definition A sound wave in a transmission medium causes a deviation (sound pressure, a dynamic pressure) in the local ambient pressure, a static pressure. Sound pressure, denoted p, is defined by $p = p_\text{total} - p_\text{stat}$, where $p_\text{total}$ is the total pressure and $p_\text{stat}$ is the static pressure. Sound measurements Sound intensity In a sound wave, the complementary variable to sound pressure is the particle velocity. Together, they determine the sound intensity of the wave. Sound intensity, denoted I and measured in W·m−2 in SI units, is defined by $\mathbf{I} = p\mathbf{v}$, where p is the sound pressure and $\mathbf{v}$ is the particle velocity. Acoustic impedance Acoustic impedance, denoted Z and measured in Pa·m−3·s in SI units, is defined by $Z(s) = \hat{p}(s)/\hat{Q}(s)$, where $\hat{p}(s)$ is the Laplace transform of sound pressure and $\hat{Q}(s)$ is the Laplace transform of sound volume flow rate. Specific acoustic impedance, denoted z and measured in Pa·m−1·s in SI units, is defined by $z(s) = \hat{p}(s)/\hat{v}(s)$, where $\hat{v}(s)$ is the Laplace transform of particle velocity. Particle displacement The particle displacement of a progressive sine wave is given by $\delta(\mathbf{r}, t) = \delta_\text{m} \cos(\mathbf{k} \cdot \mathbf{r} - \omega t + \varphi_{\delta,0})$, where $\delta_\text{m}$ is the amplitude of the particle displacement, $\varphi_{\delta,0}$ is the phase shift of the particle displacement, k is the angular wavevector, and ω is the angular frequency. It follows that the particle velocity and the sound pressure along the direction of propagation of the sound wave x are given by $v(\mathbf{r}, t) = \partial\delta/\partial t = \omega \delta_\text{m} \cos(\mathbf{k} \cdot \mathbf{r} - \omega t + \varphi_{\delta,0} + \pi/2) = v_\text{m} \cos(\mathbf{k} \cdot \mathbf{r} - \omega t + \varphi_{v,0})$ and $p(\mathbf{r}, t) = p_\text{m} \cos(\mathbf{k} \cdot \mathbf{r} - \omega t + \varphi_{p,0})$, where $v_\text{m} = \omega\delta_\text{m}$ is the amplitude of the particle velocity, $\varphi_{v,0}$ is the phase shift of the particle velocity, $p_\text{m}$ is the amplitude of the acoustic pressure, and $\varphi_{p,0}$ is the phase shift of the acoustic pressure. Taking the Laplace transforms of v and p with respect to time yields $\hat{v}(\mathbf{r}, s)$ and $\hat{p}(\mathbf{r}, s)$. Since $\varphi_{v,0} = \varphi_{p,0}$, the amplitude of the specific acoustic impedance is given by $z_\text{m} = |z(\mathbf{r}, s)| = p_\text{m}/v_\text{m}$. Consequently, the amplitude of the particle displacement is related to that of the acoustic velocity and the sound pressure by $\delta_\text{m} = v_\text{m}/\omega = p_\text{m}/(\omega z_\text{m})$. Inverse-proportional law When measuring the sound pressure created by a sound source, it is important to measure the distance from the object as well, since the sound pressure of a spherical sound wave decreases as 1/r from the centre of the sphere (and not as 1/r², like the sound intensity): $p(r) \propto 1/r$. This relationship is an inverse-proportional law. If the sound pressure p1 is measured at a distance r1 from the centre of the sphere, the sound pressure p2 at another position r2 can be calculated as $p_2 = p_1 \frac{r_1}{r_2}$. The inverse-proportional law for sound pressure comes from the inverse-square law for sound intensity: $I(r) \propto 1/r^2$. Indeed, $I(r) = p(r)\,v(r) = p(r)\,[z^{-1} * p](r) \propto p^2(r)$, where $*$ is the convolution operator and z−1 is the convolution inverse of the specific acoustic impedance; hence the inverse-proportional law $p(r) \propto 1/r$. Sound pressure level Sound pressure level (SPL) or acoustic pressure level (APL) is a logarithmic measure of the effective pressure of a sound relative to a reference value. Sound pressure level, denoted Lp and measured in dB, is defined by $L_p = \ln\frac{p}{p_0}\,\text{Np} = 2\log_{10}\frac{p}{p_0}\,\text{B} = 20\log_{10}\frac{p}{p_0}\,\text{dB}$, where p is the root mean square sound pressure, p0 is a reference sound pressure, Np is the neper, B is the bel, and dB is the decibel. The commonly used reference sound pressure in air is $p_0 = 20~\mu\text{Pa}$, which is often considered as the threshold of human hearing (roughly the sound of a mosquito flying 3 m away).
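Before turning to notation, here is a minimal Python sketch of the decibel definition above (an illustration, not part of the article; the function and variable names are my own). It converts an RMS pressure to SPL against the 20 μPa airborne reference, and uses the plane-wave relation z_m = ρc to recover velocity and displacement amplitudes from a pressure amplitude:

```python
import math

P_REF_AIR = 20e-6  # common airborne reference pressure, 20 uPa (RMS)

def spl_db(p_rms: float, p_ref: float = P_REF_AIR) -> float:
    """Sound pressure level L_p = 20*log10(p/p0), in decibels."""
    return 20.0 * math.log10(p_rms / p_ref)

print(round(spl_db(1.0), 1))  # 1 Pa RMS in air is ~94.0 dB SPL

# For a plane progressive wave, z_m = rho*c, so velocity and displacement
# amplitudes follow from the pressure amplitude:
rho_c = 413.0                  # approx. characteristic impedance of air at 20 C, Pa*s/m
p_m = 1.0                      # pressure amplitude, Pa
omega = 2 * math.pi * 1000.0   # angular frequency for a 1 kHz tone
v_m = p_m / rho_c              # particle-velocity amplitude, v_m = p_m / z_m
delta_m = v_m / omega          # particle-displacement amplitude, delta_m = v_m / omega
print(v_m, delta_m)
```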
The proper notations for sound pressure level using this reference are $L_{p/(20~\mu\text{Pa})}$ or $L_p$ (re 20 μPa), but the suffix notations dB SPL, dB(SPL), dBSPL, or dB_SPL are very common, even if they are not accepted by the SI. Most sound-level measurements will be made relative to this reference, meaning 1 Pa will equal an SPL of 94 dB. In other media, such as underwater, a reference level of 1 μPa is used. These references are defined in ANSI S1.1-2013. The main instrument for measuring sound levels in the environment is the sound level meter. Most sound level meters provide readings in A, C, and Z-weighted decibels and must meet international standards such as IEC 61672:2013. Examples The lower limit of audibility is defined as an SPL of 0 dB, but the upper limit is not as clearly defined. While 1 atm (194 dB peak or 191 dB SPL) is the largest pressure variation an undistorted sound wave can have in Earth's atmosphere (i.e., if the thermodynamic properties of the air are disregarded; in reality, the sound waves become progressively non-linear starting over 150 dB), larger sound waves can be present in other atmospheres or other media, such as underwater or through the Earth. Ears detect changes in sound pressure. Human hearing does not have a flat spectral sensitivity (frequency response) relative to frequency versus amplitude. Humans do not perceive low- and high-frequency sounds as well as they perceive sounds between 3,000 and 4,000 Hz, as shown in the equal-loudness contour. Because the frequency response of human hearing changes with amplitude, three weightings have been established for measuring sound pressure: A, B and C. In order to distinguish the different sound measures, a suffix is used: A-weighted sound pressure level is written either as dBA or LA, B-weighted sound pressure level is written either as dBB or LB, and C-weighted sound pressure level is written either as dBC or LC. Unweighted sound pressure level is called "linear sound pressure level" and is often written as dBL or just L. Some sound measuring instruments use the letter "Z" as an indication of linear SPL. Distance The distance of the measuring microphone from a sound source is often omitted when SPL measurements are quoted, making the data useless, due to the inherent effect of the inverse proportional law. In the case of ambient environmental measurements of "background" noise, distance need not be quoted, as no single source is present, but when measuring the noise level of a specific piece of equipment, the distance should always be stated. A distance of one metre (1 m) from the source is a frequently used standard distance. Because of the effects of reflected noise within a closed room, the use of an anechoic chamber allows measurements to be comparable to those made in a free field environment. According to the inverse proportional law, when sound level Lp1 is measured at a distance r1, the sound level Lp2 at the distance r2 is $L_{p_2} = L_{p_1} + 20\log_{10}\frac{r_1}{r_2}\,\text{dB}$. Multiple sources The formula for the sum of the sound pressure levels of n incoherent radiating sources is $L_\Sigma = 10\log_{10}\frac{p_1^2 + p_2^2 + \cdots + p_n^2}{p_0^2}\,\text{dB} = 10\log_{10}\left[\left(\frac{p_1}{p_0}\right)^2 + \left(\frac{p_2}{p_0}\right)^2 + \cdots + \left(\frac{p_n}{p_0}\right)^2\right]\text{dB}$. Inserting the formulas $\left(\frac{p_i}{p_0}\right)^2 = 10^{L_i/10}$ in the formula for the sum of the sound pressure levels yields $L_\Sigma = 10\log_{10}\left(10^{L_1/10} + 10^{L_2/10} + \cdots + 10^{L_n/10}\right)\text{dB}$. Examples of sound pressure
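As numeric examples of the distance correction and incoherent-summation formulas above, a small Python sketch (illustrative only; the function names are my own):

```python
import math

def spl_at_distance(l_p1: float, r1: float, r2: float) -> float:
    """Inverse-proportional law: L_p2 = L_p1 + 20*log10(r1/r2)."""
    return l_p1 + 20.0 * math.log10(r1 / r2)

def spl_sum_incoherent(levels: list[float]) -> float:
    """n incoherent sources: L = 10*log10(10^(L1/10) + ... + 10^(Ln/10))."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels))

print(spl_at_distance(94.0, 1.0, 2.0))   # doubling the distance drops ~6 dB -> ~88.0
print(spl_sum_incoherent([80.0, 80.0]))  # two equal incoherent sources add ~3 dB -> ~83.0
```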
Physical sciences
Waves
Physics
496264
https://en.wikipedia.org/wiki/Dialysis%20%28chemistry%29
Dialysis (chemistry)
In chemistry, dialysis is the process of separating molecules in solution by the difference in their rates of diffusion through a semipermeable membrane, such as dialysis tubing. Dialysis is a common laboratory technique that operates on the same principle as medical dialysis. In the context of life science research, the most common application of dialysis is for the removal of unwanted small molecules such as salts, reducing agents, or dyes from larger macromolecules such as proteins, DNA, or polysaccharides. Dialysis is also commonly used for buffer exchange and drug binding studies. The concept of dialysis was introduced in 1861 by the Scottish chemist Thomas Graham. He used this technique to separate sucrose (small molecule) and gum Arabic (large molecule) solutes in aqueous solution. He called the diffusible solutes crystalloids and those that would not pass the membrane colloids. From this concept, dialysis can be defined as a spontaneous separation process of suspended colloidal particles from dissolved ions or molecules of small dimensions through a semipermeable membrane. Most common dialysis membranes are made of cellulose, modified cellulose, or synthetic polymers (cellulose acetate or nitrocellulose). Etymology Dialysis derives from the Greek dia, 'through', and lysis, 'to loosen'. Principles Dialysis is a process used to change the composition of a sample by separating its molecules according to size. It relies on diffusion, which is the random, thermal movement of molecules in solution (Brownian motion) that leads to the net movement of molecules from an area of higher concentration to a lower concentration until equilibrium is reached. Due to the pore size of the membrane, large molecules in the sample cannot pass through the membrane, thereby restricting their diffusion from the sample chamber. By contrast, small molecules will freely diffuse across the membrane and obtain equilibrium across the entire solution volume, thereby changing the overall concentration of these molecules in the sample and dialysate (see dialysis figure at right). Osmosis is another principle that makes dialysis work. During osmosis, fluid moves from areas of high water concentration to lower water concentration across a semi-permeable membrane until equilibrium. In dialysis, excess fluid moves from sample to the dialysate through a membrane until the fluid level is the same between sample and dialysate. Finally, ultrafiltration is the convective flow of water and dissolved solute down a pressure gradient caused by hydrostatic forces or osmotic forces. In dialysis, ultrafiltration removes molecules of waste and excess fluids from the sample. For example, dialysis occurs when a sample contained in a cellulose bag is immersed in a dialysate solution. During dialysis, equilibrium is achieved between the sample and dialysate since only small molecules can pass the cellulose membrane, leaving only larger particles behind. Once equilibrium is reached, the final concentration of molecules is dependent on the volumes of the solutions involved, and if the equilibrated dialysate is replaced (or exchanged) with fresh dialysate (see procedure below), diffusion will further reduce the concentration of the small molecules in the sample. Dialysis can be used to either introduce or remove small molecules from a sample, because small molecules move freely across the membrane in both directions. Dialysis can also be used to remove salts.
This makes dialysis a useful technique for a variety of applications. See dialysis tubing for additional information on the history, properties, and manufacturing of semipermeable membranes used for dialysis. Types Diffusion dialysis Diffusion dialysis is a spontaneous separation process in which the driving force producing the separation is the concentration gradient. It involves an increase in entropy and a decrease in Gibbs free energy, which means that it is thermodynamically favorable. Diffusion dialysis uses anion exchange membranes (AEM) or cation exchange membranes (CEM), depending on the compounds to separate. AEMs allow the passage of anions while obstructing the passage of cations, due to co-ion rejection and the preservation of electrical neutrality. The opposite happens with cation exchange membranes. Electrodialysis Electrodialysis is a separation process which uses ion-exchange membranes and an electrical potential as a driving force. It is mainly used to remove ions from aqueous solutions. Three electrodialysis processes are commonly used: Donnan dialysis, reverse electrodialysis, and electro-electrodialysis. These processes are explained below. Donnan dialysis Donnan dialysis is a separation process which is used to exchange ions between two aqueous solutions which are separated by a CEM or an AEM membrane. In the case of a cation exchange membrane separating two solutions with different acidity, protons (H+) go through the membrane to the less acidic side. This induces an electrical potential that will instigate a flux of the cations present in the less acidic side to the more acidic side. The process finishes when the change in H+ concentration is of the same order of magnitude as the concentration difference of the separated cation. Reverse electrodialysis Reverse electrodialysis is a membrane-based technology that generates electricity from the mixing of two water streams with different salinities. It commonly uses anion exchange membranes (AEM) and cation exchange membranes (CEM). AEMs allow the passage of anions and obstruct the passage of cations, and CEMs do the opposite. The cations and anions in the high-salinity water move to the low-salinity water, cations passing through the CEMs and anions through the AEMs. This ion flux can be converted to electricity. Electro-electrodialysis Electro-electrodialysis is an electromembrane process utilizing three compartments, which combines electrodialysis and electrolysis. It is commonly used to recover acid from a solution using AEM, CEM and electrolysis. The three compartments are separated by two barriers, which are the ion exchange membranes. The compartment in the middle holds the water to be treated. The compartments located on the sides contain clean water. The anions pass through the AEM, while the cations pass through the CEM. The current generates H+ on the anion side and OH− on the cation side, which react with the respective ions. Procedure Equipment Separating molecules in a solution by dialysis is a relatively straightforward process. Other than the sample and dialysate buffer, all that is typically needed is: Dialysis membrane in an appropriate format (e.g., tubing, cassette, etc.)
and molecular weight cut-off (MWCO) A container to hold the dialysate buffer The ability to stir the solutions and control the temperature General protocol A typical dialysis procedure for protein samples is as follows: Prepare the membrane according to instructions Load the sample into dialysis tubing, cassette or device Place sample into an external chamber of dialysis buffer (with gentle stirring of the buffer) Dialyze for 2 hours (at room temperature or 4 °C) Change the dialysis buffer and dialyze for another 2 hours Change the dialysis buffer and dialyze for 2 hours or overnight The total volume of sample and dialysate determine the final equilibrium concentration of the small molecules on both sides of the membrane. By using the appropriate volume of dialysate and multiple exchanges of the buffer, the concentration of small contaminants within the sample can be decreased to acceptable or negligible levels. For example, when dialyzing 1 mL of sample against 200 mL of dialysate, the concentration of unwanted dialyzable substances will be decreased 200-fold when equilibrium is attained. Following two additional buffer changes of 200 mL each, the contaminant level in the sample will be reduced by a factor of 8 × 10⁶ (200 × 200 × 200); a worked check of this arithmetic is sketched in code below. Variables and protocol optimization Although dialyzing a sample is relatively simple, a universal dialysis procedure for all applications cannot be provided due to the following variables: The sample volume The size of the molecules being separated The membrane used The geometry of the membrane, which affects the diffusion distance Additionally, the dialysis endpoint is somewhat subjective and application specific. Therefore, the general procedure might require optimization. Dialysis membranes and MWCO Dialysis membranes are produced and characterized according to molecular-weight cutoff (MWCO) limits. While membranes with MWCOs ranging from 1–1,000,000 kDa are commercially available, membranes with MWCOs near 10 kDa are most commonly used. The MWCO of a membrane is the result of the number and average size of the pores created during production of the dialysis membrane. The MWCO typically refers to the smallest average molecular mass of a standard molecule that will not effectively diffuse across the membrane during extended dialysis. Thus, a dialysis membrane with a 10K MWCO will generally retain greater than 90% of a protein having a molecular mass of at least 10 kDa. It is important to note that the MWCO of a membrane is not a sharply defined value. Molecules with mass near the MWCO limit of the membrane will diffuse across the membrane more slowly than molecules significantly smaller than the MWCO. In order for a molecule to rapidly diffuse across a membrane, it typically needs to be at least 20- to 50-times smaller than the MWCO rating of a membrane. Therefore, it is not practical to separate a 30 kDa protein from a 10 kDa protein using dialysis across a 20K-rated dialysis membrane. Dialysis membranes for laboratory use are typically made of a film of regenerated cellulose or cellulose esters. See reference for a review of cellulose membranes and manufacturing. Laboratory dialysis formats Dialysis is generally performed in clipped bags of dialysis tubing or in a variety of formatted dialyzers. The choice of the dialysis set up used is largely dependent on the size of the sample and the preference of the user. Dialysis tubing is the oldest and generally the least expensive format used for dialysis in the lab.
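The worked check promised above: a minimal Python sketch of the buffer-exchange arithmetic (illustrative only; the function name is my own, and it assumes a freely dialyzable solute fully equilibrates at each exchange):

```python
def fold_reduction(sample_ml: float, dialysate_ml: float, exchanges: int) -> float:
    """Fold-reduction of a small, freely dialyzable contaminant in the sample
    after repeated complete buffer exchanges, assuming full equilibration."""
    per_exchange = (sample_ml + dialysate_ml) / sample_ml  # 201 for 1 mL vs 200 mL
    return per_exchange ** exchanges

# 1 mL sample against 200 mL of dialysate, three rounds of buffer:
print(f"{fold_reduction(1.0, 200.0, 3):.2e}")
# ~8.1e+06, matching the article's ~200^3 = 8e6 estimate (which rounds 201 to 200)
```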
Tubing is cut and sealed with a clip at one end, then filled and sealed with a clip on the other end. Tubing provides flexibility but has increased concerns regarding handling, sealing and sample recovery. Dialysis tubing is typically supplied either wet or dry in rolls or pleated telescoped tubes. A wide variety of dialysis devices (or dialyzers) are available from several vendors. Dialyzers are designed for specific sample volume ranges and provide greater sample security and improved ease of use and performance for dialysis experiments over tubing. The most common preformatted dialyzers are the Slide-A-Lyzer, Float-A-Lyzer, and Pur-A-lyzer/D-Tube/GeBAflex Dialyzer product lines. Applications Dialysis has a wide range of applications. These can be divided into two categories depending on the type of dialysis used. Diffusion dialysis Some applications of diffusion dialysis are explained below. Strong aqueous caustic soda solutions can be purified of hemicellulose by diffusion dialysis. This is specific to the largely-obsolete viscose process. The first step in that process is to treat almost-pure cellulose (cotton linters or dissolving pulp) with strong (17–20% w/w) solutions of sodium hydroxide (caustic soda) in water. One effect of that step is to dissolve the hemicelluloses (low-MW polymers). In some circumstances, it is desirable to remove as much hemicellulose as possible from the process, and that can be done using dialysis. Acids can be recovered from aqueous solutions using anion-exchange membranes. That process is an alternative treatment for industrial wastewater. It is used for the recovery of mixed acid (HF + HNO3), the recovery and concentration of Zn2+ and Cu2+ from H2SO4 + CuSO4 and H2SO4 + ZnSO4 solutions, and the recovery of H2SO4 from waste sulphuric acid solutions containing Fe and Ni ions, which are produced in the diamond manufacturing process. Alkali waste can be recovered using diffusion dialysis because of its low energy cost. The NaOH base can be recovered from aluminium etching solution by applying a technique developed by the Astom Corporation of Japan. De-alcoholisation of beer is another application of diffusion dialysis. Because a concentration gradient drives this technique, alcohol and other small-molecule compounds transfer across the membrane from the higher-concentration side to the lower-concentration side, which is water. Diffusion dialysis is used for this application because of its mild operating conditions and its ability to reduce the alcohol content to 0.5%. Electrodialysis Some applications of electrodialysis are explained below. The desalination of whey is the largest area of use for this type of dialysis in the food industry. Crude cheese whey contains calcium, phosphorus and other inorganic salts that must be removed to produce foods such as cake, bread, ice cream and baby foods. The limit of whey demineralisation is almost 90%. De-acidification of fruit juices such as grape, orange, apple and lemon is a process in which electrodialysis is applied. An anion-exchange membrane is employed in this technique, meaning that citrate ions from the juice are extracted and replaced by hydroxide ions. Desalting of soy sauce can be done by electrodialysis. The conventional values of salt in brewed soy sauce are about 16–18%, which is quite a high content. Electrodialysis is used to reduce the amount of salt present in the soy sauce, as low-salt diets are now very common.
Electrodialysis allows the separation of amino acids into acidic, basic and neutral groups. Specifically, cytoplasmic leaf proteins are extracted from alfalfa leaves applying electrodialysis. When proteins are denatured, the solutions can be desalted (of K+ ions) and acidified with H+ ions. Advantages and disadvantages Dialysis has both advantages and disadvantages. Following the structure of the previous section, the pros and cons are discussed based on the type of dialysis used. Advantages and drawbacks of both diffusion dialysis and electrodialysis are outlined below. Diffusion dialysis The main advantage of diffusion dialysis is the low energy consumption of the unit. This membrane technique operates under normal pressure and does not involve a phase change. Consequently, the energy required is significantly reduced, which reduces the operating cost. Other advantages include the low installation cost, easy operation, and the stability and reliability of the process. Another advantage is that diffusion dialysis does not pollute the environment. A disadvantage is that a diffusion dialyser has a low processing capability and low processing efficiency. Other methods such as electrodialysis and reverse osmosis can achieve better efficiencies than diffusion dialysis. Electrodialysis The main benefit of electrodialysis is its high recovery, especially of water. Another advantage is that no high pressure is applied, which means that fouling is not significant and consequently no chemicals are required to fight it. Moreover, the fouling layer is not compact, which leads to a higher recovery and a long membrane life. It can also treat feeds with concentrations higher than 70,000 ppm, effectively eliminating the concentration limit. Finally, the energy required to operate is low because there is no phase change; in fact, it is lower than that needed in the multi-effect distillation (MED) and mechanical vapour compression (MVC) processes. The main drawback of electrodialysis is the current density limit; the process must be operated at a lower current density than the maximum allowed. Above a certain applied voltage, the diffusion of ions through the membrane becomes non-linear, leading to water dissociation, which reduces the efficiency of the operation. Another aspect to take into account is that, although little energy is required to operate, the higher the salt feed concentration, the higher the energy needed. Finally, in the case of some products, it must be considered that electrodialysis does not remove microorganisms and organic contaminants, so a post-treatment is necessary.
Physical sciences
Other separations
Chemistry
496360
https://en.wikipedia.org/wiki/Siltstone
Siltstone
Siltstone, also known as aleurolite, is a clastic sedimentary rock that is composed mostly of silt. It is a form of mudrock with a low clay mineral content, which can be distinguished from shale by its lack of fissility. Although its permeability and porosity are relatively low, siltstone is sometimes a tight gas reservoir rock, an unconventional reservoir for natural gas that requires hydraulic fracturing for economic gas production. Siltstone was prized in ancient Egypt for manufacturing statuary and cosmetic palettes. The siltstone quarried at Wadi Hammamat was a hard, fine-grained siltstone that resisted flaking and was almost ideal for such uses. Description There is not complete agreement on the definition of siltstone. One definition is that siltstone is mudrock (clastic sedimentary rock containing at least 50% clay and silt) in which at least 2/3 of the clay and silt fraction is composed of silt-sized particles. Silt is defined as grains 2–62 μm in diameter, or 4 to 8 on the Krumbein phi (φ) scale. An alternate definition is that siltstone is any sedimentary rock containing 50% or more of silt-sized particles. Siltstones can be distinguished from claystone in the field by chewing a small sample; claystone feels smooth while siltstone feels gritty. Siltstones differ significantly from sandstones due to their smaller pores and a higher propensity for containing a significant clay fraction. Although often mistaken for a shale, siltstone lacks the laminations and fissility along horizontal lines which are typical of shale. Siltstones may contain concretions. Unless the siltstone is fairly shaly, stratification is likely to be obscure and it tends to weather at oblique angles unrelated to bedding. Origin Siltstone is an unusual rock, in which most of the silt grains are made of quartz. The origin of quartz silt has been a topic of much research and debate. Some quartz silt likely has its origin in fine-grained foliated metamorphic rock, while much marine silt is likely biogenic, but most quartz sediments come from granitic rocks in which quartz grains are much larger than quartz silt. Highly energetic processes are required to break these grains down to silt size. Among proposed mechanisms are glacial grinding; weathering in cold, tectonically active mountain ranges; normal weathering, particularly in tropical regions; and formation in hot desert environments by salt weathering. Siltstones form in relatively quiet depositional environments where fine particles can settle out of the transporting medium (air or water) and accumulate on the surface. They are found in turbidite sequences, in deltas, in glacial deposits, and in miogeosynclinal settings. Locations with siltstone deposits Cheltenham Badlands, Canada Chek Chau, Hong Kong - siltstone layered with conglomerate
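To make the grain-size boundaries in the definition concrete, here is a small Python sketch of the standard Krumbein phi conversion (my own illustration, not from the article; the function name is hypothetical):

```python
import math

def krumbein_phi(diameter_mm: float) -> float:
    """Krumbein phi scale: phi = -log2(d / D0), with the reference D0 = 1 mm."""
    return -math.log2(diameter_mm)

# The coarse silt boundary (~62 um = 0.062 mm) sits at phi ~ 4,
# while phi = 8 corresponds to 2^-8 mm ~ 3.9 um:
print(round(krumbein_phi(0.062), 2))    # ~4.01
print(round(krumbein_phi(2 ** -8), 1))  # 8.0
```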
Physical sciences
Sedimentary rocks
Earth science
496540
https://en.wikipedia.org/wiki/Propofol
Propofol
Propofol is the active component of an intravenous anesthetic formulation used for induction and maintenance of general anesthesia. It is chemically termed 2,6-diisopropylphenol. The formulation was approved under the brand name Diprivan. Numerous generic versions have since been released. Intravenous administration is used to induce unconsciousness, after which anesthesia may be maintained using a combination of medications. It is manufactured as part of a sterile injectable emulsion formulation using soybean oil and lecithin, giving it a white milky coloration. Recovery from propofol-induced anesthesia is generally rapid and associated with less frequent side effects (e.g. drowsiness, nausea, vomiting) compared to other anesthetic agents. Propofol may be used prior to diagnostic procedures requiring anesthesia, in the management of refractory status epilepticus, and for induction and/or maintenance of anesthesia prior to and during surgeries. It may be administered as a bolus or an infusion, or some combination of the two. First synthesized in 1973 by John B. Glen, a British veterinary anesthesiologist working for Imperial Chemical Industries (ICI, later AstraZeneca), propofol was introduced for therapeutic use as a lipid emulsion in the United Kingdom and New Zealand in 1986. Propofol (Diprivan) received FDA approval in October 1989. It is on the World Health Organization's List of Essential Medicines. Uses Anesthesia Propofol is used almost exclusively to induce general anesthesia, having largely replaced sodium thiopental. It is often administered as part of an anesthesia maintenance technique called total intravenous anesthesia, using either manually programmed infusion pumps or computer-controlled infusion pumps in a process called target controlled infusion (TCI). Propofol is also used to sedate individuals who are receiving mechanical ventilation but not undergoing surgery, such as patients in the intensive care unit. In critically ill patients, propofol is superior to lorazepam both in effectiveness and overall cost. Propofol is relatively inexpensive compared to medications of similar use due to the shorter ICU stays it allows. One of the reasons propofol is thought to be more effective (although it has a longer half-life than lorazepam) is that studies have found that benzodiazepines like midazolam and lorazepam tend to accumulate in critically ill patients, prolonging sedation. Propofol has also been suggested as a sleep aid in critically ill adults in an ICU setting; however, the effectiveness of this medicine in replicating the mental and physical aspects of sleep for people in the ICU is not clear. Propofol can be administered via a peripheral IV or central line. Propofol is often paired with fentanyl (for pain relief) in intubated and sedated people. The two drugs are molecularly compatible in an IV mixture form. Propofol is also used to deepen anesthesia to relieve laryngospasm. It may be used alone or followed by succinylcholine. Its use can avoid the need for paralysis and in some instances the potential side-effects of succinylcholine. Routine procedural sedation Propofol is safe and effective for gastrointestinal endoscopy procedures (colonoscopies etc.). Its use in these settings results in a faster recovery compared to midazolam. It can also be combined with opioids or benzodiazepines. Because of its rapid induction and recovery time, propofol is also widely used for sedation of infants and children undergoing MRI procedures.
It is also often used in combination with ketamine with minimal side effects. COVID-19 In March 2021, the U.S. Food and Drug Administration (FDA) issued an emergency use authorization (EUA) for Propofol-Lipuro 1% to maintain sedation via continuous infusion in people older than sixteen with suspected or confirmed COVID-19 who require mechanical ventilation in an intensive care unit (ICU) setting. During the public health emergency, it was considered unfeasible to limit Fresenius Propoven 2% Emulsion or Propofol-Lipuro 1% to patients with suspected or confirmed COVID-19, so it was made available to all ICU patients under mechanical ventilation. This EUA has since been revoked. Status epilepticus Status epilepticus may be defined as seizure activity lasting beyond five minutes and needing anticonvulsant medication. Several guidelines recommend the use of propofol for the treatment of refractory status epilepticus. Other uses Assisted death in Canada A lethal dose of propofol is used for medical assistance in dying in Canada to quickly induce deep coma and death; rocuronium is always given afterwards as a paralytic to ensure death, even when the patient has already died as a result of the initial propofol overdose. Capital punishment The use of propofol as part of an execution protocol has been considered, although no individual has been executed using this agent. This is largely due to European manufacturers and governments banning the export of propofol for such use. Recreational use Recreational use of the drug via self-administration has been reported but is relatively rare due to its potency and the level of monitoring required for safe use. Critically, a steep dose-response curve makes recreational use of propofol very dangerous, and deaths from self-administration continue to be reported. The short-term effects sought via recreational use include mild euphoria, hallucinations, and disinhibition. Recreational use of the drug has been described among medical staff, such as anesthetists who have access to the drug. It is reportedly more common among anesthetists on rotations with short rest periods, as usage generally produces a well-rested feeling. Long-term use has been reported to result in addiction. Attention to the risks of off-label use of propofol increased in August 2009 due to the Los Angeles County coroner's conclusion that musician Michael Jackson died from a mixture of propofol and the benzodiazepine drugs lorazepam, midazolam, and diazepam on 25 June 2009. According to a 22 July 2009 search warrant affidavit unsealed by the district court of Harris County, Texas, Jackson's physician, Conrad Murray, administered 25 milligrams of propofol diluted with lidocaine shortly before Jackson's death. Manufacturing Propofol as a commercial sterile emulsified formulation is considered difficult to manufacture. It was initially formulated in Cremophor for human use, but this original formulation was implicated in an unacceptable number of anaphylactic events. It was eventually manufactured as a 1% emulsion in soybean oil. Sterile emulsions represent a complex formulation, the stability of which is dependent on the interplay of many factors such as micelle size and distribution. Side effects One of propofol's most common side effects is pain on injection, especially in smaller veins. This pain arises from activation of the pain receptor TRPA1, found on sensory nerves, and can be mitigated by pretreatment with lidocaine.
Less pain is experienced when propofol is infused at a slower rate into a large vein (antecubital fossa). Patients show considerable variability in their response to propofol, at times showing profound sedation with small doses. Additional side effects include low blood pressure related to vasodilation, transient apnea following induction doses, and cerebrovascular effects. Propofol has more pronounced hemodynamic effects relative to many intravenous anesthetic agents. Reports of blood pressure drops of 30% or more are thought to be at least partially due to inhibition of sympathetic nerve activity. This effect is related to the dose and rate of propofol administration. It may also be potentiated by opioid analgesics. Propofol can also cause decreased systemic vascular resistance, myocardial blood flow, and oxygen consumption, possibly through direct vasodilation. There are also reports that it may cause green discoloration of the urine. Although propofol is widely used in the adult ICU setting, the side effects associated with the medication seem to be more concerning in children. In the 1990s, multiple reported deaths of children in ICUs associated with propofol sedation prompted the FDA to issue a warning. As a respiratory depressant, propofol frequently produces apnea. The persistence of apnea can depend on factors such as premedication, dose administered, and rate of administration, and may sometimes persist for longer than 60 seconds. Possibly as the result of depression of the central inspiratory drive, propofol may produce significant decreases in respiratory rate, minute volume, tidal volume, mean inspiratory flow rate, and functional residual capacity. Propofol administration also results in decreased cerebral blood flow, cerebral metabolic oxygen consumption, and intracranial pressure. In addition, propofol may decrease intraocular pressure by as much as 50% in patients with normal intraocular pressure. A more serious but rare side effect is dystonia. Mild myoclonic movements are common, as with other intravenous hypnotic agents. Propofol appears to be safe for use in porphyria, and has not been known to trigger malignant hyperthermia. Propofol is also reported to induce priapism in some individuals, and has been observed to suppress REM sleep and to worsen poor sleep quality in some patients. Rare side effects include: anxiety, changes in vision, cloudy urine, coughing up blood, delirium or hallucinations, difficult urination, difficulty swallowing, and dry eyes, mouth, nose, or throat. As with any other general anesthetic agent, propofol should be administered only where appropriately trained staff and facilities for monitoring are available, as well as proper airway management, a supply of supplemental oxygen, artificial ventilation, and cardiovascular resuscitation. Because of propofol's formulation (using lecithin and soybean oil), it is prone to bacterial contamination, despite the presence of the bacterial inhibitor benzyl alcohol; consequently, some hospital facilities require the IV tubing (of continuous propofol infusions) to be changed after 12 hours. This is a preventive measure against microbial growth and potential infection. Propofol infusion syndrome A rare, but serious, side effect is propofol infusion syndrome. This potentially lethal metabolic derangement has been reported in critically ill patients after a prolonged infusion of high-dose propofol, sometimes in combination with catecholamines and/or corticosteroids.
Interactions The respiratory effects of propofol are increased if given with other respiratory depressants, including benzodiazepines. Pharmacology Pharmacodynamics Propofol has been proposed to have several mechanisms of action, chiefly through potentiation of GABAA receptor activity: it acts as a GABAA receptor positive allosteric modulator, slowing the channel-closing time. At high doses, propofol may be able to activate GABAA receptors in the absence of GABA, behaving as a GABAA receptor agonist as well. Propofol analogs have been shown to also act as sodium channel blockers. Some research has also suggested that the endocannabinoid system may contribute significantly to propofol's anesthetic action and to its unique properties, as endocannabinoids also play an important role in the physiologic control of sleep, pain processing and emesis. An EEG study on patients undergoing general anesthesia with propofol found that it causes a prominent reduction in the brain's information integration capacity. Propofol is an inhibitor of the enzyme fatty acid amide hydrolase, which metabolizes the endocannabinoid anandamide (AEA). Activation of the endocannabinoid system by propofol, possibly via inhibition of AEA catabolism, generates a significant increase in the whole-brain content of AEA, contributing to the sedative properties of propofol via CB1 receptor activation. This may explain the psychotomimetic and antiemetic properties of propofol. By contrast, there is a high incidence of postoperative nausea and vomiting after administration of volatile anesthetics, which contribute to a significant decrease in the whole-brain content of AEA that can last up to forty minutes after induction. Pharmacokinetics Propofol is highly protein-bound in vivo and is metabolized by conjugation in the liver. The half-life of elimination of propofol has been estimated to be between 2 and 24 hours. However, its duration of clinical effect is much shorter, because propofol is rapidly distributed into peripheral tissues. When used for IV sedation, a single dose of propofol typically wears off within minutes. Onset is rapid, in as little as 15–30 seconds. Propofol is versatile; the drug can be given for short or prolonged sedation, as well as for general anesthesia. Its use is not associated with nausea, as is often seen with opioid medications. These characteristics of rapid onset and recovery, along with its amnestic effects, have led to its widespread use for sedation and anesthesia. History John B. Glen, a veterinarian and researcher at Imperial Chemical Industries (ICI), spent thirteen years developing propofol, an effort for which he was awarded the 2018 Lasker Award for clinical research. Originally developed as ICI 35868, propofol was chosen after extensive evaluation and structure–activity relationship studies of the anesthetic potencies and pharmacokinetic profiles of a series of ortho-alkylated phenols. First identified as a drug candidate in 1973, propofol entered clinical trials in 1977, using a form solubilized in Cremophor EL. However, due to anaphylactic reactions to Cremophor, this formulation was withdrawn from the market and subsequently reformulated as an emulsion of a soya oil and propofol mixture in water. The emulsified formulation was relaunched in 1986 by ICI (whose pharmaceutical division later became a constituent of AstraZeneca) under the brand name Diprivan.
The preparation contains 1% propofol, 10% soybean oil, and 1.2% purified egg phospholipid as an emulsifier, with 2.25% glycerol as a tonicity-adjusting agent, and sodium hydroxide to adjust the pH. Diprivan contains EDTA, a common chelation agent, which also acts alone (bacteriostatically against some bacteria) and synergistically with some other antimicrobial agents. Newer generic formulations contain sodium metabisulfite as an antioxidant and benzyl alcohol as an antimicrobial agent. Propofol emulsion is an opaque white fluid due to the scattering of light from the emulsified micelle formulation. Developments A water-soluble prodrug form, fospropofol, has been developed and tested with positive results. Fospropofol is rapidly broken down by the enzyme alkaline phosphatase to form propofol. Marketed as Lusedra, this formulation may not produce the pain at the injection site that often occurs with the conventional form of the drug. The U.S. Food and Drug Administration (FDA) approved the product in 2008. By incorporation of an azobenzene unit, a photoswitchable version of propofol (AP2) was developed in 2012 that allows for optical control of GABAA receptors with light. In 2013, a propofol binding site on mammalian GABAA receptors was identified by photolabeling using a diazirine derivative. Additionally, it was shown that the hyaluronan polymer present in the synovia can be protected from free-radical depolymerization by propofol. Ciprofol is another derivative of propofol that is 4–6 times more potent than propofol. It is undergoing Phase III trials. Ciprofol appears to have a lower incidence of injection site pain and respiratory depression than propofol. Propofol has also been studied for treatment-resistant depression. Veterinary uses In November 2024, the US Food and Drug Administration approved PropofolVet Multidose, the first generic propofol injectable emulsion for dogs. PropofolVet Multidose is approved for use as an injectable anesthetic in dogs. PropofolVet Multidose contains the same active ingredient (propofol injectable emulsion) as the approved brand-name drug product, PropoFlo 28, which was first approved on 4 February 2011. In addition, the FDA determined that PropofolVet Multidose contains no inactive ingredients that may significantly affect the bioavailability of the active ingredient. PropofolVet Multidose is sponsored by Parnell Technologies Pty. Ltd., based in New South Wales, Australia.
Biology and health sciences
Anesthetics
Health
496667
https://en.wikipedia.org/wiki/Auxin
Auxin
Auxins (plural of auxin) are a class of plant hormones (or plant-growth regulators) with some morphogen-like characteristics. Auxins play a cardinal role in coordination of many growth and behavioral processes in plant life cycles and are essential for plant body development. The Dutch biologist Frits Warmolt Went first described auxins and their role in plant growth in the 1920s. Kenneth V. Thimann became the first to isolate one of these phytohormones and to determine its chemical structure as indole-3-acetic acid (IAA). Went and Thimann co-authored a book on plant hormones, Phytohormones, in 1937. Overview Auxins were the first of the major plant hormones to be discovered. They derive their name from the Greek word auxein ('to grow/increase'). Auxin is present in all parts of a plant, although in very different concentrations. The concentration in each position is crucial developmental information, so it is subject to tight regulation through both metabolism and transport. The result is that auxin creates "patterns" of auxin concentration maxima and minima in the plant body, which in turn guide further development of respective cells, and ultimately of the plant as a whole. The (dynamic and environment-responsive) pattern of auxin distribution within the plant is a key factor for plant growth, its reaction to its environment, and specifically for development of plant organs (such as leaves or flowers). It is achieved through very complex and well-coordinated active transport of auxin molecules from cell to cell throughout the plant body, by so-called polar auxin transport. Thus, a plant can (as a whole) react to external conditions and adjust to them, without requiring a nervous system. Auxins typically act in concert with, or in opposition to, other plant hormones. For example, the ratio of auxin to cytokinin in certain plant tissues determines initiation of root versus shoot buds. On the molecular level, all auxins are compounds with an aromatic ring and a carboxylic acid group. The most important member of the auxin family is indole-3-acetic acid (IAA), which generates the majority of auxin effects in intact plants, and is the most potent native auxin. As the native auxin, its equilibrium is controlled in many ways in plants, from synthesis, through possible conjugation, to degradation of its molecules, always according to the requirements of the situation. Auxin can act in a heat-sensitive manner in many situations, which will in turn affect a plant's phenotypic development. Five naturally occurring (endogenous) auxins in plants are indole-3-acetic acid, 4-chloroindole-3-acetic acid, phenylacetic acid, indole-3-butyric acid, and indole-3-propionic acid. However, most of the knowledge described so far in auxin biology, and as described in the sections which follow, applies basically to IAA; the other endogenous auxins seem to have marginal importance for intact plants in natural environments. Alongside endogenous auxins, scientists and manufacturers have developed many synthetic compounds with auxinic activity. Synthetic auxins fall into four classes: dicamba; quinolinecarboxylic acids, which include quinclorac; derivatives of pyridinecarboxylic acids, which include picloram, triclopyr and clopyralid; and phenoxyacetic acid, phenoxypropionic acid, phenoxybutyric acid, and 1-naphthaleneacetic acid derivatives, including 2,4-D, 2,4-DP, 2,4-DB, 2,4,5-T, MCPA, MCPB and mecoprop. Some synthetic auxins, such as 2,4-D and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T), are sold as herbicides.
Broad-leaf plants (dicots), such as dandelions, are much more susceptible to auxins than narrow-leaf plants (monocots), such as grasses and cereal crops, making these synthetic auxins valuable as herbicides. Discovery Charles Darwin In 1881, Charles Darwin and his son Francis performed experiments on coleoptiles, the sheaths enclosing young leaves in germinating grass seedlings. They exposed the coleoptiles to light from a unidirectional source and observed that the coleoptiles bent towards the light. By covering various parts of the coleoptiles with a light-impermeable opaque cap, the Darwins discovered that light is detected by the coleoptile tip, but that the bending occurs in the hypocotyl. The seedlings showed no signs of development towards the light if the tip was covered with an opaque cap, or if the tip was removed. The Darwins concluded that the tip of the coleoptile was responsible for sensing light, and proposed that a messenger is transmitted in a downward direction from the tip of the coleoptile, causing it to bend. Peter Boysen Jensen In 1910, the Danish scientist Peter Boysen Jensen demonstrated that the phototropic stimulus in the oat coleoptile could propagate through an incision. These experiments were extended and published in greater detail in 1911 and 1913. He found that the tip could be cut off and put back on, and that a subsequent one-sided illumination was still able to produce a positive phototropic curvature in the basal part of the coleoptile. He demonstrated that the transmission could take place through a thin layer of gelatin separating the unilaterally illuminated tip from the shaded stump. By inserting a piece of mica, he could block transmission on either the illuminated or the non-illuminated side of the tip, which allowed him to show that the transmission took place in the shaded part of the tip. Thus, the longitudinal half of the coleoptile that exhibits the greater rate of elongation during phototropic curvature was the tissue that received the growth stimulus. In 1911, Boysen Jensen concluded from his experimental results that the transmission of the phototropic stimulus was not a physical effect (for example, due to a change in pressure) but "serait dû à une migration de substance ou d'ions" (was caused by the migration of a substance or of ions). These results were fundamental for further work on the auxin theory of tropisms. Frits Went In 1928, the Dutch botanist Frits Warmolt Went showed that a chemical messenger diffuses from coleoptile tips. Went's experiment identified how a growth-promoting chemical causes a coleoptile to grow towards the light. Went cut the tips off the coleoptiles and placed them in the dark, putting a few tips on agar blocks that he predicted would absorb the growth-promoting chemical. On control coleoptiles, he placed a block that lacked the chemical. On others, he placed blocks containing the chemical, either centered on top of the coleoptile to distribute the chemical evenly, or offset to increase the concentration on one side. When the growth-promoting chemical was distributed evenly, the coleoptile grew straight. If the chemical was distributed unevenly, the coleoptile curved away from the side with the block, as if growing towards the light, even though it had been grown in the dark. Went later proposed that the messenger substance is a growth-promoting hormone, which he named auxin, that becomes asymmetrically distributed in the bending region. 
Went concluded that auxin is at a higher concentration on the shaded side, promoting cell elongation, which results in coleoptiles bending towards the light. Hormonal activity Auxins help development at all levels in plants, from the cellular level, through organs, and ultimately to the whole plant. Molecular mechanisms When a plant cell comes into contact with auxin, dramatic changes in gene expression follow, with many genes up- or down-regulated. The precise mechanisms by which this occurs are still an area of active research, but there is now a general consensus on at least two auxin signalling pathways. Perception The best-characterized auxin receptors are the TIR1/AFB family of F-box proteins. F-box proteins target other proteins for degradation via the ubiquitin degradation pathway. When TIR1/AFB proteins bind to auxin, the auxin molecule acts as a 'molecular glue', a term coined by Ning Zheng, that allows these proteins to then bind to their targets (see below). The atomic structure of the perception mechanism of auxin by TIR1 was determined by X-ray crystallography. Another auxin-binding protein, ABP1, is now often regarded as an auxin receptor (at the apoplast), but it is generally considered to have a much more minor role than the TIR1/AFB signalling pathway, and much less is known about ABP1 signalling. Aux/IAA and ARF signalling modules Auxin response factors (ARFs) are a large group of transcription factors that act in auxin signalling. In the absence of auxin, ARFs bind to a class of repressors known as Aux/IAAs. Aux/IAAs suppress the ability of ARFs to enhance gene transcription. Additionally, the binding of Aux/IAAs to ARFs brings the Aux/IAAs into contact with the promoters of auxin-regulated genes, where they repress the expression of these genes by recruiting other factors to make modifications to the DNA structure. The binding of auxin to TIR1/AFBs allows them to bind to Aux/IAAs. When bound by TIR1/AFBs, Aux/IAAs are marked for degradation. The degradation of Aux/IAAs frees ARF proteins, which are then able to activate or repress the genes at whose promoters they are bound (a toy kinetic sketch of this module follows below). The large number of possible Aux/IAA and ARF binding pairs, and their different distributions between cell types and across developmental age, are thought to account for the astonishingly diverse responses that auxin produces. In June 2018, it was demonstrated that plant tissues can respond to auxin in a TIR1-dependent manner extremely quickly (probably too quickly to be explained by changes in gene expression). This has led some scientists to suggest that there is an as-yet-unidentified TIR1-dependent auxin-signalling pathway that differs from the well-known transcriptional response. On a cellular level On the cellular level, auxin is essential for cell growth, affecting both cell division and cellular expansion. Auxin concentration, together with other local factors, contributes to cell differentiation and specification of cell fate. Depending on the specific tissue, auxin may promote axial elongation (as in shoots), lateral expansion (as in root swelling), or isodiametric expansion (as in fruit growth). In some cases (coleoptile growth), auxin-promoted cellular expansion occurs in the absence of cell division. In other cases, auxin-promoted cell division and cell expansion may be closely sequenced within the same tissue (root initiation, fruit growth). 
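The derepression logic of the Aux/IAA–ARF module lends itself to a minimal kinetic caricature. The sketch below is illustrative only: all rate constants are invented placeholders, not measured parameters, and the inverse coupling between Aux/IAA level and free-ARF activity is a deliberate simplification of the real binding equilibria.

```python
# Toy model: auxin accelerates TIR1/AFB-mediated Aux/IAA degradation; falling
# Aux/IAA levels free ARFs, which drive an auxin-responsive target gene.
# All constants are made-up illustrative values.

def simulate(auxin, steps=2000, dt=0.05):
    aux_iaa = 1.0        # Aux/IAA repressor pool (arbitrary units)
    target = 0.0         # auxin-responsive transcript (arbitrary units)
    synthesis = 0.05     # constant Aux/IAA production
    basal_decay = 0.05   # slow turnover without auxin
    tir1_decay = 0.50    # extra TIR1/AFB-mediated degradation, scaled by auxin
    for _ in range(steps):
        aux_iaa += (synthesis - (basal_decay + tir1_decay * auxin) * aux_iaa) * dt
        arf_activity = 1.0 / (1.0 + 10.0 * aux_iaa)  # free ARFs rise as Aux/IAA falls
        target += (arf_activity - 0.1 * target) * dt
    return aux_iaa, target

for auxin in (0.0, 0.1, 1.0):
    repressor, transcript = simulate(auxin)
    print(f"auxin={auxin:.1f}: Aux/IAA={repressor:.2f}, target={transcript:.2f}")
```

Raising the auxin input depletes the repressor pool and lets target transcription climb, mirroring the repression-of-a-repressor behaviour described above.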
In a living plant, auxins and other plant hormones nearly always appear to interact to determine patterns of plant development. Organ patterns Growth and division of plant cells together result in the growth of tissue, and specific tissue growth contributes to the development of plant organs. Growth of cells contributes to the plant's size; unevenly localized growth produces the bending, turning and directionalization of organs. For example, stems turning toward light sources (phototropism), roots growing in response to gravity (gravitropism), and other tropisms arise because cells on one side grow faster than the cells on the other side of the organ. So, precise control of auxin distribution between different cells is of paramount importance to the resulting form of plant growth and organization. Auxin transport and the uneven distribution of auxin To cause growth in the required domains, auxins must be active preferentially in them. Local auxin maxima can be formed by active biosynthesis in certain cells of tissues, for example via tryptophan-dependent pathways, but auxins are not synthesized in all cells (even if cells retain the potential ability to do so, auxin synthesis is activated in them only under specific conditions). For that purpose, auxins must not only be translocated toward the sites where they are needed, but there must also be an established mechanism to detect those sites. Translocation is driven throughout the plant body, primarily from the peaks of shoots to the peaks of roots (from top to bottom). For long distances, relocation occurs via the stream of fluid in phloem vessels but, for short-distance transport, a unique system of coordinated polar transport directly from cell to cell is exploited. This short-distance, active transport exhibits some morphogenetic properties. This process, polar auxin transport, is directional, very strictly regulated, and based on the uneven distribution of auxin efflux carriers on the plasma membrane, which send auxins in the proper direction (a numerical caricature of this mechanism follows below). While PIN-FORMED (PIN) proteins are vital in transporting auxin in a polar manner, the family of AUXIN1/LIKE-AUX1 (AUX/LAX) genes encodes non-polar auxin influx carriers. The regulation of PIN protein localisation in a cell determines the direction of auxin transport out of the cell, and the coordinated effort of many cells creates peaks of auxin, or auxin maxima (regions having cells with higher auxin – a maximum). Proper and timely auxin maxima within developing roots and shoots are necessary to organise the development of the organ. PINs are regulated by multiple pathways, at both the transcriptional and the post-translational levels. PIN proteins can be phosphorylated by PINOID, which determines their apicobasal polarity and thereby the directionality of auxin fluxes. In addition, other AGC kinases, such as D6PK, phosphorylate and activate PIN transporters. AGC kinases, including PINOID and D6PK, target the plasma membrane via binding to phospholipids. Upstream of D6PK, 3'-phosphoinositide-dependent protein kinase 1 (PDK1) acts as a master regulator. PDK1 phosphorylates and activates D6PK at the basal side of the plasma membrane, executing the activity of PIN-mediated polar auxin transport and subsequent plant development. Surrounding the auxin maxima are cells with low auxin: troughs, or auxin minima. For example, in the Arabidopsis fruit, auxin minima have been shown to be important for its tissue development. 
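Polar transport can be caricatured in a few lines: give every cell in a file an efflux carrier on its basal face and an auxin maximum emerges at the far end. The rates below are arbitrary illustrative values, not measured PIN kinetics, and the one-dimensional file of cells is a stand-in for real tissue geometry.

```python
# Minimal 1-D sketch of polar auxin transport: each cell pumps a fixed fraction
# of its auxin toward its basal neighbour (higher index). Values are illustrative.

N = 20                 # cells from shoot (index 0) toward the "tip" (index N-1)
auxin = [1.0] * N      # uniform starting concentration (arbitrary units)
efflux = 0.2           # fraction pumped basally per step (polar PIN-like carrier)
decay = 0.01           # slow turnover everywhere
supply = 0.2           # biosynthesis in the topmost cell

for _ in range(500):
    moved = [efflux * auxin[i] if i + 1 < N else 0.0 for i in range(N)]
    for i in range(N - 1):
        auxin[i] -= moved[i]       # polar efflux: strictly toward higher index
        auxin[i + 1] += moved[i]
    auxin[0] += supply
    auxin = [a * (1.0 - decay) for a in auxin]

# The last cell accumulates an auxin maximum: it receives flux from the whole
# file but has no basal neighbour to pass it on to.
print(" ".join(f"{a:.2f}" for a in auxin))
```

Running this shows a gentle decline along the file and a sharp concentration peak in the terminal cell, the flavour of maximum that the text describes at root and shoot tips.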
Auxin has a significant effect on spatial and temporal gene expression during the growth of apical meristems. These interactions depend both on the concentration of auxin and on its spatial orientation during primordial positioning. Auxin relies on PIN1, which works as an auxin efflux carrier. The positioning of PIN1 on membranes determines the directional flow of the hormone from higher to lower concentrations. Initiation of primordia in apical meristems is correlated with heightened auxin levels. Genes required to specify the identity of cells arrange and express based on levels of auxin. STM (SHOOT MERISTEMLESS), which helps maintain undifferentiated cells, is down-regulated in the presence of auxin. This allows growing cells to differentiate into various plant tissues. The CUC (CUP-SHAPED COTYLEDON) genes set the boundaries for growing tissues and promote growth; they are upregulated via auxin influx. Experiments using GFP (green fluorescent protein) visualization in Arabidopsis have supported these claims. Organization of the plant As auxins contribute to organ shaping, they are also fundamentally required for proper development of the plant itself. Without hormonal regulation and organization, plants would be merely proliferating heaps of similar cells. Auxin employment begins in the embryo of the plant, where the directional distribution of auxin ushers in subsequent growth and development of primary growth poles, then forms buds of future organs. Next, it helps to coordinate proper development of the arising organs, such as roots, cotyledons, and leaves, and mediates long-distance signals between them, thus contributing to the overall architecture of the plant. Throughout the plant's life, auxin helps the plant maintain the polarity of growth and "recognize" where it has its branches (or any organ) connected. An important principle of plant organization based upon auxin distribution is apical dominance, which means that the auxin produced by the apical bud (or growing tip) diffuses (and is transported) downwards and inhibits the development of lateral buds further down, which would otherwise compete with the apical tip for light and nutrients. Removing the apical tip and its suppressive auxin allows the lower dormant lateral buds to develop, and the buds between the leaf stalk and stem produce new shoots which compete to become the lead growth. The process is actually quite complex, because auxin transported downwards from the lead shoot tip has to interact with several other plant hormones (such as strigolactones or cytokinins) at various positions along the growth axis in the plant body to achieve this phenomenon. This plant behavior is used in pruning by horticulturists. Finally, the sum of auxin arriving from stems to roots influences the degree of root growth. If shoot tips are removed, the plant does not react just by the outgrowth of lateral buds, which are supposed to replace the original lead. It also follows that a smaller amount of auxin arriving at the roots results in slower root growth, and nutrients are then invested to a greater degree in the upper part of the plant, which hence starts to grow faster. Effects Auxin participates in phototropism, geotropism, hydrotropism and other developmental changes. 
The uneven distribution of auxin, due to environmental cues such as unidirectional light or the force of gravity, results in uneven plant tissue growth; generally, auxin governs the form and shape of the plant body, the direction and strength of growth of all organs, and their mutual interaction. When cells grow larger, their volume increases as the intracellular solute concentration rises and water moves into the cells from the extracellular fluid. This auxin-stimulated intake of water produces turgor pressure on the cell walls, causing the plant to bend. Auxin stimulates cell elongation by stimulating wall-loosening factors, such as expansins, to loosen cell walls. The effect is stronger if gibberellins are also present. Auxin also stimulates cell division if cytokinins are present. When auxin and cytokinin are applied to callus, rooting is generated at higher auxin-to-cytokinin ratios, shoot growth is induced at lower auxin-to-cytokinin ratios, and a callus is formed at intermediate ratios, with the exact threshold ratios depending on the species and the original tissue (a toy illustration of this rule is sketched below). Auxin also induces sugar and mineral accumulation at the site of application. Wound response Auxin induces the formation and organization of phloem and xylem. When the plant is wounded, auxin may induce cell differentiation and regeneration of the vascular tissues. Root growth and development Auxins promote root initiation. Auxin induces both the growth of pre-existing roots and root branching (lateral root initiation), as well as adventitious root formation. As more native auxin is transported down the stem to the roots, the overall development of the roots is stimulated. If the source of auxin is removed, such as by trimming the tips of stems, the roots are accordingly less stimulated, and growth of the stem is supported instead. In horticulture, auxins, especially NAA and IBA, are commonly applied to stimulate root initiation when rooting cuttings of plants. However, high concentrations of auxin inhibit root elongation and instead enhance adventitious root formation. Removal of the root tip can lead to inhibition of secondary root formation. Apical dominance Auxin induces shoot apical dominance; the axillary buds are inhibited by auxin, as a high concentration of auxin directly stimulates ethylene synthesis in axillary buds, inhibiting their growth and potentiating apical dominance. When the apex of the plant is removed, the inhibitory effect is removed and the growth of lateral buds is enhanced. This removal is called decapitation, and is commonly performed in tea plantations and hedge-making. In phototropism, auxin is transported to the side of the plant facing away from the light, where it promotes cell elongation, thus causing the plant to bend towards the light. Fruit growth and development Auxin is required for fruit growth and development and delays fruit senescence. When seeds are removed from strawberries, fruit growth stops; exogenous auxin stimulates growth in fruits with the seeds removed. For fruit with unfertilized seeds, exogenous auxin results in parthenocarpy ("virgin-fruit" growth). Fruits form abnormal morphologies when auxin transport is disturbed. In Arabidopsis fruits, auxin controls the release of seeds from the fruit (pod). The valve margins are a specialised tissue in pods that regulates when the pod will open (dehiscence). Auxin must be removed from the valve-margin cells to allow the valve margins to form. This process requires modification of the auxin transporters (PIN proteins). 
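The auxin-to-cytokinin ratio rule for callus cultures is naturally expressed as a threshold classification. In the sketch below, the threshold values are invented placeholders; as the text notes, the real cut-offs depend on the species and the source tissue.

```python
# Hedged sketch of the classic auxin:cytokinin ratio rule for tissue culture.
# Threshold values are hypothetical, for illustration only.

def predicted_response(auxin, cytokinin, root_threshold=2.0, shoot_threshold=0.5):
    """Classify a cultured tissue's response from the auxin:cytokinin ratio."""
    if cytokinin == 0:
        return "root initiation (auxin dominates outright)"
    ratio = auxin / cytokinin
    if ratio >= root_threshold:
        return "root initiation"
    if ratio <= shoot_threshold:
        return "shoot bud initiation"
    return "undifferentiated callus growth"

for a, c in [(10, 1), (1, 10), (1, 1)]:
    print(f"auxin={a}, cytokinin={c} -> {predicted_response(a, c)}")
```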
The evolutionary transition from diploid to triploid endosperms (and the production of antipodal cells) may have occurred due to a shift in gametophyte development that produced a new interaction with an auxin-dependent mechanism originating in the earliest angiosperms. Flowering Auxin also plays a minor role in the initiation of flowering and the development of reproductive organs. In low concentrations, it can delay the senescence of flowers. A number of plant mutants have been described that affect flowering and have deficiencies in either auxin synthesis or transport. In maize, one example is bif2 (barren inflorescence2). Ethylene biosynthesis In low concentrations, auxin can inhibit ethylene formation and the transport of its precursor in plants; however, high concentrations can induce the synthesis of ethylene. Therefore, a high concentration can induce femaleness of flowers in some species. Auxin inhibits abscission prior to the formation of the abscission layer, and thus inhibits senescence of leaves. Synthetic auxins In the course of research on auxin biology, many compounds with noticeable auxin activity have been synthesized. Many of them have been found to have economic potential for human-controlled growth and development of plants in agronomy. Auxins are toxic to plants in large concentrations; they are most toxic to dicots and less so to monocots. Because of this property, synthetic auxin herbicides, including 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T), have been developed and used for weed control. However, some exogenously synthesized auxins, especially 1-naphthaleneacetic acid (NAA) and indole-3-butyric acid (IBA), are also commonly applied to stimulate root growth when taking cuttings of plants, or for other agricultural purposes such as the prevention of fruit drop in orchards. Used in high doses, auxin stimulates the production of ethylene, itself a native plant hormone. Excess ethylene can inhibit elongation growth, cause leaves to fall (abscission), and even kill the plant. Some synthetic auxins, such as 2,4-D and 2,4,5-T, are also marketed as herbicides. Dicots, such as dandelions, are much more susceptible to auxins than monocots, such as grasses and cereal crops, so these synthetic auxins are valuable as herbicides. 2,4-D was the first widely used herbicide, and it is still in use. It was first commercialized by the Sherwin-Williams company and saw use in the late 1940s; it is easy and inexpensive to manufacture. Triclopyr (3,5,6-TPA), while known as a herbicide, has also been shown to increase the size of fruit in plants. At increased concentrations the hormone can be lethal, but dosed down to the correct concentration it has been shown to alter photosynthetic pathways; this hindrance to the plant causes a response that increases carbohydrate production, leading to larger fruit. Herbicide manufacture Synthetic auxins are used as herbicides: overdosing plants with auxins interrupts their growth and leads to their death. The defoliant Agent Orange, used extensively by British forces in the Malayan Emergency and American forces in the Vietnam War, was a mix of 2,4-D and 2,4,5-T. The compound 2,4-D is still in use and is thought to be safe, but 2,4,5-T was more or less banned by the U.S. Environmental Protection Agency in 1979. The dioxin TCDD is an unavoidable contaminant produced in the manufacture of 2,4,5-T. 
As a result of this inherent dioxin contamination, the use of 2,4,5-T products has been implicated in leukemia, miscarriages, birth defects, liver damage, and other diseases.
Biology and health sciences
Plant hormone
Biology
496669
https://en.wikipedia.org/wiki/Cytokinin
Cytokinin
Cytokinins (CK) are a class of plant hormones that promote cell division, or cytokinesis, in plant roots and shoots. They are involved primarily in cell growth and differentiation, but also affect apical dominance, axillary bud growth, and leaf senescence. There are two types of cytokinins: adenine-type cytokinins, represented by kinetin, zeatin, and 6-benzylaminopurine, and phenylurea-type cytokinins, like diphenylurea and thidiazuron (TDZ). Most adenine-type cytokinins are synthesized in roots. Cambium and other actively dividing tissues also synthesize cytokinins. No phenylurea cytokinins have been found in plants. Cytokinins participate in local and long-distance signalling, with the same transport mechanism as purines and nucleosides. Typically, cytokinins are transported in the xylem. Cytokinins act in concert with auxin, another plant growth hormone. The two are complementary, having generally opposite effects. History The idea of specific substances required for cell division to occur in plants dates back to the Swiss physiologist J. Wiesner, who, in 1892, proposed that the initiation of cell division is evoked by endogenous factors, indeed by a proper balance among endogenous factors. Somewhat later, the Austrian plant physiologist G. Haberlandt reported, in 1913, that an unknown substance which can induce cell division in the parenchymatic tissue of potato tubers diffuses from the phloem tissue. In 1941, Johannes van Overbeek found that the milky endosperm of immature coconut also had this factor, which stimulated cell division and differentiation in very young Datura embryos. Jablonski and Skoog (1954) extended the work of Haberlandt and reported that a substance present in the vascular tissue was responsible for causing cell division in pith cells. Miller and his co-workers (1954) isolated and purified the cell-division substance in crystallised form from autoclaved herring sperm DNA. This active compound was named kinetin because of its ability to promote cell division, and it was the first cytokinin to be named. Kinetin was later identified as 6-furfurylaminopurine. Later on, the generic name kinin was suggested to include kinetin and other substances having similar properties. The first naturally occurring cytokinin was isolated and crystallised simultaneously by Miller and by D.S. Letham (1963–65) from the milky endosperm of corn (Zea mays), and named zeatin. Letham (1963) proposed the term cytokinins for such substances. Function Cytokinins are involved in many plant processes, including cell division and shoot and root morphogenesis. They are known to regulate axillary bud growth and apical dominance. According to the "direct inhibition hypothesis", these effects result from the ratio of cytokinin to auxin. This theory states that auxin from apical buds travels down shoots to inhibit axillary bud growth. This promotes shoot growth and restricts lateral branching. Cytokinin moves from the roots into the shoots, eventually signaling lateral bud growth. Simple experiments support this theory. When the apical bud is removed, the axillary buds are uninhibited, lateral growth increases, and plants become bushier. Applying auxin to the cut stem restores the inhibition of lateral growth. Moreover, it has been shown that cytokinin alone has no effect on parenchyma cells. When cultured with auxin but no cytokinin, they grow large but do not divide. When cytokinin and auxin are both added together, the cells expand and differentiate. 
When cytokinin and auxin are present in equal levels, the parenchyma cells form an undifferentiated callus. A higher ratio of cytokinin induces growth of shoot buds, while a higher ratio of auxin induces root formation. Cytokinins have been shown to slow the aging of plant organs by preventing protein breakdown, activating protein synthesis, and assembling nutrients from nearby tissues. A study that regulated leaf senescence in tobacco leaves found that wild-type leaves yellowed while transgenic leaves remained mostly green. It was hypothesized that cytokinin may affect enzymes that regulate protein synthesis and degradation. Cytokinins have recently been found to play a role in plant pathogenesis. For example, cytokinins have been described to induce resistance against Pseudomonas syringae in Arabidopsis thaliana and Nicotiana tabacum. Cytokinins also seem to have potential functions in the biological control of plant diseases: production of cytokinins by Pseudomonas fluorescens G20-18 has been identified as a key determinant in efficiently controlling the infection of A. thaliana with P. syringae. While cytokinin action in vascular plants is described as pleiotropic, this class of plant hormones specifically induces the transition from apical growth to growth via a three-faced apical cell in moss protonema. This bud induction can be pinpointed to the differentiation of a specific single cell, and thus is a very specific effect of cytokinin. Mode of action Cytokinin signaling in plants is mediated by a two-component phosphorelay. This pathway is initiated by cytokinin binding to a histidine kinase receptor in the endoplasmic reticulum membrane. This results in the autophosphorylation of the receptor, with the phosphate then being transferred to a phosphotransfer protein. The phosphotransfer proteins can then phosphorylate the type-B response regulators (RRs), which are a family of transcription factors. The phosphorylated, and thus activated, type-B RRs regulate the transcription of numerous genes, including those encoding the type-A RRs. The type-A RRs negatively regulate the pathway (a toy numerical sketch of this relay follows below). Biosynthesis Adenosine phosphate-isopentenyltransferase (IPT) catalyses the first reaction in the biosynthesis of isoprene cytokinins. It may use ATP, ADP, or AMP as substrates and may use dimethylallyl pyrophosphate (DMAPP) or hydroxymethylbutenyl pyrophosphate (HMBPP) as prenyl donors. This reaction is the rate-limiting step in cytokinin biosynthesis. The DMAPP and HMBPP used in cytokinin biosynthesis are produced by the methylerythritol phosphate (MEP) pathway. Cytokinins can also be produced from recycled tRNAs in plants and bacteria. tRNAs with anticodons that start with a uridine, and that carry an already-prenylated adenosine adjacent to the anticodon, release the adenosine as a cytokinin upon degradation. The prenylation of these adenines is carried out by tRNA-isopentenyltransferase. Auxin is known to regulate the biosynthesis of cytokinin. Uses Because cytokinins promote plant cell division and growth, they have been studied since the 1970s as potential agrochemicals; however, they have yet to be widely adopted, probably due to the complex nature of their effects. One study found that applying cytokinin to cotton seedlings led to a 5–10% increase in yield under drought conditions. Some cytokinins are utilized in the tissue culture of plants and can also be used to promote the germination of seeds.
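The two-component phosphorelay described under Mode of action can be sketched as a small dynamical system. The structure below (receptor, phosphotransfer to type-B RRs, type-A RRs as negative feedback) follows the text, but every rate constant is an invented placeholder, and the linear kinetics are a deliberate oversimplification.

```python
# Toy simulation of the cytokinin two-component phosphorelay with type-A
# response regulators acting as negative feedback. Parameters are illustrative.

def relay(cytokinin, steps=2000, dt=0.01):
    receptor_p = 0.0   # autophosphorylated histidine kinase receptor
    type_b = 0.0       # active (phosphorylated) type-B RRs
    type_a = 0.0       # type-A RRs, transcriptional targets of type-B
    for _ in range(steps):
        # Receptor autophosphorylation scales with cytokinin binding.
        receptor_p += (cytokinin - receptor_p) * dt
        # Phosphotransfer activates type-B RRs; type-A RRs dampen the relay.
        activation = receptor_p / (1.0 + 5.0 * type_a)
        type_b += (activation - type_b) * dt
        # Type-B RRs induce type-A RR expression (the negative feedback loop).
        type_a += (type_b - 0.5 * type_a) * dt
    return type_b, type_a

for ck in (0.0, 0.5, 2.0):
    b, a = relay(ck)
    print(f"cytokinin={ck}: type-B activity={b:.2f}, type-A level={a:.2f}")
```

The feedback keeps type-B output growing less than proportionally with the cytokinin input, the qualitative signature of a negatively autoregulated pathway.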
Biology and health sciences
Plant hormone
Biology
496670
https://en.wikipedia.org/wiki/Gibberellin
Gibberellin
Gibberellins (GAs) are plant hormones that regulate various developmental processes, including stem elongation, germination, dormancy, flowering, flower development, and leaf and fruit senescence. They are one of the longest-known classes of plant hormone. It is thought that the selective breeding (albeit unconscious) of crop strains that were deficient in GA synthesis was one of the key drivers of the "green revolution" in the 1960s, a revolution that is credited with saving over a billion lives worldwide. Chemistry All known gibberellins are diterpenoid acids synthesized by the terpenoid pathway in plastids and then modified in the endoplasmic reticulum and cytosol until they reach their biologically active form. All are derived via the ent-gibberellane skeleton, but are synthesised via ent-kaurene. The gibberellins are named GA1 through GAn in order of discovery. Gibberellic acid, the first gibberellin to be structurally characterized, is GA3. To date, 136 GAs have been identified from plants, fungi, and bacteria. Gibberellins are tetracyclic diterpene acids. There are two classes, with either 19 or 20 carbons. The 19-carbon gibberellins are generally the biologically active forms. They have lost carbon 20 and, in its place, possess a five-membered lactone bridge that links carbons 4 and 10. Hydroxylation also has a great effect on biological activity. In general, the most biologically active compounds are dihydroxylated gibberellins, with hydroxyl groups on both carbon 3 and carbon 13. Gibberellic acid is a 19-carbon dihydroxylated gibberellin. Bioactive GAs The bioactive gibberellins are GA1, GA3, GA4, and GA7. These GAs share three common structural traits: 1) a hydroxyl group on C-3β, 2) a carboxyl group on carbon 6, and 3) a lactone between carbons 4 and 10. The 3β-hydroxyl group can be exchanged for other functional groups at the C-2 and/or C-3 positions. GA5 and GA6 are examples of bioactive GAs without a hydroxyl group on C-3β. The presence of GA1 in various plant species suggests that it is a common bioactive GA. Biological function Gibberellins are involved in the natural process of breaking dormancy and other aspects of germination. Before the photosynthetic apparatus develops sufficiently in the early stages of germination, the seed's reserves of starch nourish the seedling. Usually in germination, the breakdown of starch to glucose in the endosperm begins shortly after the seed is exposed to water. Gibberellins in the seed embryo are believed to signal starch hydrolysis by inducing the synthesis of the enzyme α-amylase in the aleurone cells. In the model for gibberellin-induced production of α-amylase, gibberellins from the scutellum diffuse to the aleurone cells, where they stimulate the secretion of α-amylase. α-Amylase then hydrolyses starch (abundant in many seeds) into glucose that can be used to produce energy for the seed embryo. Studies of this process have indicated that gibberellins cause higher levels of transcription of the gene coding for the α-amylase enzyme, thereby stimulating the synthesis of α-amylase. Exposure to cold temperatures increases the production of gibberellins. They stimulate cell elongation, the breaking of dormancy, budding, and the development of seedless fruits. Gibberellins also cause seed germination by breaking the seed's dormancy and acting as a chemical messenger. The hormone binds to a receptor, calcium activates the protein calmodulin, and the complex binds to DNA, producing an enzyme to stimulate growth in the embryo. 
Metabolism Biosynthesis Gibberellins are usually synthesized via the methylerythritol phosphate (MEP) pathway in higher plants. In this pathway, bioactive GA is produced from trans-geranylgeranyl diphosphate (GGDP) with the participation of three classes of enzymes: terpene synthases (TPSs), cytochrome P450 monooxygenases (P450s), and 2-oxoglutarate-dependent dioxygenases (2ODDs). The MEP pathway follows eight steps: (1) GGDP is converted to ent-copalyl diphosphate (ent-CDP) by ent-copalyl diphosphate synthase (CPS); (2) ent-CDP is converted to ent-kaurene by ent-kaurene synthase (KS); (3) ent-kaurene is converted to ent-kaurenol by ent-kaurene oxidase (KO); (4) ent-kaurenol is converted to ent-kaurenal by KO; (5) ent-kaurenal is converted to ent-kaurenoic acid by KO; (6) ent-kaurenoic acid is converted to ent-7a-hydroxykaurenoic acid by ent-kaurenoic acid oxidase (KAO); (7) ent-7a-hydroxykaurenoic acid is converted to GA12-aldehyde by KAO; and (8) GA12-aldehyde is converted to GA12 by KAO. GA12 is then processed to the bioactive GA4 by oxidations on C-20 and C-3, accomplished by two soluble 2ODDs: GA 20-oxidase and GA 3-oxidase (this conversion chain is written out as data in the sketch below). One or two genes encode the enzymes responsible for the first steps of GA biosynthesis in Arabidopsis and rice. Null alleles of the genes encoding CPS, KS, and KO result in GA-deficient Arabidopsis dwarfs. Multigene families encode the 2ODDs that catalyze the conversion of GA12 to bioactive GA4. AtGA3ox1 and AtGA3ox2, two of the four genes that encode GA3ox in Arabidopsis, affect vegetative development. Environmental stimuli regulate AtGA3ox1 and AtGA3ox2 activity during seed germination. In Arabidopsis, GA20ox overexpression leads to an increase in GA concentration. Sites of biosynthesis Most bioactive gibberellins are located in the actively growing organs of plants. Both the GA20ox and GA3ox genes (coding for GA 20-oxidase and GA 3-oxidase) and the SLENDER1 gene (a GA signal-transduction gene) are found in growing organs in rice, which suggests that bioactive GA synthesis occurs at its site of action in the growing organs of plants. During flower development, the tapetum of anthers is believed to be a primary site of GA biosynthesis. Differences between biosynthesis in fungi and lower plants The plant Arabidopsis and the fungus Gibberella fujikuroi possess different GA pathways and enzymes. P450s in fungi perform functions analogous to those of KAOs in plants. The function of CPS and KS in plants is performed by a single enzyme, CPS/KS, in fungi. In plants, the gibberellin biosynthesis genes are found randomly on multiple chromosomes, but in fungi they are found on one chromosome. Plants produce only low amounts of gibberellic acid, so for industrial purposes it is produced by microorganisms. Industrially, GA3 can be produced by submerged fermentation, but this process has low yields and high production costs, and hence a higher sale price; an alternative process that reduces production costs is solid-state fermentation (SSF), which allows the use of agro-industrial residues. Catabolism Several mechanisms for inactivating gibberellins have been identified. 2β-hydroxylation deactivates them, and is catalyzed by GA 2-oxidases (GA2oxs). Some GA2oxs use 19-carbon gibberellins as substrates, while others use 20-carbon GAs. Cytochrome P450 mono-oxygenase, encoded by elongated uppermost internode (eui), converts gibberellins into 16α,17-epoxides. 
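Because the eight-step conversion chain above is strictly linear, it can be written out as data, which makes the order of intermediates and the enzyme acting at each step explicit. The sketch below encodes only what the text states; the helper function is hypothetical scaffolding for printing and checking the chain.

```python
# The eight-step GGDP -> GA12 conversion chain from the text, as explicit data.

GA_BIOSYNTHESIS = [
    ("GGDP",                         "ent-CDP",                      "CPS"),
    ("ent-CDP",                      "ent-kaurene",                  "KS"),
    ("ent-kaurene",                  "ent-kaurenol",                 "KO"),
    ("ent-kaurenol",                 "ent-kaurenal",                 "KO"),
    ("ent-kaurenal",                 "ent-kaurenoic acid",           "KO"),
    ("ent-kaurenoic acid",           "ent-7a-hydroxykaurenoic acid", "KAO"),
    ("ent-7a-hydroxykaurenoic acid", "GA12-aldehyde",                "KAO"),
    ("GA12-aldehyde",                "GA12",                         "KAO"),
]

def trace(pathway, start="GGDP"):
    """Print each conversion in order, checking that the chain is unbroken."""
    current = start
    for substrate, product, enzyme in pathway:
        assert substrate == current, f"pathway broken at {substrate}"
        print(f"{substrate} --[{enzyme}]--> {product}")
        current = product
    print(f"{current} is then oxidised to bioactive GA4 by GA 20-oxidase and GA 3-oxidase")

trace(GA_BIOSYNTHESIS)
```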
Rice eui mutants accumulate bioactive gibberellins at high levels, which suggests that cytochrome P450 mono-oxygenase is a main enzyme responsible for deactivating GA in rice. The Gamt1 and gamt2 genes encode enzymes that methylate the C-6 carboxyl group of GAs. In gamt1 and gamt2 mutants, the concentration of GA in developing seeds is increased. Homeostasis Feedback and feedforward regulation maintain the levels of bioactive gibberellins in plants. Expression of AtGA20ox1 and AtGA3ox1 is increased in a GA-deficient environment and decreased after the addition of bioactive GAs; conversely, expression of the GA-deactivation genes AtGA2ox1 and AtGA2ox2 is increased by the addition of gibberellins (a toy model of this feedback loop is sketched below). Regulation Regulation by other hormones The auxin indole-3-acetic acid (IAA) regulates the concentration of GA1 in elongating internodes in peas. Removal of IAA by removal of the apical bud, the auxin source, reduces the concentration of GA1, and reintroduction of IAA reverses these effects to increase the concentration of GA1. This has also been observed in tobacco plants. Auxin increases GA 3-oxidation and decreases GA 2-oxidation in barley. Auxin also regulates GA biosynthesis during fruit development in peas. These discoveries in different plant species suggest that the auxin regulation of GA metabolism may be a universal mechanism. Ethylene decreases the concentration of bioactive GAs. Regulation by environmental factors Recent evidence suggests that fluctuations in GA concentration influence light-regulated seed germination, photomorphogenesis during de-etiolation, and photoperiod regulation of stem elongation and flowering. Microarray analysis showed that about one-fourth of cold-responsive genes are related to GA-regulated genes, which suggests that GA influences the response to cold temperatures. Plants reduce their growth rate when exposed to stress. A relationship between GA levels and the amount of stress experienced has been suggested in barley. Role in seed development Bioactive GA and abscisic acid (ABA) levels have an inverse relationship and regulate seed development and germination. Levels of FUS3, an Arabidopsis transcription factor, are upregulated by ABA and downregulated by GA, which suggests that a regulation loop establishes the balance of gibberellin and abscisic acid. In practice, this means that farmers can alter this balance to delay maturation, to synchronize ripening, or to keep fruit on the tree until harvest day (ABA participates in fruit maturation, and many crops otherwise ripen and drop a few fruits a day for several weeks, which is undesirable for markets). Signalling mechanism Receptor In the early 1990s, several lines of evidence suggested the existence of a GA receptor in oat seeds located at the plasma membrane. However, despite intensive research, no membrane-bound GA receptor has been isolated to date. This, along with the discovery of a soluble receptor, GA-insensitive dwarf 1 (GID1), has led many to doubt that a membrane-bound receptor exists. GID1 was first identified in rice, and in Arabidopsis there are three orthologs of GID1: AtGID1a, b, and c. GID1s have a high affinity for bioactive GAs. GA binds to a specific binding pocket on GID1; the C3-hydroxyl on GA makes contact with tyrosine-31 in the GID1 binding pocket. GA binding to GID1 causes changes in GID1 structure, causing a 'lid' on GID1 to cover the GA binding pocket. 
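The homeostatic loop described above (biosynthesis genes repressed by GA, deactivation genes induced by it) is a textbook negative feedback, and a few lines of code show why such a loop returns the hormone pool to a set point after a perturbation. The functional forms and constants below are invented for illustration, not fitted to any measurements.

```python
# Sketch of GA homeostasis: GA20ox/GA3ox-like synthesis falls as GA rises,
# while GA2ox-like deactivation rises with GA. Constants are placeholders.

def ga_homeostasis(steps=5000, dt=0.01, perturbation=0.0):
    ga = 1.0 + perturbation           # bioactive GA pool (arbitrary units)
    for _ in range(steps):
        synthesis = 1.0 / (1.0 + ga)  # biosynthesis gene expression, repressed by GA
        deactivation = 0.5 * ga       # deactivation gene expression, induced by GA
        ga += (synthesis - deactivation) * dt
    return ga

# The pool relaxes to the same set point whether GA is added or depleted:
print(f"{ga_homeostasis(perturbation=+2.0):.3f}")   # starts high, returns to ~1.0
print(f"{ga_homeostasis(perturbation=-0.9):.3f}")   # starts low, returns to ~1.0
```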
The movement of this lid results in the exposure of a surface which enables the binding of GID1 to DELLA proteins. DELLA proteins: repression of a repressor DELLA proteins (such as SLR1 in rice or GAI and RGA in Arabidopsis) are repressors of plant development, characterized by the presence of a DELLA motif (aspartate-glutamate-leucine-leucine-alanine, or D-E-L-L-A in the single-letter amino acid code). DELLAs inhibit seed germination, seed growth, and flowering, and GA reverses these effects. When gibberellins bind to the GID1 receptor, the interaction between GID1 and DELLA proteins is enhanced, forming a GA-GID1-DELLA complex. In this complex, the structure of the DELLA proteins is thought to change, enabling their binding to F-box proteins for their degradation. F-box proteins (SLY1 in Arabidopsis or GID2 in rice) catalyse the addition of ubiquitin to their targets. Adding ubiquitin to DELLA proteins promotes their degradation via the 26S proteasome. This releases cells from the repressive effects of DELLAs. Targets of DELLA proteins Transcription factors The first targets of DELLA proteins to be identified were the phytochrome-interacting factors (PIFs). PIFs are transcription factors that negatively regulate light signalling and are strong promoters of elongation growth. In the presence of GA, DELLAs are degraded, which then allows PIFs to promote elongation. It was later found that DELLAs repress a large number of other transcription factors, among which are positive regulators of auxin, brassinosteroid and ethylene signalling. DELLAs can repress transcription factors either by stopping their binding to DNA or by promoting their degradation. Prefoldins and microtubule assembly In addition to repressing transcription factors, DELLAs also bind to prefoldins (PFDs). PFDs are molecular chaperones (they assist in the folding of other proteins) that work in the cytosol, but when DELLAs bind to them, they are restricted to the nucleus. An important function of PFDs is to assist in the folding of β-tubulin, a vital component of the cytoskeleton in the form of microtubules. As such, in the absence of gibberellins (when DELLA levels are high), PFD activity is reduced, leading to a lower cellular pool of β-tubulin. When GA is present, the DELLAs are degraded, and PFDs can move to the cytosol and assist in the folding of β-tubulin. GA thereby allows for the reorganisation of the cytoskeleton and the elongation of cells. Microtubules are also required for the trafficking of membrane vesicles, which is needed for the correct positioning of several hormone transporters. Among the best-characterized hormone transporters are the PIN proteins, which are responsible for the movement of the hormone auxin between cells. In the absence of gibberellins, DELLA proteins reduce the levels of microtubules and thereby inhibit membrane vesicle trafficking. This reduces the level of PIN proteins at the cell membrane, and the level of auxin in the cell. GA reverses this process, allowing PIN protein trafficking to the cell membrane and enhancing the level of auxin in the cell.
Biology and health sciences
Plant hormone
Biology
496730
https://en.wikipedia.org/wiki/Alpine%20climate
Alpine climate
Alpine climate is the typical climate for elevations above the tree line, where trees fail to grow due to cold. This climate is also referred to as a mountain climate or highland climate. Definition There are multiple definitions of alpine climate. In the Köppen climate classification, the alpine and mountain climates are part of group E, along with the polar climate, where no month has a mean temperature higher than 10 °C (50 °F). According to the Holdridge life zone system, there are two mountain climates which prevent tree growth: (a) the alpine climate, which occurs when the mean biotemperature of a location is between 1.5 °C and 3 °C; the alpine climate in the Holdridge system is roughly equivalent to the warmest tundra climates (ET) in the Köppen system; and (b) the alvar climate, the coldest mountain climate, in which the biotemperature is between 0 °C and 1.5 °C (biotemperature can never be below 0 °C); it corresponds more or less to the coldest tundra climates and to the ice cap climates (EF) as well. Holdridge reasoned that plants' net primary productivity ceases, with plants becoming dormant, at temperatures below 0 °C and above 30 °C. He therefore defined biotemperature as the mean of all temperatures, but with all temperatures below freezing and above 30 °C adjusted to 0 °C; that is, temperatures outside this range are counted as 0 °C, and the sum is divided by the number of all temperatures, including both adjusted and non-adjusted ones (a worked example follows below). The variability of the alpine climate throughout the year depends on the latitude of the location. For tropical oceanic locations, such as the summit of Mauna Loa, the temperature is roughly constant throughout the year. For mid-latitude locations, such as Mount Washington in New Hampshire, the temperature varies seasonally, but never gets very warm. Cause The temperature profile of the atmosphere is a result of an interaction between radiation and convection. Sunlight in the visible spectrum hits the ground and heats it. The ground then heats the air at the surface. If radiation were the only way to transfer heat from the ground to space, the greenhouse effect of gases in the atmosphere would keep the ground at roughly 333 K (60 °C; 140 °F), and the temperature would decay exponentially with height. However, when air is hot, it tends to expand, which lowers its density. Thus, hot air tends to rise and transfer heat upward. This is the process of convection. Convection comes to equilibrium when a parcel of air at a given altitude has the same density as its surroundings. Air is a poor conductor of heat, so a parcel of air will rise and fall without exchanging heat. This is known as an adiabatic process, which has a characteristic pressure-temperature curve. As the pressure gets lower, the temperature decreases. The rate of decrease of temperature with elevation is known as the adiabatic lapse rate, which is approximately 9.8 °C per kilometre (or 5.4 °F per 1,000 feet) of altitude. The presence of water in the atmosphere complicates the process of convection. Water vapor contains latent heat of vaporization. As air rises and cools, it eventually becomes saturated and cannot hold its quantity of water vapor. The water vapor condenses (forming clouds) and releases heat, which changes the lapse rate from the dry adiabatic lapse rate to the moist adiabatic lapse rate (5.5 °C per kilometre or 3 °F per 1,000 feet). The actual lapse rate, called the environmental lapse rate, is not constant (it can fluctuate throughout the day or seasonally, and also regionally), but a normal lapse rate is 6.5 °C per 1,000 m (3.57 °F per 1,000 ft). 
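The biotemperature rule just stated is easy to apply directly: clamp every reading outside the 0–30 °C range to 0 °C, then divide by the total number of readings. The monthly means below are invented for a hypothetical high-mountain site, purely to show the arithmetic.

```python
# Holdridge biotemperature, exactly as defined in the text: temperatures below
# 0 degC or above 30 degC count as 0 degC, and the sum is divided by the total
# number of readings (adjusted and unadjusted alike).

def biotemperature(temps_c):
    adjusted = [t if 0.0 <= t <= 30.0 else 0.0 for t in temps_c]
    return sum(adjusted) / len(temps_c)

# Hypothetical monthly mean temperatures (degC) for a high mountain site:
monthly = [-10, -8, -4, 0, 3, 6, 8, 7, 4, 0, -5, -9]
print(f"biotemperature = {biotemperature(monthly):.2f} degC")
# ~2.33 degC here, which would fall in the 1.5-3 degC alpine band quoted above.
```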
Therefore, moving up 100 metres on a mountain is roughly equivalent to moving 80 kilometres (50 miles, or 0.75° of latitude) towards the pole. This relationship is only approximate, however, since local factors, such as proximity to oceans, can drastically modify the climate. As the altitude increases, the main form of precipitation becomes snow and the winds increase. The temperature continues to drop until the tropopause, at about 11,000 m (36,000 ft), where it does not decrease further; this is higher than the highest summit. Distribution Although this climate classification only covers a small portion of the Earth's surface, alpine climates are widely distributed. They are present in the Himalayas, the Tibetan Plateau, Gansu, Qinghai and Mount Lebanon in Asia; the Alps, the Urals, the Pyrenees, the Cantabrian Mountains and the Sierra Nevada in Europe; the Andes in South America; the Sierra Nevada, the Cascade Range, the Rocky Mountains, the northern Appalachian Mountains (Adirondacks and White Mountains), and the Trans-Mexican Volcanic Belt in North America; the Southern Alps in New Zealand; the Snowy Mountains in Australia; high elevations in the Atlas Mountains, Ethiopian Highlands, and Eastern Highlands of Africa; the central parts of Borneo and New Guinea; and the summits of Mount Pico in the Atlantic and Mauna Loa in the Pacific. The lowest altitude of alpine climate varies dramatically by latitude: if alpine climate is defined by the tree line, it occurs at very low elevations at 68°N in Sweden, while on Mount Kilimanjaro in Tanzania, near the equator, the tree line lies far higher.
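The lapse rate and the poleward equivalence quoted above support quick back-of-envelope calculations. The figures in the sketch below are the article's own; the sample mountain profile is invented for illustration.

```python
# Back-of-envelope use of the figures quoted in the text.

ENV_LAPSE_C_PER_KM = 6.5      # normal environmental lapse rate
KM_POLEWARD_PER_100M = 80.0   # rough climatic equivalence of a 100 m climb

def temp_at_altitude(t_base_c, base_m, target_m, lapse=ENV_LAPSE_C_PER_KM):
    """Extrapolate mean temperature from a base station up a mountainside."""
    return t_base_c - lapse * (target_m - base_m) / 1000.0

def poleward_equivalent_km(climb_m):
    """How far poleward a climb is 'worth', to first order."""
    return climb_m / 100.0 * KM_POLEWARD_PER_100M

print(temp_at_altitude(15.0, 500, 3500))   # 15 degC valley -> -4.5 degC at 3,500 m
print(poleward_equivalent_km(3000))        # a 3 km climb ~ 2,400 km poleward
```

As the text warns, this is only a first-order picture: proximity to oceans and other local factors can shift the result drastically.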
Physical sciences
Climates
Earth science
497007
https://en.wikipedia.org/wiki/Rift%20valley
Rift valley
A rift valley is a linear lowland between several highlands or mountain ranges produced by the action of a geologic rift. Rifts are formed as a result of the pulling apart of the lithosphere due to extensional tectonics. The linear depression may subsequently be further deepened by the forces of erosion. More generally, the valley is likely to be filled with sedimentary deposits derived from the rift flanks and the surrounding areas. In many cases rift lakes are formed. One of the best-known examples of this process is the East African Rift. On Earth, rifts can occur at all elevations, from the sea floor to plateaus and mountain ranges in continental crust or in oceanic crust. They are often associated with a number of adjoining subsidiary or co-extensive valleys, which are typically considered geologically to be part of the principal rift valley. Earth's rift valleys The most extensive rift valley is located along the crest of the mid-ocean ridge system and is the result of sea-floor spreading. Examples of this type of rift include the Mid-Atlantic Ridge and the East Pacific Rise. Many existing continental rift valleys are the result of a failed arm (aulacogen) of a triple junction; three of them, the East African Rift, the Rio Grande rift and the Baikal Rift Zone, are currently active, and a fourth, the West Antarctic Rift System, may be. In these instances, not only the crust but entire tectonic plates are in the process of breaking apart, forming new plates. If they continue, continental rifts will eventually become oceanic rifts. Other rift valleys are the result of bends or discontinuities in horizontally moving (strike-slip) faults. When these bends or discontinuities are in the same direction as the relative motions along the fault, extension occurs. For example, for a right-lateral-moving fault, a bend to the right will result in stretching and consequent subsidence in the area of the irregularity. In the view of many geologists today, the Dead Sea lies in a rift which results from a leftward discontinuity in the left-lateral-moving Dead Sea Transform fault. Where a fault breaks into two strands, or two faults run close to each other, crustal extension may also occur between them, as a result of differences in their motions. Both types of fault-caused extension commonly occur on a small scale, producing such features as sag ponds or landslides. Rift valley lakes Many of the world's largest lakes are located in rift valleys. Lake Baikal in Siberia, a World Heritage Site, lies in an active rift valley. Baikal is both the deepest lake in the world and, with 20% of all of the liquid fresh water on Earth, the one with the greatest volume. Lake Tanganyika, second by both measures, is in the Albertine Rift, the westernmost arm of the active East African Rift. Lake Superior in North America, the largest freshwater lake by area, lies in the ancient and dormant Midcontinent Rift. The largest subglacial lake, Lake Vostok, may also lie in an ancient rift valley. Lake Nipissing and Lake Timiskaming in Ontario and Quebec, Canada, lie inside a rift valley called the Ottawa-Bonnechere Graben. Þingvallavatn, Iceland's largest natural lake, is also an example of a rift lake. Extraterrestrial rift valleys Rift valleys are also known to occur on other terrestrial planets and natural satellites. The 4,000 km long Valles Marineris on Mars is believed by planetary geologists to be a large rift system. 
Some features of Venus, most notably the 4,000 km Devana Chasma and part of western Eistla Regio, and possibly also Alta and Bell Regiones, have been interpreted by some planetary geologists as rift valleys. Some natural satellites also have prominent rift valleys. The 2,000 km long Ithaca Chasma on Tethys in the Saturn system is a prominent example. Charon's Nostromo Chasma is the first confirmed in the Pluto system; however, large chasms up to 950 km wide observed on Charon have also been tentatively interpreted by some as giant rifts, and similar formations have also been noted on Pluto. A recent study suggests a complex system of ancient lunar rift valleys, including Vallis Rheita and Vallis Alpes. The Uranus system also has prominent examples, with large 'chasma' believed to be giant rift valley systems, most notably the 1,492 km long Messina Chasma on Titania, the 622 km Kachina Chasmata on Ariel, Verona Rupes on Miranda, and Mommur Chasma on Oberon.
Physical sciences
Landforms: General
Earth science
497225
https://en.wikipedia.org/wiki/Lifebuoy
Lifebuoy
A lifebuoy or life ring, among many other names (see § Other names), is a life-saving buoy designed to be thrown to a person in water to provide buoyancy and prevent drowning. Some modern lifebuoys are fitted with one or more seawater-activated lights to aid rescue at night. Other names Other names for "lifebuoy" include life preserver, life ring, lifering, lifebelt, lifesaver, ring buoy, donut, safety wheel, Perry buoy, and Kisbee ring. Description The lifebuoy is usually a ring- or horseshoe-shaped personal flotation device with a connecting line allowing the casualty to be pulled to the rescuer in a boat. Lifebuoys are carried by ships and boats and located beside bodies of water and swimming pools. To deter vandalism, tampering with them is punishable by fines (up to £5,000 in the United Kingdom) or imprisonment. In the United States, Coast Guard-approved lifebuoys are considered Type IV personal flotation devices; at least one Type IV PFD is required on all vessels 26 feet or more in length. In the UK, the Royal Life Saving Society considers lifebuoys unsuitable for use in swimming pools, because throwing one into a busy pool could injure the casualty or other pool users. In these locations, lifebuoys have been superseded by devices such as the torpedo buoy, a low-drag device developed to be towed by lifeguards to those in danger. History Leonardo da Vinci sketched a concept for a safety wheel, as well as for buoyant shoes and balancing sticks for walking on water. According to various sources, the Knights of Malta were the first to use cork lifebuoys on their ships. In the book Architectura naval antigua y moderna (1752) by Juan José Navarro, 1st Marquess of Victoria, two plates show "circular lifebuoys" and another plate includes a drawing of "a lifebuoy made of cork", called "salvenos". This is the type used systematically by the Knights of Malta on their ships. The lifebuoy was attached to a rope on one side and to the poop of the ship on the other, so that it could be deployed in case anyone fell into the sea. Navarro was Captain General of the Navy and is credited with the systematic introduction of the lifebuoy on all ships of the Spanish navy. In 1803, a device called the "Marine Spencer", after the name of its inventor, Knight Spencer of Bread Street, was described in the Philosophical Magazine. It was made of "800 old tavern corks" affixed to a band, "covered in canvass, and painted in oil, so as to render it waterproof." The invention earned Spencer the honorary silver medal of the Royal Humane Society.
Technology
Basics_7
null
497367
https://en.wikipedia.org/wiki/Prague%20Metro
Prague Metro
The Prague Metro is the rapid transit network of Prague, Czech Republic. Founded in 1974, the system consists of three lines (A, B and C) serving 61 stations (predominantly with island platforms), and is nearly 66 km long. The system served 568 million passengers in 2021 (about 1.55 million daily). Two types of rolling stock are used on the Metro: the 81-71M (a completely modernized variant of the original 81-717/714.1), and the Metro M1. All the lines are controlled automatically from the central dispatching centre near I.P. Pavlova station. The Metro is operated by the Prague Public Transit Company (DPP), and is integrated in the Prague Integrated Transport (Pražská integrovaná doprava, PID) system. Basic information The Prague Metro has three lines, with a fourth, Line D, under construction; each is represented by its own colour on maps and signs: Line A (green, 17 stations), Line B (yellow, 24 stations) and Line C (red, 20 stations). There are 58 distinct stations in total (three of which are transfer stations served by two lines), connected by nearly 66 kilometres of mostly underground railways. Service operates from 4–5 am until midnight, with two- to three-minute intervals between trains during rush hours and four- to ten-minute intervals at other times. Nearly 600 million passengers use the Prague Metro every year (about 1.6 million daily). The system is run by the Prague Public Transit Company Co. Inc. (Czech: Dopravní podnik hlavního města Prahy, DPP), which also manages the other means of public transport around the city, including the trams, buses, five ferries, the funicular to Petřín Hill, and the chairlift inside Prague Zoo. Since 1993, the system has been connected to commuter trains and buses, and also to "park-and-ride" parking lots. Together, they form an extensive public transportation network reaching far beyond the city, called Prague Integrated Transport (Czech: Pražská integrovaná doprava, PID). While the larger system is zonally priced, the Metro lies entirely inside the central zone. Many stations are quite large, with several entrances spaced relatively far apart. This can often lead to confusion for those unfamiliar with the system, especially at central hubs such as Můstek or Muzeum. In general, the stations are well signposted, even for those unfamiliar with the Czech language. System layout and stations The Prague Metro system is radial, with each line running through the city centre from termini in the outskirts; however, the lines do not meet at a single central station. Rather, the three lines form a triangle in the centre of the city, with three interchange stations at the vertices of the triangle: Florenc, Můstek, and Muzeum. Each interchange station has two halls, one for each line. The depth of the stations (and the connecting lines) varies considerably; the deepest station is Náměstí Míru. Parts of the tracks in the city centre were mostly bored using a tunnelling shield. Outer parts were dug by the cut-and-cover method, and these stations are only a few metres under the surface. Part of Line B runs in a glassed-in tunnel above the ground. Most stations have a single island platform in the centre of the station hall (tunnel) serving both directions. The sub-surface stations have a straight ceiling, sometimes supported by columns, while the deep-level stations are larger tunnels with the track tunnels on each side. The walls of many stations are decorated with coloured aluminium panels; each station has its own colour. Some stations are considered among the finest in Europe. 
Rolling stock Metro M1 Metro M1 trains have operated on Line C since 2000; they completely replaced older cars on this line in 2003. DPP owns 265 of these cars, which form 53 five-car trains. These cars were developed specially for Prague, and were manufactured there between 2000 and 2003 by a consortium consisting of ČKD Praha, ADtranz and Siemens (during the contract, Siemens acquired ČKD Praha). Each five-car train has a total capacity of 1,464 people (224 seated, 1,240 standing). This unit was also adapted for use in Venezuela on the Maracaibo Metro. 81-71M 81-71M trains are a modernized variant of the old Soviet 81-717 trains with new traction motors, technical equipment, interiors, and exteriors. They have operated on Lines A and B since 1996. The modernization was conducted by Škoda Transportation and ČKD between 1996 and 2011. DPP owns 465 81-71M cars, which form 93 five-car trains. The acceleration of these trains is identical to that of the Metro M1 cars. Similar reconstructions were also made on the Tbilisi Metro and Yerevan Metro, and a near-identical version was exported to Kyiv from Metrovagonmash as part of the Slavutich project, designated 81-553.1, 81-554.1 and 81-555.1. Previously in service 81-71, old Soviet trains manufactured by Metrovagonmash, were gradually phased out and replaced by the modernized versions; their service ended on 2 July 2009. One vehicle is stored in the Museum of Prague public transport, while one fully operational five-car train is kept in the Zličín depot (Line B) for special occasions. Ečs, Soviet trains manufactured by Metrovagonmash, ran on Line C from 1974 to 1997. One vehicle is also stored in the Museum of Prague public transport, while one fully operational three-car train is stored in the Zličín depot. History Although the Prague Metro system is relatively new, the idea of underground transport in Prague dates back many years. The first proposal to build a sub-surface railway was made by Ladislav Rott in 1898. He encouraged the city council to take advantage of the fact that parts of the central city were already being dug up for sewer work, urging that tunnels for the railway be dug at the same time. However, the plan was rejected by the city authorities. Another proposal in 1926, by Bohumil Belada and Vladimír List, was the first to use the term "Metro", and though it was not accepted either, it spurred serious efforts to address Prague's rapidly growing transport needs. In the 1930s and 1940s, intensive design and planning work took place, considering two possible solutions: an underground tramway (regular rolling stock going underground in the city centre, nowadays described as a "premetro", "Stadtbahn" or "subway-surface" system) and a "true" metro with its own independent system of railways. After World War II, all work was stopped due to the poor economic situation of the country, although the three lines, A, B and C, had been almost fully designed. In the early 1960s the concept of the sub-surface tramway was finally accepted, and on 9 August 1967 the building of the first station (Hlavní nádraží) started. In the same year, however, the concept changed substantially: the government, under the influence of Soviet advisers, decided to build a true metro system instead of an underground tramway.
Thus, during the first years, construction continued while the whole project was conceptually transformed. During the construction of the metro, a Czech rolling stock manufacturer, ČKD Tatra Smíchov, was charged with designing the trains. Two prototype two-car units named R1 were constructed in 1970 and 1971 and were used for field testing. However, the then-Czechoslovak government decided instead to order trains from the Soviet Union: these became the Ečs units, a Czechoslovak variant of the Soviet "E" series ("Ečs" standing for "E Czechoslovak"). The R1 rolling stock was later scrapped in the 1980s, near the end of the Cold War. Regular service on the first section of Line C began on 9 May 1974 between Sokolovská (now Florenc) and Kačerov stations. Since then, many extensions have been built and the number of lines has risen to three. On 22 February 1990, 13 station names reflecting mostly communist ideology were changed to be politically neutral. For example, Leninova station, which contained a giant bust of Vladimir Lenin before the Velvet Revolution, was renamed Dejvická after a nearby street and surrounding neighbourhood. Other changes were: Dukelská – Nové Butovice, Švermova – Jinonice, Moskevská – Anděl, Sokolovská – Florenc, Fučíkova – Nádraží Holešovice, Gottwaldova – Vyšehrad, Mládežnická – Pankrác, Primátora Vacka – Roztyly, Budovatelů – Chodov, Družby – Opatov, Kosmonautů – Háje. In August 2002, the system suffered the disastrous flooding that struck parts of Bohemia and other areas of Central Europe (see 2002 European flood). Nineteen stations were flooded, causing a partial collapse of the transport system in Prague; the damage to the Metro has been estimated at approximately 7 billion CZK (over US$225 million at the exchange rate of the time). The affected sections of the Metro stayed out of service for several months; the last station (Křižíkova, located in the most-damaged area, Karlín) reopened in March 2003. Small gold plates have been placed at some stations to show the highest water level of the flood. Service was suspended between Radlická and Kolbenova on Line B; between Malostranská and Náměstí Míru on Line A; and between Hlavní nádraží and Nádraží Holešovice on Line C (before the extension to Ládví in 2004 and to Letňany in 2008). A number of stations were also closed due to flooding in June 2013: replacement trams ran between Dejvická and Muzeum on Line A and between Českomoravská and Smíchovské nádraží on Line B, and replacement buses ran between Kobylisy and Muzeum on Line C, covering the closed sections of track. Extensions After regular service on the first section of Line C began in 1974 between Florenc and Kačerov, building of extensions continued quite rapidly. In 1978, Line A was opened, and Line B followed in 1985, forming the triangle with its three crossing points. Since then, the lines have been extended outwards from the centre. Line A was extended eastward from Náměstí Míru to Želivského in 1980 and on to Skalka in 1990. Line B was extended from Nové Butovice to Zličín in 1994 and from Českomoravská to Černý Most in 1998, and the Kolbenova and Hloubětín stations were opened in 2001. Line C was expanded in 1980 (Kačerov – Háje) and 1984 (Florenc – Nádraží Holešovice). A northern extension of Line C was opened on 26 June 2004, with two more stations, Kobylisy and Ládví. New tunnels were built under the Vltava river using an unusual technique in which prefabricated tunnels were floated into place (an immersed-tube method). First, a trench was excavated in the riverbed and the concrete tunnels were constructed in dry docks on the riverbank.
Then the docks were flooded, and the floating tunnels were moved as a rigid complex to their final position, sunk, anchored, and covered. Line A was extended to the east on 26 May 2006, when a new terminus, Depo Hostivař, opened; the station was constructed within the railway depot. Line C was extended to the northeast to connect the city centre to the housing blocks at Prosek and a large shopping centre at Letňany; three stations (Střížkov, Prosek, and Letňany) opened on 8 May 2008. In April 2015, Line A was extended westward from Dejvická to Nemocnice Motol with four new stations: Bořislavka, Nádraží Veleslavín, Petřiny, and Nemocnice Motol. The Nádraží Veleslavín station is also the new terminus of the 119 bus to Václav Havel Airport. An extension of the line to the airport has been proposed but never carried out; according to estimates from 2018, the project would cost about 26.8 billion crowns and take 11 years to complete. Future plans Another phase of the extension of Line A was planned from Nemocnice Motol to Václav Havel Airport, but it is very likely that this extension will not be built and the airport will be served by a new railway line instead. Line D There are plans to build a new line, Line D (blue), which will connect the city centre to the southern parts of the city. According to current plans, the 11-kilometre line will run from the city centre through Vršovice, Krč, and Libuš to Písnice. There will be 10 stations: Náměstí Míru (transfer to Line A), Náměstí bratří Synků, Pankrác (transfer to Line C), Olbrachtova, Nádraží Krč, Nemocnice Krč, Nové Dvory, Libuš, Písnice and Depo Písnice. Line D is intended to relieve traffic in the southern and southeastern parts of the city. In the second stage, it is planned to extend the line from Pankrác to Náměstí Míru (Peace Square). The first part of Line D is planned to be built between 2022 and 2029. Line E There are also plans for a Line E, which would probably be circular; the exact route has not yet been determined. In the early 21st century, the line was discussed in connection with plans to bring the Summer Olympic Games to Prague, which were ultimately abandoned. The Praha sobě list endorsed the idea of a circular metro line during the run-up to the 2022 Prague municipal election. Features The name of the Můstek station means "little bridge" and refers to the area around the station. The origin of the area's name was not known until remains of a medieval bridge were discovered during construction of the station; the remains were incorporated into the station and can be seen near its northwestern exit. The escalators at Náměstí Míru (Peace Square) station in Vinohrady are the longest in the European Union (length 87 m, vertical span 43.5 m, 533 steps, taking 2 minutes and 15 seconds to ascend). Náměstí Míru is also the deepest station in the European Union, at 53 metres. Between I. P. Pavlova and Vyšehrad stations, Line C runs inside the box structure of the large Nusle Bridge over a steep valley. The terminal station Depo Hostivař was constructed within the buildings of an existing railway depot; this extension was the first segment of the system to be built above ground rather than through a tunnel. There are no reversing tracks at the terminus; trains depart from the same track on which they arrive. Anděl station was known as Moskevská (Moscow station) until 1990.
It opened on the same day in 1985 as the Prazhskaya (Prague) station on the Moscow Metro, and it contains several pieces of art promoting Soviet–Czechoslovak friendship. Anděl station, like the Smíchov train station, contains some of the best-preserved examples of Communist-era art remaining in Prague. Work was carried out in 2014–15 to make the station accessible to wheelchair users. The entrance hall of the Hradčanská station still features the coat of arms of the Czechoslovak Socialist Republic and the motto Všechna moc v Československé socialistické republice patří pracujícímu lidu ("All the power in the Czechoslovak Socialist Republic belongs to the working people"), which were part of the station's original socialist-realist design. During the communist period, rumours circulated that large "survival chambers" were being built for high officials of the government in case of a nuclear attack. After the fall of communism, such areas were indeed shown to exist, though neither on the scale envisioned nor fitted out in luxury. Tickets The Prague Metro operates on a proof-of-payment system, as does the rest of the PID network. Passengers must buy and validate a ticket before entering a station's paid area. Uniformed and plainclothes fare inspectors randomly check passengers' tickets within the paid area. Basic single tickets cost 40 CZK (as of 1 August 2021) for a 90-minute ride or 30 CZK for a 30-minute ride. In November 2007, SMS purchase of basic single transfer tickets and day tickets was introduced (available only from Czech mobile phones). Short-term tourist passes are available for periods of 24 hours (120 CZK) and 3 days (330 CZK). As of 2019, single tickets and short-term passes can be purchased online using the PID Lítačka smartphone app. Since April 2019, single and 24-hour tickets can also be bought on board every tram and in all metro stations using contactless payment, including payment apps such as Google Pay and Apple Pay; such tickets are validated from the time of purchase. Longer-term season tickets can be loaded onto the Lítačka smart ticketing card for periods of one month (550 CZK) or three months (1,480 CZK), or as an annual pass for 3,650 CZK (10 CZK/day). Students studying in the Czech Republic with a valid ISIC card, children under 18 years old, and seniors over 60 years of age can buy season tickets at reduced prices: 130 CZK for 30 days, 360 CZK for 90 days, and 1,280 CZK for a year. Senior citizens aged 65 or older and children up to 14 years old can ride for free. The tickets are the same for all means of transport in Prague (metro, trams, buses, funiculars and ferries). Announcements The announcement made through the public address system when the doors are closing, "Ukončete, prosím, výstup a nástup, dveře se zavírají" ("Please finish exiting and boarding, the doors are closing"), has become a symbol of Prague for many tourists, and is possibly the first clear Czech phrase many travellers hear. The announcement has changed little since 1974, when the first line was opened; the original version did not include the word "please". A different announcer voices the recordings on each of the three lines. Other announcements include: "Vystupujte vpravo ve směru jízdy" ("Exit on the right side in the direction of travel"), "Konečná stanice, prosíme, vystupte" ("Terminal station, please exit the train"), and "Přestup na linky S a další vlakové spoje" ("Transfer to S lines and other railway connections").
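The fare structure described in the Tickets section above is simple enough to encode directly. Here is a minimal Python sketch, with prices hard-coded from the circa-2021 figures quoted above; the function and its interface are invented for this illustration and are not part of any official DPP system.

```python
# Illustrative only: prices as quoted above (CZK, circa 2021).
# Picks the cheapest ticket covering a trip of `minutes` duration.
TICKETS = [
    ("30-minute single", 30, 30 * 60),   # name, price in CZK, validity in seconds
    ("90-minute single", 40, 90 * 60),
    ("24-hour pass", 120, 24 * 3600),
    ("3-day pass", 330, 72 * 3600),
]

def cheapest_ticket(minutes: int) -> tuple[str, int]:
    """Return (ticket name, price) of the cheapest ticket valid for the trip."""
    valid = [(price, name) for name, price, secs in TICKETS if minutes * 60 <= secs]
    price, name = min(valid)
    return name, price

print(cheapest_ticket(25))   # ('30-minute single', 30)
print(cheapest_ticket(100))  # ('24-hour pass', 120)
```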
Gallery Examples of stations on Line A Examples of stations on Line B Examples of stations on Line C Transfer corridors Current subway cars Historic subway cars Related constructions Network map
Technology
Europe_2
null
497463
https://en.wikipedia.org/wiki/Cashmere%20wool
Cashmere wool
Cashmere wool, usually simply known as cashmere, is a fiber obtained from cashmere goats, pashmina goats, and some other breeds of goat. It has been used to make yarn, textiles and clothing for hundreds of years. Cashmere is closely associated with the Kashmir shawl, the word "cashmere" being an anglicization of Kashmir that dates from the arrival of the Kashmir shawl in Europe in the 19th century. Both the soft undercoat and the guard hairs may be used; the softer hair is reserved for textiles, while the coarse guard hair is used for brushes and other non-apparel purposes. Cashmere is a hygroscopic fiber, absorbing and releasing water from the air based on the surrounding environment; this helps regulate body temperature in both warm and cool conditions. A number of countries produce cashmere and have improved processing techniques over the years, but China and Mongolia are two of the leading producers as of 2019, with Afghanistan ranked third. Some yarns and clothing marketed as containing cashmere have been found to contain little to no cashmere fiber, so more stringent testing has been requested to ensure items are fairly represented. Poor land management and overgrazing to increase production of the valuable fiber have resulted in the degradation and transformation of grasslands into deserts in Asia, raising local temperatures and causing air pollution that has travelled as far as Canada and the United States. Sources Historically in the Western scientific community, fine-haired cashmere goats have been called Capra hircus laniger, as if they were a subspecies of the domestic goat Capra hircus. However, they are now more commonly considered part of the domestic goat subspecies Capra aegagrus hircus (the alternative name Hircus Blythi Goat is also encountered). Cashmere goats produce a double fleece consisting of a fine, soft undercoat or underdown of hair mingled with a straighter and much coarser outer coating of hair called guard hair. The undercoat is grown in the winter to keep the goat warm in the colder months. For the fine underdown to be sold and processed further, it must be de-haired: a mechanical process that separates the coarse hairs from the fine hair. After de-hairing, the resulting cashmere is ready to be dyed and converted into textile yarn, fabrics and garments. De-hairing is made somewhat easier by removing the undercoat by hand rather than shearing the entire coat; collecting the cashmere this way takes much longer, but produces a much finer, higher-quality fiber. Gathering Cashmere wool is collected during the spring moulting season, when the goats naturally shed their winter coat. In the Northern Hemisphere, the goats moult as early as March and as late as May. In some regions, the mixed mass of down and coarse hair is removed by hand with a coarse comb that pulls tufts of fiber from the animal as the comb is raked through the fleece. Fiber collected this way has a higher yield of pure cashmere after washing and de-hairing than fiber produced by shearing. The long, coarse guard hair is then typically clipped from the animal and is often used for brushes, interfacings and other non-apparel uses. Animals in Iran, Afghanistan, New Zealand, and Australia are typically shorn of their fleece, resulting in a higher coarse-hair content and lower pure cashmere yield. In America, the most popular method is combing. The process takes up to two weeks, but with a trained eye for when the fiber is releasing, it is possible to comb the fibers out in about a week.
The term "baby cashmere" is used for fibres harvested from younger goats, and has a reputation of being softer. Production China has become the largest producer of raw cashmere, estimated at 19,200 metric tons (in hair) per year (2016). Mongolia follows with 8,900 tons (in hair) as of 2016, while Afghanistan, Iran, Turkey, Kyrgyzstan and other Central Asian republics produce lesser amounts. The annual world raw production is estimated to be between 15,000 and 20,000 tons (13,605 and 18,140 tonnes) (in hair). Pure cashmere, resulting from removing animal grease, dirt and coarse hairs from the fleece, is estimated at 6,500 tons (5,895 tonnes). Ultra-fine Cashmere or Pashmina is still produced by communities in Kashmir but its rarity and high price, along with political instability in the region, make it very hard to source and to regulate quality. It is estimated that the average yearly production per goat is . Pure cashmere can be dyed and spun into yarns and knitted into jumpers (sweaters), hats, gloves, socks and other clothing, or woven into fabrics then cut and assembled into garments such as outer coats, jackets, trousers (pants), pajamas, scarves, blankets, and other items. Fabric and garment producers in Scotland, Italy, and Japan have long been known as market leaders. Cashmere may also be blended with other fibers to bring the garment cost down, or to gain their properties, such as elasticity from wool, or sheen from silk. The town of Uxbridge, Massachusetts, in the United States was an incubator for the cashmere wool industry. It had the first power looms for woolens and the first manufacture of "satinets". Capron Mill had the first power looms, in 1820. It burned on July 21, 2007, in the Bernat Mill fire. In the United States, under the U.S. Wool Products Labeling Act of 1939, as amended, (15 U. S. Code Section 68b(a)(6)), a wool or textile product may be labelled as containing cashmere only if the following criteria are met: such wool product is the fine (dehaired) undercoat fibers produced by a cashmere goat (Capra hircus laniger); the average diameter of the fiber of such wool product does not exceed 19 microns; and such wool product does not contain more than 3 percent (by weight) of cashmere fibers with average diameters that exceed 30 microns. the average fiber diameter may be subject to a coefficient of variation around the mean that shall not exceed 24 percent. Types of fiber Raw – fiber that has not been processed and is essentially straight from the animal Processed – fiber that has been through the processes of de-hairing, washing, carding, and is ready either to spin or to knit/crochet/weave Virgin – new fiber made into yarns, fabrics or garments for the first time Recycled – fibers reclaimed from scraps or fabrics that were previously woven or felted and may or may not have been previously used by the consumer from various parts of the world. The world cashmere industry Mongolia supplies 9,600 tons of raw cashmere per year to the world. 15% of the total raw cashmere supplied by Mongolia is being used to manufacture finished goods whereas the remaining 85% is being exported in semi processed form. 70% of the total raw material used to produce finished garments in Mongolia is being procured by Gobi Corporation with the remaining 30% being used by other producers in Mongolia. The global fashion luxury cashmere clothing market is expected to reach US$4.2 billion in 2025, growing at an annual rate of 3.86% per year between 2018 and 2025. 
History Cashmere has been manufactured in Mongolia, Nepal and Kashmir for thousands of years. The fiber is also known as pashm (Persian for wool) or pashmina (a Persian/Urdu word derived from pashm) for its use in the handmade shawls of Kashmir.
Technology
Fabrics and fibers
null
497747
https://en.wikipedia.org/wiki/Terminator%20%28genetics%29
Terminator (genetics)
In genetics, a transcription terminator is a section of nucleic acid sequence that marks the end of a gene or operon in genomic DNA during transcription. This sequence mediates transcriptional termination by providing signals in the newly synthesized transcript RNA that trigger processes which release the transcript RNA from the transcriptional complex. These processes include the direct interaction of the mRNA secondary structure with the complex and/or the indirect activities of recruited termination factors. Release of the transcriptional complex frees RNA polymerase and related transcriptional machinery to begin transcription of new mRNAs. In prokaryotes Two classes of transcription terminators, Rho-dependent and Rho-independent, have been identified throughout prokaryotic genomes. These widely distributed sequences are responsible for triggering the end of transcription upon normal completion of gene or operon transcription, for mediating early termination of transcripts as a means of regulation (as observed in transcriptional attenuation), and for ensuring the termination of runaway transcriptional complexes that manage to escape earlier terminators by chance, which prevents unnecessary energy expenditure for the cell. Rho-dependent terminators Rho-dependent transcription terminators require a large protein called a Rho factor, which exhibits RNA helicase activity, to disrupt the mRNA-DNA-RNA polymerase transcriptional complex. Rho-dependent terminators are found in bacteria and phages. The Rho-dependent terminator occurs downstream of translational stop codons and consists of an unstructured, cytosine-rich sequence on the mRNA known as a Rho utilization site (rut) and a downstream transcription stop point (tsp). The rut serves as an mRNA loading site and as an activator for Rho; activation enables Rho to efficiently hydrolyze ATP and translocate down the mRNA while it maintains contact with the rut site. Rho is able to catch up with the RNA polymerase because the polymerase stalls at the downstream tsp sites; multiple different sequences can function as a tsp site. Contact between Rho and the RNA polymerase complex stimulates dissociation of the transcriptional complex through a mechanism involving allosteric effects of Rho on RNA polymerase. Rho-independent terminators Intrinsic transcription terminators, or Rho-independent terminators, require the formation of a self-annealing hairpin structure on the elongating transcript, which results in the disruption of the mRNA-DNA-RNA polymerase ternary complex. The terminator sequence in DNA contains an approximately 20-base-pair GC-rich region of dyad symmetry followed by a short poly-A tract or "A stretch"; these are transcribed to form the terminating hairpin and a 7–9 nucleotide "U tract", respectively. The mechanism of termination is hypothesized to occur through a combination of direct promotion of dissociation, via allosteric effects of hairpin-binding interactions with the RNA polymerase, and "competitive kinetics". The hairpin formation causes RNA polymerase stalling and destabilization, leading to a greater likelihood that dissociation of the complex will occur at that location, owing to increased time spent paused at that site and reduced stability of the complex. Additionally, the elongation protein factor NusA interacts with the RNA polymerase and the hairpin structure to stimulate transcriptional termination.
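The Rho-independent signature just described, a GC-rich inverted repeat followed by a run of U's (T's on the coding strand), can be caricatured with a naive string scan. The following Python sketch is illustrative only: the window sizes, GC threshold, and function names are invented for this example, and real terminator-prediction tools use thermodynamic RNA-folding models rather than exact string matching.

```python
# Naive scan for candidate intrinsic (Rho-independent) terminators:
# a GC-rich inverted repeat (potential hairpin stem) followed by a T-run
# (transcribed as the U tract). Purely illustrative; thresholds invented.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(s: str) -> str:
    return s.translate(COMPLEMENT)[::-1]

def find_candidates(seq: str, stem=6, loop_min=3, loop_max=8, u_run=6):
    hits = []
    for i in range(len(seq) - (2 * stem + loop_min + u_run)):
        left = seq[i:i + stem]
        if left.count("G") + left.count("C") < stem - 1:  # require a GC-rich stem
            continue
        for loop in range(loop_min, loop_max + 1):
            j = i + stem + loop
            right = seq[j:j + stem]
            tail = seq[j + stem:j + stem + u_run]
            # Right arm must base-pair with the left arm; tail must be the U tract.
            if right == revcomp(left) and tail == "T" * u_run:
                hits.append((i, seq[i:j + stem + u_run]))
    return hits

demo = "ATATATA" + "GCCGCC" + "AAAT" + revcomp("GCCGCC") + "TTTTTTT" + "ATATA"
print(find_candidates(demo))  # [(7, 'GCCGCCAAATGGCGGCTTTTTT')]
```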
In eukaryotes In eukaryotic transcription of mRNAs, terminator signals are recognized by protein factors that are associated with RNA polymerase II and that trigger the termination process. The end of a gene encodes one or more polyadenylation signals. Once the signals are transcribed into the mRNA, the proteins cleavage and polyadenylation specificity factor (CPSF) and cleavage stimulation factor (CstF) transfer from the carboxyl-terminal domain of RNA polymerase II to the poly-A signal. These two factors then recruit other proteins to the site to cleave the transcript, freeing the mRNA from the transcription complex, and to add a string of about 200 adenine repeats to the 3' end of the mRNA in a process known as polyadenylation. During these processing steps, the RNA polymerase continues to transcribe for several hundred to a few thousand bases and eventually dissociates from the DNA and the downstream transcript through an unclear mechanism; there are two basic models for this event, known as the torpedo and allosteric models. Torpedo model After the mRNA is completed and cleaved off at the poly-A signal sequence, the left-over (residual) RNA strand remains bound to the DNA template and the RNA polymerase II unit, which continues transcribing. After this cleavage, an exonuclease binds to the residual RNA strand and removes the freshly transcribed nucleotides one at a time (also called "degrading" the RNA), moving towards the bound RNA polymerase II. In humans this exonuclease is XRN2 (5'-3' exoribonuclease 2). The model proposes that XRN2 degrades the uncapped residual RNA from 5' to 3' until it reaches the RNA pol II unit, whereupon it "pushes off" the RNA pol II unit as it moves past it, terminating transcription while also cleaning up the residual RNA strand. Similar to Rho-dependent termination, XRN2 triggers the dissociation of RNA polymerase II by either pushing the polymerase off the DNA template or pulling the template out of the RNA polymerase. The mechanism by which this happens remains unclear, however, and it has been argued that it is not the sole cause of the dissociation. To protect the transcribed mRNA from degradation by the exonuclease, a 5' cap is added to the strand; this is a modified guanine added to the front of the mRNA, which prevents the exonuclease from binding and degrading the strand. A 3' poly(A) tail is added to the end of the mRNA strand for protection from other exonucleases as well. Allosteric model The allosteric model suggests that termination occurs due to a structural change in the RNA polymerase unit after it binds to or loses some of its associated proteins, making it detach from the DNA strand after the signal. This would occur after the RNA pol II unit has transcribed the poly-A signal sequence, which acts as a terminator signal. RNA polymerase is normally capable of transcribing DNA into single-stranded mRNA efficiently. However, upon transcribing over the poly-A signals on the DNA template, a conformational shift is induced in the RNA polymerase by the proposed loss of associated proteins from its carboxyl-terminal domain. This change of conformation reduces the RNA polymerase's processivity, making the enzyme more prone to dissociating from its DNA-RNA substrate.
In this case, termination is not completed by degradation of the mRNA; instead, it is mediated by limiting the elongation efficiency of the RNA polymerase, thus increasing the likelihood that the polymerase will dissociate and end its current cycle of transcription. Non-mRNAs The several RNA polymerases in eukaryotes each have their own means of termination. Pol I is stopped by TTF1 (yeast Nsi1), which recognizes a downstream DNA sequence; the exonuclease involved is XRN2 (yeast Rat1). Pol III is able to terminate on its own on a stretch of As on the template strand. Finally, Pol II also has poly(A)-independent modes of termination, which are required when it transcribes snRNA and snoRNA genes in yeast; the yeast protein Nrd1 is responsible. A human mechanism, possibly involving PCF11, seems to cause premature termination when Pol II transcribes HIV genes.
Biology and health sciences
Molecular biology
Biology
497756
https://en.wikipedia.org/wiki/Oxbow%20lake
Oxbow lake
An oxbow lake is a U-shaped lake or pool that forms when a wide meander of a river is cut off, creating a free-standing body of water. The word "oxbow" can also refer to a U-shaped bend in a river or stream, whether or not it is cut off from the main stream. It takes its name from the oxbow, the U-shaped part of a harness by which oxen pull a plough or cart. In South Texas, oxbows left by the Rio Grande are called resacas. In Australia, oxbow lakes are called billabongs. Geology An oxbow lake forms when a meandering river erodes through the neck of one of its meanders. This takes place because meanders tend to grow and become more curved over time. The river then follows a shorter course that bypasses the meander. The entrances to the abandoned meander eventually silt up, forming an oxbow lake. Because oxbow lakes are stillwater lakes, with no current flowing through them, the entire lake gradually silts up, becoming a bog or swamp and then drying out completely. When a river reaches a low-lying plain, often in its final course to the sea or a lake, it meanders widely. In the vicinity of a river bend, deposition occurs on the convex bank (the bank with the smaller radius), while both lateral erosion and undercutting occur on the cut bank or concave bank (the bank with the greater radius). Continuous deposition on the convex bank and erosion of the concave bank of a meandering river cause the formation of a very pronounced meander, with the two concave banks drawing ever closer together. The narrow neck of land between the two neighboring concave banks is finally cut through, either by lateral erosion of the two concave banks or by the strong currents of a flood. When this happens, a new, straighter river channel develops and an abandoned meander loop, called a cutoff, forms. When deposition finally seals off the cutoff from the river channel, an oxbow lake forms. This process can take anywhere from a few years to several decades, and may sometimes stall almost completely. The gathering of erosion products near the concave bank and their transport to the convex bank are the work of the secondary flow across the floor of the river in the vicinity of a river bend. The process of deposition of silt, sand and gravel on the convex bank is clearly illustrated in point bars. The effect of the secondary flow can be demonstrated using a circular bowl. Partly fill the bowl with water and sprinkle dense particles such as sand or rice into it. Set the water into circular motion with one hand or a spoon. The dense particles quickly sweep into a neat pile in the center of the bowl. This is the mechanism that leads to the formation of point bars and contributes to the formation of oxbow lakes. The primary flow of water in the bowl is circular and the streamlines are concentric with the side of the bowl. However, the secondary flow of the boundary layer across the floor of the bowl is inward toward the center. The primary flow might be expected to fling the dense particles to the perimeter of the bowl, but instead the secondary flow sweeps the particles toward the center. The curved path of a river around a bend makes the water's surface slightly higher on the outside of the bend than on the inside. As a result, at any elevation within the river, water pressure is slightly greater near the outside of the bend than on the inside. A pressure gradient toward the convex bank provides the centripetal force necessary for each parcel of water to follow its curved path.
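A short worked balance makes the pressure-gradient argument quantitative. The symbols below (v for flow speed, r for the local radius of curvature, ρ for water density) follow standard fluid-mechanics usage and are illustrative additions, not values from the article.

```latex
% Radial force balance for a water parcel of density \rho moving at
% speed v around a bend of radius r:
\frac{\partial p}{\partial r} = \frac{\rho v^{2}}{r}
% With hydrostatic pressure p = \rho g h, the free surface must tilt by
\frac{\mathrm{d}h}{\mathrm{d}r} = \frac{v^{2}}{g\,r}
% e.g. v = 1~\mathrm{m\,s^{-1}}, r = 100~\mathrm{m} gives a slope of
% about 10^{-3}, i.e. roughly 5 cm of superelevation across a 50 m wide
% channel. The slower boundary layer feels the same surface tilt but has
% a smaller v^{2}/r, so it is pushed inward, toward the convex bank.
```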
The boundary layer that flows along the river floor does not move fast enough to balance the pressure gradient laterally across the river. It responds to this pressure gradient, and its velocity is partly downstream and partly across the river toward the convex bank. As it flows along the floor of the river, it sweeps loose material toward the convex bank. This flow of the boundary layer differs significantly in speed and direction from the primary flow of the river, and is part of the river's secondary flow. River flood plains that contain rivers with a highly sinuous planform are populated by longer oxbow lakes than those with low sinuosity. This is because rivers with high sinuosity have larger meanders, and hence greater opportunity for longer lakes to form. Rivers with lower sinuosity are characterized by fewer cutoffs and shorter oxbow lakes, owing to the shorter distance of their meanders. Oxbow lake ecology Oxbow lakes serve as important wetland ecosystems. In the United States, oxbow lakes are the primary habitat of the water tupelo and the iconic bald cypress. The numerous oxbow lakes of the Amazon River are a favorable habitat for the giant river otter. Oxbow lakes may also be suitable locations for aquaculture. Oxbow lakes contribute to the health of a river ecosystem by trapping sediments and agricultural runoff, thereby removing them from the main river flow; however, this is destructive of the oxbow lake ecosystem itself. Oxbow lakes are also vulnerable to heavy-metal contamination from industrial sources. Artificial oxbow lakes Oxbow lakes may be formed when a river channel is straightened artificially to improve navigation or for flood alleviation. This occurred notably on the upper Rhine in Germany in the nineteenth century. An example of an entirely artificial waterway with oxbows is the Oxford Canal in England. When originally constructed, it had a very meandering course, following the contours of the land, but the northern part of the canal was straightened out between 1829 and 1834, substantially reducing its length and creating a number of oxbow-shaped sections isolated from the new course. Notable examples Bole and Burton Round in West Burton, Nottinghamshire, England are a good example of former lakes in close proximity to one another. Bayou Brevelle in Natchitoches Parish, United States was created after the Red River of the South altered its course due to the effects of the Great Raft. Carter Lake, United States was created after severe flooding in 1877 caused the Missouri River to shift its course to the southeast. Cuckmere Haven in Sussex, England contains a widely meandering river with many oxbow lakes, often referred to in physical geography textbooks. Halfmoon Lake in downtown Eau Claire, United States was formed by a shift in the course of the Chippewa River, which now flows immediately to the south. Kanwar Lake Bird Sanctuary, India contains rare and endangered migratory birds and is one of Asia's largest oxbow lakes. The Oxbow, a bend in the Connecticut River, is disconnected at one end. There are many oxbow lakes alongside the Mississippi River and its tributaries. The largest oxbow lake in North America, Lake Chicot (located near Lake Village, United States), was originally part of the Mississippi River, as was Horseshoe Lake, the namesake of the town of Horseshoe Lake, United States.
Reelfoot Lake in west Tennessee is another notable oxbow lake; it was formed when the Mississippi River took a new channel following the 1811–12 New Madrid earthquakes. The upper reaches of New Zealand's Taieri River have also cut a multitude of oxbow lakes near the town of Paerau; some of this area has been converted into water meadows. Vynthala Lake in Chalakudy, India is formed from a cutoff of the Chalakudy River, which flows nearby. A possible oxbow lake has also been postulated in Saraswati Flumen near Ontario Lacus on Saturn's moon Titan.
Physical sciences
Hydrology
Earth science
3619345
https://en.wikipedia.org/wiki/Crystal%20polymorphism
Crystal polymorphism
In crystallography, polymorphism is the phenomenon where a compound or element can crystallize into more than one crystal structure. This definition has evolved over many years and is still under discussion today. Discussion of the defining characteristics of polymorphism involves distinguishing the types of transitions and structural changes that occur in polymorphism from those in other phenomena. Overview Phase transitions (phase changes) that help describe polymorphism include polymorphic transitions as well as melting and vaporization transitions. According to IUPAC, a polymorphic transition is "A reversible transition of a solid crystalline phase at a certain temperature and pressure (the inversion point) to another phase of the same chemical composition with a different crystal structure." Additionally, Walter McCrone described the phases in polymorphic matter as "different in crystal structure but identical in the liquid or vapor states." McCrone also defined a polymorph as "a crystalline phase of a given compound resulting from the possibility of at least two different arrangements of the molecules of that compound in the solid state." These definitions imply that polymorphism involves changes in physical properties but cannot include chemical change. Some early definitions do not make this distinction. Eliminating chemical change from the changes permissible during a polymorphic transition delineates polymorphism. For example, isomerization can often lead to polymorphic transitions; however, tautomerism (dynamic isomerization) leads to chemical change, not polymorphism. Likewise, allotropy of elements and polymorphism have been linked historically, but allotropes of an element are not always polymorphs. A common example is the allotropes of carbon, which include graphite, diamond, and lonsdaleite: while all three forms are allotropes, graphite is not a polymorph of diamond and lonsdaleite. Isomerization and allotropy are only two of the phenomena linked to polymorphism. For additional information about identifying polymorphism and distinguishing it from other phenomena, see the review by Brog et al. It is also useful to note that materials with two polymorphic phases can be called dimorphic, those with three polymorphic phases trimorphic, and so on. Polymorphism is of practical relevance to pharmaceuticals, agrochemicals, pigments, dyestuffs, foods, and explosives. Detection Experimental methods Early records of the discovery of polymorphism credit Eilhard Mitscherlich and Jöns Jacob Berzelius for their studies of phosphates and arsenates in the early 1800s. The studies involved measuring the interfacial angles of the crystals to show that chemically identical salts could have two different forms; Mitscherlich originally called this discovery isomorphism. The measurement of crystal density was also used by Wilhelm Ostwald and is expressed in Ostwald's ratio. The development of the microscope enhanced observations of polymorphism and aided Moritz Ludwig Frankenheim's studies in the 1830s. He was able to demonstrate methods to induce crystal phase changes and formally summarized his findings on the nature of polymorphism. Soon after, the more sophisticated polarized light microscope came into use, and it provided better visualization of crystalline phases, allowing crystallographers to distinguish between different polymorphs. The hot stage was invented and fitted to a polarized light microscope by Otto Lehmann in about 1877.
This invention helped crystallographers determine melting points and observe polymorphic transitions. While the use of hot-stage microscopes continued throughout the 1900s, thermal methods also became commonly used to observe the heat flow that occurs during phase changes such as melting and polymorphic transitions. One such technique, differential scanning calorimetry (DSC), continues to be used for determining the enthalpy of polymorphic transitions. In the 20th century, X-ray crystallography became commonly used for studying the crystal structure of polymorphs. Both single-crystal X-ray diffraction and powder X-ray diffraction techniques are used to obtain measurements of the crystal unit cell. Each polymorph of a compound has a unique crystal structure; as a result, different polymorphs produce different X-ray diffraction patterns. Vibrational spectroscopic methods came into use for investigating polymorphism in the second half of the twentieth century and have become more commonly used as optical, computer, and semiconductor technologies improved. These techniques include infrared (IR) spectroscopy, terahertz spectroscopy and Raman spectroscopy. Mid-frequency IR and Raman spectroscopies are sensitive to changes in hydrogen-bonding patterns, and such changes can subsequently be related to structural differences. Additionally, terahertz and low-frequency Raman spectroscopies reveal vibrational modes resulting from intermolecular interactions in crystalline solids. Again, these vibrational modes are related to crystal structure and can be used to uncover differences in three-dimensional structure among polymorphs. Computational methods Computational chemistry may be used in combination with vibrational spectroscopy techniques to understand the origins of vibrations within crystals. The combination of techniques provides detailed information about crystal structures, similar to what can be achieved with X-ray crystallography. In addition to using computational methods to enhance the understanding of spectroscopic data, the latest development in identifying polymorphism in crystals is the field of crystal structure prediction. This technique uses computational chemistry to model the formation of crystals and to predict the existence of specific polymorphs of a compound before they have been observed experimentally. Examples Many compounds exhibit polymorphism. It has been claimed that "every compound has different polymorphic forms, and that, in general, the number of forms known for a given compound is proportional to the time and money spent in research on that compound." Organic compounds Benzamide The phenomenon was discovered in 1832 by Friedrich Wöhler and Justus von Liebig. They observed that the silky needles of freshly crystallized benzamide slowly converted to rhombic crystals. Present-day analysis identifies three polymorphs for benzamide: the least stable one, formed by flash cooling, is the orthorhombic form II. This is followed by the monoclinic form III (the form observed by Wöhler and Liebig). The most stable form is the monoclinic form I. The hydrogen-bonding mechanisms are the same for all three phases; however, they differ strongly in their pi-pi interactions. Maleic acid In 2006 a new polymorph of maleic acid was discovered, 124 years after the first crystal form was studied. Maleic acid is manufactured on an industrial scale in the chemical industry, and it forms salts found in medicine.
The new crystal type is produced when a co-crystal of caffeine and maleic acid (2:1) is dissolved in chloroform and the solvent is allowed to evaporate slowly. Whereas form I has the monoclinic space group P21/c, the new form has the space group Pc. Both polymorphs consist of sheets of molecules connected through hydrogen bonding of the carboxylic acid groups: in form I, the sheets alternate with respect to the net dipole moment, while in form II, the sheets are oriented in the same direction. 1,3,5-Trinitrobenzene After 125 years of study, 1,3,5-trinitrobenzene yielded a second polymorph. The usual form has the space group Pbca, but in 2004, a second polymorph was obtained in the space group Pca21 when the compound was crystallised in the presence of an additive, trisindane. This experiment shows that additives can induce the appearance of polymorphic forms. Other organic compounds Acridine has been obtained as eight polymorphs, and aripiprazole has nine. The record for the largest number of well-characterised polymorphs is held by a compound known as ROY. Glycine crystallizes as both monoclinic and hexagonal crystals. Polymorphism in organic compounds is often the result of conformational polymorphism. Inorganic matter Elements Elements, including metals, may exhibit polymorphism. Allotropy is the term used to describe elements having different forms, and is used commonly in the field of metallurgy. Some (but not all) allotropes are also polymorphs. For example, iron has three allotropes that are also polymorphs. Alpha-iron, which exists at room temperature, has a bcc structure; above 910 °C gamma-iron exists, which has an fcc structure; and above 1,390 °C delta-iron exists, with a bcc structure. Another metallic example is tin, which has two allotropes that are also polymorphs. At room temperature, beta-tin exists as a white tetragonal form; when cooled below 13.2 °C, alpha-tin forms, which is gray in color and has a diamond cubic structure. A classic example of a nonmetal that exhibits polymorphism is carbon. Carbon has many allotropes, including graphite, diamond, and lonsdaleite; however, these are not all polymorphs of each other. Graphite is not a polymorph of diamond and lonsdaleite, since it is chemically distinct, having sp2-hybridized bonding. Diamond and lonsdaleite are chemically identical, both having sp3-hybridized bonding, and they differ only in their crystal structures, making them polymorphs. Additionally, graphite has two polymorphs, a hexagonal (alpha) form and a rhombohedral (beta) form. Binary metal oxides Polymorphism in binary metal oxides has attracted much attention because these materials are of significant economic value. One set of famous examples has the composition SiO2, which forms many polymorphs. Important ones include: α-quartz, β-quartz, tridymite, cristobalite, moganite, coesite, and stishovite. Other inorganic compounds A classical example of polymorphism is the pair of minerals calcite, which is rhombohedral, and aragonite, which is orthorhombic; both are forms of calcium carbonate. A third form of calcium carbonate is vaterite, which is hexagonal and relatively unstable. β-HgS precipitates as a black solid when Hg(II) salts are treated with H2S; with gentle heating of the slurry, the black polymorph converts to the red form. Factors affecting polymorphism According to Ostwald's rule, less stable polymorphs usually crystallize before the stable form.
The concept hinges on the idea that unstable polymorphs more closely resemble the state in solution, and thus are kinetically advantaged. The founding case of fibrous versus rhombic benzamide illustrates this. Another example is provided by two polymorphs of titanium dioxide. Nevertheless, there are known systems, such as metacetamol, where only a narrow range of cooling rates favors obtaining the metastable form II. Polymorphs have disparate stabilities; some convert rapidly at room (or any) temperature. Most polymorphs of organic molecules differ by only a few kJ/mol in lattice energy: approximately 50% of known polymorph pairs differ by less than 2 kJ/mol, and stability differences of more than 10 kJ/mol are rare. Polymorph stability may change with temperature or pressure. Importantly, structural and thermodynamic stability are different; thermodynamic stability may be studied using experimental or computational methods. Polymorphism is affected by the details of crystallisation. The solvent affects the nature of the polymorph in every respect, including the concentration and the other components of the solution, i.e., species that inhibit or promote certain growth patterns. A decisive factor is often the temperature of the solvent from which crystallisation is carried out. Metastable polymorphs are not always reproducibly obtained, leading to cases of "disappearing polymorphs", usually with negative implications for law and business. In pharmaceuticals Legal aspects Drugs receive regulatory approval and are granted patents for only a single polymorph. In a classic patent dispute, GlaxoSmithKline defended its patent for the Type II polymorph of the active ingredient in Zantac against competitors after the patent on the Type I polymorph had already expired. Polymorphism in drugs can also have direct medical implications, since dissolution rates depend on the polymorph. Polymorphic purity of drug samples can be checked using techniques such as powder X-ray diffraction, IR/Raman spectroscopy, and, in some cases, differences in their optical properties. Case studies The known cases up to 2015 are discussed in a review article by Bučar, Lancaster, and Bernstein. Dibenzoxazepines Multidisciplinary studies involving experimental and computational approaches have been applied to pharmaceutical molecules to facilitate the comparison of their solid-state structures. Specifically, such studies have focused on exploring how changes in molecular structure affect the molecular conformation, the packing motifs, the interactions in the resultant crystal lattices, and the extent of solid-state diversity of these compounds. The results highlight the value of crystal structure prediction studies and PIXEL calculations in interpreting the observed solid-state behaviour, quantifying the intermolecular interactions in the packed structures, and identifying the key stabilising interactions. An experimental screen yielded 4 physical forms for clozapine, as compared with 60 distinct physical forms for olanzapine. The experimental screening results for clozapine are consistent with its crystal energy landscape, which confirms that no alternative packing arrangement is thermodynamically competitive with the experimentally obtained structure. In the case of olanzapine, by contrast, the crystal energy landscape suggests that the extensive experimental screening has probably not found all possible polymorphs, and that further solid-form diversity could be targeted with a better understanding of the role of kinetics in its crystallisation.
CSP studies were able to offer an explanation for the absence of the centrosymmetric dimer in anhydrous clozapine. PIXEL calculations on all the crystal structures of clozapine revealed that, as with olanzapine, the intermolecular interaction energy in each structure is dominated by the dispersion term (Ed). Despite the molecular-structure similarity between amoxapine and loxapine (the molecules in group 2), the crystal packing observed in the polymorphs of loxapine differs significantly from that of amoxapine. A combined experimental and computational study demonstrated that the methyl group in loxapine has a significant influence in increasing the range of accessible solid forms and favouring various alternative packing arrangements. CSP studies have again helped in explaining the observed solid-state diversity of loxapine and amoxapine. PIXEL calculations showed that, in the absence of strong H-bonds, weak H-bonds such as C–H...O and C–H...N and dispersion interactions play a key role in stabilising the crystal lattices of both molecules. The efficient crystal packing of amoxapine appears to contribute to its monomorphic behaviour, in contrast with the less efficient packing of loxapine molecules in both of its polymorphs. The combination of experimental and computational approaches has provided a deeper understanding of the factors influencing the solid-state structure and diversity of these compounds. Hirshfeld surfaces, computed using CrystalExplorer, represent another way of exploring packing modes and intermolecular interactions in molecular crystals. The influence of changes in small substituents on shape and electron distribution can also be investigated by mapping the total electron density onto the electrostatic potential for molecules in the gas phase. This allows straightforward visualisation and comparison of the overall shape and of electron-rich and electron-deficient regions within molecules, and the shapes of the molecules can be further investigated to study their influence on solid-state diversity. Posaconazole The original formulations of posaconazole on the market, licensed as Noxafil, used form I of posaconazole. Polymorphs of posaconazole were then discovered in rapid succession, prompting much crystallographic research on the compound; a methanol solvate and a 1,4-dioxane co-crystal were added to the Cambridge Structural Database (CSD). Ritonavir The antiviral drug ritonavir exists as two polymorphs, which differ greatly in efficacy. Such issues were solved by reformulating the medicine into gelcaps and tablets, rather than the original capsules. Aspirin For a long time there was only one proven polymorph of aspirin, Form I, though the existence of another polymorph had been debated since the 1960s, and one report from 1981 noted that aspirin crystallized in the presence of aspirin anhydride gave a diffractogram with weak additional peaks. Though dismissed at the time as mere impurity, this was, in retrospect, Form II aspirin. Form II was reported in 2005, found after attempted co-crystallization of aspirin and levetiracetam from hot acetonitrile. In form I, pairs of aspirin molecules form centrosymmetric dimers through the acetyl groups, with the (acidic) methyl proton forming hydrogen bonds to the carbonyl; in form II, each aspirin molecule forms the same hydrogen bonds, but with two neighbouring molecules instead of one. With respect to the hydrogen bonds formed by the carboxylic acid groups, both polymorphs form identical dimer structures.
The aspirin polymorphs contain identical 2-dimensional sections and are therefore more precisely described as polytypes. Pure Form II aspirin can be prepared by seeding the batch with aspirin anhydrate at 15% by weight. Paracetamol Paracetamol powder has poor compression properties, which poses difficulty in making tablets. A second polymorph was found with more suitable compressive properties. Cortisone acetate Cortisone acetate exists in at least five different polymorphs, four of which are unstable in water and change to a stable form. Carbamazepine Carbamazepine, estrogen, paroxetine, and chloramphenicol also show polymorphism. Pyrazinamide Pyrazinamide has at least four polymorphs. All of them transform to the stable α form at room temperature upon storage or mechanical treatment. Recent studies indicate that the α form is thermodynamically stable at room temperature. Polytypism Polytypes are a special case of polymorphs, where multiple close-packed crystal structures differ in one dimension only. Polytypes have identical close-packed planes, but differ in the stacking sequence in the third dimension, perpendicular to these planes. Silicon carbide (SiC) has more than 170 known polytypes, although most are rare. All the polytypes of SiC have virtually the same density and Gibbs free energy. The most common SiC polytypes are shown in Table 1. Table 1: Some polytypes of SiC (stacking sequences in ABC notation): 3C (cubic, zincblende type, stacking ABC); 2H (hexagonal, wurtzite type, AB); 4H (hexagonal, ABCB); 6H (hexagonal, ABCACB); 15R (rhombohedral). A second group of materials with different polytypes are the transition metal dichalcogenides, layered materials such as molybdenum disulfide (MoS2). For these materials the polytypes have more distinct effects on material properties, e.g. for MoS2, the 1T polytype is metallic in character, while the 2H form is more semiconducting. Another example is tantalum disulfide, where the common 1T as well as 2H polytypes occur, but also more complex 'mixed coordination' types such as 4Hb and 6R, where the trigonal prismatic and the octahedral geometry layers are mixed. Here, the 1T polytype exhibits a charge density wave, with a distinct influence on the conductivity as a function of temperature, while the 2H polytype exhibits superconductivity. ZnS and CdI2 are also polytypic. It has been suggested that this type of polymorphism is due to kinetics, where screw dislocations rapidly reproduce partly disordered sequences in a periodic fashion. Theory In terms of thermodynamics, two types of polymorphic behaviour are recognized. For a monotropic system, plots of the free energies of the various polymorphs against temperature do not cross before all polymorphs melt. As a result, any transition from one polymorph to another below the melting point will be irreversible. For an enantiotropic system, a plot of the free energy against temperature shows a crossing point before the various melting points. It may also be possible to convert interchangeably between the two polymorphs by heating or cooling, or through physical contact with a lower energy polymorph. A simple model of polymorphism is to model the Gibbs free energy of a ball-shaped crystal of radius r as G(r) = a r^2 − b r^3. Here, the first term is the surface energy, and the second term is the volume energy. Both parameters satisfy a, b > 0. The function rises to a maximum at r = 2a/(3b) before dropping, crossing zero at r = a/b. In order to crystallize, a ball of crystal must overcome the energetic barrier to reach the r > a/b part of the energy landscape. Now, suppose there are two kinds of crystals, with different energies G1(r) = a1 r^2 − b1 r^3 and G2(r) = a2 r^2 − b2 r^3; if the curves have the shapes shown in Figure 2, then they intersect at some radius rc. The system then has three regimes: at small r, crystals tend to dissolve (the amorphous phase); at intermediate r, below rc, crystals tend to grow as form 1; at large r, above rc, crystals tend to grow as form 2. If the crystal is grown slowly, it can become kinetically trapped in form 1.
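The behaviour of this ball-crystal model is easy to explore numerically. A minimal Python sketch follows; the parameter values a and b are purely illustrative assumptions, not data for any real polymorphic system:

```python
# Free energy of a spherical crystallite, G(r) = a*r**2 - b*r**3
# (surface term minus volume term; a, b > 0). Parameter values are
# illustrative only.
def G(r, a, b):
    return a * r**2 - b * r**3

forms = {"form 1": (1.0, 0.30), "form 2": (1.4, 0.50)}

for name, (a, b) in forms.items():
    r_top = 2 * a / (3 * b)      # radius at the top of the nucleation barrier
    print(f"{name}: barrier G = {G(r_top, a, b):.2f} at r = {r_top:.2f}; "
          f"G crosses zero at r = {a / b:.2f}")

# Radius where the two curves intersect (for r > 0):
# a1*r**2 - b1*r**3 = a2*r**2 - b2*r**3  =>  r_c = (a2 - a1) / (b2 - b1)
(a1, b1), (a2, b2) = forms["form 1"], forms["form 2"]
r_c = (a2 - a1) / (b2 - b1)
print(f"curves cross at r_c = {r_c:.2f}; below r_c form 1 has the lower G, "
      f"above it form 2 does")
```

With these numbers the two barriers are of similar height, but the crossing at r_c = 2.0 means a slowly grown crystal that nucleated as form 1 must climb back over a barrier to convert to form 2, which is the kinetic trapping described above.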
Physical sciences
Crystallography
Physics
3622421
https://en.wikipedia.org/wiki/Cratonic%20sequence
Cratonic sequence
A cratonic sequence (also known as megasequence, Sloss sequence or supersequence) in geology is a very large-scale lithostratigraphic sequence in the rock record that represents a complete cycle of marine transgression and regression on a craton (block of continental crust) over geologic time. They are geologic evidence of relative sea level rising and then falling (transgressing and regressing), thereby depositing varying layers of sediment onto the craton, now expressed as sedimentary rock. Places such as the Grand Canyon are a good visual example of this process, demonstrating the changes between layers deposited over time as the ancient environment changed. Cratonic sequences were first proposed by Laurence L. Sloss in 1963. Each one represents a time when inland seas deposited sediments across the craton. The top and bottom edges of a sequence are each bounded by craton-wide unconformities (time gaps in the rock record). The unconformities indicate when the seas receded and sediment was eroded rather than deposited. Cause and chronology These sequences may in part represent eustatic (global) change in sea level; however, when the proper names are used they usually refer to relative sea level changes on the North American continent. The most likely cause of these cycles is change in mid-ocean ridge volume, which is related to seafloor spreading rates. When Earth's mid-ocean ridges spread rapidly, the ridges tend to be longer than usual; also, the greater heat elevates the lithosphere over the ridges. This elevated lithosphere displaces seawater onto the continents; conversely, when spreading rates decline, the ridges subside, and the seas drain from the cratons. It is also possible that other mechanisms, such as dynamic topography related to mantle mass anomalies, and intraplate stress related to episodes of contractional and extensional tectonics, play a part by causing significant tectonic uplift and subsidence across the craton. There have been six cratonic sequences since the beginning of the Cambrian Period. For North America, from oldest to youngest, they are the Sauk, Tippecanoe, Kaskaskia, Absaroka, Zuñi, and Tejas sequences. Attempts to identify equivalent cratonic sequences on other continents have met with only limited success, suggesting that eustasy (total global sea-level change) is unlikely to be the sole responsible mechanism.
Physical sciences
Stratigraphy
Earth science
3622564
https://en.wikipedia.org/wiki/Pill%20millipede
Pill millipede
Pill millipedes are any members of two living (and one extinct) orders of millipedes, often grouped together into a single superorder, Oniscomorpha. The name Oniscomorpha refers to the millipedes' resemblance to certain woodlice (Oniscidea), also called pillbugs or "roly-polies". However, millipedes and woodlice are not closely related (belonging to the subphyla Myriapoda and Crustacea, respectively); rather, this is a case of convergent evolution. Description Pill millipedes are relatively short-bodied compared to most other millipedes, with only eleven to thirteen body segments, and are capable of rolling into a ball (volvation) when disturbed, as a defense against predators. This ability evolved separately in each of the two orders, making it a case of convergent evolution rather than homology. Like many other millipedes, they can also exude a defensive liquid; in other millipede taxa this liquid may be both caustic and toxic, but in pill millipedes it is not: Glomerida secrete a clear, odorless liquid from the midline of the back that contains toxic alkaloids and has a sedative effect to repel predators, while Sphaerotheriida lack this ability entirely and rely completely on their hard shell to defend against enemies. Pill millipedes are detritivorous, feeding on decomposing plant matter, usually in woodlands. Orders Glomerida The order Glomerida is predominantly found in the Northern Hemisphere and includes species such as Glomeris marginata, the common European pill millipede. They have from eleven to twelve body segments, and possess dorsal ozopores (openings of the repugnatorial glands) rather than the lateral ozopores found on many other millipedes. Glomeridans reach maximum lengths of , and eyes, if present, are in a single row of ocelli. The order contains approximately 450 species found in Europe, South-east Asia and the Americas from California to Guatemala. Four species are present in the British Isles. Sphaerotheriida The order Sphaerotheriida is a taxon with a Gondwanan distribution, with around 350 species in southern Africa, Madagascar, Australasia and South East Asia. Five species, all in the genus Procyliosoma, are present in New Zealand, and around thirty species are present in Australia. Sphaerotheriidans have thirteen body segments, and do not possess repugnatorial glands. Sphaerotheriidans reach larger sizes than glomeridans (up to ), and always possess large, kidney-shaped eyes. Amynilyspedida Oniscomorpha also includes the extinct order Amynilyspedida from the upper Carboniferous of North America and Europe. Amynilyspedida differs from the other oniscomorph orders in having 14–15 segments. The order contains the genus Amynilyspes, with unique spines on the tergites, as well as Glomeropsis, Archiscudderia, and Palaeosphaeridium.
Biology and health sciences
Myriapoda
Animals
13036672
https://en.wikipedia.org/wiki/Miscibility
Miscibility
Miscibility is the property of two substances to mix in all proportions (that is, to fully dissolve in each other at any concentration), forming a homogeneous mixture (a solution). Such substances are said to be miscible (etymologically equivalent to the common term "mixable"). The term is most often applied to liquids but also applies to solids and gases. An example in liquids is the miscibility of water and ethanol, as they mix in all proportions. By contrast, substances are said to be immiscible if the mixture does not form a solution for certain proportions. For example, oil is not soluble in water, so these two solvents are immiscible. As another example, butanone (methyl ethyl ketone) is immiscible with water: it is soluble in water up to about 275 grams per liter, but will separate into two phases beyond that. Organic compounds In organic compounds, the weight percent of hydrocarbon chain often determines the compound's miscibility with water. For example, among the alcohols, ethanol has two carbon atoms and is miscible with water, whereas 1-butanol with four carbons is not. 1-Octanol, with eight carbons, is practically insoluble in water, and its immiscibility leads it to be used as a standard for partition equilibria. The straight-chain carboxylic acids up to butanoic acid (with four carbon atoms) are miscible with water, pentanoic acid (with five carbons) is partly soluble, and hexanoic acid (with six) is practically insoluble, as are longer fatty acids and other lipids; the very long carbon chains of lipids cause them almost always to be immiscible with water. Analogous situations occur for other functional groups such as aldehydes and ketones. Metals Immiscible metals are unable to form alloys with each other. Typically, a mixture will be possible in the molten state, but upon freezing, the metals separate into layers. This property allows solid precipitates to be formed by rapidly freezing a molten mixture of immiscible metals. One example of immiscibility in metals is copper and cobalt, where rapid freezing to form solid precipitates has been used to create granular GMR materials. Some metals are immiscible in the liquid state. One case of industrial importance is that liquid zinc and liquid silver are immiscible in liquid lead, while silver is miscible in zinc. This leads to the Parkes process, an example of liquid-liquid extraction, whereby lead containing any amount of silver is melted with zinc. The silver migrates to the zinc, which is skimmed off the top of the two-phase liquid, and the zinc is then boiled away, leaving nearly pure silver. Effect of entropy If a mixture of polymers has lower configurational entropy than its components, the polymers are likely to be immiscible in one another even in the liquid state. Determination Miscibility of two materials is often determined optically. When two miscible liquids are combined, the resulting liquid is clear. If the mixture is cloudy, the two materials are immiscible. Care must be taken with this determination. If the indices of refraction of the two materials are similar, an immiscible mixture may be clear and give an incorrect determination that the two liquids are miscible.
Physical sciences
Mixture
Chemistry
25380742
https://en.wikipedia.org/wiki/Misorientation
Misorientation
In materials science, misorientation is the difference in crystallographic orientation between two crystallites in a polycrystalline material. In crystalline materials, the orientation of a crystallite is defined by a transformation from a sample reference frame (i.e. defined by the direction of a rolling or extrusion process and two orthogonal directions) to the local reference frame of the crystalline lattice, as defined by the basis of the unit cell. In the same way, misorientation is the transformation necessary to move from one local crystal frame to some other crystal frame. That is, it is the distance in orientation space between two distinct orientations. If the orientations are specified in terms of matrices of direction cosines gA and gB, then the misorientation operator ΔgAB going from A to B can be defined as ΔgAB = gB gA^−1, where the term gA^−1 is the reverse operation of gA, that is, the transformation from crystal frame A back to the sample frame. This provides an alternate description of misorientation as the successive operation of transforming from the first crystal frame (A) back to the sample frame and subsequently to the new crystal frame (B). Various methods can be used to represent this transformation operation, such as: Euler angles, Rodrigues vectors, axis/angle (where the axis is specified as a crystallographic direction), or unit quaternions. Symmetry and misorientation The effect of crystal symmetry on misorientations is to reduce the fraction of the full orientation space necessary to uniquely represent all possible misorientation relationships. For example, cubic crystals (e.g. FCC) have 24 symmetrically related orientations. Each of these orientations is physically indistinguishable, though mathematically distinct. Therefore, the size of orientation space is reduced by a factor of 24. This defines the fundamental zone (FZ) for cubic symmetries. For the misorientation between two cubic crystallites, each possesses its 24 inherent symmetries. In addition, there exists a switching symmetry, defined by ΔgAB = (ΔgBA)^−1, which recognizes the invariance of misorientation to direction; A→B or B→A. The fraction of the total orientation space in the cubic-cubic fundamental zone for misorientation is then given by 1/(24 × 24 × 2) = 1/1152, or 1/48 the volume of the cubic fundamental zone. This also has the effect of limiting the maximum unique misorientation angle to 62.8°. Disorientation describes the misorientation with the smallest possible rotation angle out of all symmetrically equivalent misorientations that fall within the FZ (usually specified as having an axis in the standard stereographic triangle for cubics). Calculation of these variants involves application of the crystal symmetry operators to each of the orientations during the calculation of misorientation: Δg = (Ocrys gB)(Ocrys gA)^−1, where Ocrys denotes one of the symmetry operators for the material. Misorientation distribution The misorientation distribution (MD) is analogous to the ODF used in characterizing texture. The MD describes the probability of the misorientation between any two grains falling into a range dΔg around a given misorientation Δg. While similar to a probability density, the MD is not mathematically the same due to the normalization. The intensity in an MD is given as "multiples of random density" (MRD) with respect to the distribution expected in a material with uniformly distributed misorientations. The MD can be calculated by either series expansion, typically using generalized spherical harmonics, or by a discrete binning scheme, where each data point is assigned to a bin and accumulated.
Graphical representation Discrete misorientations or the misorientation distribution can be fully described as plots in the Euler angle, axis/angle, or Rodrigues vector space. Unit quaternions, while computationally convenient, do not lend themselves to graphical representation because of their four-dimensional nature. For any of the representations, plots are usually constructed as sections through the fundamental zone; along φ2 in Euler angles, at increments of rotation angle for axis/angle, and at constant ρ3 (parallel to <001>) for Rodrigues. Due to the irregular shape of the cubic-cubic FZ, the plots are typically given as sections through the cubic FZ with the more restrictive boundaries overlaid. Mackenzie plots are a one-dimensional representation of the MD plotting the relative frequency of the misorientation angle, irrespective of the axis. Mackenzie determined the misorientation distribution for a cubic sample with a random texture. Example of calculating misorientation The following is an example of the algorithm for determining the axis/angle representation of misorientation between two texture components given as Euler angles: Copper [90,35,45] S3 [59,37,63] The first step is converting the Euler angle representation (φ1, Φ, φ2) to an orientation matrix g by: g = [[c1c2 − s1s2cΦ, s1c2 + c1s2cΦ, s2sΦ], [−c1s2 − s1c2cΦ, −s1s2 + c1c2cΦ, c2sΦ], [s1sΦ, −c1sΦ, cΦ]], where c1, c2 and cΦ represent the cosine, and s1, s2 and sΦ the sine, of φ1, φ2 and Φ respectively. This yields one orientation matrix for each texture component. The misorientation is then Δg = gS3 gCu^−1, which equals gS3 gCu^T since an orientation matrix is orthogonal. The axis/angle description (with the axis as a unit vector) is related to the misorientation matrix by cos θ = (Δg11 + Δg22 + Δg33 − 1)/2, with axis components r1 = (Δg32 − Δg23)/(2 sin θ), r2 = (Δg13 − Δg31)/(2 sin θ) and r3 = (Δg21 − Δg12)/(2 sin θ). (There are errors in the similar formulae for the components of r given in the book by Randle and Engler (see refs.), which will be corrected in the next edition of their book. The above are the correct versions; note that a different form of these equations has to be used if θ = 180 degrees.) For the copper–S3 misorientation given by Δg, the axis/angle description is 19.5° about [0.689, 0.623, 0.369], which is only 2.3° from <221>. This result is only one of the 1152 symmetrically related possibilities but does specify the misorientation. This can be verified by considering all possible combinations of orientation symmetry (including switching symmetry).
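The worked example above can be reproduced with a short Python sketch using NumPy. The helper names are ours, and the axis printed is one of the symmetrically equivalent choices; one-sided application of the 24 cubic operators is sufficient for the minimum angle, as noted in the comments:

```python
import numpy as np
from itertools import permutations, product

def euler_to_matrix(phi1, Phi, phi2):
    """Bunge (ZXZ) Euler angles in degrees -> orientation matrix g."""
    p1, P, p2 = np.radians([phi1, Phi, phi2])
    c1, s1 = np.cos(p1), np.sin(p1)
    c, s = np.cos(P), np.sin(P)
    c2, s2 = np.cos(p2), np.sin(p2)
    return np.array([[ c1*c2 - s1*s2*c,  s1*c2 + c1*s2*c, s2*s],
                     [-c1*s2 - s1*c2*c, -s1*s2 + c1*c2*c, c2*s],
                     [ s1*s,            -c1*s,            c   ]])

def cubic_operators():
    """The 24 proper rotations of the cube: signed permutation
    matrices with determinant +1 (24 of the 48 signed permutations)."""
    ops = []
    for perm in permutations(range(3)):
        for signs in product((1.0, -1.0), repeat=3):
            m = np.zeros((3, 3))
            for row in range(3):
                m[row, perm[row]] = signs[row]
            if np.isclose(np.linalg.det(m), 1.0):
                ops.append(m)
    return ops

gA = euler_to_matrix(90, 35, 45)   # copper component
gB = euler_to_matrix(59, 37, 63)   # S3 component
dg0 = gB @ gA.T                    # misorientation (gA orthogonal, so inverse = transpose)

# One-sided application of the operators suffices for the *angle*,
# because trace(Oi @ dg @ Oj^-1) = trace((Oj^-1 @ Oi) @ dg) by cyclic invariance.
best_angle, best_dg = 360.0, None
for O in cubic_operators():
    dg = O @ dg0
    angle = np.degrees(np.arccos(np.clip((np.trace(dg) - 1) / 2, -1, 1)))
    if angle < best_angle:
        best_angle, best_dg = angle, dg

axis = np.array([best_dg[2, 1] - best_dg[1, 2],
                 best_dg[0, 2] - best_dg[2, 0],
                 best_dg[1, 0] - best_dg[0, 1]])
axis /= np.linalg.norm(axis)
# Expected: ~19.5 deg about ~[0.689, 0.623, 0.369], a <221>-type axis
# (up to the choice of symmetric equivalent).
print(f"{best_angle:.1f} deg about {np.round(axis, 3)}")
```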
Physical sciences
Crystallography
Physics
2662923
https://en.wikipedia.org/wiki/Dressing%20%28medicine%29
Dressing (medicine)
A dressing or compress is a piece of material such as a pad applied to a wound to promote healing and protect the wound from further harm. A dressing is designed to be in direct contact with the wound, as distinguished from a bandage, which is most often used to hold a dressing in place. Modern dressings are sterile. Medical uses A dressing can have a number of purposes, depending on the type, severity and position of the wound, although all purposes are focused on promoting recovery and protecting from further harm. Key purposes of a dressing are: Stop bleeding – to help to seal the wound to expedite the clotting process; Protection from infection – to defend the wound against germs and mechanical damage; Absorb exudate – to soak up blood, plasma, and other fluids exuded from the wound, containing them in one place and preventing maceration; Ease pain – either by a medicated analgesic effect, by compression, or simply by preventing pain from further trauma; Debride the wound – to remove slough and foreign objects from the wound to expedite healing; Reduce psychological stress – to obscure a healing wound from the view of the patient and others. Ultimately, the aim of a dressing is to promote healing of the wound by providing a sterile, breathable and moist environment that facilitates granulation and epithelialization. This will then reduce the risk of infection, help the wound heal more quickly, and reduce scarring. Types Modern dressings include dry or impregnated gauze, plastic films, gels, foams, hydrocolloids, hydrogels, and alginates. They provide different physical environments suited to different wounds: Absorption of exudate, to regulate the moisture level surrounding the wound – for example, dry gauzes absorb exudate strongly, drying the wound, hydrocolloids maintain a moist environment, and film dressings do not absorb exudate; Gas permeability and exchange, especially with regard to oxygen and water vapour; Maintaining the optimum temperature to encourage healing; Mechanically debriding a wound to remove slough. Pressure dressings are commonly used to treat burns and after skin grafts. They apply pressure and prevent fluids from collecting in the tissue. Dressings can also regulate the chemical environment of a wound, usually with the aim of preventing infection by the impregnation of topical antiseptic chemicals. Commonly used antiseptics include povidone-iodine and boracic lint dressings, or, historically, castor oil. Antibiotics are also often used with dressings to prevent bacterial infection. Medical grade honey is another antiseptic option, and there is moderate evidence that honey dressings are more effective than common antiseptic and gauze for healing infected post-operative wounds. Bioelectric dressings can be effective in attacking certain antibiotic-resistant bacteria and speeding up the healing process. Dressings are also often impregnated with analgesics to reduce pain. The physical features of a dressing can impact the efficacy of such topical medications. Occlusive dressings, made from substances impervious to moisture such as plastic or latex, can be used to increase their rate of absorption into the skin. Dressings are usually secured with adhesive tape and/or a bandage. Many dressings today are produced as an "island" surrounded by an adhesive backing, ready for immediate application – these are known as island dressings. Passive products Generally, these products are indicated only for superficial, clean, and dry wounds with minimal exudate.
They can also be used as secondary dressings (additional dressings to secure the primary dressing in place or to absorb additional discharge from the wound). Examples are: gauze, lint, adhesive bandages (plasters), and cotton wool. The main aim is to protect the wound from bacterial contamination; they are also used as secondary dressings. Gauze dressings are made up of woven or non-woven fibres of cotton, rayon, and polyester. Gauze dressings are capable of absorbing discharge from the wound but require frequent changing. Excessive wound discharge causes the gauze to adhere to the wound, causing pain when the gauze is removed. Bandages are made up of cotton wool, cellulose, or polyamide materials. Cotton bandages can act as a secondary dressing, while compression bandages provide good compression for venous ulcers. On the other hand, tulle gras dressing, which is impregnated with paraffin oil, is indicated for superficial clean wounds. Interactive products Several types of interactive products are: semi-permeable film dressings, semi-permeable foam dressings, hydrogel dressings, hydrocolloid dressings, and hydrofiber and alginate dressings. Apart from preventing bacterial contamination of the wound, they keep the wound environment moist in order to promote healing. Semi-permeable film dressing: This dressing is a transparent film made up of polyurethane. It allows the movement of water vapour, oxygen, and carbon dioxide into and out of the dressing. It also plays an additional role in autolytic debridement (removal of dead tissue), which is less painful when compared with manual wound debridement in the operating theatre. It is highly elastic and flexible, and thus adheres closely to the skin. As the dressing is transparent, wound inspection is possible without removing it. Due to its limited absorption capacity, such a dressing is only used for superficial wounds with a low amount of discharge. Semi-permeable foam dressing: This dressing is made up of foam with hydrophilic (water-attracting) properties and an outer layer with hydrophobic (water-repelling) properties, with adhesive borders. The hydrophobic layer protects the wound from outside fluid contamination, while the inner hydrophilic layer is able to absorb a moderate amount of discharge from the wound. Therefore, this type of dressing is useful for wounds with a high amount of discharge and for wounds with granulation tissue. Secondary dressings are not required. However, it requires frequent changing and is not suitable for dry wounds. Silicone is a common material that makes up the foam. The foam is able to mould to the shape of the wound. Hydrogel dressing: This dressing is made up of synthetic polymers such as methacrylates and polyvinylpyrrolidone. It has a high water content, thus providing moisture and a cooling effect for the wound. The dressing is easy to remove from the wound without causing any damage, and it is non-irritant. Therefore, it is used for dry and necrotic wounds, pressure ulcers, and burn wounds. It is not suitable for wounds with heavy discharge or infected wounds. Hydrocolloid dressing: This type of dressing contains two layers: an inner colloidal layer and an outer waterproof layer. It contains gel-forming agents such as carboxymethylcellulose, gelatin and pectin. When the dressing is in contact with the wound, the wound discharge is retained to form a gel, which provides a moist environment for wound healing.
It protects the wound from bacterial contamination, absorbs wound discharge, and digests necrotic tissue. It is mostly used as a secondary dressing. However, it is not used for wounds with high discharge or for neuropathic ulcers. Alginate dressing: This type of dressing is made up of either the sodium or the calcium salt of alginic acid. It can absorb a high amount of discharge from a wound. Ions present in the dressing can interact with blood to produce a film that protects the wound from bacterial contamination. However, this dressing is not suitable for dry wounds, third-degree burn wounds, or deep wounds with exposed bone. It also requires a secondary dressing, because wounds can quickly dry up under an alginate dressing. Hydrofiber dressing: Made up of sodium carboxymethyl cellulose, hydrofibers can absorb high amounts of wound discharge, forming a gel and preventing skin maceration. Bioactive products Advancements in the understanding of wounds have prompted biomedical innovations in the treatment of acute, chronic, and other types of wounds. Many biologics, skin substitutes, biomembranes and scaffolds have been developed to facilitate wound healing through various mechanisms. Usage Applying a dressing is a first aid skill, although many people undertake the practice with no training, especially on minor wounds. Modern dressings almost all come in prepackaged sterile wrapping, date-coded to ensure sterility. Sterility is necessary to prevent infection by pathogens resident within the dressing. Historically, and still the case in many less developed areas and in emergencies, dressings are often improvised as needed. They can consist of anything, including clothing or spare material, that will fulfil some of the basic tenets of a dressing, usually stemming bleeding and absorbing exudate. Applying and changing dressings is one common task of medical personnel.
Technology
Equipment
null
2663284
https://en.wikipedia.org/wiki/Golden%20hour%20%28photography%29
Golden hour (photography)
In photography, the golden hour is the period of daytime shortly after sunrise or before sunset, during which daylight is redder and softer than when the sun is higher in the sky. The golden hour is also sometimes called the magic hour, especially by cinematographers and photographers. During these times, the brightness of the sky matches the brightness of streetlights, signs, car headlights and lit windows. The period of time shortly before the magic hour at sunrise, or after it at sunset, is called the "blue hour". This is when the sun is at a significant depth below the horizon, when residual, indirect sunlight takes on a predominantly blue shade, and there are no sharp shadows because the sun either has not risen, or has already set. Details When the sun is low above the horizon, sunlight rays must penetrate the atmosphere for a greater distance, reducing the intensity of the direct light, so that more of the illumination comes from indirect light from the sky, reducing the lighting ratio. This is technically a type of lighting diffusion. More blue light is scattered, so if the sun is present, its light appears more reddish. In addition, the sun's low angle above the horizon produces longer shadows. The term hour is used figuratively; the effect has no clearly defined duration and varies according to season and latitude. The character of the lighting is determined by the sun's altitude, and the time for the sun to move from the horizon to a specified altitude depends on a location's latitude and the time of year. In Los Angeles, California, at an hour after sunrise or an hour before sunset, the sun has an altitude of about 10–12°. For a location closer to the Equator, the same altitude is reached in less than an hour, and for a location farther from the equator, the altitude is reached in more than one hour. For a location sufficiently far from the equator, the sun may not reach an altitude of 10°, and the golden hour lasts for the entire day in certain seasons. In the middle of the day, the bright overhead sun can create strong highlights and dark shadows. The degree to which overexposure can occur varies because different types of film and digital cameras have different dynamic ranges. This harsh lighting problem is particularly important in portrait photography, where a fill flash is often necessary to balance lighting across the subject's face or body, filling in strong shadows that are usually considered undesirable. Because the contrast is less during the golden hour, shadows are less dark, and highlights are less likely to be overexposed. In landscape photography, the warm color of the low sun is often considered desirable to enhance the colours of the scene. It is the best time of day for natural photography when diffuse and warm light is desired.
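The dependence of golden-hour length on latitude can be estimated from the standard solar-altitude relation sin h = sin φ sin δ + cos φ cos δ cos H, where φ is latitude, δ is solar declination and H is the hour angle. A minimal Python sketch follows; the 10° altitude cutoff is our assumption (the effect has no defined duration, as noted above), and refraction and the equation of time are ignored:

```python
import numpy as np

def golden_hour_minutes(lat_deg, decl_deg=0.0, alt_cut_deg=10.0):
    """Minutes for the sun to climb from the horizon (0 deg) to
    alt_cut_deg, from sin(h) = sin(lat)sin(decl) + cos(lat)cos(decl)cos(H).
    One degree of hour angle H corresponds to 4 minutes of time."""
    lat, decl, cut = np.radians([lat_deg, decl_deg, alt_cut_deg])

    def hour_angle_deg(h):
        cos_ha = (np.sin(h) - np.sin(lat) * np.sin(decl)) / (np.cos(lat) * np.cos(decl))
        if cos_ha > 1.0:
            return None   # the sun never reaches this altitude
        return np.degrees(np.arccos(np.clip(cos_ha, -1.0, 1.0)))

    h0, h1 = hour_angle_deg(0.0), hour_angle_deg(cut)
    if h0 is None or h1 is None:
        return None       # golden-hour light can last the whole day
    return (h0 - h1) * 4.0

# At the equinox (decl = 0), higher latitude gives a longer golden hour:
for lat in (0, 34, 52, 65):
    print(f"latitude {lat:2d} deg: {golden_hour_minutes(lat):.0f} min")
```

At latitude 34° (roughly Los Angeles) this gives about 48 minutes from sunrise to 10° altitude, consistent with the 10–12° figure quoted above for one hour after sunrise.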
Physical sciences
Celestial mechanics
Astronomy
2664153
https://en.wikipedia.org/wiki/Hauyne
Hauyne
Hauyne or haüyne, also called hauynite or haüynite, old name azure spar, is a rare tectosilicate sulfate mineral with endmember formula Na3Ca(Si3Al3)O12(SO4). As much as 5 wt % K2O may be present, and also H2O and Cl. It is a feldspathoid and a member of the sodalite group. Hauyne was first described in 1807 from samples discovered in Vesuvian lavas in Monte Somma, Italy, and was named in 1807 by Brunn-Neergard for the French crystallographer René Just Haüy (1743–1822). It is sometimes used as a gemstone. Sodalite group The sodalite group comprises haüyne, sodalite, nosean, lazurite, tsaregorodtsevite, tugtupite and vladimirivanovite. All these minerals are feldspathoids. Haüyne forms a solid solution with nosean and with sodalite. Complete solid solution exists between synthetic nosean and haüyne at 600 °C, but only limited solid solution occurs in the sodalite-nosean and sodalite-haüyne systems. The characteristic blue color of sodalite-group minerals arises mainly from caged sulfur radical anions such as S3−. Unit cell Haüyne belongs to the hexatetrahedral class of the isometric system, 4̄3m, space group P4̄3n. It has one formula unit per unit cell (Z = 1), which is a cube with a side length of about 9 Å. Reported values of the cell parameter include a = 8.9 Å, a = 9.08 to 9.13 Å, a = 9.10 to 9.13 Å, a = 9.11(2) Å, a = 9.116 Å and a = 9.13 Å. Structure All silicates have a basic structural unit that is a tetrahedron with an oxygen ion O at each apex and a silicon ion Si in the middle, forming (SiO4)4−. In tectosilicates (framework silicates) each oxygen ion is shared between two tetrahedra, linking all the tetrahedra together to form a framework. Since each O is shared between two tetrahedra, only half of it "belongs" to the Si ion in either tetrahedron, and if no other components are present then the formula is SiO2, as in quartz. Aluminium ions Al can substitute for some of the silicon ions, forming (AlO4)5− tetrahedra. If the substitution is random the ions are said to be disordered, but in haüyne the Al and Si in the tetrahedral framework are fully ordered. Si has a charge of 4+, but the charge on Al is only 3+. If all the cations (positive ions) are Si then the positive charges on the Si ions exactly balance the negative charges on the O ions. When Al replaces Si there is a deficiency of positive charge, and this is made up by extra positively charged ions (cations) entering the structure, somewhere in between the tetrahedra. In haüyne these extra cations are sodium Na+ and calcium Ca2+, and in addition the negatively charged sulfate group (SO4)2− is also present. In the haüyne structure the tetrahedra are linked to form six-membered rings that are stacked up in an ..ABCABC.. sequence along one direction, and rings of four tetrahedra are stacked up parallel to another direction. The resulting arrangement forms continuous channels that can accommodate a large variety of cations and anions. Appearance Haüyne crystallizes in the isometric system forming rare dodecahedral or pseudo-octahedral crystals that may reach 3 cm across; it also occurs as rounded grains. The crystals are transparent to translucent, with a vitreous to greasy luster. The color is usually bright blue, but it can also be white, grey, yellow, green and pink. In thin section the crystals are colorless or pale blue, and the streak is very pale blue to white. Optical properties Haüyne is isotropic. Truly isotropic minerals have no birefringence, but haüyne is weakly birefringent when it contains inclusions.
The refractive index is 1.50; although this is quite low, similar to that of ordinary window glass, it is the largest value for minerals of the sodalite group. It may show reddish orange to purplish pink fluorescence under longwave ultraviolet light. Physical properties Cleavage is distinct to perfect, and twinning is common, as contact, penetration and polysynthetic twins. The fracture is uneven to conchoidal, the mineral is brittle, and it has a hardness of 5.5 to 6, almost as hard as feldspar. All the members of the sodalite group have quite low densities, less than that of quartz; haüyne is the densest of them all, but still its specific gravity is only 2.44 to 2.50. If haüyne is placed on a glass slide and treated with nitric acid HNO3, and the solution is then allowed to evaporate slowly, monoclinic needles of gypsum form. This distinguishes haüyne from sodalite, which forms cubic crystals of sodium chloride under the same conditions. The mineral is not radioactive. Geological setting and associations Haüyne occurs in phonolites and related leucite- or nepheline-rich, silica-poor, igneous rocks; less commonly in nepheline-free extrusives and metamorphic rocks (marble). Associated minerals include nepheline, leucite, titanian andradite, melilite, augite, sanidine, biotite, phlogopite and apatite. Localities The type locality is Lake Nemi, Alban Hills, Rome Province, Latium, Italy. Occurrences include: Canary Islands: A pale blue mineral intermediate between haüyne and lazurite has been found in spinel dunite xenoliths from La Palma, Canary Islands. Ecuador: Phenocrysts found in alkaline extrusive rocks (tephrite), a product of effusive volcanism of the Sumaco volcano of northeast Ecuador. Germany: In ejected rocks of hornblende-haüyne-scapolite rock from the Laach lake volcanic complex, Eifel, Rhineland-Palatinate. Italy: Anhedral blue to dark grey phenocrysts in leucite-melilite-bearing lava at Monte Vulture, Melfi, Basilicata, Potenza. Italy: Millimetric transparent blue crystals in ejecta consisting mainly of K-feldspar and plagioclase from Albano Laziale, Roma. Italy: Ejected blocks in the peperino of the Alban Hills, Rome Province, Latium, contain white octahedral haüyne associated with leucite, garnet, melilite and latiumite. US: Haüyne of metamorphic origin occurs at the Edwards Mine, St. Lawrence County, New York. US: Haüyne occurs in nepheline alnoite with melilite, phlogopite and apatite at Winnett, Petroleum County, Montana. US: Haüyne is common in small quantities as phenocrysts in phonolite and lamprophyre in the Cripple Creek Mining District, Colorado.
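As a check on the charge-balance argument in the Structure section, the endmember formula (as reconstructed above) can be verified to be electrically neutral with a few lines of Python:

```python
# Charge balance for the haüyne endmember Na3Ca(Si3Al3)O12(SO4).
# Sulfur is carried inside the (SO4)2- group, so only the group's
# overall 2- charge is counted, not S and its O atoms separately.
cations = 3 * (+1) + 1 * (+2) + 3 * (+4) + 3 * (+3)   # Na+, Ca2+, Si4+, Al3+
anions = 12 * (-2) + 1 * (-2)                          # 12 framework O2-, one (SO4)2-
assert cations + anions == 0
print(cations, anions)   # 26 -26 -> neutral
```

The 3 Al3+ substituting for Si4+ leave a deficit of 3 positive charges, exactly balanced by the caged Na+ and Ca2+ less the sulfate charge (3 + 2 − 2 = 3), which is the compensation mechanism described above.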
Physical sciences
Silicate minerals
Earth science
2664780
https://en.wikipedia.org/wiki/Fire%20whirl
Fire whirl
A fire whirl, fire devil or fire tornado is a whirlwind induced by a fire and often (at least partially) composed of flame or ash. These start with a whirl of wind, often made visible by smoke, and may occur when intense rising heat and turbulent wind conditions combine to form whirling eddies of air. These eddies can contract to a tornado-like vortex that sucks in debris and combustible gases. The phenomenon is sometimes labeled a fire tornado, firenado, fire swirl, or fire twister, but these terms usually refer to a separate phenomenon where a fire has such intensity that it generates an actual tornado. Fire whirls are not usually classifiable as tornadoes, as the vortex in most cases does not extend from the surface to cloud base. Also, even in such cases, those fire whirls very rarely are classic tornadoes, as their vorticity derives from surface winds and heat-induced lifting, rather than from a tornadic mesocyclone aloft. The phenomenon was first verified in the 2003 Canberra bushfires and has since been verified in the 2018 Carr Fire in California and the 2020 Loyalton Fire in California and Nevada. Formation A fire whirl consists of a burning core and a rotating pocket of air. A fire whirl can reach up to . Fire whirls become frequent when a wildfire, or especially a firestorm, creates its own wind, which can spawn large vortices. Even bonfires often have whirls on a smaller scale, and tiny fire whirls have been generated by very small fires in laboratories. Most of the largest fire whirls are spawned from wildfires. They form when a warm updraft and convergence from the wildfire are present. They are usually tall, a few meters (several feet) wide, and last only a few minutes. Some, however, can be more than tall, contain wind speeds over , and persist for more than 20 minutes. Fire whirls can uproot trees tall or more. These can also aid the 'spotting' ability of wildfires to propagate and start new fires as they lift burning materials such as tree bark. These burning embers can be blown away from the fire-ground by the stronger winds aloft. Fire whirls can be common within the vicinity of a plume during a volcanic eruption. These range from small to large and form from a variety of mechanisms, including those akin to typical fire whirl processes, but can result in cumulonimbus flammagenitus clouds spawning landspouts and waterspouts, or even in the development of mesocyclone-like updraft rotation of the plume itself and/or of the cumulonimbi, which can spawn tornadoes similar to those in supercells. Pyrocumulonimbi generated by large fires on rare occasions also develop in a similar way. Classification There are currently three widely recognized types of fire whirls: Type 1: Stable and centered over the burning area. Type 2: Stable or transient, downwind of the burning area. Type 3: Steady or transient, centered over an open area adjacent to an asymmetric burning area with wind. There is evidence suggesting that the fire whirl in the Hifukusho-ato area, during the 1923 Great Kantō earthquake, was of type 3. Other mechanisms and fire whirl dynamics may exist. A broader classification of fire whirls suggested by Forman A. Williams includes five different categories: whirls generated by fuel distribution in wind; whirls above fuels in pools or on water; tilted fire whirls; moving fire whirls; and whirls modified by vortex breakdown. The meteorological community views some fire-induced phenomena as atmospheric phenomena. Using the pyro- prefix, fire-induced clouds are called pyrocumulus and pyrocumulonimbus.
Larger fire vortices are similarly being viewed. Based on vortex scale, the classification terms of "pyronado", "pyrotornado", and "pyromesocyclone" have been proposed. Notable examples During the 1871 Peshtigo fire, the community of Williamsonville, Wisconsin, was burned by a fire whirl; the area where Williamsonville once stood is now Tornado Memorial County Park. An extreme example of the phenomenon occurred in the aftermath of the 1923 Great Kantō earthquake in Japan, in which a city-wide firestorm in Tokyo produced the conditions required for a gigantic fire whirl that killed 38,000 people in fifteen minutes in the Hifukusho-Ato region of the city. Numerous large fire whirls (some tornadic) that developed after lightning struck an oil storage facility near San Luis Obispo, California, on 7 April 1926, produced significant structural damage well away from the fire, killing two. Many whirlwinds were produced by the four-day-long firestorm, coincident with conditions that produced severe thunderstorms, in which the larger fire whirls carried debris away. Fire whirls were produced in the conflagrations and firestorms triggered by firebombings of European and Japanese cities during World War II, and by the atomic bombings of Hiroshima and Nagasaki. Fire whirls associated with the bombing of Hamburg, particularly those of 27–28 July 1943, were studied. Throughout the 1960s and 1970s, particularly in 1978–1979, fire whirls ranging from the transient and very small to intense, long-lived tornado-like vortices capable of causing significant damage were spawned by fires generated by the 1000 MW Météotron, a series of large oil burners located in the Lannemezan plain of France used for testing atmospheric motions and thermodynamics. During the 2003 Canberra bushfires in Canberra, Australia, a violent fire whirl was documented. It was calculated to have horizontal winds of and vertical air speed of , causing the flashover of in 0.04 seconds. It was the first known fire whirl in Australia to have EF3 wind speeds on the Enhanced Fujita scale. On May 22, 2015, a Conair Group float-equipped Air Tractor AT-802 fighting a fire near Cold Lake, Alberta, encountered a fire whirl. The encounter resulted in a loss of control and collision with terrain, killing the pilot on board. A fire whirl, of reportedly uncommon size for New Zealand wildfires, formed on day three of the 2017 Port Hills fires in Christchurch. Pilots estimated the fire column to be high. On July 26, 2018, the massive Carr Fire spawned a fire tornado that hit Redding, California. On August 15, 2020, for the first time in its history, the U.S. National Weather Service issued a tornado warning for a pyrocumulonimbus created by a wildfire near Loyalton, California, capable of producing a fire tornado. On January 11, 2025, a fire whirl was spotted in the Palisades wildfire. Blue whirl In controlled small-scale experiments, fire whirls are found to transition to a mode of combustion called blue whirls. The name blue whirl was coined because the soot production is negligible, leading to the disappearance of the yellow color typical of a fire whirl. Blue whirls are partially premixed flames that reside elevated in the recirculation region of the vortex-breakdown bubble. The flame length and burning rate of a blue whirl are smaller than those of a fire whirl.
Physical sciences
Storms
Earth science
2666137
https://en.wikipedia.org/wiki/Pier%20%28architecture%29
Pier (architecture)
A pier, in architecture, is an upright support for a structure or superstructure such as an arch or bridge. Sections of structural walls between openings (bays) can function as piers. External or free-standing walls may have piers at the ends or on corners. Description The simplest cross section of the pier is square, or rectangular, but other shapes are also common. In medieval architecture, massive circular supports called drum piers, cruciform (cross-shaped) piers, and compound piers are common architectural elements. Columns are a similar upright support, but stand on a round base; in many contexts columns may also be called piers. In buildings with a sequence of bays between piers, each opening (window or door) between two piers is considered a single bay. Bridge piers Single-span bridges have abutments at each end that support the weight of the bridge and serve as retaining walls to resist lateral movement of the earthen fill of the bridge approach. Multi-span bridges require piers to support the ends of spans between these abutments. In cold climates, the upstream edge of a pier may include a starkwater to prevent accumulation of broken ice during peak snowmelt flows. The starkwater has a sharpened upstream edge sometimes called a cutwater. The cutwater edge may be of concrete or masonry, but is often capped with a steel angle to resist abrasion and focus force at a single point to fracture floating pieces of ice striking the pier. In cold climates, the starkwater is typically sloped at an angle of about 45°, so that current pushing against the ice tends to lift the downstream edge of the ice, translating the horizontal force of the current into a vertical force against a thinner cross-section of ice, until the unsupported weight of the ice fractures the piece, allowing it to pass on either side of the pier. Examples In the Arc de Triomphe, Paris, the central arch and side arches are raised on four massive piers. St Peter's Basilica Donato Bramante's original plan for St Peter's Basilica in Rome has richly articulated piers. Four piers support the weight of the dome at the central crossing. These piers were found to be too small to support the weight and were changed later by Michelangelo to account for the massive weight of the dome. The piers of the four apses that project from each outer wall are also strong, to withstand the outward thrust of the half-domes upon them. Many niches articulate the wall-spaces of the piers.
Technology
Architectural elements
null
2666439
https://en.wikipedia.org/wiki/Honeycrisp
Honeycrisp
Honeycrisp (Malus pumila) is an apple cultivar (cultivated variety) developed at the Minnesota Agricultural Experiment Station's Horticultural Research Center at the University of Minnesota, Twin Cities. Designated in 1974 with the MN 1711 test designation, patented in 1988, and released in 1991, the Honeycrisp, once slated to be discarded, has rapidly become a prized commercial commodity, as its sweetness, firmness, and tartness make it an ideal apple for eating raw. "...The apple wasn't bred to grow, store or ship well. It was bred for taste: crisp, with balanced sweetness and acidity." It has larger cells than most apple cultivars, a trait which is correlated with juiciness, as larger cells are more prone to rupturing instead of cleaving along the cell walls; this rupturing effect is likely what makes the apple taste juicier. The Honeycrisp also retains its pigment well and has a relatively long shelf life when stored in cool, dry conditions. Pepin Heights Orchards delivered the first Honeycrisp apples to grocery stores in 1997. The name Honeycrisp was trademarked by the University of Minnesota, but university officials were unsure of its patent status in 2007. It is now the official state fruit of Minnesota. A large-sized Honeycrisp will contain about . Genetics U.S. Plant Patent 7197 and Report 225-1992 (AD-MR-5877-B) from the Horticultural Research Center indicated that the Honeycrisp was a hybrid of the apple cultivars 'Macoun' and 'Honeygold'. However, genetic fingerprinting conducted by a group of researchers in 2004, which included those named on the US plant patent, determined that neither of these cultivars is a parent of the Honeycrisp. It found that one parent was the Keepsake (itself a hybrid of Frostbite (MN447) × Northern Spy), while the other was identified in 2017 as the unreleased University of Minnesota selection MN1627. The grandparents of Honeycrisp on the MN1627 side are the Duchess of Oldenburg and the Golden Delicious. The US patent for the Honeycrisp cultivar expired in 2008, although patents in some countries do not expire until as late as 2031. Patent royalties had generated more than $10 million by 2011, split three ways by the University of Minnesota between its inventors, the college and department in which the research was conducted, and a fund for other research. The University of Minnesota crossed Honeycrisp with another of their apple varieties, Minnewashta (brand name Zestar!), to create a hybrid called Minneiska (brand name SweeTango), released as a "managed variety" to control how and where it can be grown and sold. SugarBee is an open cross-pollination between Honeycrisp and an unknown variety, discovered in Minnesota in the early 1990s. Agriculture Honeycrisp apple flowers are self-sterile, so another apple variety must be nearby as a pollenizer in order to get fruit. Most other apple varieties will pollenize Honeycrisp, as will varieties of crabapple. Honeycrisp will not come true when grown from seed. Trees grown from the seeds of Honeycrisp apples will be hybrids of Honeycrisp and the pollenizer. Young trees typically have a lower density of large, well-colored fruit, while mature trees have a higher density of fruit with diminished size and color quality. Fruit density can be adjusted through removal of blossom clusters or young fruit to counteract the effect. Flesh firmness is also generally better with lower crop densities. Bitter pit disproportionately affects Honeycrisps; typically 23% of the harvest is affected.
International growth As a result of the Honeycrisp apple's growing popularity, the government of Nova Scotia, Canada, spent over C$1.5 million funding a five-year Honeycrisp Orchard Renewal Program from 2005 to 2010 to subsidize apple producers to replace older trees (mainly McIntosh) with newer higher-return varieties of apples: the Honeycrisp, Gala, and Ambrosia. Apple growers in New Zealand's South Island have begun growing Honeycrisp to supply consumers during the US off-season. The first batch of New Zealand-grown Honeycrisp apples introduced to the North American market has been branded using the "HoneyCrunch" registered trademark. According to the US Apple Association website, it is one of the fifteen most popular apple cultivars in the United States.
Biology and health sciences
Pomes
Plants
24004580
https://en.wikipedia.org/wiki/Remingtonocetidae
Remingtonocetidae
Remingtonocetidae is a diverse family of early aquatic mammals of the order Cetacea. The family is named after paleocetologist Remington Kellogg. Description Remingtonocetids have long and narrow skulls with the external nare openings located on the front of the skull. Their frontal shields are narrow and their orbits small. Their mouth has a convex palate and an incompletely fused mandibular symphysis. The dental formula is 3.1.4.3. The anterior teeth are flattened mediolaterally, making them appear shark-like. In the postcranial skeleton, the cervical vertebrae are relatively long and the sacrum is composed of four vertebrae, of which at least three are fused. The acetabular notch is narrow or closed, and on the femoral head the fovea is absent. Cranial fossils are common but dental remains are rare. The postcranial morphology is based entirely on a single specimen of Kutchicetus, which was small and had a long and muscular back and tail. Perhaps remingtonocetids swam like the South American giant otter, which swims with its long flat tail. With long, low bodies, relatively short limbs and an elongated rostrum, remingtonocetids looked like mammalian crocodiles, even more so than Ambulocetus. They could both walk on land and swim in the water, and most likely lived in a near-shore habitat. At least one genus, Dalanistes, had a marine diet. Remingtonocetids are often found in association with catfish and crocodilians, as well as protocetid whales and sirenians. They were probably independent of freshwater. Distribution Remingtonocetidae was long considered endemic to the northern coastline of the ancient Tethys Ocean (in present-day Pakistan and India) during the Eocene, but the discovery of Rayanistes in Egypt indicates that remingtonocetids had a broader distribution than previously thought. A single tooth recovered from the Castle Hayne Limestone of North Carolina, USA, closely resembles that of remingtonocetids; if it belongs to one, it indicates that they may have been found as far west as eastern North America, expanding their distribution across the Atlantic. Taxonomy The family Remingtonocetidae has generally been considered monophyletic since its establishment. It has been assigned variously to Odontoceti, to Remingtonocetoidea, to Archaeoceti, and to Cetacea by different authors. The name of the family was derived from the type genus Remingtonocetus, which was named after paleocetologist Remington Kellogg. In 2009, paleontologists Thewissen & Bajpai proposed the subfamily Andrewsiphiinae for the genera Andrewsiphius and Kutchicetus. Genera Remingtonocetus (type); Andrewsiphius; Attockicetus, the oldest genus; Dalanistes; Kutchicetus; Rayanistes Bebej, Zalmout, El-Aziz, Antar, and Gingerich, 2016
Biology and health sciences
Cetaceans
Animals
286775
https://en.wikipedia.org/wiki/Touchpad
Touchpad
A touchpad or trackpad is a type of pointing device. Its largest component is a tactile sensor: an electronic device with a flat surface that detects the motion and position of a user's fingers and translates them to 2D motion, to control a pointer in a graphical user interface on a computer screen. Touchpads are common on laptop computers, in contrast with desktop computers, where mice are more prevalent. Trackpads are sometimes used with desktop setups where desk space is scarce. Wireless touchpads are also available as detached accessories. Because trackpads can be made small, they were additionally used on personal digital assistants (PDAs) and some portable media players. Operation and function Touchpads operate in several ways, including capacitive sensing or resistive touchscreen technology. The most common technology used in the 2010s senses the change of capacitance where a finger touches the pad. Capacitance-based touchpads will not sense the tip of a pencil or other similar ungrounded or non-conducting implements. Fingers insulated by a glove may also be problematic, and capacitive touchpads are rarely used as pointing devices for medical hardware. Like touchscreens, touchpads sense absolute position, but their resolution is limited by their size. For common use as a pointing device, the dragging motion of a finger is translated into a finer, relative motion of the cursor on the display, analogous to the handling of a mouse that is lifted and put back on a surface. Hardware buttons equivalent to a standard mouse's left and right buttons are sometimes positioned adjacent to the touchpad. Some touchpads and associated device driver software may interpret tapping the pad as a mouse click, and a tap followed by a continuous pointing motion (a "click-and-a-half") can indicate dragging. Tactile touchpads allow for clicking and dragging by incorporating button functionality into the surface of the touchpad itself. To select, one presses down on the touchpad instead of a physical button. To drag, instead of performing the "click-and-a-half" technique, the user presses down while on the object, drags without releasing pressure, and lets go when done. Touchpad drivers can also allow the use of multiple fingers to emulate the other mouse buttons (commonly two-finger tapping for the right button). Touchpads are called clickpads if they rely on software buttons rather than physical buttons. Physically, the whole clickpad forms a single button; logically, the driver interprets a click as a left or right button click depending on the placement of the fingers. Some touchpads have "hotspots", locations on the touchpad used for functionality beyond a mouse. For example, on certain touchpads, moving the finger along an edge of the touch pad will act as a scroll wheel, controlling the scrollbar and scrolling the window that has the focus, vertically or horizontally. Many touchpads use two-finger dragging for scrolling. Also, some touchpad drivers support tap zones, regions where a tap will execute a function, for example, pausing a media player or launching an application. All of these functions are implemented in the touchpad device driver software, and can be disabled. History In 1980, Xerox offered one of the first, if not the first, touchpads on a computer system with their Xerox 860, a word processing workstation aimed at medium- and large-sized businesses.
Embedded on the Xerox 860's keyboard, to the right of the keys, was the circular touchpad, which Xerox dubbed the "Cat" (short for capacitance-activated transducer). Xerox offered the Cat as an alternative input method for selecting strings of text to copy, delete, insert, or move around the document. By 1982, Apollo desktop computers were equipped with a touchpad on the right side of the keyboard. Introduced a year later, in 1983, the first battery-powered clamshell laptop, the Gavilan SC, included a touchpad, which was mounted above its keyboard rather than below it, which later became the norm. Psion's MC 200/400/600/WORD Series, introduced in 1989, came with a new mouse-replacing input device similar to a touchpad, although more closely resembling a graphics tablet, as the cursor was positioned by clicking on a specific point on the pad, instead of moving it in the direction of a stroke. Laptops with touchpads were launched by Olivetti and Triumph-Adler in 1992. Cirque introduced the first widely available touchpad, branded as GlidePoint, in 1994. Apple introduced touchpads with the modern placement in the PowerBook 500 series in 1994, using Cirque's GlidePoint technology; Apple refers to this device as a "trackpad", and it replaced the trackball of previous PowerBook models. Since 2008, Apple's revisions of the MacBook and MacBook Pro have incorporated a "Tactile Touchpad" design with a button integrated into the tracking surface (the lower part of the touchpad surface acts as a clickable button). Another early adopter of the GlidePoint pointing device was Sharp. Later, Synaptics introduced their touchpad into the marketplace, branded the TouchPad, and Epson was an early adopter of this product with their ActionNote. As touchpads began to be introduced in laptops in the 1990s, there was often confusion as to what the product should be called. No consistent term was used, and references varied, such as: glidepoint, touch sensitive input device, touchpad, trackpad, and pointing device. Users were often presented with the option to purchase a pointing stick, touchpad, or trackball. Combinations of the devices were common, though touchpads and trackballs were rarely included together. Since the early 2000s, touchpads have become the dominant laptop pointing device, as most consumer laptops produced during this period and beyond include only touchpads, displacing the pointing stick. Use in devices Touchpads are primarily used in self-contained portable laptop computers and do not require a flat surface near the machine. The touchpad is close to the keyboard, and relatively short finger movements are required to move the cursor across the display screen; while advantageous, this also makes it possible for a user's palm or wrist to move the mouse cursor accidentally while typing. Laptops today feature multitouch touchpads that can sense in some cases up to five fingers simultaneously, providing more options for input, such as the ability to bring up the context menu by tapping two fingers, dragging two fingers for scrolling, or gestures for zoom in/out or rotate. Touchpads with physical buttons are now an option only on high-end business and professional laptops. One-dimensional touchpads are the primary control interface for menu navigation on iPod Classic portable music players, and an additional input method on some Wacom digitizer tablets, where they are referred to as "click wheels", since they only sense motion along one axis, which is wrapped around like a wheel.
Creative Labs also uses a touchpad for their Zen line of MP3 players, beginning with the Zen Touch. The second-generation Microsoft Zune product line (the Zune 80/120 and Zune 4/8) uses touch for the Zune Pad. Touchpads also exist for desktop computers as external peripherals, albeit rarely seen; a touchpad layer can also be integrated into a graphics tablet as an additional input option. External computer keyboards can be equipped with integrated touchpads (particularly keyboards oriented towards HTPC use), and some keyboards have only a touch input surface instead of hardware buttons (a typical solution for clean rooms). Optical trackpads are primarily used in ultraportable electronics; some handheld computers and early smartphones were equipped with optical trackpads. On September 9, 2024, Apple unveiled the iPhone 16, iPhone 16 Plus, iPhone 16 Pro and iPhone 16 Pro Max, which feature a one-dimensional trackpad next to the side button, called "Camera Control", giving users an easier way to take photos and videos: a force sensor distinguishes between a firm press and a light press, and a capacitive touch sensor detects a sliding finger for adjusting the zoom, exposure or depth of field. Theory of operation There are two principal means by which touchpads work: the matrix approach and the capacitive shunt method. In the matrix approach, a series of conductors are arranged in an array of parallel lines in two layers, separated by an insulator and crossing each other at right angles to form a grid. A high-frequency signal is applied sequentially between pairs in this two-dimensional grid array. The current that passes between the nodes is proportional to the capacitance. When a virtual ground, such as a finger, is placed over one of the intersections of the conductive layers, some of the electric field is shunted to this ground point, resulting in a change in the apparent capacitance at that location. This method was covered by a patent awarded to George Gerpheide in April 1994. The capacitive shunt method, described in an application note by the manufacturer Analog Devices, senses the change in capacitance between a transmitter and receiver that are on opposite sides of the sensor. The transmitter creates an electric field which oscillates at 200–300 kHz. If a ground point, such as the finger, is placed between the transmitter and receiver, some of the field lines are shunted away, decreasing the apparent capacitance. Trackpads such as those found in some BlackBerry smartphones work optically, like an optical computer mouse. Manufacturing Major manufacturers include Alps Electric, Elan Microelectronics, Cirque Corporation and Synaptics.
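The matrix approach described above can be illustrated with a toy Python model. Everything here is an illustrative assumption (the grid size, the Gaussian "finger" and the centroid interpolation), not any vendor's algorithm: a grounded finger lowers the apparent capacitance near the nodes it covers, and interpolating over the drop recovers a position finer than the electrode pitch:

```python
import numpy as np

GRID = 8          # 8 x 8 sensing nodes
PITCH_MM = 5.0    # electrode spacing in millimetres

def scan_capacitance(finger_xy):
    """Toy mutual-capacitance scan: the finger shunts field lines,
    producing a Gaussian-shaped dip in coupling around its position."""
    ys, xs = np.mgrid[0:GRID, 0:GRID]
    d2 = (xs * PITCH_MM - finger_xy[0])**2 + (ys * PITCH_MM - finger_xy[1])**2
    return 1.0 - 0.5 * np.exp(-d2 / (2 * 4.0**2))   # baseline 1.0 (arbitrary units)

def locate(cap):
    """Estimate the touch point as the centroid of the capacitance drop."""
    drop = 1.0 - cap
    ys, xs = np.mgrid[0:GRID, 0:GRID]
    x = (xs * drop).sum() / drop.sum() * PITCH_MM
    y = (ys * drop).sum() / drop.sum() * PITCH_MM
    return x, y

print(locate(scan_capacitance((13.2, 21.7))))   # approximately (13.2, 21.7)
```

The recovered coordinates fall between electrodes, which is how a controller scanning a coarse grid can still report smooth cursor motion.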
Technology
User interface
null
286788
https://en.wikipedia.org/wiki/Galilean%20invariance
Galilean invariance
Galilean invariance or Galilean relativity states that the laws of motion are the same in all inertial frames of reference. Galileo Galilei first described this principle in 1632 in his Dialogue Concerning the Two Chief World Systems using the example of a ship travelling at constant velocity, without rocking, on a smooth sea; any observer below the deck would not be able to tell whether the ship was moving or stationary. Formulation Specifically, the term Galilean invariance today usually refers to this principle as applied to Newtonian mechanics, that is, Newton's laws of motion hold in all frames related to one another by a Galilean transformation. In other words, all frames related to one another by such a transformation are inertial (meaning, Newton's equation of motion is valid in these frames). In this context it is sometimes called Newtonian relativity. Among the axioms from Newton's theory are: There exists an absolute space, in which Newton's laws are true. An inertial frame is a reference frame in relative uniform motion to absolute space. All inertial frames share a universal time. Galilean relativity can be shown as follows. Consider two inertial frames S and S'. A physical event will have position coordinates r = (x, y, z) and time t in S, and r' = (x', y', z') and time t' in S'. By the second axiom above, one can synchronize the clocks in the two frames and assume t = t'. Suppose S' is in relative uniform motion to S with velocity v. Consider a point object whose position is given by functions r'(t) in S' and r(t) in S. We see that r'(t) = r(t) - vt. The velocity of the particle is given by the time derivative of the position: u'(t) = dr'(t)/dt = u(t) - v. Another differentiation gives the acceleration in the two frames: a'(t) = du'(t)/dt = a(t). It is this simple but crucial result that implies Galilean relativity. Assuming that mass is invariant in all inertial frames, the above equation shows that Newton's laws of mechanics, if valid in one frame, must hold for all frames. But Newton's laws are assumed to hold in absolute space, and therefore Galilean relativity holds. Newton's theory versus special relativity A comparison can be made between Newtonian relativity and special relativity. Some of the assumptions and properties of Newton's theory are: The existence of infinitely many inertial frames. Each frame is of infinite size (the entire universe may be covered by many linearly equivalent frames). Any two frames may be in relative uniform motion. (The relativistic nature of mechanics derived above shows that the absolute space assumption is not necessary.) The inertial frames may move in all possible relative forms of uniform motion. There is a universal, or absolute, notion of elapsed time. Two inertial frames are related by a Galilean transformation. In all inertial frames, Newton's laws, and gravity, hold. In comparison, the corresponding statements from special relativity are as follows: The existence, as well, of infinitely many non-inertial frames, each of which referenced to (and physically determined by) a unique set of spacetime coordinates. Each frame may be of infinite size, but its definition is always determined locally by contextual physical conditions. Any two frames may be in relative non-uniform motion (as long as it is assumed that this condition of relative motion implies a relativistic dynamical effect – and later, mechanical effect in general relativity – between both frames).
Rather than freely allowing all conditions of relative uniform motion between frames of reference, the relative velocity between two inertial frames becomes bounded above by the speed of light. Instead of universal elapsed time, each inertial frame possesses its own notion of elapsed time. The Galilean transformations are replaced by Lorentz transformations. In all inertial frames, all laws of physics are the same. Both theories assume the existence of inertial frames. In practice, the size of the frames in which they remain valid differs greatly, depending on gravitational tidal forces. In the appropriate context, a local Newtonian inertial frame, where Newton's theory remains a good model, extends to roughly 10^7 light years. In special relativity, one considers Einstein's cabins, cabins that fall freely in a gravitational field. According to Einstein's thought experiment, a man in such a cabin experiences (to a good approximation) no gravity and therefore the cabin is an approximate inertial frame. However, one has to assume that the size of the cabin is sufficiently small so that the gravitational field is approximately parallel in its interior. This can greatly reduce the sizes of such approximate frames, in comparison to Newtonian frames. For example, an artificial satellite orbiting the Earth can be viewed as a cabin. However, reasonably sensitive instruments could detect "microgravity" in such a situation because the "lines of force" of the Earth's gravitational field converge. In general, the convergence of gravitational fields in the universe dictates the scale at which one might consider such (local) inertial frames. For example, a spaceship falling into a black hole or neutron star would (at a certain distance) be subjected to tidal forces strong enough to crush it in width and tear it apart in length. In comparison, however, such forces might only be uncomfortable for the astronauts inside (compressing their joints, making it difficult to extend their limbs in any direction perpendicular to the gravity field of the star). Reducing the scale further, the forces at that distance might have almost no effects at all on a mouse. This illustrates the idea that all freely falling frames are locally inertial (acceleration and gravity-free) if the scale is chosen correctly. Electromagnetism There are two consistent Galilean transformations that may be used with electromagnetic fields in certain situations. A transformation is not consistent if transforming first by a velocity v1 and then by a velocity v2 gives a different result from transforming directly by v1 + v2. A consistent transformation will produce the same results when transforming to a new velocity in one step or multiple steps. It is not possible to have a consistent Galilean transformation that transforms both the magnetic and electric fields. There are useful consistent Galilean transformations that may be applied whenever either the magnetic field or the electric field is dominant. Magnetic field system Magnetic field systems are those systems in which the electric field in the initial frame of reference is insignificant, but the magnetic field is strong. When the magnetic field is dominant and the relative velocity, v, is low, then the following transformation may be useful: E' = E + v × B, with B' = B, H' = H, Jf' = Jf, and M' = M, where Jf is the free current density and M is the magnetization density. The electric field is transformed under this transformation when changing frames of reference, but the magnetic field and related quantities are unchanged. An example of this situation is a wire moving in a magnetic field, such as would occur in an ordinary generator or motor.
The transformed electric field in the moving frame of reference could induce current in the wire. Electric field system Electric field systems are those systems in which the magnetic field in the initial frame of reference is insignificant, but the electric field is strong. When the electric field is dominant and the relative velocity, v, is low, then the following transformation may be useful: H' = H - v × D and Jf' = Jf - ρf v, with E' = E, D' = D, P' = P, and ρf' = ρf, where ρf is the free charge density and P is the polarization density. The magnetic field and free current density are transformed under this transformation when changing frames of reference, but the electric field and related quantities are unchanged. Work, kinetic energy, and momentum Because the distance covered while applying a force to an object depends on the inertial frame of reference, so does the work done. Due to Newton's law of reciprocal actions there is a reaction force, which does work that depends on the inertial frame of reference in an opposite way. The total work done is independent of the inertial frame of reference. Correspondingly the kinetic energy of an object, and even the change in this energy due to a change in velocity, depends on the inertial frame of reference. The total kinetic energy of an isolated system also depends on the inertial frame of reference: it is the sum of the total kinetic energy in a center-of-momentum frame and the kinetic energy the total mass would have if it were concentrated in the center of mass. Due to the conservation of momentum the latter does not change with time, so changes with time of the total kinetic energy do not depend on the inertial frame of reference. By contrast, while the momentum of an object also depends on the inertial frame of reference, its change due to a change in velocity does not.
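A quick numeric check of these last statements (a self-contained sketch; the mass, velocities, and frame velocity are arbitrary example values) shows that kinetic energy and its change are frame-dependent, while the change in momentum is not:

```python
m = 2.0            # mass (kg)
u1, u2 = 3.0, 5.0  # velocity before and after an impulse, frame S (m/s)
v = 10.0           # velocity of frame S' relative to S (m/s)

def ke(m, u):
    """Kinetic energy of a point mass."""
    return 0.5 * m * u**2

# Kinetic energy, and even its change, depend on the frame:
print(ke(m, u1), ke(m, u2))          # 9.0  25.0  (frame S: change +16)
print(ke(m, u1 - v), ke(m, u2 - v))  # 49.0 25.0  (frame S': change -24)

# The change in momentum is the same in both frames:
print(m * (u2 - u1))                 # 4.0 (frame S)
print(m * ((u2 - v) - (u1 - v)))     # 4.0 (frame S')
```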
Physical sciences
Classical mechanics
Physics
286790
https://en.wikipedia.org/wiki/Trauma%20center
Trauma center
A trauma center, or trauma centre, is a hospital equipped and staffed to provide care for patients suffering from major traumatic injuries such as falls, motor vehicle collisions, or gunshot wounds. The term "trauma center" may be used incorrectly to refer to an emergency department (also known as a "casualty department" or "accident and emergency") that lacks the presence of specialized services or certification to care for victims of major trauma. In the United States, a hospital can receive trauma center status by meeting specific criteria established by the American College of Surgeons (ACS) and passing a site review by the Verification Review Committee. Official designation as a trauma center is determined by individual state law provisions. Trauma centers vary in their specific capabilities and are identified by "Level" designation, Level I (Level-1) being the highest and Level III (Level-3) being the lowest (some states have four or five designated levels). The highest levels of trauma centers have access to specialist medical and nursing care, including emergency medicine, trauma surgery, critical care, neurosurgery, orthopedic surgery, anesthesiology, and radiology, as well as a wide variety of highly specialized and sophisticated surgical and diagnostic equipment. Lower levels of trauma centers may be able to provide only initial care and stabilization of a traumatic injury and arrange for transfer of the patient to a higher level of trauma care. The operation of a trauma center is often expensive and some areas may be underserved by trauma centers because of that expense. As there is no way to schedule the need for emergency services, patient traffic at trauma centers can vary widely. A trauma center may have a helipad for receiving patients that have been airlifted to the hospital. In some cases, persons injured in remote areas and transported to a distant trauma center by helicopter can receive faster and better medical care than if they had been transported by ground ambulance to a closer hospital that does not have a designated trauma center. History United Kingdom Trauma centres grew into existence out of the realisation that traumatic injury is a disease process unto itself requiring specialised and experienced multidisciplinary treatment and specialised resources. The world's first trauma centre, the first hospital to be established specifically to treat injured rather than ill patients, was the Birmingham Accident Hospital, which opened in Birmingham, England in 1941 after a series of studies found that the treatment of injured persons within England was inadequate. By 1947, the hospital had three trauma teams, each including two surgeons and an anaesthetist, and a burns team with three surgeons. The hospital became part of the National Health Service in its formation in July 1948 and closed in 1993. The NHS now has 27 major trauma centres established across England, four in Scotland, and one planned in Wales. United States According to the CDC, injuries are the leading cause of death for American children and young adults ages 1–19. The leading causes of trauma are motor vehicle collisions, falls, and assaults with a deadly weapon. In the United States, Robert J. Baker and Robert J. Freeark established the first civilian Shock Trauma Unit at Cook County Hospital (opened 1834) in Chicago, Illinois on March 16, 1966. 
The concept of a shock trauma center was also developed at the University of Maryland, Baltimore, in the 1950s and 1960s by thoracic surgeon and shock researcher R Adams Cowley, who founded what became the Shock Trauma Center in Baltimore, Maryland, on July 1, 1966. The R Adams Cowley Shock Trauma Center and the Cook County Hospital trauma center in Chicago (opened in 1966) are among the first shock trauma centers in the world. David R. Boyd interned at Cook County Hospital from 1963 to 1964 before being drafted into the Army of the United States of America. Upon his release from the Army, Boyd became the first shock-trauma fellow at the R Adams Cowley Shock Trauma Center, and then went on to develop the National System for Emergency Medical Services under President Ford. In 1968 the American Trauma Society was created by various co-founders, including R Adams Cowley and Rene Joyeuse, who saw the need for increased education and training of emergency providers and for nationwide quality trauma care. Canada According to the founder of the Trauma Unit at Sunnybrook Health Sciences Centre in Toronto, Ontario, Marvin Tile, "the nature of injuries at Sunnybrook has changed over the years. When the trauma centre first opened in 1976, about 98 per cent of patients suffered from blunt-force trauma caused by accidents and falls. Now, as many as 20 per cent of patients arrive with gunshot and knife wounds". Fraser Health Authority in British Columbia, with trauma services located at Royal Columbian Hospital and Abbotsford Regional Hospital, serves the BC area: "Each year, Fraser Health treats almost 130,000 trauma patients as part of the integrated B.C. trauma system". Definitions in the United States In the United States, trauma centers are certified by the American College of Surgeons (ACS) or local state governments, from Level I (comprehensive service) to Level III (limited care). The different levels refer to the types of resources available in a trauma center and the number of patients admitted yearly. These are categories that define national standards for trauma care in hospitals. Level I through Level II designations are also given adult or pediatric designations. Additionally, some states have their own trauma-center rankings separate from that of the ACS. These levels may range from Level I to Level IV. Some hospitals are less formally designated Level V. The ACS does not officially designate hospitals as trauma centers. Numerous U.S. hospitals that are not verified by ACS claim trauma center designation. Most states have legislation that determines the process for designation of trauma centers within that state. The ACS describes this responsibility as "a geopolitical process by which empowered entities, government or otherwise, are authorized to designate." The ACS's self-appointed mission is limited to confirming and reporting on any given hospital's ability to comply with the ACS standard of care known as Resources for Optimal Care of the Injured Patient. The Trauma Information Exchange Program (TIEP) is a program of the American Trauma Society in collaboration with the Johns Hopkins Center for Injury Research and Policy and is funded by the Centers for Disease Control and Prevention. TIEP maintains an inventory of trauma centers in the US, collects data and develops information related to the causes, treatment and outcomes of injury, and facilitates the exchange of information among trauma care institutions, care providers, researchers, payers and policymakers.
A trauma center is a hospital that is designated by a state or local authority or is verified by the American College of Surgeons. Level I A Level I trauma center provides the highest level of surgical care to trauma patients. Being treated at a Level I trauma center can reduce mortality by 25% compared to a non-trauma center. It has a full range of specialists and equipment available 24 hours a day and admits a minimum required annual volume of severely injured patients. A Level I trauma center is required to have a certain number of the following on duty 24 hours a day at the hospital: surgeons, emergency physicians, anesthesiologists, nurses, and respiratory therapists, as well as an education program and preventive and outreach programs. Key elements include 24-hour in-house coverage by general surgeons and prompt availability of care in varying specialties, such as orthopedic surgery, cardiothoracic surgery, neurosurgery, plastic surgery, anesthesiology, emergency medicine, radiology, internal medicine, otolaryngology, oral and maxillofacial surgery, and critical care, which are needed to adequately respond to and care for various forms of trauma that a patient may suffer, as well as to provide rehabilitation services. Most Level I trauma centers are teaching hospitals/campuses. Additionally, a Level I center has a program of research, is a leader in trauma education and injury prevention, and is a referral resource for communities in nearby regions. Level II A Level II trauma center works in collaboration with a Level I center. It provides comprehensive trauma care and supplements the clinical expertise of a Level I institution. It provides 24-hour availability of all essential specialties, personnel, and equipment. Level II centers often possess critical care services capable of caring for almost all injury types indefinitely. Minimum volume requirements may depend on local conditions. Such institutions are not required to have an ongoing program of research or a surgical residency program. Level III A Level III trauma center does not have the full availability of specialists but has resources for emergency resuscitation, surgery, and intensive care of most trauma patients. A Level III center has transfer agreements with Level I or Level II trauma centers that provide back-up resources for the care of patients with exceptionally severe injuries, such as multiple trauma. Level IV A Level IV trauma center exists in some states in which the resources do not exist for a Level III trauma center. It provides initial evaluation, stabilization, diagnostic capabilities, and transfer to a higher level of care. It may also provide surgery and critical-care services, as defined in the scope of services for trauma care. A trauma-trained nurse is immediately available, and physicians are available upon the patient's arrival in the Emergency Department. Transfer agreements exist with other trauma centers of higher levels, for use when conditions warrant a transfer. Level V A Level V trauma center provides initial evaluation, stabilization, diagnostic capabilities, and transfer to a higher level of care. They may provide surgical and critical-care services, as defined in the service's scope of trauma care services. A trauma-trained nurse is immediately available, and physicians are available upon patient arrival in the emergency department. If not open 24 hours daily, the facility must have an after-hours trauma response protocol.
Pediatric trauma centers A facility can be designated an adult trauma center, a pediatric trauma center, or an adult and pediatric trauma center. If a hospital provides trauma care to both adult and pediatric patients, the level designation may not be the same for each group. For example, a Level I adult trauma center may also be a Level II pediatric trauma center, because pediatric trauma surgery is a specialty unto itself. Adult trauma surgeons are not generally specialized in providing surgical trauma care to children, and vice versa, and the difference in practice is significant. In contrast to adult trauma centers, pediatric trauma centers have only two ratings, Level I and Level II.
Biology and health sciences
Health facilities
Health
286802
https://en.wikipedia.org/wiki/Catalytic%20converter
Catalytic converter
A catalytic converter is an exhaust emission control device which converts toxic gases and pollutants in exhaust gas from an internal combustion engine into less-toxic pollutants by catalyzing a redox reaction. Catalytic converters are usually used with internal combustion engines fueled by gasoline or diesel, including lean-burn engines, and sometimes on kerosene heaters and stoves. The first widespread introduction of catalytic converters was in the United States automobile market. To comply with the U.S. Environmental Protection Agency's stricter regulation of exhaust emissions, most gasoline-powered vehicles starting with the 1975 model year are equipped with catalytic converters. These "two-way" converters combine oxygen with carbon monoxide (CO) and unburned hydrocarbons (HC) to produce carbon dioxide (CO2) and water (H2O). Although two-way converters on gasoline engines were rendered obsolete in 1981 by "three-way" converters that also reduce oxides of nitrogen (NOx), they are still used on lean-burn engines (including diesel engines, which typically use lean combustion) to oxidize particulate matter and hydrocarbon emissions, as three-way converters require fuel-rich or stoichiometric combustion to successfully reduce NOx. Although catalytic converters are most commonly applied to exhaust systems in automobiles, they are also used on electrical generators, forklifts, mining equipment, trucks, buses, locomotives, motorcycles, and ships. They are even used on some wood stoves to control emissions. This is usually in response to government regulation, either through environmental regulation or through health and safety regulations. History Catalytic converter prototypes were first designed in France at the end of the 19th century, when only a few thousand "oil cars" were on the roads; these prototypes had inert clay-based materials coated with platinum, rhodium, and palladium and sealed into a double metallic cylinder. A few decades later, a catalytic converter was patented by Eugene Houdry, a French mechanical engineer. Houdry was an expert in catalytic oil refining, having invented the catalytic cracking process that all modern refining is based on today. Houdry moved to the United States in 1930 to live near the refineries in the Philadelphia area and develop his catalytic refining process. When the results of early studies of smog in Los Angeles were published, Houdry became concerned about the role of smokestack exhaust and automobile exhaust in air pollution and founded a company called Oxy-Catalyst. Houdry first developed catalytic converters for smokestacks, called "cats" for short, and later developed catalytic converters for warehouse forklifts that used low-grade, unleaded gasoline. In the mid-1950s, he began research to develop catalytic converters for gasoline engines used on cars and was awarded United States Patent 2,742,437 for his work. Catalytic converters were further developed by a series of engineers including Carl D. Keith, John J. Mooney, Antonio Eleazar, and Phillip Messina at Engelhard Corporation, creating the first production catalytic converter in 1973. The first widespread introduction of catalytic converters was in the United States automobile market. To comply with the U.S. Environmental Protection Agency's new exhaust emissions regulations, most gasoline-powered vehicles manufactured from 1975 onwards are equipped with catalytic converters.
Early catalytic converters were "two-way", combining oxygen with carbon monoxide (CO) and unburned hydrocarbons (HC, chemical compounds in fuel of the form CmHn) to produce carbon dioxide (CO2) and water (H2O). These stringent emission control regulations also resulted in the removal of the antiknock agent tetraethyl lead from automotive gasoline, to reduce lead in the air. Lead and its compounds are catalyst poisons and foul catalytic converters by coating the catalyst's surface. Requiring the removal of lead allowed the use of catalytic converters to meet the other emission standards in the regulations. To lower harmful emissions, a twin-catalyst system was developed in the 1970s; this added a separate (rhodium/platinum) catalyst which reduced NOx ahead of the air pump, after which a two-way catalytic converter (palladium/platinum) removed HC and CO. This cumbersome and expensive system was soon made redundant, after it was noted that under some conditions the initial catalyst also removed HC and CO. This led to the development of the three-way catalyst, made possible by electronics and engine management developments. William C. Pfefferle developed a catalytic combustor for gas turbines in the early 1970s, allowing combustion without significant formation of nitrogen oxides and carbon monoxide. Four-way catalytic converters have also been developed, which additionally remove particulates from engine exhaust; since most of these particulates are unburned hydrocarbons, they can be burned to convert them into carbon dioxide. Construction The catalytic converter's construction is as follows: The catalyst support or substrate. For automotive catalytic converters, the core is usually a ceramic monolith that has a honeycomb structure (commonly square, not hexagonal). (Prior to the mid-1980s, the catalyst material was deposited on a packed bed of alumina pellets in early GM applications.) Metallic foil monoliths made of Kanthal (FeCrAl) are used in applications where particularly high heat resistance is required. The substrate is structured to produce a large surface area. The cordierite ceramic substrate used in most catalytic converters was invented by Rodney Bagley, Irwin Lachman, and Ronald Lewis at Corning Glass, for which they were inducted into the National Inventors Hall of Fame in 2002. The washcoat. A washcoat is a carrier for the catalytic materials and is used to disperse the materials over a large surface area. Aluminum oxide, titanium dioxide, or silicon dioxide (e.g., colloidal silica), or a mixture of silica and alumina, can be used. The catalytic materials are suspended in the washcoat prior to applying to the core. Washcoat materials are selected to form a rough, irregular surface, which increases the surface area compared to the smooth surface of the bare substrate. Ceria or ceria-zirconia. These oxides are mainly added as oxygen storage promoters. The catalyst itself is most often a mix of precious metals, mostly from the platinum group. Platinum is the most active catalyst and is widely used, but is not suitable for all applications because of unwanted additional reactions and historically high cost. Palladium and rhodium are two other precious metals used, though as of February 2023, platinum has become the least expensive of the platinum group metals. Rhodium is used as a reduction catalyst, palladium is used as an oxidation catalyst, and platinum is used both for reduction and oxidation. Cerium, iron, manganese, and nickel are also used, although each has limitations.
Nickel is not legal for use in the European Union because it can react with carbon monoxide to form toxic nickel tetracarbonyl. Copper can be used in most countries, with a notable exception in Japan. Upon failure, a catalytic converter can be recycled into scrap. The precious metals inside the converter, including platinum, palladium, and rhodium, are extracted. Placement of catalytic converters Catalytic converters require a high operating temperature (several hundred degrees Celsius) to work effectively. Therefore, they are placed as close to the engine as possible, or one or more smaller catalytic converters (known as "pre-cats") are placed immediately after the exhaust manifold. Types Two-way A 2-way (or "oxidation", sometimes called an "oxi-cat") catalytic converter has two simultaneous tasks: Oxidation of carbon monoxide to carbon dioxide: 2 CO + O2 → 2 CO2. Oxidation of hydrocarbons (unburnt and partially burned fuel) to carbon dioxide and water: CxH2x+2 + [(3x+1)/2] O2 → x CO2 + (x+1) H2O (a combustion reaction). The two-way catalytic converter is widely used on diesel engines to reduce hydrocarbon and carbon monoxide emissions. They were also used on gasoline engines in American and Canadian automobile markets until 1981. Because of their inability to control oxides of nitrogen, manufacturers briefly installed twin catalyst systems, with an NOx-reducing rhodium/platinum catalyst ahead of the air pump, which led to the development of the three-way catalytic converter. The two-way catalytic converter also continued to be used on certain, lower-cost cars in some markets such as Europe, where emissions were not universally regulated until the introduction of the Euro 3 emissions standard in 2000. Three-way The three-way catalytic converters have the additional advantage of controlling the emission of nitric oxide (NO) and nitrogen dioxide (NO2) (together abbreviated as NOx, and not to be confused with nitrous oxide (N2O)). NOx gases are precursors to acid rain and smog. Since 1981, the three-way (oxidation-reduction) catalytic converters have been used in vehicle emission control systems in the United States and Canada; many other countries have also adopted stringent vehicle emission regulations that in effect require three-way converters on gasoline-powered vehicles. The reduction and oxidation catalysts are typically contained in a common housing; however, in some instances, they may be housed separately. A three-way catalytic converter does three simultaneous tasks: Reduction of nitrogen oxides to nitrogen (N2), for example 2 CO + 2 NO → 2 CO2 + N2. Oxidation of carbon monoxide to carbon dioxide: 2 CO + O2 → 2 CO2. Oxidation of unburnt hydrocarbons to carbon dioxide and water: CxH2x+2 + [(3x+1)/2] O2 → x CO2 + (x+1) H2O. These three reactions occur most efficiently when the catalytic converter receives exhaust from an engine running slightly above the stoichiometric point. For gasoline combustion, this ratio is between 14.6 and 14.8 parts air to one part fuel, by weight. The ratio for autogas (or liquefied petroleum gas, LPG), natural gas, and ethanol fuels can vary significantly for each, notably so with oxygenated or alcohol-based fuels, with E85 requiring approximately 34% more fuel; using those fuels requires modified fuel system tuning and components. Engines fitted with regulated 3-way catalytic converters are equipped with a computerized closed-loop feedback fuel injection system using one or more oxygen sensors (also known as lambda sensors). Other variants, which combined three-way converters with carburetors equipped with feedback mixture control, were also used.
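As a worked example of the stoichiometric arithmetic above (a minimal sketch; the air mass is an arbitrary illustrative figure, and the E85 ratio is derived from the approximate 34% figure quoted above):

```python
def fuel_mass(air_mass, afr):
    """Fuel mass needed to hold a given air-fuel ratio (same mass units)."""
    return air_mass / afr

air = 1000.0  # grams of intake air

# Gasoline near the stoichiometric point (~14.7:1 by weight):
print(fuel_mass(air, 14.7))         # ~68.0 g of gasoline

# E85 needs roughly 34% more fuel, i.e. an AFR of about 14.7/1.34 ~ 11:1:
print(fuel_mass(air, 14.7 / 1.34))  # ~91.2 g of E85
```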
An unregulated three-way converter features the same chemical processes but without the oxygen sensor, which meant higher emissions, particularly under partial loads. These were low-cost solutions, typically used for retrofitting to older cars or for smaller, cheaper cars. Three-way converters are effective when the engine is operated within a narrow band of air-fuel ratios near the stoichiometric point. Total conversion efficiency falls very rapidly when the engine is operated outside of this band. Slightly lean of stoichiometric, the exhaust gases from the engine contain excess oxygen, the production of NOx by the engine increases, and the efficiency of the catalyst at reducing NOx falls off rapidly. However, the conversion of HC and CO is very efficient due to the available oxygen, oxidizing to H2O and CO2. Slightly rich of stoichiometric, the production of CO and unburnt HC by the engine starts to increase dramatically, available oxygen decreases, and the efficiency of the catalyst for oxidizing CO and HC decreases significantly, especially as stored oxygen becomes depleted. However, the efficiency of the catalyst at reducing NOx is good, and the production of NOx by the engine decreases. To maintain catalyst efficiency, the air-fuel ratio must stay close to stoichiometric and not remain rich or lean for too long. Closed-loop engine control systems are used for effective operation of three-way catalytic converters because of this continuous rich-lean balance required for effective NOx reduction and HC + CO oxidation. The control system allows the catalyst to release oxygen during slightly rich operating conditions, which oxidizes CO and HC under conditions that also favor the reduction of NOx. Before the stored oxygen is depleted, the control system shifts the air-fuel ratio to become slightly lean, improving HC and CO oxidation while storing additional oxygen in the catalyst material, at a small penalty in NOx reduction efficiency. Then the air-fuel mixture is brought back to slightly rich, at a small penalty in CO and HC oxidation efficiency, and the cycle repeats. Efficiency is improved when this oscillation around the stoichiometric point is small and carefully controlled. Closed-loop control under light to moderate load is accomplished by using one or more oxygen sensors in the exhaust system. When oxygen is detected by the sensor, the air-fuel ratio is lean of stoichiometric, and when oxygen is not detected, it is rich. The control system adjusts the rate of fuel being injected into the engine based on this signal to keep the air-fuel ratio near the stoichiometric point in order to maximize the catalyst conversion efficiency. The control algorithm is also affected by the time delay between the adjustment of the fuel flow rate and the sensing of the changed air-fuel ratio by the sensor, as well as the sigmoidal response of the oxygen sensors. Typical control systems are designed to rapidly sweep the air-fuel ratio such that it oscillates slightly around the stoichiometric point, staying near the optimal efficiency point while managing the levels of stored oxygen and unburnt HC. Closed-loop control is often not used during high-load/maximum-power operation, when an increase in emissions is permitted and a rich mixture is commanded to increase power and prevent exhaust gas temperature from exceeding design limits. This presents a challenge for control system and catalyst design. During such operations, large amounts of unburnt HC are produced by the engine, well beyond the capacity of the catalyst to release oxygen.
The surface of the catalyst quickly becomes saturated with HC. When returning to lower power output and leaner air-fuel ratios, the control system must prevent excessive oxygen from reaching the catalyst too quickly, as this will rapidly burn the HC in the already hot catalyst, potentially exceeding the design temperature limit of the catalyst. Excessive catalyst temperature can prematurely age the catalyst, reducing its efficiency before reaching its design lifetime. Excessive catalyst temperature can also be caused by cylinder misfire, which continuously flows unburnt HC combined with oxygen to the hot catalyst, burning in the catalyst and increasing its temperature. Unwanted reactions Unwanted reactions result in the formation of hydrogen sulfide and ammonia, which poison catalysts. Nickel or manganese is sometimes added to the washcoat to limit hydrogen-sulfide emissions. Sulfur-free or low-sulfur fuels eliminate or minimize problems with hydrogen sulfide. Diesel engines For compression-ignition (i.e., diesel) engines, the most commonly used catalytic converter is the diesel oxidation catalyst (DOC). DOCs contain palladium or platinum supported on alumina. This catalyst converts particulate matter (PM), hydrocarbons, and carbon monoxide to carbon dioxide and water. These converters often operate at 90 percent efficiency, virtually eliminating diesel odor and helping reduce visible particulates. These catalysts are ineffective for NOx, so NOx emissions from diesel engines are controlled by exhaust gas recirculation (EGR). In 2010, most light-duty diesel manufacturers in the U.S. added catalytic systems to their vehicles to meet federal emissions requirements. Two techniques have been developed for the catalytic reduction of NOx emissions under lean exhaust conditions: selective catalytic reduction (SCR) and the NOx adsorber. Instead of precious metal-containing NOx adsorbers, most manufacturers selected base-metal SCR systems that use a reagent such as ammonia to reduce the NOx into nitrogen and water. Ammonia is supplied to the catalyst system by the injection of urea into the exhaust, which then undergoes thermal decomposition and hydrolysis into ammonia. The urea solution is also referred to as diesel exhaust fluid (DEF). Diesel exhaust contains relatively high levels of particulate matter. Catalytic converters remove only 20–40% of PM, so particulates are cleaned up by a soot trap or diesel particulate filter (DPF). In the U.S., all on-road light, medium, and heavy-duty diesel-powered vehicles built after 1 January 2007, are subject to diesel particulate emission limits, and so are equipped with a 2-way catalytic converter and a diesel particulate filter. As long as the engine was manufactured before 1 January 2007, the vehicle is not required to have the DPF system. This led to an inventory runup by engine manufacturers in late 2006 so they could continue selling pre-DPF vehicles well into 2007. Lean-burn spark-ignition engines For lean-burn spark-ignition engines, an oxidation catalyst is used in the same manner as in a diesel engine. Emissions from lean-burn spark-ignition engines are very similar to emissions from a diesel compression-ignition engine. Installation Many vehicles have a close-coupled catalytic converter located near the engine's exhaust manifold. The converter heats up quickly, due to its exposure to the very hot exhaust gases, allowing it to reduce undesirable emissions during the engine warm-up period.
This is achieved by burning off the excess hydrocarbons which result from the extra-rich mixture required for a cold start. When catalytic converters were first introduced, most vehicles used carburetors that provided a relatively rich air-fuel ratio. Oxygen (O2) levels in the exhaust stream were therefore generally insufficient for the catalytic reaction to occur efficiently. Most designs of the time therefore included secondary air injection, which injected air into the exhaust stream. This increased the available oxygen, allowing the catalyst to function as intended. Some three-way catalytic converter systems have air injection systems with the air injected between the first (NOx reduction) and second (HC and CO oxidation) stages of the converter. As in two-way converters, this injected air provides oxygen for the oxidation reactions. An upstream air injection point, ahead of the catalytic converter, is also sometimes present to provide additional oxygen only during the engine warm-up period. This causes unburned fuel to ignite in the exhaust tract, thereby preventing it from reaching the catalytic converter at all. This technique reduces the engine runtime needed for the catalytic converter to reach its "light-off" or operating temperature. Most newer vehicles have electronic fuel injection systems, and do not require air injection systems in their exhausts. Instead, they provide a precisely controlled air-fuel mixture that quickly and continually cycles between lean and rich combustion. Oxygen sensors monitor the exhaust oxygen content before and after the catalytic converter, and the engine control unit uses this information to adjust the fuel injection so as to prevent the first (NOx reduction) catalyst from becoming oxygen-loaded, while simultaneously ensuring the second (HC and CO oxidation) catalyst is sufficiently oxygen-saturated. Damage Catalyst poisoning occurs when the catalytic converter is exposed to exhaust containing substances that coat the working surfaces, so that they cannot contact and react with the exhaust. The most notable contaminant is lead, so vehicles equipped with catalytic converters can run only on unleaded fuel. Other common catalyst poisons include sulfur, manganese (originating primarily from the gasoline additive MMT), and silicon, which can enter the exhaust stream if the engine has a leak that allows coolant into the combustion chamber. Phosphorus is another catalyst contaminant. Although phosphorus is no longer used in gasoline, it (and zinc, another low-level catalyst contaminant) was widely used in engine oil antiwear additives such as zinc dithiophosphate (ZDDP). Beginning in 2004, a limit on phosphorus concentration in engine oils was adopted in the API SM and ILSAC GF-4 specifications. Depending on the contaminant, catalyst poisoning can sometimes be reversed by running the engine under a very heavy load for an extended period of time. The increased exhaust temperature can sometimes vaporize or sublimate the contaminant, removing it from the catalytic surface. However, removal of lead deposits in this manner is usually not possible because of lead's high boiling point. Any condition that causes abnormally high levels of unburned hydrocarbons (raw or partially burnt fuel or oils) to reach the converter will tend to significantly elevate its temperature, bringing the risk of a meltdown of the substrate and resultant catalytic deactivation and severe exhaust restriction.
These conditions include failure of the upstream components of the exhaust system (manifold or header assembly and associated clamps susceptible to rust, corrosion or fatigue, such as the exhaust manifold splintering after repeated heat cycling), ignition system (e.g., coil packs, primary ignition components, distributor cap, wires, ignition coil and spark plugs) or damaged fuel system components (e.g., fuel injectors, fuel pressure regulator, and associated sensors). Oil and coolant leaks, perhaps caused by a head gasket leak, can also cause high unburned hydrocarbons. Regulations Emissions regulations vary considerably from jurisdiction to jurisdiction. Most automobile spark-ignition engines in North America have been fitted with catalytic converters since 1975, and the technology used in non-automotive applications is generally based on automotive technology. In many jurisdictions, it is illegal to remove or disable a catalytic converter for any reason other than its direct and immediate replacement. Nevertheless, some vehicle owners remove or "gut" the catalytic converter on their vehicle. In such cases, the converter may be replaced by a welded-in section of ordinary pipe or a flanged "test pipe", ostensibly meant to check if the converter is clogged by comparing how the engine runs with and without the converter. This facilitates temporary reinstallation of the converter in order to pass an emission test. In the United States, it is a violation of Section 203(a)(3)(A) of the 1990 amended Clean Air Act for a vehicle repair shop to remove a converter from a vehicle, or cause a converter to be removed from a vehicle, except in order to replace it with another converter, and Section 203(a)(3)(B) makes it illegal for any person to sell or to install any part that would bypass, defeat, or render inoperative any emission control system, device, or design element. Vehicles without functioning catalytic converters generally fail emission inspections. The automotive aftermarket supplies high-flow converters for vehicles with upgraded engines, or whose owners prefer an exhaust system with larger-than-stock capacity. Catalytic converters have been mandatory on all new gasoline cars sold in the European Union and the United Kingdom since January 1, 1993, in order to comply with the Euro 1 emission standards. Effect on exhaust flow Faulty catalytic converters, as well as undamaged early types of converters, can restrict the flow of exhaust, which negatively affects vehicle performance and fuel economy. Modern catalytic converters do not significantly restrict exhaust flow. A 2006 test on a 1999 Honda Civic, for example, showed that removing the stock catalytic converter netted only a 3% increase in maximum horsepower; a new metallic-core converter only cost the car 1% horsepower, compared to no converter. Dangers Carburetors on pre-1981 vehicles without feedback fuel-air mixture control could easily provide too much fuel to the engine, which could cause the catalytic converter to overheat and potentially ignite flammable materials under the car. Warm-up period Vehicles fitted with catalytic converters emit most of their total pollution during the first five minutes of engine operation; that is, before the catalytic converter has warmed up sufficiently to be fully effective. In the early 2000s it became common to place the catalytic converter right next to the exhaust manifold, close to the engine, for much quicker warm-up. In 1995, Alpina introduced an electrically heated catalyst.
Called "E-KAT", it was used in Alpina's B12 5,7 E-KAT based on the BMW 750i. Heating coils inside the catalytic converter assemblies are electrified just after the engine is started, bringing the catalyst up to operating temperature very quickly to qualify the vehicle for low emission vehicle (LEV) designation. BMW later introduced the same heated catalyst, developed jointly by Emitec, Alpina, and BMW, in its 750i in 1999. Some vehicles contain a pre-cat, a small catalytic converter upstream of the main catalytic converter which heats up faster on vehicle start up, reducing the emissions associated with cold starts. A pre-cat is most commonly used by an auto manufacturer when trying to attain the Ultra Low Emissions Vehicle (ULEV) rating, such as on the Toyota MR2 Roadster. Environmental effect Catalytic converters have proven to be reliable and effective in reducing noxious tailpipe emissions. However, they also have some shortcomings in use, and also adverse environmental effects in production: An engine equipped with a three-way catalyst must run at the stoichiometric point, which means more fuel is consumed than in a lean-burn engine. This means approximately 10% more CO2 emissions from the vehicle. Catalytic converter production requires palladium or platinum; part of the world supply of these precious metals is produced near Norilsk, Russia, where the industry (among others) has caused Norilsk to be added to Time magazine's list of most-polluted places. The extreme heat of the converters themselves can cause wildfires, especially in dry areas. Theft Because of the external location and the use of valuable precious metals including platinum, palladium and rhodium, catalytic converters are a target for thieves. The problem is especially common among late-model pickup trucks and truck-based SUVs, because of their high ground clearance and easily removed bolt-on catalytic converters. Welded-on converters are also at risk of theft, as they can be easily cut off. The Toyota Prius catalytic converters are also targets for thieves. The catalytic converters of hybrids need more of the precious metals to work properly compared to conventional internal combustion vehicles because they do not get as hot as those installed on conventional vehicles, since the combustion engines of hybrids only run part of the time. Pipecutters are often used to quietly remove the converter but other tools such as a portable reciprocating saw can damage other components of the car, such as the alternator, wiring or fuel lines, with potentially dangerous consequences. In 2023, bipartisan legislation to combat catalytic converter theft was introduced in the U.S. Senate. The Preventing Auto Recycling Thefts Act (PART Act) would mandate catalytic converters in new vehicles to come with traceable identification numbers. Additionally, the legislation would make catalytic converter theft a federal criminal offense. Statistics Rising metal prices in the U.S. during the 2000s commodities boom led to a significant increase in converter theft. A catalytic converter can cost more than $1,000 to replace, more if the vehicle is damaged during the theft. Apart from damaging other systems of the vehicle, theft can also cause death and injury to thieves. Thefts of catalytic converters rose over tenfold in the United States from the late 2010s to early 2020s, driven presumably by the rise in the price of precious metals contained within the converters. 
Study findings reveal an average price elasticity of 1.98, which means that a 10 percent increase in the price of metal leads to an approximately 20 percent increase in thefts. According to the National Insurance Crime Bureau, there were 1,298 reported cases of catalytic converter theft in 2018, which increased to 14,433 in 2020. In 2022, it was reported that the number of catalytic converter thefts in the United States sharply rose to 153,000 total thefts for the year. From 2019 to 2020, thieves in the United Kingdom were targeting older-model hybrid cars (such as Toyota's hybrids) which have more precious metals than newer vehicles—sometimes worth more than the value of the car—leading to scarcity and long delays in replacing them. In 2021 a trend emerged in the Democratic Republic of the Congo where catalytic converters were alleged to be stolen for use in illicit street drug production. The drug, a powder known as "bombé," was said to be a mixture of powdered pills/vitamins and pulverized honeycomb structures of catalytic converters. In 2023, however, a study of various samples of the drug concluded that its alleged origin from catalytic exhausts was unsubstantiated. Diagnostics Various jurisdictions now require on-board diagnostics to monitor the function and condition of the emissions-control system, including the catalytic converter. Vehicles equipped with OBD-II diagnostic systems are designed to alert the driver to a misfire condition by means of illuminating the "check engine" light on the dashboard, or flashing it if the current misfire conditions are severe enough to potentially damage the catalytic converter. On-board diagnostic systems take several forms. Temperature sensors are used for two purposes. The first is as a warning system, typically on two-way catalytic converters such as those used on LPG forklifts. The function of the sensor is to warn of catalytic converter temperature above the safe limit. Modern catalytic-converter designs are not as susceptible to temperature damage and can withstand higher sustained temperatures. Temperature sensors are also used to monitor catalyst functioning: usually two sensors will be fitted, with one before the catalyst and one after to monitor the temperature rise over the catalytic-converter core. The oxygen sensor is the basis of the closed-loop control system on a spark-ignited rich-burn engine; however, it is also used for diagnostics. In vehicles with OBD II, a second oxygen sensor is fitted after the catalytic converter to monitor the O2 levels. The O2 levels are monitored to see the efficiency of the burn process. The on-board computer makes comparisons between the readings of the two sensors. The readings are taken by voltage measurements. If both sensors show the same output or the rear O2 sensor is "switching", the computer recognizes that the catalytic converter either is not functioning or has been removed, and will operate a malfunction indicator lamp and affect engine performance. Simple "oxygen sensor simulators" have been developed to circumvent this problem by simulating the change across the catalytic converter, with plans and pre-assembled devices available on the Internet. Although these are not legal for on-road use, they have been used with mixed results. Similar devices apply an offset to the sensor signals, allowing the engine to run a more fuel-economical lean burn that may, however, damage the engine or the catalytic converter.
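The rear-versus-front sensor comparison can be illustrated with a short sketch (a simplification for exposition; the voltage threshold and switching-ratio limit are invented values, not those of any real OBD-II implementation):

```python
def switch_count(samples, threshold=0.45):
    """Count rich/lean transitions in a sequence of O2 sensor voltages."""
    states = [v > threshold for v in samples]
    return sum(1 for a, b in zip(states, states[1:]) if a != b)

def catalyst_ok(front, rear, max_ratio=0.2):
    """A healthy catalyst damps the rich/lean oscillation, so the rear
    sensor should switch far less often than the front one."""
    f = switch_count(front)
    if f == 0:
        return True  # no oscillation observed, nothing to judge
    return switch_count(rear) / f <= max_ratio

front = [0.1, 0.8, 0.2, 0.9, 0.1, 0.8, 0.2, 0.9]  # oscillating control loop
rear_good = [0.6] * 8   # good catalyst: rear sensor holds steady
rear_bad = list(front)  # failed catalyst: oscillation passes through
print(catalyst_ok(front, rear_good))  # True
print(catalyst_ok(front, rear_bad))   # False
```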
NOx sensors are extremely expensive and are in general used only when a compression-ignition engine is fitted with a selective catalytic-reduction (SCR) converter, or a NOx adsorber in a feedback system. When fitted to an SCR system, there may be one or two sensors. When one sensor is fitted it will be pre-catalyst; when two are fitted, the second one will be post-catalyst. They are used for the same reasons and in the same manner as an oxygen sensor; the only difference is the substance being monitored.
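To illustrate the closed-loop rich-lean cycling described earlier in this article, here is a deliberately simplified bang-bang control sketch (not any ECU's real control law; the gain, setpoint, and idealized sensor model are assumptions made for the example):

```python
STOICH = 14.7  # target air-fuel ratio for gasoline, by weight
STEP = 0.05    # fuel-trim adjustment per control cycle (assumed gain)

def o2_sensor(afr):
    """Idealized switching lambda sensor: high voltage when rich
    (no free oxygen), low voltage when lean (oxygen present)."""
    return 0.9 if afr < STOICH else 0.1

def control_cycle(afr, fuel_trim):
    """One bang-bang iteration: lean -> add fuel, rich -> remove fuel."""
    if o2_sensor(afr) < 0.45:   # low voltage: lean, oxygen detected
        fuel_trim += STEP
    else:                       # high voltage: rich, no oxygen detected
        fuel_trim -= STEP
    return STOICH / (1.0 + fuel_trim), fuel_trim

afr, trim = 15.5, 0.0  # start slightly lean
for _ in range(10):
    afr, trim = control_cycle(afr, trim)
    print(round(afr, 2))  # alternates between slightly rich and the setpoint
```

The point of the sketch is the oscillation itself: as the section above explains, a real control system deliberately keeps the mixture sweeping slightly rich and slightly lean of stoichiometric so the catalyst can alternately store and release oxygen.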
Technology
Food, water and health
null
286817
https://en.wikipedia.org/wiki/Huntsman%20spider
Huntsman spider
Huntsman spiders, members of the family Sparassidae (formerly Heteropodidae), catch their prey by hunting rather than in webs. They are also called giant crab spiders because of their size and appearance. Larger species sometimes are referred to as wood spiders, because of their preference for woody places (forests, mine shafts, woodpiles, wooden shacks). In southern Africa the genus Palystes is known as rain spiders or lizard-eating spiders. Commonly, they are confused with baboon spiders from the Mygalomorphae infraorder, which are not closely related. More than a thousand Sparassidae species occur in most warm temperate to tropical regions of the world, including much of Australia, Africa, Asia, the Mediterranean Basin, and the Americas. Several species of huntsman spider can use an unusual form of locomotion. The wheel spider (Carparachne aureoflava) from the Namib uses a cartwheeling motion which gives it its name, while Cebrennus rechenbergi uses a handspring motion. Description Sparassids are eight-eyed spiders. The eyes appear in two largely forward-facing rows of four on the anterior aspect of the prosoma. Many species grow very large – in Laos, male giant huntsman spiders (Heteropoda maxima) attain the largest legspan of any spider. People unfamiliar with spider taxonomy commonly confuse large species with tarantulas, but huntsman spiders can generally be identified by their legs, which, rather than being jointed vertically relative to the body, are twisted in such a way that in some attitudes the legs extend forward in a crab-like fashion. They are also commonly confused with the brown recluse spider, due to their shared coloring. However, brown recluse venom is significantly dangerous to humans, while that of the huntsman spider is less so. On their upper surfaces the main colours of huntsman spiders are inconspicuous shades of brown or grey, but many species have undersides more or less aposematically marked in black-and-white. Their legs bear fairly prominent spines, but the rest of their bodies are smoothly furry. They tend to live under rocks, bark and similar shelters, but human encounters are common in sheds, garages and other infrequently disturbed places. The banded huntsman (Holconia) is large, grey to brown with striped bands on its legs. The badge huntsman (Neosparassus) is larger still, brown and hairy. The tropical or brown huntsman (Heteropoda) is also large and hairy, with mottled brown, white and black markings. The eyesight of these spiders is not as good as that of the Salticidae (jumping spiders). Nevertheless, their vision is quite sufficient to detect approaching humans or other large animals from some distance. Identification They can be distinguished from other spider families by their appearance, as other spiders similar to them are smaller in size. They are often confused for tarantulas due to their hairy nature, but can easily be distinguished by their laterigrade legs, similar to those of crabs. Members of this family are also typically less bulky than tarantulas. They possess two claws, as is the case for most spiders that actively hunt their prey. If this is not enough to fully identify them, they also possess eight eyes divided into two regular rows. Size, venom, and aggression A huntsman spider's legs typically span several times the length of its body. Like most spiders, Sparassidae use venom to immobilize prey. There have been reports of members of various genera such as Palystes, Neosparassus, and several others inflicting severe bites on humans.
The effects vary, including local swelling and pain, nausea, headache, vomiting, irregular pulse rate, and heart palpitations, indicating some systemic neurotoxin effects, especially when the bites were severe or repeated. However, the formal study of spider bites is fraught with complications, including unpredictable infections, dry bites, shock, nocebo effects, and even bite misdiagnosis by medical professionals and specimen misidentification by the general public. It is not always clear what provokes Sparassidae to attack and bite humans and animals, but it is known that female members of this family will aggressively defend their egg-sacs and young against perceived threats. Bites from sparassids usually do not require hospital treatment. Sound production in mating rituals Males of the huntsman spider Heteropoda venatoria have recently been found to deliberately make a substrate-borne sound when they detect a chemical (pheromone) left by a nearby female of their species. The males anchor themselves firmly to the surface onto which they have crawled and then use their legs to transmit vibrations from their bodies to the surface. Most of the sound emitted is produced by strong vibrations of the abdomen. The characteristic frequency of vibration and the pattern of bursts of sound identify them to females of their species, who will approach if they are interested in mating. This sound can often be heard as a rhythmic ticking, somewhat like a quartz clock, which fades in and out and can be heard by human ears in a relatively quiet environment. Genera The World Spider Catalog accepts the following genera: Adcatomus Karsch, 1880 — Venezuela, Peru Anaptomecus Simon, 1903 — Central America, South America Anchonastus Simon, 1898 — Cameroon, Congo Arandisa Lawrence, 1938 — Namibia Barylestis Simon, 1910 — Africa, Asia, Europe Beregama Hirst, 1990 — Australia, Papua New Guinea Berlandia Lessert, 1921 — East Africa Bhutaniella Jäger, 2000 — Asia Borniella Grall & Jäger, 2022 — Borneo Caayguara Rheims, 2010 — Brazil Carparachne Lawrence, 1962 — Namibia Cebrennus Simon, 1880 — Africa, Asia, Malta Cerbalus Simon, 1897 — Israel, Jordan, Egypt Chrosioderma Simon, 1897 — Madagascar Clastes Walckenaer, 1837 — Indonesia, Papua New Guinea Curicaberis Rheims, 2015 — North America, Central America, Brazil Damastes Simon, 1880 — Madagascar, Mozambique, Seychelles Decaphora Franganillo, 1931 — North America, Caribbean, Central America, Colombia Deelemanikara Jäger, 2021 — Madagascar Defectrix Petrunkevitch, 1925 — Panama Delena Walckenaer, 1837 — Australia, New Zealand Dermochrosia Mello-Leitão, 1940 — Brazil Diminutella Rheims & Alayón, 2018 — Cuba Eusparassus Simon, 1903 — Asia, Africa, Europe, Peru Exopalystes Hogg, 1914 — Papua New Guinea Extraordinarius Rheims, 2019 — Brazil Geminia Thorell, 1897 — Myanmar Gnathopalystes Rainbow, 1899 — Asia, Oceania Guadana Rheims, 2010 — Brazil, Peru, Ecuador Heteropoda Latreille, 1804 — Oceania, Asia, South America, Greece Holconia Thorell, 1877 — Australia Irileka Hirst, 1998 — Australia Isopeda L.
Koch, 1875 — Australia, Philippines, Papua New Guinea Isopedella Hirst, 1990 — Australia, Papua New Guinea, Indonesia Keilira Hirst, 1989 — Australia Leucorchestris Lawrence, 1962 — Angola, Namibia Macrinus Simon, 1887 — South America, Tobago, United States Martensopoda Jäger, 2006 — India May Jäger & Krehenwinkel, 2015 — Namibia, South Africa Megaloremmius Simon, 1903 — Madagascar Menarik Grall & Jäger, 2022 — Borneo Meri Rheims & Jäger, 2022 — South America Micrommata Latreille, 1804 — Spain, Africa, Asia Micropoda Grall & Jäger, 2022 — Papua New Guinea Microrchestris Lawrence, 1962 — Namibia Neosparassus Hogg, 1903 — Australia Neostasina Rheims & Alayón, 2016 — Caribbean Nolavia Kammerer, 2006 — Brazil Nungara Pinto & Rheims, 2016 — Brazil, Ecuador Olios Walckenaer, 1837 — Asia, South America, Oceania, Africa, Central America, North America, Caribbean Orchestrella Lawrence, 1965 — Namibia Origes Simon, 1897 — Argentina, Peru, Ecuador Paenula Simon, 1897 — Ecuador Palystella Lawrence, 1928 — Namibia Palystes L. Koch, 1875 — Africa, India, Australia Panaretella Lawrence, 1937 — South Africa Pandercetes L. Koch, 1875 — Asia, Oceania Parapalystes Croeser, 1996 — South Africa Pediana Simon, 1880 — Indonesia, Australia Platnickopoda Jäger, 2020 — East Africa Pleorotus Simon, 1898 — Seychelles Polybetes Simon, 1897 — South America Prusias O. Pickard-Cambridge, 1892 — Brazil, Mexico, Panama Prychia L. Koch, 1875 — Papua New Guinea, Fiji, Philippines Pseudomicrommata Järvi, 1914 — Africa Pseudopoda Jäger, 2000 — Asia Quemedice Mello-Leitão, 1942 — Brazil, Argentina Remmius Simon, 1897 — Africa Rhacocnemis Simon, 1897 — Seychelles Rhitymna Simon, 1897 — Asia, Africa Sadala Simon, 1880 — South America Sagellula Strand, 1942 — Japan, China Sarotesius Pocock, 1898 — East Africa Sinopoda Jäger, 1999 — Asia Sivalicus Dyal, 1957 — India Sparianthina Banks, 1929 — South America, Tobago, Central America Sparianthis Simon, 1880 — Colombia Spariolenus Simon, 1880 — Asia Staianus Simon, 1889 — Madagascar Stasina Simon, 1877 — South America, Gabon, Asia, Cuba Stasinoides Berland, 1922 — Ethiopia Stipax Simon, 1898 — Seychelles Strandiellum Kolosváry, 1934 — Papua New Guinea Thelcticopis Karsch, 1884 — Asia, Oceania, Africa Thomasettia Hirst, 1911 — Seychelles Thunberga Jäger, 2020 — Madagascar Tibellomma Simon, 1903 — Venezuela Tiomaniella Grall & Jäger, 2022 — Malaysia Tychicus Simon, 1880 — Philippines, Papua New Guinea, Indonesia Typostola Simon, 1897 — Australia, Papua New Guinea Uaiuara Rheims, 2013 — Panama, South America Vindullus Simon, 1880 — South America, Guatemala Yiinthi Davies, 1994 — Australia, Papua New Guinea Zachria L. Koch, 1875 — Australia Distribution and habitat Members of the Sparassidae are native to tropical and warm temperate regions worldwide. A few species are native to colder climates, like the green huntsman spider (Micrommata virescens) which is native to Northern and Central Europe. Some tropical species like Heteropoda venatoria (Cane huntsman) and Delena cancerides (Social huntsman) have been accidentally introduced to many subtropical parts of the world, including New Zealand (which has no native sparassid species). As adults, huntsman spiders do not build webs, but hunt and forage for food: their diet consists primarily of insects and other invertebrates, and occasionally small skinks and geckos. They live in the crevices of tree bark, but will frequently wander into homes and vehicles. 
They are able to travel extremely quickly, often using a springing jump while running, and walk on walls and even on ceilings. They also tend to exhibit a "cling" reflex if picked up, making them difficult to shake off and much more likely to bite. The females are fierce defenders of their egg sacs and young. They will generally make a threat display if provoked, and if the warning is ignored they may attack and bite. The egg sacs differ fairly widely among the various genera. For example, in Heteropoda spp. egg sacs are carried underneath the female's body, while in other species like Palystes and Pseudomicrommata spp., females generally attach egg sacs to vegetation.
Biology and health sciences
Spiders
Animals
287061
https://en.wikipedia.org/wiki/Pagoda
Pagoda
A pagoda is a tiered tower with multiple eaves common to Thailand, Cambodia, Nepal, China, Japan, Korea, Myanmar, Vietnam, and other parts of Asia. Most pagodas were built to have a religious function, most often Buddhist, but sometimes Taoist, and were often located in or near viharas. The pagoda traces its origins to the stupa, whose design was developed in ancient India. Chinese pagodas (Chinese: 塔) are a traditional part of Chinese architecture. In addition to religious use, since ancient times Chinese pagodas have been praised for the spectacular views they offer, and many classical poems attest to the joy of scaling pagodas. The oldest and tallest pagodas were built of wood, but most that survived were built of brick or stone. Some pagodas are solid with no interior. Hollow pagodas have no higher floors or rooms, but the interior often contains an altar or a smaller pagoda, as well as a series of staircases for the visitor to climb to see the view from an opening on one side of each tier. Most have between three and 13 tiers (almost always an odd number) and the classic gradual tiered eaves. In some countries, the term may refer to other religious structures. In Vietnam and Cambodia, due to French translation, the English term pagoda is a more generic term referring to a place of worship, although pagoda is not an accurate word to describe a Buddhist vihara. The architectural structure of the stupa has spread across Asia, taking on many diverse forms specific to each region. Many Philippine bell towers are highly influenced by pagodas through Chinese workers hired by the Spaniards. Etymology One proposed etymology is from a South Chinese pronunciation of the term for an eight-cornered tower, reinforced by the name of a famous pagoda encountered by many early European visitors to China, the "Pázhōu tǎ" (琶洲塔), standing just south of Guangzhou at Whampoa Anchorage. Another proposed etymology is Persian butkada, from but, "idol" and kada, "temple, dwelling." Yet another etymology is from the Sinhala word dāgaba, derived from Sanskrit dhātugarbha or Pali dhātugabbha: "relic womb/chamber" or "reliquary shrine", i.e. a stupa, by way of Portuguese. History The origin of the pagoda can be traced to the stupa (3rd century BCE). The stupa, a dome-shaped monument, was used as a commemorative monument to house sacred relics and writings. In East Asia, the architecture of Chinese towers and Chinese pavilions blended into pagoda architecture, eventually also spreading to Southeast Asia. Their construction was popularized by the efforts of Buddhist missionaries, pilgrims, rulers, and ordinary devotees to honor Buddhist relics. Japan has a total of 22 five-storied timber pagodas constructed before 1850. China The earliest styles of Chinese pagodas were square-base and circular-base, with octagonal-base towers emerging in the 5th–10th centuries. The highest Chinese pagoda from the pre-modern age is the Liaodi Pagoda of Kaiyuan Monastery, Dingxian, Hebei, completed in the year 1055 AD under Emperor Renzong of Song and standing at a total height of 84 m (275 ft). Although it no longer stands, the tallest pre-modern pagoda in Chinese history was a pagoda of Chang'an built by Emperor Yang of Sui, possibly rivalled by the short-lived 6th-century Yongning Pagoda (永宁宝塔) of Luoyang at roughly 137 metres. The tallest pre-modern pagoda still standing is the Liaodi Pagoda. In April 2007 a new wooden pagoda at the Tianning Temple of Changzhou was opened to the public; it is the tallest in China, standing 154 m (505 ft).
Symbolism and geomancy Chinese iconography is noticeable in Chinese and other East Asian pagoda architectures. Also prominent is Buddhist iconography such as the image of Shakyamuni Gautama Buddha in the abhaya mudra. In an article on Buddhist elements in Han dynasty art, Wu Hung suggests that in these temples, Buddhist symbolism was fused with native Chinese traditions into a unique system of symbolism. Some believed reverence at pagodas could bring luck to students taking the Chinese civil service examinations. When a pagoda of Yihuang County in Fuzhou collapsed in 1210, local inhabitants believed the disaster correlated with the recent failure of many exam candidates in the prefectural examinations. The pagoda was rebuilt in 1223 and had a list inscribed on it of the recently successful examination candidates, in hopes that it would reverse the trend and win the county supernatural favor. Architecture Pagodas come in many different sizes, with taller ones often attracting lightning strikes, inspiring a tradition that the finial decoration of the top of the structure can seize demons. Today many pagodas have been fitted with wires making the finial into a lightning rod. Wooden pagodas possess certain characteristics thought to resist earthquake damage. These include the friction damping and sliding effect of the complex wooden dougong joints, the structural isolation of floors, the effects of wide eaves analogous to a balancing toy, and the shinbashira phenomenon, in which the central column is loosely coupled to, rather than rigidly fixed to, the rest of the superstructure and so acts as a damper. Pagodas traditionally have an odd number of levels, a notable exception being the eighteenth-century orientalist pagoda designed by Sir William Chambers at Kew Gardens in London. The pagodas in the Himalayas are derived from Newari architecture, very different from the Chinese and Japanese styles. Construction materials Wood During the Southern and Northern dynasties, pagodas were mostly built of wood, as were other ancient Chinese structures. Wooden pagodas are resistant to earthquakes, and no Japanese pagoda has been destroyed by an earthquake, but they are prone to fire, natural rot, and insect infestation. Examples of wooden pagodas: White Horse Pagoda at White Horse Temple, Luoyang Futuci Pagoda in Xuzhou, built in the Three Kingdoms period (220–265) Many of the pagodas in Stories About Buddhist Temples in Luoyang, a Northern Wei text The literature of subsequent eras also provides evidence of the domination of wooden pagoda construction; the famous Tang dynasty poet Du Mu, for example, wrote of them. The oldest standing fully wooden pagoda in China today is the Pagoda of Fogong Temple in Ying County, Shanxi, built in the 11th century during the Song/Liao dynasty (see Song architecture). Transition to brick and stone During the Northern Wei and Sui dynasties (386–618) experiments began with the construction of brick and stone pagodas. Even at the end of the Sui, however, wood was still the most common material. For example, Emperor Wen of the Sui dynasty (reigned 581–604) once issued a decree for all counties and prefectures to build pagodas to a set of standard designs, but since they were all built of wood none have survived. From this era only the Songyue Pagoda, a circular-based pagoda built out of brick in 523 AD, has survived. Brick The earliest extant brick pagoda is the 40-metre-tall Songyue Pagoda in Dengfeng County, Henan. This curved, circle-based pagoda was built in 523 during the Northern Wei dynasty, and has survived for 15 centuries.
Much like the later pagodas found during the following Tang dynasty, this pagoda featured tiers of eaves encircling its frame, as well as a spire crowning the top. Its walls are 2.5 m thick, with a ground floor diameter of 10.6 m. Another early brick pagoda is the Sui dynasty Guoqing Pagoda built in 597. Stone The earliest large-scale stone pagoda is the Four Gates Pagoda at Licheng, Shandong, built in 611 during the Sui dynasty. Like the Songyue Pagoda, it also features a spire at its top, and is built in the pavilion style. Brick and stone One of the earliest brick and stone pagodas was a three-storey construction built in the (first) Jin dynasty (266–420), by Wang Jun of Xiangyang. However, it is now destroyed. Brick and stone went on to dominate Tang, Song, Liao and Jin dynasty pagoda construction. An example is the Giant Wild Goose Pagoda (652 AD), built during the early Tang dynasty. The Porcelain Pagoda of Nanjing has been one of the most famous brick and stone pagodas in China throughout history. De-emphasis over time Pagodas, in keeping with the tradition of the White Horse Temple, were generally placed in the center of temples until the Sui and Tang dynasties. During the Tang, the importance of the main hall was elevated and the pagoda was moved beside the hall, or out of the temple compound altogether. In the early Tang, Daoxuan wrote a Standard Design for Buddhist Temple Construction in which the main hall replaced the pagoda as the center of the temple. The design of temples was also influenced by the use of traditional Chinese residences as shrines, after they were philanthropically donated by the wealthy or the pious. In such pre-configured spaces, building a central pagoda might not have been either desirable or possible. In the Song dynasty (960–1279), the Chan (Zen) sect developed a new 'seven part structure' for temples. The seven parts—the Buddha hall, dharma hall, monks' quarters, depository, gate, pure land hall and toilet facilities—completely exclude pagodas, and can be seen to represent the final triumph of the traditional Chinese palace/courtyard system over the original central-pagoda tradition established 1000 years earlier by the White Horse Temple in AD 67. Although they were built outside of the main temple itself, large pagodas in the tradition of the past were still built. These include the two Ming dynasty pagodas of Famen Temple and the Chongwen Pagoda in Jingyang of Shaanxi. A prominent later example of converting a palace to a temple is Beijing's Yonghe Temple, which was the residence of the Yongzheng Emperor before he ascended the throne. It was donated for use as a lamasery after his death in 1735. Styles of eras Han dynasty Examples of Han dynasty era tower architecture predate Buddhist influence and the full-fledged Chinese pagoda. Michael Loewe writes that during the Han dynasty (202 BC – 220 AD), multi-storied towers were erected for religious purposes, as astronomical observatories, as watchtowers, or as ornate buildings that were believed to attract the favor of spirits, deities, and immortals.
Sui and Tang Pagodas built during the Sui and Tang dynasties usually had a square base, with a few exceptions such as the Daqin Pagoda. Dali kingdom Song, Liao, Jin, Yuan Pagodas of the Five Dynasties, Northern and Southern Song, Liao, Jin, and Yuan dynasties incorporated many new styles, with a greater emphasis on hexagonal and octagonal bases for pagodas. Ming and Qing Pagodas in the Ming and Qing dynasties generally inherited the styles of previous eras, although there were some minor variations. Notable pagodas Tiered towers with multiple eaves: Dâu Temple, Bắc Ninh, Vietnam, built in 187 Changu Narayan Temple, Bhaktapur, Nepal, originally built in 4th century CE, rebuilt in 1702 Pashupatinath Temple, Kathmandu, Nepal, built in the 5th century Trấn Quốc Pagoda, Hanoi, Vietnam, built in 545 Songyue Pagoda on Mount Song, Henan, China, built in 523 Mireuksa at Iksan, Korea, built in the early 7th century Bunhwangsa at Gyeongju, Korea, built in 634 Xumi Pagoda at Zhengding, Hebei, China, built in 636 Daqin Pagoda in China, built in 640 Wooden nine-story pagoda at Hwangnyongsa, Gyeongju, Korea, built in 645 Pagoda at Hōryū-ji, Ikaruga, Nara, Japan, built in the 7th century, one of the oldest wooden buildings in the world Giant Wild Goose Pagoda, made of brick, built in Xi'an, China in 704 Small Wild Goose Pagoda, built in Xi'an, China in 709 Seokgatap at Bulguksa, Gyeongju, South Korea, built in 751, made of granite. In 1966, the Mugujeonggwang Great Dharani Sutra, the oldest extant woodblock print, was found with several other treasures in the second story of this pagoda. Dabotap at Bulguksa, Gyeongju, Korea, built in 751 Tiger Hill Pagoda, built in 961 outside of Suzhou, China Lingxiao Pagoda at Zhengding, Hebei, China, built in 1045 Iron Pagoda of Kaifeng, built in 1049, during the Song dynasty Liaodi Pagoda of Dingzhou, built in 1055 during the Song dynasty Pagoda of Fogong Temple, built in 1056 in Ying County, Shanxi, China Pizhi Pagoda of Lingyan Temple, Shandong, China, 11th century Beisi Pagoda at Suzhou, Jiangsu, China, built in 1162 Liuhe Pagoda (Six Harmonies Pagoda) of Hangzhou, Zhejiang, China, built in 1165 during the Song dynasty Ichijō-ji, Kasai, Hyōgo, Japan, built in 1171 Bình Sơn Pagoda of Vĩnh Khánh Temple, Vĩnh Phúc, Vietnam, built in the Trần dynasty (about the 13th century) Phổ Minh pagoda of Phổ Minh Temple, Vietnam, built in 1305 Prashar Lake temple, dedicated to the Rishi Prashar, the patron of the Mandi region in India. The temple was constructed by Raja Ban Sen in the 14th century, with the rishi being present in the form of a pindi stone. The Porcelain Tower of Nanjing, built between 1402 and 1424, a wonder of the medieval world in Nanjing, China.
Tsui Sing Lau Pagoda in Ping Shan, Hong Kong, built in 1486 Bajrayogini Temple, Kathmandu, Nepal, built in the 16th century by Pratap Malla Taleju Temple, a temple in Kathmandu, Nepal, built in 1564 Gokarneshwor Mahadev temple, Nepal, built in 1582 Pazhou Pagoda on Whampoa (Huangpu) Island, Guangzhou (Canton), China, built in 1600 Phước Duyên Pagoda of Thiên Mụ Temple, in Huế, Vietnam, built in 1844 on the order of the Thiệu Trị Emperor Palsangjeon, a five-story pagoda at Beopjusa, Korea, built in 1605 Tō-ji, the tallest wooden structure in Kyoto, Japan, built in 1644 Nyatapola at Bhaktapur, Kathmandu Valley, built during 1701–1702 The Great Pagoda at Kew Gardens, London, UK, built in 1762 Reading Pagoda of Reading, Pennsylvania, built in 1908 Kek Lok Si's main pagoda in Penang, Malaysia, exhibits a combination of Chinese, Burmese and Thai Buddhist architecture, built in 1930 Seven-storey Pagoda in Chinese Garden at Jurong East, Singapore, built in 1975 Dragon and Tiger Pagodas in Kaohsiung, Taiwan, built in 1976 The pagoda of Japan Pavilion at Epcot, Florida, built in 1982 Pagoda of Tianning Temple, the tallest pagoda in the world since its completion in April 2007, stands at 153.7 m in height. Nepalese Peace Pagoda in Brisbane, Australia, built for the World Expo '88 Pagoda Avalokitesvara, Indonesia, the tallest pagoda in Indonesia, stands at 45 meters, built in 2004. Sun and Moon Pagodas in Guilin, Guangxi, China, twin pagodas on Shan Lake, originally built in the 10th century and reconstructed on the original foundations in 2001 using historical descriptions Stupas called "pagodas": Global Vipassana Pagoda, the largest unsupported domed stone structure in the world Mingun Pahtodawgyi, a monumental uncompleted stupa begun by King Bodawpaya in 1790. If completed, it would be the largest in the world at 150 meters. Pha That Luang, the holiest wat, pagoda, and stupa in Laos, in Vientiane Phra Pathommachedi, the tallest pagoda or stupa in Thailand, at Nakhon Pathom, Thailand Shwedagon Pagoda, a gilded pagoda and stupa located in Yangon, Myanmar. It is the most sacred Buddhist pagoda for the Burmese, with relics of the past four Buddhas enshrined within. Shwezigon Pagoda in Nyaung-U, Myanmar. Completed during the reign of King Kyanzittha in 1102, it is a prototype of Burmese stupas. Uppatasanti Pagoda, a 325-foot tall landmark in Naypyidaw, Myanmar, built from 2006 to 2009, which houses a Buddha tooth relic Places called "pagoda" but which are not tiered structures with multiple eaves: One Pillar Pagoda in Hanoi, Vietnam, an icon of Vietnamese culture. It was built in 1049, destroyed, and rebuilt in 1954. Structures that evoke pagoda architecture: The Dragon House of Sanssouci Park, an eighteenth-century German attempt at imitating Chinese architecture The Panasonic Pagoda, or Pagoda Tower, at the Indianapolis Motor Speedway. This 13-story pagoda, used as the control tower for races such as the Indy 500, has been transformed several times since it was first built in 1913. Jin Mao Tower in Shanghai, built between 1994 and 1999 Petronas Towers in Kuala Lumpur, the tallest buildings in the world from 1998 to 2004 Taipei 101 in Taiwan, record setter for height (508 m) in 2004 and currently (2021) the world's tenth tallest completed building Structures not generally thought of as pagodas, but which have some pagoda-like characteristics: The Hall of Prayer for Good Harvests at the Temple of Heaven Wongudan Altar in Korea
Technology
Ceremonial buildings
null
287101
https://en.wikipedia.org/wiki/Harpoon
Harpoon
A harpoon is a long, spear-like projectile used in fishing, whaling, sealing, and other hunting to shoot, kill, and capture large fish or marine mammals such as seals, sea cows, and whales. It impales the target and secures it with barbs or toggling claws, allowing the fishermen or hunters to use an attached rope or chain to pull and retrieve the animal. A harpoon can also be used as a ranged weapon against other watercraft in naval warfare. Certain harpoons are made with different builds to perform better with the type of target. For example, the Inuit have short, fixed-foreshaft harpoons for hunting at breathing holes, while loose-shafted ones are made for throwing and remaining attached to the game. History In the 1990s, harpoon points, known as the Semliki harpoons or the Katanda harpoons, were found in the Katanda region in Zaire. As the earliest known harpoons, these weapons were made and used 90,000 years ago, most likely to spear catfish. Spearfishing with poles later became widespread in palaeolithic times, both in Japan and in Europe, where it flourished during the Solutrean and Magdalenian periods. Cosquer Cave in southern France has cave art over 16,000 years old, including drawings of seals that appear to have been harpooned. There are references to harpoons in ancient literature, though in most cases the descriptions do not go into detail. An early example can be found in the Bible in Job 41:7 (NIV): "Can you fill its hide with harpoons or its head with fishing spears?" The Greek historian Polybius (c. 203 BC – 120 BC), in his Histories, describes hunting for swordfish by using a harpoon with a barbed and detachable head. Copper harpoons were known to the seafaring Harappans well into antiquity. Early hunters in India include the Mincopie people, aboriginal inhabitants of India's Andaman and Nicobar islands, who have used harpoons with long cords for fishing since early times. Whaling In the novel Moby-Dick, Herman Melville explained the reason for the harpoon's effectiveness, and described another device that was at times a necessary addition to harpoons. Explosive harpoons The first use of explosives in the hunting of whales was made by the British South Sea Company in 1737, after some years of declining catches. A large fleet was sent, armed with cannon-fired harpoons. Although the weaponry was successful in killing the whales, most of the catch sank before being retrieved. However, the system was still occasionally used, and underwent successive improvements at the hands of various inventors over the next century, including Abraham Stagholt in the 1770s and George Manby in the early 19th century. William Congreve, who invented some of the first rockets for British Army use, designed a rocket-propelled whaling harpoon in the 1820s. The shell was designed to explode on contact and impale the whale with the harpoon. The weapon was in turn attached by a line to the boat, and the hope was that the explosion would generate enough gas within the whale to keep it afloat for retrieval. Expeditions were sent out to try this new technology; many whales were killed, but most of them sank. These early devices, called bomb lances, became widely used for the hunting of humpbacks and right whales. A notable user of these early explosive harpoons was the American Thomas Welcome Roys, who set up a shore station in Seydisfjördur, Iceland, in 1865. A slump in oil prices after the American Civil War forced the endeavor into bankruptcy in 1867.
An early version of the explosive harpoon was designed by Jacob Nicolai Walsøe, a Norwegian painter and inventor. His 1851 application was rejected by the interior ministry on the grounds that he had received public funding for his experiments. In 1867, a Danish fireworks manufacturer, Gaetano Amici, patented a cannon-fired harpoon, and in the same year, an Englishman, George Welch, patented a grenade harpoon very similar to the version which transformed whaling in the following decade. In 1870, the Norwegian shipping magnate Svend Foyn patented and pioneered the modern exploding whaling harpoon and gun. Foyn had studied the American method in Iceland. His basic design is still in use today. He perceived the failings of other methods and solved these problems in his own system. He included, with the help of H.M.T. Esmark, a grenade tip that exploded inside the whale. This harpoon design also utilized a shaft that was connected to the head with a moveable joint. His original cannons were muzzle-loaded with special padding and also used a unique form of gunpowder. The cannons were later replaced with safer breech-loading types. Together with the steam engine, this development ushered in the modern age of commercial whaling. Euro-American whalers were now equipped to hunt faster and more powerful species, such as the rorquals. Because rorquals sank when they died, later versions of the exploding harpoon injected air into the carcass to keep it afloat. The modern whaling harpoon consists of a deck-mounted launcher (mostly a cannon) and a projectile which is a large harpoon with an explosive (penthrite) charge, attached to a thick rope. The spearhead is shaped in a manner which allows it to penetrate the thick layers of whale blubber and stick in the flesh. It has sharp spikes to prevent the harpoon from sliding out. Thus, by pulling the rope with a motor, the whalers can drag the whale back to their ship. A recent development in harpoon technology is the hand-held speargun. Divers use the speargun for spearing fish. They may also be used for defense against dangerous marine animals. Spearguns may be powered by pressurized gas or with mechanical means like springs or elastic bands. Space The Philae spacecraft carried harpoons for helping the probe anchor itself to the surface of comet 67P/Churyumov–Gerasimenko. However, the harpoons failed to fire.
Technology
Ranged weapons
null
287152
https://en.wikipedia.org/wiki/Gray%20%28unit%29
Gray (unit)
The gray (symbol: Gy) is the unit of ionizing radiation dose in the International System of Units (SI), defined as the absorption of one joule of radiation energy per kilogram of matter. It is used as a unit of the radiation quantity absorbed dose that measures the energy deposited by ionizing radiation in a unit mass of absorbing material, and is used for measuring the delivered dose in radiotherapy, food irradiation and radiation sterilization. It is important in predicting likely acute health effects, such as acute radiation syndrome, and is used to calculate equivalent dose using the sievert, which is a measure of the stochastic health effect on the human body. The gray is also used in radiation metrology as a unit of the radiation quantity kerma, defined as the sum of the initial kinetic energies of all the charged particles liberated by uncharged ionizing radiation in a sample of matter per unit mass. The unit was named after British physicist Louis Harold Gray, a pioneer in the measurement of X-ray and radium radiation and their effects on living tissue. The gray was adopted as part of the International System of Units in 1975. The corresponding cgs unit to the gray is the rad (equivalent to 0.01 Gy), which remains common largely in the United States, though its use is "strongly discouraged" in the style guide of the U.S. National Institute of Standards and Technology. Applications The gray has a number of fields of application in measuring dose: Radiobiology The measurement of absorbed dose in tissue is of fundamental importance in radiobiology and radiation therapy as it is the measure of the amount of energy the incident radiation deposits in the target tissue. The measurement of absorbed dose is a complex problem due to scattering and absorption, and many specialist dosimeters are available for these measurements, covering applications in 1-D, 2-D and 3-D. In radiation therapy, the amount of radiation applied varies depending on the type and stage of cancer being treated. For curative cases, the typical dose for a solid epithelial tumor ranges from 60 to 80 Gy, while lymphomas are treated with 20 to 40 Gy. Preventive (adjuvant) doses are typically around 45–60 Gy in 1.8–2 Gy fractions (for breast, head, and neck cancers). The average radiation dose from an abdominal X-ray is 0.7 milligrays (0.0007 Gy), that from an abdominal CT scan is 8 mGy, that from a pelvic CT scan is 6 mGy, and that from a selective CT scan of the abdomen and the pelvis is 14 mGy. Radiation protection The absorbed dose also plays an important role in radiation protection, as it is the starting point for calculating the stochastic health risk of low levels of radiation, which is defined as the probability of cancer induction and genetic damage. The gray measures the total absorbed energy of radiation, but the probability of stochastic damage also depends on the type and energy of the radiation and the types of tissues involved. This probability is related to the equivalent dose in sieverts (Sv), which has the same dimensions as the gray. It is related to the gray by weighting factors described in the articles on equivalent dose and effective dose.
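The arithmetic behind these quantities is simple enough to sketch in code. The following Python snippet is an illustrative sketch only: the weighting factors shown are the commonly cited ICRP values for a few radiation types (photons and electrons 1, protons 2, alpha particles 20), and the example doses and masses are hypothetical inputs, not figures from this article.

```python
# Illustrative sketch of the gray's definition (1 Gy = 1 J/kg) and of how a
# radiation weighting factor turns absorbed dose (Gy) into equivalent dose (Sv).
# Weighting factors are commonly cited ICRP values; inputs are hypothetical.

RADIATION_WEIGHTING = {"photon": 1.0, "electron": 1.0, "proton": 2.0, "alpha": 20.0}

def energy_deposited_joules(dose_gy: float, mass_kg: float) -> float:
    """Total energy absorbed: dose (J/kg) times the absorbing mass (kg)."""
    return dose_gy * mass_kg

def equivalent_dose_sv(dose_gy: float, radiation: str) -> float:
    """Equivalent dose in sieverts: weighting factor times absorbed dose in grays."""
    return RADIATION_WEIGHTING[radiation] * dose_gy

print(energy_deposited_joules(2.0, 0.5))   # a 2 Gy fraction in 0.5 kg of tissue: 1.0 J
print(equivalent_dose_sv(1.0, "photon"))   # 1.0 Sv: gray and sievert coincide for photons
print(equivalent_dose_sv(1.0, "alpha"))    # 20.0 Sv: alpha radiation is weighted 20-fold
```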
The International Committee for Weights and Measures states: "In order to avoid any risk of confusion between the absorbed dose D and the dose equivalent H, the special names for the respective units should be used, that is, the name gray should be used instead of joules per kilogram for the unit of absorbed dose D and the name sievert instead of joules per kilogram for the unit of dose equivalent H." Absorbed dose (in grays) is first obtained by computational techniques, and from this value the equivalent doses are derived. For X-rays and gamma rays the gray is numerically the same value when expressed in sieverts, but for alpha particles one gray is equivalent to 20 sieverts, and a radiation weighting factor is applied accordingly. Radiation poisoning The gray is conventionally used to express the severity of what are known as "tissue effects" from doses received in acute exposure to high levels of ionizing radiation. These are effects that are certain to happen, as opposed to the uncertain effects of low levels of radiation that have a probability of causing damage. A whole-body acute exposure to 5 grays or more of high-energy radiation usually leads to death within 14 days. LD1 is 2.5 Gy, LD50 is 5 Gy and LD99 is 8 Gy. The LD50 dose represents 375 joules for a 75 kg adult. Absorbed dose in matter The gray is used to measure absorbed dose rates in non-tissue materials for processes such as radiation hardening, food irradiation and electron irradiation. Measuring and controlling the value of absorbed dose is vital to ensuring correct operation of these processes. Kerma Kerma ("kinetic energy released per unit mass") is used in radiation metrology as a measure of the liberated energy of ionisation due to irradiation, and is expressed in grays. Importantly, kerma can differ from absorbed dose, depending on the radiation energies involved, partly because ionization energy is not accounted for. Whilst roughly equal at low energies, kerma is much higher than absorbed dose at higher energies, because some energy escapes from the absorbing volume in the form of bremsstrahlung (X-rays) or fast-moving electrons. Kerma, when applied to air, is equivalent to the legacy roentgen unit of radiation exposure, but there is a difference in the definition of these two units. The gray is defined independently of any target material; the roentgen, however, was defined specifically by the ionisation effect in dry air, which did not necessarily represent the effect on other media. Development of the absorbed dose concept and the gray Wilhelm Röntgen discovered X-rays on November 8, 1895, and their use spread very quickly for medical diagnostics, particularly broken bones and embedded foreign objects, where they were a revolutionary improvement over previous techniques. Due to the wide use of X-rays and the growing realisation of the dangers of ionizing radiation, measurement standards became necessary for radiation intensity, and various countries developed their own, but using differing definitions and methods. Eventually, in order to promote international standardisation, the first International Congress of Radiology (ICR), meeting in London in 1925, proposed a separate body to consider units of measure. This was called the International Commission on Radiation Units and Measurements, or ICRU, and came into being at the Second ICR in Stockholm in 1928, under the chairmanship of Manne Siegbahn.
One of the earliest techniques of measuring the intensity of X-rays was to measure their ionising effect in air by means of an air-filled ion chamber. At the first ICRU meeting it was proposed that one unit of X-ray dose should be defined as the quantity of X-rays that would produce one esu of charge in one cubic centimetre of dry air at 0 °C and 1 standard atmosphere of pressure. This unit of radiation exposure was named the roentgen in honour of Wilhelm Röntgen, who had died five years previously. At the 1937 meeting of the ICRU, this definition was extended to apply to gamma radiation. This approach, although a great step forward in standardisation, had the disadvantage of not being a direct measure of the absorption of radiation, and thereby the ionisation effect, in various types of matter including human tissue; it was a measurement only of the effect of the X-rays in a specific circumstance: the ionisation effect in dry air. In 1940, Louis Harold Gray, who had been studying the effect of neutron damage on human tissue, together with William Valentine Mayneord and the radiobiologist John Read, published a paper in which a new unit of measure, dubbed the gram roentgen (symbol: gr), was proposed and defined as "that amount of neutron radiation which produces an increment in energy in unit volume of tissue equal to the increment of energy produced in unit volume of water by one roentgen of radiation". This unit was found to be equivalent to 88 ergs in air, and made the absorbed dose, as it subsequently became known, dependent on the interaction of the radiation with the irradiated material, not just an expression of radiation exposure or intensity, which the roentgen represented. In 1953 the ICRU recommended the rad, equal to 100 erg/g, as the new unit of measure of absorbed radiation. The rad was expressed in coherent cgs units. In the late 1950s, the CGPM invited the ICRU to join other scientific bodies to work on the development of the International System of Units, or SI. The CCU decided to define the SI unit of absorbed radiation as energy deposited by ionizing radiation per unit mass of absorbing material, which is how the rad had been defined, but in MKS units it would be expressed as the joule per kilogram. This was confirmed in 1975 by the 15th CGPM, and the unit was named the "gray" in honour of Louis Harold Gray, who had died in 1965. The gray was thus equal to 100 rad. Notably, the centigray (numerically equivalent to the rad) is still widely used to describe absolute absorbed doses in radiotherapy. The adoption of the gray by the 15th General Conference on Weights and Measures as the unit of measure of the absorption of ionizing radiation, specific energy absorption, and of kerma in 1975 was the culmination of over half a century of work, both in the understanding of the nature of ionizing radiation and in the creation of coherent radiation quantities and units. Radiation-related quantities The following table shows radiation quantities in SI and non-SI units.
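Among the conversions such a table encodes, the rad-to-gray relation can be verified directly from the definitions given above, assuming only the standard conversions 1 erg = 10^-7 J and 1 g = 10^-3 kg:

\[
1\ \text{rad} = 100\ \frac{\text{erg}}{\text{g}}
= 100 \times \frac{10^{-7}\ \text{J}}{10^{-3}\ \text{kg}}
= 10^{-2}\ \frac{\text{J}}{\text{kg}}
= 10^{-2}\ \text{Gy},
\]

so 1 Gy = 100 rad and 1 cGy = 1 rad, consistent with the centigray's continued use in radiotherapy.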
Physical sciences
Radioactivity
null
287188
https://en.wikipedia.org/wiki/Number%20line
Number line
A number line is a graphical representation of a straight line that serves as a spatial representation of numbers, usually graduated like a ruler with a particular origin point representing the number zero and evenly spaced marks in either direction representing integers, imagined to extend infinitely. The association between numbers and points on the line links arithmetical operations on numbers to geometric relations between points, and provides a conceptual framework for learning mathematics. In elementary mathematics, the number line is initially used to teach addition and subtraction of integers, especially involving negative numbers. As students progress, more kinds of numbers can be placed on the line, including fractions, decimal fractions, square roots, and transcendental numbers such as the circle constant π. Every point of the number line corresponds to a unique real number, and every real number to a unique point. Using a number line, numerical concepts can be interpreted geometrically and geometric concepts interpreted numerically. An inequality between numbers corresponds to a left-or-right order relation between points. Numerical intervals are associated to geometrical segments of the line. Operations and functions on numbers correspond to geometric transformations of the line. Wrapping the line into a circle relates modular arithmetic to the geometric composition of angles. Marking the line with logarithmically spaced graduations associates multiplication and division with geometric translations, the principle underlying the slide rule. In analytic geometry, coordinate axes are number lines which associate points in a geometric space with tuples of numbers, so geometric shapes can be described using numerical equations and numerical functions can be graphed. In advanced mathematics, the number line is usually called the real line or real number line, and is a geometric line isomorphic to the set of real numbers, with which it is often conflated; both the real numbers and the real line are commonly denoted ℝ or R. The real line is a one-dimensional real coordinate space, so is sometimes denoted ℝ¹ when comparing it to higher-dimensional spaces. The real line is a one-dimensional Euclidean space using the difference between numbers to define the distance between points on the line. It can also be thought of as a vector space, a metric space, a topological space, a measure space, or a linear continuum. The real line can be embedded in the complex plane, used as a two-dimensional geometric representation of the complex numbers. History The first mention of the number line used for operation purposes is found in John Wallis's Treatise of Algebra (1685). In his treatise, Wallis describes addition and subtraction on a number line in terms of moving forward and backward, under the metaphor of a person walking. An earlier depiction without mention of operations, though, is found in John Napier's A Description of the Admirable Table of Logarithmes (1616), which shows values 1 through 12 lined up from left to right. Contrary to popular belief, René Descartes's original La Géométrie does not feature a number line, defined as we use it today, though it does use a coordinate system. In particular, Descartes's work does not contain specific numbers mapped onto lines, only abstract quantities. Drawing the number line A number line is usually represented as being horizontal, but in a Cartesian coordinate plane the vertical axis (y-axis) is also a number line.
The arrow on the line indicates the positive direction in which numbers increase. Some textbooks attach an arrow to both sides, suggesting that the arrow indicates continuation. This is unnecessary, since according to the rules of geometry a line without endpoints continues indefinitely in the positive and negative directions. A line with one endpoint is known as a ray, and a line with two endpoints as a line segment. Comparing numbers If a particular number is farther to the right on the number line than is another number, then the first number is greater than the second (equivalently, the second is less than the first). The distance between them is the magnitude of their difference—that is, it measures the first number minus the second one, or equivalently the absolute value of the second number minus the first one. Taking this difference is the process of subtraction. Thus, for example, the length of a line segment between 0 and some other number represents the magnitude of the latter number. Two numbers can be added by "picking up" the length from 0 to one of the numbers, and putting it down again with the end that was 0 placed on top of the other number. Two numbers can be multiplied as in this example: To multiply 5 × 3, note that this is the same as 5 + 5 + 5, so pick up the length from 0 to 5 and place it to the right of 5, and then pick up that length again and place it to the right of the previous result. This gives a result that is 3 combined lengths of 5 each; since the process ends at 15, we find that 5 × 3 = 15. Division can be performed as in the following example: To divide 6 by 2—that is, to find out how many times 2 goes into 6—note that the length from 0 to 2 lies at the beginning of the length from 0 to 6; pick up the former length and put it down again to the right of its original position, with the end formerly at 0 now placed at 2, and then move the length to the right of its latest position again. This puts the right end of the length 2 at the right end of the length from 0 to 6. Since three lengths of 2 filled the length 6, 2 goes into 6 three times (that is, 6 ÷ 2 = 3). Portions of the number line The section of the number line between two numbers is called an interval. If the section includes both numbers it is said to be a closed interval, while if it excludes both numbers it is called an open interval. If it includes one of the numbers but not the other one, it is called a half-open interval. All the points extending forever in one direction from a particular point are together known as a ray. If the ray includes the particular point, it is a closed ray; otherwise it is an open ray. Extensions of the concept Logarithmic scale On the number line, the distance between two points is the unit length if and only if the difference of the represented numbers equals 1. Other choices are possible. One of the most common choices is the logarithmic scale, which is a representation of the positive numbers on a line, such that the distance between two points is the unit length if the ratio of the represented numbers has a fixed value, typically 10. In such a logarithmic scale, the origin represents 1; one inch to the right, one has 10; one inch to the right of 10 one has 100, then 1,000, then 10,000, etc. Similarly, one inch to the left of 1, one has 1/10, then 1/100, etc. This approach is useful when one wants to represent, on the same figure, values with very different orders of magnitude.
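Since successive powers of 10 sit one unit apart on such a scale, the position of a positive number is simply its base-10 logarithm. The following Python sketch illustrates this convention (the function name and the one-inch-per-decade unit are illustrative assumptions, not anything specified here), including the slide-rule principle that adding lengths multiplies the represented numbers.

```python
import math

def log_scale_position(x: float) -> float:
    """Position on a logarithmic scale, in inches at one inch per factor of 10,
    measured from the origin mark, which represents 1 (positive x only)."""
    return math.log10(x)

print(log_scale_position(1))     # 0.0: the origin represents 1
print(log_scale_position(10))    # 1.0: one inch to the right
print(log_scale_position(100))   # 2.0
print(log_scale_position(0.1))   # -1.0: one inch to the left of 1

# Slide-rule principle: adding lengths multiplies the represented numbers,
# since log10(2) + log10(3) equals log10(6) (up to floating-point rounding).
print(math.isclose(log_scale_position(2) + log_scale_position(3),
                   log_scale_position(6)))   # True
```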
For example, one requires a logarithmic scale for representing simultaneously the size of the different bodies that exist in the Universe, typically, a photon, an electron, an atom, a molecule, a human, the Earth, the Solar System, a galaxy, and the visible Universe. Logarithmic scales are used in slide rules for multiplying or dividing numbers by adding or subtracting lengths on logarithmic scales. Combining number lines A line drawn through the origin at right angles to the real number line can be used to represent the imaginary numbers. This line, called the imaginary line, extends the number line to a complex number plane, with points representing complex numbers. Alternatively, one real number line can be drawn horizontally to denote possible values of one real number, commonly called x, and another real number line can be drawn vertically to denote possible values of another real number, commonly called y. Together these lines form what is known as a Cartesian coordinate system, and any point in the plane represents the value of a pair of real numbers. Further, the Cartesian coordinate system can itself be extended by visualizing a third number line "coming out of the screen (or page)", measuring a third variable called z. Positive numbers are closer to the viewer's eyes than the screen is, while negative numbers are "behind the screen"; larger numbers are farther from the screen. Then any point in the three-dimensional space that we live in represents the values of a trio of real numbers. Advanced concepts As a linear continuum The real line is a linear continuum under the standard ordering. Specifically, the real line is linearly ordered by <, and this ordering is dense and has the least-upper-bound property. In addition to the above properties, the real line has no maximum or minimum element. It also has a countable dense subset, namely the set of rational numbers. It is a theorem that any linear continuum with a countable dense subset and no maximum or minimum element is order-isomorphic to the real line. The real line also satisfies the countable chain condition: every collection of mutually disjoint, nonempty open intervals in ℝ is countable. In order theory, the famous Suslin problem asks whether every linear continuum satisfying the countable chain condition that has no maximum or minimum element is necessarily order-isomorphic to ℝ. This statement has been shown to be independent of the standard axiomatic system of set theory known as ZFC. As a metric space The real line forms a metric space, with the distance function given by the absolute difference: d(x, y) = |x − y|. The metric tensor is clearly the 1-dimensional Euclidean metric. Since the n-dimensional Euclidean metric can be represented in matrix form as the n-by-n identity matrix, the metric on the real line is simply the 1-by-1 identity matrix, i.e. 1. If ε > 0 and p is a real number, then the ε-ball in ℝ centered at p is simply the open interval (p − ε, p + ε). This real line has several important properties as a metric space: The real line is a complete metric space, in the sense that any Cauchy sequence of points converges. The real line is path-connected and is one of the simplest examples of a geodesic metric space. The Hausdorff dimension of the real line is equal to one. As a topological space The real line carries a standard topology, which can be introduced in two different, equivalent ways. First, since the real numbers are totally ordered, they carry an order topology. Second, the real numbers inherit a metric topology from the metric defined above.
The order topology and metric topology on ℝ are the same. As a topological space, the real line is homeomorphic to the open interval (0, 1). The real line is trivially a topological manifold of dimension 1. Up to homeomorphism, it is one of only two different connected 1-manifolds without boundary, the other being the circle. It also has a standard differentiable structure on it, making it a differentiable manifold. (Up to diffeomorphism, there is only one differentiable structure that the topological space supports.) The real line is a locally compact space and a paracompact space, as well as second-countable and normal. It is also path-connected, and is therefore connected as well, though it can be disconnected by removing any one point. The real line is also contractible, and as such all of its homotopy groups and reduced homology groups are zero. As a locally compact space, the real line can be compactified in several different ways. The one-point compactification of ℝ is a circle (namely, the real projective line), and the extra point can be thought of as an unsigned infinity. Alternatively, the real line has two ends, and the resulting end compactification is the extended real line [−∞, +∞]. There is also the Stone–Čech compactification of the real line, which involves adding an infinite number of additional points. In some contexts, it is helpful to place other topologies on the set of real numbers, such as the lower limit topology or the Zariski topology. For the real numbers, the latter is the same as the finite complement topology. As a vector space The real line is a vector space over the field of real numbers (that is, over itself) of dimension 1. It has the usual multiplication as an inner product, making it a Euclidean vector space. The norm defined by this inner product is simply the absolute value. As a measure space The real line carries a canonical measure, namely the Lebesgue measure. This measure can be defined as the completion of a Borel measure defined on ℝ, where the measure of any interval is the length of the interval. Lebesgue measure on the real line is one of the simplest examples of a Haar measure on a locally compact group. In real algebras When A is a unital real algebra, the products of real numbers with 1 form a real line within the algebra. For example, in the complex plane z = x + iy, the subspace {z : y = 0} is a real line. Similarly, the algebra of quaternions q = w + x i + y j + z k has a real line in the subspace {q : x = y = z = 0 }. When the real algebra is a direct sum A = ℝ ⊕ V, a conjugation on A is introduced by the mapping v ↦ −v of the subspace V. In this way the real line consists of the fixed points of the conjugation. For a dimension n, the square matrices form a ring that has a real line in the form of the real multiples of the identity matrix in the ring.
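As an explicit witness to the homeomorphism with the open interval (0, 1) mentioned above, one standard choice among many (this particular map is an illustrative selection, not one singled out by the article) is the logistic function:

\[
f \colon \mathbb{R} \to (0, 1), \qquad f(x) = \frac{1}{1 + e^{-x}}, \qquad f^{-1}(y) = \ln \frac{y}{1 - y},
\]

where f and its inverse are both continuous bijections, so f is a homeomorphism; its strict monotonicity also shows that the map respects the order structure discussed earlier.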
Mathematics
Real analysis
null
287192
https://en.wikipedia.org/wiki/Helipad
Helipad
A helipad is the landing area of a heliport, used by helicopters, powered-lift, and other vertical-lift aircraft for landing. While helicopters and powered-lift aircraft are able to operate on a variety of relatively flat surfaces, a fabricated helipad provides a clearly marked hard surface away from obstacles where such aircraft can land safely. Larger helipads, intended for use by helicopters and other vertical take-off and landing (VTOL) aircraft, may be called vertiports. An example is Vertiport Chicago, which opened in 2015. Usage Helipads may be located at a heliport or airport where fuel, air traffic control and service facilities for aircraft are available. Most helipads are located away from populated areas due to noise, wind, space, and cost constraints. Some skyscrapers have one on their roofs to accommodate air taxi services. Some basic helipads are built on top of highrise buildings for evacuation in case of a major fire outbreak. Major police departments may use a dedicated helipad at heliports as a base for police helicopters. Large ships and oil platforms usually have a helipad on board for emergency use. In such a case, the terms "helicopter deck", "helideck", or "helodeck" are used. Helipads are common features at hospitals, where they serve to facilitate medical evacuation or air ambulance transfers of patients to trauma centers or to accept patients from remote areas without local hospitals or facilities capable of providing the level of emergency medicine required. In urban environments, these heliports are sometimes located on the roof of the hospital. Rooftop helipads sometimes display a large two-digit number, representing the weight limit (in thousands of pounds) of the pad. A second number may be present, representing the maximum rotor diameter in feet (see the decoding sketch below). Location identifiers are often, but not always, issued for helipads. They may be issued by the appropriate aviation authority. Authorized agencies include the Federal Aviation Administration in the United States, Transport Canada in Canada, the International Civil Aviation Organization, and the International Air Transport Association. Some helipads may have location identifiers from multiple sources, and these identifiers may be of different format and name. Construction Helipads are usually constructed out of concrete and are marked with a circle and/or a letter "H", so as to be visible from the air. Sometimes wildfire fighters will construct a temporary one from timber to receive supplies in remote areas. Rig mats may be used to build helipads. Landing pads may also be constructed in extreme conditions such as on ice. The world's highest helipad, built by India, is located on the Siachen Glacier at a height of 21,000 feet (6400 m) above sea level. Portability A portable helipad is a helipad structure with a rugged frame that can be used to land helicopters in any areas with slopes of up to 30 degrees, such as hillsides, riverbeds and boggy areas. Portable helipads can be transported by helicopter or powered-lift aircraft to place them where a VTOL aircraft needs to land, as long as there are no insurmountable obstructions nearby.
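The rooftop marking convention described above (a two-digit weight limit in thousands of pounds, sometimes followed by a second number giving the maximum rotor diameter in feet) lends itself to a small decoding sketch. The Python function below is hypothetical, assuming the marking is supplied as a plain string such as "12 40"; it is not part of any aviation standard or API.

```python
def decode_rooftop_marking(marking: str) -> dict:
    """Decode a hypothetical rooftop helipad marking string such as "12 40":
    the first number is the weight limit in thousands of pounds, and an
    optional second number is the maximum rotor diameter in feet."""
    parts = marking.split()
    decoded = {"weight_limit_lb": int(parts[0]) * 1000}
    if len(parts) > 1:
        decoded["max_rotor_diameter_ft"] = int(parts[1])
    return decoded

print(decode_rooftop_marking("12 40"))
# {'weight_limit_lb': 12000, 'max_rotor_diameter_ft': 40}
```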
Technology
Concepts of aviation
null
287207
https://en.wikipedia.org/wiki/Plasmodium
Plasmodium
Plasmodium is a genus of unicellular eukaryotes that are obligate parasites of vertebrates and insects. The life cycles of Plasmodium species involve development in a blood-feeding insect host which then injects parasites into a vertebrate host during a blood meal. Parasites grow within a vertebrate body tissue (often the liver) before entering the bloodstream to infect red blood cells. The ensuing destruction of host red blood cells can result in malaria. During this infection, some parasites are picked up by a blood-feeding insect (mosquitoes in the majority of cases), continuing the life cycle. Plasmodium is a member of the phylum Apicomplexa, a large group of parasitic eukaryotes. Within Apicomplexa, Plasmodium is in the order Haemosporida and family Plasmodiidae. Over 200 species of Plasmodium have been described, many of which have been subdivided into 14 subgenera based on parasite morphology and host range. Evolutionary relationships among different Plasmodium species do not always follow taxonomic boundaries; some species that are morphologically similar or infect the same host turn out to be distantly related. Species of Plasmodium are distributed globally wherever suitable hosts are found. Insect hosts are most frequently mosquitoes of the genera Culex and Anopheles. Vertebrate hosts include reptiles, birds, and mammals. Plasmodium parasites were first identified in the late 19th century by Charles Laveran. Over the course of the 20th century, many other species were discovered in various hosts and classified, including five species that regularly infect humans: P. vivax, P. falciparum, P. malariae, P. ovale, and P. knowlesi. P. falciparum is by far the most lethal in humans, resulting in hundreds of thousands of deaths per year. A number of drugs have been developed to treat Plasmodium infection; however, the parasites have evolved resistance to each drug developed. Although the parasite can also infect people via blood transfusion, this is very rare, and Plasmodium cannot be spread directly from person to person. Some subspecies of Plasmodium are obligate intracellular parasites. Description The genus Plasmodium consists of all eukaryotes in the phylum Apicomplexa that both undergo the asexual replication process of merogony inside host red blood cells and produce the crystalline pigment hemozoin as a byproduct of digesting host hemoglobin. Plasmodium species contain many features that are common to other eukaryotes, and some that are unique to their phylum or genus. The Plasmodium genome is separated into 14 chromosomes contained in the nucleus. Plasmodium parasites maintain a single copy of their genome through much of the life cycle, doubling the genome only for a brief sexual exchange within the midgut of the insect host. Attached to the nucleus is the endoplasmic reticulum (ER), which functions similarly to the ER in other eukaryotes. Proteins are trafficked from the ER to the Golgi apparatus, which in apicomplexans generally consists of a single membrane-bound compartment. From here, proteins are trafficked to various cellular compartments or to the cell surface. Like other apicomplexans, Plasmodium species have several cellular structures at the apical end of the parasite that serve as specialized organelles for secreting effectors into the host. The most prominent are the bulbous rhoptries, which contain parasite proteins involved in invading the host cell and modifying the host once inside.
Adjacent to the rhoptries are smaller structures termed micronemes that contain parasite proteins required for motility as well as recognizing and attaching to host cells. Spread throughout the parasite are secretory vesicles called dense granules that contain parasite proteins involved in modifying the membrane that separates the parasite from the host, termed the parasitophorous vacuole. Species of Plasmodium also contain two large membrane-bound organelles of endosymbiotic origin, the mitochondrion and the apicoplast, both of which play key roles in the parasite's metabolism. Unlike mammalian cells, which contain many mitochondria, Plasmodium cells contain a single large mitochondrion that coordinates its division with that of the Plasmodium cell. As in other eukaryotes, the Plasmodium mitochondrion is capable of generating energy in the form of ATP via the citric acid cycle; however, this function is only required for parasite survival in the insect host, and is not needed for growth in red blood cells. A second organelle, the apicoplast, is derived from a secondary endosymbiosis event, in this case the acquisition of a red alga by the Plasmodium ancestor. The apicoplast is involved in the synthesis of various metabolic precursors, including fatty acids, isoprenoids, iron-sulphur clusters, and components of the heme biosynthesis pathway. Life cycle The life cycle of Plasmodium involves several distinct stages in the insect and vertebrate hosts. Parasites are generally introduced into a vertebrate host by the bite of an insect host (generally a mosquito, with the exception of some Plasmodium species of reptiles). Parasites first infect the liver or other tissue, where they undergo a single large round of replication before exiting the host cell to infect erythrocytes. At this point, some species of Plasmodium of primates can form a long-lived dormant stage called a hypnozoite, which can remain in the liver for more than a year. However, for most Plasmodium species, the parasites in infected liver cells develop only into what are called merozoites. After emerging from the liver, these enter red blood cells, as explained above. They then go through continuous cycles of erythrocyte infection, while a small percentage of parasites differentiate into a sexual stage called a gametocyte, which is picked up by an insect host taking a blood meal. In some hosts, invasion of erythrocytes by Plasmodium species can result in disease, called malaria. This can sometimes be severe, rapidly followed by death of the host (e.g. P. falciparum in humans). In other hosts, Plasmodium infection can apparently be asymptomatic. Even when humans have such subclinical plasmodial infections, there can nevertheless be very large numbers of multiplying parasites concealed in the body, particularly in the spleen and bone marrow. Certainly, this applies in the case of P. vivax. These hidden parasites (in addition to hypnozoites) are thought to be the origin of instances of recurrent P. vivax malaria. Within the red blood cells, the merozoites grow first to a ring-shaped form and then to a larger form called a trophozoite. Trophozoites then mature to schizonts, which divide several times to produce new merozoites. The infected red blood cell eventually bursts, allowing the new merozoites to travel within the bloodstream to infect new red blood cells. Most merozoites continue this replicative cycle; however, some merozoites, upon infecting red blood cells, differentiate into male or female sexual forms called gametocytes.
These gametocytes circulate in the blood until they are taken up by a mosquito feeding on the infected vertebrate host. In the mosquito, the gametocytes move along with the blood meal to the mosquito's midgut. Here the gametocytes develop into male and female gametes, which fertilize each other, forming a zygote. Zygotes then develop into a motile form called an ookinete, which penetrates the wall of the midgut. Upon traversing the midgut wall, the ookinete embeds into the gut's exterior membrane and develops into an oocyst. Oocysts divide many times to produce large numbers of small, elongated sporozoites. These sporozoites migrate to the salivary glands of the mosquito, where they can be injected into the blood of the next host the mosquito bites, repeating the cycle. Evolution and taxonomy Taxonomy Plasmodium belongs to the phylum Apicomplexa, a taxonomic group of single-celled parasites with characteristic secretory organelles at one end of the cell. Within Apicomplexa, Plasmodium is within the order Haemosporida, a group that includes all apicomplexans that live within blood cells. Based on the presence of the pigment hemozoin and the method of asexual reproduction, the order is further split into four families, of which Plasmodium belongs to the family Plasmodiidae. The genus Plasmodium consists of over 200 species, generally described on the basis of their appearance in blood smears of infected vertebrates. These species have been categorized on the basis of their morphology and host range into 14 subgenera:
Subgenus Asiamoeba (Telford, 1988) – reptiles
Subgenus Bennettinia (Valkiunas, 1997) – birds
Subgenus Carinamoeba (Garnham, 1966) – reptiles
Subgenus Giovannolaia (Corradetti et al., 1963) – birds
Subgenus Haemamoeba (Corradetti et al., 1963) – birds
Subgenus Huffia (Corradetti et al., 1963) – birds
Subgenus Lacertamoeba (Telford, 1988) – reptiles
Subgenus Laverania (Bray, 1958) – great apes, humans
Subgenus Novyella (Corradetti et al., 1963) – birds
Subgenus Ophidiella (Telford, 1988) – reptiles
Subgenus Paraplasmodium (Telford, 1988) – reptiles
Subgenus Plasmodium (Bray, 1955) – monkeys and apes
Subgenus Sauramoeba (Garnham, 1966) – reptiles
Subgenus Vinckeia (Garnham, 1964) – mammals, including some primates
Species infecting monkeys and apes, with the exceptions of P. falciparum and P. reichenowi (which together make up the subgenus Laverania), are classified in the subgenus Plasmodium. Parasites infecting other mammals, including some primates (lemurs and others), are classified in the subgenus Vinckeia. The five subgenera Bennettinia, Giovannolaia, Haemamoeba, Huffia, and Novyella contain the known avian malarial species. The remaining subgenera (Asiamoeba, Carinamoeba, Lacertamoeba, Ophidiella, Paraplasmodium, and Sauramoeba) contain the diverse groups of parasites found to infect reptiles. Phylogeny More recent studies of Plasmodium species using molecular methods have implied that the group's evolution has not perfectly followed its taxonomy. Many Plasmodium species that are morphologically similar or infect the same hosts turn out to be only distantly related. In the 1990s, several studies sought to evaluate evolutionary relationships of Plasmodium species by comparing ribosomal RNA and a surface protein gene from various species, finding the human parasite P. falciparum to be more closely related to avian parasites than to other parasites of primates.
However, later studies sampling more Plasmodium species found the parasites of mammals to form a clade along with the genus Hepatocystis, while the parasites of birds or lizards appear to form a separate clade, with evolutionary relationships not following the subgenera. Estimates for when different Plasmodium lineages diverged have varied broadly. Estimates for the diversification of the order Haemosporida range from around 16.2 million to 100 million years ago. There has been particular interest in dating the divergence of the human parasite P. falciparum from other Plasmodium lineages due to its medical importance; estimated dates for this divergence range from 110,000 to 2.5 million years ago. Distribution Plasmodium species are distributed globally. All Plasmodium species are parasitic and must pass between a vertebrate host and an insect host to complete their life cycles. Different species of Plasmodium display different host ranges: some species are restricted to a single vertebrate and insect host, while others can infect several species of vertebrates and insects. Vertebrates Plasmodium parasites have been described in a broad array of vertebrate hosts including reptiles, birds, and mammals. While many species can infect more than one vertebrate host, they are generally specific to one of these classes (such as birds). Humans are primarily infected by five species of Plasmodium, with the overwhelming majority of severe disease and death caused by Plasmodium falciparum. Some species that infect humans can also infect other primates, and zoonoses of certain species (e.g. P. knowlesi) from other primates to humans are common. Non-human primates also host a variety of Plasmodium species that do not generally infect humans. Some of these can cause severe disease in primates, while others can remain in the host for prolonged periods without causing disease. Many other mammals also carry Plasmodium species, such as a variety of rodents, ungulates, and bats. Again, some species of Plasmodium can cause severe disease in some of these hosts, while many appear not to. Over 150 species of Plasmodium infect a broad variety of birds. In general, each species of Plasmodium infects one to a few species of birds. Plasmodium parasites that infect birds tend to persist in a given host for years or for the lifetime of the host, although in some cases Plasmodium infections can result in severe illness and rapid death. Unlike Plasmodium species that infect mammals, those infecting birds are distributed across the globe. Species from several subgenera of Plasmodium infect diverse reptiles. Plasmodium parasites have been described in most lizard families and, like avian parasites, are spread worldwide. Again, infections can result either in severe disease or be apparently asymptomatic, depending on the parasite and the host. A number of drugs have been developed over the years to control Plasmodium infection in vertebrate hosts, particularly in humans. Quinine was used as a frontline antimalarial from the 17th century until widespread resistance emerged in the early 20th century. Resistance to quinine spurred the development of a broad array of antimalarial medications through the 20th century, including chloroquine, proguanil, atovaquone, sulfadoxine/pyrimethamine, mefloquine, and artemisinin. In all cases, parasites resistant to a given drug have emerged within a few decades of the drug's deployment.
To combat this, antimalarial drugs are frequently used in combination, with artemisinin combination therapies currently being the gold standard for treatment. In general, antimalarial drugs target the life stages of Plasmodium parasites that reside within vertebrate red blood cells, as these are the stages that tend to cause disease. However, drugs targeting other stages of the parasite life cycle are under development, both to prevent infection in travelers and to prevent transmission of sexual stages to insect hosts. Insects In addition to a vertebrate host, all Plasmodium species also infect a bloodsucking insect host, generally a mosquito (although some reptile-infecting parasites are transmitted by sandflies). Mosquitoes of the genera Culex, Anopheles, Culiseta, Mansonia, and Aedes act as insect hosts for various Plasmodium species. The best studied of these are the Anopheles mosquitoes, which host the Plasmodium parasites of human malaria, as well as Culex mosquitoes, which host the Plasmodium species that cause malaria in birds. Only female mosquitoes are infected with Plasmodium, since only they feed on the blood of vertebrate hosts. Different species affect their insect hosts differently. Sometimes, insects infected with Plasmodium have reduced lifespans and a reduced ability to produce offspring. Further, some species of Plasmodium appear to cause insects to prefer biting infected vertebrate hosts over uninfected ones. History Charles Louis Alphonse Laveran first described parasites in the blood of malaria patients in 1880. He named the parasite Oscillaria malariae. In 1885, zoologists Ettore Marchiafava and Angelo Celli reexamined the parasite and assigned it to a new genus, Plasmodium, named for its resemblance to the multinucleate cells of slime molds of the same name. The fact that several species may be involved in causing different forms of malaria was first recognized by Camillo Golgi in 1886. Soon thereafter, Giovanni Battista Grassi and Raimondo Filetti named the parasites causing two different types of human malaria Plasmodium vivax and Plasmodium malariae. In 1897, William Welch identified and named Plasmodium falciparum. This was followed by the recognition of the other two species of Plasmodium which infect humans: Plasmodium ovale (1922) and Plasmodium knowlesi (identified in long-tailed macaques in 1931, and in humans in 1965). The contribution of insect hosts to the Plasmodium life cycle was described in 1897 by Ronald Ross and in 1899 by Giovanni Battista Grassi, Amico Bignami, and Giuseppe Bastianelli. In 1966, Cyril Garnham proposed separating Plasmodium into nine subgenera based on host specificity and parasite morphology. This included four subgenera that had previously been proposed for bird-infecting Plasmodium species by A. Corradetti in 1963. This scheme was expanded upon by Sam R. Telford in 1988, when he reclassified Plasmodium parasites that infect reptiles, adding five subgenera. In 1997, G. Valkiunas reclassified the bird-infecting Plasmodium species, adding a fifth avian subgenus, Bennettinia.
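The subgeneric scheme assembled over this history can be restated as a simple lookup table. The Python snippet below merely repeats the host associations already listed in the taxonomy section above; it is a convenience summary for illustration, not an authoritative data source.

```python
# Host associations of the 14 Plasmodium subgenera, copied from the
# taxonomy section above; a convenience lookup, not an authoritative source.
SUBGENUS_HOSTS = {
    "Asiamoeba": "reptiles",
    "Bennettinia": "birds",
    "Carinamoeba": "reptiles",
    "Giovannolaia": "birds",
    "Haemamoeba": "birds",
    "Huffia": "birds",
    "Lacertamoeba": "reptiles",
    "Laverania": "great apes, humans",
    "Novyella": "birds",
    "Ophidiella": "reptiles",
    "Paraplasmodium": "reptiles",
    "Plasmodium": "monkeys and apes",
    "Sauramoeba": "reptiles",
    "Vinckeia": "mammals, including some primates",
}

# Example query: the five avian subgenera named in the text.
avian = sorted(s for s, h in SUBGENUS_HOSTS.items() if h == "birds")
print(avian)  # ['Bennettinia', 'Giovannolaia', 'Haemamoeba', 'Huffia', 'Novyella']
```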
Biology and health sciences
SAR supergroup
Plants
287528
https://en.wikipedia.org/wiki/Verrazzano-Narrows%20Bridge
Verrazzano-Narrows Bridge
The Verrazzano-Narrows Bridge (also referred to as the Narrows Bridge, the Verrazzano Bridge, and simply the Verrazzano) is a suspension bridge connecting the boroughs of Staten Island and Brooklyn in New York City, United States. It spans the Narrows, a body of water linking the relatively enclosed New York Harbor with Lower New York Bay and the Atlantic Ocean. It is the only fixed crossing of the Narrows. The double-deck bridge carries 13 lanes of Interstate 278: seven on the upper level and six on the lower level. The span is named for Giovanni da Verrazzano, who in 1524 was the first European explorer to enter New York Harbor and the Hudson River. Engineer David B. Steinman proposed a bridge across the Narrows in the late 1920s, but plans were deferred over the next twenty years. A 1920s attempt to build a Staten Island Tunnel was aborted, as was a 1930s plan for vehicular tubes underneath the Narrows. Discussion of a tunnel resurfaced in the mid-1930s and early 1940s, but the plans were again denied. In the late 1940s, urban planner Robert Moses championed a bridge across the Narrows as a way to connect Staten Island with the rest of the city. Various problems delayed the start of construction until 1959. Designed by Othmar Ammann, Leopold Just, and other engineers at Ammann & Whitney, the bridge opened on November 21, 1964. The lower deck opened in 1969 to accommodate increasing traffic loads. The bridge was refurbished in the 1990s and again in the 2010s and 2020s. The bridge has a central span of 4,260 feet. It was the longest suspension bridge in the world until it was surpassed by the Humber Bridge in the UK in 1981. The bridge has the 18th-longest main span in the world, as well as the longest in the Americas. When the bridge was officially named in 1960, it was misspelled "Verrazano-Narrows Bridge" due to an error in the construction contract; the name was officially corrected in 2018. The Verrazzano-Narrows Bridge collects tolls in both directions, although only westbound drivers paid a toll from 1986 to 2020 in an attempt to reduce traffic congestion. History Early plans Liberty Bridge A bridge across the Narrows had been proposed as early as 1926 or 1927, when structural engineer David B. Steinman brought up the possibility of such a crossing. At the time, Staten Island was isolated from the rest of New York City, and its only direct connection to the city's other four boroughs was by the Staten Island Ferry to South Ferry in Manhattan, or by ferries to 39th and 69th Streets in Brooklyn. In 1928, the chambers of commerce in Brooklyn, Queens, Long Island, and Staten Island announced that the Interboro Bridge Company had proposed the future construction of the "Liberty Bridge" to the United States Department of War. The bridge's towers would be 800 feet high, and it would cost $60 million in 1928 dollars. In November 1929, engineers released plans for the Liberty Bridge spanning the Narrows, with 800-foot-tall towers. Supporters hoped that the new construction would spur development on Staten Island, along with the Outerbridge Crossing and the Bayonne Bridge, which were under construction at the time. The Liberty Bridge would carry vehicles from Bay Ridge, Brooklyn, to an as-yet-undetermined location on Staten Island. On the Brooklyn side, the city planned to connect the Liberty Bridge to a "Crosstown Highway", spanning Brooklyn and Queens and connecting to the proposed Triborough Bridge in northwestern Queens.
The city also envisioned a possible connection to the preexisting Manhattan Bridge, connecting Downtown Brooklyn to Lower Manhattan. However, a vote on the planned Liberty Bridge was never taken, as it was blocked by then-Congressman Fiorello H. La Guardia, who believed that a public necessity should not be provided by private interests. 1920s tunnel plan A prior attempt to link Brooklyn and Staten Island, using the Staten Island Tunnel, had commenced in 1923 but was canceled two years later. That tunnel would have extended New York City Subway service from Brooklyn to Staten Island. This proposal was also revived with the announcement of the Liberty Bridge. One of the alternative proposals had the subway tunnel going from St. George, Staten Island, to Bay Ridge, Brooklyn, before continuing to Governors Island and then Lower Manhattan. Simultaneously, engineers proposed a set of vehicular tunnels from Fort Wadsworth, Staten Island, to 97th Street, Brooklyn. The tubes were being planned in conjunction with the Triborough Tunnel (the modern-day Queens Midtown Tunnel), which would connect Manhattan, Brooklyn, and Queens. The city appropriated $5 million for the tunnels in July 1929, and the Baltimore and Ohio Railroad also pledged funding for the vehicular tunnels. Planning for the vehicular tubes started that month. The Brooklyn Chamber of Commerce simultaneously considered all three projects: the bridge, the vehicular tunnels, and the subway tunnel. Community groups on both sides of the Narrows disagreed on which projects should be built first, if at all. Residents of Bay Ridge opposed any plans involving a bridge because its construction would almost certainly require the demolition of part of the neighborhood. Boring work for the vehicular tunnels started in November 1930. The twin tunnels, projected to be completed by 1937, were to connect Hylan Boulevard on Staten Island with 86th Street in Brooklyn. In January 1932, construction of these tunnels was put on hold indefinitely due to a lack of money. The construction work did not go beyond an examination of the shoreline on the Brooklyn side. Cancellation of bridge In February 1933, the U.S. House of Representatives approved a bill authorizing the construction of a suspension bridge across the Narrows. With this approval, the Interboro Bridge Company hoped to start constructing the bridge by the end of the year, thereby creating jobs for 80,000 workers. Structural engineer Othmar H. Ammann, who was building the Triborough Bridge, Midtown Tunnel, and Golden Gate Bridge at the time, showed interest in designing the proposed Narrows bridge, which would be the world's longest bridge if it were built. The city approved the construction of a rapid transit tunnel under the Narrows in December 1933. This tunnel was approved in conjunction with the proposed Brooklyn–Battery Tunnel connecting Red Hook with Lower Manhattan. In April 1934, the War Department announced its opposition to the Narrows Bridge's construction. The War Department's opposition was based on the fact that a bridge could create a blockage during wartime, a rationale it also gave for opposing a Brooklyn–Battery Bridge connecting Red Hook, Brooklyn, with Lower Manhattan. The Port Authority of New York and New Jersey did not have a public position regarding the Narrows Bridge plan, other than a request that it be allowed to operate the future bridge.
Following the War Department's announcement that it would oppose the Narrows bridge, private interests began studying the feasibility of a tunnel under the Narrows. 1930s tunnel plan In 1936, the plan for a Narrows crossing was brought up again when now-New York City mayor La Guardia gained authorization to petition the U.S. Congress for a bridge across the Narrows. Under the new plan, the proposed bridge would charge tolls for motorists, and its $50 million cost would be paid off using federal bonds. La Guardia preferred a tunnel instead, and so the next year he asked the New York City Tunnel Authority to review the feasibility of such a crossing. The New York City Planning Commission was amenable to constructing either a bridge or a tunnel across the Narrows, and in 1939, put forth a plan to expand New York City's highway system. In March of the same year, as a bill for the Battery Bridge was being passed, Staten Island state legislators added a last-minute amendment to the bill, providing for a Narrows bridge. The Narrows crossing was not included in the final version of the Planning Commission's plan, which was approved in 1941. In 1943, the New York City Board of Estimate allocated $50,000 toward a feasibility study of the tunnel. By this time, Bay Ridge residents opposed the tunnel plan as well, because they feared that the tunnel's construction would lower the quality of life in the neighborhood. After the war ended in 1945, the Planning Commission estimated that construction of the Narrows Tunnel would cost $73.5 million. However, by then, La Guardia had turned against the tunnel, saying that "it is not my time" to construct it. 1940s and 1950s bridge plan Initial proposal The cancellation of plans for the Narrows tunnel brought a resurgence of proposals for a bridge across the Narrows. In September 1947, Robert Moses, the chairman of the Triborough Bridge and Tunnel Authority (TBTA), announced that the city was going to ask the War Department for permission to build a bridge across the Narrows. Moses had previously created a feasibility study for a Narrows tunnel, finding that it would be much cheaper to build a bridge. Moses and mayor William O'Dwyer both supported the Narrows Bridge plan, which was still being referred to as "Liberty Bridge". The city submitted its request to the War Department in July 1948, and a commission composed of three United States Armed Forces branches was convened to solicit the public's opinions on the proposed span. U.S. Representative Donald Lawrence O'Toole, whose constituency included Bay Ridge, opposed the proposal for the bridge because he believed it would damage the character of Bay Ridge, and because the bridge might block the Narrows in case of a war. He cited a poll showing that for every Bay Ridge resident who supported the bridge's construction, 33 more were opposed. The U.S. military approved the bridge proposal in May 1949, over the vociferous opposition of Bay Ridge residents, on the condition that construction start within five years. By that time, plans for the span had been finalized, and the project only needed $78 million in financing to proceed. This financing was not set to be awarded until 1950, when the Battery Tunnel was completed. Preliminary plans showed the bridge as being high enough above mean high water for the RMS Queen Mary to pass under it. Moses and acting Port Authority Chairman Bayard F.
Pope were agreeable to letting either of their respective agencies construct and operate the proposed Narrows Bridge, as long as that agency could do so efficiently. In 1954, the two agencies started conducting a joint study on the logistics of building and operating the bridge. Because of restrictions by the TBTA's bondholders, construction could not begin until at least 1957. Frederick H. Zurmuhlen, the Commissioner of Public Works, estimated that the Narrows Bridge would cost $200 million in total. He encouraged the TBTA to start construction on the bridge as soon as possible in order to reduce congestion on the East River crossings to the north. Staten Islanders viewed the project cautiously, since the Narrows Bridge would provide a connection to the rest of the city but could also cause traffic congestion through the borough. Moses had only a positive view of the bridge's proposed effects on Staten Islanders, saying that it was vital for the borough's future. In May 1954, the Army's permit for starting construction on the Narrows Bridge lapsed; the Army granted a two-year extension for the start of construction. In a measure passed in March 1955, the city gained control over the approval process for several tasks related to the Narrows bridge's construction, including land acquisition. A little more than a month later, New York governor W. Averell Harriman signed a $600 million spending bill authorizing the construction of the Narrows Bridge; the construction of the Throgs Neck Bridge between Queens and the Bronx; and the addition of a second level to the George Washington Bridge between Manhattan and New Jersey. Later that year, it was announced that the Narrows Bridge would be part of an expansion to the Interstate Highway System. Although a study on the viability of adding transit service to the Narrows Bridge was commissioned in early 1956, Moses rejected the idea of adding subway tracks onto the new bridge, saying that it would be too costly. In April of that year, New Jersey governor Robert B. Meyner signed a bill that allowed the Port Authority to build the Narrows Bridge and lease it to the TBTA, which would operate the bridge. The TBTA would buy the bridge from the Port Authority in 1967 as part of the agreement. Finalization of plans On the Brooklyn side, the Narrows Bridge was originally supposed to connect to the Circumferential (Belt) Parkway, but in early 1957, Harriman vetoed a bill that stipulated that the main approach connect to the Belt Parkway. By May 1957, an updated location for the Brooklyn anchorage had been agreed on. The anchorage was now to be located at Fort Lafayette, an island coastal fortification built next to Fort Hamilton at the southern tip of Bay Ridge. Moses also proposed expanding Brooklyn's Gowanus Expressway and extending it to the Narrows Bridge by way of Seventh Avenue, which would require cutting through the middle of Bay Ridge. This proposal drew opposition from the community, which wanted the approach to follow the Belt Parkway along the Brooklyn shore. These opponents said that the Seventh Avenue alignment would displace over 1,500 families. In February 1958, the New York State Legislature approved a bill to change the Brooklyn approaches back to the Belt Parkway, a bill almost identical to the one Harriman had vetoed. However, the city approved the Seventh Avenue bridge approach in August 1958. The next month, mayor Robert F. Wagner Jr.
said that the city was committed to building a bridge across the Narrows, but was not committed to the construction of the Seventh Avenue approach. In response, Moses wrote to Wagner that any continuing delays would cause the bridge to be canceled. The bridge's cost had now risen to $320 million. After holding a hearing for concerned Bay Ridge residents, the Board of Estimate affirmed the Narrows Bridge plan in October 1958, without any objections. At the same time, it rejected plans for a tunnel under the Narrows, as well as a bridge or tunnel from Brooklyn directly to Jersey City, New Jersey. The Board was set to vote on the Seventh Avenue approach in mid-December, but the federal government stated that it would only agree to the bridge's construction if the Seventh Avenue approach had 12 lanes, with six on each level. The federal government was already paying for two highway improvements on both sides of the proposed bridge: the Clove Lakes Expressway (Staten Island Expressway) on Staten Island, and the Gowanus Expressway in Brooklyn. On December 31 of that year, the Board of Estimate voted to approve plans for the Seventh Avenue approach, having delayed that vote several times. The approval of the Seventh Avenue approach angered Bay Ridge residents, since the construction of the approach would displace 7,500 people. Opposition in Staten Island was far smaller. More than twice as many people were being displaced there, but Staten Island stood to benefit from a better connection to the rest of the city. Thus, the bridge's announcement was welcomed, and it sparked a rise in real-estate prices on the island. As the controversy progressed, Steinman brought up a competing proposal to build a bridge between Brooklyn and New Jersey directly. Nelson Rockefeller, the Republican candidate for governor of New York, initially supported Steinman's proposal to build a bridge to New Jersey, but Moses later persuaded Rockefeller to endorse the bridge to Staten Island. The State Legislature drafted a bill in an effort to change the Brooklyn approach's location to the Belt Parkway. However, now-governor Rockefeller vetoed the Belt Parkway bill, and in March 1959, the Board of Estimate officially condemned land along Seventh Avenue to make way for the Gowanus Expressway extension to the Narrows Bridge. The only tasks remaining before the start of construction were to finalize the design of the Narrows Bridge and to speed up the construction schedule to meet a 1964 deadline. In April 1959, the bridge was officially renamed after the Italian navigator Giovanni da Verrazzano. This sparked a controversy because the proposed bridge's name had only one "z" while the explorer's name had two. Construction Preparation Surveying work for the Verrazzano-Narrows Bridge began in January 1959. Construction officially began on August 14, 1959, with a groundbreaking ceremony at the Staten Island anchorage. Those in attendance included New Jersey governor Meyner, New York City mayor Wagner, and TBTA chairman Moses. Although Rockefeller had been invited to the event, neither he nor Assembly speaker Joseph F. Carlino showed up. In December 1959, the TBTA was put in charge of funding and building the bridge. To raise money for construction, Rockefeller signed a bill that removed the 4% ceiling on the interest rates for the securities that the TBTA was selling to pay for the bridge. The ceiling would be lifted until June 1965.
In essence, this meant that the TBTA could sell securities at much higher interest rates to raise the $320 million that was needed. Othmar Ammann was named as the senior partner for the project. Other notable figures involved in the project were chief engineer Milton Brumer; project engineers Herb Rothman and Frank L. Stahl; design engineer Leopold Just; safety engineer Alonzo Dickinson; and engineer of construction John West Kinney. Meanwhile, John "Hard Nose" Murphy supervised the construction of the span and its cables. Before starting actual work on the bridge, the TBTA destroyed the structures at the future locations of the anchorages. The agency acquired part of the land within Fort Hamilton, in return for paying for a $12 million renovation of the Army installation and giving up land in Dyker Beach Park. A 1,000-ton World War I monument on the Brooklyn side, within the path of the future Seventh Avenue approach, was placed atop rolling logs and shifted aside. The right-of-way for the Seventh Avenue approach was also being cleared, and despite initial opposition to the clearing work, all of the residents within the approach's path eventually acquiesced to moving elsewhere. To prevent contractors from delaying work on the expressways on either side of the bridge, Moses warned them of steep fines if the expressways were not completed by the time the bridge was finished. Progress An anchorage, containing large amounts of steel and concrete, was built on each side of the Narrows. Each anchorage contained sixty-six large holes for the cables. The bases of each anchorage are built on glacial sands well below ground level. Foundation work for the Verrazzano-Narrows Bridge was well underway by 1960, as visitors were able to see the anchorages. A concrete workers' strike in mid-1961 threatened the timely completion of the Staten Island anchorage, which had only been partially filled with concrete. This strike lasted several months and affected many projects across the city. As construction on the anchorages proceeded, two watertight caissons were sunk into the channel close to either shoreline, so that the bridge's suspension towers could be built atop each caisson. The base of each caisson consisted of sixty-six circular openings arranged in a six-by-eleven grid. Shafts of reinforced concrete would be built along the inner rim of each opening, and once each section of shaft reached above water level, cranes with clamshell buckets would dig the sand and mud inside each shaft before sinking the shafts deeper into the water. The Staten Island side's caisson necessitated the dredging of large quantities of sand and assorted muck; in March 1961, it became the first of the two caissons to be sunk. The Brooklyn side's caisson required even more work, since it was deeper, displacing more muck and requiring more concrete. Once the caissons were sunk completely, the shafts inside each caisson were filled with water, and the bases of the caissons were covered by a sheet of reinforced concrete. The process of constructing the anchorages and caissons took just over two years, and it was complete by the end of 1961. Two separate companies later constructed the modules that would make up the suspension towers. The Staten Island tower was built by Bethlehem Steel, and the Brooklyn tower was built by the Harris Structural Company.
The first piece of the towers, a 300-foot section on the Staten Island side, was lifted into place in October 1961, and that tower was topped out by September 1962. Construction of the Brooklyn tower started in April 1962. When the towers were fully erected, workers began the process of spinning the bridge's cables. The American Bridge Company was selected to construct the cables and deck. The cable-spinning process began in March 1963 and took six months, since the cables' wires had to be strung across the bridge 104,432 times. The main cables were hung on both sides of the span, and then suspender cables were hung from the bridge's main cables. The main cables were fully spun by August. In late 1963, builders started receiving the rectangular pieces that would make up the roadway deck. The components for the sixty 40-ton slabs were first created on an assembly line in Jersey City. These components were then combined at a steelworks in Bayonne, near the bridge site, and after the pieces of each slab were assembled, they were floated to the Narrows via barge. The pieces of the deck were then hung from the suspender cables. The first piece of the deck was lifted onto the bridge in October 1963. By early 1964, the span was nearly finished, and all that remained was to secure the various parts of the bridge. By this point, plans for new development on Staten Island were well underway, and tourists had come to observe the construction of the Verrazzano-Narrows Bridge. The bridge had been scheduled to open in 1965, but as a result of the faster-than-anticipated rate of progress, the TBTA decided to open the bridge in November 1964. In preparation for the Verrazzano-Narrows Bridge's opening, the TBTA fully repainted the structure. The construction process had employed an average of 1,200 workers a day for five years, excluding those who had worked on the approaches; around 10,000 individuals had worked on the bridge throughout that five-year period. Three men died during the construction of the bridge. The first fatality was 58-year-old Paul Bassett, who fell off the deck and struck a tower in August 1962. Irving Rubin, also 58 years old, died in July 1963, when he fell off the bridge approach. The third worker who died was 19-year-old Gerard McKee, who fell into the water in October 1963 after slipping off the catwalk. After McKee's death, workers participated in a five-day strike in December 1963. The strike resulted in temporary safety nets being installed underneath the deck; these nets had not been provided during the four years prior to the strike. The construction of the bridge was chronicled by the writer Gay Talese in his 1964 book The Bridge: The Building of the Verrazzano-Narrows Bridge. He also wrote several articles about the bridge's construction for The New York Times. The book contains several drawings by Lili Réthi and photographs by Bruce Davidson.
Several dignitaries, involving the mayor, the governor, and the borough presidents of Brooklyn and Staten Island, cut the gold ribbon. They then joined a motorcade to mark the official opening of the bridge. A 50-cent toll was charged to all motorists crossing the bridge. The Bridge's opening was celebrated across Staten Island. Moses did not invite any of the 12,000 workers to the opening, so they boycotted the event and instead attended a mass in memory of the three workers who died during construction. The opening was accompanied by the release of a commemorative postage stamp, which depicted a ship sailing underneath the new span. The Metropolitan Transportation Authority (MTA) created a bus route across the bridge to connect Victory Boulevard in Staten Island with the Bay Ridge–95th Street subway station in Brooklyn. This bus service initially saw low patronage, with only 6,000 daily passengers using the route. Five days after the -Narrows Bridge opened, the ferry from Staten Island to Bay Ridge, Brooklyn, stopped running, as it was now redundant to the new bridge. Within the first two months of the bridge's opening, 1.86million vehicles had used the new crossing, 10% more than originally projected, and this netted the TBTA almost $1million in toll revenue. The Goethals Bridge, which connected New Jersey to the Staten Island Expressway and the Bridge, saw its daily average use increase by 75%, or approximately 300,000 trips total, compared to before the Narrows Bridge opened. The Holland Tunnel from New Jersey to Manhattan, and the Staten Island Ferry from Staten Island to Manhattan, both saw decreased vehicle counts after the bridge opened. In summer 1965, Staten Island saw increased patronage at its beaches, facilitated by the opening of the new bridge. By the time of the bridge's first anniversary, 17million motorists had crossed the -Narrows Bridge, paying $9million in tolls. The bridge had seen 34% more trips than planners had projected. Conversely, 5.5million fewer passengers and 700,000 fewer vehicles rode the Staten Island Ferry to Manhattan. The Bridge was the last project designed by Ammann, who had designed many of the other major crossings into and within New York City. He died in 1965, the year after the bridge opened. The -Narrows Bridge was also the last great public works project in New York City overseen by Moses. The urban planner envisioned that the and Throgs Neck Bridges would be the final major bridges in New York City for the time being, since they would complete the city's expressway system. Late 1960s to 2000s Although the bridge was constructed with only one six-lane roadway, Ammann had provided extra trusses to support a potential second roadway underneath the main deck. These trusses, which were used to strengthen the bridge, were a design alteration that was added to many bridges in the aftermath of the Tacoma Narrows Bridge collapse in 1940. The Verrazzano-Narrows Bridge became so popular among motorists that in March 1969, the TBTA decided to erect the lower deck at a cost of $22million. The Verrazzano Bridge had not been expected to carry enough traffic to necessitate a second deck until 1978, but traffic patterns over the previous five years had demonstrated the need for extra capacity. By contrast, a lower deck on the George Washington Bridge, connecting New Jersey and Upper Manhattan, had not been built until 31 years after the bridge's 1931 opening. The new six-lane deck opened on June 28, 1969. 
Originally, the Verrazzano Bridge's Brooklyn end was also supposed to connect to the planned Cross-Brooklyn Expressway, New York State Route 878, and JFK Airport, but the Cross-Brooklyn Expressway project was canceled in 1969. On June 26, 1976, to celebrate the United States' 200th anniversary, workers placed a very large U.S. flag on the side of the Verrazzano Bridge. The flag was described in The New York Times as being the size of "a football field and a half" and billed as the world's largest flag; at the time, it was the largest U.S. flag ever made. The flag was designed to withstand high winds, but it ripped apart three days later. The flag had been stuck against the bridge's suspender cables, so any slight wind would have caused the cables to make tears in the flag. A second, even larger flag was created in 1980 for the July 4 celebration that year. The new flag was placed along a steel grid so that the suspender cables would not rip it apart. Architectural critic Ada Louise Huxtable derided the new flag as a "simple-minded, vainglorious proposal" and asked, "Does anyone really want to spend $850,000 to upstage the Statue of Liberty?" The TBTA stopped collecting tolls from Brooklyn-bound drivers on the Verrazzano Bridge in 1986 and doubled the toll for Staten Island-bound drivers. This was a result of a bill introduced by Guy V. Molinari, the U.S. representative for Staten Island, as part of an initiative to reduce the traffic that accumulated at the toll booths on Staten Island. The one-way toll was initially intended to be part of a six-month pilot program, but it resulted in permanent changes to traffic flows on the Verrazzano Bridge. The crossing saw more Brooklyn-bound traffic and less Staten Island-bound traffic as a result. This unidirectional collection remained in effect through 2020, when two-way tolls were restored. The TBTA spent $45 million in the 1990s on repairs to the deck, which had been damaged by chlorides from the seawater. As part of the project, the top of the deck was removed, and latex-modified concrete was poured in its place. The TBTA also allocated $30 million to repaint the bridge and approaches gray; this project began in 1998. Workers also repaired the bridge's anchorages and replaced the roadways atop the anchorages. The repairs to the anchorages and electrical systems were completed in 2002, and workers finished repainting the bridge the next year. In addition, a minor project to clean and repaint both of the decks (mostly the bottom deck) took place between 2004 and 2008. Beginning in 2008, all 262 of the mercury vapor fixtures in the bridge's necklace lighting were replaced with energy-efficient light-emitting diodes. This retrofit was completed in 2009, years before LED street lights were installed in the rest of the city. The Verrazzano-Narrows Bridge's name was originally spelled with one "z". The "Verrazano" name dates to 1960, when governor Rockefeller signed the bill authorizing the bridge's name as such. A campaign to formally change the bridge's name to the variant with two "z"s was started by college student Robert Nash in 2016, but an initial bill stalled in early 2018. The New York State Senate voted to change the name of the bridge in June 2018, and the name change was officially signed into law that October. 2010s to present In 2011, the city began a $1.5 billion construction project at the bridge. At the time, it was expected to take up to 25 years.
The first phase, which cost $235 million and lasted until 2017, involved constructing a seventh lane on the upper deck, to be used as a high-occupancy vehicle (HOV) lane. The old median barrier was demolished, and the old deck was replaced with orthotropic decking. Related work included repainting the bridge's supports with corrosion-resistant paint. The ramps within the Belt Parkway interchange were also rearranged to allow a ramp to be constructed for the new HOV lane on the upper deck. The parts for this deck were ordered from China because the parts that the MTA required were no longer manufactured in the United States. After the upper deck was replaced, parts of the lower deck are to be replaced as well, but this necessitates closing the lower deck during construction; hence, the MTA opted to replace the upper deck first to add more capacity. The upper level's new HOV lane opened on June 22, 2017. The MTA dismantled the Staten Island-bound toll booths in 2017 to speed up westbound traffic. This work was done in advance of the reconstruction of tracks around Penn Station, which severely limited rail service into that station and created more vehicular traffic at crossings to Manhattan. The MTA accelerated some components of the Verrazzano-Narrows Bridge's reconstruction during the COVID-19 pandemic in New York City in 2020, rebuilding the westbound approach ramp from the Gowanus Expressway and adding a fourth lane along that ramp. This also allowed for an extensive renovation of the towers' pedestals. Long-term plans also call for the installation of a bicycle and pedestrian path on the Verrazzano-Narrows Bridge. Description The Verrazzano-Narrows Bridge is owned by the Triborough Bridge and Tunnel Authority bondholders who paid for the bridge at its construction. It is operated by the TBTA, an affiliate agency of the MTA, under the business name MTA Bridges and Tunnels. The bridge carries Interstate 278, which continues onto the Staten Island Expressway to the west and the Gowanus Expressway to the northeast. The Verrazzano, in combination with the Goethals Bridge and the Staten Island Expressway, created a new way for commuters and travelers to reach Brooklyn, Long Island, and Manhattan by car from New Jersey. Deck At the time of its opening, the Verrazzano-Narrows Bridge was the longest suspension bridge in the world; its 4,260-foot center span, between the two suspension towers, was longer than the Golden Gate Bridge's center span. Despite being only slightly longer than the Golden Gate Bridge, the Verrazzano-Narrows Bridge could carry a 75% greater load. In 1981, the Verrazzano Bridge was surpassed as the world's longest suspension bridge by the Humber Bridge in England. The upper and lower levels are supported by trusses underneath each roadway, which stiffen the bridge against vertical, torsional, and lateral pressure. There are also lateral trusses on either side of the lower level. The anchorage on the Staten Island side contains a facility for heating cinders that are used to de-ice the bridge deck during winter. Because of thermal expansion of the steel cables, the upper roadway is lower in summer than in winter. The Narrows is the only entry point for large cruise ships and container ships that dock in New York City. As a result, they must be built to fit within the clearance under the bridge at mean high water. The RMS Queen Mary 2, one such vessel built to Verrazzano-Narrows Bridge specifications, was designed with a flatter funnel so that it could pass under the bridge at high tide. Towers and cables Each of the two suspension towers contains around 1 million bolts and 3 million rivets. The towers contain a combined amount of metal more than three times that used in the Empire State Building. Because of the height of the towers and their distance from each other, the curvature of the Earth's surface had to be taken into account when designing the bridge. The towers are not parallel to each other; they are slightly farther apart at their tops than at their bases.
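A back-of-the-envelope calculation shows the scale of this curvature effect. The Python sketch below is illustrative only: the 4,260-foot tower spacing comes from this article, while the tower height (roughly 690 feet) and the Earth's radius are outside assumptions, not figures stated here.

```python
# Rough estimate of how much farther apart the tower tops are than their bases.
# Each tower is plumb, i.e. it points toward the Earth's center, so the tops
# sit on a slightly larger circle than the bases.
span_ft = 4_260            # distance between the towers (from the article)
tower_height_ft = 690      # assumption: approximate tower height
earth_radius_ft = 20.9e6   # assumption: Earth's mean radius, ~3,959 miles

# Tops sit at radius R + h, so their separation scales by (R + h) / R,
# an excess of span * h / R.
extra_ft = span_ft * tower_height_ft / earth_radius_ft
print(f"extra separation at the tops: {extra_ft * 12:.1f} inches")
```

Under these assumptions the tops stand on the order of an inch and a half farther apart than the bases, which is why the towers are measurably, though imperceptibly, out of parallel.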
When built, the bridge's suspension towers were the tallest structures in New York City outside of Manhattan. The towers do not use cross-bracing, unlike similar suspension bridges; instead, there are arched struts near the top of each tower and below the lower deck. At the base of each tower is a concrete-and-granite pedestal, which rests on a caisson. Both towers stand offshore, the western tower off Staten Island and the eastern tower off Brooklyn. The Verrazzano-Narrows Bridge is classified as a fracture-critical bridge, making it vulnerable to collapse if parts of the offshore towers were to fail. The March 2024 collapse of Baltimore's Key Bridge raised awareness of and concern about other bridges nationwide, especially with ship traffic being diverted to other area ports. A few weeks after the Baltimore bridge collapse, in early April 2024, a large container ship had propulsion problems near the Verrazzano and was assisted by several tugboats. Authorities were confident that riprap, or piles of rocks, around the towers' bases would ground a stray ship before it could hit a tower. Each of the four main suspension cables is composed of 26,108 wires. Numerous birds nest or roost on the bridge, most notably breeding peregrine falcons. The falcons nest at the top of the Verrazzano-Narrows Bridge's towers, as well as on the Throgs Neck and Marine Parkway Bridges. As the falcons are endangered, the city places bands on each bird and examines the birds' nesting sites each year. The falcons were discovered on the top of the Verrazzano Bridge in 1983, though they had started breeding there several years earlier. Naming Tentative names During the planning stages, the bridge was originally named simply the "Narrows Bridge". The naming of the bridge for Verrazzano (with two "z"s) was controversial. It was first proposed in 1951 by the Italian Historical Society of America, when the bridge was in the planning stage. After Robert Moses turned down the initial proposal, the society undertook a public relations campaign to re-establish Giovanni da Verrazzano's largely forgotten reputation and to promote the idea of naming the bridge for him. The society's director, John N. LaCorte, successfully lobbied several governors of states along the U.S. East Coast to proclaim April 17, the anniversary of Verrazzano's arrival in the harbor, as Verrazzano Day. LaCorte then approached the TBTA again but was turned down a second time. The explorer's name had previously been suggested, in 1931, for the George Washington Bridge, located several miles north. The Italian Historical Society later successfully lobbied to get a bill introduced in the New York State Assembly to name the bridge for the explorer.
After the introduction of the bill, the Staten Island Chamber of Commerce joined the society in promoting the name. In April 1958, governor W. Averell Harriman announced that he would propose naming the Narrows Bridge after Verrazzano in honor of the explorer's voyage to New York Harbor in 1524. His successor, Nelson Rockefeller, put his support behind the one-"z" "Verrazano" spelling in April 1959, saying that it was the standard American way of spelling the explorer's name. According to Gay Talese, the one-"z" name was bolstered by the fact that it appeared on the bridge's first construction contracts in 1959; this incorrect spelling persisted in all subsequent references to the bridge. Although the "Verrazano" name had not yet been finalized, The New York Times noted that the Staten Island Ferry boat carrying dignitaries to the bridge's August 1959 groundbreaking ceremony was named the "Verrazzano". The Times further stated that Harriman and mayor Wagner had respectively proposed a "Verrazzano Bridge" and proclaimed a "Verrazzano Day". The Staten Island Chamber of Commerce opposed the Verrazzano name altogether, saying that the proper name of the bridge should be "Staten Island Bridge" because there was also a "Brooklyn Bridge", a "Manhattan Bridge", a "Queens Bridge", and a "Bronx Bridge". The Italian Historical Society was reportedly perplexed about the opposition to the "Verrazano" name. In response to the Staten Island Chamber of Commerce's opposition, the TBTA offered to add a hyphen between "Verrazano" and "Narrows". Official name with one "z" Rockefeller signed the "Verrazano" name into law in March 1960, officially changing the name of the Narrows Bridge to the "Verrazano-Narrows Bridge". The naming issue did not encounter further controversy until 1963, after the assassination of President John F. Kennedy, which prompted a series of suggestions to rename structures, monuments, and agencies across the United States after the late president. A petition to rename the Verrazano-Narrows Bridge for Kennedy received thousands of signatures. In response, LaCorte contacted the president's brother, United States Attorney General Robert F. Kennedy, who told LaCorte that he would ensure that the bridge kept the Verrazano name. Ultimately, the Verrazano-Narrows Bridge kept its name, while Idlewild Airport in Queens was renamed after Kennedy. In part due to discrimination against Italian-Americans, the bridge's official name was widely ignored by local news outlets at the time of the dedication. Some radio announcers and newspapers omitted any reference to Verrazzano, referring to the bridge as the Narrows Bridge or the Brooklyn–Staten Island Bridge. The society continued its lobbying efforts to promote the name in the following years until the name became firmly established. The bridge was also nicknamed the "Guinea Gangplank", an ethnic slur referring to the Italian-Americans who subsequently moved from Brooklyn to Staten Island. The Italian Historical Society's published references to the bridge's name all contained two "z"s. Bills to change the bridge's name In June 2016, St. Francis College student Robert Nash started a petition to spell Giovanni da Verrazzano's name on the bridge correctly, with two "z"s. The petition gained support from politicians including New York state senators Martin Golden and Andrew Lanza. In December 2016, Golden and Lanza sent letters to Metropolitan Transportation Authority CEO Thomas F. Prendergast, in which they recommended that the bridge's name be spelled correctly. An MTA spokesperson said the agency was reviewing the letter.
A bill to formally change the bridge's name passed the New York State Senate in 2017 before being defeated in the New York State Assembly in early 2018. In mid-2018, Golden sponsored a State Senate bill to change the bridge's spelling to "Verrazzano", with two "z"s. On June 6 of that year, the Senate unanimously passed a bill to change the spelling of the Verrazano-Narrows Bridge, sending the measure to the New York State Assembly and the office of New York governor Andrew Cuomo for approval. If the bill passed, the MTA would need to modify nearly a hundred road signs at a cost of $350,000. The Assembly passed the bill on June 21, sending the measure to Cuomo with a minor modification: as a cost-saving measure, existing signs would retain the one-"z" spelling, and only new signs would carry the double-"z" spelling. On October 1, 2018, Cuomo signed the bill into law, officially changing the legal spelling of the bridge's name to the "Verrazzano-Narrows Bridge". The first signs with the corrected spelling were installed in February 2020. Tolls Drivers pay $11.19 per car or $4.71 per motorcycle if using tolls by mail or an E-ZPass not issued by the New York E-ZPass Customer Service Center (CSC). E-ZPass users with transponders issued by the New York CSC pay $6.94 per car or $3.02 per motorcycle, while mid-tier New York CSC E-ZPass users pay $8.36 per car or $3.57 per motorcycle. The Staten Island Resident Rebate Program provides a discounted rate of $2.75 to registered residents of Staten Island who use E-ZPass. In the event that the Resident Rebate Program is discontinued, the effective toll for Staten Island residents with E-ZPass would be set at $3.68. Until April 2021, Staten Island residents could request an E-ZPass Flex; when three or more people were in a passenger vehicle, they could travel at a reduced rate of $1.70. Non-residents do not get the rebated or discounted rates. A bill that passed the New York State Senate in May 2019 would give the discounted rate to Brooklyn residents with E-ZPass who cross at least 10 times per month; the discount would only apply to non-commercial vehicles. Prior to the implementation of two-way tolling, the undiscounted tolls for passenger cars were higher than at most other tolled crossings in the U.S. The tolls from the Verrazzano-Narrows Bridge grossed the MTA $417 million in 2017, and more than 85 percent of motorists paid a discounted toll rate.
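The rate schedule above is effectively a small lookup keyed on payment category. The Python sketch below restates the published passenger-car rates as a function; the category labels are informal names chosen for this sketch, not official MTA terminology, and the figures will drift as toll schedules change.

```python
# Passenger-car tolls by payment category, per the schedule described above.
# Labels are informal names for this sketch, not official MTA terms.
CAR_TOLLS = {
    "tolls_by_mail": 11.19,          # also any non-New York CSC E-ZPass
    "ny_ezpass": 6.94,               # New York CSC-issued E-ZPass
    "mid_tier_ny_ezpass": 8.36,
    "staten_island_resident": 2.75,  # registered resident rebate rate
}

def car_toll(category: str) -> float:
    """Return the passenger-car toll for a payment category."""
    if category not in CAR_TOLLS:
        raise ValueError(f"unknown payment category: {category!r}")
    return CAR_TOLLS[category]

print(car_toll("ny_ezpass"))  # 6.94
```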
One-way toll An urban legend has it that tolls were to be abolished once the bridge's construction bonds were paid off, but this has been debunked by the Staten Island Advance. Originally, all drivers paid the same toll to cross the Verrazzano-Narrows Bridge. Staten Island residents were the only residents of New York City who had to pay a toll to enter their home borough, since all four of Staten Island's vehicular crossings collected tolls. This put Staten Island motorists at a financial disadvantage compared with drivers who lived in other boroughs. A bill to reduce the tolls for Staten Islanders was introduced in the New York City Council in 1975. Governor Mario Cuomo signed another law to give Staten Island residents discounted tolls in 1983, after years of petitioning and opposition from his two predecessors. From its opening until 1986, the toll was collected in both directions. In 1985, U.S. Representative Guy V. Molinari co-sponsored a bill that would require the MTA to collect the Verrazzano-Narrows Bridge's toll in the Staten Island-bound direction only. This came after Staten Island residents had complained about pollution from idling vehicles. In December of that year, the United States House of Representatives passed a bill that prohibited the MTA from collecting tolls from Brooklyn-bound vehicles, under penalty of a loss of highway funding. Accordingly, in March 1986, the MTA started a pilot program in which it charged a $3.50 toll for Staten Island-bound vehicles rather than a $1.75 toll in both directions. The pilot program was extended to six months, but it was controversial due to the dubious benefits involved. The new toll plan not only caused a drop in revenues, but also caused congestion in Manhattan and Brooklyn and air pollution in Manhattan. Canal Street in Lower Manhattan, which connected to the Holland Tunnel to New Jersey, saw the most severe congestion, as drivers would go through New Jersey and use the Bayonne Bridge to pay a cheaper toll to enter Staten Island. Fatal accidents involving pedestrians in Lower Manhattan also increased greatly as a result. In 1987, the MTA supported removing the one-way toll because it reduced MTA revenues by $7 million a year. At that point, Cuomo proposed reinstating an eastbound toll for trucks. In 1990, it was noted that about 455,000 more eastbound vehicles per year were using the bridge compared with before the toll reconfiguration, but that this was heavily outweighed by the 1.5 million fewer westbound vehicles per year. Residents of Manhattan and Brooklyn wanted the tolls changed so that either eastbound vehicles only, or both directions, would be tolled. In 2019, the United States House of Representatives voted in favor of a federal appropriations bill that would repeal the bridge's one-way-toll mandate and allow half of the then-current toll to be applied in both directions. At the time, the Verrazzano-Narrows Bridge was the only American bridge with a federal mandate controlling its toll collections. President Donald Trump signed the bill in December 2019, but the MTA had yet to determine at that time when it would enact two-way tolls. After the mandate was repealed, the MTA had to seek approval for split tolls from its board and install tolling gantries to support them. On December 1, 2020, two-way tolls were reinstated. Electronic tolling and tollbooth removals E-ZPass was introduced at the Verrazzano-Narrows Bridge in late 1995. Its introduction helped to reduce traffic congestion at the tollbooths; in March 1997, it was found that drivers with E-ZPass were able to pass through the westbound tollbooths within 30 seconds, compared to 15 minutes for drivers paying with tokens or cash. In February 1998, the MTA discontinued the sale of toll tokens at the Verrazzano-Narrows Bridge, except to Staten Island residents purchasing them in bulk. Despite not collecting tolls in the eastbound direction since 1986, the MTA did not do anything with the unused booths for a quarter-century. In 2010, eight of the eleven Brooklyn-bound toll booths were removed as part of the first phase of a project to improve traffic flow at the toll plaza; two years later, the last of the eastbound tollbooths was removed. Until 2020, tolls were still collected in the Staten Island-bound direction only, and congestion within Lower Manhattan persisted due to the bridge's one-way westbound toll.
Open-road cashless tolling began on July 8, 2017. The westbound tollbooths were dismantled, and drivers were no longer able to pay cash at the bridge. Instead, cameras and E-ZPass readers are mounted on new overhead gantries, manufactured by TransCore, near where the booths were located. A vehicle without E-ZPass has a picture taken of its license plate, and a bill for the toll is mailed to its owner; for E-ZPass users, sensors detect their transponders wirelessly. Bridge usage In 2015, an average of 202,523 vehicles used the Verrazzano-Narrows Bridge daily in both directions. The Verrazzano-Narrows Bridge carries more traffic than the Outerbridge Crossing, the Bayonne Bridge, and the Goethals Bridge combined; these three bridges, which connect Staten Island with New Jersey, were used by a combined 168,984 vehicles in both directions. In 2011, the advocacy group Transportation for America rated the Verrazzano-Narrows Bridge as New York's most dangerous bridge because of the combination of deterioration and the number of people who cross it per day. The MTA responded that the Verrazzano-Narrows Bridge, which was both the newest large bridge and the longest bridge in the state, was structurally sound, and that the bridge had passed its most recent inspection; the MTA attributed Transportation for America's results to a "misinterpretation of inspection records". Signs at both ends of the Verrazzano-Narrows Bridge forbid photography and videotaping while on the bridge. These signs were installed after the September 11, 2001, attacks, when the MTA started confiscating film from individuals who were caught filming MTA crossings; however, the ban had been in place long before the attacks in order to prevent people from taking close-up pictures of the bridge. Public transportation Three local bus routes operated by MTA Regional Bus Operations use the Verrazzano-Narrows Bridge: the S53 local route, the S79 Select Bus Service route, and the S93 limited-stop route. The bridge also carries 20 express bus routes, also operated by New York City Transit, that connect Staten Island with Manhattan: the SIM1, SIM1C, SIM2, SIM3, SIM3C, SIM4, SIM4C, SIM4X, SIM5, SIM6, SIM7, SIM9, SIM10, SIM15, SIM31, SIM32, SIM33, SIM33C, SIM34, and SIM35. As part of a proposed expansion of the New York City Subway, a subway line on the bridge was considered early in the planning process, but Moses rejected the plan, ostensibly over cost concerns. Other bridges proposed and built by Moses, including the Triborough Bridge, Henry Hudson Bridge, Bronx-Whitestone Bridge, and Throgs Neck Bridge, also lack provisions for subway tracks. According to biographer Robert Caro, Moses purposely excluded any provisions for mass transit on his bridges in order to promote private transportation. Pedestrian and bicycle access Lack of walkway or bikeway The Verrazzano-Narrows Bridge was not built with a pedestrian walkway. At the time, a walkway was seen as too expensive, and planners additionally presented its absence as a benefit that would help prevent suicide jumps. Non-motorized transportation occurs only when the bridge is closed to regular traffic, such as during the New York City Marathon and the Five Boro Bike Tour. In 1976, the Verrazzano-Narrows Bridge was designated as the starting point of the New York City Marathon; 1976 was the first year in the marathon's six-year history that its course went outside Manhattan. Since then, the marathon has started at the bridge's Staten Island end every year.
The bridge is entirely closed whenever the marathon takes place, and it is partially closed during the bike tour. Originally, the organizers of these events used the bridge free of charge; in the 2020s, the MTA began requesting that the events' organizers pay a fee to compensate for the loss of toll revenue, though it ultimately dropped its request that the marathon reimburse the agency. The lack of a walkway did not completely prevent suicides: by 1975, four people had died after jumping off the bridge. The number of suicides has increased over time, despite efforts at deterrence. A sign that says "Life Is Worth Living" is located on the Staten Island approach, and in 2008, the MTA installed six suicide hotlines on the bridge. In December 2019, the MTA began installing a prototype suicide barrier after a series of fatal jumps from the bridge; a permanent barrier was installed from 2021 to 2022. Proposals for pedestrian and bike access There have been calls for a walkway or bike lane on the Verrazzano-Narrows Bridge since its opening, when several people protested the lack of bike lanes at the opening ceremony. In 1977, as a temporary solution, the city modified three buses to fit 12 bikes and 20 passengers each, then operated these buses on a new "S7 Bridge" route. In 1993, the New York City Department of City Planning (DCP) called for a footpath across the bridge as part of its Greenway Plan for New York City. The next year, the city sought a $100,000 federal grant to fund a feasibility study of a Verrazzano Bridge pedestrian and bike path. In 1997, the DCP released its study, which found that two footpaths running between the suspender ropes along the upper level, separated for pedestrian and cyclist use, would cost a minimum of $26.5 million. The MTA at the time expressed concern about the "safety and liability inherent in any strategy that introduces pedestrian and bicycle access" to the bridge. Local residents on both sides of the bridge started advocating for the construction of a walkway or bikeway on the Verrazzano-Narrows Bridge in 2002. Dave Lutz, the director of the Neighborhood Open Space Coalition nonprofit, noted that after the September 11 attacks, Staten Islanders had walked home along the bridge's roadway. Mayor Michael Bloomberg promised to look into the possibility in October 2003. The Harbor Ring Committee was formed in 2011 to advocate for the completion of the Harbor Ring route, which would create a ring around New York Harbor, including a footpath across the Verrazzano. In spring 2013, the committee began an online petition that generated more than 2,500 signatures, as well as an organizational sign-on letter supported by 16 regional and local advocacy and planning organizations. That year, the MTA announced that it would conduct a three-year feasibility study of installing a pathway on the Verrazzano-Narrows Bridge. The MTA considered plans for a bike lane in 2015, during the reconstruction of the Verrazzano-Narrows Bridge; it estimated that a dedicated multiple-use pathway would cost $400 million, owing to the minimum width needed to accommodate a fire engine and the construction of entrance and exit ramps. The plan was ultimately rejected in March 2019 over safety concerns.
Technology
Bridges
null
287531
https://en.wikipedia.org/wiki/Gallstone
Gallstone
A gallstone is a stone formed within the gallbladder from precipitated bile components. The term cholelithiasis may refer to the presence of gallstones or to any disease caused by gallstones, and choledocholithiasis refers to the presence of migrated gallstones within bile ducts. Most people with gallstones (about 80%) are asymptomatic. However, when a gallstone obstructs the bile duct and causes acute cholestasis, a reflexive smooth muscle spasm often occurs, resulting in an intense cramp-like visceral pain in the right upper part of the abdomen known as a biliary colic (or "gallbladder attack"). This happens in 1–4% of those with gallstones each year. Complications of gallstones may include inflammation of the gallbladder (cholecystitis), inflammation of the pancreas (pancreatitis), obstructive jaundice, and infection of the bile ducts (cholangitis). Symptoms of these complications may include pain that lasts longer than five hours, fever, yellowish skin, vomiting, dark urine, and pale stools. Risk factors for gallstones include birth control pills, pregnancy, a family history of gallstones, obesity, diabetes, liver disease, and rapid weight loss. The bile components that form gallstones include cholesterol, bile salts, and bilirubin. Gallstones formed mainly from cholesterol are termed cholesterol stones, and those formed mainly from bilirubin are termed pigment stones. Gallstones may be suspected based on symptoms; the diagnosis is then typically confirmed by ultrasound, and complications may be detected using blood tests. The risk of gallstones may be decreased by maintaining a healthy weight with exercise and a healthy diet. If there are no symptoms, treatment is usually not needed. In those who are having gallbladder attacks, surgery to remove the gallbladder is typically recommended. This can be carried out either through several small incisions or through a single larger incision, usually under general anesthesia. In rare cases when surgery is not possible, medication can be used to dissolve the stones, or lithotripsy can be used to break them down. In developed countries, 10–15% of adults experience gallstones. Gallbladder and biliary-related diseases occurred in about 104 million people (1.6% of the global population) in 2013 and resulted in 106,000 deaths. Gallstones are more common among women than men and occur more commonly after the age of 40. Gallstones occur more frequently among certain ethnic groups than others; for example, 48% of Native Americans experience gallstones, whereas gallstone rates in many parts of Africa are as low as 3%. Once the gallbladder is removed, outcomes are generally positive. Definition Gallstone disease refers to the condition in which gallstones are present either in the gallbladder or in the common bile duct. The presence of stones in the gallbladder is referred to as cholelithiasis, from the Greek chole ('bile') + lithos ('stone') + -iasis ('process'). The presence of gallstones in the common bile duct is called choledocholithiasis, from the Greek choledochos ('bile-containing', from chole + dochos, 'duct') + lithos + -iasis. Choledocholithiasis is frequently associated with obstruction of the bile ducts, which can lead to cholangitis, from the Greek chole + angeion ('vessel') + -itis ('inflammation'), a serious infection of the bile ducts. Gallstones within the ampulla of Vater can obstruct the exocrine system of the pancreas and can result in pancreatitis. Signs and symptoms Gallstones, regardless of size or number, are often asymptomatic. These "silent stones" do not require treatment and can remain asymptomatic even years after they form.
A characteristic symptom of a gallstone attack is the presence of colic-like pain in the upper-right side of the abdomen, often accompanied by nausea and vomiting. Sometimes the pain may be referred to the tip of the scapula; in cholelithiasis this is called "Collin's sign". Pain from symptomatic gallstones may range from mild to severe and can steadily increase over a period lasting from 30 minutes to several hours. Other symptoms may include fever, as well as referred pain between the shoulder blades or below the right shoulder. If one or more gallstones block the bile ducts and cause bilirubin to leak into the bloodstream and surrounding tissue, jaundice and itching may also occur, and liver enzyme levels are likely to be raised. Gallbladder attacks often occur after eating a heavy meal and are most common in the evening or at night. Other complications In rare cases, gallstones that cause severe inflammation can erode through the gallbladder into adherent bowel, potentially causing an obstruction termed gallstone ileus. Other complications can include ascending cholangitis, which occurs when a bacterial infection causes purulent inflammation in the biliary tree and liver, and acute pancreatitis, caused by blockage of the bile ducts that prevents active enzymes from being secreted into the bowel, damaging the pancreas instead. Rarely, gallbladder cancer may occur as a complication. Risk factors Gallstone risk increases for females (especially before menopause) and for people near or above 40 years of age; the condition is more prevalent among people of European or American Indigenous descent than among other ethnicities. A lack of melatonin could contribute significantly to gallbladder stones, as melatonin inhibits cholesterol secretion from the gallbladder, enhances the conversion of cholesterol to bile, and is an antioxidant able to reduce oxidative stress to the gallbladder. Gilbert syndrome has been linked to an increased risk of gallstones. Researchers believe that gallstones may be caused by a combination of factors, including inherited body chemistry, body weight, gallbladder motility (movement), and a low-calorie diet. The absence of such risk factors does not, however, preclude the formation of gallstones. Nutritional factors that may increase the risk of gallstones include constipation; eating fewer meals per day; low intake of folate, magnesium, calcium, and vitamin C; low fluid consumption; and, at least for men, a high intake of carbohydrate, a high glycemic load, and a high glycemic index diet. Wine and whole-grain bread may decrease the risk of gallstones. Rapid weight loss increases the risk of gallstones, and the weight-loss drug orlistat is known to increase that risk. Cholecystokinin deficiency caused by celiac disease increases the risk of gallstone formation, especially when the diagnosis of celiac disease is delayed. Pigment gallstones are most commonly seen in the developing world. Risk factors for pigment stones include hemolytic anemias (such as from sickle-cell disease and hereditary spherocytosis), cirrhosis, and biliary tract infections. People with erythropoietic protoporphyria (EPP) are at increased risk of developing gallstones. Additionally, prolonged use of proton pump inhibitors has been shown to decrease gallbladder function, potentially leading to gallstone formation. Cholesterol-modifying medications can affect gallstone formation.
Statins inhibit cholesterol synthesis, and there is evidence that their use may decrease the risk of gallstones. Fibrates increase the cholesterol concentration in bile, and their use has been associated with an increased risk of gallstones. Bile acid malabsorption may also be a risk factor. Pathophysiology Cholesterol gallstones develop when bile contains too much cholesterol and not enough bile salts. Besides a high concentration of cholesterol, two other factors are important in causing gallstones. The first is how often and how well the gallbladder contracts; incomplete and infrequent emptying of the gallbladder may cause the bile to become overconcentrated and contribute to gallstone formation. This can be caused by high resistance to the flow of bile out of the gallbladder due to the complicated internal geometry of the cystic duct. The second factor is the presence of proteins in the liver and bile that either promote or inhibit cholesterol crystallization into gallstones. In addition, increased levels of the hormone estrogen, as a result of pregnancy or hormone therapy, or the use of combined (estrogen-containing) forms of hormonal contraception, may increase cholesterol levels in bile and decrease gallbladder motility, resulting in gallstone formation. Composition The composition of gallstones is affected by age, diet, and ethnicity. On the basis of their composition, gallstones can be divided into cholesterol stones, pigment stones, and mixed stones; an ideal classification system is yet to be defined. Cholesterol stones Cholesterol stones vary from light yellow to dark green, brown, or chalk white, and are oval, usually solitary, and between 2 and 3 cm long, each often having a tiny, dark, central spot. To be classified as such, they must be at least 80% cholesterol by weight (or 70%, according to the Japanese classification system). Between 35% and 90% of stones are cholesterol stones. Pigment stones Bilirubin ("pigment", "black pigment") stones are small, dark (often appearing black), and usually numerous. They are composed primarily of bilirubin (insoluble bilirubin pigment polymer) and calcium (calcium phosphate) salts that are found in bile. They contain less than 20% cholesterol (or 30%, according to the Japanese classification system). Between 2% and 30% of stones are bilirubin stones. Mixed stones Mixed (brown pigment) stones typically contain 20–80% cholesterol (or 30–70%, according to the Japanese classification system). Other common constituents are calcium carbonate, palmitate phosphate, bilirubin, and other bile pigments (calcium bilirubinate, calcium palmitate, and calcium stearate). Because of their calcium content, they are often radiographically visible. They typically arise secondary to infection of the biliary tract, which results in the release of β-glucuronidase (by injured hepatocytes and bacteria); this enzyme hydrolyzes bilirubin glucuronides and increases the amount of unconjugated bilirubin in bile. Between 4% and 20% of stones are mixed. Gallstones can vary in size and shape from as small as a grain of sand to as large as a golf ball, and the gallbladder may contain a single large stone or many smaller ones. Pseudoliths, sometimes referred to as sludge, are thick secretions that may be present within the gallbladder, either alone or in conjunction with fully formed gallstones. Diagnosis Diagnosis is typically confirmed by abdominal ultrasound. Other imaging techniques used are endoscopic retrograde cholangiopancreatography (ERCP) and magnetic resonance cholangiopancreatography (MRCP). Gallstone complications may be detected on blood tests.
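The composition cutoffs given above amount to a simple threshold rule. The following is a minimal sketch, assuming the weight fractions quoted in the text (with the alternative Japanese cutoffs); the function is illustrative only and not a clinical tool.

def classify_gallstone(cholesterol_fraction, japanese=False):
    """Classify a stone by its cholesterol weight fraction (0.0 to 1.0)."""
    hi = 0.70 if japanese else 0.80  # minimum fraction for a cholesterol stone
    lo = 0.30 if japanese else 0.20  # maximum fraction for a pigment stone
    if cholesterol_fraction >= hi:
        return "cholesterol stone"
    if cholesterol_fraction < lo:
        return "pigment stone"
    return "mixed stone"

print(classify_gallstone(0.85))  # cholesterol stone
print(classify_gallstone(0.10))  # pigment stone
print(classify_gallstone(0.50))  # mixed stone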
On abdominal ultrasound, sinking gallstones usually show posterior acoustic shadowing. With floating gallstones, reverberation echoes (the comet-tail artifact) are seen instead; this artifact is also seen in a clinical condition called adenomyomatosis. Another sign is the wall-echo-shadow (WES) triad (or double-arc shadow), which is also characteristic of gallstones. A positive Murphy's sign is a common finding on physical examination during a gallbladder attack. Prevention Maintaining a healthy weight by getting sufficient exercise and eating a healthy diet that is high in fiber may help prevent gallstone formation. Ursodeoxycholic acid (UDCA) appears to prevent the formation of gallstones during weight loss; a high-fat diet during weight loss also appears to prevent gallstones. Treatment Lithotripsy Extracorporeal shock wave lithotripsy is a non-invasive method of managing gallstones that uses high-energy sound waves to disintegrate them; it was first applied in January 1985. Side effects include biliary pancreatitis and liver haematoma. The term is derived from Greek words meaning 'breaking (or pulverizing) stones'. Surgical Cholecystectomy (gallbladder removal) has a 99% chance of eliminating the recurrence of cholelithiasis. The lack of a gallbladder has no negative consequences in most people; however, 10 to 15% of people develop postcholecystectomy syndrome, which may cause nausea, indigestion, diarrhea, and episodes of abdominal pain. There are two surgical options for cholecystectomy. Open cholecystectomy is performed via an abdominal incision (laparotomy) below the lower right ribs; recovery typically requires 3–5 days of hospitalization, with a return to a normal diet a week after release and to normal activity several weeks after release. Laparoscopic cholecystectomy, introduced in the 1980s, is performed via three to four small puncture holes for a camera and instruments; post-operative care typically includes a same-day release or a one-night hospital stay, followed by a few days of home rest and pain medication. Perforation of the gallbladder during surgery is not uncommon, having been reported in 10% to 40% of cases. Unretrieved gallstone spillage has been reported in 6% to 30% of cases, but gallstones that are not retrieved rarely cause complications (0.08–0.3%). Obstruction of the common bile duct with gallstones can sometimes be relieved by endoscopic retrograde sphincterotomy (ERS) following endoscopic retrograde cholangiopancreatography (ERCP). Surgery carries risks, and some people continue to experience symptoms (including pain) afterwards, for reasons that remain unclear. An alternative option is to adopt a "watch and wait" strategy before operating, to see whether symptoms resolve. A study that compared the two approaches for uncomplicated gallstones found that, after 18 months, both were associated with similar levels of pain; the watch-and-wait approach was also less costly (more than £1,000 less per patient). Medical The medications ursodeoxycholic acid (UDCA) and chenodeoxycholic acid (CDCA) have been used to dissolve gallstones. A 2013 meta-analysis concluded that UDCA or a higher dietary fat content appeared to prevent the formation of gallstones during weight loss. Medical therapy with oral bile acids has been used to treat small cholesterol stones, and larger cholesterol gallstones when surgery is either not possible or unwanted. CDCA treatment can cause diarrhea, mild reversible hepatic injury, and a small increase in the plasma cholesterol level.
UDCA may need to be taken for years. Use in alternative medicine Gallstones can be a valued by-product of animals butchered for meat because of their use as an antipyretic and antidote in the traditional medicine of some cultures, particularly traditional Chinese medicine. The most highly prized gallstones tend to be sourced from old dairy cows and are termed calculus bovis, or niu-huang ('yellow thing of cattle') in Chinese. Some slaughterhouses carefully scrutinize their workers to guard against gallstone theft.
Biology and health sciences
Specific diseases
Health
287542
https://en.wikipedia.org/wiki/Dentist
Dentist
A dentist, also known as a dental surgeon, is a health care professional who specializes in dentistry, the branch of medicine focused on the teeth, gums, and mouth. The dentist's supporting team aids in providing oral health services. The dental team includes dental assistants, dental hygienists, dental technicians, and sometimes dental therapists. History Middle Ages In China as well as France, the first people to perform dentistry were barbers. They have been categorized into two distinct groups: guild barbers and lay barbers. The first group, the Guild of Barbers, was created to distinguish more educated and qualified dental surgeons from lay barbers; guild barbers were trained to perform complex surgeries. The second group, the lay barbers, were qualified to perform regular hygienic services such as shaving and tooth extraction, as well as basic surgery. However, in 1400, France issued decrees prohibiting lay barbers from practicing all types of surgery. From 1530 to 1575, publications devoted entirely to dentistry appeared in Germany and France. Ambroise Paré, often known as the Father of Surgery, published his own work on the proper maintenance and treatment of teeth. Paré was a French barber surgeon who performed dental care for multiple French monarchs, and he is often credited with having raised the status of barber surgeons. Modern dentistry Pierre Fauchard of France is often referred to as the "father of modern dentistry" because in 1728 he was the first to publish a scientific textbook on the techniques and practices of dentistry. Over time, trained dentists immigrated from Europe to the Americas to practice dentistry, and by 1760, America had its own native-born practicing dentists. Newspapers were used at the time to advertise and promote dental services. In America, from 1768 to 1770, the first application of dentistry to verifying forensic cases was pioneered; this came to be called forensic dentistry. With the rise of dentists came new methods to improve the quality of dentistry, including the spinning wheel to rotate a drill and chairs made specifically for dental patients. In the 1840s, the world's first dental school and national dental organization were established. Along with the first dental school came the establishment of the Doctor of Dental Surgery degree, often referred to as a DDS degree. In response to the rise in new dentists as well as dentistry techniques, the first dental practice act was established to regulate dentistry. In the United States, the first dental practice act required dentists to pass each specific state medical board exam in order to practice dentistry in that particular state. However, because the act was rarely enforced, some dentists did not obey it. From 1846 to 1855, new dental techniques were invented, such as the use of ether anesthesia for surgery and the cohesive gold foil method, which enabled gold to be applied to a cavity. The American Dental Association was established in 1859 after a meeting of 26 dentists. Around 1867, the first university-affiliated dental school, Harvard Dental School, was established. Lucy Hobbs Taylor was the first woman to earn a dental degree. In the 1880s, tube toothpaste was created, replacing the original powder and liquid forms. New dental boards, such as the National Association of Dental Examiners, were created to establish standards and uniformity among dentists.
In 1887, the first dental laboratory was established; dental laboratories are used to create dentures and crowns that are specific to each patient. In 1895, the dental X-ray was discovered by a German physicist, Wilhelm Röntgen. In the 20th century, new dental techniques and technologies were invented, including porcelain crowns (1903), Novocain, a local anesthetic (1905), precision cast fillings (1907), nylon toothbrushes (1938), water fluoridation (1945), fluoride toothpaste (1950), air-driven dental tools (1957), lasers (1960), electric toothbrushes (1960), and home tooth-bleaching kits (1989). Inventions such as the air-driven dental tools ushered in a new era of high-speed dentistry. Responsibilities By nature of their general training, a licensed dentist can carry out most dental treatments, such as restorative work (dental restorations, crowns, bridges), orthodontics (braces), prosthodontics (dentures, crowns, bridges), endodontic (root canal) therapy, periodontal (gum) therapy, and oral surgery (extraction of teeth), as well as performing examinations, taking radiographs (X-rays), and making diagnoses. Additionally, dentists can engage in further oral surgery procedures such as dental implant placement. Dentists can also prescribe medications such as antibiotics, fluorides, painkillers, local anesthetics, sedatives/hypnotics, and any other medications that serve in the treatment of the various conditions that arise in the head and neck. All DDS and DMD degree holders are legally qualified to perform a number of more complex procedures, such as gingival grafts, bone grafting, sinus lifts, and implants, as well as a range of more invasive oral and maxillofacial surgery procedures, though many choose to pursue residencies or other post-doctoral education to augment their abilities. A few select procedures, such as the administration of general anesthesia, legally require postdoctoral training in the US. While many oral diseases are unique and self-limiting, poor conditions in the oral cavity can lead to poor general health, and vice versa; notably, there is a significant link between periodontal, cardiovascular, and endocrine diseases. Conditions in the oral cavity may also be indicative of other systemic diseases such as osteoporosis, diabetes, and AIDS, as well as various blood diseases, including malignancies and lymphoma. Several studies have suggested that dentists and dental students are at high risk of burnout, during which they experience exhaustion, become alienated from their work, and perform less efficiently. A systematic study identified risk factors associated with this condition, including the practitioner's young age, personality type, gender, education status, high job strain, working hours, and the burden of clinical degree requirements. The authors of this study concluded that intervention programs at an early stage, during the undergraduate level, may give practitioners a good strategy to prepare for and cope with the condition. Regulations Depending on the country, dentists are required to register with their national or local health board or regulator, and to carry professional indemnity insurance, in order to practice dentistry. In the UK, dentists are required to register with the General Dental Council; in Australia, with the Dental Board of Australia; in the United States, dentists are registered according to the individual state board.
The main role of a dental regulator is to protect the public by ensuring that only qualified dental practitioners are registered, handling any complaints or misconduct, and developing national guidelines and standards for dental practitioners to follow. List of specialties In many countries, after satisfactory completion of post-graduate training, dental specialists are required to join a specialist board or list in order to use the title "specialist". United States In the US, dental specialties are recognized by the American Dental Association (ADA) or the American Board of Dental Specialties (ABDS). Currently, the ADA lists twelve dental specialties, which are recognized by the National Commission on Recognition of Dental Specialties and Certifying Boards, while the ABDS recognizes four dental specialty boards. List of dental specialties under the ADA:
Dental anesthesiology – The study and administration of general anesthesia, sedation, local anesthesia, and advanced methods of pain control. Recognized by both the ADA and the ABDS.
Dental public health – The study of dental epidemiology and social health policies.
Endodontics – Root canal therapy and the study of diseases of the dental pulp.
Oral and maxillofacial pathology – The study, diagnosis, and sometimes the treatment of oral and maxillofacial diseases.
Oral and maxillofacial radiology – The study and radiologic interpretation of oral and maxillofacial diseases.
Oral and maxillofacial surgery – Extractions, implants, and maxillofacial surgery, including the correction of congenital facial deformities.
Oral surgery – A specialty recognized in Europe and Australia, devoted to surgery within the oral cavity: mainly the extraction of teeth, the exposure of teeth, the treatment of cystic lesions, and the treatment of patients with medically complicating factors.
Oral medicine – The discipline of dentistry concerned with the oral health care of medically complex patients, including the diagnosis and management of medical conditions that affect the oral and maxillofacial region. Recognized by both the ADA and the ABDS.
Orofacial pain – The specialty of dentistry that encompasses the diagnosis, management, and treatment of pain disorders of the jaw, mouth, face, and associated regions. Recognized by both the ADA and the ABDS.
Orthodontics and dentofacial orthopaedics – The straightening of teeth and modification of midface and mandibular growth.
Periodontics – The study and treatment of diseases of the gums (non-surgical and surgical), as well as the placement and maintenance of dental implants.
Pediatric dentistry (formerly pedodontics) – Dentistry for children. Teeth, bones, and jaws continually grow in children, and certain dental issues in children require specific attention.
Prosthodontics – Dentures, bridges, and dental implants (restoring/placing). Some prosthodontists further their training in "oral and maxillofacial prosthodontics", the discipline concerned with the replacement of missing facial structures, such as ears, eyes, and noses.
List of dental specialties under the ABDS:
Oral implantology/implant dentistry
Oral medicine
Orofacial pain
Dental anesthesiology
Specialists in these fields are designated "registrable" (in the United States, "board eligible") and warrant exclusive titles such as dentist anesthesiologist, orthodontist, oral and maxillofacial surgeon, endodontist, pediatric dentist, periodontist, or prosthodontist upon satisfying certain local accreditation requirements (in the U.S., "board certified"). United Kingdom In the UK, the specialties are recognized by the General Dental Council (GDC). Currently, the GDC lists 13 dental specialties:
Dental and maxillofacial radiology – This specialty includes any medical imaging used to supplement investigations with relevant information about the anatomy, function, and health of the teeth, jaws, and surrounding structures.
Dental public health – This is a non-clinical specialty that assesses dental health needs and explores the ways in which they can be met.
Endodontics – This specialty covers the aetiology, diagnosis, treatment options, and prevention of disease affecting the nerve tissue found inside a tooth, the roots, and the surrounding tissues.
Oral and maxillofacial pathology – This is a clinical specialty undertaken by laboratory-based personnel. It assesses the changes in the tissues of the oral cavity, jaws, and salivary glands that are characteristic of disease, to aid in reaching a diagnosis.
Restorative dentistry – This is based on three monospecialties: endodontics, periodontics, and prosthodontics. Periodontists are dentists who specialize in preventing, diagnosing, and treating gum disease, while prosthodontists deal with missing teeth.
Oral medicine – This specialty deals with the diagnosis and non-surgical management of patients with disorders of the oral and maxillofacial region.
Oral microbiology – This clinical specialty involves diagnosing, reporting, and interpreting microbiological samples taken from the mouth.
Oral surgery – This clinical specialty manages abnormalities of the jaw and mouth that require surgery.
Orthodontics – This clinical specialty deals with correcting irregularities of the teeth, jaw, and bite.
Paediatric dentistry – This clinical specialty provides comprehensive oral health care for children, from infancy to adolescence, including children with mental or physical impairments.
Periodontics – This clinical specialty involves the diagnosis and treatment of diseases of the gums.
Prosthodontics – This clinical specialty deals with replacing missing teeth using fixed or removable prostheses such as implants, bridges, and dentures.
Special needs dentistry – Practitioners of this clinical specialty are trained to improve and manage the oral health of adults with disabilities, including physical, mental, medical, social, emotional, and learning impairments.
European Union European Union legislation recognizes two dental specialties: oral and maxillofacial surgery (for which a degree in both dentistry and medicine is compulsory) and orthodontics.
Biology and health sciences
Health professionals
Health
287667
https://en.wikipedia.org/wiki/Latrodectus
Latrodectus
Latrodectus is a broadly distributed genus of spiders with several species that are commonly known as the true widows. This group is composed of those often loosely called black widow spiders, brown widow spiders, and similar spiders, though the diversity of species is much greater. A member of the family Theridiidae, this genus contains 34 species, which include several North American "black widows" (the southern black widow Latrodectus mactans, western black widow Latrodectus hesperus, and northern black widow Latrodectus variolus). Besides these, North America also has the red widow Latrodectus bishopi and the brown widow Latrodectus geometricus, which, in addition to North America, has a much wider geographic distribution. Elsewhere, others include the European black widow (Latrodectus tredecimguttatus), the Australian redback spider (Latrodectus hasseltii), the closely related New Zealand katipō (Latrodectus katipo), several species in Southern Africa that can be called button spiders, and the South American black widow spiders (Latrodectus corallinus and Latrodectus curacaviensis). Species vary widely in size. In most cases, the females are dark-coloured and can be readily identified by reddish markings on the central underside (ventral) abdomen, which are often hourglass-shaped. These small spiders have an unusually potent venom containing the neurotoxin latrotoxin, which causes the condition latrodectism, both named after the genus. Female widow spiders have unusually large venom glands, and their bite can be particularly harmful to large vertebrates, including humans. However, despite their notoriety, Latrodectus bites rarely cause death or produce serious complications, and only the bites of the females are dangerous to humans. Description Female widow spiders are typically dark brown or a shiny black in colour when they are full grown, usually exhibiting a red or orange hourglass on the ventral surface (underside) of the abdomen; some may have a pair of red spots or no marking at all. Male widow spiders often exhibit various red, or red and white, markings on the dorsal surface (upper side) of the abdomen, ranging from a single stripe to bars or spots, and juveniles are often similar in pattern to the males. Females of a few species are paler brown, and some have no bright markings. Adult females are considerably larger than males. Behaviour The prevalence of sexual cannibalism, a behaviour in which the female eats the male after mating, has inspired the common name "widow spiders". This behaviour may promote the survival odds of the offspring; however, females of some species only rarely show this behaviour, and much of the documented evidence for sexual cannibalism has been observed in laboratory cages from which the males could not escape. Male black widow spiders tend to select their mates by determining whether the female has already eaten, to avoid being eaten themselves; they can tell whether the female has fed by sensing chemicals in the web. Latrodectus hesperus is referred to as an "opportunistic cannibal", because in dire situations it will resort to cannibalism; in addition to sexual cannibalism, the species is also known to engage in sibling cannibalism. Like other members of the Theridiidae, widow spiders construct a web of irregular, tangled, sticky silken fibres.
Black widow spiders prefer to nest near the ground in dark and undisturbed areas, usually in small holes produced by animals or around construction openings or woodpiles; indoor nests are in dark, undisturbed places such as under desks or furniture, or in a basement. The spider frequently hangs upside down near the centre of its web and waits for insects to blunder in and get stuck. Then, before the insect can extricate itself, the spider rushes over to envenomate it and wrap it in silk. To feed, the spider pulses digestive juices from its mouth over the prey; the liquefied tissue is then drawn in by capillary action as the spider sucks the slurry into its mouth. Their prey consists of small insects such as flies, mosquitoes, grasshoppers, beetles, and caterpillars. If the spider perceives a threat, it quickly lets itself down to the ground on a safety line of silk. As with other web-weavers, these spiders have very poor eyesight and depend on vibrations reaching them through their webs to find trapped prey or to warn them of larger threats. When a widow spider is trapped, it is unlikely to bite, preferring to play dead or flick silk at the potential threat; bites occur when the spider cannot escape. Many injuries to humans are due to defensive bites delivered when a spider is unintentionally squeezed or pinched. The blue mud dauber (Chalybion californicum) is a wasp that, in western North America, is the primary predator of black widow spiders. The ultimate tensile strength and other physical properties of Latrodectus hesperus (western black widow) silk are similar to those of silk from the orb-weaving spiders tested in other studies. The tensile strength for the three kinds of silk measured in the Blackledge study was about 1,000 MPa; the ultimate strength reported in a previous study for Trichonephila edulis was 1,290 ± 160 MPa. The tensile strength of spider silk is comparable to that of steel wire of the same thickness; however, as the density of steel is about six times that of silk, silk is correspondingly stronger than steel wire of the same weight. Spiders of the genus Steatoda (also of the Theridiidae) are often mistaken for widow spiders and are known as "false widow spiders"; while their bite can be painful, they are significantly less harmful to humans. Taxonomy The genus Latrodectus was erected by Charles Athanase Walckenaer in 1805 for the species Latrodectus tredecimguttatus and Latrodectus mactans. Arachnologist Herbert Walter Levi revised the genus in 1959, studying the female sexual organs and noting their similarity across described species. He concluded that the colour variations seen across the world were not sufficient to warrant species status, and reclassified the redback and several other species as subspecies of the black widow spider. Levi also noted that study of the genus had been contentious; in 1902, both F. O. Pickard-Cambridge and Friedrich Dahl had revised the genus, each criticising the other. Cambridge questioned Dahl's separation of species on what he considered minor anatomical details, and the latter dismissed the former as an "ignoramus".
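The strength-to-weight comparison in the silk paragraph above can be made concrete with a short calculation. In this sketch, the 1,000 MPa figure comes from the text, while the two densities are typical handbook values assumed for illustration rather than numbers from the cited studies.

# Specific strength = tensile strength / density. The roughly 6x
# steel-to-silk density ratio mentioned in the text drives the result.
silk_strength_mpa = 1000.0   # L. hesperus silk (Blackledge study, from the text)
steel_strength_mpa = 1000.0  # steel wire of comparable tensile strength
silk_density = 1.3           # g/cm^3, assumed typical for spider silk
steel_density = 7.8          # g/cm^3, assumed typical for steel

silk_specific = silk_strength_mpa / silk_density     # ~769 MPa per (g/cm^3)
steel_specific = steel_strength_mpa / steel_density  # ~128 MPa per (g/cm^3)
print(round(silk_specific / steel_specific, 1))      # ~6.0x stronger per unit weight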
Species The World Spider Catalog accepted the following species:
Latrodectus antheratus (Badcock, 1932) – Paraguay, Argentina
Latrodectus apicalis Butler, 1877 – Galapagos Islands
Latrodectus bishopi Kaston, 1938 – USA
Latrodectus cinctus Blackwall, 1865 – Cape Verde Is., Africa, Kuwait, Iran
Latrodectus corallinus Abalos, 1980 – Argentina
Latrodectus curacaviensis (Müller, 1776) – Lesser Antilles, South America
Latrodectus dahli Levi, 1959 – North Africa, Cyprus, Turkey, Azerbaijan, Kazakhstan, Middle East, Iran, Central Asia
Latrodectus diaguita Carcavallo, 1960 – Argentina
Latrodectus elegans Thorell, 1898 – India, Myanmar, Thailand, China, Japan
Latrodectus erythromelas Schmidt & Klaas, 1991 – India, Sri Lanka
Latrodectus garbae Rueda & Realpe, 2021 – Colombia
Latrodectus geometricus C. L. Koch, 1841 – Africa. Introduced to both Americas, Poland, Middle East, Pakistan, India, Thailand, Japan, China, Papua New Guinea, Australia, Hawaii
Latrodectus hasselti Thorell, 1870 – Southeast Asia to Australia. Introduced to Iran, Pakistan, India, Japan, New Zealand
Latrodectus hesperus Chamberlin & Ivie, 1935 – Canada, USA, Mexico. Introduced to Israel, Korea
Latrodectus hurtadoi Rueda & Realpe, 2021 – Colombia
Latrodectus hystrix Simon, 1890 – Yemen (mainland, Socotra)
Latrodectus indistinctus O. Pickard-Cambridge, 1904 – Namibia, South Africa
Latrodectus karrooensis Smithers, 1944 – South Africa
Latrodectus katipo Powell, 1871 – New Zealand
Latrodectus lilianae Melic, 2000 – Spain, Algeria?
Latrodectus mactans (Fabricius, 1775) – Probably native to North America only. Introduced to South America, Asia
Latrodectus menavodi Vinson, 1863 – Madagascar, Comoros, Seychelles (Aldabra)
Latrodectus mirabilis (Holmberg, 1876) – Argentina
Latrodectus obscurior Dahl, 1902 – Cape Verde Is., Madagascar
Latrodectus occidentalis Valdez-Mondragón, 2023 – Mexico
Latrodectus pallidus O. Pickard-Cambridge, 1872 – Cape Verde Is. to Libya, Turkey, Kazakhstan, Iran, Central Asia
Latrodectus quartus Abalos, 1980 – Argentina
Latrodectus renivulvatus Dahl, 1902 – Africa, Yemen, Saudi Arabia, Iraq
Latrodectus revivensis Shulov, 1948 – Israel, Iran, possibly the Canary Islands
Latrodectus rhodesiensis Mackay, 1972 – Southern Africa
Latrodectus thoracicus Nicolet, 1849 – Chile
Latrodectus tredecimguttatus (Rossi, 1790) (type) – Mediterranean, Ukraine, Caucasus, Russia (Europe to South Siberia), Kazakhstan, Iran, Central Asia, China
Latrodectus umbukwane B. M. O. G. Wright, C. D. Wright, Lyle & Engelbrecht, 2019 – South Africa
Latrodectus variegatus Nicolet, 1849 – Chile, Argentina
Latrodectus variolus Walckenaer, 1837 – USA, Canada
Nomina dubia
L. dotatus C. L. Koch, 1841
L. limacidus Cantor, 1842
L. pallidus Caporiacco, 1933
Distribution Widow spiders are found on every continent except Antarctica. In North America, the black widows commonly known as southern (Latrodectus mactans), western (Latrodectus hesperus), and northern (Latrodectus variolus) are found in the United States, as well as in western Mexico (Latrodectus occidentalis) and parts of southern Canada (particularly the Okanagan Valley of British Columbia), as are the "grey" or "brown widow spiders" (Latrodectus geometricus) and the "red widow spiders" (Latrodectus bishopi). The most prevalent species occurring in eastern Asia and Australia is commonly called the redback (Latrodectus hasselti). Widow spiders are often confused with spiders in the genus Steatoda, known as false widow spiders, due to their similar appearance.
Venom Due to the presence of latrotoxin in their venom, black widow bites are potentially dangerous and may result in systemic effects (latrodectism), including severe muscle pain, abdominal cramps, diaphoresis, tachycardia, and muscle spasms. Symptoms usually last for 3–7 days but may persist for several weeks. In 1933, Allan Blair, a member of the University of Alabama medical faculty, conducted an experiment on himself to document the symptoms of a black widow bite and to test whether a person could build immunity after being bitten. The effects of the bite were so painful and harsh that Blair failed to complete the experiment and did not follow through with being bitten a second time. In the United States each year, about 2,500 people report being bitten by a black widow, but most do not need medical treatment. Some bites inject no venom at all (a "dry" bite). In the United States, no deaths due to black widows have been reported to the American Association of Poison Control Centers since 1983. Black widows are not especially aggressive spiders, and they rarely bite humans unless startled or otherwise threatened. Contrary to popular belief, most people who are bitten suffer no serious damage, let alone death. Fatal bites were reported in the early 20th century, mostly from Latrodectus tredecimguttatus, the Mediterranean black widow. Since the venom is not usually life-threatening, antivenom has been used for pain relief rather than to save lives. However, one study demonstrated that standardized pain medication, combined with either antivenom or a placebo, produced similar improvements in pain and resolution of symptoms.
Biology and health sciences
Spiders
Animals
287752
https://en.wikipedia.org/wiki/Mattock
Mattock
A mattock is a hand tool used for digging, prying, and chopping. Similar to the pickaxe, it has a long handle and a stout head that combines either a vertical axe blade with a horizontal adze (cutter mattock), or a pick and an adze (pick mattock). A cutter mattock is similar to a Pulaski, a tool used in fighting fires, and is also commonly known in North America as a "grub axe". Description A mattock has a shaft, typically made of wood. The head consists of two ends, opposite each other and separated by a central eye. The form of the head determines the kind and uses of the mattock: a cutter mattock combines the functions of an axe and adze, with its axe blade oriented vertically and its longer adze blade horizontally, while a pick mattock combines the functions of a pick and adze, with a pointed end opposite an adze blade. Both are used for grubbing in hard soils and rocky terrain; the pick mattock has the advantage of a superior penetrating tool over the cutter mattock, which excels at cutting roots. Uses Mattocks are "the most versatile of hand-planting tools". They can be used to chop into the ground with the adze and pull the soil towards the user, opening a slit to plant into. They can also be used to dig holes for planting, and they are particularly useful where there is a thick layer of matted sod. The use of a mattock can be tiring because of the effort needed to drive the blade into the ground and the amount of bending and stooping involved. The adze of a mattock is useful for digging or hoeing, especially in hard soil. Cutter mattocks are used in rural Africa for removing stumps from fields, including unwanted banana suckers. History As a simple but effective tool, the mattock has a long history. Its shape was already established by the Bronze Age in Asia Minor and ancient Greece. According to Sumerian mythology, the mattock was invented by the god Enlil. Mattocks are the most commonly depicted tool in Byzantine manuscripts of Hesiod's Works and Days. Mattocks made from antlers first appear in the British Isles in the Late Mesolithic. They were probably used chiefly for digging, and may have been related to the rise of agriculture. Mattocks made of whalebone were used for tasks including flensing (stripping blubber from the carcass of a whale) by the broch people of Scotland and by the Inuit. Etymology The word mattock is of unclear origin; one theory traces it to a Proto-Germanic form, ultimately from Proto-Indo-European. There are no clear cognates in other Germanic languages, and similar words in various Celtic languages are borrowings from the English. However, cognates have been proposed in Old High German and Middle High German, and more speculatively in Balto-Slavic languages (Old Church Slavonic and Lithuanian) and even Sanskrit. It may be cognate to, or derived from, an unattested Vulgar Latin word meaning 'club' or 'cudgel'. The New English Dictionary of 1906 interpreted mattock as a diminutive, but there is no root to derive it from and no semantic reason for a diminutive formation. Forms such as mathooke, motthook, and mathook were produced by folk etymology. Although the mattock was used to prepare whale blubber, which the Inuit call "mattaq", no connection between the words is known. While the noun mattock is attested from Old English onwards, the transitive verb "to mattock" or "to mattock up" first appeared in the mid-17th century.
Technology
Agricultural tools
null
288196
https://en.wikipedia.org/wiki/Sarcoma
Sarcoma
A sarcoma is a malignant tumor, a type of cancer that arises from cells of mesenchymal (connective tissue) origin. Connective tissue is a broad term that includes bone, cartilage, muscle, fat, vascular, and other structural tissues, and sarcomas can arise in any of these tissues. As a result, there are many subtypes of sarcoma, which are classified based on the specific tissue and type of cell from which the tumor originates. Sarcomas are primary connective tissue tumors, meaning that they arise in connective tissues; this is in contrast to secondary (or "metastatic") connective tissue tumors, which occur when a cancer from elsewhere in the body (such as the lungs, breast tissue, or prostate) spreads to the connective tissue. Sarcomas are one of five broad types of cancer classified by the cell type from which they originate. The word sarcoma is derived from the Greek sárkōma, 'fleshy excrescence or substance', itself from σάρξ (sárx), meaning 'flesh'. Classification Sarcomas are typically divided into two major groups, bone sarcomas and soft-tissue sarcomas, each of which has multiple subtypes. In the United States, the American Joint Committee on Cancer (AJCC) publishes guidelines that classify the subtypes of sarcoma. These subtypes are as follows:
Subtypes of bone sarcoma
Osteosarcoma
Chondrosarcoma
Poorly differentiated round/spindle cell tumors (includes Ewing sarcoma)
Hemangioendothelioma
Angiosarcoma
Fibrosarcoma/myofibrosarcoma
Chordoma
Adamantinoma
Other: liposarcoma, leiomyosarcoma, malignant peripheral nerve sheath tumor, rhabdomyosarcoma, synovial sarcoma, and malignant solitary fibrous tumor
Subtypes of soft-tissue sarcoma
Liposarcoma (includes the following varieties: atypical lipomatous tumor/well-differentiated liposarcoma, dedifferentiated liposarcoma, myxoid liposarcoma, pleomorphic liposarcoma, and myxoid pleomorphic liposarcoma)
Atypical lipomatous tumor
Dermatofibrosarcoma protuberans (includes pigmented varieties)
Dermatofibrosarcoma protuberans, fibrosarcomatous
Giant cell fibroblastoma
Malignant solitary fibrous tumor
Inflammatory myofibroblastic tumor
Low-grade myofibroblastic sarcoma
Fibrosarcoma (includes adult and sclerosing epithelioid varieties)
Myxofibrosarcoma (formerly myxoid malignant fibrous histiocytoma)
Low-grade fibromyxoid sarcoma
Giant cell tumor of soft tissues
Leiomyosarcoma
Malignant glomus tumor
Rhabdomyosarcoma (includes the following varieties: embryonal, alveolar, pleomorphic, and spindle cell/sclerosing)
Hemangioendothelioma (includes the following varieties: retiform, pseudomyogenic, and epithelioid)
Angiosarcoma of soft tissue
Extraskeletal osteosarcoma
Gastrointestinal stromal tumor, malignant (GIST)
Malignant peripheral nerve sheath tumor (includes epithelioid variety)
Malignant Triton tumor
Malignant granular cell tumor
Malignant ossifying fibromyxoid tumor
Stromal sarcoma not otherwise specified
Myoepithelial carcinoma
Malignant phosphaturic mesenchymal tumor
Skin sarcomas
Synovial sarcoma (includes the following varieties: spindle cell, biphasic, and not otherwise specified)
Epithelioid sarcoma
Alveolar soft part sarcoma
Clear cell sarcoma of soft tissue
Extraskeletal myxoid chondrosarcoma
Extraskeletal Ewing sarcoma
Interdigitating dendritic cell sarcoma
Desmoplastic small round cell tumor
Extrarenal rhabdoid tumor
Perivascular epithelioid cell tumor, not otherwise specified
Intimal sarcoma
Undifferentiated spindle cell sarcoma
Undifferentiated pleomorphic sarcoma
Undifferentiated round cell sarcoma
Undifferentiated epithelioid sarcoma
Undifferentiated sarcoma, not otherwise specified
Signs and symptoms Symptoms of bone sarcomas typically include bone pain, especially at night, and swelling around the site of the tumor. Symptoms of soft-tissue sarcomas vary, but they often present as firm, painless lumps or nodules. Gastrointestinal stromal tumors (GISTs, a subtype of soft-tissue sarcoma) are often asymptomatic, but can be associated with vague complaints of abdominal pain, bleeding into the intestines, a feeling of fullness, or other signs of intestinal obstruction. Cause Causes and risk factors The cause of most bone sarcomas is not known, but several factors are associated with an increased risk of developing bone sarcoma. Previous exposure to ionizing radiation (such as prior radiation therapy) is one such risk factor; therapeutic radiation is associated with sarcoma 10 to 20 years later. Exposure to alkylating agents, such as those found in certain cancer chemotherapy medicines, also increases the risk of bone sarcoma. Certain inherited genetic syndromes, including Li-Fraumeni syndrome, inherited RB1 gene mutations, and Paget's disease of bone, are associated with an increased risk of developing bone sarcomas. Most soft-tissue sarcomas arise from what doctors call "sporadic" (or random) genetic mutations within an affected person's cells. Nevertheless, certain risk factors are associated with an increased risk of developing soft-tissue sarcoma. Previous exposure to ionizing radiation is one such risk factor. Exposure to vinyl chloride (such as the fumes encountered in the production of polyvinyl chloride (PVC)), arsenic, and Thorotrast are all associated with an increased risk of angiosarcoma. Lymphedema, such as that resulting from certain types of breast cancer treatment, is also a risk factor for the development of angiosarcoma. As with bone sarcomas, certain inherited genetic syndromes are also associated with an increased risk of developing soft-tissue sarcoma, including Li-Fraumeni syndrome, familial adenomatous polyposis, neurofibromatosis type 1, and heritable RB1 gene mutations. Kaposi sarcoma is caused by the Kaposi sarcoma-associated herpesvirus (HHV-8). Mechanisms The mechanisms by which healthy cells transform into cancer cells are described in detail elsewhere (see the main articles on cancer and carcinogenesis). The precise molecular changes that result in sarcoma are not always known, but certain types of sarcomas are associated with particular genetic mutations. Examples include: Most cases of Ewing sarcoma are associated with a chromosomal translocation in which part of chromosome 11 fuses with part of chromosome 22. This results in the EWSR1 gene becoming fused to other genes, including the FLI1 gene in 90% of Ewing cases and the ERG gene in 5–10% of cases. These fusions result in the production of abnormal proteins, although how these abnormal proteins cause cancer is not fully known. Dermatofibrosarcoma protuberans is often associated with a chromosomal translocation in which the COL1A1 gene becomes fused to the PDGFB gene. This results in overactive PDGF signaling, which is thought to promote cell division and ultimately lead to tumor development. Inflammatory myofibroblastic tumor is often associated with rearrangements of the ALK gene, and occasionally with rearrangements of the HMGA2 gene.
Tenosynovial giant cell tumor (not a sarcoma, but a non-metastasizing and locally aggressive soft tissue tumor) frequently is associated with a chromosomal translocation between chromosome 1 and chromosome 2, in which the CSF1 gene becomes fused with the COL6A3 gene. This results in increased CSF1 protein production, which is thought to play a role in tumor development. Many liposarcomas are associated with amplification of part of chromosome 12, which results in extra copies of known cancer-promoting genes ("oncogenes") such as the CDK4 gene, the MDM2 gene, and the HMGA2 gene. Diagnosis Bone sarcomas Diagnosis of bone sarcomas begins with a thorough history and physical examination, which may reveal characteristic signs and symptoms (see Signs and symptoms above). Laboratory studies are not particularly useful in diagnosis, although some bone sarcomas (such as osteosarcoma) may be associated with elevated alkaline phosphatase levels, while others (such as Ewing sarcoma) can be associated with an elevated erythrocyte sedimentation rate. Importantly, however, none of these laboratory findings is specific to bone sarcomas: elevations in these lab values are associated with many other conditions as well as sarcoma, and thus they cannot be relied upon to conclusively diagnose sarcoma. Imaging studies are critically important in diagnosis, and most clinicians will order a plain radiograph (X-ray) initially. Other imaging studies commonly used in diagnosis include magnetic resonance imaging (MRI) studies and radioisotope bone scans. A CT scan is typically not used in the diagnosis of most types of bone sarcoma, although it is an important tool for staging (see below). Definitive diagnosis requires biopsy of the tumor and careful review of the biopsy specimen by an experienced pathologist. Soft-tissue sarcomas Diagnosis of soft-tissue sarcomas also begins with a thorough history and physical examination. Imaging studies can include either CT or MRI, although CT tends to be preferred for soft-tissue sarcomas located in the thorax, abdomen, or retroperitoneum. Positron emission tomography (PET) also may be useful in diagnosis, although its most common use is for staging (see below). As with bone sarcomas, definitive diagnosis requires biopsy of the tumor with evaluation of histology by a trained pathologist. Staging In general, cancer staging refers to how advanced a cancer is, and usually it is based upon factors such as tumor size and whether it has spread to other parts of the body. Staging is important because the stage affects the prognosis (likely outcome), as well as the types of treatments that are likely to be effective against the cancer. With sarcomas, staging requires a determination of whether the tumor has grown into surrounding tissues ("local invasion"), as well as imaging to determine whether it has spread (a process known as "metastasis") to lymph nodes (forming "nodal metastases") or to other tissues or organs in the body (forming "distant metastases"). The most common imaging tools used for staging bone sarcomas are MRI or CT to evaluate the primary tumor, contrast-enhanced CT of the chest to evaluate whether the cancer has spread (i.e., metastasized) to the lungs, and radioisotope bone scan to evaluate whether the cancer has spread to other bones. Staging for soft-tissue sarcomas typically includes imaging of the primary tumor by MRI or CT to determine tumor size, as well as contrast-enhanced CT of the chest to evaluate for metastatic tumors in the lungs.
Grade Like some other cancers, sarcomas are assigned a grade (low, intermediate, or high) based on the appearance of the tumor cells under a microscope. In general, grade refers to how aggressive the cancer is and how likely it is to spread to other parts of the body ("metastasize"). Low-grade sarcomas have a better prognosis than higher-grade sarcomas and are usually treated surgically, although sometimes radiation therapy or chemotherapy is used. Intermediate- and high-grade sarcomas are more frequently treated with a combination of surgery, chemotherapy, and radiation therapy. Since high-grade tumors are more likely to undergo metastasis (invasion and spread to locoregional and distant sites), they are treated more aggressively. The recognition that many sarcomas are sensitive to chemotherapy has dramatically improved the survival of patients. For example, in the era before chemotherapy, long-term survival for pediatric patients with localized osteosarcoma was only about 20%, but it has now risen to 60–70%. Screening In the US, the US Preventive Services Task Force (USPSTF) publishes guidelines recommending preventive screening for certain types of common cancers and other diseases. The USPSTF does not recommend screening for sarcoma, possibly because it is a very rare type of cancer (see Epidemiology below). The American Cancer Society (ACS) also publishes guidelines recommending preventive screening for certain types of common cancers. Like the USPSTF, the ACS does not recommend preventive screening for sarcoma. However, patients with some inherited conditions, such as neurofibromatosis, may benefit from screening for the development of cancers from pre-existing benign tumors called neurofibromas. Treatment Surgery is the most common form of treatment for most sarcomas that have not spread to other parts of the body, and for most sarcomas, surgery is the only curative treatment. Limb-sparing surgery, as opposed to amputation, can now be used to save the limbs of patients in at least 90% of extremity (arm or leg) sarcoma cases. Additional treatments, including chemotherapy and radiation therapy (also called "radiotherapy", which includes proton therapy), may be administered before surgery (called "neoadjuvant" chemotherapy or radiotherapy) or after surgery (called "adjuvant" chemotherapy or radiotherapy). The use of neoadjuvant or adjuvant chemotherapy and radiotherapy significantly improves the prognosis for many sarcoma patients. Treatment can be a long and arduous process, lasting about a year for many patients. Liposarcoma treatment usually consists of surgical resection, with chemotherapy considered depending on the aggressiveness of the sarcoma. Radiotherapy may also be used before or after surgical excision for liposarcoma. Pediatric rhabdomyosarcoma is usually treated with chemotherapy, surgery, and sometimes radiotherapy. Pediatric rhabdomyosarcoma patients have a 50–85% long-term survival rate. Osteosarcoma is a cancer of the bone that is treated with surgical resection of as much of the cancer as possible, often along with chemotherapy. Radiotherapy is an alternative to surgery, although not as successful. It was believed that higher doses of chemotherapy might improve survival; however, high doses of chemotherapy stop the production of blood cells in the bone marrow and can be harmful.
Stem cells collected from people before high-dose chemotherapy can be transplanted back to the person if the blood cell count gets too low; this is called autologous hematopoietic stem cell transplantation, or high-dose therapy with stem cell rescue. Research investigating whether high-dose chemotherapy followed by autologous hematopoietic stem cell transplantation is more favourable than standard-dose chemotherapy found only one randomized controlled trial (RCT), and it did not favour either of the two treatment arms with respect to overall survival. As a result, high-dose chemotherapy with stem cell rescue is generally considered appropriate only in the research setting. Prognosis Factors that affect prognosis The AJCC has identified several factors that affect the prognosis of bone sarcomas:
Size of the tumor: larger tumors tend to have a worse prognosis compared to smaller tumors.
Spread of tumor to surrounding tissues: tumors that have spread locally to surrounding tissues tend to have a worse prognosis compared to tumors that have not spread beyond their place of origin.
Stage and presence of metastases: tumors that have spread ("metastasized") to the lymph nodes (which is rare for bone sarcomas) or other organs or tissues (for example, to the lungs) have a worse prognosis compared to tumors that have not metastasized.
Tumor grade: higher-grade tumors (grades 2 and 3) tend to have a worse prognosis compared to low-grade (grade 1) tumors.
Skeletal location: tumors originating in the spine or pelvic bones tend to have a worse prognosis compared to tumors originating in arm or leg bones.
For soft-tissue sarcomas other than GISTs, factors that affect prognosis include:
Stage: as with bone sarcomas, tumors that have metastasized have a worse prognosis compared to tumors that have not metastasized.
Grade: the AJCC recommends using a grading system called the French Federation of Cancer Centers Sarcoma Group (FNCLCC) Grade for soft-tissue sarcomas, with high-grade tumors having a worse prognosis compared to low-grade tumors.
For GISTs, the key factor that affects prognosis is:
Mitotic rate: mitotic rate refers to the fraction of cells that are actively dividing within the tumor; GISTs that have a high mitotic rate have a worse prognosis compared to GISTs that have a low mitotic rate.
Outcome data According to data published by the US National Cancer Institute (NCI), the overall 5-year survival for bone sarcomas is 66.9%. The American Cancer Society (ACS) estimates that 2,140 people in the US will die in 2023 from bone sarcomas, accounting for 0.3% of all cancer deaths. The median age at death is 61 years old, although death can occur in any age group. By age group, 12.3% of bone sarcoma deaths occur in people under 20 years old, 13.8% occur in people 20–34 years old, 5.5% occur in people 35–44 years old, 9.3% occur in people 45–54 years old, 13.5% occur in people 55–64 years old, 16.2% occur in people 65–74 years old, 16.4% occur in people 75–84 years old, and 13.1% occur in people 85 years or older. For soft-tissue sarcomas, the overall 5-year survival (irrespective of stage) is 64.5%, but survival is affected by many factors, including stage. Thus, the 5-year survival is 80.8% for soft-tissue sarcomas that have not spread beyond the primary tumor ("localized" tumors), 58.0% for soft-tissue sarcomas that have spread only to nearby lymph nodes, and 16.4% for soft-tissue sarcomas that have spread to distant organs.
The ACS estimates that 5,140 people will die from soft-tissue sarcoma in 2023, accounting for 0.9% of all cancer deaths. Epidemiology Sarcomas are rare cancers. The risk of a previously healthy person receiving a new diagnosis of bone cancer is less than 0.001%, while the risk of receiving a new diagnosis of soft-tissue sarcoma is between 0.0014% and 0.005%. The American Cancer Society estimates that in the United States there will be 3,970 new cases of bone sarcoma in 2023, and 13,400 new cases of soft-tissue sarcoma. Considering that the total estimated number of new cancer diagnoses (all types of cancer) is 1,958,310, bone sarcomas represent only 0.2% of all new cancer diagnoses (making them the 30th most common type of cancer) and soft-tissue sarcomas represent only 0.7% (making them the 22nd most common type of cancer) of all new cancer diagnoses in the US in 2023. These estimates are similar to previously reported data. Sarcomas affect people of all ages. Around 50% of bone sarcomas and 20% of soft-tissue sarcomas are diagnosed in people under the age of 35. Some sarcomas, such as leiomyosarcoma, chondrosarcoma, and gastrointestinal stromal tumor (GIST), are more common in adults than in children. Most high-grade bone sarcomas, including Ewing sarcoma and osteosarcoma, are much more common in children and young adults. In fossils In 2016, scientists reported the discovery of an osteosarcoma tumor in a 1.6–1.8 million-year-old fossil from the skeleton of the now-extinct hominin species Australopithecus sediba, making it the earliest-known case of human cancer. Research Treatment of sarcoma, especially when the sarcoma has spread, or "metastasized", often requires chemotherapy, but existing chemotherapeutic medicines are associated with significant toxicities and are not highly effective in killing cancer cells. Therefore, research to identify new medications to treat sarcoma is being conducted. One new type of therapy still under investigation is the use of cancer immunotherapy (e.g., immune checkpoint inhibitors such as anti-PD-1, anti-PD-L1, and anti-CTLA-4 agents) to treat sarcomas. These drugs are not yet approved by the FDA or other regulators for this use, except for the PD-L1 inhibitor atezolizumab, which is approved for the ultra-rare diagnosis of alveolar soft part sarcoma. Other strategies, such as small-molecule targeted therapy, biologic agents (e.g., small interfering RNA molecules), and nanoparticle-directed therapy, also are under active investigation. Research to understand the specific genetic and molecular factors that cause sarcoma to develop is underway. This could allow for the design of new targeted therapies and allow physicians to more accurately predict a patient's prognosis. Awareness In the US, July is widely recognized as Sarcoma Awareness Month. The UK has a Sarcoma Awareness Week in July led by Sarcoma UK, the bone and soft-tissue cancer charity. American YouTuber Technoblade was diagnosed with sarcoma in August 2021, and died from his illness in June 2022 after the sarcoma metastasized. He had raised over $500,000 in a charity stream. Many YouTubers have raised awareness and donated to charities such as the Sarcoma Foundation of America after Technoblade's diagnosis and death. To date, Technoblade's fans have raised over $1,000,000 for sarcoma research. TikTok has provided a voice for many creators to chronicle their experiences with sarcoma.
"Dance You Outta My Head", by American singer Cat Janice went viral on TikTok in early 2024 before the singer died of sarcoma, prompting awareness of this rare disease. Kimberley Nix, a Canadian physician, chronicled her journey with undifferentiated pleomorphic sarcoma, from her diagnosis to eventual death, on TikTok under the username @cancerpatientmd. Nix died on 8 May 2024 at the age of 31, and her death was announced in a video uploaded posthumously to her TikTok page. In many of her videos, she links viewers to her Own.Cancer fundraiser, which has raised almost $118,000 CAD as of 17 May 2024.
Biology and health sciences
Cancer
Health
288216
https://en.wikipedia.org/wiki/Blue%20giant
Blue giant
In astronomy, a blue giant is a hot star with a luminosity class of III (giant) or II (bright giant). In the standard Hertzsprung–Russell diagram, these stars lie above and to the right of the main sequence. The term applies to a variety of stars in different phases of development, all evolved stars that have moved from the main sequence but have little else in common, so blue giant simply refers to stars in a particular region of the HR diagram rather than a specific type of star. They are much rarer than red giants, because they only develop from more massive and less common stars, and because they have short lives in the blue giant stage. Because O-type and B-type stars with a giant luminosity classification are often somewhat more luminous than their normal main-sequence counterparts of the same temperatures, and because many of these stars are relatively nearby to Earth on the galactic scale of the Milky Way Galaxy, many of the bright stars in the night sky are examples of blue giants, including Beta Centauri (B1III), Mimosa (B0.5III), Bellatrix (B2III), Epsilon Canis Majoris (B2II), and Alpha Lupi (B1.5III), among others. The name blue giant is sometimes misapplied to other high-mass luminous stars, such as main-sequence stars, simply because they are large and hot. Properties Blue giant is not a strictly defined term and it is applied to a wide variety of different types of stars. They have in common a moderate increase in size and luminosity compared to main-sequence stars of the same mass or temperature, and are hot enough to be called blue, meaning spectral class O, B, and sometimes early A. Their temperatures exceed roughly 10,000 K, they have zero-age main sequence (ZAMS) masses greater than about twice that of the Sun, and they have absolute magnitudes around 0 or brighter. These stars are only 5–10 times the radius of the Sun, compared to red giants, which grow very much larger. The coolest and least luminous stars referred to as blue giants are on the horizontal branch, intermediate-mass stars that have passed through a red giant phase and are now burning helium in their cores. Depending on mass and chemical composition, these stars gradually move bluewards until they exhaust the helium in their cores, and then they return redwards to the asymptotic giant branch (AGB). The RR Lyrae variable stars, usually with spectral types of A, lie across the middle of the horizontal branch. Horizontal-branch stars hotter than the RR Lyrae gap are generally considered to be blue giants, and sometimes the RR Lyrae stars themselves are called blue giants despite some of them being F class. The hottest of the blue horizontal branch (BHB) stars, called extreme horizontal branch (EHB) stars, can be hotter than main-sequence stars of the same luminosity. In these cases they are called blue subdwarf (sdB) stars rather than blue giants, named for their position to the left of the main sequence on the HR diagram rather than for their increased luminosity and temperature compared to when they were themselves main-sequence stars. There are no strict upper limits for giant stars, but early O types become increasingly difficult to classify separately from main sequence and supergiant stars, have almost identical sizes and temperatures to the main-sequence stars from which they develop, and have very short lifetimes. A good example is Plaskett's star, a close binary consisting of two massive O-type giants, both with temperatures over 30,000 K and more than 100,000 times the luminosity of the Sun.
Astronomers still differ over whether to classify at least one of the stars as a supergiant, based on subtle differences in the spectral lines. Evolution Stars found in the blue giant region of the HR diagram can be in very different stages of their lives, but all are evolved stars that have largely exhausted their core hydrogen supplies. In the simplest case, a hot luminous star begins to expand as its core hydrogen is exhausted, and first becomes a blue subgiant then a blue giant, becoming both cooler and more luminous. Intermediate-mass stars will continue to expand and cool until they become red giants. Massive stars also continue to expand as hydrogen shell burning progresses, but they do so at approximately constant luminosity and move horizontally across the HR diagram. In this way they can quickly pass through the blue giant, bright blue giant, blue supergiant, and yellow supergiant classes, until they become red supergiants. The luminosity class for such stars is determined from spectral lines that are sensitive to the surface gravity of the star, with more expanded and luminous stars being given I (supergiant) classifications while somewhat less expanded and less luminous stars are given luminosity class II or III. Because they are massive stars with short lives, many blue giants are found in OB associations, which are large collections of loosely bound young stars. BHB stars are more evolved and have helium-burning cores, although they still have an extensive hydrogen envelope. They also have moderate masses, so they are often much older than more massive blue giants. The BHB takes its name from the prominent horizontal grouping of stars seen on colour-magnitude diagrams for older clusters, where core helium-burning stars of the same age are found at a variety of temperatures with roughly the same luminosity. These stars also evolve through the core helium-burning stage at constant luminosity, first increasing in temperature then decreasing again as they move toward the AGB. At its blue end, however, the horizontal branch forms a "blue tail" of stars with lower luminosity, and occasionally a "blue hook" of even hotter stars. There are other highly evolved hot stars not generally referred to as blue giants: Wolf–Rayet stars, highly luminous and distinguished by their extreme temperatures and prominent helium and nitrogen emission lines; post-AGB stars forming planetary nebulae, similar to Wolf–Rayet stars but smaller and less massive; blue stragglers, uncommon luminous blue stars observed apparently on the main sequence in clusters where main-sequence stars of their luminosity should have evolved into giants or supergiants; and the true blue supergiants, the most massive stars evolved beyond blue giants and identified by the effects of greater expansion on their spectra. A purely theoretical group of stars could be formed when red dwarfs finally exhaust their core hydrogen trillions of years into the future. These stars are fully convective and are expected to very slowly increase both their temperature and luminosity as they accumulate more and more helium, until eventually they cannot sustain fusion and they quickly collapse to white dwarfs. Although these stars can become hotter than the Sun, they will never become more luminous, so they are hardly blue giants as we see them today. The name blue dwarf has been coined for them, although that name could easily be confusing.
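The sizes, temperatures, and luminosities quoted above can be cross-checked against the Stefan–Boltzmann law, which in solar units reads L/L☉ = (R/R☉)²(T/T☉)⁴. The following is a minimal sketch of that arithmetic; the specific radii and temperatures plugged in are illustrative values chosen from the ranges given above, not figures taken from the text.

```python
# Sanity check of blue giant luminosities via the Stefan-Boltzmann law,
# in solar units: L/Lsun = (R/Rsun)**2 * (T/Tsun)**4.

T_SUN = 5772.0  # effective temperature of the Sun in kelvin

def luminosity_solar_units(radius_rsun: float, temp_k: float) -> float:
    """Luminosity (in solar luminosities) of a star with the given
    radius (solar radii) and effective temperature (kelvin)."""
    return radius_rsun ** 2 * (temp_k / T_SUN) ** 4

# An illustrative blue giant: 8 solar radii at 20,000 K.
print(f"{luminosity_solar_units(8.0, 20_000.0):,.0f} Lsun")   # ~9,200 Lsun
# Hotter, larger O-type giants (roughly Plaskett-like assumed values)
# comfortably exceed 100,000 Lsun.
print(f"{luminosity_solar_units(15.0, 33_000.0):,.0f} Lsun")  # ~240,000 Lsun
```

A few thousand to a few hundred thousand solar luminosities is the regime described above, which helps explain why even modestly sized blue giants rank among the brightest naked-eye stars.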
Physical sciences
Stellar astronomy
Astronomy
288220
https://en.wikipedia.org/wiki/Blue%20supergiant
Blue supergiant
A blue supergiant (BSG) is a hot, luminous star, often referred to as an OB supergiant. They are usually considered to be those with luminosity class I and spectral class B9 or earlier, although sometimes A-class supergiants are also deemed blue supergiants. Blue supergiants are found towards the top left of the Hertzsprung–Russell diagram, above and to the right of the main sequence. By analogy to the red giant branch for low-mass stars, this region is also called the blue giant branch. They are larger than the Sun but smaller than a red supergiant, with surface temperatures of 10,000–50,000 K and luminosities from about 10,000 to a million times that of the Sun. They are most often an evolutionary phase between high-mass, hydrogen-fusing main-sequence stars and helium-fusing red supergiants, although new research suggests they could be the result of stellar mergers. The majority of supergiants are also blue (B-type) supergiants; blue supergiants from classes O9.5 to B2 are even more common than their main-sequence counterparts. More post-main-sequence blue supergiants are observed than theoretical models predict, since those models expect blue supergiants to be short-lived; this discrepancy is known as the blue supergiant problem, although unusual stellar interiors (such as hotter blue supergiants having oversized hydrogen-fusing cores and cooler ones having undersized helium-fusing cores) may explain it. Formation It was once believed that blue supergiants originated by "feeding" on the interstellar medium as stars passed through interstellar dust clouds, although the current consensus is that blue supergiants are evolved high-mass stars, larger and more luminous than main-sequence stars. O-type and early B-type stars with high initial masses evolve away from the main sequence in just a few million years as their hydrogen is consumed and heavy elements (with atomic numbers of 26 (Fe) and less) start to appear near the surface of the star. These stars usually become blue supergiants, although it is possible that some of them (particularly the more massive ones) evolve directly to Wolf–Rayet stars. Expansion into the supergiant stage occurs when hydrogen in the core of the star is depleted and hydrogen shell burning starts, but it may also be caused as heavy elements are dredged up to the surface by convection and as mass loss due to radiation pressure increases. Blue supergiants are newly evolved from the main sequence, have extremely high luminosities and high mass-loss rates, and are generally unstable. Many of them become luminous blue variables (LBVs) with episodes of extreme mass loss. Lower-mass blue supergiants continue to expand until they become red supergiants. In the process they must spend some time as yellow supergiants or yellow hypergiants, but this expansion occurs in just a few thousand years and so these stars are rare. Higher-mass red supergiants blow away their outer atmospheres and evolve back to blue supergiants, and possibly onwards to Wolf–Rayet stars. Depending on the exact mass and composition of a red supergiant, it can execute a number of blue loops before either exploding as a type II supernova or finally dumping enough of its outer layers to become a blue supergiant again, less luminous than the first time but more unstable. If such a star can pass through the yellow evolutionary void, it is expected to become one of the lower-luminosity LBVs.
The most massive blue supergiants are too luminous to retain an extensive atmosphere and they never expand into a red supergiant. There is an approximate dividing mass above which this happens, although the coolest and largest red supergiants develop from stars of more moderate initial mass. It is not clear whether more massive blue supergiants can lose enough mass to evolve safely into old age as a Wolf–Rayet star and finally a white dwarf, or whether they reach the Wolf–Rayet stage and explode as supernovae, or whether they explode as supernovae while blue supergiants. Supernova progenitors are most commonly red supergiants, and it was once believed that only red supergiants could explode as supernovae. SN 1987A, however, forced astronomers to re-examine this theory, as its progenitor, Sanduleak −69° 202, was a B3 blue supergiant. Now it is known from observation that almost any class of evolved high-mass star, including blue and yellow supergiants, can explode as a supernova, although theory still struggles to explain how in detail. While most supernovae are of the relatively homogeneous type II-P and are produced by red supergiants, blue supergiants are observed to produce supernovae with a wide range of luminosities, durations, and spectral types, sometimes sub-luminous like SN 1987A, sometimes super-luminous like many type IIn supernovae. Properties Because of their extreme masses they have relatively short lifespans and are mainly observed in young cosmic structures such as open clusters, the arms of spiral galaxies, and irregular galaxies. They are rarely observed in spiral galaxy cores, elliptical galaxies, or globular clusters, most of which are believed to be composed of older stars, although the core of the Milky Way has recently been found to be home to several massive open clusters and associated young hot stars. The best-known example is Rigel, the brightest star in the constellation of Orion. Its mass is about 20 times that of the Sun, and its luminosity is around 117,000 times greater. Despite their rarity and their short lives they are heavily represented among the stars visible to the naked eye; their immense brightness is more than enough to compensate for their scarcity. Blue supergiants have fast stellar winds, and the most luminous, called hypergiants, have spectra dominated by emission lines that indicate strong continuum-driven mass loss. Blue supergiants show varying quantities of heavy elements in their spectra, depending on their age and the efficiency with which the products of nucleosynthesis in the core are convected up to the surface. Rapidly rotating supergiants can be highly mixed and show high proportions of helium and even heavier elements while still burning hydrogen at the core; these stars show spectra very similar to a Wolf–Rayet star. Many blue supergiant stars are Alpha Cygni variables. While the stellar wind from a red supergiant is dense and slow, the wind from a blue supergiant is fast but sparse. When a red supergiant becomes a blue supergiant, the faster wind it produces impacts the already emitted slow wind and causes the outflowing material to condense into a thin shell. In some cases, several concentric faint shells can be seen from successive episodes of mass loss, either previous blue loops from the red supergiant stage, or eruptions such as LBV outbursts.
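Rigel's figures above can be turned into a quick radius estimate by inverting the Stefan–Boltzmann law, R/R☉ = √(L/L☉)·(T☉/T)². This is only an illustrative sketch: the 12,100 K effective temperature used below is an assumed round value for a late-B supergiant, not a number from the text.

```python
import math

# Radius implied by the Stefan-Boltzmann law, in solar units:
# R/Rsun = sqrt(L/Lsun) * (Tsun/T)**2.

T_SUN = 5772.0  # solar effective temperature in kelvin

def radius_solar_units(lum_lsun: float, temp_k: float) -> float:
    """Radius (in solar radii) of a star with the given luminosity
    (solar luminosities) and effective temperature (kelvin)."""
    return math.sqrt(lum_lsun) * (T_SUN / temp_k) ** 2

# Rigel: ~117,000 Lsun from the text; ~12,100 K is an assumed value.
print(f"~{radius_solar_units(117_000.0, 12_100.0):.0f} Rsun")  # ~78 Rsun
```

A radius of several tens of solar radii sits neatly between the Sun and the largest red supergiants, matching the description of blue supergiants above.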
Examples
Rigel (β Orionis), a blue (B-type) supergiant, believed to be evolving to the red supergiant phase
Deneb (Alpha Cygni), a blue (A-type) supergiant, believed to be evolving to the red supergiant phase
Mu Sagittarii, a multiple star system containing a B-type supergiant
Alnitak, an O-type blue supergiant
Eta Canis Majoris, a blue supergiant of spectral type B5Ia
UW Canis Majoris (UW CMa), two blue (O-type) supergiants in a binary system
Zeta Puppis (Naos), a blue (O-type) supergiant, spectral type O4I(n)fp
Alnilam (Epsilon Orionis), a B-type supergiant, spectral type B0Ia, central star of Orion's Belt
Saiph (Kappa Orionis), a B-type supergiant, spectral type B0.5Ia
Chi2 Orionis, a B-type supergiant, spectral type B2Ia
5 Persei, a B-type supergiant, spectral type B5Ia
10 Persei, a B-type supergiant, spectral type B2Ia
Omicron² Canis Majoris, a B-type supergiant, spectral type B3Ia
Lambda Cephei, an O-type supergiant, spectral type O6.5I(n)fp
Mu Sagittarii, a B-type supergiant, spectral type B8Iab(e)
4 Lacertae, a B-type supergiant, spectral type B9Iab, believed to be in a blue loop
Nu Cephei, an A-type supergiant, spectral type A2Ia
Alpha Camelopardalis, an O-type supergiant, spectral type O9Ia
Sigma Cygni, an A-type supergiant, spectral type A0Ia
Physical sciences
Stellar astronomy
Astronomy
288224
https://en.wikipedia.org/wiki/Gibbard%E2%80%93Satterthwaite%20theorem
Gibbard–Satterthwaite theorem
The Gibbard–Satterthwaite theorem is a theorem in social choice theory. It was first conjectured by the philosopher Michael Dummett and the mathematician Robin Farquharson in 1961 and then proved independently by the philosopher Allan Gibbard in 1973 and the economist Mark Satterthwaite in 1975. It deals with deterministic ordinal electoral systems that choose a single winner, and shows that for every voting rule of this form, at least one of the following three things must hold:
The rule is dictatorial, i.e. there exists a distinguished voter who can choose the winner; or
The rule limits the possible outcomes to two alternatives only; or
The rule is not straightforward, i.e. there is no single always-best strategy (one that does not depend on other voters' preferences or behavior).
Gibbard's proof of the theorem is more general and covers processes of collective decision that may not be ordinal, such as cardinal voting. Gibbard's 1978 theorem and Hylland's theorem are even more general and extend these results to non-deterministic processes, where the outcome may depend partly on chance; the Duggan–Schwartz theorem extends these results to multiwinner electoral systems. Informal description Consider three voters named Alice, Bob and Carol, who wish to select a winner among four candidates named a, b, c and d. Assume that they use the Borda count: each voter communicates his or her preference order over the candidates. For each ballot, 3 points are assigned to the top candidate, 2 points to the second candidate, 1 point to the third one and 0 points to the last one. Once all ballots have been counted, the candidate with the most points is declared the winner. Assume that their preferences are as follows: Alice ranks a, then b, then c, then d, while Bob and Carol each rank c, then b, then d, then a. If the voters cast sincere ballots, then the scores are: a gets 3 points, b gets 6, c gets 7 and d gets 2. Hence, candidate c will be elected, with 7 points. But Alice can vote strategically and change the result. Assume that she modifies her ballot, ranking b first, then a, then d, then c. Alice has strategically upgraded candidate b and downgraded candidate c. Now, the scores are: a gets 2 points, b gets 7, c gets 6 and d gets 3. Hence, b is elected. Alice is satisfied by her ballot modification, because she prefers the outcome b to c, which is the outcome she would obtain if she voted sincerely. We say that the Borda count is manipulable: there exist situations where a sincere ballot does not defend a voter's preferences best. The Gibbard–Satterthwaite theorem states that every ranked-choice voting rule is manipulable, except possibly in two cases: if there is a distinguished voter who has dictatorial power, or if the rule limits the possible outcomes to two options only. Formal statement Let A be the set of alternatives (which is assumed finite), also called candidates, even if they are not necessarily persons: they can also be several possible decisions about a given issue. We denote by N the set of voters. Let P be the set of strict weak orders over A: an element of this set can represent the preferences of a voter, where a voter may be indifferent regarding the ordering of some alternatives. A voting rule is a function f : P^N → A. Its input is a profile of preferences and it yields the identity of the winning candidate. We say that f is manipulable if and only if there exists a profile where some voter i, by replacing her ballot with another ballot, can get an outcome that she prefers (in the sense of her sincere preferences). We denote by f(P^N) the image of f, i.e. the set of possible outcomes for the election. For example, we say that f has at least three possible outcomes if and only if the cardinality of f(P^N) is 3 or more.
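The Borda manipulation in the informal description above is easy to replay mechanically. Below is a minimal sketch in Python (illustrative, not from the source): it scores ballots as described, reproduces the sincere and strategic outcomes for the profile given above, and brute-forces Alice's possible deviations.

```python
# Minimal sketch of the Borda manipulation described above (illustrative,
# not from the source). A ballot is a string ranking candidates from most
# to least preferred, e.g. "abcd".

from itertools import permutations

CANDIDATES = "abcd"

def borda_winner(ballots):
    """Score ballots 3-2-1-0 and return (winner, scores).
    Ties are broken alphabetically, i.e. deterministically."""
    scores = {c: 0 for c in CANDIDATES}
    for ballot in ballots:
        for points, candidate in enumerate(reversed(ballot)):
            scores[candidate] += points
    winner = max(sorted(scores), key=lambda c: scores[c])
    return winner, scores

sincere = ["abcd", "cbda", "cbda"]    # Alice, Bob, Carol
print(borda_winner(sincere))          # ('c', {'a': 3, 'b': 6, 'c': 7, 'd': 2})

strategic = ["badc", "cbda", "cbda"]  # Alice misreports b > a > d > c
print(borda_winner(strategic))        # ('b', {'a': 2, 'b': 7, 'c': 6, 'd': 3})

# Brute force: is there any ballot Alice prefers to voting sincerely?
sincere_winner, _ = borda_winner(sincere)
for alt in permutations(CANDIDATES):
    winner, _ = borda_winner(["".join(alt), "cbda", "cbda"])
    if CANDIDATES.index(winner) < CANDIDATES.index(sincere_winner):
        # Finds "abdc" first: b ties c at 6 points and wins the alphabetical
        # tie-break, which Alice (a > b > c > d) prefers to c.
        print("profitable deviation:", "".join(alt), "->", winner)
        break
```

The brute-force loop illustrates the theorem's content in miniature: with more than two possible outcomes and no dictator, some profile admits a profitable misreport.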
We say that f is dictatorial if and only if there exists a voter i who is a dictator, in the sense that the winning alternative is always her most-liked one among the possible outcomes, regardless of the preferences of the other voters. If the dictator has several equally most-liked alternatives among the possible outcomes, then the winning alternative is simply one of them. Counterexamples and loopholes A variety of "counterexamples" to the Gibbard–Satterthwaite theorem exist when the conditions of the theorem do not apply. Cardinal voting Consider a three-candidate election conducted by score voting. It is always optimal for a voter to give the best candidate the highest possible score, and the worst candidate the lowest possible score. Then, no matter which score the voter assigns to the middle candidate, it will always fall (non-strictly) between the first and last scores; this implies the voter's score ballot will be weakly consistent with that voter's honest ranking. However, the actual optimal score may depend on the other ballots cast, as indicated by Gibbard's theorem. Serial dictatorship The serial dictatorship is defined as follows. If voter 1 has a unique most-liked candidate, then this candidate is elected. Otherwise, possible outcomes are restricted to her most-liked candidates, and the other candidates are eliminated. Then voter 2's ballot is examined: if there is a unique best-liked candidate among the non-eliminated ones, then this candidate is elected. Otherwise, the list of possible outcomes is reduced again, etc. If there are still several non-eliminated candidates after all ballots have been examined, then an arbitrary tie-breaking rule is used. This voting rule is not manipulable: a voter is always better off communicating his or her sincere preferences. It is also dictatorial, and its dictator is voter 1: the winning alternative is always that specific voter's most-liked one or, if there are several most-liked alternatives, it is chosen among them. Simple majority vote If there are only 2 possible outcomes, a voting rule may be non-manipulable without being dictatorial. This is the case for the simple majority vote: each voter assigns 1 point to her top alternative and 0 to the other, and the alternative with the most points is declared the winner. (If both alternatives reach the same number of points, the tie is broken in an arbitrary but deterministic manner, e.g. outcome a wins.) This voting rule is not manipulable because a voter is always better off communicating his or her sincere preferences; and it is clearly not dictatorial. Many other rules are neither manipulable nor dictatorial: for example, assume that alternative a wins if it gets two thirds of the votes, and b wins otherwise. Corollary We now consider the case where, by assumption, a voter cannot be indifferent between two candidates. We denote by L the set of strict total orders over A and we define a strict voting rule as a function f : L^N → A. The definitions of possible outcomes, manipulable and dictatorial have natural adaptations to this framework. For a strict voting rule, the converse of the Gibbard–Satterthwaite theorem is true. Indeed, a strict voting rule is dictatorial if and only if it always selects the most-liked candidate of the dictator among the possible outcomes; in particular, it does not depend on the other voters' ballots.
As a consequence, it is not manipulable: the dictator is perfectly defended by her sincere ballot, and the other voters have no impact on the outcome, hence they have no incentive to deviate from sincere voting. Thus, we obtain the following equivalence: a strict voting rule with at least three possible outcomes is non-manipulable if and only if it is dictatorial. In the theorem, as well as in the corollary, it is not necessary to assume that any alternative can be elected. It is only assumed that at least three of them can win, i.e. are possible outcomes of the voting rule. It is possible that some other alternatives can be elected in no circumstances: the theorem and the corollary still apply. However, the corollary is sometimes presented under a less general form: instead of assuming that the rule has at least three possible outcomes, it is sometimes assumed that A contains at least three elements and that the voting rule is onto, i.e. every alternative is a possible outcome. The assumption of being onto is sometimes even replaced with the assumption that the rule is unanimous, in the sense that if all voters prefer the same candidate, then she must be elected. Sketch of proof The Gibbard–Satterthwaite theorem can be proved using Arrow's impossibility theorem for social ranking functions. We give a sketch of proof in the simplified case where the voting rule f is assumed to be Pareto-efficient. It is possible to build a social ranking function R, as follows: in order to decide whether a is ranked above b, the function R creates new preferences in which a and b are moved to the top of all voters' preferences. Then, R examines whether f chooses a or b. It is possible to prove that, if f is non-manipulable and non-dictatorial, then R satisfies independence of irrelevant alternatives. Arrow's impossibility theorem says that, when there are three or more alternatives, such a function must be a dictatorship. Hence, such a voting rule must also be a dictatorship. Later authors have developed other variants of the proof. History The strategic aspect of voting was already noticed in 1876 by Charles Dodgson, also known as Lewis Carroll, a pioneer in social choice theory. His quote (about a particular voting system) was made famous by Duncan Black: "This principle of voting makes an election more of a game of skill than a real test of the wishes of the electors." During the 1950s, Robin Farquharson published influential articles on voting theory. In an article with Michael Dummett, he conjectured that deterministic voting rules with at least three outcomes are never straightforward, i.e. are always open to tactical voting. This conjecture was later proven independently by Allan Gibbard and Mark Satterthwaite. In a 1973 article, Gibbard exploits Arrow's impossibility theorem from 1951 to prove the result we now know as Gibbard's theorem. Independently, Satterthwaite proved the same result in his PhD dissertation in 1973, then published it in a 1975 article. This proof is also based on Arrow's impossibility theorem, but does not involve the more general version given by Gibbard's theorem. Related results Gibbard's theorem deals with processes of collective choice that may not be ordinal, i.e. where a voter's action may not consist in communicating a preference order over the candidates. Gibbard's 1978 theorem and Hylland's theorem extend these results to non-deterministic mechanisms, i.e. where the outcome may not only depend on the ballots but may also involve a part of chance. The Duggan–Schwartz theorem extends this result in another direction, by dealing with deterministic voting rules that choose multiple winners.
Importance The Gibbard–Satterthwaite theorem is generally presented as a result about voting systems, but it can also be seen as an important result of mechanism design, which deals with a broader class of decision rules. Noam Nisan describes this relation: "The GS theorem seems to quash any hope of designing incentive-compatible social-choice functions. The whole field of Mechanism Design attempts escaping from this impossibility result using various modifications in the model." The main idea of these "escape routes" is that they allow for a broader class of mechanisms than ranked voting, similarly to the escape routes from Arrow's impossibility theorem.
Mathematics
Game theory
null
288311
https://en.wikipedia.org/wiki/Web%20application
Web application
A web application (or web app) is application software that is created with web technologies and runs via a web browser. Web applications emerged during the late 1990s and allowed for the server to dynamically build a response to the request, in contrast to static web pages. Web applications are commonly distributed via a web server. There are several different tier systems that web applications use to communicate between web browsers, the client interface, and server data. Each system has its own uses, as they function in different ways. However, there are many security risks that developers must be aware of during development; proper measures to protect user data are vital. Web applications are often constructed with the use of a web application framework. Single-page applications and progressive web apps are two approaches for a website to seem more like a native app. History The concept of a "web application" was first introduced in the Java language in the Servlet Specification version 2.2, which was released in 1999. At that time, both JavaScript and XML had already been developed, but the XMLHttpRequest object had only recently been introduced on Internet Explorer 5 as an ActiveX object. Beginning around the early 2000s, applications such as "Myspace (2003), Gmail (2004), Digg (2004), [and] Google Maps (2005)," started to make their client sides more and more interactive. A web page script is able to contact the server for storing/retrieving data without downloading an entire web page. The practice became known as Ajax in 2005. In earlier computing models like client-server, the processing load for the application was shared between code on the server and code installed on each client locally. In other words, an application had its own pre-compiled client program which served as its user interface and had to be separately installed on each user's personal computer. An upgrade to the server-side code of the application would typically also require an upgrade to the client-side code installed on each user workstation, adding to the support cost and decreasing productivity. Additionally, both the client and server components of the application were bound tightly to a particular computer architecture and operating system, which made porting them to other systems prohibitively expensive for all but the largest applications. In 1995, Netscape introduced the client-side scripting language called JavaScript, which allowed programmers to add dynamic elements to the user interface that ran on the client side. Essentially, instead of sending data to the server in order to generate an entire web page, the embedded scripts of the downloaded page can perform various tasks such as input validation or showing/hiding parts of the page. "Progressive web apps", a term coined by designer Frances Berriman and Google Chrome engineer Alex Russell in 2015, refers to apps taking advantage of new features supported by modern browsers, which initially run inside a web browser tab but later can run completely offline and can be launched without entering the app URL in the browser. Structure Traditional PC applications are typically single-tiered, residing solely on the client machine. In contrast, web applications inherently facilitate a multi-tiered architecture. Though many variations are possible, the most common structure is the three-tiered application. In its most common form, the three tiers are called presentation, application and storage.
The first tier, presentation, refers to the web browser itself. The second tier refers to any engine using dynamic web content technology (such as ASP, CGI, ColdFusion, Dart, JSP/Java, Node.js, PHP, Python or Ruby on Rails). The third tier refers to a database that stores data and determines the structure of a user interface. Essentially, when using the three-tiered system, the web browser sends requests to the engine, which then services them by making queries and updates against the database and generating a user interface. The 3-tier solution may fall short when dealing with more complex applications, and may need to be replaced with the n-tiered approach; the greatest benefit of this is that the business logic (which resides on the application tier) is broken down into a more fine-grained model. Another benefit is the option of adding an integration tier, which separates the data tier from the rest and provides an easy-to-use interface to access the data. For example, client data would be accessed by calling a "list_clients()" function instead of making an SQL query directly against the client table in the database. This allows the underlying database to be replaced without making any change to the other tiers. There are some who view a web application as a two-tier architecture. This can be a "smart" client that performs all the work and queries a "dumb" server, or a "dumb" client that relies on a "smart" server. The client would handle the presentation tier, the server would have the database (storage tier), and the business logic (application tier) would be on one of them or on both. While this increases the scalability of the applications and separates the display and the database, it still does not allow for true specialization of layers, so most applications will outgrow this model. Security Security breaches in these kinds of applications are a major concern because they can involve both enterprise information and private customer data. Protecting these assets is an important part of any web application, and there are some key operational areas that must be included in the development process. This includes processes for authentication, authorization, asset handling, input, and logging and auditing. Building security into applications from the beginning is sometimes more effective and less disruptive in the long run. Development Writing web applications is simplified with the use of web application frameworks. These frameworks facilitate rapid application development by allowing a development team to focus on the parts of their application which are unique to their goals without having to resolve common development issues such as user management. In addition, there is potential for the development of applications on Internet operating systems, although currently there are not many viable platforms that fit this model.
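The integration-tier idea mentioned above can be made concrete with a small sketch. Everything below is illustrative rather than taken from the text: the table layout and the use of SQLite are assumptions, and only the list_clients() name comes from the example above. Callers never see SQL, so the storage tier can be swapped without touching them.

```python
# Minimal sketch of an integration tier: callers use list_clients() and
# never write SQL, so the underlying database could be replaced without
# changing the other tiers. SQLite and the schema are illustrative only.

import sqlite3

def connect():
    # A real deployment would connect to the production database; an
    # in-memory SQLite database keeps this example self-contained.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE client (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO client (name) VALUES (?)",
                     [("Acme",), ("Globex",)])
    return conn

def list_clients(conn):
    """Integration-tier accessor: hides the SQL query behind a plain call."""
    rows = conn.execute("SELECT id, name FROM client ORDER BY id").fetchall()
    return [{"id": row[0], "name": row[1]} for row in rows]

if __name__ == "__main__":
    db = connect()
    print(list_clients(db))
    # [{'id': 1, 'name': 'Acme'}, {'id': 2, 'name': 'Globex'}]
```

If the storage tier later moved from SQLite to another database, only connect() and the query inside list_clients() would change; the application tier calling list_clients() would not.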
Technology
Computer software
null
288316
https://en.wikipedia.org/wiki/Instinct
Instinct
Instinct is the inherent inclination of a living organism towards a particular complex behaviour, containing innate (inborn) elements. The simplest example of an instinctive behaviour is a fixed action pattern (FAP), in which a very short to medium-length sequence of actions, without variation, is carried out in response to a corresponding clearly defined stimulus. Any behaviour is instinctive if it is performed without being based upon prior experience (that is, in the absence of learning), and is therefore an expression of innate biological factors. Sea turtles, newly hatched on a beach, will instinctively move toward the ocean. A marsupial climbs into its mother's pouch upon being born. Other examples include animal fighting, animal courtship behaviour, internal escape functions, and the building of nests. Though an instinct is defined by its invariant innate characteristics, details of its performance can be changed by experience; for example, a dog can improve its listening skills by practice. Instincts are inborn complex patterns of behaviour that exist in most members of the species, and should be distinguished from reflexes, which are simple responses of an organism to a specific stimulus, such as the contraction of the pupil in response to bright light or the spasmodic movement of the lower leg when the knee is tapped. The absence of volitional capacity must not be confused with an inability to modify fixed action patterns. For example, people may be able to modify a stimulated fixed action pattern by consciously recognizing the point of its activation and simply stopping, whereas animals without a sufficiently strong volitional capacity may not be able to disengage from their fixed action patterns, once activated. Instinctual behaviour in humans has also been studied. Early theorists Jean Henri Fabre Jean Henri Fabre (1823–1915) is said to be the first person to study small animals (other than birds) and insects, and he specialized in the instincts of insects. Fabre considered an instinct to be a linked set of behaviours that an organism undergoes unconsciously in response to external conditions. Insect and animal behaviour Fabre concluded that a significant difference between humans and other animals is that most animals cannot reason. He came to this conclusion after observing how insects and wild birds continued to repeat a certain behaviour in response to a novel situation. While these instinctive behaviours appeared complex, the insects and animals did not adjust their behaviour despite it not helping them in that novel situation. Behaviours that Fabre observed and labelled "instinctive", because they do not involve reasoning, include maternal instincts, metamorphosis, mimicry, molting, playing dead, and taxis. Fixed patterns Fabre believed instincts were "fixed patterns", meaning these linked sets of behaviours do not change in response to novel environmental situations. One specific example that helped him arrive at this conclusion is his study of various wasp species. All of the wasp species he studied performed a certain pattern of behaviour when catching their prey, which Fabre called a fixed pattern. Then Fabre intervened in the wasps' process of catching prey, and only one of the species adjusted its behaviour in response to this unfamiliar intervention.
Fabre explained this contradiction by arguing that any individuals which stray from the norms of their species are merely an exception, while also admitting that there could be some room for growth within a species' instincts. Fabre's belief that instincts are fixed opposes the theory of evolution. He rejected the idea that one species could evolve into another, and likewise rejected the idea that human consciousness could be achieved through the evolution of unconscious traits. Wilhelm Wundt Wilhelm Wundt (1832–1920) is known for founding the first psychology laboratory, in 1879 at the University of Leipzig. He was able to draw conclusions about instinct from his careful observations of both animal and human behaviour. Unconscious processes Wundt believed unconscious processes (which he called "instinctive movements") were the result of sensations and emotions, and that these unconscious processes were building blocks towards consciousness. Facial expressions An example of what Wundt studied to arrive at his conclusions regarding unconscious processes is the facial expressions babies made in response to the sensations of sweet, sour, and bitter tastes. He concluded these facial expressions were the result of the babies trying to avoid unpleasant emotions because there was something unpleasant in their mouths, and that these instincts (a term he used interchangeably with reflexive movements) only became innate because past generations had learned them and the behaviour benefited their survival. Natural selection The process by which Wundt explained the existence of instincts is natural selection. More specifically, his research suggests natural selection causes small changes in the nervous system over time. These changes bring about hereditary drives in organisms, which are then responsible for any unconscious processes. Another thing to note is that Wundt used the terms unconscious processes, reflexive movements, and instinctive movements interchangeably, often grouping them together. Sigmund Freud Sigmund Freud considered instincts to be mental images of bodily needs, expressed in the form of mental desires. William McDougall In the early 20th century, a "union of instinct and emotion" was recognized. William McDougall held that many instincts have their respective associated specific emotions. As research became more rigorous and terms better defined, instinct as an explanation for human behaviour became less common. In 1932, McDougall argued that the word instinct is more suitable for describing animal behaviour, while he recommended the word propensity for goal-directed combinations of the many innate human abilities, which are loosely and variably linked, in a way that shows strong plasticity. Abraham Maslow In the 1950s, the psychologist Abraham Maslow argued that humans no longer have instincts because we have the ability to override them in certain situations. He felt that what is called instinct is often imprecisely defined, and really amounts to strong "drives". For Maslow, an instinct is something which cannot be overridden, and therefore while the term may have applied to humans in the past, it no longer does. Konrad Lorenz An interest in innate behaviours arose again in the 1950s with Konrad Lorenz and Nikolaas Tinbergen, who made the distinction between instinct and learned behaviours. Our modern understanding of instinctual behaviour in animals owes much to their work. For instance, there exists a sensitive period for a bird in which it learns the identity of its mother.
Konrad Lorenz famously had a goose imprint on his boots. Thereafter the goose would follow whoever wore the boots. This suggests that the identity of the goose's mother was learned, but the goose's behaviour towards what it perceived as its mother was instinctive. Frank Beach In a conference in 1960, chaired by Frank Beach, a pioneer in comparative psychology, and attended by luminaries in the field, the term instinct was restricted in its application. During the 1960s and 1970s, textbooks still contained some discussion of instincts in reference to human behaviour. By the year 2000, a survey of the 12 best-selling textbooks in introductory psychology revealed only one reference to instincts, and that was in regard to Sigmund Freud's reference to the instincts of the "id". In this sense, the term instinct appeared to have become outmoded for introductory textbooks on human psychology. The book Instinct: An Enduring Problem in Psychology (1961) selected a range of writings about the topic. Richard Herrnstein In a classic paper published in 1972, the psychologist Richard Herrnstein wrote: "A comparison of McDougall's theory of instinct and Skinner's reinforcement theory—representing nature and nurture—shows remarkable, and largely unrecognized, similarities between the contending sides in the nature–nurture debate as applied to the analysis of behavior." F. B. Mandal proposed a set of criteria by which a behaviour might be considered instinctual: (a) be automatic, (b) be irresistible, (c) occur at some point in development, (d) be triggered by some event in the environment, (e) occur in every member of the species, (f) be unmodifiable, and (g) govern behaviour for which the organism needs no training (although the organism may profit from experience and to that degree the behaviour is modifiable). In Information Behavior: An Evolutionary Instinct (2010, pp. 35–42), Amanda Spink notes that "currently in the behavioral sciences instinct is generally understood as the innate part of behavior that emerges without any training or education in humans." She claims that the viewpoint that information behaviour has an instinctive basis is grounded in the latest thinking on human behaviour. Furthermore, she notes that "behaviors such as cooperation, sexual behavior, child rearing and aesthetics are [also] seen as 'evolved psychological mechanisms' with an instinctive basis." Spink adds that Steven Pinker similarly asserts that language acquisition is instinctive in humans in his book The Language Instinct (1994). In 1908, William McDougall wrote about the "instinct of curiosity" and its associated "emotion of wonder", though Spink's book does not mention this. M. S. Blumberg in 2017 examined the use of the word instinct, and found that it varied significantly. In humans Among possible examples of instinct-influenced behaviour in humans are the following. Congenital preparedness for developing fear of snakes and spiders was found in six-month-old babies. Infant cry is believed to be a manifestation of instinct. The infant cannot otherwise protect itself for survival during its long period of maturation. The maternal and paternal bond manifest particularly in response to the infant cry. Its mechanism has been partly elucidated by observations with functional MRI of the parent's brain. The herd instinct is found in human children and chimpanzee infants, but is apparently absent in young orangutans. Hormones are linked to specific forms of human behaviour, such as sexuality.
High levels of testosterone are often associated with aggressiveness in a person of either sex, and a decrease in testosterone level has been found among fathers after the birth of a child. Hygiene behaviour in humans has been suggested to be partly instinctive, based on emotions such as disgust. The maternal bond, or maternal instinct, is the relationship a mother develops with a child to provide for its well-being; maternal oxytocin is the hormone and neuropeptide thought to predispose women to bonding behaviour. Self-preservation is the general instinct to survive. The fight-or-flight response in human beings has been described as a particular response to a harmful event, attack, or threat to survival. Cooperative behaviour, or the social instinct, has been postulated as an instinct necessary for the future survival of people. Resistance towards change is the difficulty a person experiences in accepting suggestions to change behaviour or accept certain treatments, regardless of whether doing so would improve their condition. Adaptive behaviour towards the environment is an inherited, innate phenotypic characteristic, whether inherited directly as instincts or as a neuropsychological capacity that furthers learning. Examples are mating, searching for food, situational awareness, establishing the pecking order, and vocalizations. Reflexes Examples of behaviours that do not require thought include many reflexes. The stimulus in a reflex may not require brain activity but instead may travel to the spinal cord as a message that is then transmitted back through the body, tracing a path called the reflex arc. Reflexes are similar to fixed action patterns in that most reflexes meet the criteria of a fixed action pattern. However, a fixed action pattern can be processed in the brain as well; a male stickleback's instinctive aggression towards anything red during his mating season is one example. Examples of instinctive behaviours in humans include many of the primitive reflexes, such as rooting and suckling, behaviours which are present in mammals. In rats, it has been observed that innate responses are related to specific chemicals, which are detected by two organs located in the nose: the vomeronasal organ (VNO) and the main olfactory epithelium (MOE). Maturational Some instinctive behaviours depend on maturational processes to appear. For instance, birds are commonly said to "learn" to fly. However, young birds have been experimentally reared in devices that prevented them from moving their wings until they reached the age at which their cohorts were flying. These birds flew immediately and normally when released, showing that their improvement resulted from neuromuscular maturation and not true learning. In evolution Imprinting provides one example of instinct. This complex response may involve visual, auditory, and olfactory cues in the environment surrounding an organism. In some cases, imprinting attaches an offspring to its parent, which is a reproductive benefit to offspring survival. If an offspring has attachment to a parent, it is more likely to stay nearby under parental protection. Attached offspring are also more likely to learn from a parental figure when interacting closely. (Reproductive benefits are a driving force behind natural selection.) Environment is an important factor in the evolution of innate behaviour.
A hypothesis of Michael McCullough, a positive psychologist, holds that environment plays a key role in human behaviours such as forgiveness and revenge, proposing that different social environments cause either forgiveness or revenge to prevail. McCullough relates his hypothesis to game theory: in a tit-for-tat strategy, cooperation and retaliation are comparable to forgiveness and revenge, and the choice between the two can be beneficial or detrimental depending on what the partner-organism chooses (a minimal simulation of this strategy appears at the end of this section). Though this psychological application of game theory does not yield directly measurable results, it offers a useful framework for thinking about how social environments shape such behaviours. From a more biological standpoint, the brain's limbic system operates as the main control area for responses to certain stimuli, including a variety of instinctual behaviour. The limbic system processes external stimuli related to emotions, social activity, and motivation, and propagates a behavioural response. Some such behaviours include maternal care, aggression, defense, and social hierarchy. These behaviours are influenced by sensory input: sight, sound, touch, and smell. Within the circuitry of the limbic system, there are various places where evolution could have taken place, or could take place in the future. For example, many rodents have receptors in the vomeronasal organ that respond explicitly to predator stimuli specific to that individual species of rodent. The reception of a predatory stimulus usually creates a response of defense or fear. Mating in rats follows a similar mechanism. The vomeronasal organ and the main olfactory epithelium, together called the olfactory system, detect pheromones from the opposite sex. These signals then travel to the medial amygdala, which disperses the signal to a variety of brain parts. The pathways involved with innate circuitry are extremely specialized and specific. Various organs and sensory receptors play parts in this complex process. Instinct is a phenomenon that can be investigated from a multitude of angles: genetics, the limbic system, nervous pathways, and environment. Researchers can study instincts at levels ranging from the molecular to that of groups of individuals. Extremely specialized systems have evolved, resulting in individuals which exhibit behaviours without learning them.
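To make the game-theoretic analogy concrete, the following is a minimal sketch, in Python, of the tit-for-tat strategy referenced above. The payoff values follow the standard prisoner's dilemma ordering but are illustrative assumptions, not figures from McCullough's work.

```python
# Minimal tit-for-tat sketch of the forgiveness/revenge analogy.
# Payoffs follow the usual prisoner's dilemma ordering (T > R > P > S);
# the specific numbers are illustrative assumptions.

PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # mutual "forgiveness"
    ("cooperate", "defect"):    (0, 5),  # exploited by the partner
    ("defect",    "cooperate"): (5, 0),  # exploiting the partner
    ("defect",    "defect"):    (1, 1),  # mutual "revenge"
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the partner's previous move."""
    return "cooperate" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "defect"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []  # each side's record of the *other's* moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a)
        move_b = strategy_b(history_b)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_b)
        history_b.append(move_a)
    return score_a, score_b

# Whether "forgiveness" or "revenge" prevails depends on the partner:
print(play(tit_for_tat, tit_for_tat))    # (30, 30): sustained cooperation
print(play(tit_for_tat, always_defect))  # (9, 14): retaliation limits losses
```

As the two runs show, the same strategy produces mostly forgiveness against a cooperative partner and mostly retaliation against a hostile one, which is the environmental dependence the hypothesis describes.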
Biology and health sciences
Ethology
null
288557
https://en.wikipedia.org/wiki/Javan%20rhinoceros
Javan rhinoceros
The Javan rhinoceros (Rhinoceros sondaicus), Javan rhino, Sunda rhinoceros or lesser one-horned rhinoceros is a critically endangered member of the genus Rhinoceros, of the rhinoceros family, Rhinocerotidae, and one of the five remaining extant rhinoceros species across South Asia and Africa. The Javan rhinoceros is one of the smallest rhinoceros species, along with the Sumatran, or "hairy", rhinoceros. They are superficially similar to Indian rhinos, as they have plate-like, "armored" protective skin folds, but are slightly smaller on average. The heaviest specimens weigh around 2,300 kg (2.3 tonnes; 2.54 short tons), similar to a black rhinoceros. However, unlike the long and potentially lethal horns of the black or white rhinoceroses of Africa, the Javan species' single, somewhat blunted horn (only present on males) is comparatively short. Up until the mid-19th to early 20th centuries, the Javan rhinoceros ranged beyond the islands of Java and Sumatra onto the mainland of Southeast Asia and Indochina, and northwest into East India, Bhutan, and the south of China. Today, it is the rarest of all rhinoceroses, and among the rarest of all living animal species, with only one currently known wild population and no individuals successfully kept in captivity. It is among the rarest large mammals on Earth, with a population of approximately 74 rhinos within Ujung Kulon National Park, at the far western tip of Java, Indonesia. In 2023, the Indonesian authorities captured two gangs of poachers who confessed to killing 26 rhinos between 2019 and 2023. No census has been released since 2019. The Javan rhinoceros population in Vietnam's Cat Tien National Park was declared locally extinct in 2011. The decline of the Javan rhinoceros is primarily attributed to poaching for the males' horns, which, despite being composed merely of keratin, are highly valued in traditional Chinese medicine, fetching as much as US$30,000 per kg on the black market. As the presence of colonial Dutch and other Europeans in its range increased, peaking in the 1700s and 1800s, trophy hunting also became a serious threat. Loss of habitat and massive human population growth (especially after wars, such as the Vietnam War) have also contributed to its decline and hindered the species' recovery. The remaining range is within one nationally protected area, and Ujung Kulon is also a UNESCO World Heritage Site. Nonetheless, the park's rural and often rugged boundaries mean that law enforcement cannot be equally present in all places at all times; in some areas, this lack of security still places the species at risk from poachers and disease exposure and, ultimately, from loss of genetic diversity (a genetic "bottleneck"), which can lead to inbreeding depression. The Javan rhinoceros can live around 30–45 years in the wild. It historically inhabited dense lowland rainforest, wet grasslands, and vast floodplains at forest edges. It is mostly solitary, except for courtship and the rearing of offspring, though groups may occasionally congregate near wallows and salt licks. Aside from humans, whom they usually avoid, adult rhinos have no natural predators in their range. Very small juveniles, if left unsupervised, may be preyed upon, typically by leopards, Sumatran tigers or, rarely, crocodiles. Scientists and conservationists rarely study the animals directly due to their extreme rarity and the danger of interfering with such an endangered species.
Researchers instead rely on camera traps and fecal samples to gauge health and behavior. Consequently, Javan rhinos are the least studied of all rhinoceros species. Two adult female Javan rhinoceroses, each with a calf, were filmed using a motion-triggered trail camera, the video being released on 28 February 2011 by WWF and Indonesia's National Park Authority, proving they are still breeding in the wild. In April 2012, the National Parks Authority released further trail-camera videos showing 35 individuals, including mother-offspring pairs and courting adults. Etymology The genus name Rhinoceros is a combination of the ancient Greek words ῥίς (ris) meaning 'nose' and κέρας (keras) meaning 'horn of an animal'. The epithet sondaicus is derived from Sunda, the biogeographical region that comprises the islands of Sumatra, Java, Borneo, and surrounding smaller islands. The Javan rhino is also known as the lesser one-horned rhinoceros (in contrast with the greater one-horned rhinoceros, another name for the Indian rhino). Taxonomy Rhinoceros sondaicus was the scientific name used by Anselme Gaëtan Desmarest in 1822 for a rhinoceros from Java sent by Pierre-Médard Diard and Alfred Duvaucel to the National Museum of Natural History, France. In the 19th century, several zoological specimens of hornless rhinoceros were described: Rhinoceros inermis, proposed by René Lesson in 1838, was a female rhinoceros without horns shot in the Sundarbans. Rhinoceros nasalis and Rhinoceros floweri, proposed by John Edward Gray in 1867, were based on two rhinoceros skulls from Borneo and one from Sumatra, respectively. Rhinoceros annamiticus, proposed by Pierre Marie Heude in 1892, was a specimen from Vietnam. As of 2005, three Javan rhinoceros subspecies are considered valid taxa: R. s. sondaicus, the nominate subspecies, known as the Indonesian Javan rhinoceros; R. s. inermis, known as the Indian Javan rhinoceros or lesser Indian rhinoceros; and R. s. annamiticus, known as the Vietnamese Javan rhinoceros or Vietnamese rhinoceros. Evolution Ancestral rhinoceroses are held to have first diverged from other perissodactyls in the Early Eocene. Mitochondrial DNA comparison suggests the ancestors of modern rhinos split from the ancestors of Equidae around 50 million years ago. The extant family, the Rhinocerotidae, first appeared in the Late Eocene in Eurasia, and the ancestors of the extant rhino species dispersed from Asia beginning in the Miocene. The last common ancestor of living rhinoceroses belonging to the subfamily Rhinocerotinae is suggested to have lived around 16 million years ago, with the ancestors of the genus Rhinoceros diverging from the ancestors of other living rhinoceroses around 15 million years ago. The genus Rhinoceros has been found to be overall slightly more closely related to the Sumatran rhinoceros (as well as to the extinct woolly rhinoceros and the extinct Eurasian genus Stephanorhinus) than to living African rhinoceroses, though there appears to have been gene flow between the ancestors of living African rhinoceroses and the genus Rhinoceros, as well as between the ancestors of the genus Rhinoceros and the ancestors of the woolly rhinoceros and Stephanorhinus. A cladogram of recent and Late Pleistocene rhinoceros species (excluding Stephanorhinus hemitoechus), based on whole nuclear genomes, has been published by Liu et al. (2021). The oldest known definitive fossils of the Javan rhinoceros are from the Late Pliocene deposits of Myanmar and Java.
Molecular estimates suggest the Indian and Javan rhinoceros diverged from each other considerably earlier, around 4.3 million years ago. An astragalus fossil similar to that of the Javan rhinoceros from the Late Miocene deposits of Myanmar has been identified as Rhinoceros cf. R. sondaicus. Description Javan rhinos are smaller than the Indian rhinoceros, and are close in size to the black rhinoceros. They are the largest animal in Java and the second-largest animal in Indonesia after the Asian elephant. Reported lengths, heights, and weights for adults vary widely; a study to collect accurate measurements of the animals has never been conducted, and is not a priority because of their extreme conservation status. No substantial size difference is seen between the sexes, but cows may be slightly bigger. The rhinos in Vietnam appeared to be significantly smaller than those in Java, based on studies of photographic evidence and measurements of their footprints. Like the Indian rhino, the Javan rhino has a single horn (the other extant species have two horns). Its horn is the smallest of all extant rhinos. Only bulls have horns; Javan cows are the only extant rhinos that remain hornless into adulthood, though they may develop a tiny bump of an inch or two in height. Javan rhinos do not appear to often use their horn for fighting, but instead use it to scrape mud away in wallows, to pull down plants for eating, and to open paths through thick vegetation. Similar to the other browsing species of rhino (black and Sumatran), Javan rhinos have a long, pointed upper lip which helps in grabbing food. Their lower incisors are long and sharp; when Javan rhinos fight, they use these teeth. Behind the incisors, two rows of six low-crowned molars are used for chewing coarse plants. Like all rhinos, Javan rhinos smell and hear well, but have very poor vision. They are estimated to live for 30 to 45 years. Their hairless, splotchy gray or gray-brown skin falls in folds to the shoulder, back and rump. The skin has a natural mosaic pattern, which lends the rhino an armored appearance. The neck folds of Javan rhinos are smaller than those of the Indian rhinoceros, but still form a saddle shape over the shoulder. Because of the risks of interfering with such an endangered species, however, Javan rhinos are primarily studied through fecal sampling and camera traps. They are rarely encountered, observed or measured directly. Distribution and habitat Even the most optimistic estimate suggests fewer than 100 Javan rhinos remain in the wild. They are considered one of the most endangered species in the world. The Javan rhinoceros is known to survive in only one place, the Ujung Kulon National Park on the western tip of Java. The animal was once widespread from Assam and Bengal (where their range would have overlapped with both the Sumatran and Indian rhinos) eastward to Myanmar, Thailand, Cambodia, Laos, Vietnam, and southwards to the Malay Peninsula and the islands of Sumatra, Java, and Borneo. Javan rhinoceros remains were also found at the Neolithic site of Hemudu in Zhejiang, China, and the Classic of Mountains and Seas appears to describe one living in the Yangtze River basin. The Javan rhino primarily inhabits dense, lowland rain forests, grasslands, and reed beds with abundant rivers, large floodplains, or wet areas with many mud wallows.
Although it historically preferred low-lying areas, the subspecies in Vietnam was pushed onto much higher ground (up to 2,000 m or 6,561 ft), probably because of human encroachment and poaching. The range of the Javan rhinoceros has been shrinking for at least 3,000 years. Starting around 1000 BC, the northern range of the rhinoceros extended into China, but then began contracting southward as human settlements increased in the region. It likely became locally extinct in India in the first decade of the 20th century. The Javan rhino was hunted to extinction on the Malay Peninsula by 1932. The last ones on Sumatra died out during World War II. They had disappeared from Chittagong and the Sundarbans by the middle of the 20th century. By the end of the Vietnam War, the Vietnamese rhinoceros was believed extinct across all of mainland Asia. Local hunters and woodcutters in Cambodia claim to have seen Javan rhinos in the Cardamom Mountains, but surveys of the area have failed to find any evidence of them. In the late 1980s, a small population was found in the Cat Tien area of Vietnam. However, the last known individual of that population was shot in 2010. A population may have existed on the island of Borneo as well, though these specimens could have been the Sumatran rhinoceros, a small population of which still lives there. Behavior The Javan rhinoceros is a solitary animal, with the exception of breeding pairs and mothers with calves. They sometimes congregate in small groups at salt licks and mud wallows. Wallowing in mud is a common behavior for all rhinos; the activity allows them to maintain cool body temperatures and helps prevent disease and parasite infestation. The Javan rhinoceros does not generally dig its own mud wallows, preferring to use other animals' wallows or naturally occurring pits, which it will use its horn to enlarge. Salt licks are also very important because of the essential nutrients the rhino receives from the salt. Bull home ranges are larger than those of cows, and bull territories overlap each other less than those of cows. It is not known if there are territorial fights. Bulls mark their territories with dung piles and by urine spraying. Scrapes made by the feet in the ground and twisted saplings also seem to be used for communication. Members of other rhino species have a peculiar habit of defecating in massive rhino dung piles and then scraping their back feet in the dung. The Sumatran and Javan rhinos, while defecating in piles, do not engage in the scraping. This adaptation in behavior is thought to be ecological: in the wet forests of Java and Sumatra, the method may not be useful for spreading odors. The Javan rhino is much less vocal than the Sumatran; very few Javan rhino vocalizations have ever been recorded. Adults have no known predators other than humans. The species, particularly in Vietnam, is skittish and retreats into dense forests whenever humans are near. Though a valuable trait from a survival standpoint, this skittishness has made the rhinos difficult to study. Nevertheless, when humans approach too closely, the Javan rhino becomes aggressive and will attack, stabbing with the incisors of its lower jaw while thrusting upward with its head. Its comparatively antisocial behavior may be a recent adaptation to population stresses; historical evidence suggests they, like other rhinos, were once more gregarious.
Diet The Javan rhinoceros is herbivorous, eating diverse plant species, especially their shoots, twigs, young foliage and fallen fruit. Most of the plants favored by the species grow in sunny areas in forest clearings, shrubland and other vegetation types with no large trees. The rhino knocks down saplings to reach its food and grabs it with its prehensile upper lip. It is the most adaptable feeder of all the rhino species. Currently, it is a pure browser, but it probably once both browsed and grazed in its historical range. The rhino consumes a large quantity of food daily. Like the Sumatran rhino, it needs salt in its diet. The salt licks common in its historical range do not exist in Ujung Kulon, but the rhinos there have been observed drinking seawater, likely to meet the same nutritional need. Conservation The main factor in the continued decline of the Javan rhinoceros population has been poaching for horns, a problem that affects all rhino species. The horns have been a traded commodity for more than 2,000 years in China, where they are believed to have healing properties. Historically, the rhinoceros' hide was used to make armor for Chinese soldiers, and some local tribes in Vietnam believed the hide could be used to make an antidote for snake venom. Because the rhinoceros' range encompasses many areas of poverty, it has been difficult to convince local people not to kill an otherwise seemingly useless animal which could be sold for a large sum of money. When the Convention on International Trade in Endangered Species of Wild Fauna and Flora first went into effect in 1975, the Javan rhinoceros was listed under Appendix I, meaning commercial international trade in the Javan rhinoceros and products derived from it is prohibited. Surveys of the rhinoceros horn black market have determined that Asian rhinoceros horn fetches a price as high as $30,000 per kg, three times the value of African rhinoceros horn. Loss of habitat because of agriculture has also contributed to its decline, though this is no longer as significant a factor because the rhinoceros only lives in one nationally protected park. Deteriorating habitats have hindered the recovery of rhino populations that fell victim to poaching. Even with all the conservation efforts, the prospects for their survival are grim. Because the population is restricted to one small area, they are very susceptible to disease and inbreeding depression. Conservation geneticists estimate a population of 100 rhinos would be needed to preserve the genetic diversity of this conservation-reliant species. Ujung Kulon The Ujung Kulon peninsula of Java was devastated by the eruption of Krakatoa in 1883. The Javan rhinoceros recolonized the peninsula after the event, but humans never returned in large numbers, thus creating a haven for wildlife. In 1931, as the Javan rhinoceros was on the brink of extinction in Sumatra, the government of the Dutch East Indies declared the rhino a legally protected species, which it has remained ever since. A census of the rhinos in Ujung Kulon was first conducted in 1967; only 25 animals were recorded. By 1980, that population had doubled, and it has remained steady at about 50 ever since. Although the rhinos in Ujung Kulon have no natural predators, they have to compete for scarce resources with wild cattle, which may keep their numbers below the peninsula's carrying capacity. Ujung Kulon is managed by the Indonesian Ministry of Forestry.
Evidence of at least four baby rhinos was discovered in 2006, the most ever documented for the species. In March 2011, a hidden-camera video was published showing adults and juveniles, indicating recent matings and breeding. During the period from January to October 2011, the cameras captured images of 35 rhinos. As of December 2011, a rhino breeding sanctuary covering an area of 38,000 hectares was being finalized to help reach the target of 70 to 80 Javan rhinos by 2015. In April 2012, the WWF and the International Rhino Foundation added 120 video cameras to the existing 40 to better monitor rhino movements and judge the size of the animals' population. A recent survey found far fewer cows than bulls: only four cows among 17 rhinos were recorded in the eastern half of Ujung Kulon, a potential setback in efforts to save the species. In 2012, the Asian Rhino Project was working out the best eradication programme for the arenga palm, which was blanketing the park and crowding out the rhinos' food sources. Following the trails of Javan rhinoceroses has allowed in-depth observation of their feeding habits in their natural habitat. Comparing the acid-insoluble ash content of faeces with that of the dry weight of food eaten provided reliable estimates of digestibility, and this method has potential for wider application in situations where total collection of faecal matter is not feasible; a worked sketch of the calculation appears at the end of this section. There was a strong positive correlation between home range size and diversity of food intake, and between home range size and the number of wallow holes used. The quantity and quality of food intake were variable among rhinoceroses and over time. Overall energy consumption was related to the size of the animal, while the digestibility of plants consumed appeared to be influenced by individual age and habitat conditions. In May 2017, the Director of Biodiversity Conservation at the Ministry of Environment and Forestry, Bambang Dahono Adji, announced plans to transfer some of the rhinos to the Cikepuh Wildlife Sanctuary in West Java. The animals were first to undergo DNA tests to determine lineage and disease risk, so as to avoid problems such as inbreeding. As of December 2018, these plans had yet to concretely materialise. In December 2018, the remaining Javan rhino population was severely threatened by the tsunami triggered by the nearby volcano Anak Krakatau. In 2024, officials announced that recently arrested poachers had confessed to killing a total of 26 Javan rhinos, potentially cutting the total population by one-third.
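The acid-insoluble ash comparison described above lends itself to a short worked sketch. The formula below is the conventional indigestible-marker calculation and is assumed, rather than known, to match the exact method used in the Ujung Kulon study; the sample values are hypothetical.

```python
# Sketch of the acid-insoluble ash (AIA) marker method for estimating
# apparent dry-matter digestibility without total faecal collection.
# Because AIA passes through the gut unabsorbed, it becomes more
# concentrated in faeces as the digestible fraction of the diet is
# removed. The sample values below are hypothetical.

def dry_matter_digestibility(aia_diet: float, aia_faeces: float) -> float:
    """Apparent dry-matter digestibility (%) from AIA fractions of dry weight."""
    return 100.0 * (1.0 - aia_diet / aia_faeces)

# Hypothetical example: 2% AIA in the browse eaten and 6% in the faeces
# implies that roughly two-thirds of the dry matter was digested.
print(f"{dry_matter_digestibility(0.02, 0.06):.1f}%")  # 66.7%
```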
Cat Tien Once widespread in Southeast Asia, the Javan rhinoceros was presumed extinct in Vietnam by the mid-1970s, at the end of the Vietnam War. The combat wrought havoc on the ecosystems of the region through the use of napalm, extensive defoliation from Agent Orange, aerial bombing, the use of landmines, and overhunting by local poachers. In 1988, the assumption of the subspecies' extinction was challenged when a hunter shot an adult cow, proving the species had somehow survived the war. In 1989, scientists surveyed Vietnam's southern forests to search for evidence of other survivors, and fresh tracks belonging to up to 15 rhinos were found along the Dong Nai River. Largely because of the rhinoceros, the region they inhabited became part of the Cat Tien National Park in 1992. By the early 2000s, the population was feared to have declined past the point of recovery in Vietnam, with some conservationists estimating that as few as three to eight rhinos, and possibly no bulls, survived. Conservationists debated whether the Vietnamese rhinoceros had any chance of survival, with some arguing that rhinos from Indonesia should be introduced in an attempt to save the population, while others argued that the population could recover on its own. Genetic analysis of dung samples collected in Cat Tien National Park in a survey from October 2009 to March 2010 showed only a single individual Javan rhinoceros remained in the park. In early May 2010, the body of a Javan rhino was found in the park; the animal had been shot and its horn removed by poachers. In October 2011, the International Rhino Foundation confirmed the Javan rhinoceros was extinct in Vietnam, leaving only the rhinos in Ujung Kulon. In captivity A Javan rhinoceros has not been exhibited in a zoo for over a century. In the 19th century, at least four rhinos were exhibited in Adelaide, Calcutta, and London. At least 22 Javan rhinos have been documented as having been kept in captivity; the true number is possibly greater, as the species was sometimes confused with the Indian rhinoceros. The Javan rhinoceros never fared well in captivity: the oldest lived to be 20, about half the age the species can reach in the wild, and no records are known of a captive rhino giving birth. The last captive Javan rhino died at the Adelaide Zoo in Australia in 1907, where the species was so little known that it had been exhibited as an Indian rhinoceros. In culture The Javan rhinoceros occurred in Cambodia in the past, and there are at least three depictions of rhinos in the bas-reliefs of the temple at Angkor Wat. The west wing of the North Gallery has a relief that shows a rhino mounted by a god thought to be the fire god Agni. The rhinos are thought to be Javan rhinoceroses rather than the somewhat similar-looking one-horned Indian rhino on the basis of the skin fold on the shoulder, which in the Javan continues along the back to give a saddle-like appearance. A depiction of the rhino in the east wing of the South Gallery shows a rhino attacking the damned in the panel depicting heaven and hell. An architect of the temple is thought to have been an Indian Brahmin priest named Divakarapandita (1040–1120 AD), who served the kings Jayavarman VI and Dharanindravarman I as well as Suryavarman II, who constructed the temple. It is thought that this Indian priest, who died before the construction of the temple, might have influenced the use of tubercles on the skin, which are based on the Indian rhino, while the local Khmer artisans carved the other details of the rhinos based on the more familiar local Javan rhino. The association of the rhinoceros as the vahana of the god Agni is unique to Khmer culture. Another rhinoceros carving, in the centre of a circular arrangement in a column with other circles containing elephants and water buffalo, is known from the temple of Ta Prohm. It has been at the centre of anachronistic speculation that it might represent a stegosaur, due to the leaves behind it giving the impression of plates. One of the mascots of the 2018 Asian Games is a Javan rhinoceros named Kaka. The mascot of the 2023 FIFA U-20 World Cup is a Javan rhinoceros named Bacuya.
Biology and health sciences
Perissodactyla
Animals
289406
https://en.wikipedia.org/wiki/Blood%20sugar%20level
Blood sugar level
The blood sugar level, blood sugar concentration, blood glucose level, or glycemia is the measure of glucose concentrated in the blood. The body tightly regulates blood glucose levels as a part of metabolic homeostasis. For a 70 kg (154 lb) human, approximately four grams of dissolved glucose (also called "blood glucose") is maintained in the blood plasma at all times. Glucose that is not circulating in the blood is stored in skeletal muscle and liver cells in the form of glycogen; in fasting individuals, blood glucose is maintained at a constant level by releasing just enough glucose from these glycogen stores in the liver and skeletal muscle to maintain homeostasis. Glucose can be transported from the intestines or liver to other tissues in the body via the bloodstream. Cellular glucose uptake is primarily regulated by insulin, a hormone produced in the pancreas. Once inside a cell, glucose can act as an energy source as it undergoes glycolysis. In humans, properly maintained glucose levels are necessary for normal function in a number of tissues, including the human brain, which consumes approximately 60% of blood glucose in fasting, sedentary individuals. A persistent elevation in blood glucose leads to glucose toxicity, which contributes to cell dysfunction and the pathology grouped together as complications of diabetes. Glucose levels are usually lowest in the morning, before the first meal of the day, and rise after meals for an hour or two by a few millimoles per litre. Abnormal persistently high glycemia is referred to as hyperglycemia; low levels are referred to as hypoglycemia. Diabetes mellitus is characterized by persistent hyperglycemia from a variety of causes, and it is the most prominent disease related to the failure of blood sugar regulation. Diabetes mellitus is also characterized by frequent episodes of low blood sugar, or hypoglycemia. There are different methods of testing and measuring blood sugar levels. Drinking alcohol causes an initial surge in blood sugar and later tends to cause levels to fall. Also, certain drugs can increase or decrease glucose levels. Units of measurement There are two conventions for expressing blood glucose levels. In the United Kingdom, Commonwealth countries (Australia, Canada, India, etc.) and ex-USSR countries, molar concentration is used, measured in mmol/L (millimoles per litre, or millimolar, abbreviated mM). In the United States, Germany, Japan and many other countries, mass concentration is used, measured in mg/dL (milligrams per decilitre). Unit conversion from mmol/L to mg/dL Since the molecular mass of glucose, C6H12O6, is 180.156 g/mol, the conversion factor between the two units is about 18, so 1 mmol/L of glucose is equivalent to about 18 mg/dL.
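A minimal sketch of this conversion in Python (the function names are illustrative); the final lines also cross-check the "teaspoonful" figure quoted below for a 5 L blood volume:

```python
# Conversion between the two conventions for blood glucose.
# Glucose (C6H12O6) has a molar mass of 180.156 g/mol, and
# 1 L = 10 dL, so 1 mmol/L is about 180.156 / 10 = 18 mg/dL.

GLUCOSE_MOLAR_MASS = 180.156  # g/mol

def mmol_per_l_to_mg_per_dl(mmol_l: float) -> float:
    return mmol_l * GLUCOSE_MOLAR_MASS / 10.0

def mg_per_dl_to_mmol_per_l(mg_dl: float) -> float:
    return mg_dl * 10.0 / GLUCOSE_MOLAR_MASS

print(round(mmol_per_l_to_mg_per_dl(5.5), 1))    # 99.1 -> about 100 mg/dL
print(round(mg_per_dl_to_mmol_per_l(100.0), 2))  # 5.55 -> about 5.5 mmol/L

# Cross-check of the "teaspoonful" figure mentioned below:
# 100 mg/dL in a 5 L (50 dL) blood volume is 5,000 mg, i.e. 5 g.
print(100.0 * 50 / 1000)  # 5.0 grams
```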
Normal value range Humans The normal blood glucose level (tested while fasting) for non-diabetics should be 3.9–5.5 mmol/L (70–100 mg/dL). According to the American Diabetes Association, the fasting blood glucose target range for diabetics should be 3.9–7.2 mmol/L (70–130 mg/dL), and less than 10 mmol/L (180 mg/dL) two hours after meals (as measured by a blood glucose monitor). Normal value ranges may vary slightly between laboratories. Glucose homeostasis, when operating normally, restores the blood sugar level to a narrow range of about 4.4 to 6.1 mmol/L (79 to 110 mg/dL) (as measured by a fasting blood glucose test). The global mean fasting plasma blood glucose level in humans is about 5.5 mmol/L (100 mg/dL); however, this level fluctuates throughout the day. Blood sugar levels for those without diabetes and who are not fasting are usually below 6.9 mmol/L (125 mg/dL). Despite widely variable intervals between meals or the occasional consumption of meals with a substantial carbohydrate load, human blood glucose levels tend to remain within the normal range. However, shortly after eating, the blood glucose level may rise, in non-diabetics, temporarily up to 7.8 mmol/L (140 mg/dL) or slightly more. The actual amount of glucose in the blood and body fluids is very small. In a healthy adult with a blood volume of 5 L, a blood glucose level of 5.5 mmol/L (100 mg/dL) amounts to 5 g, equivalent to about a teaspoonful of sugar. Part of the reason why this amount is so small is that, to maintain an influx of glucose into cells, enzymes modify glucose by adding phosphate or other groups to it. Other animals In general, ranges of blood sugar in common domestic ruminants are lower than in many monogastric mammals. However, this generalization does not extend to wild ruminants or camelids. For serum glucose in mg/dL, reference ranges of 42 to 75 for cows, 44 to 81 for sheep, and 48 to 76 for goats have been reported, but 61 to 124 for cats, 62 to 108 for dogs, 62 to 114 for horses, 66 to 116 for pigs, 75 to 155 for rabbits, and 90 to 140 for llamas. A 90 percent reference interval for serum glucose of 26 to 181 mg/dL has been reported for captured mountain goats (Oreamnos americanus), where no effects of the pursuit and capture on measured levels were evident. For beluga whales, the 25–75 percent range for serum glucose has been estimated to be 94 to 115 mg/dL. For the white rhinoceros, one study has indicated that the 95 percent range is 28 to 140 mg/dL. For harp seals, a serum glucose range of 4.9 to 12.1 mmol/L (i.e., 88 to 218 mg/dL) has been reported; for hooded seals, a range of 7.5 to 15.7 mmol/L (i.e., about 135 to 283 mg/dL) has been reported. Regulation The body's homeostatic mechanism keeps blood glucose levels within a narrow range. It is composed of several interacting systems, of which hormone regulation is the most important. There are two types of mutually antagonistic metabolic hormones affecting blood glucose levels: catabolic hormones (such as glucagon, cortisol and catecholamines), which increase blood glucose; and one anabolic hormone (insulin), which decreases blood glucose. These hormones are secreted from pancreatic islets (bundles of endocrine tissue), of which there are four types: alpha (A) cells, beta (B) cells, delta (D) cells and F cells. Glucagon is secreted from alpha cells, while insulin is secreted by beta cells. Together they regulate blood-glucose levels through negative feedback, a process in which the end product of one reaction stimulates the beginning of another reaction. Insulin lowers the concentration of glucose in the blood; the lower blood-glucose level (a product of the insulin secretion) then triggers glucagon to be secreted, and the cycle repeats. In order for blood glucose to be kept stable, modifications to insulin, glucagon, epinephrine and cortisol are made. Each of these hormones has a different responsibility in keeping blood glucose regulated: when blood sugar is too high, insulin tells muscles to take up excess glucose for storage in the form of glycogen, while glucagon responds to blood glucose levels that are too low, informing the tissue to release some glucose from its glycogen stores.
Epinephrine prepares the muscles and respiratory system for activity in the case of a "fight or flight" response. Lastly, cortisol supplies the body with fuel in times of heavy stress. Abnormalities High blood sugar If blood sugar levels remain too high, the body suppresses appetite over the short term. Long-term hyperglycemia causes many health problems, including heart disease, cancer, and eye, kidney, and nerve damage. Blood sugar levels above 16.7 mmol/L (300 mg/dL) can cause fatal reactions. Ketones will be very high (an order of magnitude higher than when eating a very-low-carbohydrate diet), initiating ketoacidosis. The ADA (American Diabetes Association) recommends seeing a doctor if blood glucose reaches 13.3 mmol/L (240 mg/dL), and it is recommended to seek emergency treatment at 15 mmol/L (270 mg/dL) blood glucose if ketones are present. The most common cause of hyperglycemia is diabetes. When diabetes is the cause, physicians typically recommend an anti-diabetic medication as treatment. From the perspective of the majority of patients, treatment with an old, well-understood diabetes drug such as metformin will be the safest, most effective, least expensive, and most comfortable route to managing the condition. Treatment will vary for the distinct forms of diabetes and can differ from person to person based on how they respond to treatment. Diet changes and the implementation of exercise may also be part of a treatment plan for diabetes. Some medications may cause a rise in the blood sugar of diabetics, such as steroid medications, including cortisone, hydrocortisone, prednisolone, prednisone, and dexamethasone. Low blood sugar A blood sugar level below 70 mg/dL is referred to as low blood sugar. Low blood sugar is very frequent among type 1 diabetics. There are several causes of low blood sugar, including taking an excessive amount of insulin, not consuming enough carbohydrates, drinking alcohol, spending time at a high elevation, puberty, and menstruation. If blood sugar levels drop too low, a potentially fatal condition called hypoglycemia develops. Symptoms may include lethargy; impaired mental functioning; irritability; shaking, twitching, and weakness in the arm and leg muscles; pale complexion; sweating; and loss of consciousness. Mechanisms that restore satisfactory blood glucose levels after extreme hypoglycemia (below 2.2 mmol/L or 40 mg/dL) must be quick and effective to prevent the extremely serious consequences of insufficient glucose: confusion or unsteadiness and, in the extreme (below 0.8 mmol/L or 15 mg/dL), loss of consciousness and seizures. Without discounting the potentially quite serious conditions and risks due to, or oftentimes accompanying, hyperglycemia, especially in the long term (diabetes or pre-diabetes, obesity or overweight, hyperlipidemia, hypertension, etc.), it is still generally more dangerous, at least temporarily, to have too little glucose in the blood than too much, especially if levels are very low, because glucose is so important for metabolism, nutrition, and the proper functioning of the body's organs. This is especially the case for those organs that are metabolically active or that require a constant, regulated supply of blood sugar (the liver and brain are examples). Symptomatic hypoglycemia is most likely associated with diabetes and liver disease (especially overnight or postprandially), without treatment or with the wrong treatment, possibly in combination with carbohydrate malabsorption, physical over-exertion or drugs.
Many other, less likely illnesses, such as cancer, can also be a cause. Starvation, possibly due to eating disorders such as anorexia, will also eventually lead to hypoglycemia. Hypoglycemic episodes can vary greatly between persons and from time to time, both in severity and swiftness of onset. For severe cases, prompt medical assistance is essential, as damage to the brain and other tissues, and even death, will result from sufficiently low blood-glucose levels. Glucose measurement In the past, measuring blood glucose required taking a blood sample, as explained below, but since 2015 it has also been possible to use a continuous glucose monitor, which involves an electrode placed under the skin. Both methods, as of 2023, cost hundreds of dollars or euros per year for the supplies needed. Sample source Glucose testing in a fasting individual shows comparable levels of glucose in arterial, venous, and capillary blood. But following meals, capillary and arterial blood glucose levels can be significantly higher than venous levels. Although these differences vary widely, one study found that following the consumption of 50 grams of glucose, "the mean capillary blood glucose concentration is higher than the mean venous blood glucose concentration by 35%." Sample type Glucose is measured in whole blood, plasma or serum. Historically, blood glucose values were given in terms of whole blood, but most laboratories now measure and report plasma or serum glucose levels. Because red blood cells (erythrocytes) have a higher concentration of protein (e.g., hemoglobin) than serum, serum has a higher water content and consequently more dissolved glucose than does whole blood. To convert from whole-blood glucose, multiplication by 1.14 has been shown to generally give the serum/plasma level. To prevent contamination of the sample with intravenous fluids, particular care should be given to drawing blood samples from the arm opposite the one in which an intravenous line is inserted. Alternatively, blood can be drawn from the same arm with an IV line after the IV has been turned off for at least 5 minutes and the arm has been elevated to drain infused fluids away from the vein. Inattention can lead to large errors, since as little as 10% contamination with a 5% glucose solution (D5W) will elevate the glucose in a sample by 500 mg/dL or more. The actual concentration of glucose in blood is very low, even in hyperglycemic individuals.
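The two sample-handling rules above (the 1.14 whole-blood correction and the D5W contamination warning) reduce to simple arithmetic. A minimal sketch, with illustrative function names:

```python
# Two sample-handling calculations described in the text.

def whole_blood_to_plasma(whole_blood_mg_dl: float) -> float:
    """Approximate plasma/serum glucose from a whole-blood reading.

    Plasma has a higher water content than whole blood, so it carries
    more dissolved glucose; multiplying by 1.14 is the usual correction.
    """
    return whole_blood_mg_dl * 1.14

def contaminated_reading(true_mg_dl: float, fraction_d5w: float) -> float:
    """Glucose reading after a sample is diluted with D5W.

    D5W is 5% glucose, i.e. 5 g/dL = 5,000 mg/dL, so even a small
    admixture swamps a physiological glucose concentration.
    """
    d5w_mg_dl = 5000.0
    return (1 - fraction_d5w) * true_mg_dl + fraction_d5w * d5w_mg_dl

print(whole_blood_to_plasma(100))       # 114.0 mg/dL
print(contaminated_reading(100, 0.10))  # 590.0 mg/dL: elevated by ~500 mg/dL
```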
Measurement techniques Two major methods have been used to measure glucose. The first, still in use in some places, is a chemical method exploiting the nonspecific reducing property of glucose in a reaction with an indicator substance that changes color when reduced. Since other blood compounds also have reducing properties (e.g., urea, which can be abnormally high in uremic patients), this technique can produce erroneous readings in some situations (errors of 5–15 mg/dL have been reported). The more recent technique, using enzymes specific to glucose, is less susceptible to this kind of error. The two most commonly employed enzymes are glucose oxidase and hexokinase. Average blood glucose concentrations can also be measured: this method measures the level of glycated hemoglobin, which is representative of the average blood glucose levels over approximately the last 120 days. In either case, the chemical system is commonly contained on a test strip which is inserted into a meter and then has a blood sample applied. Test-strip shapes and their exact chemical composition vary between meter systems and cannot be interchanged. Formerly, some test strips were read (after timing and wiping away the blood sample) by visual comparison against a color chart printed on the vial label. Strips of this type are still used for urine glucose readings, but for blood glucose levels they are obsolete. Their error rates were, in any case, much higher. Errors when using test strips were often caused by the age of the strip or exposure to high temperatures or humidity. More precise blood glucose measurements are performed in a medical laboratory, using hexokinase, glucose oxidase, or glucose dehydrogenase enzymes. Urine glucose readings, however taken, are much less useful. In properly functioning kidneys, glucose does not appear in urine until the renal threshold for glucose has been exceeded. This is substantially above any normal glucose level, and is evidence of an existing severe hyperglycemic condition. However, as urine is stored in the bladder, any glucose in it might have been produced at any time since the last time the bladder was emptied. Since metabolic conditions change rapidly, as a result of any of several factors, this is delayed news and gives no warning of a developing condition. Blood glucose monitoring is far preferable, both clinically and for home monitoring by patients. Healthy urine glucose levels were first standardized and published in 1965 by Hans Renschler. A noninvasive method of sampling to monitor glucose levels has emerged using exhaled breath condensate. However, this method requires highly sensitive glucose biosensors. Clinical correlation The fasting blood glucose level, which is measured after a fast of 8 hours, is the most commonly used indication of overall glucose homeostasis, largely because disturbing events such as food intake are avoided. Various conditions can affect glucose levels; abnormalities in these test results are due to problems in the multiple control mechanisms of glucose regulation. The metabolic response to a carbohydrate challenge is conveniently assessed by a postprandial glucose level drawn 2 hours after a meal or a glucose load. In addition, the glucose tolerance test, consisting of several timed measurements after a standardized amount of oral glucose intake, is used to aid in the diagnosis of diabetes. Error rates for blood glucose measurement systems vary, depending on laboratories and on the methods used. Colorimetry techniques can be biased by color changes in test strips (from airborne or finger-borne contamination, perhaps) or by interference (e.g., tinting contaminants) with the light source or the light sensor. Electrical techniques are less susceptible to these errors, though not to others. In home use, the most important issue is not accuracy, but trend. Thus, if a meter / test strip system is consistently wrong by 10%, there will be little consequence, as long as changes (e.g., due to exercise or medication adjustments) are properly tracked. In the US, home-use blood test meters must be approved by the federal Food and Drug Administration before they can be sold. Finally, there are several influences on blood glucose level aside from food intake. Infection, for instance, tends to change blood glucose levels, as does stress, whether physical or psychological. Exercise, especially if prolonged or long after the most recent meal, will have an effect as well.
In the typical person, however, the maintenance of blood glucose at near-constant levels is nevertheless quite effective.
Biology and health sciences
Basics_3
null
289607
https://en.wikipedia.org/wiki/Smoking%20cessation
Smoking cessation
Smoking cessation, usually called quitting smoking or stopping smoking, is the process of discontinuing tobacco smoking. Tobacco smoke contains nicotine, which is addictive and can cause dependence. As a result, nicotine withdrawal often makes the process of quitting difficult. Smoking is the leading cause of preventable death and a global public health concern. Tobacco use leads most commonly to diseases affecting the heart and lungs, with smoking being a major risk factor for heart attacks, strokes, chronic obstructive pulmonary disease (COPD), idiopathic pulmonary fibrosis (IPF), emphysema, and various types and subtypes of cancers (particularly lung cancer; cancers of the oropharynx, larynx, and mouth; and esophageal and pancreatic cancer). Smoking cessation significantly reduces the risk of dying from smoking-related diseases. The risk of heart attack in a smoker decreases by 50% after one year of cessation; similarly, the risk of lung cancer decreases by 50% after 10 years of cessation. From 2001 to 2010, about 70% of smokers in the United States expressed a desire to quit smoking, and 50% reported having attempted to do so in the past year. Many strategies can be used for smoking cessation, including abruptly quitting without assistance ("cold turkey"), cutting down then quitting, behavioral counseling, and medications such as bupropion, cytisine, nicotine replacement therapy, or varenicline. In recent years, especially in Canada and the United Kingdom, many smokers have switched to using electronic cigarettes to quit smoking tobacco. However, a 2022 study found that while 20% of smokers who tried to use e-cigarettes to quit smoking succeeded, 66% of them ended up as dual users of cigarettes and vape products one year out. Most smokers who try to quit do so without assistance. However, only 3–6% of quit attempts without assistance are successful long-term. Behavioral counseling and medications each increase the rate of successfully quitting smoking, and a combination of behavioral counseling with a medication such as bupropion is more effective than either intervention alone. A meta-analysis from 2018, conducted on 61 randomized controlled trials, showed that among people who quit smoking with a cessation medication and some behavioral help, approximately 20% were still nonsmokers a year later, as compared to 12% who did not take medication. In nicotine-dependent smokers, quitting smoking can lead to nicotine withdrawal symptoms such as nicotine cravings, anxiety, irritability, depression, and weight gain. Professional smoking cessation support methods generally attempt to address nicotine withdrawal symptoms to help the person break free of nicotine addiction. Smoking cessation methods Unassisted It often takes several attempts, potentially using different approaches each time, to achieve long-term abstinence. About 74.7% of smokers attempt to quit without any assistance, otherwise known as "cold turkey", or with home remedies. Former smokers are estimated to make between 6 and 30 attempts before successfully quitting. Identifying which approach or technique is eventually most successful is difficult; it has been estimated, for example, that only about 4% to 7% of people are able to quit smoking on any given attempt without medicines or other help. The majority of quit attempts are still unassisted, though the trend seems to be shifting. In the U.S., for example, the rate of unassisted quitting fell from 91.8% in 1986 to 52.1% during 2006 to 2009.
The most frequent unassisted methods were "cold turkey" (a term that has been used to mean either unassisted quitting or abrupt quitting) and gradually decreasing the number of cigarettes smoked ("cigarette reduction"). Cold turkey "Cold turkey" is a colloquial term indicating abrupt withdrawal from an addictive drug. In this context, it indicates sudden and complete cessation of all nicotine use. In three studies, it was the quitting method cited by 76%, 85%, or 88% of long-term successful quitters. In a large British study of ex-smokers in the 1980s, before the advent of pharmacotherapy, 53% of the ex-smokers said that it was "not at all difficult" to stop, 27% said it was "fairly difficult", and the remaining 20% found it very difficult. Studies have found that two-thirds of recent quitters reported using the cold turkey method and found it helpful. Cutting down to quit Gradual reduction involves slowly reducing one's daily intake of nicotine. This can theoretically be accomplished through repeated switches to cigarettes with lower nicotine levels, by gradually reducing the number of cigarettes smoked each day, or by smoking only a fraction of a cigarette on each occasion. A 2009 systematic review by researchers at the University of Birmingham found that gradual nicotine replacement therapy could be effective in smoking cessation. There is no significant difference in quit rates between smokers who quit by gradual reduction and those who quit by abrupt cessation, as measured by abstinence from smoking of at least six months from the quit day. The same review also looked at five pharmacological aids for reduction; when reducing the number of cigarettes smoked, it found some evidence that additional varenicline or fast-acting nicotine replacement therapy can positively affect quitting for six months or longer. Medications The American Cancer Society notes that "Studies in medical journals have reported that about 25% of smokers who use medicines can stay smoke-free for over 6 months." Single medications include: Nicotine replacement therapy (NRT): The U.S. Food and Drug Administration (FDA) has approved several medications that deliver nicotine in a form that does not involve the risks of smoking: transdermal nicotine patches, nicotine gum, nicotine lozenges, nicotine inhalers, nicotine oral sprays, and nicotine nasal sprays. High-quality evidence indicates that these forms of NRT improve the success rate of people who attempt to stop smoking. NRTs are meant to be used for a short period of time and should be tapered down to a low dose before stopping. NRTs increase the chance of stopping smoking by 50 to 60% compared to placebo or to no treatment. Some reported side effects are slight local irritation (inhalers and sprays) and non-ischemic chest pain (rare). Others include mouth soreness and dyspepsia (gum), nausea or heartburn (lozenges), as well as sleep disturbances, insomnia, and local skin reactions (patches). A study found that 93% of over-the-counter NRT users relapse and return to smoking within six months. There is weak evidence that adding mecamylamine to nicotine is more effective than nicotine alone. Antidepressants: The antidepressant bupropion is considered a first-line medication for smoking cessation and has been shown in many studies to increase long-term success rates. Although bupropion may increase the risk of adverse events, there is no clear evidence that the drug causes more or fewer adverse effects than a placebo.
Nortriptyline produces significant rates of abstinence versus placebo. Other antidepressants, such as selective serotonin reuptake inhibitors (SSRIs) and St. John's wort, have not been consistently shown to be effective for smoking cessation. Varenicline decreases the urge to smoke, reduces withdrawal symptoms, and is therefore considered a first-line medication for smoking cessation. The number of people who stop smoking with varenicline is higher than with bupropion or NRT. Varenicline more than doubled the chances of quitting compared to placebo, and was also as effective as combining two types of NRT. Varenicline at 2 mg/day has been found to lead to the highest abstinence rate (33.2%) of any single therapy, while 1 mg/day leads to an abstinence rate of 25.4%. A 2016 systematic review and meta-analysis of randomized controlled trials concluded there is no evidence supporting a connection between varenicline and increased cardiovascular events. Concerns arose that varenicline may cause neuropsychiatric side effects, including suicidal thoughts and behavior. However, more recent studies indicate less serious neuropsychiatric side effects. For example, a 2016 study involving 8,144 patients treated at 140 centers in 16 countries "did not show a significant increase in neuropsychiatric adverse events attributable to varenicline or bupropion relative to nicotine patch or placebo". No link has been identified between depressed mood, agitation, or suicidal thinking and varenicline taken by smokers to decrease the urge to smoke. For people who have pre-existing mental health difficulties, varenicline may slightly increase the risk of experiencing these neuropsychiatric adverse events. Clonidine may reduce withdrawal symptoms and "approximately doubles abstinence rates when compared to a placebo," but its side effects include dry mouth and sedation, and abruptly stopping the drug can cause high blood pressure and other side effects. There is no good evidence that anxiolytics are helpful. Previously, rimonabant, a cannabinoid receptor type 1 antagonist, was used to aid quitting and to moderate the expected weight gain; however, the manufacturers of rimonabant and taranabant stopped production in 2008 due to serious central nervous system side effects. The 2008 US Guideline specifies that three combinations of medications are effective: a long-term nicotine patch with ad libitum NRT gum or spray; a nicotine patch with a nicotine inhaler; and a nicotine patch with bupropion (the only combination that the US FDA has approved for smoking cessation). A meta-analysis from 2018, conducted on 61 RCTs, showed that during their first year of trying to quit, approximately 80% of the participants in the studies who got drug assistance (bupropion, NRT, or varenicline) returned to smoking, while 20% continued not to smoke for the entire year (i.e., remained continuously abstinent). In comparison, 12% of the people who got a placebo kept from smoking for (at least) an entire year. This puts the net benefit of the drug treatment at 8 percentage points after the first 12 months; in other words, out of 100 people who take medication, approximately 8 of them would remain non-smoking after one year thanks to the treatment. Over the course of one year, the net benefit from using smoking cessation medications (bupropion, NRT, or varenicline) decreases from 17 percentage points at 3 months, to 12 at 6 months, to 8 at 12 months.
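The arithmetic behind these net-benefit figures can be made explicit. A minimal sketch computing the absolute risk difference and the corresponding number needed to treat (NNT) from the quoted one-year abstinence rates:

```python
# Net benefit and number needed to treat (NNT) from the quoted
# one-year sustained-abstinence rates: 20% with medication plus
# behavioural help versus 12% with placebo.

abstinent_medication = 0.20
abstinent_placebo = 0.12

# Absolute risk difference: extra quitters attributable to treatment.
net_benefit = abstinent_medication - abstinent_placebo
print(f"Net benefit: {net_benefit:.0%}")  # 8%, i.e. 8 per 100 people treated

# NNT: how many people must be treated for one additional
# person to remain abstinent at one year.
nnt = 1 / net_benefit
print(f"NNT: {nnt:.1f}")  # 12.5, so about 13 people
```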
Community interventions Community interventions using "multiple channels to provide reinforcement, support and norms for not smoking" may have an effect on smoking cessation outcomes among adults. Specific methods used in the community to encourage smoking cessation among adults include:
Policies making workplaces and public places smoke-free. It is estimated that "comprehensive clean indoor laws" can increase smoking cessation rates by 12%–38%. In 2008, the New York State Office of Alcoholism and Substance Abuse Services banned smoking by patients, staff, and volunteers at 1,300 addiction treatment centers.
Voluntary rules making homes smoke-free, which are thought to promote smoking cessation.
Initiatives to educate the public regarding the health effects of second-hand smoke, including the significant dangers of second-hand smoke infiltration for residents of multi-unit housing.
Increasing the price of tobacco products, for example by taxation. The US Task Force on Community Preventive Services found "strong scientific evidence" that this is effective in increasing tobacco use cessation. It is estimated that a price increase of 10% will increase smoking cessation rates by 3–5%.
Mass media campaigns. There is evidence to suggest that, when combined with other types of interventions, mass media campaigns may be of benefit.
Weak evidence suggests that imposing institutional-level smoking bans in hospitals and prisons may reduce smoking rates and second-hand smoke exposure. Researchers explored whether an opportunistic stop-smoking intervention (advice, a vape starter pack, and a referral to stop smoking services) was effective for people attending the emergency department. At 6 months, more people who received the intervention had quit smoking compared with people who received advice only. Pharmacist Interventions Pharmacist-led interventions have proven effective in supporting smoking cessation attempts, and many systematic reviews have examined the importance of pharmacist involvement. One study from Malaysia found that pharmacist intervention in patients' overall healthcare improved screening for early stages of disease, allowing earlier treatment of smoking-related COPD. In addition, pharmacists in Malaysia could prescribe NRT products, and a pharmacist-led smoking cessation service there was more successful than other smoking cessation trials in Malaysia. Pharmacist counselling combined with NRT products was also shown to be more effective than NRT alone. A study of pharmacist-led smoking cessation services in Ethiopia found statistically and clinically significant benefits favouring pharmacist intervention: structured care, regular visits, and easy access to pharmacists helped more people trying to quit than usual care. However, the study concluded that more research should be done in the area, as it found an unknown risk of bias in the included studies. Another systematic review analyzed pharmacist intervention in smoking cessation alongside alcohol and weight interventions. It found evidence suggesting that the longer the duration of a pharmacist-led intervention, the more effective the quit attempt, and that community pharmacists were beneficial in delivering public health information. Pharmacists have a great reach in the community to help with smoking cessation and have been shown to help with lifestyle modifications and proper NRT use.
Digital interventions Interactive web-based and stand-alone computer programs and online communities assist participants in quitting. For example, "quit meters" keep track of statistics such as how long a person has remained abstinent. Computerised and interactive tailored interventions may be promising; however, the evidence base for such interventions is weak. A mobile phone-based intervention in which automated, supportive text messages are sent alongside other forms of support helps more people quit smoking: "The current evidence supports a beneficial impact of mobile phone-based cessation interventions on six-month cessation outcomes." A 2011 randomized trial of mobile phone-based smoking cessation support in the UK found that the Txt2Stop cessation program significantly improved cessation rates at six months. A 2013 meta-analysis also noted "modest benefits" of mobile health interventions. Two RCTs of interactive web-based programs combined with a mobile phone documented long-term treatment effects (abstinence rate: 20–22%) of such interventions.
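As a minimal illustration of the "quit meters" mentioned above (all names and figures are hypothetical, not taken from any particular program), such a tracker only needs a quit date and a few user-supplied constants:

from datetime import date

def quit_meter(quit_date: date, cigs_per_day: int, price_per_pack: float, pack_size: int = 20) -> dict:
    # Days of abstinence so far
    days = (date.today() - quit_date).days
    # Cigarettes not smoked and money not spent, extrapolated from past habits
    cigs_avoided = days * cigs_per_day
    money_saved = cigs_avoided / pack_size * price_per_pack
    return {"days_abstinent": days, "cigarettes_avoided": cigs_avoided, "money_saved": round(money_saved, 2)}

print(quit_meter(date(2024, 1, 1), cigs_per_day=15, price_per_pack=10.0))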
Psychosocial approaches The Great American Smokeout is an annual event that invites smokers to quit for one day, in the hope that they will be able to extend this indefinitely. The World Health Organization's World No Tobacco Day is held on May 31 each year. Smoking-cessation support is often offered through telephone quitlines (e.g., the US toll-free number 1-800-QUIT-NOW) or in person. Three meta-analyses have concluded that telephone cessation support is effective when compared with minimal or no counselling or self-help, that telephone cessation support with medication is more effective than medication alone, and that intensive individual counselling is more effective than brief individual counselling interventions. A slight tendency towards better results for more intensive counselling was also observed in another meta-analysis, which distinguished between reactive (smokers calling quitlines) and proactive (smokers receiving calls) interventions. For people who called the quitline themselves, additional calls helped them quit smoking for six months or longer. When contact with a smoker was initiated proactively, telephone counselling increased the chances of smoking cessation by 2–4% compared with people who received no calls. There is an approximately 10% to 25% increase in the chance of smoking cessation success when behavioral support is provided in person or via telephone as an adjunct to pharmacotherapy. Online social cessation networks attempt to emulate offline group cessation models using purpose-built web applications. They are designed to promote online social support and encouragement for smokers when (usually automatically calculated) milestones are reached. Early studies have shown social cessation to be especially effective with smokers aged 19–29. Group or individual psychological support can help people who want to quit, and group therapy has been shown to be more helpful than self-help and some other individual interventions. Psychological support in the form of counselling can be effective alone; combining it with medication is more effective, and the number of support sessions with medication correlates with effectiveness. The counselling styles that have been effective in smoking cessation activities include motivational interviewing, cognitive behavioral therapy, and acceptance and commitment therapy, methods based on cognitive behavioral therapy. The Freedom From Smoking group clinic includes eight sessions and features a step-by-step plan for quitting smoking. Each session is designed to help smokers gain control over their behavior. The clinic format encourages participants to work on the process and problems of quitting both individually and as part of a group. Multiple formats of psychosocial interventions increase quit rates: 10.8% for no intervention, 15.1% for one format, 18.5% for two formats, and 23.2% for three or four formats. The transtheoretical model, including "stages of change", has been used in tailoring smoking cessation methods to individuals; however, there is some evidence to suggest that "stage-based self-help interventions (expert systems and/or tailored materials) and individual counselling are neither more nor less effective than their non-stage-based equivalents." How to set a quit date Most smoking cessation resources, such as the Centers for Disease Control and Prevention (CDC) and the Mayo Clinic, encourage smokers to create a quit plan, including setting a quit date, which helps them anticipate and plan for smoking challenges. A quit plan can improve a smoker's chance of a successful quit, as can setting Monday as the quit date, given that research has shown that Monday, more than any other day, is when smokers seek information online about quitting and call state quitlines. In Nepal, a two-week health campaign is held around Valentine's Day and Vasant Panchami that urges smokers not to be selfish and to quit as a sacrifice for their loved ones, framing cessation as a meaningful life decision; the campaign has attracted public attention. Self-help Self-help materials may produce a small increase in quit rates, especially when there is no other supporting intervention. "The effect of self-help was weak", and the number of types of self-help did not produce higher abstinence rates. Nevertheless, self-help modalities for smoking cessation include:
In-person self-help groups such as Nicotine Anonymous, or web-based cessation resources such as Smokefree.gov, which offers various types of assistance including self-help materials.
WebMD: a resource providing health information, tools for managing health, and support.
Self-help books such as Allen Carr's Easy Way to Stop Smoking.
Spirituality: in one survey of adult smokers, 88% reported a history of spiritual practice or belief, and of those, more than three-quarters were of the opinion that using spiritual resources may help them quit smoking.
A review of mindfulness training as a treatment for addiction showed a reduction in craving and smoking following training.
Physical activity helps in the maintenance of smoking cessation, even though there is no conclusive evidence on the most appropriate exercise intensity.
Biochemical feedback Various methods allow a smoker to see the impact of their tobacco use and the immediate effects of quitting. Using biochemical feedback methods can allow tobacco users to be identified and assessed, and monitoring throughout an effort to quit can increase motivation to quit. In terms of evidence, little is known about the effects of using biochemical tests to determine a person's risk related to smoking cessation. Breath carbon monoxide (CO) monitoring: carbon monoxide is a significant component of cigarette smoke, and a breath carbon monoxide monitor can be used to detect current cigarette use.
Carbon monoxide concentration in breath is directly correlated with the CO concentration in blood, known as percent carboxyhemoglobin. The value of demonstrating blood CO concentration to a smoker through a non-invasive breath sample is that it links the smoking habit with the physiological harm associated with smoking. CO concentrations show a noticeable decrease within hours of quitting, which can encourage someone to keep working on quitting. Breath CO monitoring has been utilized in smoking cessation as a tool to provide patients with biomarker feedback, similar to how other diagnostic tools, such as the stethoscope, the blood pressure cuff, and the cholesterol test, have been used by treatment professionals in medicine. Cotinine: cotinine, a metabolite of nicotine, is present in smokers. Like carbon monoxide, a cotinine test can serve as a reliable biomarker to determine smoking status. Cotinine levels can be tested through urine, saliva, blood, or hair samples, and one of the main concerns of cotinine testing is the invasiveness of typical sampling methods. While both measures offer high sensitivity and specificity, they differ in usage method and cost: breath CO monitoring is non-invasive, while cotinine testing relies on a bodily fluid. The two methods can be used alone or together, for example when abstinence verification needs additional confirmation.
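A rough sketch of how such biochemical feedback might be computed follows; the 8 ppm cutoff and the 5-hour half-life are illustrative assumptions only, since cutoffs and washout rates vary between protocols and individuals:

def smoking_status(breath_co_ppm: float, cutoff_ppm: float = 8.0) -> str:
    # Cutoffs of a few ppm are commonly used in practice; 8 ppm here is illustrative only
    return "likely smoker" if breath_co_ppm >= cutoff_ppm else "likely non-smoker"

def co_after_quitting(initial_ppm: float, hours: float, half_life_h: float = 5.0) -> float:
    # Assumes simple exponential washout; actual CO elimination depends on activity and ventilation
    return initial_ppm * 0.5 ** (hours / half_life_h)

print(smoking_status(20.0))                 # likely smoker
print(round(co_after_quitting(20.0, 24)))   # drops toward non-smoking levels within a day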
Competitions and incentives Financial or material incentives to entice people to quit smoking improve smoking cessation while the incentives are in place. Competitions that require participants to deposit their own money, "betting" that they will succeed in quitting smoking, appear to be an effective incentive. However, it is more difficult to recruit participants for this type of contest in head-to-head comparisons with other incentive models, such as giving participants NRT or placing them in a more typical rewards program. Evidence shows that incentive programs may be effective for pregnant mothers who smoke. As of 2019, there were too few studies of "quit and win" and other competition-based interventions, and results from the existing studies were inconclusive. Workplace incentives A 2008 Cochrane review of smoking cessation activities in workplaces concluded that "interventions directed towards individual smokers increase the likelihood of quitting smoking". A 2010 systematic review determined that worksite incentives and competitions needed to be combined with additional interventions to produce significant increases in smoking cessation rates. Healthcare systems Interventions delivered via healthcare providers and healthcare systems have been shown to improve smoking cessation among people who visit those services. A clinic screening system (e.g., computer prompts) to identify whether or not a person smokes doubled abstinence rates, from 3.1% to 6.4%. Similarly, the Task Force on Community Preventive Services determined that provider reminders alone or with provider education effectively promote smoking cessation. A 2008 Guideline meta-analysis estimated that physician advice to quit smoking led to a quit rate of 10.2%, as opposed to a quit rate of 7.9% among patients who did not receive physician advice to quit smoking. Even brief advice from physicians may have "a small effect on cessation rates", and there is evidence that a physician's probability of giving smoking cessation advice declines with the smoker's age. There is evidence that only 81% of smokers aged 50 or older received advice on quitting from their physicians in the preceding year. For one-to-one or person-to-person counselling sessions, the duration of each session, the total contact time, and the number of sessions all correlated with the effectiveness of smoking cessation. For example, "higher intensity" interventions (>10 minutes) produced a quit rate of 22.1%, as opposed to 10.9% for "no contact"; over 300 minutes of contact time produced a quit rate of 25.5%, as opposed to 11.0% for no contact minutes; and more than 8 sessions produced a quit rate of 24.7%, as opposed to 12.4% for 0–1 sessions. Both physicians and non-physicians increased abstinence rates compared with self-help or no clinicians. For example, a Cochrane review of 58 studies found that nursing interventions increased the likelihood of quitting. Another review found some positive effects when trained community pharmacists support patients in their smoking cessation attempts. Dental professionals also provide a key component in increasing tobacco abstinence rates in the community by counseling patients on the effects of tobacco on oral health in conjunction with an oral exam. According to the 2008 Guideline, based on two studies, training clinicians in smoking cessation methods may increase abstinence rates; a Cochrane review likewise found that such training decreased smoking in patients. Reducing or eliminating the costs of cessation therapies for smokers increased quit rates in three meta-analyses. In one systematic review and meta-analysis, multi-component interventions increased quit rates in primary care settings. "Multi-component" interventions were defined as those that combined two or more of the following strategies, known as the "5 A's":
Ask: systematically identify all tobacco users at every visit
Advise: strongly urge all tobacco users to quit
Assess: determine willingness to make a quit attempt
Assist: aid the patient in quitting (provide counselling-style support and medication)
Arrange: ensure follow-up contact
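Purely as an illustration (the field names are ours, not from the guideline), the 5 A's can be represented as a per-visit checklist, with "multi-component" meaning two or more strategies combined:

from dataclasses import dataclass

@dataclass
class FiveAs:
    asked: bool = False     # Ask: tobacco use identified at this visit
    advised: bool = False   # Advise: patient urged to quit
    assessed: bool = False  # Assess: willingness to make a quit attempt
    assisted: bool = False  # Assist: counselling support and medication offered
    arranged: bool = False  # Arrange: follow-up contact scheduled

    def is_multi_component(self) -> bool:
        # "Multi-component" = two or more of the five strategies combined
        return sum([self.asked, self.advised, self.assessed, self.assisted, self.arranged]) >= 2

print(FiveAs(asked=True, advised=True).is_multi_component())  # True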
Substitutes for cigarettes Nicotine replacement therapy (NRT) is the general term for using products that contain nicotine but not tobacco to aid smoking cessation. These include nicotine lozenges, nicotine gum and inhalers, nicotine patches, and electronic cigarettes. In a review of 136 NRT-related Cochrane Tobacco Addiction Group studies, substantial evidence supported NRT use in increasing the chances of successfully quitting smoking by 50 to 60% in comparison to placebo or a non-NRT control group. Electronic cigarettes (ECs): there is high‐certainty evidence that ECs with nicotine increase quit rates compared to NRT, and moderate‐certainty evidence that they increase quit rates compared to ECs without nicotine. Little is known regarding the long-term harms related to vaping. A 2016 UK Royal College of Physicians report supports using e-cigarettes as a smoking cessation tool. A 2015 Public Health England report stated that "Smokers who have tried other methods of quitting without success could be encouraged to try e-cigarettes (EC) to stop smoking and stop smoking services should support smokers using EC to quit by offering them behavioural support." However, since little is known about long-term effects, other regulated options such as nicotine replacement therapy, varenicline, or bupropion should be discussed first. Alternative approaches Most of the alternative approaches below have minimal evidence to support their use, and their efficacy and safety should be discussed with a healthcare professional before starting. Acupuncture: acupuncture has been explored as an adjunct treatment method for smoking cessation. A 2014 Cochrane review was unable to draw conclusions regarding acupuncture, as the evidence is poor. A 2008 guideline found no difference between acupuncture and placebo, and found no scientific studies supporting laser therapy, which is based on acupuncture principles but applied without needles. Hypnosis: hypnosis often involves the hypnotherapist suggesting the unpleasant outcomes of smoking to the patient. Clinical trials studying hypnosis and hypnotherapy as a method for smoking cessation have been inconclusive. A Cochrane review was unable to find evidence of benefit of hypnosis in smoking cessation and suggested that if there is a beneficial effect, it is small at best. However, a randomized trial published in 2008 found that hypnosis and nicotine patches "compares favorably" with standard behavioral counseling and nicotine patches in 12-month quit rates. Herbal medicine: many herbs have been studied as a method for smoking cessation, including lobelia and St John's wort. The results are inconclusive; St John's wort shows few adverse events but is contraindicated with many medications. Lobelia has been used to treat respiratory diseases like asthma and bronchitis, and has been used for smoking cessation because of its chemical similarities to tobacco; lobelia is now listed in the FDA's Poisonous Plant Database. Lobelia can still be found in many products sold for smoking cessation and should be used with caution. Herbal products should be discussed with healthcare professionals before use to confirm safety with other medications. Smokeless tobacco: there is little smoking in Sweden, which is reflected in the very low cancer rates for Swedish men. Use of snus (a form of steam-pasteurized, rather than heat-pasteurized, air-cured smokeless tobacco) is an observed cessation method for Swedish men and is even recommended by some Swedish doctors. However, the report by the Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR) concludes that "STP (smokeless tobacco products) are addictive and their use is hazardous to health. Evidence on the effectiveness of STP as a smoking cessation aid is insufficient." A recent national study on the use of alternative tobacco products, including snus, did not show that these products promote cessation. Aversion therapy: a treatment method that works by pairing the pleasurable stimulus of smoking with other unpleasant stimuli. A Cochrane review reported that there is insufficient evidence of its efficacy. Nicotine vaccines: nicotine vaccines (e.g., NicVAX and TA-NIC) work by reducing the amount of nicotine reaching the brain; however, this method of therapy needs further investigation to establish its role and determine its side effects. Technology and machine learning: research studies using machine learning or artificial intelligence tools to provide feedback and communication with those who are trying to quit smoking are increasing, yet the findings are so far inconclusive. Psilocybin has been investigated as a potential smoking cessation aid for several years.
In 2021, Johns Hopkins Medicine was awarded a grant from the National Institutes of Health to explore the potential impacts of psilocybin and talk therapy on tobacco addiction. Special populations Children and adolescents Methods used with children and adolescents include:
Motivational enhancement
Psychological support
Youth anti-tobacco activities, including participation in sports
School-based curricula, like life-skills training
School-based nurse counseling sessions
Access reduction to tobacco
Anti-tobacco media
Family communication
Cochrane reviews, mainly of studies combining motivational enhancement and psychological support, concluded that "complex approaches" for smoking cessation among young people show promise. The 2008 US Guideline recommends counselling-style support for adolescent smokers on the basis of a meta-analysis of seven studies. Neither the Cochrane review nor the 2008 Guideline recommends medications for adolescents who smoke. Pregnant women Smoking during pregnancy can cause adverse health effects in both the woman and the fetus. The 2008 US Guideline determined that "person-to-person psychosocial interventions" (typically including "intensive counseling") increased abstinence rates in pregnant women who smoke to 13.3%, compared with 7.6% in usual care. Mothers who smoke during pregnancy have a greater tendency towards premature births. Their babies are often underdeveloped, have smaller organs, and weigh much less than the average baby. In addition, these babies have weaker immune systems, making them more susceptible to many diseases, such as middle-ear inflammation and asthmatic bronchitis, as well as metabolic conditions such as diabetes and hypertension, all of which can bring significant morbidity. Additionally, a study published by the American Academy of Pediatrics shows that smoking during pregnancy increases the chance of sudden unexpected infant death (SUID, which includes SIDS). There is also an increased chance that the child will be a smoker in adulthood. A systematic review showed that psychosocial interventions help women quit smoking in late pregnancy and can reduce the incidence of low-birth-weight infants. It is a myth that a female smoker can cause harm to a fetus by quitting immediately upon discovering she is pregnant; this idea is not based on any medical study or fact. In a UK study that included 1,140 pregnant women, e-cigarettes were found to be as effective as nicotine patches at helping pregnant women to quit smoking, and the safety of the two products was similar. However, lifestyle modifications are the preferred method for pregnant women, who should discuss smoking cessation techniques with a healthcare professional. Schizophrenia Studies across 20 countries show a strong association between schizophrenia and smoking. People with schizophrenia are much more likely to smoke than those without the disease; for example, in the United States, 80% or more of people with schizophrenia smoked in 2006, compared to 20% of the general population. Hospitalized smokers Smokers who are hospitalised may be particularly motivated to quit. A 2012 Cochrane review found that interventions beginning during a hospital stay and continuing for one month or more after discharge were effective in producing abstinence. Patients undergoing elective surgery may benefit from preoperative smoking cessation interventions starting 4–8 weeks before surgery, with weekly counseling for behavioral support and use of nicotine replacement therapy.
Such interventions have been found to reduce postoperative complications and morbidity. Mood disorders People with mood disorders or attention deficit hyperactivity disorder are more likely to begin smoking and less likely to quit. A higher correlation with smoking has also been seen in people diagnosed with major depressive disorder at any time during their lifetime compared to those without it, and success rates in quitting smoking were lower for those with a major depressive disorder diagnosis. Exposure to cigarette smoke early in life, during pregnancy, infancy, or adolescence, may negatively impact a child's neurodevelopment and increase the risk of developing anxiety disorders in the future. Homeless and poverty Homelessness doubles the likelihood of an individual currently being a smoker, independently of other socioeconomic factors and behavioral health conditions. Homeless individuals express the same desire to quit smoking, but are less likely than the general population to attempt to stop successfully. In the United States, 60–80% of homeless adults are smokers, a considerably higher rate than the 19% of the general adult population. Many current smokers who are homeless report smoking as a means of coping with "all the pressure of being homeless," and the perception that smoking among homeless people is "socially acceptable" can reinforce these trends. Americans under the poverty line have higher rates of smoking and lower rates of quitting than those over the poverty line. While the homeless population is concerned about short-term effects of smoking, such as shortness of breath or recurrent bronchitis, they are not as concerned with long-term consequences. The homeless population also faces unique barriers to quitting smoking, such as unstructured days, the stress of finding a job, and immediate survival needs that supersede the desire to quit. These barriers can be addressed through pharmacotherapy and behavioral counseling for high levels of nicotine dependence, through emphasizing immediate financial benefits to those who prioritize the short term over the long term, through partnering with shelters to reduce the social acceptability of smoking in this population, and through increased taxes on cigarettes and alternative tobacco products that make the addiction more difficult to fund. Concurrent substance use disorders Over three-quarters of people in treatment or recovery from substance misuse issues are current smokers. Providing behavioural interventions (such as counseling and advice) and pharmacotherapy, including nicotine replacement therapy (such as patches or gum), varenicline, and/or bupropion, increases sustained tobacco abstinence and also reduces the risk of returning to other substance use. Comparison of success rates Comparing success rates across interventions can be difficult because of the different definitions of "success" used by different studies. Robert West and Saul Shiffman, authorities in this field recognized by government health departments in a number of countries, have concluded that, used together, "behavioral support" and "medication" can quadruple the chances that a quit attempt will be successful.
A 2008 systematic review in the European Journal of Cancer Prevention found that group behavioural therapy was the most effective intervention strategy for smoking cessation, followed by bupropion, intensive physician advice, nicotine replacement therapy, individual counselling, telephone counselling, nursing interventions, and tailored self-help interventions; the study did not discuss varenicline. Factors affecting success Quitting can be harder for individuals with darkly pigmented skin than for individuals with pale skin, since nicotine has an affinity for melanin-containing tissues. Studies suggest this can produce increased nicotine dependence and lower smoking cessation rates in darker-pigmented individuals. There is an important social component to smoking, and the spread of smoking cessation from person to person contributes to the decrease in smoking among different populations or groups. A 2008 study of a densely interconnected network of over 12,000 individuals found that smoking cessation by any given individual reduced the chances of others around them lighting up by the following amounts: a spouse by 67%, a sibling by 25%, a friend by 36%, and a coworker by 34%. Nevertheless, a Cochrane review determined that interventions to increase social support for a smoker's cessation attempt did not improve long-term quit rates. Smokers trying to quit are faced with social influences that may persuade them to conform and continue smoking. Cravings are easier to resist when one's environment does not provoke the habit. A person who has stopped smoking but has close relationships with active smokers is often put into situations that make the urge to conform more tempting; however, in a small group with at least one other person not smoking, the likelihood of conformity decreases. The social influence of cigarette smoking has been shown to rely on simple variables, one of which is whether the influence comes from a friend or non-friend. Research shows that individuals are 77% more likely to conform to non-friends, while close friendships decrease conformity; therefore, if an acquaintance offers a cigarette as a polite gesture, the person who has stopped smoking is more likely to break their commitment than if a friend had offered it. Recent research from the International Tobacco Control (ITC) Four Country Survey of over 6,000 smokers found that smokers with fewer smoking friends were more likely to intend to quit and to succeed in their quit attempt. Expectations and attitude are significant factors: a self-perpetuating cycle occurs when a person feels bad for smoking yet smokes to alleviate feeling bad, and breaking that cycle can be key to changing the sabotaging attitude. Smokers with major depressive disorder may be less successful at quitting smoking than non-depressed smokers. Relapse (resuming smoking after quitting) has been related to psychological issues such as low self-efficacy or non-optimal coping responses; however, psychological approaches to prevent relapse have not been proven successful. In contrast, varenicline is suggested to have some effect, and nicotine replacement therapy may help unassisted abstainers.
Side effects Withdrawal symptoms The CDC recognizes seven common nicotine withdrawal symptoms that people often face when stopping smoking: "cravings to smoke, feeling irritated, grouchy, or upset, feeling jumpy and restless, having a hard time concentrating, having trouble sleeping, feeling hungry or gaining weight, or feeling anxious, sad or depressed." Studies have shown that the use of pharmacotherapies, such as varenicline, can be useful in reducing withdrawal symptoms during the quitting process. Weight gain Giving up smoking is associated with an average weight gain of 4–5 kg after 12 months, most of which occurs within the first three months of quitting. The possible causes of the weight gain include:
Smoking over-expresses the gene AZGP1, which stimulates lipolysis, so smoking cessation may decrease lipolysis.
Smoking suppresses appetite, which may be caused by nicotine's effect on central autonomic neurons (e.g., via regulation of melanin-concentrating hormone neurons in the hypothalamus). Smoking cessation increases appetite again, especially as the taste buds return to their normal function.
Heavy smokers are reported to burn 200 calories per day more than non-smokers eating the same diet. Possible reasons for this phenomenon include nicotine's ability to increase energy metabolism or nicotine's effect on peripheral neurons.
The U.S. Department of Health and Human Services guideline suggests that sustained-release bupropion, nicotine gum, and nicotine lozenges be used "to delay weight gain after quitting." There is currently not enough evidence to suggest that one method of weight management works better than others in preventing weight gain during the smoking cessation process. Reaching for healthy snacks, such as celery and carrots, can help with the increased appetite while also limiting weight gain. Regardless of post-cessation weight gain, there is a significant decrease in the risk of cardiovascular disease in those who have quit smoking, and the risk of rebound weight gain is significantly less than the risk of continued smoking. Mental health Like other physically addictive drugs, nicotine addiction causes a down-regulation of the production of dopamine and other stimulatory neurotransmitters as the brain attempts to compensate for the artificial stimulation caused by smoking. Some studies from the 1990s found that when people stop smoking, depressive symptoms such as suicidal tendencies or actual depression may result, although a more recent international study comparing smokers who had stopped for 3 months with continuing smokers found that stopping smoking did not appear to increase anxiety or depression. A 2021 review found that quitting smoking lessens anxiety and depression. A 2013 study published in The British Journal of Psychiatry found that smokers who successfully quit felt less anxious afterward, with the effect being greater among those who had mood and anxiety disorders than among those who smoked for pleasure. Health benefits Many of tobacco's detrimental health effects can be reduced or largely removed through smoking cessation.
The health benefits over time of stopping smoking include:
Within 20 minutes after quitting, blood pressure and heart rate decrease
Within a few days, carbon monoxide levels in the blood decrease to normal
Within 48 hours, nerve endings and the senses of smell and taste both start recovering
Within 3 months, circulation and lung function improve
Within 1 year, there are decreases in cough and shortness of breath
Within 1–2 years, the risk of coronary heart disease is cut in half
Within 5–10 years, the risk of stroke falls to that of a non-smoker, and the risks of many cancers (mouth, throat, esophagus, bladder, cervix) decrease significantly
Within 10 years, the risk of dying from lung cancer is cut in half, and the risks of larynx and pancreas cancers decrease
Within 15 years, the risk of coronary heart disease drops to the level of a non-smoker, with a lowered risk of developing COPD (chronic obstructive pulmonary disease)
The British Doctors Study showed that those who stopped smoking before they reached 30 years old lived almost as long as those who never smoked, and stopping in one's sixties can still add three years of healthy life. Randomized U.S. and Canadian trials showed that a ten-week smoking cessation program decreased mortality from all causes over the following 14 years. A recent article on mortality in a cohort of 8,645 smokers who were followed up after 43 years determined that "current smoking and lifetime persistent smoking were associated with an increased risk of all-cause, CVD [cardiovascular disease], COPD [chronic obstructive pulmonary disease], and any cancer, and lung cancer mortality." The significant increase in the risk of all-cause mortality present in people who smoke decreases with long-term smoking cessation. Smoking cessation can improve health status and quality of life at any age. Evidence shows that cessation of smoking reduces the risk of lung, laryngeal, oral cavity and pharynx, esophageal, pancreatic, bladder, stomach, colorectal, cervical, and kidney cancer, in addition to reducing the risk of acute myeloid leukemia. Another published study, "Smoking Cessation Reduces Postoperative Complications: A Systematic Review and Meta-analysis", examined six randomized trials and 15 observational studies of preoperative smoking cessation's effects on postoperative complications. Its findings were: 1) taken together, the studies demonstrated a decreased likelihood of postoperative complications in patients who ceased smoking before surgery; 2) overall, each week of cessation before surgery increased the magnitude of the effect by 19%, and a significant positive effect was noted in trials where smoking cessation occurred at least four weeks before surgery; 3) the six randomized trials demonstrated, on average, a relative risk reduction of 41% for postoperative complications. Cost-effectiveness Cost-effectiveness analyses of smoking cessation activities have shown that they increase quality-adjusted life years (QALYs) at costs comparable with other types of interventions to treat and prevent disease. Studies of the cost-effectiveness of smoking cessation include: In a 1997 U.S. analysis, the estimated cost per QALY varied by the type of cessation approach, ranging from group intensive counselling without nicotine replacement at $1,108 per QALY to minimal counselling with nicotine gum at $4,542 per QALY.
A study from Erasmus University Rotterdam limited to people with chronic obstructive pulmonary disease found that the cost-effectiveness of minimal counselling, intensive counselling, and drug therapy were €16,900, €8,200, and €2,400 per QALY gained respectively. Among National Health Service smoking cessation clients in Glasgow, pharmacy one-to-one counselling cost £2,600 per QALY gained and group support cost £4,800 per QALY gained. Statistical trends The frequency of smoking cessation among smokers varies across countries. Smoking cessation increased in Spain between 1965 and 2000, in Scotland between 1998 and 2007, and in Italy after 2000. In contrast, in the U.S. the cessation rate was "stable (or varied little)" between 1998 and 2008, and in China smoking cessation rates declined between 1998 and 2003. Nevertheless, in a growing number of countries there are now more ex-smokers than smokers. In the United States, 61.7% of adult smokers (55.0 million adults) who had ever smoked had quit by 2018, an increase from 51.7% in 2009. As of 2020, the CDC reports that the number of adults who smoke in the U.S. has fallen to 30.8 million.
Biology and health sciences
Drugs and pharmacology
null
289834
https://en.wikipedia.org/wiki/Fenugreek
Fenugreek
Fenugreek (Trigonella foenum-graecum) is an annual plant in the family Fabaceae, with leaves consisting of three small obovate to oblong leaflets. It is cultivated worldwide as a semiarid crop. Its leaves and seeds are common ingredients in dishes from the Indian subcontinent, and have been used as a culinary ingredient since ancient times. Its use as a food ingredient in small quantities is safe. Although a common dietary supplement, no significant clinical evidence suggests that fenugreek has therapeutic properties. Commonly used in traditional medicine, fenugreek can increase the risk of serious adverse effects, including allergic reactions. History Fenugreek is believed to have been brought into cultivation in the Near East. Which wild strain of the genus Trigonella gave rise to domesticated fenugreek is uncertain. Charred fenugreek seeds have been recovered from Tell Halal, Iraq (carbon dated to 4000 BC), and from Bronze Age levels of Lachish, and desiccated seeds from the tomb of Tutankhamen. Cato the Elder lists fenugreek with clover and vetch as crops grown to feed cattle. In one first-century AD recipe, the Romans flavoured wine with fenugreek. In the 1st century AD, in Galilee, it was grown as a staple food, as Josephus mentions in his book, the Wars of the Jews. The plant is mentioned in the second-century compendium of Jewish Oral Law (Mishnah) under its Hebrew name tiltan. Etymology The English name derives via Middle French fenugrec from Latin faenugraecum, faenum Graecum meaning "Greek hay". Production India is a major producer of fenugreek, and over 80% of India's output comes from the state of Rajasthan. Uses Fenugreek is used as a herb (dried or fresh leaves), spice (seeds), and vegetable (fresh leaves, sprouts, and microgreens). Sotolon is the chemical responsible for the distinctive maple-syrup smell of fenugreek. Cuboid, yellow- to amber-coloured fenugreek seeds are frequently encountered in the cuisines of the Indian subcontinent, used both whole and powdered in the preparation of pickles, vegetable dishes, dal, and spice mixes such as panch phoron and sambar powder. They are often roasted to reduce inherent bitterness and to enhance flavour (Maillard browning). Cooking Fresh fenugreek leaves are an ingredient in some curries, such as with potatoes in Indian cuisines to make aloo methi (potato fenugreek) curry. In Armenian cuisine, fenugreek seed powder is used to make a paste that is an important ingredient covering dried and cured beef to make basturma. In Iranian cuisine, fenugreek leaves are called shambalileh. They are one of several greens incorporated into the herb stew ghormeh sabzi, the herb frittata kuku sabzi, and a soup known as eshkeneh. In Georgian cuisine, a related species—Trigonella caerulea, called "blue fenugreek"—is used. In Egyptian cuisine, fenugreek is known by the Arabic name hilba or helba حلبة. Seeds are boiled to make a drink that is consumed at home, as well as in coffee shops. Peasants in Upper Egypt add fenugreek seeds and maize to their pita bread to produce aish merahrah, a staple of their diet. Basterma, a cured, dried beef, gets its distinctive flavour from the fenugreek used as a coating. Similarly, in Turkish cuisine, fenugreek seed powder, called çemen, is used to make a paste with paprika powder and garlic to cover dried and cured beef in making pastirma/basturma (whose name comes from the Turkish verb bastırmak, meaning "to press"). In Moroccan cuisine, fenugreek is used in rfissa, a dish associated with the countryside.
Fenugreek is used in Eritrean and Ethiopian cuisines. The word for fenugreek in Amharic is abesh (or abish), and the seed is used in Ethiopia as a natural herbal medicine in the treatment of diabetes. Yemenite Jews, following the interpretation of Rabbi Shelomo Yitzchak (Rashi), believe fenugreek, which they call hilbah, hilbeh, hilba, helba, or halba "חילבה", to be the Talmudic rubia. When the seed kernels are ground and mixed with water, they greatly expand; hot spices, turmeric, and lemon juice are added to produce a frothy relish eaten with a sop. The relish is also called hilbeh; it is reminiscent of curry. It is eaten daily and ceremonially during the meal of the first and/or second night of the Jewish New Year, Rosh Hashana. In Yemen, a small amount of oud al hilba (عود الحلبة), which appears to be the same as ashwagandha, is traditionally added to ground fenugreek seeds before they are mixed with water to prepare the hulbah paste. This is believed to aid digestion and, more importantly, to prevent or lessen the maple-syrup smell that usually occurs when consuming fenugreek. Nutritional profile In a 100-gram reference amount, fenugreek seeds provide 1,352 kilojoules (323 kcal) of food energy and contain 9% water, 58% carbohydrates, 23% protein, and 6% fat. Fenugreek seeds provide calcium at 14% of the Daily Value (DV). Fenugreek seeds (per 100 grams) are a rich source of protein (46% DV), dietary fiber, B vitamins, and dietary minerals, particularly manganese (59% DV) and iron (262% DV). Dietary supplement Fenugreek dietary supplements are manufactured from powdered seeds into capsules, loose powders, teas, and liquid extracts in many countries. No high-quality evidence supports that these products have any clinical effectiveness. Animal feed Fenugreek is sometimes used as animal feed. It provides a green fodder palatable to ruminants. The seeds are also used to feed fish and domestic rabbits. Food additive Fenugreek seeds and leaves contain sotolone, which imparts the aroma of fenugreek and curry in high concentrations, and of maple syrup or caramel in lower concentrations. Fenugreek is used as a flavoring agent in imitation maple syrup or tea, and as a dietary supplement. Research Constituents of fenugreek seeds include flavonoids, alkaloids, coumarins, vitamins, and saponins; the most prevalent alkaloid is trigonelline, and the coumarins include cinnamic acid and scopoletin. Research into whether fenugreek reduces biomarkers in people with diabetes and prediabetic conditions is of limited quality. As of 2023, no high-quality evidence has been found for whether fenugreek is safe and effective in relieving dysmenorrhea or improving lactation during breastfeeding. Studies of fenugreek are characterized by variable, poor experimental design and quality, including small numbers of subjects, failure to describe methods, inconsistency in dosing and its duration, and failure to record adverse effects. Because research on the potential biological effects of consuming fenugreek has provided no high-quality evidence for any health or anti-disease effect, fenugreek is not approved or recommended for clinical use by the United States Food and Drug Administration. Traditional medicine Although once a folk remedy for an insufficient milk supply when nursing, no good evidence indicates that fenugreek is effective or safe for this use, nor is it useful in traditional practices for treating dysmenorrhea, inflammation, diabetes, or any human disorder.
Adverse effects and allergies The use of fenugreek has the potential for serious adverse effects, as it may be unsafe for women with hormone-sensitive cancers. Fenugreek is not safe for use during pregnancy, as it has possible abortifacient effects and may induce preterm uterine contractions. Some people are allergic to fenugreek, including those with peanut allergy or chickpea allergy. Fenugreek seeds can cause diarrhea, dyspepsia, abdominal distention, flatulence, and perspiration, and impart a maple-like smell to sweat, urine, or breast milk. A risk of hypoglycemia exists, particularly in people with diabetes, and it may interfere with the activity of antidiabetic drugs. Because of the high content of coumarin-like compounds in fenugreek, it may interfere with the activity and dosing of anticoagulants and antiplatelet drugs. Fenugreek sprouts, cultivated from a single specific batch of seeds imported from Egypt into Germany in 2009, were implicated as the source of the 2011 outbreak of Escherichia coli O104:H4 in Germany and France. Identification of a common producer and a single batch of fenugreek seeds was evidence for the origin of the outbreaks.
Biology and health sciences
Herbs and spices
Plants
289895
https://en.wikipedia.org/wiki/Arboriculture
Arboriculture
Arboriculture is the cultivation, management, and study of individual trees, shrubs, vines, and other perennial woody plants. The science of arboriculture studies how these plants grow and respond to cultural practices and to their environment. The practice of arboriculture includes cultural techniques such as selection, planting, training, fertilization, pest and pathogen control, pruning, shaping, and removal. Overview A person who practices or studies arboriculture can be termed an arborist or an arboriculturist. A tree surgeon is more typically someone who is trained in the physical maintenance and manipulation of trees, and therefore more a part of the arboriculture process, rather than an arborist. Risk management, legal issues, and aesthetic considerations have come to play prominent roles in the practice of arboriculture. Businesses often need to hire arboriculturists to complete "tree hazard surveys" and generally manage the trees on-site to fulfill occupational safety and health obligations. Arboriculture is primarily focused on individual woody plants and trees maintained for permanent landscape and amenity purposes, usually in gardens, parks, or other populated settings, by arborists, for the enjoyment, protection, and benefit of people. Arboricultural matters are also considered to be within the practice of urban forestry, yet the divisions between the two are not distinct or discrete. Tree Benefits Tree benefits are the economic, ecological, social, and aesthetic use, function, purpose, or services of a tree (or group of trees) in its situational context in the landscape. Environmental Benefits
Erosion control and soil retention
Improved water infiltration and percolation
Protection from exposure: windbreak, shade, impact from hail/rainfall
Air humidification
Modulates environmental conditions in a given microclimate: shields wind, humidifies, provides shade
Carbon sequestration and oxygen production
Ecological Benefits
Attracting pollinators
Increased biodiversity
Food for decomposers, consumers, and pollinators
Soil health: organic matter accumulation from leaf litter and root exudates (symbiotic microbes)
Ecological habitat
Socioeconomic Benefits
Increases employment: forestry, education, tourism
Run-off and flood control (e.g. bioswales, plantings on slopes)
Aesthetic beauty: parks, gatherings, social events, tourism, senses (fragrance, visual), focal point
Adds character and prestige to the landscape, creating a "natural" feel
Climate control (e.g. shade): can reduce energy consumption of buildings
Privacy and protection: from noise, wind
Cultural benefits: e.g. memorials for a loved one
Medical benefits: e.g. Taxus-derived chemotherapy
Materials: wood for building, paper pulp
Fodder for livestock
Property value: trees can increase property value by 10–20%
Increases the amount of time customers will spend in a mall, strip mall, or shopping district
Tree Defects A tree defect is any feature, condition, or deformity of a tree that indicates weak structure or instability that could contribute to tree failure. Common types of tree defects:
Codominant stems: two or more stems that grow upward from a single point of origin and compete with one another; common with decurrent growth habits; occurs in excurrent trees only after the leader is killed and multiple leaders compete for dominance
Included bark: bark is incorporated in the joint between two limbs, creating a weak attachment;
occurs in branch unions with a high attachment angle (i.e. v-shaped unions); common in many columnar/fastigiate deciduous trees
Dead, diseased, or broken branches: woundwood cannot grow over stubs or dead branches to seal off decay; symptoms/signs of disease include oozing through the bark, sunken areas in the bark, bark with abnormal patterns or colours, stunted new growth, and discolouration of the foliage
Cracks: longitudinal cracks result from interior decay, bark rips/tears, or torsion from wind load; transverse cracks result from buckled wood, often caused by unnatural loading on branches, such as lion's tailing
Seams: bark edges meet at a crack or wound
Ribs: bulges, indicating interior cracks
Cavities and hollows: sunken or open areas wherein a tree has suffered injury followed by decay; further indications include fungal fruiting structures and insect or animal nests
Lean: a lean of more than 40% from vertical presents a risk of tree failure
Taper: change in diameter over the length of trunks, branches, and roots
Epicormic branches (water sprouts in the canopy or suckers from the root system): often grow in response to major damage or excessive pruning
Roots: girdling roots compress the trunk, leading to poor trunk taper, and restrict vascular flow; kinked roots provide poor structural support, the kink being a site of potential root failure; circling roots occur when roots encounter obstructions/limitations such as a small tree well or being grown too long in a nursery pot, and cannot provide adequate structural support while being limited in accessing nutrients and water; healthy soil texture and depth, good drainage, and water availability make for healthy roots
Tree Installation Proper tree installation ensures the long-term viability of the tree and reduces the risk of tree failure. Quality nursery stock must be used. There must be no visible damage or sign of disease, and ideally the tree should have good crown structure. A healthy root ball should not have circling roots, and new fibrous roots should be present at the soil perimeter. Girdling or circling roots should be pruned out. Excess soil above the root flare should be removed immediately, since it presents a risk of disease ingress into the trunk. Appropriate time of year to plant: generally fall or early spring in temperate regions of the northern hemisphere. Planting hole: the planting hole should be 3 times the width of the root ball. The hole should be dug deep enough that, when the root ball is placed on the substrate, the root flare is 3–5 cm above the surrounding soil grade. If soil is left against the trunk, it may lead to bark, cambium, and wood decay. Angular sides to the planting hole will encourage roots to grow radially from the trunk, rather than circling the planting hole. In urban settings, soil preparation may include the use of:
Silva cells: suspended pavement over modular cells containing soil for root development
Structural soils: growing medium composed of 80% crushed rock and 20% loam, which supports surface load without leading to soil compaction
Tree wells: a zone of mulch can be installed around the tree trunk to limit root zone competition (from turf or weeds), reduce soil compaction, improve soil structure, conserve moisture, and keep lawn equipment at a distance. No more than 5–10 cm of mulch should be used, to avoid suffocating the roots, and mulch must be kept approximately 20 cm from the trunk to avoid burying the root flare.
With city trees, additional tree well preparation includes:
Tree grates/grilles and frames: limit compaction of the root zone and mechanical damage to roots and trunk
Root barriers: force roots to grow down under surface asphalt/concrete/pavers to limit infrastructure damage from roots
Staking: newly planted, immature trees should be staked for one growing season to allow the root system to establish. Staking for longer than one season should only be considered in situations where the root system has failed to establish sufficient structural support. Guy wires can be used for larger, newly planted trees; care must be taken to avoid stem girdling from the support system ties.
Irrigation: irrigation infrastructure may be installed to ensure a regular water supply throughout the lifetime of the tree. Wicking beds are an underground reservoir from which water is wicked into the soil. Watering bags may be temporarily installed around tree stakes to provide water until the root system becomes established. Permeable paving allows for water infiltration in paved urban settings, such as parks and walkways.
UK Within the United Kingdom, trees are considered a material consideration within the town planning system and may be conserved as amenity landscape features. The role of the arborist or local government arboricultural officer is likely to have a great effect on such matters. Identification of trees of high quality which may have extensive longevity is a key element in the preservation of trees. Urban and rural trees may benefit from statutory protection under the Town and Country Planning system. Such protection can result in the conservation and improvement of the urban forest as well as rural settlements. Historically, the profession divides into operational and professional areas, which might be further subdivided into the private and public sectors. The profession is broadly considered as having one trade body, known as the Arboricultural Association, although the Institute of Chartered Foresters offers a route for professional recognition and chartered arboriculturist status. The qualifications associated with the industry range from vocational to doctorate level. Arboriculture is a comparatively young industry.
Technology
Trees and forestry
null
290053
https://en.wikipedia.org/wiki/Airfoil
Airfoil
An airfoil (American English) or aerofoil (British English) is a streamlined body that is capable of generating significantly more lift than drag. Wings, sails, and propeller blades are examples of airfoils. Foils of similar function designed with water as the working fluid are called hydrofoils. When oriented at a suitable angle, a solid body moving through a fluid deflects the oncoming fluid (for a fixed-wing aircraft, deflecting the air downward), resulting in a force on the airfoil in the direction opposite to the deflection. This force is known as aerodynamic force and can be resolved into two components: lift (perpendicular to the remote freestream velocity) and drag (parallel to the freestream velocity). The lift on an airfoil is primarily the result of its angle of attack. Most foil shapes require a positive angle of attack to generate lift, but cambered airfoils can generate lift at zero angle of attack. Airfoils can be designed for use at different speeds by modifying their geometry: those for subsonic flight generally have a rounded leading edge, while those designed for supersonic flight tend to be slimmer with a sharp leading edge. All have a sharp trailing edge. The air deflected by an airfoil causes it to generate a lower-pressure "shadow" above and behind itself. This pressure difference is accompanied by a velocity difference, via Bernoulli's principle, so the resulting flowfield about the airfoil has a higher average velocity on the upper surface than on the lower surface. In some situations (e.g., inviscid potential flow) the lift force can be related directly to the average top/bottom velocity difference, without computing the pressure, by using the concept of circulation and the Kutta–Joukowski theorem. Overview The wings and stabilizers of fixed-wing aircraft, as well as helicopter rotor blades, are built with airfoil-shaped cross sections. Airfoils are also found in propellers, fans, compressors, and turbines. Sails are also airfoils, and the underwater surfaces of sailboats, such as the centerboard, rudder, and keel, are similar in cross-section and operate on the same principles as airfoils. Swimming and flying creatures and even many plants and sessile organisms employ airfoils/hydrofoils; common examples are bird wings, the bodies of fish, and the shape of sand dollars. An airfoil-shaped wing can create downforce on an automobile or other motor vehicle, improving traction. When the wind is obstructed by an object such as a flat plate, a building, or the deck of a bridge, the object will experience drag and also an aerodynamic force perpendicular to the wind. This does not mean the object qualifies as an airfoil. Airfoils are highly efficient lifting shapes, able to generate more lift than similarly sized flat plates of the same area, and able to generate lift with significantly less drag. Airfoils are used in the design of aircraft, propellers, rotor blades, wind turbines, and other applications of aeronautical engineering. A lift and drag curve for a typical airfoil, obtained in wind tunnel testing, shows the following behaviour. For an airfoil with positive camber, some lift is produced at zero angle of attack. With increased angle of attack, lift increases in a roughly linear relation, the slope of which is called the slope of the lift curve. At about 18 degrees such an airfoil stalls, and lift falls off quickly beyond that. The drop in lift can be explained by the action of the upper-surface boundary layer, which separates and greatly thickens over the upper surface at and past the stall angle.
The thickened boundary layer's displacement thickness changes the airfoil's effective shape, in particular it reduces its effective camber, which modifies the overall flow field so as to reduce the circulation and the lift. The thicker boundary layer also causes a large increase in pressure drag, so that the overall drag increases sharply near and past the stall point. Airfoil design is a major facet of aerodynamics. Various airfoils serve different flight regimes. Asymmetric airfoils can generate lift at zero angle of attack, while a symmetric airfoil may better suit frequent inverted flight as in an aerobatic airplane. In the region of the ailerons and near a wingtip a symmetric airfoil can be used to increase the range of angles of attack to avoid spin–stall. Thus a large range of angles can be used without boundary layer separation. Subsonic airfoils have a round leading edge, which is naturally insensitive to the angle of attack. The cross section is not strictly circular, however: the radius of curvature is increased before the wing achieves maximum thickness to minimize the chance of boundary layer separation. This elongates the wing and moves the point of maximum thickness back from the leading edge. Supersonic airfoils are much more angular in shape and can have a very sharp leading edge, which is very sensitive to angle of attack. A supercritical airfoil has its maximum thickness close to the leading edge, providing ample length over which to slowly return the supersonic flow to subsonic speeds. Generally, such transonic airfoils, and also supersonic airfoils, have low camber to reduce drag divergence. Modern aircraft wings may have different airfoil sections along the wing span, each one optimized for the conditions in each section of the wing. Movable high-lift devices, flaps and sometimes slats, are fitted to airfoils on almost every aircraft. A trailing edge flap acts similarly to an aileron; unlike an aileron, however, it can be retracted partially into the wing when not in use. A laminar flow wing has its maximum thickness near the middle of the camber line. Analyzing the Navier–Stokes equations in the linear regime shows that a negative pressure gradient along the flow has the same effect as reducing the speed. So with the maximum camber in the middle, maintaining a laminar flow over a larger percentage of the wing at a higher cruising speed is possible. However, some surface contamination will disrupt the laminar flow, making it turbulent. For example, with rain on the wing, the flow will be turbulent. Under certain conditions, insect debris on the wing will cause the loss of small regions of laminar flow as well. Before NASA's research in the 1970s and 1980s, the aircraft design community understood from application attempts in the WW II era that laminar flow wing designs were not practical using common manufacturing tolerances and surface imperfections. That belief changed after new manufacturing methods were developed with composite materials (e.g. laminar-flow airfoils developed by Professor Franz Wortmann for use with wings made of fibre-reinforced plastic). Machined metal methods were also introduced. NASA's research in the 1980s revealed the practicality and usefulness of laminar flow wing designs and opened the way for laminar-flow applications on modern practical aircraft surfaces, from subsonic general aviation aircraft to transonic large transport aircraft, to supersonic designs. Schemes have been devised to define airfoils – an example is the NACA system.
Various airfoil generation systems are also used. An example of a general purpose airfoil that finds wide application, and pre-dates the NACA system, is the Clark Y. Today, airfoils can be designed for specific functions by the use of computer programs. Airfoil terminology The various terms related to airfoils are defined below: The suction surface (a.k.a. upper surface) is generally associated with higher velocity and lower static pressure. The pressure surface (a.k.a. lower surface) has a comparatively higher static pressure than the suction surface. The pressure gradient between these two surfaces contributes to the lift force generated for a given airfoil. The geometry of the airfoil is described with a variety of terms: The leading edge is the point at the front of the airfoil that has maximum curvature (minimum radius). The trailing edge is the point on the airfoil most remote from the leading edge. The angle between the upper and lower surfaces at the trailing edge is the trailing edge angle. The chord line is the straight line connecting the leading and trailing edges. The chord length, or simply chord, c, is the length of the chord line; it is the reference dimension of the airfoil section. The shape of the airfoil is defined using the following geometrical parameters: The mean camber line or mean line is the locus of points midway between the upper and lower surfaces. Its shape depends on the thickness distribution along the chord. The thickness of an airfoil varies along the chord. It may be measured in either of two ways: thickness measured perpendicular to the camber line, sometimes described as the "American convention"; or thickness measured perpendicular to the chord line, sometimes described as the "British convention". Some important parameters to describe an airfoil's shape are its camber and its thickness. For example, an airfoil of the NACA 4-digit series such as the NACA 2415 (to be read as 2 – 4 – 15) describes an airfoil with a maximum camber of 0.02 chord located at 0.40 chord, and a maximum thickness of 0.15 chord. Finally, important concepts used to describe the airfoil's behaviour when moving through a fluid are: The aerodynamic center, which is the chord-wise location about which the pitching moment is independent of the lift coefficient and the angle of attack. The center of pressure, which is the chord-wise location about which the pitching moment is momentarily zero. On a cambered airfoil, the center of pressure is not a fixed location: it moves in response to changes in angle of attack and lift coefficient. In two-dimensional flow around a uniform wing of infinite span, the slope of the lift curve is determined primarily by the trailing edge angle. The slope is greatest if the angle is zero and decreases as the angle increases. For a wing of finite span, the aspect ratio of the wing also significantly influences the slope of the curve: as aspect ratio decreases, the slope also decreases. Thin airfoil theory Thin airfoil theory is a simple theory of airfoils that relates angle of attack to lift for incompressible, inviscid flows. It was devised by the German mathematician Max Munk and further refined by the British aerodynamicist Hermann Glauert and others in the 1920s. The theory idealizes the flow around an airfoil as two-dimensional flow around a thin airfoil. It can be imagined as addressing an airfoil of zero thickness and infinite wingspan.
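Before developing the theory further, the NACA 4-digit decoding just described can be made concrete. The following is a minimal Python sketch, assuming a unit chord and the published NACA 4-digit camber-line and thickness polynomials; the function name naca4 is this example's own invention, not a standard library call:

```python
import math

def naca4(code: str, x: float):
    """Camber and half-thickness of a NACA 4-digit section at station x/c.

    code: four digits, e.g. '2415' -> 2% max camber at 40% chord, 15% thick.
    x:    chordwise position as a fraction of chord, 0 <= x <= 1.
    Returns (yc, yt): camber-line height and half-thickness, both / chord.
    """
    m = int(code[0]) / 100.0   # maximum camber (first digit, % of chord)
    p = int(code[1]) / 10.0    # position of maximum camber (second digit, tenths of chord)
    t = int(code[2:]) / 100.0  # maximum thickness (last two digits, % of chord)

    # Mean camber line: two parabolic arcs joined at x = p.
    if p == 0.0:
        yc = 0.0               # symmetric section such as 0012
    elif x < p:
        yc = m / p**2 * (2.0*p*x - x**2)
    else:
        yc = m / (1.0 - p)**2 * ((1.0 - 2.0*p) + 2.0*p*x - x**2)

    # Thickness distribution, applied perpendicular to the camber line
    # (the "American convention" mentioned above).
    yt = 5.0 * t * (0.2969*math.sqrt(x) - 0.1260*x - 0.3516*x**2
                    + 0.2843*x**3 - 0.1015*x**4)
    return yc, yt

# NACA 2415: the camber line peaks at 0.02 chord at the 40% chord station.
print(naca4("2415", 0.40))
```

Full surface coordinates would offset yt normal to the camber line on each side of it; that final step is omitted here for brevity.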
Thin airfoil theory was particularly notable in its day because it provided a sound theoretical basis for the following important properties of airfoils in two-dimensional inviscid flow: (1) on a symmetric airfoil, the center of pressure and aerodynamic center are coincident and lie exactly one quarter of the chord behind the leading edge; (2) on a cambered airfoil, the aerodynamic center lies exactly one quarter of the chord behind the leading edge, but the position of the center of pressure moves when the angle of attack changes; and (3) the slope of the lift coefficient versus angle of attack line is 2π units per radian. As a consequence of (3), the section lift coefficient of a thin symmetric airfoil of infinite wingspan is c_l = 2πα, where c_l is the section lift coefficient and α is the angle of attack in radians, measured relative to the chord line. (The above expression is also applicable to a cambered airfoil where α is the angle of attack measured relative to the zero-lift line instead of the chord line.) Also as a consequence of (3), the section lift coefficient of a cambered airfoil of infinite wingspan is c_l = c_l0 + 2πα, where c_l0 is the section lift coefficient when the angle of attack is zero. Thin airfoil theory assumes the air is an inviscid fluid so does not account for the stall of the airfoil, which usually occurs at an angle of attack between 10° and 15° for typical airfoils. In the mid-late 2000s, however, a theory predicting the onset of leading-edge stall was proposed by Wallace J. Morris II in his doctoral thesis. Morris's subsequent refinements contain the details on the current state of theoretical knowledge on the leading-edge stall phenomenon. Morris's theory predicts the critical angle of attack for leading-edge stall onset as the condition at which a global separation zone is predicted in the solution for the inner flow. Morris's theory demonstrates that a subsonic flow about a thin airfoil can be described in terms of an outer region, around most of the airfoil chord, and an inner region, around the nose, that asymptotically match each other. As the flow in the outer region is dominated by classical thin airfoil theory, Morris's equations exhibit many components of thin airfoil theory. Derivation In thin airfoil theory, the width of the (2D) airfoil is assumed negligible, and the airfoil itself replaced with a 1D blade along its camber line, oriented at the angle of attack α. Let the position along the blade be x, ranging from 0 at the wing's front to c at the trailing edge; the camber of the airfoil, y(x), is assumed sufficiently small that one need not distinguish between x and position relative to the fuselage. The flow across the airfoil generates a circulation around the blade, which can be modeled as a vortex sheet of position-varying strength γ(x). The Kutta condition implies that γ(c) = 0, but the strength is singular at the bladefront, with γ(x) ∝ 1/√x for x → 0. If the main flow has density ρ and speed V, then the Kutta–Joukowski theorem gives that the total lift force is proportional to ρV ∫₀ᶜ γ(x) dx and its moment about the leading edge proportional to ρV ∫₀ᶜ x γ(x) dx. From the Biot–Savart law, the vorticity γ produces a flow field oriented normal to the airfoil at x. Since the airfoil is an impermeable surface, the flow must balance an inverse flow from V. By the small-angle approximation, V is inclined at angle α − dy/dx relative to the blade at position x, and the normal component is correspondingly V(α − dy/dx). Thus, γ(x) must satisfy the convolution equation (1/2π) ∫₀ᶜ γ(x′)/(x − x′) dx′ = V(α − dy/dx), which uniquely determines it in terms of known quantities.
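Before solving the convolution equation, the headline result c_l = 2π(α − α_L0) quoted above can be checked with a two-line calculation. A minimal Python sketch, assuming angles are supplied in degrees and that the zero-lift angle of the section is a known input; the function name is this example's own:

```python
import math

def cl_thin_airfoil(alpha_deg: float, alpha_l0_deg: float = 0.0) -> float:
    """Section lift coefficient from thin airfoil theory.

    c_l = 2*pi*(alpha - alpha_L0), with angles converted to radians.
    alpha_l0_deg is the zero-lift angle of attack (0 for a symmetric section).
    """
    return 2.0 * math.pi * math.radians(alpha_deg - alpha_l0_deg)

print(cl_thin_airfoil(5.0))        # symmetric section at 5 degrees: ~0.55
print(cl_thin_airfoil(5.0, -2.0))  # cambered section, alpha_L0 = -2 deg: ~0.77
```

No stall is modeled, consistent with the inviscid assumption noted above; the linear relation simply continues to arbitrary angles.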
An explicit solution of this convolution equation can be obtained through first the change of variables x = (c/2)(1 − cos θ), with θ running from 0 at the leading edge to π at the trailing edge, and then expanding both the local slope α − dy/dx and the strength γ as a nondimensionalized Fourier series in θ with a modified lead term: γ(θ) = 2V (A₀ (1 + cos θ)/sin θ + Σ Aₙ sin nθ), the sum running over n ≥ 1. The resulting lift and moment depend on only the first few terms of this series. The lift coefficient satisfies c_L = π(2A₀ + A₁) and the moment coefficient about the leading edge c_M(LE) = −c_L/4 − (π/4)(A₁ − A₂). The moment about the 1/4 chord point will thus be c_M(c/4) = −(π/4)(A₁ − A₂). From this it follows that the center of pressure is aft of the 'quarter-chord' point x = c/4, by Δx = (c/4)(π/c_L)(A₁ − A₂). The aerodynamic center is the position at which the pitching moment does not vary with a change in lift coefficient: ∂c_M/∂c_L = 0. Thin-airfoil theory shows that, in two-dimensional inviscid flow, the aerodynamic center is at the quarter-chord position.
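The Fourier construction above lends itself to a numerical check. The sketch below evaluates the standard thin-airfoil coefficient integrals, A₀ = α − (1/π)∫₀^π (dy/dx) dθ and Aₙ = (2/π)∫₀^π (dy/dx) cos(nθ) dθ, with a midpoint rule; the function name and the discretization are choices of this example, not part of the theory:

```python
import math

def thin_airfoil_coeffs(dydx, alpha, n_pts=2000):
    """First Fourier terms of thin airfoil theory, evaluated numerically.

    dydx:  camber-line slope as a function of x/c on [0, 1]
    alpha: angle of attack in radians
    Returns (cl, cm_c4): lift coefficient and quarter-chord moment coefficient.
    """
    dtheta = math.pi / n_pts
    a0, a1, a2 = alpha, 0.0, 0.0
    for i in range(n_pts):
        theta = (i + 0.5) * dtheta          # midpoint rule on (0, pi)
        x = 0.5 * (1.0 - math.cos(theta))   # x/c = (1 - cos(theta)) / 2
        s = dydx(x)
        a0 -= s * dtheta / math.pi
        a1 += 2.0 * s * math.cos(theta) * dtheta / math.pi
        a2 += 2.0 * s * math.cos(2.0 * theta) * dtheta / math.pi
    cl = math.pi * (2.0 * a0 + a1)          # = 2*pi*(A0 + A1/2)
    cm_c4 = -(math.pi / 4.0) * (a1 - a2)    # independent of alpha
    return cl, cm_c4

# Flat plate (zero camber): recovers cl = 2*pi*alpha and cm_c/4 = 0.
print(thin_airfoil_coeffs(lambda x: 0.0, math.radians(5.0)))
```

For a parabolic camber line the same routine yields A₁ > 0 and A₂ = 0, reproducing the nose-down quarter-chord moment and the aft shift of the center of pressure predicted above.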
Technology
Aircraft components
null
290189
https://en.wikipedia.org/wiki/Tringa
Tringa
Tringa is a genus of waders, containing the shanks and tattlers. The genus name Tringa is the Neo-Latin name given to the green sandpiper by the Italian naturalist Ulisse Aldrovandi in 1599. They are mainly freshwater birds, often with brightly coloured legs as reflected in the English names of six species, as well as the specific names of two of these and the green sandpiper. They are typically associated with northern hemisphere temperate regions for breeding. Some of this group—notably the green sandpiper—nest in trees, using the old nests of other birds, usually thrushes. The willet and the tattlers have been found to belong in Tringa; these genus changes were formally adopted by the American Ornithologists' Union in 2006. The present genus in the old, more limited sense was even further subdivided into Tringa proper and Totanus, either as subgenera or as full genera. The available DNA sequence data suggest, however, that neither of these is monophyletic and that the latter simply lumps together a number of more or less closely related apomorphic species. Therefore, it seems unwarranted to recognize Totanus even as a subgenus for the time being. Taxonomy The genus Tringa was introduced in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae. The name Tringa is the Neo-Latin name given to the green sandpiper by the Italian naturalist Ulisse Aldrovandi in 1599, based on Ancient Greek trungas, a thrush-sized, white-rumped, tail-bobbing wading bird mentioned by Aristotle. The type species is the green sandpiper (Tringa ochropus). Species The genus contains 13 species. Systematics and evolution The shanks' and tattlers' closest relatives are sandpipers of the genera Actitis and Xenus. Together with these, they are related to the phalaropes, as well as the turnstones and calidrids. The large genus Tringa and the two very small genera most closely related to it form a phylogeny similar to the situation found in many other shorebird lineages such as calidrids, snipes and woodcocks, or gulls. The same study has indicated that some morphological characters such as details of the furcula and pelvis have evolved convergently and are no indicators of close relationship. Similarly, leg and foot color varies widely between close relatives: the spotted redshank, the greater yellowlegs, and the common greenshank, for example, are more closely related to each other than to any other species in the group, yet the ancestral coloration of the legs and feet was fairly certainly drab buffish, as in e.g. the green sandpiper. On the other hand, the molecular phylogeny reveals that the general habitus and size as well as the overall plumage pattern are good indicators of an evolutionary relationship in this group. The Nordmann's greenshank, a rare and endangered species, was not available for molecular analyses. It is fairly aberrant and was formerly placed in the monotypic genus Pseudototanus. It appears closest overall to the semipalmata-flavipes and the stagnatilis-totanus-glareola groups, though it also has some similarities to the greater yellowlegs and common greenshank. Fossil record Fossil shanks are known since the Miocene, possibly even since the Eo-/Oligocene some 33–30 million years ago (mya), which would be far earlier than most extant genera of birds. However, it is uncertain whether Tringa edwardsi indeed belongs in the present-day genus or is a distinct, ancestral form.
The time of the Tringa-Actitis-Xenus-Phalaropus divergence has been tentatively dated at 22 mya, the beginning of the Miocene; even if the dating is largely conjectural, it suggests that T. edwardsi does indeed not belong in the modern genus. Molecular dating—which is not too reliable, however—indicates that the diversification into the known lineages occurred between 20 and 5 mya. The fossil record contains species formerly separated in Totanus from the Early Miocene onwards. Although these are usually known from very scant remains, the fact that apparently apomorphic Tringa as well as a putative phalarope are known from about 23–22 mya indicates that the shank-phalarope group had already diverged into the modern genera by the start of the Miocene. The biogeography of living and fossil species—notably, the rarity of the latter in well-researched North American sites—seems to suggest that Tringa originated in Eurasia. Time and place neatly coincide with the disappearance of the last vestiges of the Turgai Sea, and this process may well have been a major factor in the separation of the genera in the shank-phalarope clade. Still, scolopacids are very similar osteologically, and many of the early fossils of presumed shanks require re-evaluation. ?Tringa edwardsi (Quercy Late Eocene/Early Oligocene of Mouillac, France) ?Tringa gracilis (Early Miocene of WC Europe) – calidrid? ?Tringa lartetianus (Early Miocene of Saint-Gérand-le-Puy, France) Tringa spp. (Early Miocene of Ravolzhausen, Germany – Early Pleistocene of Europe) ?Tringa grivensis (Middle Miocene of Grive-Saint-Alban, France) ?Tringa majori (Middle Miocene of Grive-Saint-Alban, France) ?Tringa minor (Middle Miocene of Grive-Saint-Alban, France) – includes "Erolia" ennouchii; calidrid? ?Tringa grigorescui (Middle Miocene of Ciobăniţa, Romania) ?Tringa scarabellii (Late Miocene of Senigallia, Italy) Tringa sp. 1 (Late Miocene/Early Pliocene of Lee Creek Mine, USA) Tringa sp. 2 (Late Miocene/Early Pliocene of Lee Creek Mine, USA) ?Tringa numenioides (Early Pliocene of Odesa, Ukraine) Tringa antiqua (Late Pliocene of Meade County, USA) Tringa ameghini (Late Pleistocene of Talara Tar Seeps, Peru) "Tringa" hoffmanni is now in Ludiortyx. While its relationships are disputed, it was not a charadriiform.
Biology and health sciences
Charadriiformes
Animals
290236
https://en.wikipedia.org/wiki/Gymnosperm
Gymnosperm
The gymnosperms are a group of woody, perennial seed-producing plants, typically lacking the protective outer covering which surrounds the seeds in flowering plants, that include conifers, cycads, Ginkgo, and gnetophytes, forming the clade Gymnospermae. The term gymnosperm comes from the composite Greek word γυμνόσπερμος (γυμνός, gymnos, 'naked', and σπέρμα, sperma, 'seed'), and literally means 'naked seeds'. The name is based on the unenclosed condition of their seeds (called ovules in their unfertilized state). The non-encased condition of their seeds contrasts with the seeds and ovules of flowering plants (angiosperms), which are enclosed within an ovary. Gymnosperm seeds develop either on the surface of scales or leaves, which are often modified to form cones, or on their own as in yew, Torreya, and Ginkgo. The life cycle of a gymnosperm involves alternation of generations, with a dominant diploid sporophyte phase and a reduced haploid gametophyte phase, which is dependent on the sporophytic phase. The term "gymnosperm" is often used in paleobotany to refer to (the paraphyletic group of) all non-angiosperm seed plants. In that case, to specify the modern monophyletic group of gymnosperms, the term Acrogymnospermae is sometimes used. The gymnosperms and angiosperms together constitute the spermatophytes or seed plants. The spermatophytes are subdivided into five divisions, the angiosperms and four divisions of gymnosperms: the Cycadophyta, Ginkgophyta, Gnetophyta, and Pinophyta (also known as Coniferophyta). Newer classifications place the gnetophytes among the conifers. Numerous extinct seed plant groups are recognised, including those considered pteridosperms/seed ferns, as well as other groups like the Bennettitales. By far the largest group of living gymnosperms are the conifers (pines, cypresses, and relatives), followed by cycads, gnetophytes (Gnetum, Ephedra and Welwitschia), and Ginkgo biloba (a single living species). About 65% of gymnosperms are dioecious, but conifers are almost all monoecious. Some genera have ectomycorrhizal fungal associations with roots (Pinus), while in some others (Cycas) small specialised roots called coralloid roots are associated with nitrogen-fixing cyanobacteria. Diversity and origin Over 1,000 living species of gymnosperm exist. It was previously widely accepted that the gymnosperms originated in the Late Carboniferous period, replacing the lycopsid rainforests of the tropical region, but more recent phylogenetic evidence indicates that they diverged from the ancestors of angiosperms during the Early Carboniferous. The radiation of gymnosperms during the late Carboniferous appears to have resulted from a whole genome duplication event around 319 million years ago. Early characteristics of seed plants are evident in fossil progymnosperms of the late Devonian period, around 383 million years ago. It has been suggested that during the mid-Mesozoic era, pollination of some extinct groups of gymnosperms was by extinct species of scorpionflies that had a specialized proboscis for feeding on pollination drops. The scorpionflies likely engaged in pollination mutualisms with gymnosperms, long before the similar and independent coevolution of nectar-feeding insects on angiosperms. Evidence has also been found that mid-Mesozoic gymnosperms were pollinated by Kalligrammatid lacewings, a now-extinct family with members which (in an example of convergent evolution) resembled the modern butterflies that arose far later.
All gymnosperms are perennial woody plants. Unlike in other extant gymnosperms, the soft, highly parenchymatous wood of cycads is poorly lignified, and their main structural support comes from an armor of sclerenchymatous leaf bases covering the stem, with the exception of species with underground stems. There are no herbaceous gymnosperms, and compared to angiosperms they occupy fewer ecological niches, but the group has evolved parasites (Parasitaxus), epiphytes (Zamia pseudoparasitica) and rheophytes (Retrophyllum minus). Conifers are by far the most abundant extant group of gymnosperms, with six to eight families, a total of 65–70 genera and 600–630 species (696 accepted names). Most conifers are evergreens. The leaves of many conifers are long, thin and needle-like, while other species, including most Cupressaceae and some Podocarpaceae, have flat, triangular scale-like leaves. Agathis in Araucariaceae and Nageia in Podocarpaceae have broad, flat strap-shaped leaves. Cycads, small palm-like trees, are the next most abundant group of gymnosperms, with two or three families, 11 genera, and approximately 338 species. A majority of cycads are native to tropical climates and are most abundantly found in regions near the equator. The other extant groups are the 95–100 species of gnetophytes and the one species of Ginkgo. The ginkgo or maidenhair tree is tall and has bilobed leaves, while the gnetophytes are a diverse group of plants and shrubs including the horizontally growing Welwitschia. Today, gymnosperms are the most threatened of all plant groups. Classification A formal classification of the living gymnosperms is the "Acrogymnospermae", which form a monophyletic group within the spermatophytes. The wider "Gymnospermae" group includes extinct gymnosperms and is thought to be paraphyletic. The fossil record of gymnosperms includes many distinctive taxa that do not belong to the four modern groups, including seed-bearing trees that have a somewhat fern-like vegetative morphology (the so-called "seed ferns" or pteridosperms). When fossil gymnosperms such as these and the Bennettitales, glossopterids, and Caytonia are considered, it is clear that angiosperms are nested within a larger gymnospermae clade, although which group of gymnosperms is their closest relative remains unclear. The extant gymnosperms include 12 main families and 83 genera which contain more than 1000 known species.
Subclass Cycadidae Order Cycadales Family Cycadaceae: Cycas Family Zamiaceae: Dioon, Bowenia, Macrozamia, Lepidozamia, Encephalartos, Stangeria, Ceratozamia, Microcycas, Zamia Subclass Ginkgoidae Order Ginkgoales Family Ginkgoaceae: Ginkgo Subclass Gnetidae Order Welwitschiales Family Welwitschiaceae: Welwitschia Order Gnetales Family Gnetaceae: Gnetum Order Ephedrales Family Ephedraceae: Ephedra Subclass Pinidae Order Pinales Family Pinaceae: Cedrus, Pinus, Cathaya, Picea, Pseudotsuga, Larix, Pseudolarix, Tsuga, Nothotsuga, Keteleeria, Abies Order Araucariales Family Araucariaceae: Araucaria, Wollemia, Agathis Family Podocarpaceae: Phyllocladus, Lepidothamnus, Prumnopitys, Sundacarpus, Halocarpus, Parasitaxus, Lagarostrobos, Manoao, Saxegothaea, Microcachrys, Pherosphaera, Acmopyle, Dacrycarpus, Dacrydium, Falcatifolium, Retrophyllum, Nageia, Afrocarpus, Podocarpus Order Cupressales Family Sciadopityaceae: Sciadopitys Family Cupressaceae: Cunninghamia, Taiwania, Athrotaxis, Metasequoia, Sequoia, Sequoiadendron, Cryptomeria, Glyptostrobus, Taxodium, Papuacedrus, Austrocedrus, Libocedrus, Pilgerodendron, Widdringtonia, Diselma, Fitzroya, Callitris, Actinostrobus, Neocallitropsis, Thujopsis, Thuja, Fokienia, Chamaecyparis, Cupressus, Juniperus, Calocedrus, Tetraclinis, Platycladus, Microbiota Family Taxaceae: Austrotaxus, Pseudotaxus, Taxus, Cephalotaxus, Amentotaxus, Torreya Extinct groupings Order Cordaitales Order Calamopityales Order Callistophytales Order Caytoniales Order Gigantopteridales Order Glossopteridales Order Lyginopteridales Order Medullosales Order Peltaspermales Order Corystospermales (also known as Umkomasiales) Order Czekanowskiales Order Bennettitales (cycadeoids) Order Erdtmanithecales Order Pentoxylales Order Petriellales Life cycle Gymnosperms, like all vascular plants, have a sporophyte-dominant life cycle, which means they spend most of their life cycle with diploid cells, while the gametophyte (gamete-bearing phase) is relatively short-lived. Like all seed plants, they are heterosporous, having two spore types: microspores (male), produced in microsporangia, and megaspores (female), produced in megasporangia, which are typically present in pollen cones and ovulate cones respectively. The microsporangia are carried by microsporophylls (modified leaves) and the seeds are carried by ovuliferous scales, in the male and female cones respectively. The exception is females of the cycad genus Cycas, which form a loose structure called megasporophylls instead of cones. As with all heterosporous plants, the gametophytes develop within the spore wall. Pollen grains (microgametophytes) mature from microspores, and ultimately produce sperm cells. Megagametophytes develop from megaspores and are retained within the ovule. Gymnosperms produce multiple archegonia, which produce the female gametes. During pollination, pollen grains are physically transferred between plants, from the pollen cone to the ovule. Pollen is usually moved by wind or insects. Whole grains enter each ovule through a microscopic gap in the ovule coat (integument) called the micropyle. The pollen grains mature further inside the ovule and produce sperm cells. Two main modes of fertilization are found in gymnosperms. Cycads and Ginkgo have flagellated motile sperm that swim directly to the egg inside the ovule, whereas conifers and gnetophytes have sperm with no flagella that are moved along a pollen tube to the egg.
After syngamy (joining of the sperm and egg cell), the zygote develops into an embryo (young sporophyte). More than one embryo is usually initiated in each gymnosperm seed. The mature seed comprises the embryo and the remains of the female gametophyte, which serves as a food supply, and the seed coat. Gymnosperms ordinarily reproduce by sexual reproduction, and only rarely express parthenogenesis. Sexual reproduction in gymnosperms appears to be required for maintaining long-term genomic integrity. Meiosis in sexual land plants provides a direct mechanism for repairing DNA in reproductive tissues. The likely primary benefit of cross-pollination in gymnosperms, as in other eukaryotes, is that it allows the avoidance of inbreeding depression caused by the presence of recessive deleterious mutations. Genetics The first published sequenced genome for any gymnosperm was the genome of Picea abies in 2013. Uses Gymnosperms have major economic uses. Some, such as pine, fir, spruce, and cedar, are used for lumber, paper production, and resin. Some other common uses for gymnosperms are soap, varnish, nail polish, food, gum, and perfumes.
Biology and health sciences
Gymnosperms
null
290462
https://en.wikipedia.org/wiki/Badger
Badger
Badgers are short-legged omnivores in the family Mustelidae (which also includes the otters, wolverines, martens, minks, polecats, weasels, and ferrets). Badgers are a polyphyletic rather than a natural taxonomic grouping, being united by their squat bodies and adaptations for fossorial activity. All belong to the caniform suborder of carnivoran mammals. The fifteen species of mustelid badgers are grouped in four subfamilies: seven species of Melinae (genera Meles and Arctonyx) including the European badger, six species of Helictidinae (genus Melogale) or ferret-badgers, the honey badger or ratel of the Mellivorinae (genus Mellivora), and the American badger of the Taxidiinae (genus Taxidea). Badgers include the most basal mustelids; the American badger is the most basal of all, followed successively by the ratel and the Melinae; the estimated split dates are about 17.8, 15.5 and 14.8 million years ago, respectively. The two species of Asiatic stink badgers of the genus Mydaus were formerly included within Melinae (and thus Mustelidae), but more recent genetic evidence indicates these are actually members of the skunk family (Mephitidae). Badger mandibular condyles connect to long cavities in their skulls, which gives resistance to jaw dislocation and increases their bite grip strength. This in turn limits jaw movement to hinging open and shut, or sliding from side to side, without the twisting movement possible for the jaws of most mammals. Badgers have rather short, wide bodies, with short legs for digging. They have elongated, weasel-like heads with small ears. Their tails vary in length depending on species; the stink badger has a very short tail, while the ferret-badger's tail can be long, depending on age. They have black faces with distinctive white markings, grey bodies with a light-coloured stripe from head to tail, and dark legs with light-coloured underbellies. They grow to around in length, including tail. The European badger is one of the largest; the American badger, the hog badger, and the honey badger are generally a little smaller and lighter. Stink badgers are smaller still, and ferret-badgers are the smallest of all. They weigh around , while some Eurasian badgers weigh around . Etymology The word "badger", originally applied to the European badger (Meles meles), comes from earlier bageard (16th century), presumably referring to the white mark borne like a badge on its forehead. Similarly, a now archaic synonym was bauson 'badger' (1375), a variant of bausond 'striped, piebald', from Old French bausant, baucent 'id.'. The less common name brock (Old English: brocc), (Scots: brock) is a Celtic loanword (cf. Gaelic broc and Welsh broch, from Proto-Celtic *brokkos) meaning "grey". The Proto-Germanic term was *þahsuz (cf. German Dachs, Dutch das, Norwegian svintoks; Early Modern English dasse), probably from the PIE root *tek'- "to construct," so the badger would have been named after its digging of setts (tunnels); the Germanic term *þahsuz became taxus or taxō, -ōnis in Latin glosses, replacing mēlēs ("marten" or "badger"), and from these words the common Romance terms for the animal evolved (Italian tasso, French taisson—blaireau is now more common—Catalan toixó, Spanish tejón, Portuguese texugo). A male European badger is a boar, a female is a sow, and a young badger is a cub. However, in North America the young are usually called kits, while the terms male and female are generally used for adults.
A collective name suggested for a group of colonial badgers is a cete, but badger colonies are more often called clans. A badger's home is called a sett. Classification The following list shows where the various species with the common name of badger are placed in the Mustelidae and Mephitidae classifications. The list is polyphyletic and the species commonly called badgers do not form a valid clade. Family Mustelidae Subfamily Melinae Genus Arctonyx Northern hog badger, Arctonyx albogularis Greater hog badger, Arctonyx collaris Sumatran hog badger, Arctonyx hoevenii Genus Meles Japanese badger, Meles anakuma Asian badger, Meles leucurus European badger, Meles meles Caucasian badger, Meles canescens Subfamily Helictidinae Genus Melogale Burmese ferret-badger, Melogale personata Javan ferret-badger, Melogale orientalis Chinese ferret-badger, Melogale moschata Formosan ferret-badger, Melogale subaurantiaca Bornean ferret-badger, Melogale everetti Vietnam ferret-badger, Melogale cucphuongensis Subfamily Mellivorinae Honey badger, Mellivora capensis Subfamily Taxidiinae: †Chamitataxus avitus †Pliotaxidea nevadensis †Pliotaxidea garberi American badger, Taxidea taxus Family Mephitidae Subfamily Mydainae Genus Mydaus Indonesian or Sunda stink badger (teledu), Mydaus javanensis Palawan stink badger, Mydaus marchei Distribution Badgers are found in much of North America, Great Britain, Ireland and most of the rest of Europe as far north as southern Scandinavia. They live as far east as Japan, Korea and China. The Javan ferret-badger lives in Indonesia, and the Bornean ferret-badger lives in Malaysia. The honey badger is found in most of sub-Saharan Africa, the Arabian Desert, southern Levant, Turkmenistan, Pakistan and India. Behaviour The behaviour of badgers differs by family, but all shelter underground, living in burrows called setts, which may be very extensive. Some are solitary, moving from home to home, while others are known to form clans called cetes. Cete size is variable from two to 15. Badgers can run or gallop at for short periods of time. Some species, notably the honey badger, can climb well. In March 2024, scientists released footage of a wild Asian badger climbing a tree to a height of 2.5 m in South Korea. Badgers are nocturnal. In North America, coyotes sometimes eat badgers and vice versa, but the majority of their interactions seem to be mutual or neutral. American badgers and coyotes have been seen hunting together in a cooperative fashion. Diet The diet of the Eurasian badger consists largely of earthworms (especially Lumbricus terrestris), insects, grubs, and the eggs and young of ground-nesting birds. They also eat small mammals, amphibians, reptiles and birds, as well as roots and fruit. In Britain, they are the main predator of hedgehogs, which have demonstrably lower populations in areas where badgers are numerous, so much so that hedgehog rescue societies do not release hedgehogs into known badger territories. They are occasional predators of domestic chickens, and are able to break into enclosures that a fox cannot. In southern Spain, badgers feed to a significant degree on rabbits. American badgers are fossorial carnivores – i.e. they catch a significant proportion of their food underground, by digging. They can tunnel after ground-dwelling rodents at speed. The honey badger of Africa consumes honey, porcupines, and even venomous snakes (such as the puff adder); they climb trees to gain access to honey from bees' nests. 
Badgers have been known to become intoxicated with alcohol after eating rotting fruit. Relation with humans Hunting Hunting badgers for sport has been common in many countries. The Dachshund (German for "badger hound") dog breed was bred for this purpose. Badger-baiting was formerly a popular blood sport. Although badgers are normally quite docile, they fight fiercely when cornered. This led people to capture and box badgers and then wager on whether a dog could succeed in removing the badger from its refuge. In England, opposition from naturalists led to the practice being banned under the Cruelty to Animals Act 1835, and the Protection of Badgers Act 1992 made it an offence to kill, injure, or take a badger or to interfere with a sett unless under license from a statutory authority. The Hunting Act 2004 further banned fox hunters from blocking setts during their chases. Badgers have been trapped commercially for their pelts, which have been used for centuries to make shaving brushes, a purpose to which badger hair is particularly suited owing to its high water retention. Virtually all commercially available badger hair now comes from mainland China, which has farms for the purpose. The Chinese supply three grades of hair to domestic and foreign brush makers. Village cooperatives are also licensed by the national government to hunt and process badgers to prevent them from becoming a crop nuisance in rural northern China. The European badger is also used as trim for some traditional Scottish clothing. The American badger is also used for paintbrushes and as trim for some Native American garments. Culling Controlling the badger population is prohibited in many European countries, since badgers are listed in the Berne Convention, but they are not otherwise the subject of any international treaty or legislation. Many badgers in Europe were gassed during the 1960s and 1970s to control rabies. Until the 1980s, badger culling in the United Kingdom was undertaken in the form of gassing, allegedly to control the spread of bovine tuberculosis (bTB). Limited culling resumed in 1998 as part of a 10-year randomised trial cull, which was considered by John Krebs and others to show that culling was ineffective. Some groups called for a selective cull, whilst others favoured a programme of vaccination. Wales and Northern Ireland are currently (2013) conducting field trials of a badger vaccination programme. In 2012 the government authorised a limited cull led by the Department for Environment, Food and Rural Affairs. However, it was later deferred, with a wide range of reasons given. In August 2013 a full culling programme began, whereby it was expected that about 5,000 badgers would be killed over six weeks in West Somerset and Gloucestershire using a mixture of controlled shooting and free shooting (some badgers were to be trapped in cages first). The cull caused many protests, with emotional, economic and scientific reasons being cited. The badger is considered an iconic species of the British countryside, and it has been claimed by shadow ministers that "The government's own figures show it will cost more than it saves...", and Lord Krebs, who led the Randomised Badger Culling Trial in the 1990s, said the two pilots "will not yield any useful information". Badger gates When woodlands are being protected from deer and rabbits, installing fences in badger territory can be problematic.
Because badgers are persistent and strong, if fences are placed across their "runs"—established foraging and travel paths—they may well dig through or under them, damaging the fence and leaving openings that rabbits can get through. Ideally, badger runs should be identified before fence construction begins. The gateways are constructed in stages over time to ensure that badgers are using the manmade openings instead of damaging the new fence: starting with leaving a cut opening in the fence at ground level, later laying a floor (threshold), later still framing the opening, and eventually hanging a small free-swinging door that is heavy enough that rabbits do not seem to learn how to push it open. The recommended door measures about 18 by 25 cm and weighs about 1.1 kg. With a special license, badger fencing and one-way gates may be installed to exclude resident badgers from an area being developed. Traditional medicine Badgers have been used in traditional medicine in Europe, Asia and Africa. Food Although rarely eaten today in the United States or the United Kingdom, badgers were once a primary meat source for the diets of Native Americans and European colonists. Badgers were also eaten in Britain during World War II and the 1950s. In some areas of Russia, the consumption of badger meat is still widespread. Shish kebabs made from badger, along with dog meat and pork, are a major source of trichinosis outbreaks in the Altai Region of Russia. In Croatia, badger meat is rarely eaten; when it is, it is usually smoked, dried, or served in goulash. In France, badger meat was used in the preparation of several dishes, such as Blaireau au sang, and it was a relatively common ingredient in countryside cuisine. Badger meat was eaten in some parts of Spain until recently. Pets Badgers are sometimes kept as pets. Keeping a badger as a pet or offering one for sale is an offence in the United Kingdom under the 1992 Protection of Badgers Act. In popular culture In Europe during the medieval period, accounts of badgers in bestiaries described badgers as working together to dig holes under mountains. They were said to lie down at the entrance of the hole holding a stick in their mouths, while other badgers piled dirt on their bellies. Two badgers would then take hold of the stick in the badger's mouth and drag the animal loaded with dirt away, almost in the fashion of a wagon. The moralizing component of bestiaries often took precedence over their function as natural history texts, and this description of badgers most likely reflects an allegorical exemplar rather than what everyday people in the Middle Ages might or might not have believed about how badgers behave in the wild. The 19th-century poem "The Badger" by John Clare describes a badger hunt and badger-baiting. The character Frances in Russell Hoban's children's books, beginning with Bedtime for Frances (1948–1970), is depicted as a badger. Trufflehunter is a heroic badger in the Chronicles of Narnia book Prince Caspian (1951) by C. S. Lewis. Badger characters are featured in author Brian Jacques' Redwall series (1986–2011), where they are depicted as feared warriors, most often falling under the title of Badger Lord or Badger Mother. A badger character is featured in The Immortals (1992–1996) by Tamora Pierce, and "The Badger" is a comic book hero created by Mike Baron. The badger is the emblem of the Hufflepuff house of the Hogwarts School of Witchcraft and Wizardry in J. K.
Rowling's Harry Potter book series (1997–2007). It was chosen as such because the badger is an animal that is often underestimated: it lives quietly until attacked, but, when provoked, can fight off animals much larger than itself, traits which resemble the Hufflepuff house in several ways. Many other stories feature badgers as characters, including Kenneth Grahame's children's novel The Wind in the Willows (1908), Beatrix Potter's The Tale of Mr. Tod (1912; featuring the badger Tommy Brock), the Rupert Bear adventures by Mary Tourtel (appearing since 1920), T. H. White's Arthurian fantasy novels The Once and Future King (1958, written 1938–41) and The Book of Merlyn (1977), Fantastic Mr. Fox (1970) by Roald Dahl, Richard Adams's Watership Down (1972), Colin Dann's The Animals of Farthing Wood (1979), and Erin Hunter's Warriors (appearing since 2003). In the historic novel Incident at Hawk's Hill (1971) by Allan W. Eckert, a badger is one of the main characters. Badgers are also featured in films and animations: a flash video called Badgers shows a cete doing calisthenics. The 1973 Disney animated film Robin Hood depicts the character of Friar Tuck as a badger. In the Doctor Snuggles series, Dennis the handyman was a badger. In Europe, badgers were traditionally used to predict the length of winter. The badger is the state animal of the U.S. state of Wisconsin, though this is a reference to the state's early miners rather than the animal itself, and Bucky Badger is the mascot of the athletic teams at the University of Wisconsin–Madison. The badger is also the official mascot of Brock University in St. Catharines, Ontario, Canada; the University of Sussex, England; and St Aidan's College at the University of Durham. In 2007, the appearance of honey badgers around the British base at Basra, Iraq, fueled rumours among the locals that British forces had deliberately released "man-eating" and "bear-like" badgers to spread panic. These allegations were denied by the British army and the director of Basra's veterinary hospital. On 28 August 2013, the PC video game Shelter was released by developers Might and Delight, in which players control a mother badger protecting her cubs. An internet meme (Badger, badger, badger) went viral in the early years of YouTube, later inspiring other versions of the animation. As a sub-series of the Sonic the Hedgehog franchise, Sticks the Badger is one of the main characters of the Sonic Boom series.
Biology and health sciences
Mustelidae
Animals
4901720
https://en.wikipedia.org/wiki/Arthropod%20leg
Arthropod leg
The arthropod leg is a form of jointed appendage of arthropods, usually used for walking. Many of the terms used for arthropod leg segments (called podomeres) are of Latin origin, and may be confused with terms for bones: coxa (meaning hip; plural coxae), trochanter, femur (plural femora), tibia (plural tibiae), tarsus (plural tarsi), ischium (plural ischia), metatarsus, carpus, dactylus (meaning finger), patella (plural patellae). Homologies of leg segments between groups are difficult to prove and are the source of much argument. Some authors posit up to eleven segments per leg for the most recent common ancestor of extant arthropods, but modern arthropods have eight or fewer. It has been argued that the ancestral leg need not have been so complex, and that other events, such as successive loss of function of a Hox gene, could result in parallel gains of leg segments. In arthropods, each of the leg segments articulates with the next segment in a hinge joint and may only bend in one plane. This means that a greater number of segments is required to achieve the same kinds of movements that are possible in vertebrate animals, which have rotational ball-and-socket joints at the base of the fore and hind limbs. Biramous and uniramous The appendages of arthropods may be either biramous or uniramous. A uniramous limb comprises a single series of segments attached end-to-end. A biramous limb, however, branches into two, and each branch consists of a series of segments attached end-to-end. The external branch (ramus) of the appendages of crustaceans is known as the exopod or exopodite, while the internal branch is known as the endopod or endopodite. Other structures aside from the latter two are termed exites (outer structures) and endites (inner structures). Exopodites can be easily distinguished from exites by the possession of internal musculature. The exopodites can sometimes be missing in some crustacean groups (amphipods and isopods), and they are completely absent in insects. The legs of insects and myriapods are uniramous. In crustaceans, the first antennae are uniramous, but the second antennae are biramous, as are the legs in most species. For a time, possession of uniramous limbs was believed to be a shared, derived character, so uniramous arthropods were grouped into a taxon called Uniramia. It is now believed that several groups of arthropods evolved uniramous limbs independently from ancestors with biramous limbs, so this taxon is no longer used. Chelicerata Arachnid legs differ from those of insects by the addition of two segments on either side of the tibia, the patella between the femur and the tibia, and the metatarsus (sometimes called basitarsus) between the tibia and the tarsus (sometimes called telotarsus), making a total of seven segments. The tarsus of spiders has claws at the end, as well as a hook that helps with web-spinning. Spider legs can also serve sensory functions, with hairs that serve as touch receptors, as well as an organ on the tarsus that serves as a humidity receptor, known as the tarsal organ. The situation is identical in scorpions, but with the addition of a pre-tarsus beyond the tarsus. The claws of the scorpion are not truly legs, but are pedipalps, a different kind of appendage that is also found in spiders and is specialised for predation and mating. In Limulus, there are no metatarsi or pretarsi, leaving six segments per leg. Crustacea The legs of crustaceans are divided primitively into seven segments, which do not follow the naming system used in the other groups.
They are: coxa, basis, ischium, merus, carpus, propodus, and dactylus. In some groups, some of the limb segments may be fused together. The claw (chela) of a lobster or crab is formed by the articulation of the dactylus against an outgrowth of the propodus. Crustacean limbs also differ in being biramous, whereas all other extant arthropods have uniramous limbs. Myriapoda Myriapods (millipedes, centipedes and their relatives) have seven-segmented walking legs, comprising coxa, trochanter, prefemur, femur, tibia, tarsus, and a tarsal claw. Myriapod legs show a variety of modifications in different groups. In all centipedes, the first pair of legs is modified into a pair of venomous fangs called forcipules. In most millipedes, one or two pairs of walking legs in adult males are modified into sperm-transferring structures called gonopods. In some millipedes, the first leg pair in males may be reduced to tiny hooks or stubs, while in others the first pair may be enlarged. Insects Insects and their relatives are hexapods, having six legs, connected to the thorax, each with five components. In order from the body they are the coxa, trochanter, femur, tibia, and tarsus. Each is a single segment, except the tarsus which can be from three to seven segments, each referred to as a tarsomere. Except in species in which legs have been lost or become vestigial through evolutionary adaptation, adult insects have six legs, one pair attached to each of the three segments of the thorax. They have paired appendages on some other segments, in particular, mouthparts, antennae and cerci, all of which are derived from paired legs on each segment of some common ancestor. Some larval insects do however have extra walking legs on their abdominal segments; these extra legs are called prolegs. They are found most frequently on the larvae of moths and sawflies. Prolegs do not have the same structure as modern adult insect legs, and there has been a great deal of debate as to whether they are homologous with them. Current evidence suggests that they are indeed homologous up to a very primitive stage in their embryological development, but that their emergence in modern insects was not homologous between the Lepidoptera and Symphyta. Such concepts are pervasive in current interpretations of phylogeny. In general, the legs of larval insects, particularly in the Endopterygota, vary more than in the adults. As mentioned, some have prolegs as well as "true" thoracic legs. Some have no externally visible legs at all (though they have internal rudiments that emerge as adult legs at the final ecdysis). Examples include the maggots of flies or grubs of weevils. In contrast, the larvae of other Coleoptera, such as the Scarabaeidae and Dytiscidae have thoracic legs, but no prolegs. Some insects that exhibit hypermetamorphosis begin their metamorphosis as planidia, specialised, active, legged larvae, but they end their larval stage as legless maggots, for example the Acroceridae. Among the Exopterygota, the legs of larvae tend to resemble those of the adults in general, except in adaptations to their respective modes of life. For example, the legs of most immature Ephemeroptera are adapted to scuttling beneath underwater stones and the like, whereas the adults have more gracile legs that are less of a burden during flight. Again, the young of the Coccoidea are called "crawlers" and they crawl around looking for a good place to feed, where they settle down and stay for life. 
Their later instars have no functional legs in most species. Among the Apterygota, the legs of immature specimens are in effect smaller versions of the adult legs. Fundamental morphology of insect legs A representative insect leg, such as that of a housefly or cockroach, has the following parts, in sequence from most proximal to most distal: coxa, trochanter, femur, tibia, tarsus, pretarsus. Associated with the leg itself there are various sclerites around its base. Their functions are articular and have to do with how the leg attaches to the main exoskeleton of the insect. Such sclerites differ considerably between unrelated insects. Coxa The coxa is the proximal segment and functional base of the leg. It articulates with the pleuron and associated sclerites of its thoracic segment, and in some species it articulates with the edge of the sternite as well. The homologies of the various basal sclerites are open to debate. Some authorities suggest that they derive from an ancestral subcoxa. In many species, the coxa has two lobes where it articulates with the pleuron. The posterior lobe is the meron, which is usually the larger part of the coxa. A meron is well developed in Periplaneta, the Isoptera, Neuroptera and Lepidoptera. Trochanter The trochanter articulates with the coxa but usually is attached rigidly to the femur. In some insects, its appearance may be confusing; for example, it has two subsegments in the Odonata. In parasitic Hymenoptera, the base of the femur has the appearance of a second trochanter. Femur In most insects, the femur is the largest region of the leg; it is especially conspicuous in many insects with saltatorial legs, because the typical leaping mechanism is to straighten the joint between the femur and the tibia, and the femur contains the necessary massive bipennate musculature. Tibia The tibia is the fourth section of the typical insect leg. As a rule, the tibia of an insect is slender in comparison to the femur, but it generally is at least as long and often longer. Near the distal end, there is generally a tibial spur, often two or more. In the Apocrita, the tibia of the foreleg bears a large apical spur that fits over a semicircular gap in the first segment of the tarsus. The gap is lined with comb-like bristles, and the insect cleans its antennae by drawing them through. Tarsus The ancestral tarsus was a single segment, and in the extant Protura, Diplura and certain insect larvae the tarsus also is single-segmented. Most modern insects have tarsi divided into subsegments (tarsomeres), usually about five. The actual number varies with the taxon, which may be useful for diagnostic purposes. For example, the Pterogeniidae characteristically have 5-segmented fore- and mid-tarsi, but 4-segmented hind tarsi, whereas the Cerylonidae have four tarsomeres on each tarsus. The distal segment of the typical insect leg is the pretarsus. In the Collembola, Protura and many insect larvae, the pretarsus is a single claw. On the pretarsus most insects have a pair of claws (ungues, singular unguis). Between the ungues, a median unguitractor plate supports the pretarsus. The plate is attached to the apodeme of the flexor muscle of the ungues. In the Neoptera, the parempodia are a symmetrical pair of structures arising from the outside (distal) surface of the unguitractor plate between the claws. They are present in many Hemiptera and almost all Heteroptera. Usually, the parempodia are bristly (setiform), but in a few species they are fleshy.
Sometimes the parempodia are reduced in size so as to almost disappear. Above the unguitractor plate, the pretarsus expands forward into a median lobe, the arolium. Webspinners (Embioptera) have an enlarged basal tarsomere on each of the front legs, containing the silk-producing glands. Under their pretarsi, members of the Diptera generally have paired lobes or pulvilli, meaning "little cushions". There is a single pulvillus below each unguis. The pulvilli often have an arolium between them or otherwise a median bristle or empodium, meaning the meeting place of the pulvilli. On the underside of the tarsal segments, there frequently are pulvillus-like organs or plantulae. The arolium, plantulae and pulvilli are adhesive organs enabling their possessors to climb smooth or steep surfaces. They all are outgrowths of the exoskeleton and their cavities contain blood. Their structures are covered with tubular tenent hairs, the apices of which are moistened by a glandular secretion. The organs are adapted to apply the hairs closely to a smooth surface so that adhesion occurs through surface molecular forces. Insects control the ungues through muscle tension on a long tendon, the "retractor unguis" or "long tendon". In insect models of locomotion and motor control, such as Drosophila (Diptera), locusts (Acrididae), or stick insects (Phasmatodea), the long tendon courses through the tarsus and tibia before reaching the femur. Tension on the long tendon is controlled by two muscles, one in the femur and one in the tibia, which can operate differently depending on how the leg is bent. Tension on the long tendon controls the claw, but also bends the tarsus and likely affects its stiffness during walking. Variations in functional anatomy of insect legs The typical thoracic leg of an adult insect is adapted for running (cursorial), rather than for digging, leaping, swimming, predation, or other similar activities. The legs of most cockroaches are good examples. However, there are many specialized adaptations, including: The forelegs of mole crickets (Gryllotalpidae) and some scarab beetle (Scarabaeidae) are adapted to burrowing in earth (fossorial). The raptorial forelegs of mantidflies (Mantispidae), mantises (Mantodea), and ambush bugs (Phymatinae) are adapted to seizing and holding prey in one way, while those of whirligig beetles Gyrinidae are long and adapted for grasping food or prey in quite a different way. The forelegs of some butterflies, such as many Nymphalidae, are reduced so greatly that only two pairs of functional walking legs remain. In most grasshoppers and crickets (Orthoptera), the hind legs are saltatorial; they have heavily bipinnately muscled femora and straight, long tibiae adapted to leaping and to some extent to defence by kicking. Flea beetles (Alticini) also have powerful hind femora that enable them to leap spectacularly. Other beetles with spectacularly muscular hind femora may not be saltatorial at all, but very clumsy; for example, particular species of bean weevils (Bruchinae) use their swollen hind legs for forcing their way out of the hard-shelled seeds of plants such as Erythrina in which they grew to adulthood. The legs of the Odonata, the dragonflies and damselflies, are adapted for seizing prey that the insects feed on while flying or while sitting still on a plant; they are nearly incapable of using them for walking. 
The majority of aquatic insects use their legs only for swimming (natatorial), though many species of immature insects swim by other means, such as by wriggling, undulating, or expelling water in jets. Evolution and homology of arthropod legs The embryonic body segments (somites) of different arthropod taxa have diverged from a simple body plan with many similar, serially homologous appendages into a variety of body plans with fewer segments equipped with specialised appendages. The homologies between these have been discovered by comparing genes in evolutionary developmental biology.
Biology and health sciences
External anatomy and regions of the body
Biology
4902413
https://en.wikipedia.org/wiki/Trypanosoma%20cruzi
Trypanosoma cruzi
Trypanosoma cruzi is a species of parasitic euglenoid. Among the protozoa, the trypanosomes characteristically bore into the tissues of another organism and feed primarily on blood, and also on lymph. This behaviour causes disease, or the likelihood of disease, that varies with the organism: Chagas disease in humans, dourine and surra in horses, and a brucellosis-like disease in cattle. The parasites need a host body, and the haematophagous triatomine bug (variously called the "assassin bug", "cone-nose bug", and "kissing bug") is the major vector. The triatomine favours the nests of vertebrate animals for shelter, where it bites and sucks blood for food. Triatomines become infected with the protozoa through contact with infected animals, and they transmit trypanosomes when they deposit their faeces on the host's skin surface while blood feeding. Penetration of the infected faeces is further facilitated by the scratching of the bite area by the human or animal host. Etymology The specific name cruzi honours the Brazilian scientist Oswaldo Cruz, the teacher of the parasite's discoverer, Carlos Chagas. Life cycle The Trypanosoma cruzi life cycle starts in an animal reservoir, usually mammals, wild or domestic, including humans. A triatomine bug serves as the vector: while taking a blood meal, it ingests T. cruzi. In the triatomine bug (the principal species transmitting the parasite to humans being Triatoma infestans), the parasite enters the epimastigote stage, in which it can reproduce. After reproducing through binary fission, the epimastigotes move onto the rectal wall, where they become infectious. Infectious T. cruzi are called metacyclic trypomastigotes. When the triatomine bug subsequently takes a blood meal from a host, it defecates, its waste containing the propagation stages of T. cruzi. Accordingly, Trumper and Gorla (1991) found that transmission success centres on the triatomine's defecation behaviour. Alternatively, in nature and in most recent epidemiological outbreaks, infection occurs through the oral ingestion of parasites (in human infection, mainly through inadequate disinfection of contaminated food). The trypomastigotes in the faeces are capable of swimming into the host's cells using flagella, the characteristic swimming organelles of the euglenoid protists. The trypomastigotes enter the host through the bite wound or by crossing mucous membranes. The host cells carry macromolecules such as laminin, thrombospondin, heparan sulphate, and fibronectin covering their surface. These macromolecules are essential for adhesion between parasite and host and for the process of host invasion by the parasite. The trypomastigotes must cross this network of proteins lining the exterior of the host cells in order to make contact and invade them. Molecules and proteins on the cytoskeleton of the cell also bind to the surface of the parasite and initiate host invasion. Pathophysiology Trypanosomiasis in humans progresses with the development of the trypanosome into a trypomastigote in the blood and into an amastigote in tissues. As the infection progresses, the number of infected cells increases, as does the number of amastigotes per infected cell (APC). If the average APC is one or close to one, the infection has just begun; a higher APC means that the amastigotes have started to replicate. The acute form of trypanosomiasis usually goes unnoticed, although it may manifest itself as a localized swelling at the site of entry.
This form features elevated parasitism, myocarditis, and changes in myocardial gene expression. The chronic form may develop 30 to 40 years after infection and affect internal organs (e.g., the heart, the oesophagus, the colon, and the peripheral nervous system). Affected people may die from heart failure and severe heart lesions. Acute cases are treated with nifurtimox and benznidazole, but no effective therapy for chronic cases is currently known. Cardiac manifestations Researchers of Chagas' disease have demonstrated several processes that occur with all cardiomyopathies. The first event is an inflammatory response. Following inflammation, cellular damage occurs. Finally, in the body's attempt to recover from the cellular damage, fibrosis begins in the cardiac tissue. Another cardiomyopathy found in nearly all cases of chronic Chagas' disease is thromboembolic syndrome. Thromboembolism describes thrombosis, the formation of a clot, whose main complication is embolism, the carrying of a clot to a distal section of a vessel, where it causes blockage. This occurrence contributes to the death of a patient by four means: arrhythmias, stasis secondary to cardiac dilation, mural endocarditis, and cardiac fibrosis. These thrombi also affect other organs, such as the brain, spleen and kidney. Myocardial biochemical response Subcellular findings in murine studies with induced T. cruzi infection revealed that the chronic state is associated with the persistent elevation of phosphorylated (activated) extracellular-signal-regulated kinase (ERK), AP-1, and NF-κB. Cyclin D1, the mitotic regulator of G1 progression, was also found to be activated. Although there was no increase in any isoform of ERK, there was an increased concentration of phosphorylated ERK in mice infected with T. cruzi. Within seven days, the concentration of AP-1 was significantly higher in T. cruzi–infected mice than in controls. Elevated levels of NF-κB have also been found in myocardial tissue, with the highest concentrations in the vasculature. Western blotting indicated that cyclin D1 was upregulated from day 1 to day 60 post-infection, and immunohistochemical analysis indicated that the areas producing the most cyclin D1 were the vasculature and the interstitial regions of the heart. Rhythm abnormalities Conduction abnormalities are also associated with T. cruzi. At the base of these conduction abnormalities is a depopulation of parasympathetic neuronal endings on the heart. Without proper parasympathetic innervation, one could expect to find not only chronotropic but also inotropic abnormalities. Although all inflammatory and non-inflammatory heart diseases may display forms of parasympathetic denervation, this denervation presents in a distinctive fashion in Chagas' disease. The loss of parasympathetic innervation can also lead to sudden death through severe cardiac failure during the acute stage of infection. Another conduction abnormality presented with chronic Chagas' disease is a change in ventricular repolarization, which is represented on an electrocardiogram as the T-wave. This change in repolarization hinders the heart from relaxing and properly entering diastole. Changes in ventricular repolarization in Chagas' disease are likely due to myocardial ischemia, and this ischemia can also lead to fibrillation.
This sign is usually observed in chronic Chagas' disease and is considered a minor electromyocardiopathy. Epicardial lesions Villous plaque is characterized by exophytic epicardial thickening, meaning that the growth occurs at the border of the epicardium and not the center of mass. Unlike milk spots and chagasic rosary, villous plaque contains inflammatory cells and vasculature. Since villous plaque contains inflammatory cells, it is reasonable to suspect that these lesions are more recently formed than milk spots or chagasic rosary. Motility When mammalian cells are present, trypomastigotes move in a subdiffusive fashion over short periods of time, but under control conditions their motion is diffusive. The parasites increase their mean speed, explore smaller areas at short time scales, and show a preference for locations near the cells' periphery. The extent of these changes depends on the cell type. Therefore, T. cruzi trypomastigotes can sense mammalian cells and modify their motility patterns to prepare themselves for infection. Parasite reorientation Epimastigotes, which are the culture forms of T. cruzi, swim in the direction of their flagellum, owing to symmetrical tip-to-base flagellar beats that are interrupted by highly asymmetric base-to-tip beats. Switching between the two beating modes facilitates parasite reorientation, allowing many movements and trajectories. Epimastigote motility is characterized by an alternation between quasi-rectilinear paths and restricted, complex paths. Invasion efficiency Invasion efficiency is positively correlated with the parasites' mean speed and negatively correlated with their mean square displacement (MSD). Therefore, the motility modifications undergone by the parasites in the presence of mammalian cells may be functionally related to the cell invasion process. Moreover, different parasite strains infect different tissues with variable invasion efficiency, owing to the high genetic and phenotypic variability found among T. cruzi strains. T. cruzi trypomastigotes are capable of sensing mammalian cells to a different degree, depending on the cell type, and can modify their motility patterns to increase their invasion efficiency (the diffusion terminology used in this section is defined below).
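The terms "diffusive" and "subdiffusive" have a standard quantitative meaning; the following is the general definition, not a formula taken from the studies cited above. The mean square displacement of a tracked parasite over a lag time τ is

    MSD(τ) = ⟨ ‖r(t + τ) − r(t)‖² ⟩ ∝ τ^α

where r(t) is the parasite's position at time t and the angle brackets denote an average over trajectories. An exponent α = 1 corresponds to ordinary diffusion, while α < 1 indicates the subdiffusive motion observed near mammalian cells.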
Virulence chemistry T. cruzi does not itself produce prostaglandins. Instead, Pinge-Filho et al. (1999) found that the parasite induces mice to overproduce 2-series prostaglandins. These prostaglandins are immunosuppressive and so aid in immune evasion. Imipramines are trypanocidal: Doyle and Weinbach (1989) found that imipramine and several of its derivatives (3-chlorimipramine, 2-nitroimipramine, and 2-nitrodesmethylimipramine) are trypanocidal in vitro, with 2-nitrodesmethylimipramine the most effective among them. Epidemiology T. cruzi transmission has been documented in the southwestern U.S., and warming trends may allow vector species to move north. U.S. domestic and wild animals are reservoirs for T. cruzi. Triatomine species in the southern U.S. have taken human blood meals, but because triatomines do not favor typical U.S. housing, risk to the U.S. population is very low. Chagas disease occurs worldwide, but high-risk individuals include those who lack access to proper housing. Its reservoir is in wild animals, but its vector is the kissing bug. The disease can be transmitted in a number of ways: congenital transmission, blood transfusion, organ transplantation, consumption of uncooked food contaminated with faeces from infected bugs, and accidental laboratory exposure. Over 130 triatomine species can transmit the parasite, and six taxonomic subunits of T. cruzi are recognised. Data from 2024 suggest that the prevalence of Trypanosoma cruzi infection among solid organ transplant recipients in the U.S. is rising, highlighting the need for enhanced screening protocols. The research revealed that lung transplant recipients had the highest prevalence of positive serology, at 21%, followed by heart recipients at 14%, compared to lower rates in liver (6%) and kidney (5%) transplant recipients. Clinical The incubation period is five to fourteen days after a host comes into contact with the faeces. Chagas disease undergoes two phases, acute and chronic. The acute phase can last from two weeks to two months but may go unnoticed because its symptoms are minor and short-lived. Symptoms of the acute phase include swelling, fever, fatigue, and diarrhea. The chronic phase causes digestive problems, constipation, heart failure, and pain in the abdomen. Diagnostic methods include microscopic examination, serology, or the isolation of the parasite by inoculating blood into a guinea pig, mouse, or rat. No vaccines are available. The most widely used methods of epidemiological management and disease prevention centre on vector control, mainly through insecticides, together with preventive measures such as applying insect repellent to the skin, wearing protective clothing, and staying in higher-quality accommodation when travelling. Investing in quality housing also decreases the risk of contracting the disease. Genetic exchange Genetic exchange has been identified among field populations of T. cruzi. This process appears to involve genetic recombination as well as a meiotic mechanism. Despite the capability for sexual reproduction, natural populations of T. cruzi exhibit clonal population structures; it appears that frequent sexual reproduction events occur primarily between close relatives, resulting in an apparent clonal population structure.
Biology and health sciences
Excavata
Plants
4908578
https://en.wikipedia.org/wiki/Eastern%20Interconnection
Eastern Interconnection
The Eastern Interconnection is one of the two major alternating-current (AC) electrical grids in the North American power transmission grid. The other major interconnection is the Western Interconnection. The three minor interconnections are the Quebec, Alaska, and Texas interconnections. All of the electric utilities in the Eastern Interconnection are electrically tied together during normal system conditions and operate at a synchronized frequency averaging 60 Hz. The Eastern Interconnection reaches from Central Canada eastward to the Atlantic coast (excluding Quebec), south to Florida, and west to the Great Plains (excluding most of Texas). Interconnections can be tied to each other via high-voltage direct current power transmission lines (DC ties), or with variable-frequency transformers (VFTs), which permit a controlled flow of energy while also functionally isolating the independent AC frequencies of each side. The Eastern Interconnection is tied to the Western Interconnection with six DC ties, to the Texas Interconnection with two DC ties, and to the Quebec Interconnection with four DC ties and a VFT. In 2016, the National Renewable Energy Laboratory simulated a year of operation with 30% renewable energy (wind and solar power) at 5-minute intervals; the results showed a stable grid with some changes in operation. Electricity demand In 2008, the North American Electric Reliability Corporation (NERC) reported actual and projected consumption figures, in gigawatts, for the regions of the Eastern Interconnection.
Technology
Specific energy structure
null
4908586
https://en.wikipedia.org/wiki/Western%20Interconnection
Western Interconnection
The Western Interconnection is a wide area synchronous grid and one of the two major alternating current (AC) power grids in the North American power transmission grid. The other major wide area synchronous grid is the Eastern Interconnection. The minor interconnections are the Québec Interconnection, the Texas Interconnection, and the Alaska Interconnections. All of the electric utilities in the Western Interconnection are electrically tied together during normal system conditions and operate at a synchronized frequency of 60 Hz. The Western Interconnection stretches from Western Canada south to Baja California in Mexico, reaching eastward over the Rockies to the Great Plains. Interconnections can be tied to each other via high-voltage direct current power transmission lines (DC ties) such as the north-south Pacific DC Intertie, or with variable-frequency transformers (VFTs), which permit a controlled flow of energy while also functionally isolating the independent AC frequencies of each side. There are six DC ties to the Eastern Interconnection in the US and one in Canada, and there are proposals to add four additional ties. It is not tied to the Alaska Interconnection. Consumption In 2015, WECC had an energy consumption of 883 TWh, roughly equally distributed between industrial, commercial and residential consumption. There was a summer peak demand of 150,700 MW and a winter peak demand (2014–15) of 126,200 MW. Production The region had a nameplate capacity of 265 GW in 2015, 276 GW in 2019, and 286 GW in 2021. Together, wind, solar, and hydro resources account for 47% of installed capacity. Installed coal capacity was 24 GW, compared to roughly 34 GW of wind and 28 GW of solar. While the resource mix is changing, with wind and solar eclipsing coal in installed capacity, in 2021 coal still generated slightly more power than wind and solar combined, down from twice as much in 2017.
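As a rough consistency check on these capacity figures (assuming the 47% renewable share refers to the 2021 total; the hydro figure below is implied by the other numbers rather than stated directly):

    0.47 × 286 GW ≈ 134 GW of combined wind, solar, and hydro capacity
    134 GW − (34 GW wind + 28 GW solar) ≈ 72 GW of implied hydro capacity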
Technology
Specific energy structure
null
6412655
https://en.wikipedia.org/wiki/Tabby%20cat
Tabby cat
A tabby cat, or simply tabby, is any domestic cat (Felis catus) with a distinctive M-shaped marking on its forehead, stripes by its eyes and across its cheeks, along its back, around its legs and tail, and characteristic striped, dotted, lined, flecked, banded, or swirled patterns on the body: neck, shoulders, sides, flanks, chest, and abdomen. The four known distinct patterns, each linked to genetics, are the mackerel, classic or blotched, ticked, and spotted tabby patterns. "Tabby" is not a breed of cat but a coat pattern, and it is very common amongst non-pedigree cats around the world. The tabby pattern occurs naturally and is connected both to the coat of the domestic cat's direct ancestor and to those of its close relatives: the African wildcat (Felis lybica lybica), the European wildcat (Felis silvestris), and the Asiatic wildcat (Felis lybica ornata), all of which have similar coats, both in pattern and coloration. One genetic study of domestic cats found at least five founders. Etymology The English term tabby originally referred to "striped silk taffeta", from the French word tabis, meaning "a rich watered silk". This can be further traced to the Middle French atabis (14th century), which stemmed from the Arabic term عتابية / ʿattābiyya. This word is a reference to the Attabiya district of Baghdad, noted for its striped cloth and silk, itself named after the Umayyad governor of Mecca, Attab ibn Asid. Such silk cloth became popular in the Muslim world and spread to England, where the word "tabby" became commonly used in the 17th and 18th centuries. Use of the term tabby cat for a cat with a striped coat began in the 1690s and was shortened to tabby in 1774. The notion that tabby indicates a female cat may be due to the feminine proper name Tabby as a nickname of "Tabitha". Patterns The four known distinct patterns, each having a sound genetic explanation, are the mackerel, classic, ticked, and spotted tabby patterns. A fifth pattern is formed by any of the four basic patterns being included as part of a patched pattern. A patched tabby is a cat with calico or tortoiseshell markings combined with patches of tabby coat (such cats are called caliby and torbie, respectively, in cat fancy). All five patterns have been observed in random-bred populations. Several additional patterns are found in specific breeds and so are not as well known; for example, a modified classic tabby is found in the Sokoke breed. Some of these rarer patterns are due to the interaction of wild and domestic genes, as with the rosette and marbled patterns found in the Bengal breed. Mackerel (striped) tabby The mackerel, or striped, tabby pattern is made up of thin vertical, gently curving stripes on the sides of the body. These stripes can be continuous or broken into bars and short segments/spots, especially on the flanks and stomach. Three or five vertical lines in an "M" shape almost always appear on the forehead, along with dark lines from the corners of the eyes, one or more crossing each cheek, and many stripes and lines at various angles on the neck and shoulder area, on the flanks, and around the legs and tail, marks which are more or less perpendicular to the length of the body part. Mackerel tabbies are also called "fishbone tabbies", probably doubly named after the mackerel fish. The mackerel pattern is the most common among tabbies.
Classic (blotched) tabby The classic tabby, also known as blotched tabby, has the "M" pattern on the forehead but, rather than primarily thin stripes or spots, the body markings are thick curving bands in whorls or a swirled pattern, with a distinctive mark on each side of the body resembling a bullseye. 80% of modern-day cats have the recessive allele responsible for the classic pattern. Black tabbies generally have dark browns, olives, and ochres that stand out more against their black colors. Classic tabbies each have a light-colored "butterfly" pattern on the shoulders and three thin stripes (the center stripe being the darkest) running along the spine. The legs, tail, and cheeks of a classic tabby have thick stripes, bands, and/or bars. The gene responsible for the coloring of a classic tabby is recessive. Many American Shorthair cats demonstrate this pattern. Ticked tabby The ticked tabby pattern is due to even fields of agouti hairs, each with distinct bands of colour, which break up the tabby patterning into a salt-and-pepper, sand-like appearance; there are thus few to no stripes or bands. Residual ghost striping and/or barring can often be seen on the lower legs, face, and belly, and sometimes at the tail tip, as well as the standard "M" and a long dark line running along the spine, primarily in ticked tabbies that also carry a mackerel or classic tabby allele. These cats come in many forms and colours. Spotted tabby It is thought that the spotted tabby results from a modifier gene that breaks up the mackerel tabby pattern and causes the stripes to appear as spots. Similarly, the classic tabby pattern may be broken by the spotted tabby gene into large spots. Both large- and small-spot patterns can be seen in the Australian Mist, Bengal, Serengeti, Savannah, Egyptian Mau, Arabian Mau, Maine Coon, and Ocicat breeds, among others, as well as in some crosses. The most common spotted tabby looks most similar to the mackerel tabby, including the classic marks on the limbs, tail, and head, as well as the "M" on the forehead. Orange tabby The orange tabby, also commonly called red or ginger tabby, is a color variant of the above patterns, having pheomelanin (O allele) instead of eumelanin (o allele). Though generally a mix of orange and white, the ratio between the two colors varies, from a few orange spots on the back of a white cat to completely orange coloring with no white at all. The orange areas can be darker or lighter spots or stripes, but the white is nearly always solid and usually appears on the underbelly, paws, chest, and muzzle. The face markings are reminiscent of the mackerel or classic tabby; orange-and-white cats often also have a white patch on the face that covers the mouth and comes to a point around the forehead. Because the white fur results from a masking gene, its distribution is often asymmetrical, leading to more or less white fur on each paw or side of the face. Roughly 75% of ginger cats are male. Male cats with the gene for orange can be either X°Y ginger or X-Y black or non-ginger tabby. Females with the gene have three possibilities: X-X- black or non-ginger tabby, X°X° ginger, and X-X° tortoiseshell; thus male cats cannot be tortoiseshell unless they have two X chromosomes. Torbies and calibies Since female cats have two X chromosomes, it is possible for them to have the O (orange) allele on one X chromosome and o (black) on the other. This causes both colors to appear in random patches, either with or without the tabby pattern.
When paired with the tabby pattern, these cats are known as torbie cats. If there is also white spotting, the cat is known as a caliby (US English). Genetic explanations Two distinct gene loci, the agouti locus (two alleles) and the tabby locus (three alleles), together with one modifier, spotted (two alleles), produce the four basic tabby patterns. The fifth pattern is emergent, being expressed by female cats carrying one black and one orange gene on their two X chromosomes, and is explained by Barr bodies and the genetics of sex-linked inheritance. The agouti gene, with its two alleles, A and a, controls whether or not the tabby pattern is expressed. The dominant A expresses the underlying tabby pattern, while the recessive non-agouti or "hyper-melanistic" allele, a, does not. Solid-color (black or blue) cats have the aa combination, hiding the tabby pattern, although sometimes a suggestion of the underlying pattern can be seen ("ghost striping"). This underlying pattern, whether classic, mackerel, ticked or spotted, is most easily distinguishable under bright light in the early stages of kittenhood and on the tail in adulthood. However, the agouti gene primarily controls the production of black pigment, so a cat with an O allele for orange color will still express the tabby pattern. As a result, both red cats and the red patches of tortoiseshell cats always show tabby patterning, although the stripes are sometimes muted, especially in cream and blue/cream cats, owing to pigment dilution. The mackerel pattern, produced by the Tm allele at the tabby locus, is dominant over the classic (or blotched) allele, Tb: a cat with a TmTm or TmTb genotype has the basic pattern of thin stripes (mackerel tabby) underlying the coat, while a TbTb cat expresses a classic tabby coat with thick bands and concentric rings on its sides. The ticked tabby pattern results from a third allele at the same locus, Ta, which is dominant over the others, so TaTa, TaTm and TaTb genotypes are all ticked tabbies. The ticked tabby coat essentially masks any other tabby pattern, producing a non-patterned, or agouti, tabby (much like the wild-type agouti coat of many other mammals and the sable coat of dogs), with virtually no stripes or bars: if the ticked allele is present, no other tabby pattern is expressed. The ticked allele actually shows incomplete dominance: cats homozygous for the ticked allele (TaTa) have less barring than cats heterozygous for it (TaTm or TaTb). The spotted gene is a separate locus theorized to be directly connected to the Tm allele; it "breaks" the lines and thin stripes of a mackerel tabby, creating spots. The spotted gene likewise has a dominant and a recessive allele, so a spotted cat has an SpSp or Spsp genotype along with at least one Tm allele and at least one A allele at those loci. A minimal code sketch of this dominance logic follows.
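The dominance relationships above lend themselves to a small worked example. The following Python sketch is purely illustrative: a toy model written for this explanation, not an established genetics tool, and it deliberately ignores finer points such as the incomplete dominance of Ta, dilution, and non-agouti tortoiseshells.

    def tabby_phenotype(agouti, tabby, spotted, sex_chromosomes):
        """Predict a coat pattern from simplified genotypes.

        Each of agouti, tabby, spotted is a pair of allele strings,
        e.g. ("A", "a"); sex_chromosomes is e.g. ("XO", "Xo") for a
        tortoiseshell female, using O/o for the orange locus.
        """
        x_alleles = [c[1:] for c in sex_chromosomes if c.startswith("X")]
        # Tabby-locus dominance hierarchy: Ta (ticked) > Tm (mackerel) > Tb (classic).
        if "Ta" in tabby:
            base = "ticked"
        elif "Tm" in tabby:
            base = "mackerel"
        else:
            base = "classic"
        # The dominant spotted modifier breaks the stripes into spots; the text
        # ties it to the presence of at least one Tm allele.
        if "Sp" in spotted and "Tm" in tabby:
            base = "spotted"
        # A female heterozygous at the orange locus shows black and orange
        # patches: a torbie when the tabby pattern is expressed.
        if len(set(x_alleles)) == 2:
            return "torbie (" + base + " patches)"
        # Orange cats always show tabby patterning, because agouti primarily
        # controls black pigment.
        if x_alleles and all(a == "O" for a in x_alleles):
            return "orange " + base + " tabby"
        # The non-agouti aa combination hides the tabby pattern (ghost
        # striping aside), giving a solid-coloured cat.
        if set(agouti) == {"a"}:
            return "solid (tabby pattern masked)"
        return base + " tabby"

    # A few genotypes and the patterns the rules above predict:
    print(tabby_phenotype(("A", "a"), ("Tm", "Tb"), ("sp", "sp"), ("Xo", "Y")))   # mackerel tabby
    print(tabby_phenotype(("A", "A"), ("Tb", "Tb"), ("sp", "sp"), ("Xo", "Xo")))  # classic tabby
    print(tabby_phenotype(("A", "a"), ("Ta", "Tb"), ("sp", "sp"), ("Xo", "Y")))   # ticked tabby
    print(tabby_phenotype(("A", "a"), ("Tm", "Tm"), ("Sp", "sp"), ("XO", "Xo")))  # torbie (spotted patches)
    print(tabby_phenotype(("a", "a"), ("Tm", "Tm"), ("sp", "sp"), ("XO", "Y")))   # orange mackerel tabby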
Temperament Personality and aggression vary widely from cat to cat and are multifactorial. A 2015 study from the University of California, Davis sought to examine the relationship between coat color and behavior in cats. Researchers ran statistical analyses on 1,274 online surveys completed by cat owners, who were asked to rank their cats' aggressiveness during everyday interactions with humans, during handling, and during veterinary visits. The study concluded that, though aggressive behaviors did show up at different levels between different coats, the differences were relatively minor. The larger differences in aggression appeared to the researchers to be sex-linked rather than related to any coat pattern or coloring. A similar study also reported no evidence of a link between a cat's behavior and its coat pattern; however, it suggested that any differences lay in how the cats were perceived, i.e. people perceive orange cats as "friendly" and white cats as "shy", and then look for confirmation of these perceptions. History Since the tabby pattern is a common wild type, it might be assumed that medieval cats were tabbies. However, the natural philosopher John Aubrey believed this to be untrue. Writing sometime after the mid-17th century, he noted that William Laud, the Archbishop of Canterbury, was a great lover of cats and had been presented with some of the first tabby cats in England, and he claimed that the common English cat had formerly been white with some bluish piedness, a breed by his time almost lost. Despite this, most drawings and paintings of cats in medieval manuscripts depict them as tabbies. Notable examples Notable examples of tabby cats include: Think Think: one of two cats belonging to former President of Taiwan, Tsai Ing-wen. The Ithaca Kitty: a grey tabby cat with seven toes on each front foot that inspired one of the first mass-produced stuffed toys. Morris the Cat: an orange tabby who began appearing as an advertising mascot for 9Lives cat food in 1969. Morris became an iconic television character in the following decades, being played by three orange tabbies since 1968, all rescued from shelters. Maru: a tabby from Japan, and one of the most popular cats in the age of the internet. He once held the Guinness World Record for the most-watched animal on YouTube. Orangey: an orange tabby who starred in a number of movie and television roles. His most notable role was that of Cat in the 1961 film Breakfast at Tiffany's, for which he won his second PATSY Award. He is the only cat to win twice, his first win coming in 1951 for Rhubarb. Larry: a former stray tabby who was rescued by Battersea Dogs & Cats Home, and went on to become Chief Mouser to the Cabinet Office at 10 Downing Street.
Biology and health sciences
Cats
Animals
6415314
https://en.wikipedia.org/wiki/Chromosome%20abnormality
Chromosome abnormality
A chromosomal abnormality, chromosomal anomaly, chromosomal aberration, chromosomal mutation, or chromosomal disorder is a missing, extra, or irregular portion of chromosomal DNA. These can occur in the form of numerical abnormalities, where there is an atypical number of chromosomes, or as structural abnormalities, where one or more individual chromosomes are altered. Chromosome mutation was formerly used in a strict sense to mean a change in a chromosomal segment, involving more than one gene. Chromosome anomalies usually occur when there is an error in cell division during meiosis or mitosis. Chromosome abnormalities may be detected or confirmed by comparing an individual's karyotype, or full set of chromosomes, to a typical karyotype for the species via genetic testing. Chromosomal abnormalities sometimes arise in the egg or sperm, or during the early development of the embryo, and they can be caused by various environmental factors. The implications of a chromosomal abnormality depend on the specific anomaly, and different anomalies may have quite different ramifications. Some examples are Down syndrome and Turner syndrome. Numerical abnormality An abnormal number of chromosomes is known as aneuploidy, and occurs when an individual is either missing a chromosome from a pair (resulting in monosomy) or has more than two chromosomes of a pair (trisomy, tetrasomy, etc.). Aneuploidy can be full, involving a whole chromosome missing or added, or partial, where only part of a chromosome is missing or added. Aneuploidy can occur with sex chromosomes or autosomes. Rather than having monosomy, or only one copy, the majority of aneuploid people have trisomy, or three copies of one chromosome. An example of trisomy in humans is Down syndrome, which is a developmental disorder caused by an extra copy of chromosome 21; the disorder is therefore also called "trisomy 21". An example of monosomy in humans is Turner syndrome, where the individual is born with only one sex chromosome, an X. Sperm aneuploidy Exposure of males to certain lifestyle, environmental and/or occupational hazards may increase the risk of aneuploid spermatozoa. In particular, the risk of aneuploidy is increased by tobacco smoking and by occupational exposure to benzene, insecticides, and perfluorinated compounds. Increased aneuploidy is often associated with increased DNA damage in spermatozoa. Structural abnormalities When the chromosome's structure is altered, this can take several forms: Deletions: A portion of the chromosome is missing or has been deleted. Known disorders in humans include Wolf–Hirschhorn syndrome, which is caused by partial deletion of the short arm of chromosome 4, and Jacobsen syndrome, also called the terminal 11q deletion disorder. Duplications: A portion of the chromosome has been duplicated, resulting in extra genetic material. Known human disorders include Charcot–Marie–Tooth disease type 1A, which may be caused by duplication of the gene encoding peripheral myelin protein 22 (PMP22) on chromosome 17. Inversions: A portion of the chromosome has broken off, turned upside down, and reattached; the genetic material is therefore inverted. Insertions: A portion of one chromosome has been deleted from its normal place and inserted into another chromosome. Translocations: A portion of one chromosome has been transferred to another chromosome. There are two main types of translocations: Reciprocal translocation: Segments from two different chromosomes have been exchanged.
Robertsonian translocation: An entire chromosome has attached to another at the centromere; in humans, these occur only with chromosomes 13, 14, 15, 21, and 22. Rings: A portion of a chromosome has broken off and formed a circle or ring. This can happen with or without the loss of genetic material. Isochromosome: Formed by the mirror-image copy of a chromosome segment, including the centromere. Chromosome instability syndromes are a group of disorders characterized by chromosomal instability and breakage. They often lead to an increased tendency to develop certain types of malignancies. Inheritance Most chromosome abnormalities occur as an accident in the egg cell or sperm, and in such cases the anomaly is present in every cell of the body. Some anomalies, however, can happen after conception, resulting in mosaicism (where some cells have the anomaly and some do not). Chromosome anomalies can be inherited from a parent or arise "de novo". This is why chromosome studies are often performed on the parents when a child is found to have an anomaly. If the parents do not possess the abnormality, it was not initially inherited; however, it may be transmitted to subsequent generations. Acquired chromosome abnormalities Most cancers, if not all, involve chromosome abnormalities, with either the formation of hybrid genes and fusion proteins, deregulation of genes and overexpression of proteins, or loss of tumor suppressor genes (see the Mitelman Database and the Atlas of Genetics and Cytogenetics in Oncology and Haematology). Furthermore, certain consistent chromosomal abnormalities, such as the translocation of a gene resulting in its inappropriate expression, can turn a normal cell into a leukemic cell. DNA damage during spermatogenesis During the mitotic and meiotic cell divisions of mammalian gametogenesis, DNA repair is effective at removing DNA damage. However, in spermatogenesis the ability to repair DNA damage decreases substantially in the latter part of the process, as haploid spermatids undergo major nuclear chromatin remodeling into highly compacted sperm nuclei. As reviewed by Marchetti et al., the last few weeks of sperm development before fertilization are highly susceptible to the accumulation of sperm DNA damage. Such sperm DNA damage can be transmitted unrepaired into the egg, where it is subject to removal by the maternal repair machinery. However, errors in maternal repair of sperm DNA damage can result in zygotes with chromosomal structural aberrations. Melphalan is a bifunctional alkylating agent frequently used in chemotherapy. Meiotic inter-strand DNA damage caused by melphalan can escape paternal repair and cause chromosomal aberrations in the zygote by maternal misrepair. Thus both pre- and post-fertilization DNA repair appear to be important in avoiding chromosome abnormalities and assuring the genome integrity of the conceptus. Detection Depending on the information one wants to obtain, different techniques and samples are needed. For the prenatal diagnosis of a fetus, samples obtained by amniocentesis or chorionic villus sampling, or circulating foetal cells, are collected and analysed in order to detect possible chromosomal abnormalities. For the preimplantation diagnosis of an embryo, a blastocyst biopsy is performed. For lymphoma or leukemia screening, a bone marrow biopsy is used.
Nomenclature The International System for Human Cytogenomic Nomenclature (ISCN) is an international standard for human chromosome nomenclature, which includes band names, symbols and abbreviated terms used in the description of human chromosome and chromosome abnormalities. Abbreviations include a minus sign (-) for chromosome deletions, and del for deletions of parts of a chromosome.
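A few illustrative karyotype designations in this notation (standard textbook examples, not cases drawn from the sources cited above):

    46,XX: typical female karyotype
    47,XY,+21: male with an extra chromosome 21 (trisomy 21, Down syndrome)
    45,X: a single X and no second sex chromosome (Turner syndrome)
    46,XY,del(4)(p16): partial deletion of the short arm of chromosome 4 (Wolf–Hirschhorn syndrome)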
Biology and health sciences
Genetics
Biology
6418885
https://en.wikipedia.org/wiki/Banded%20palm%20civet
Banded palm civet
The banded palm civet (Hemigalus derbyanus), also called the banded civet, is a viverrid native to Indomalaya. It primarily inhabits lowland forest, which is under threat from encroaching human activity; the population of the banded palm civet is estimated to have decreased by around 30% in just three generations. Banded palm civets are approximately the size of a domestic cat, with pale fur marked by dark bands on the back. They are believed to be closely related to Hose's palm civet, which is similar in appearance and distribution. The banded palm civet is the only species in its genus and was first scientifically described in 1837. The species comprises four subspecies, distributed across Indonesia and Southeast Asia; two of the subspecies diverged from each other as long ago as 2.7 million years. Banded palm civets are affected by a variety of parasites, such as nematodes, and are primarily carnivorous, eating small animals such as rodents and insects. They have sensitive hairs on their paws which help them to detect potential prey. Classification The genus Hemigalus was named and first described in 1837 by Claude Jourdan, who had the skin and skeleton of one zoological specimen at his disposal. In the same year, John Edward Gray described a specimen from the Malay Peninsula under the names Paradoxurus derbyanus and Paradoxurus derbianus. In 1939, Reginald Innes Pocock subordinated the banded palm civet specimens described between 1837 and 1915 under the genus Hemigalus and recognised it as a monotypic taxon. The genus name is derived from the Greek hemi (half) and galus (weasel), a reference to the animal's appearance. The species is believed to be closely related to Hose's palm civet, another civet in the subfamily Hemigalinae, also distributed in Southeast Asia and with a similar build and appearance. Subspecies There are four subspecies: H. derbyanus derbyanus, H. d. boiei, H. d. minor, and H. d. sipora. H. d. derbyanus is known from Myanmar and mainland Malaysia as well as Sumatra; H. d. boiei is known only from Borneo; H. d. minor from South Pagai in the Mentawai Islands; and H. d. sipora from Sipora in the Mentawai Islands. There is also a population on Siberut island, but it has not been attributed to any subspecies. It is estimated that H. d. minor and H. d. derbyanus diverged from each other some 2.7 million years ago. Description The banded palm civet's fur is usually pale in colour, with seven or eight dark bands on the face and back. The pale colour is typically pale brown, grey, whitish or buff, but can also be yellowish; the bands are usually dark brown, black, or chestnut in colour. The animal is roughly the size of a domestic cat. The tail is usually three-quarters the length of the body and head combined, and appears to swell in size in response to a threat. It has sensitive hairs in between the pads of its paws for sensing prey. Distribution and habitat The banded palm civet is native to Myanmar, Thailand, Peninsular Malaysia, Sumatra, the Mentawai Islands and Borneo, from sea level upwards. In Myanmar, only two individuals were recorded between the early 20th century and the 1960s, both in the far south. In 2022, it was photographed by a camera trap for the first time in a reserved forest in Tanintharyi Region.
In Thailand, it was photographed during camera-trap surveys between 1996 and 2013 in Khlong Saeng Wildlife Sanctuary, Khao Sok National Park, Kui Buri National Park and Hala-Bala Wildlife Sanctuary, all in evergreen forest. In Peninsular Malaysia, it was recorded in just two locations during surveys in 2011–2012, in hilly dipterocarp forest in Terengganu. In Sumatra, it was recorded in primary forest in Kerinci Seblat National Park and on the west coast. In Bukit Barisan Selatan National Park, it was photographed in primary evergreen forest in 2011. In South Solok Regency, it was recorded in forest fragments within an oil palm plantation adjacent to Kerinci Seblat National Park in 2015. It was extirpated from Singapore in the early 20th century. Behaviour and ecology The banded palm civet is nocturnal and spends the day in low tree holes. It is thought to be a solitary animal. Its activity pattern overlaps with those of two other civet species, with rodents, and with the clouded leopard, a potential predator. In response to a predator or other threat, banded palm civets swell their tails. Diet The banded palm civet is a strict carnivore and preys on a variety of small animals, including crustaceans, ants, spiders, worms, rats, frogs, small reptiles and birds. It occasionally feeds on vegetation and fruits. Twelve scat samples contained worms, orthopterans and other invertebrates. Banded palm civets hunt around water or along the forest floor. To attack large prey, the civet bites the back of the victim's neck and shakes vigorously, then holds the victim with its front paws, allowing it to attack with its teeth. Reproduction Females have one or two litters a year, with one or two young. The gestation period varies from 32 to 64 days. Data from the wild suggest they usually live up to twelve years of age, although one civet taken into captivity is recorded as having lived for eighteen years. The newborns usually first open their eyes eight to twelve days after birth and typically nurse for up to 70 days. The generation length of the banded palm civet is five years. Health Analysis of the gut contents of two banded palm civet roadkills in northern Borneo revealed a variety of parasites, including nematodes, eggs of trematodes, mites and pinworms. Threats The major threat to the banded palm civet is the loss and destruction of natural habitat through logging and the subsequent conversion of land to agriculture, plantations and dam construction. It is hunted and eaten by local people in Sabah. Its preferred habitat, lowland forest, is particularly prone to such threats. In 2016, the population was thought to have declined by 30% over just three generations. In 2022, it was estimated that the population had declined to just 21% of the distribution shown in the IUCN Red List assessment. Some humans take them from their natural habitat to keep them as pets. Conservation The banded palm civet is listed as Near Threatened on the IUCN Red List, and the global population is thought to be decreasing. It is listed in CITES Appendix II. About 24% of its estimated range is in protected areas, although a later (2022) study estimated that value to be only 12%.
Biology and health sciences
Other carnivora
Animals
1298322
https://en.wikipedia.org/wiki/Metre-stick
Metre-stick
A metre-stick or metrestick (alternatively spelled meter-stick or meterstick), or a yardstick, is a straightedge or foldable ruler used to measure length, and is especially common in the construction industry. They are often made of wood or plastic, and often have metal or plastic joints so that they can be folded together. The normal length of a metre-stick made for the international market is either one or two metres, while a yardstick made for the U.S. market is typically one yard (3 feet or 0.9144 metres) long. Metre-sticks are usually divided with lines for each millimetre (1000 per metre) and numerical markings per centimetre (100 per metre), with numbers in either centimetres or millimetres. Yardsticks are most often marked with a scale in inches, but sometimes also feature marks for foot increments. Hybrid sticks with more than one measurement system also exist, most notably those which have metric measurements on one side and U.S. customary units on the other side (or both on the same side). The "tumstock" (literally "thumbstick", meaning "inch-stick"), invented in 1883 by the Swedish engineer Karl-Hilmer Johansson Kollén, was the first such hybrid stick, and was developed to help Sweden convert to the metric system. Construction Metre-sticks are often thin and rectangular, and made of wood or metal. Metal ones are often backed with "grippy" material, such as cork, to improve friction. They are relatively cheap, with most wood models costing under US$5. Measurements In countries where the metric system is used, the stick typically carries only a metric scale. The scale marks every millimetre, with every fifth millimetre marked by a slightly longer line; every centimetre is marked with an even longer line and a numeric label, and every tenth centimetre is usually prominently marked. In the United States, the marking is usually in customary units (three feet, divided into inches and fractions of an inch); these might be referred to as yardsticks or "inch sticks". Hybrid sticks bearing customary markings on one side and metric units on the other also exist and are sometimes referred to as yardsticks, metre-sticks or "metre rulers". The spelling meter vs metre varies by country, though metre is the official and most widely used spelling in English-speaking countries. Although not used as often, metre-sticks with only a metric scale can be found in the United States. For example, they are common in schools where there is a desire for students to become familiar with metric units, and they may also be used in American science labs. The folding carpenters' rulers used in Scandinavia are sometimes equipped with double measurements, metric and imperial, on both sides, also functioning as a handy conversion table; this accounts for the Scandinavian term tommestokk/tumstock ("thumb (inch) stick"), a term with the same meaning that is also used in Dutch: duimstok. Metric-only carpenters' rulers are, however, common. A small sketch of the conversions such a hybrid stick embodies follows.
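The following Python snippet is a hypothetical helper written for this article, not a standard library; the conversion factors of 2.54 cm per inch and 0.9144 m per yard are, however, exact by definition.

    # Conversions embodied by a hybrid metric/customary stick.
    CM_PER_INCH = 2.54      # exact by definition
    M_PER_YARD = 0.9144     # exact: 36 inches at 2.54 cm each

    def inches_to_cm(inches: float) -> float:
        return inches * CM_PER_INCH

    def yards_to_metres(yards: float) -> float:
        return yards * M_PER_YARD

    print(yards_to_metres(1.0))   # 0.9144, the length of a yardstick in metres
    print(inches_to_cm(39.37))    # ~100.0, a metre is about 39.37 inches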
Application The metre-stick is usually employed for work on a medium scale: larger than desktop work on paper, yet smaller than large-scale infrastructure work, where tape measures or longer measuring rods are used. Typical applications of metre-sticks are in building furniture, vehicles and houses. Modern carpenters' metre-sticks are usually made to be folded for ease of transport. Metre-sticks may be used as pointing devices for posters and projections. Metre-sticks are also used as spars to make wings for remote-controlled model aircraft built from corrugated plastic. Metre-sticks have also been used as an instrument of corporal punishment in schools in the United Kingdom, used to slap the palms of students as a disciplinary measure.
Technology
Measuring instruments
null
1301022
https://en.wikipedia.org/wiki/Cofferdam
Cofferdam
A cofferdam is an enclosure built within a body of water to allow the enclosed area to be pumped out or drained. This pumping creates a dry working environment so that the work can be carried out safely. Cofferdams are commonly used for construction or repair of permanent dams, oil platforms, bridge piers, and similar structures built within water; they also form an integral part of naval architecture. Construction cofferdams are usually welded steel structures, with components consisting of sheet piles, wales, and cross braces, and are usually dismantled after the construction work is completed. The word combines coffer (ultimately from a Latin word meaning "basket") and dam (from a Proto-Germanic word for a barrier built across a stream of water to obstruct its flow and raise its level). Uses For dam construction, two cofferdams are usually built, one upstream and one downstream of the proposed dam, after an alternative diversion tunnel or channel has been provided for the river flow to bypass the foundation area of the dam. These cofferdams are typically conventional embankment dams of both earth- and rock-fill, but concrete or some sheet piling may also be used. Usually, upon completion of the dam and associated structures, the downstream coffer is removed and the upstream coffer is flooded as the diversion is closed and the reservoir begins to fill. Depending on the geography of a dam site, in some applications a U-shaped cofferdam is used in the construction of one half of a dam. When that half is complete, the cofferdam is removed and a similar one is created on the opposite side of the river for the construction of the dam's other half. Cofferdams are used in ship husbandry to allow dry access to underwater equipment and to close underwater openings while work is done on the fittings inside the ship. This is more common in naval vessels, where one cofferdam may fit several vessels of a class. The cofferdam is also used on occasion in the shipbuilding and ship-repair industry when it is not practical to put a ship in drydock for repair work or modernization. An example of such an application is the lengthening of ships: in some cases a ship is actually cut in two while still in the water, and a new section of ship is floated in to lengthen it. The cutting of the hull is done inside a cofferdam attached directly to the hull of the ship; the cofferdam is then detached before the hull sections are floated apart, and replaced while the hull sections are welded together again. As expensive as this may be to accomplish, the use of a drydock might be even more expensive. Cofferdams are also used in some marine salvage operations. Cofferdams have been used to recover aircraft from water as well, as in the case of Avro Lancaster ED603, which was recovered from the IJsselmeer in 2023 using a cofferdam, allowing for close examination of the wreckage, as well as the location and repatriation of the remains of its crew. Examples A 100-ton open caisson that was lowered more than a mile to the sea floor in attempts to stop the flow of oil in the Deepwater Horizon oil spill has been called a cofferdam. A cofferdam over 1 mile long was built to permit the construction of the Livingstone Channel in the Detroit River (see the main article at Stony Island). The museum battleships USS Alabama (BB-60) and USS North Carolina (BB-55) have had cofferdams since 2003 and 2018, respectively.
This saves much money compared to towing and dry-docking them, and also provides additional security, reducing the chance of the ships sinking and becoming impossible to repair. Types Several types of structure performing this function can be distinguished, depending on how they are constructed and how they are used. Civil and coastal engineering In civil and coastal engineering applications, cofferdams are usually made from interlocking steel sheet piles, driven deep into the bed of the water source in order to create a temporary dam behind which the engineering contractors can carry out their works. After the construction project is complete, the sheet piles can be removed and the area behind them rewetted. Naval architecture A cofferdam is a space between two watertight bulkheads or decks within a ship. It is usually a void (empty) space intended to ensure that the contents of adjacent tanks cannot leak directly from one to the other, which would result in contamination of the contents of one or both compartments. The cofferdam is kept empty at all times, and the ship may have sensors within it to warn if it has begun to fill with liquid. If two different cargoes that react dangerously with each other are carried on the same vessel, one or more cofferdams are usually required between the cargo spaces. Marine salvage When all or part of the main deck of a sunken ship is submerged, flooded spaces cannot be dewatered until all openings are sealed or the effective freeboard is extended above the high-water level. One method of doing this is to build a temporary watertight extension, up to the surface, of the entire hull of the ship or of the space to be dewatered. This watertight extension is a cofferdam. Although they are temporary structures, cofferdams for this purpose have to be strongly built, adequately stiffened, and reinforced to withstand the hydrostatic and other loads imposed on them (a rough sense of the hydrostatic load is given after this section). Large cofferdams are normally restricted to harbor operations. Complete cofferdams cover most or all of the sunken vessel and are equivalent to extensions of the ship's sides to above the water surface. Partial cofferdams are constructed around moderate-sized openings or areas such as a cargo hatch or small deckhouse. They can often be prefabricated and installed as a unit, or prefabricated panels can be joined during erection. When partial cofferdams are used, it may be necessary to compensate for hydrostatic pressure on the deck by shoring the decks. With both complete and partial cofferdams, there is usually a large free surface in the spaces being pumped. Sometimes this can be limited by dewatering one compartment at a time, or compartments in groups, taking into account the beam-strength loads on the ship induced by the load distribution. Small cofferdams are used for pumping or to allow salvors access to spaces that are covered by water at some stage of the tide. They are usually prefabricated and fitted around minor openings. Diving work on cofferdams often involves clearing obstructions, fitting and fastening (including underwater welding), and, where necessary, caulking, bracing and shoring the adjacent structure.
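For a rough sense of the hydrostatic component of those loads (a standard hydrostatics estimate, not a figure from any particular salvage manual), the gauge pressure at depth h in water of density ρ is

    p = ρ g h

so with seawater at ρ ≈ 1025 kg/m³ and g ≈ 9.81 m/s², every 10 m of water depth adds roughly 100 kPa, about one atmosphere, to the pressure that the cofferdam walls and their bracing must resist.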
Ship husbandry There are two common types of dry chamber used in underwater ship husbandry. Open-bottom cofferdams allow divers direct access to the enclosed hull area, system, or opening; the flanged sides of the chamber secure and seal against the hull, acting as an airtight boundary. Open-bottom cofferdams are typically used as diver workspace for rigging or welding, with ventilation for welding or epoxy cure, where there is no opening to the interior of the vessel or where the interior is pressurised in this area. The air space is at the pressure of the water at the bottom of the chamber. Open-top cofferdams allow surface access to the work area below the waterline and are at atmospheric pressure; openings through the hull to the interior of the ship are possible. Portable cofferdams Portable cofferdams are reusable inflatable or frame-and-fabric structures. Inflatable cofferdams are stretched across the site, then inflated with water drawn from the prospective dry area. Frame-and-fabric cofferdams are erected in the water and covered with watertight fabric. Once the area is dry, any water still remaining in the dry area can be siphoned over to the wet area.
Technology
Dams
null
1301424
https://en.wikipedia.org/wiki/Wood%E2%80%93plastic%20composite
Wood–plastic composite
Wood–plastic composites (WPCs) are composite materials made of wood fiber/wood flour and thermoplastic(s) such as polyethylene (PE), polypropylene (PP), polyvinyl chloride (PVC), or polylactic acid (PLA). In addition to wood fiber and plastic, WPCs can also contain other ligno-cellulosic and/or inorganic filler materials. WPCs are a subset of a larger category of materials called natural fiber plastic composites (NFPCs), which may contain non-wood cellulose-based fiber fillers such as pulp fibers, peanut hulls, coffee husk, bamboo, straw, and digestate. Chemical additives provide for integration of polymer and wood flour (powder) while facilitating optimal processing conditions. History The company that invented and patented the process to create WPC was Covema of Milan, founded in 1960 by the Terragni brothers, Dino and Marco. Covema made WPC under the trade name Plastic-Wood. A few years after the invention of Plastic-Wood, the company Icma San Giorgio patented the first process for adding wood fiber/wood flour to thermoplastics. Uses Also sometimes known as composite timber, WPCs are still new materials relative to the long history of natural lumber as a building material. The most widespread use of WPCs in North America is in outdoor deck floors, but they are also used for railings, fences, landscaping timbers, cladding and siding, park benches, molding and trim, prefab houses (under the trade name Woodpecker WPC), window and door frames, and indoor furniture. WPCs were first introduced into the decking market in the early 1990s. Manufacturers claim that WPC is more environmentally friendly and requires less maintenance than the alternatives of solid wood treated with preservatives or solid wood of rot-resistant species. These materials can be molded with or without simulated wood-grain details. Production WPCs are produced by thoroughly mixing ground wood particles and heated thermoplastic resin. The most common method of production is to extrude the material into the desired shape, though injection molding is also used. WPCs may be produced from either virgin or recycled thermoplastics, including high-density polyethylene (HDPE), low-density polyethylene (LDPE), polyvinyl chloride (PVC), polypropylene (PP), acrylonitrile butadiene styrene (ABS), polystyrene (PS), and polylactic acid (PLA); PE-based WPCs are by far the most common. Additives such as colorants, coupling agents, UV stabilizers, blowing agents, foaming agents, and lubricants help tailor the end product to the target area of application. Extruded WPCs are formed into both solid and hollow profiles, and a large variety of injection-molded parts are also produced, from automotive door panels to cell phone covers. In some manufacturing facilities, the constituents are combined and processed in a pelletizing extruder, which produces pellets of the new material; the pellets are then re-melted and formed into the final shape. Other manufacturers complete the finished part in a single step of mixing and extrusion. Due to the addition of organic material, WPCs are usually processed at far lower temperatures than traditional plastics during extrusion and injection molding, and most will begin to burn if processing temperatures climb too high.
Processing WPCs at excessively high temperatures increases the risk of shear burning and discoloration, which result from pushing material that is too hot through a gate that is too small during injection molding. The ratio of wood to plastic in the composite ultimately determines the melt flow index (MFI) of the WPC, with larger amounts of wood generally leading to a lower MFI (a small illustrative calculation appears at the end of this article). Advantages and disadvantages WPCs do not corrode and are highly resistant to rot, decay, and marine borer attack, though they do absorb water into the wood fibers embedded within the material. Water absorption is more pronounced in WPCs with a hydrophilic matrix such as PLA, and it also leads to decreased mechanical stiffness and strength. Mechanical performance in a wet environment can be enhanced by an acetylation treatment. WPCs have good workability and can be shaped using conventional woodworking tools. WPCs are often considered a sustainable material because they can be made using recycled plastics and the waste products of the wood industry. Although these materials extend the lifespan of used and discarded materials, they have a considerable lifespan of their own; the added polymers and adhesives make WPC difficult to recycle again after use. It can, however, be recycled easily into new WPC, much as concrete can. One advantage over wood is the ability of the material to be molded to meet almost any desired shape. A WPC member can be bent and fixed to form strong arching curves. Another major selling point of these materials is their lack of need for paint. They are manufactured in a variety of colors, but are widely available in grays and earth tones. Despite up to 70 percent cellulose content (although 50/50 is more common), the mechanical behavior of WPCs is most similar to that of neat polymers. Neat polymers are polymerized without added solvents. This means that WPCs have a lower strength and stiffness than wood, and they experience time- and temperature-dependent behavior. The wood particles are susceptible to fungal attack, though not as much so as solid wood, and the polymer component is vulnerable to UV degradation. It is possible that the strength and stiffness may be reduced by freeze-thaw cycling, though testing is still being conducted in this area. Some WPC formulations are sensitive to staining from a variety of agents. WPC sandwich boards WPC boards show good overall performance, but monolithic composite sheets are relatively heavy (most often heavier than pure plastics), which limits their use to applications where low weight is not essential. WPC in a sandwich-structured composite form combines the benefits of traditional wood polymer composites with the lightness of sandwich-panel technology. WPC sandwich boards consist of wood polymer composite skins and usually a low-density polymer core, which very effectively increases the panel's rigidity. WPC sandwich boards are used mainly in automotive, transportation, and building applications, but furniture applications are also being developed. Efficient new production processes, often integrated in-line, make it possible to produce stronger and stiffer WPC sandwich boards at lower cost than traditional plastic sheets or monolithic WPC panels. Issues Environmental impact The environmental impact of WPCs is directly affected by the ratio of renewable to non-renewable materials.
The commonly used petroleum-based polymers have a negative environmental impact because they rely on non-renewable raw materials and because plastics are non-biodegradable. Fire hazards The types of plastic normally used in WPC formulations have higher fire hazard properties than wood alone, as plastic has a higher chemical heat content and can melt. The inclusion of plastic as a portion of the composite results in the potential for higher fire hazards in WPCs as compared with wood. Some code officials are becoming increasingly concerned with the fire performance of WPCs.
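As a small illustration of the formulation arithmetic referenced above, the sketch below splits a batch into wood-flour and polymer masses for the 50/50 and high-filler mixes mentioned in the text. The function name and batch size are hypothetical, and the comment about melt flow index merely restates the qualitative trend described above; this is not a published model.

def batch_masses(total_kg, wood_fraction):
    # Split a WPC batch into wood flour and polymer by mass.
    # A higher wood fraction generally means a lower melt flow index,
    # i.e. a melt that is harder to push through a small gate.
    if not 0.0 <= wood_fraction <= 1.0:
        raise ValueError("wood_fraction must be between 0 and 1")
    wood = total_kg * wood_fraction
    return {"wood_flour_kg": wood, "polymer_kg": total_kg - wood}

# A common 50/50 mix and a high-filler 70/30 mix, for a 100 kg batch:
print(batch_masses(100.0, 0.50))  # {'wood_flour_kg': 50.0, 'polymer_kg': 50.0}
print(batch_masses(100.0, 0.70))  # {'wood_flour_kg': 70.0, 'polymer_kg': 30.0}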
Technology
Building materials
null
1301620
https://en.wikipedia.org/wiki/Cardiogenic%20shock
Cardiogenic shock
Cardiogenic shock is a medical emergency resulting from inadequate blood flow to the body's organs due to dysfunction of the heart. Signs of inadequate blood flow include low urine production (<30 mL/hour), cool arms and legs, and decreased level of consciousness. People may also have severely low blood pressure and heart rate. Causes of cardiogenic shock may be cardiomyopathic, arrhythmic, or mechanical. Cardiogenic shock is most commonly precipitated by a heart attack. Treatment of cardiogenic shock depends on the cause, with the initial goal being to improve blood flow to the body. If cardiogenic shock is due to a heart attack, attempts to open the heart's arteries may help. Certain medications, such as dobutamine and milrinone, improve the heart's ability to contract and can also be used. When these measures fail, more advanced options such as mechanical support devices or heart transplantation can be pursued. Cardiogenic shock is a condition that is difficult to fully reverse even with an early diagnosis. However, early initiation of treatment may improve outcomes. Care should also be directed to any other organs affected by the lack of blood flow (e.g., dialysis for the kidneys, mechanical ventilation for lung dysfunction). Mortality rates for cardiogenic shock are high but have been decreasing in the United States, likely owing to its rapid identification and treatment in recent decades; some studies suggest this is also related to new treatment advances. Nonetheless, mortality remains high, and multi-organ failure in addition to cardiogenic shock is associated with higher rates of death. Signs and symptoms The presentation typically includes the following: Anxiety, restlessness, and altered mental state due to decreased blood flow to the brain and subsequent hypoxia. Low blood pressure due to decreased cardiac output. A rapid, weak, thready pulse due to decreased circulation. Cool, clammy, and mottled skin (cutis marmorata) due to vasoconstriction and subsequent hypoperfusion of the skin. Distended jugular veins due to increased jugular venous pressure. Oliguria (low urine output) due to inadequate blood flow to the kidneys if the condition persists. Rapid and deeper respirations (hyperventilation) due to sympathetic nervous system stimulation and acidosis. Fatigue due to hyperventilation and hypoxia. Absent pulse in fast and abnormal heart rhythms. Pulmonary edema, involving fluid back-up in the lungs due to insufficient pumping of the heart. Loss of consciousness, coma, and persistent vegetative state due to loss of blood and oxygen to the brain. Causes Cardiogenic shock is caused by the failure of the heart to pump effectively. It is due to damage to the heart muscle, most often from a heart attack or myocardial contusion. Other causes include abnormal heart rhythms, cardiomyopathy, heart valve problems, ventricular outflow obstruction (i.e. systolic anterior motion in hypertrophic cardiomyopathy), or ventricular septal defects. It can also be caused by sudden decompression (e.g. in an aircraft), where air bubbles are released into the bloodstream (Henry's law), causing heart failure.
Diagnosis Electrocardiogram An electrocardiogram helps to establish the exact diagnosis and guide treatment; it may reveal: abnormal heart rhythms, such as bradycardia (slowed heart rate); myocardial infarction (an ST-elevation MI, or STEMI, is usually more dangerous than a non-STEMI; MIs that affect the ventricles are usually more dangerous than those that affect the atria; and those affecting the left side of the heart, especially the left ventricle, are usually more dangerous than those affecting the right side, unless that side is severely compromised); and signs of cardiomyopathy. Echocardiography Echocardiography may show poor ventricular function, signs of PED, rupture of the interventricular septum, an obstructed outflow tract, or cardiomyopathy. Swan-Ganz catheter The Swan–Ganz catheter or pulmonary artery catheter may assist in the diagnosis by providing information on hemodynamics. Biopsy When cardiomyopathy is suspected as the cause of cardiogenic shock, a biopsy of heart muscle may be needed to make a definite diagnosis. Cardiac index If the cardiac index falls acutely below 2.2 L/min/m2, the person may be in cardiogenic shock (a short worked example of this calculation appears at the end of this article). Treatment Medication therapy Initial management of cardiogenic shock involves medications to augment the heart's function. Certain medications, such as dobutamine or milrinone, enhance the heart's pumping function and are often used first-line to improve the low blood pressure and the delivery of blood to the rest of the body. Patients whose cardiogenic shock is unresponsive to medication therapy may be candidates for more advanced options such as a mechanical circulatory support device. There are several types of mechanical circulatory support devices, the most common being intra-aortic balloon pumps, left ventricular assist devices, and venous-arterial extra-corporeal membrane oxygenation. It is important to note, however, that none of these devices are permanent solutions; rather, they are a bridge to a more definitive therapy such as heart transplantation. Intra-aortic balloon pump An intra-aortic balloon pump is a device placed by a cardiac surgeon into the descending aorta. It consists of a small balloon filled with helium that helps the heart to pump blood by inflating during diastole (the resting phase of the cardiac cycle) and deflating during systole (the contracting phase of the cardiac cycle). Intra-aortic balloon pumps do not directly increase cardiac output, but importantly, they decrease the amount of pressure that the heart has to pump against, thereby allowing more blood flow and oxygen to be delivered to the heart muscle. Intra-aortic balloon pumps have been around for several decades and are the most commonly used first-line mechanical circulatory support devices. The device is not without potential complications, however. These include injury, upon insertion of the device, to arteries supplying the spinal cord, as well as the risks that come with any procedure, such as bleeding and infection. Contraindications to intra-aortic balloon pumps include aortic dissection, an abdominal aortic aneurysm, and irregularly fast heart beats. Left ventricular assist device There are several types of left ventricular assist devices, with the Impella devices being some of the most common. This device is placed by a cardiac surgeon into the left ventricle of the heart and essentially acts as a pump, drawing blood from the left ventricle and pushing it out into the aorta so that it can be delivered to the rest of the body.
Unlike intra-aortic balloon pumps, the Impella acts independently of the cardiac cycle. It can be adjusted to pump at faster rates to move blood out of the left ventricle and into the aorta more quickly, thereby decreasing the amount of work that the left ventricle has to do. While the Impella is commonly used in settings of cardiogenic shock, some evidence suggests that placing an Impella device in acute cardiogenic shock, where the heart suddenly fails to pump, does not necessarily improve survival. Potential complications specific to an Impella device include hemolysis (shearing of the blood cells) as well as the formation of lesions on the heart valves, namely the mitral or aortic valves. Contraindications to Impella device insertion include aortic dissection, the presence of a mechanical aortic valve, and the presence of a blood clot in the left ventricle. Venous-arterial extra-corporeal membrane oxygenation Venous-arterial extra-corporeal membrane oxygenation is a circuit support system that is meant to replace the function of the heart as it heals or awaits a more definitive treatment. It consists of a circuit that drains blood from the patient's venous system, runs that blood through a membrane oxygenator, which adds oxygen and removes carbon dioxide, and ultimately returns the blood to the patient's arterial system, where the newly oxygenated blood can be delivered to the person's organs. Some evidence suggests that the combination of an Impella device and venous-arterial extra-corporeal membrane oxygenation may decrease the heart's pulmonary capillary wedge pressure, thereby decreasing the amount of stress on the cardiac muscle. Because venous-arterial extra-corporeal membrane oxygenation is a very invasive procedure, it is not usually the first device chosen for patients in cardiogenic shock and is often reserved for patients who have not only cardiogenic shock but also respiratory failure and/or concomitant cardiac arrest. Complications of venous-arterial extra-corporeal membrane oxygenation include air embolism, pulmonary edema, and blood clotting in the circuit machine.
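The cardiac index cut-off quoted under Diagnosis is a simple ratio of cardiac output to body surface area. A minimal sketch, using the widely known Mosteller formula for body surface area; the patient values are hypothetical, and the 2.2 L/min/m2 threshold is the one given above.

import math

SHOCK_THRESHOLD = 2.2  # L/min/m^2, the cut-off quoted above

def body_surface_area_m2(height_cm, weight_kg):
    # Body surface area by the Mosteller formula.
    return math.sqrt(height_cm * weight_kg / 3600.0)

def cardiac_index(cardiac_output_l_min, bsa_m2):
    # Cardiac index: cardiac output normalised by body surface area.
    return cardiac_output_l_min / bsa_m2

# Hypothetical example: cardiac output of 3.5 L/min in a 175 cm, 70 kg adult.
bsa = body_surface_area_m2(175.0, 70.0)   # ~1.84 m^2
ci = cardiac_index(3.5, bsa)              # ~1.90 L/min/m^2
print(f"CI = {ci:.2f} L/min/m^2; below threshold: {ci < SHOCK_THRESHOLD}")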
Biology and health sciences
Cardiovascular disease
Health
1301658
https://en.wikipedia.org/wiki/Corylus%20maxima
Corylus maxima
Corylus maxima, the filbert, is a species of hazel in the birch family Betulaceae, native to southeastern Europe and southwestern Asia. Description It is a deciduous shrub tall, with stems up to thick. The leaves are rounded, long by broad, with a coarsely double-serrated margin. The flowers are wind-pollinated catkins produced in late winter; the male (pollen) catkins are pale yellow, long, while the female catkins are bright red and only long. The fruit is a nut produced in clusters of 1–5 together; each nut is long, fully enclosed in a long, tubular involucre (husk). Similar species The filbert is similar to the related common hazel, C. avellana, differing in having the nut more fully enclosed by the tubular involucre. This feature is shared by the beaked hazel C. cornuta of North America, and the Asian beaked hazel C. sieboldiana of eastern Asia. Distribution and habitat The species is native to southeastern Europe and southwestern Asia, from the Balkans to Ordu in Turkey. Uses The filbert nut is edible, and is very similar to the hazelnut (cobnut). Its main use in the United States is as large filler (along with peanuts as small filler) in most containers of mixed nuts. Filberts are sometimes grown in orchards for the nuts, but much less often than the common hazel. The purple-leaved cultivar C. maxima 'Purpurea' is a popular ornamental shrub in gardens. Name In Oregon, "filbert" is used for commercial hazelnuts in general. Use in this manner has faded partly due to the efforts of Oregon's hazelnut growers to brand their product to better appeal to global markets and avoid confusion. The etymology for 'filbert' may trace to Norman French. Saint Philibert's feast day is 20 August (old style) and the plant was possibly renamed after him because the nuts were mature on this day.
Biology and health sciences
Nuts
Plants
1301827
https://en.wikipedia.org/wiki/Last%20universal%20common%20ancestor
Last universal common ancestor
The last universal common ancestor (LUCA) is the hypothesized common ancestral cell from which the three domains of life, the Bacteria, the Archaea, and the Eukarya, originated. The cell had a lipid bilayer; it possessed the genetic code and ribosomes, which translated genetic information from DNA, via RNA, into proteins. The LUCA probably existed no later than 3.6 billion years ago, and possibly as early as 4.3 billion years ago or even earlier. The nature of this point or stage of divergence remains a topic of research. All earlier forms of life preceding this divergence and all extant organisms are generally thought to share common ancestry. On the basis of a formal statistical test, this theory of universal common ancestry (UCA) is supported over competing multiple-ancestry hypotheses. The first universal common ancestor (FUCA) is a hypothetical non-cellular ancestor to LUCA and other now-extinct sister lineages. Whether the genesis of viruses falls before or after the LUCA, as well as the diversity of extant viruses and their hosts, remains a subject of investigation. While no fossil evidence of the LUCA exists, the detailed biochemical similarity of all current life (divided into the three domains) makes its existence widely accepted by biochemists. Its characteristics can be inferred from shared features of modern genomes. These genes describe a complex life form with many co-adapted features, including transcription and translation mechanisms to convert information from DNA to mRNA to proteins. Historical background A phylogenetic tree directly portrays the idea of evolution by descent from a single ancestor. An early tree of life was sketched by Jean-Baptiste Lamarck in his Philosophie zoologique in 1809. Charles Darwin more famously proposed the theory of universal common descent through an evolutionary process in his book On the Origin of Species in 1859: "Therefore I should infer from analogy that probably all the organic beings which have ever lived on this earth have descended from some one primordial form, into which life was first breathed." The last sentence of the book begins with a restatement of the hypothesis. The term "last universal common ancestor" or "LUCA" was first used in the 1990s for such a primordial organism. Inferring LUCA's features An anaerobic thermophile In 2016, Madeline C. Weiss and colleagues genetically analyzed 6.1 million protein-coding genes and 286,514 protein clusters from sequenced prokaryotic genomes representing many phylogenetic trees, and identified 355 protein clusters that were probably common to the LUCA. The results of their analysis are highly specific, though debated. They depict LUCA as "anaerobic, CO2-fixing, H2-dependent with a Wood–Ljungdahl pathway (the reductive acetyl-coenzyme A pathway), N2-fixing and thermophilic. LUCA's biochemistry was replete with FeS clusters and radical reaction mechanisms." The cofactors also reveal "dependence upon transition metals, flavins, S-adenosyl methionine, coenzyme A, ferredoxin, molybdopterin, corrins and selenium. Its genetic code required nucleoside modifications and S-adenosylmethionine-dependent methylations." They show that methanogens and clostridia were basal, near the root of the phylogenetic tree, in the 355 protein lineages examined, and that the LUCA may therefore have inhabited an anaerobic hydrothermal vent setting in a geochemically active environment rich in H2, CO2, and iron, where ocean water interacted with hot magma beneath the ocean floor.
It is even inferred that LUCA also grew from H2 and CO2 via an incomplete reverse Krebs cycle. Other metabolic pathways inferred in LUCA are the pentose phosphate pathway, glycolysis, and gluconeogenesis. Even if phylogenetic evidence points to a hydrothermal vent environment for a thermophilic LUCA, this does not constitute evidence that the origin of life took place at a hydrothermal vent, since mass extinctions may have removed previously existing branches of life. While the gross anatomy of the LUCA can be reconstructed only with much uncertainty, its biochemical mechanisms can be described in some detail, based on the "universal" properties currently shared by all independently living organisms on Earth. The LUCA certainly had genes and a genetic code. Its genetic material was most likely DNA, so that it lived after the RNA world. The DNA was kept double-stranded by an enzyme, DNA polymerase, which recognises the structure and directionality of DNA. The integrity of the DNA was maintained by a group of repair enzymes including DNA topoisomerase. If the genetic code was based on double-stranded DNA, it was expressed by copying the information to single-stranded RNA. The RNA was produced by a DNA-dependent RNA polymerase using nucleotides similar to those of DNA. It had multiple DNA-binding proteins, such as histone-fold proteins. The genetic code was expressed into proteins. These were assembled from 20 free amino acids by translation of a messenger RNA via a mechanism involving ribosomes, transfer RNAs, and a group of related proteins. LUCA was likely capable of sexual interaction in the sense that adaptive gene functions were present that promoted the transfer of DNA between individuals of the population to facilitate genetic recombination. Homologous gene products that promote genetic recombination are present in bacteria, archaea, and eukaryota, such as the RecA protein in bacteria, the RadA protein in archaea, and the Rad51 and Dmc1 proteins in eukaryota. The functionality of LUCA, together with evidence for the early evolution of membrane-dependent biological systems, suggests that LUCA had cellularity and cell membranes. As for the cell's gross structure, it contained a water-based cytoplasm effectively enclosed by a lipid bilayer membrane; it was capable of reproducing by cell division. It tended to exclude sodium and concentrate potassium by means of specific ion transporters (or ion pumps). The cell multiplied by duplicating all its contents, followed by cellular division. The cell used chemiosmosis to produce energy. It also reduced CO2 and oxidized H2 (methanogenesis or acetogenesis) via acetyl-thioesters. By phylogenetic bracketing (analysis of the presumed LUCA's descendant groups), LUCA appears to have been a small, single-celled organism. It likely had a ring-shaped coil of DNA floating freely within the cell. Morphologically, it would likely not have stood out within a mixed population of small modern-day bacteria. The originator of the three-domain system, Carl Woese, stated that in its genetic machinery, the LUCA would have been a "simpler, more rudimentary entity than the individual ancestors that spawned the three [domains] (and their descendants)". An alternative to the search for "universal" traits is to use genome analysis to identify phylogenetically ancient genes. This gives a picture of a LUCA that could live in a geochemically harsh environment and is like modern prokaryotes.
Analysis of biochemical pathways implies the same sort of chemistry as does phylogenetic analysis. Weiss and colleagues write that "Experiments ... demonstrate that ... acetyl-CoA pathway [chemicals used in anaerobic respiration] formate, methanol, acetyl moieties, and even pyruvate arise spontaneously ... from CO2, native metals, and water", a combination present in hydrothermal vents. An experiment shows that Zn2+, Cr3+, and Fe can promote 6 of the 11 reactions of an ancient anabolic pathway called the reverse Krebs cycle in acidic conditions, which implies that LUCA might have inhabited either hydrothermal vents or acidic, metal-rich hydrothermal fields. Because bacteria and archaea differ in the structure of their phospholipids and cell walls, in ion pumping, in most proteins involved in DNA replication, and in glycolysis, it is inferred that LUCA had a permeable membrane without an ion pump. The emergence of Na+/H+ antiporters likely led to the evolution of the impermeable membranes present in eukaryotes, archaea, and bacteria. It is stated that "The late and independent evolution of glycolysis but not gluconeogenesis is entirely consistent with LUCA being powered by natural proton gradients across leaky membranes. Several discordant traits are likely to be linked to the late evolution of cell membranes, notably the cell wall, whose synthesis depends on the membrane and DNA replication". Although LUCA likely had DNA, it is unknown whether it could replicate DNA; it has been suggested that its DNA "might just have been a chemically stable repository" for RNA-based replication. It is likely that the permeable membrane of LUCA was composed of archaeal lipids (isoprenoids) and bacterial lipids (fatty acids). Isoprenoids would have enhanced the stabilization of LUCA's membrane in the surrounding extreme habitat. Nick Lane and coauthors state that "The advantages and disadvantages of incorporating isoprenoids into cell membranes in different microenvironments may have driven membrane divergence, with the later biosynthesis of phospholipids giving rise to the unique G1P and G3P headgroups of archaea and bacteria respectively. If so, the properties conferred by membrane isoprenoids place the lipid divide as early as the origin of life". A 2024 study suggests that LUCA's genome was similar in size to that of modern prokaryotes, coding for some 2,600 proteins; that it respired anaerobically and was an acetogen; and that it had an early CRISPR-Cas-based anti-viral immune system. Alternative interpretations Some other researchers have challenged Weiss et al.'s 2016 conclusions. Sarah Berkemer and Shawn McGlynn argue that Weiss et al. undersampled the families of proteins, so that the phylogenetic trees were not complete and failed to describe the evolution of proteins correctly. There are two risks in attempting to infer LUCA's environment from near-universal gene distributions (as in Weiss et al. 2016). On the one hand, it risks misattributing convergence or horizontal gene transfer events to vertical descent; on the other hand, it risks misattributing potential LUCA gene families to horizontal gene transfer events. A phylogenomic and geochemical analysis of a set of proteins that probably trace to the LUCA shows that it had K+-dependent GTPases, and that the ionic composition and concentration of its intracellular fluid seemingly featured a high K+/Na+ ratio along with Fe2+, Co2+, Ni2+, Mg2+, Mn2+, Zn2+, and pyrophosphate, which would imply a terrestrial hot spring habitat. It possibly had a phosphate-based metabolism.
Further, these proteins were unrelated to autotrophy (the ability of an organism to create its own organic matter), suggesting that the LUCA had a heterotrophic lifestyle (consuming organic matter) and that its growth was dependent on organic matter produced by the physical environment. The presence of the energy-handling enzymes CODH/acetyl-coenzyme A synthase in LUCA could be compatible not only with being an autotroph but also with life as a mixotroph or heterotroph. Weiss et al. 2018 reply that no enzyme defines a trophic lifestyle, and that heterotrophs evolved from autotrophs. A 2024 study by Sawsan Wehbi and colleagues directly estimated, from early protein sequences, the order in which amino acids were added to the genetic code. It found that amino acids that bind metals, and those that contain sulphur, came early in the sequence. The study suggests that sulphur metabolism and catalysis involving metals were important elements of life at the time of LUCA. Evidence that LUCA was mesophilic Several lines of evidence suggest that LUCA was non-thermophilic. The content of G + C nucleotide pairs (compared to the occurrence of A + T pairs) can indicate an organism's thermal optimum, as G–C pairs are more thermally stable owing to an additional hydrogen bond. As a result, they occur more frequently in the rRNA of thermophiles; however, this is not seen in LUCA's reconstructed rRNA. The identification of thermophilic genes in the LUCA has been criticized, as they may instead represent genes that evolved later in archaea or bacteria, then migrated between these via horizontal gene transfer, as in Woese's 1998 hypothesis. For instance, the thermophile-specific topoisomerase, reverse gyrase, was initially attributed to LUCA before an exhaustive phylogenetic study revealed a more recent origin of this enzyme followed by extensive horizontal gene transfer. LUCA could have been a mesophile that fixed CO2 and relied on H2, and lived close to hydrothermal vents. Further evidence that LUCA was mesophilic comes from the amino acid composition of its proteins. The abundance of the I, V, Y, W, R, E, and L amino acids (denoted IVYWREL) in an organism's proteins is correlated with its optimal growth temperature. According to phylogenetic analysis, the IVYWREL content of LUCA's proteins suggests its ideal temperature was below 50°C (both the G + C content and the IVYWREL fraction are simple sequence counts; a short sketch of both appears at the end of this article). Finally, evidence that bacteria and archaea both independently underwent phases of increased and subsequently decreased thermo-tolerance suggests a dramatic post-LUCA climate shift that affected both populations and would explain the apparent pervasiveness of thermo-tolerant genes. Age Studies from 2000 to 2018 have suggested an increasingly ancient time for the LUCA. In 2000, estimates of the LUCA's age ranged from 3.5 to 3.8 billion years ago in the Paleoarchean, a few hundred million years before the earliest fossil evidence of life, for which candidates range in age from 3.48 to 4.28 billion years ago. This placed the origin of the first forms of life shortly after the Late Heavy Bombardment, which was thought to have repeatedly sterilized Earth's surface. However, a 2018 study by Holly Betts and colleagues applied a molecular clock model to the genomic and fossil record (102 species, 29 common protein-coding genes, mostly ribosomal), concluding that LUCA preceded the Late Heavy Bombardment (making the LUCA over 3.9 billion years old). A 2022 study suggested an age of around 3.6–4.2 billion years for the LUCA.
A 2024 study suggested that the LUCA lived around 4.2 billion years ago (with a confidence interval of 4.09–4.33 billion years ago). Root of the tree of life In 1990, a novel concept of the tree of life was presented, dividing the living world into three stems, classified as the domains Bacteria, Archaea, and Eukarya. It was the first tree founded exclusively on molecular phylogenetics, and the first to include the evolution of microorganisms. It has been called a "universal phylogenetic tree in rooted form". This tree and its rooting became the subject of debate. In the meantime, numerous modifications of this tree, mainly concerning the role and importance of horizontal gene transfer for its rooting and early ramifications, have been suggested. Since heredity occurs both vertically and horizontally, the tree of life may have been more weblike or netlike in its early phase and more treelike once it grew three-stemmed. Presumably, horizontal gene transfer decreased with growing cell stability. A modified version of the tree, based on several molecular studies, has its root between a monophyletic domain Bacteria and a clade formed by Archaea and Eukaryota. A small minority of studies place the root in the domain Bacteria, in the phylum Bacillota, or state that the phylum Chloroflexota (formerly Chloroflexi) is basal to a clade comprising Archaea and Eukaryotes and the rest of the bacteria (as proposed by Thomas Cavalier-Smith). Metagenomic analyses recover a two-domain system with the domains Archaea and Bacteria; in this view of the tree of life, Eukaryotes are derived from Archaea. In the later gene pool of LUCA's descendants, sharing a common framework of the AT/GC rule and the standard twenty amino acids, horizontal gene transfer would have been feasible and could have been common. The nature of LUCA remains disputed. In 1994, on the basis of primordial metabolism (sensu Wächtershäuser), Otto Kandler proposed a successive divergence of the three domains of life from a multiphenotypical population of pre-cells, reached by gradual evolutionary improvements (cellularization). These phenotypically diverse pre-cells were metabolising, self-reproducing entities exhibiting frequent mutual exchange of genetic information. Thus, in this scenario there was no "first cell". This may explain the unity and, at the same time, the partition into three lines (the three domains) of life. Kandler's pre-cell theory is supported by Wächtershäuser. In 1998, Carl Woese, based on the RNA world concept, proposed that no individual organism could be considered a LUCA, and that the genetic heritage of all modern organisms derived through horizontal gene transfer among an ancient community of organisms. Other authors concur that there was a "complex collective genome" at the time of the LUCA, and that horizontal gene transfer was important in the evolution of later groups; Nicolas Glansdorff states that LUCA "was in a metabolically and morphologically heterogeneous community, constantly shuffling around genetic material" and "remained an evolutionary entity, though loosely defined and constantly changing, as long as this promiscuity lasted." The theory of a universal common ancestry of life is widely accepted. In 2010, based on "the vast array of molecular sequences now available from all domains of life," D. L. Theobald published a "formal test" of universal common ancestry (UCA).
This deals with the common descent of all extant terrestrial organisms, each being a genealogical descendant of a single species from the distant past. His formal test favoured the existence of a universal common ancestry over a wide class of alternative hypotheses that included horizontal gene transfer. Basic biochemical principles imply that all organisms do have a common ancestry. A proposed earlier, non-cellular ancestor of LUCA is the first universal common ancestor (FUCA). FUCA would therefore be the ancestor of every modern cell as well as of ancient, now-extinct cellular lineages not descended from LUCA. FUCA is assumed to have had other descendants than LUCA, none of which have modern descendants. Some genes of these ancient, now-extinct cell lineages are thought to have been horizontally transferred into the genomes of early descendants of LUCA. LUCA and viruses The origin of viruses remains disputed. Since viruses need host cells for their replication, it is likely that they emerged after the formation of cells. Viruses may even have multiple origins, and different types of viruses may have evolved independently over the history of life. There are different hypotheses for the origins of viruses, for instance an early viral origin from the RNA world or a later viral origin from selfish DNA. Based on how viruses are currently distributed across the bacteria and archaea, the LUCA is suspected of having been prey to multiple viruses, ancestral to those that now have those two domains as their hosts. Furthermore, extensive virus evolution seems to have preceded the LUCA, since the jelly-roll structure of capsid proteins is shared by RNA and DNA viruses across all three domains of life. LUCA's viruses were probably mainly dsDNA viruses in the groups called Duplodnaviria and Varidnaviria. Two other single-stranded DNA virus groups within the Monodnaviria, the Microviridae and the Tubulavirales, likely infected the last bacterial common ancestor. The last archaeal common ancestor was probably host to spindle-shaped viruses. All of these could well have affected the LUCA, in which case each must since have been lost in the host domain where it is no longer extant. By contrast, RNA viruses do not appear to have been important parasites of LUCA, even though straightforward thinking might have envisaged viruses as beginning with RNA viruses directly derived from an RNA world. Instead, by the time the LUCA lived, RNA viruses had probably already been out-competed by DNA viruses. LUCA might have been the ancestor of some viruses, as it might have had at least two descendants: LUCELLA, the Last Universal Cellular Ancestor, the ancestor of all cells, and the archaic virocell ancestor, the ancestor of large-to-medium-sized DNA viruses. Viruses might have evolved before LUCA but after the first universal common ancestor (FUCA), according to the reduction hypothesis, in which giant viruses evolved from primordial cells that became parasitic.
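The two sequence-composition signals used in the mesophily discussion above, the G + C content of rRNA and the IVYWREL fraction of proteins, are both simple letter counts. A minimal sketch; the sequences are made-up placeholders, and the regression quoted in the comment is one published linear fit (Zeldovich and colleagues, 2007), included only to show how such a fraction maps to an optimal growth temperature.

def gc_fraction(rna):
    # Fraction of G and C bases in a nucleotide sequence.
    seq = rna.upper()
    return sum(seq.count(b) for b in "GC") / len(seq)

def ivywrel_fraction(protein):
    # Fraction of I, V, Y, W, R, E, L residues in a protein sequence.
    p = protein.upper()
    return sum(p.count(a) for a in "IVYWREL") / len(p)

# Placeholder sequences, for illustration only.
print(gc_fraction("AUGGCGCUAGGCCUA"))   # 0.6; higher values suggest a hotter optimum
f = ivywrel_fraction("MIVYWRELKQASDPG")
# One published fit is roughly T_opt (deg C) = 937 * f - 335, so a fraction
# near 0.41 corresponds to an optimum of about 50 deg C.
print(f)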
Biology and health sciences
Basics_4
Biology
21002765
https://en.wikipedia.org/wiki/Platax
Platax
Platax is a genus of Indo-Pacific, reef-associated fish belonging to the family Ephippidae. There are currently five known extant species generally accepted to belong to the genus. They are one of the fish taxa commonly known as "batfish". Description Members of the genus Platax are generally similar in shape to the other species in the family. Adults are rather disc-shaped fish, with laterally compressed bodies and large dorsal and anal fins that give individuals a somewhat triangular profile. Platax teira is the largest species, reaching lengths of around . The other species reach maximum lengths of around . Distribution Platax can be found on reefs throughout the entire Indo-Pacific region. Their range extends from the Red Sea in the western Indian Ocean to as far east as Australia. Most Platax species can be found at higher latitudes, as far north as the Ryukyu Islands in Japan and as far south as the eastern coast of Australia. However, the ranges of the individual species are not consistent throughout the genus' range. Platax pinnatus, for example, is most likely not found in the Indian Ocean. A few individuals have been found in Atlantic waters. The species Platax orbicularis has reportedly been observed in Florida waters as a non-native, invasive species; the aquarium industry has been blamed for the spread of this species into the Caribbean. Members of the genus are most common around reefs and shipwrecks. Taxonomy The genus was first used by Cuvier with the publication of his 1816 system of animal classification. He assigned the batfish species Platax boersii to the genus, a classification which still holds to this day. Another species assigned to the genus by Cuvier was Platax ocellatus, a butterflyfish that is now more correctly classified in the genus Chaetodon in Chaetodontidae. In the same work, Platax teira, a species now known to belong to the genus, was classified by Cuvier in a different genus, as Chaetodon teira. A few species have been assigned to the genus that have since been reclassified into other genera. The butterflyfish C. ocellatus mentioned above is one of these species. Another species that has been mistakenly classified as a Platax is the common freshwater angelfish, Pterophyllum scalare. In a joint effort with Valenciennes, Cuvier published a natural history work in 1831 in which the freshwater angelfish was classified as Platax scalaris. The freshwater angelfish, of course, is not closely enough related to the marine batfishes to warrant classification in the same genus. A more scientifically understandable misclassification was that of the species Zabidius novemaculeatus. This species was first described as Platax novemaculeatus by McCulloch when it was discovered in Australia in the early 1900s. The species is now classified in the genus Zabidius, which is still in the same family as the genus Platax. The generic name Platax was coined from the Greek term platys, meaning "flat". This refers to the generally compressed body shape of the fish. They are commonly called "batfish"; however, they are not the only fish taxon called by this name. Fish from the only distantly related family Ogcocephalidae are also commonly known as "batfish". Other families with species that have been referred to as "batfish" include the Dactylopteridae, Drepaneidae, Monacanthidae, and Monodactylidae.
Species There are currently five recognized extant species in this genus: Platax batavianus, Platax boersii, Platax orbicularis, Platax pinnatus, and Platax teira. There are also at least four fossil species known: Platax altissimus Agassiz, 1842 Platax macropterygious Agassiz, 1842 Platax papilio Agassiz, 1842 Platax woodwardii Agassiz, 1842
Biology and health sciences
Acanthomorpha
Animals
21005681
https://en.wikipedia.org/wiki/Giant%20isopod
Giant isopod
A giant isopod is any of the almost 20 species of large isopods in the genus Bathynomus. They are abundant in the cold, deep waters of the Atlantic, Pacific, and Indian Oceans. Bathynomus giganteus, the type species of the genus, is often considered the largest isopod in the world, though other, comparatively poorly known species of Bathynomus may reach a similar size (e.g., B. kensleyi). The giant isopods are noted for their resemblance to the much smaller common woodlouse (pill bug), to which they are related. French zoologist Alphonse Milne-Edwards was the first to describe the genus, in 1879, after his colleague Alexander Agassiz collected a juvenile male B. giganteus from the Gulf of Mexico. This was an exciting discovery for both scientists and the public, as at the time the idea of a lifeless or "azoic" deep ocean had only recently been refuted by the work of Sir Charles Wyville Thomson and others. No females were recovered until 1891. Giant isopods are of little interest to most commercial fisheries, but are infamous for attacking and destroying fish caught in trawls. Specimens caught in the Americas and Japan are sometimes seen in public aquariums. Description Giant isopods are a good example of deep-sea gigantism (cf. giant squid), as they are far larger than the "typical" isopods that are up to . Bathynomus can be divided into "giant" species, where the adults generally are between long, and "supergiant" species, where the adults are typically between . One of the "supergiants", B. giganteus, reaches a typical length between ; an individual claimed to be long has been reported by the popular press, but the largest confirmed was . Their morphology resembles that of their terrestrial relative, the woodlouse. Their bodies are dorsoventrally compressed, protected by a rigid, calcareous exoskeleton composed of overlapping segments. Like some woodlice, they can curl up into a "ball", leaving only the tough shell exposed. This protects them from predators trying to strike at the more vulnerable underside. The first shell segment is fused to the head; the most posterior segments are often fused as well, forming a "caudal shield" over the shortened abdomen (pleon). The large eyes are compound with nearly 4,000 facets, sessile, and spaced far apart on the head. They have two pairs of antennae. The uniramous thoracic legs or pereiopods are arranged in seven pairs, the first of which is modified into maxillipeds to manipulate and bring food to the four sets of jaws. The abdomen has five segments called pleonites, each with a pair of biramous pleopods. These are modified into swimming legs and rami, flat respiratory structures acting as gills. The isopods are pale lilac or pinkish in colour. The individual species generally resemble each other but can be separated by various morphological features, notably the number (7–13) and shape (straight or upturned) of the spines on the pleotelson ("tail"), the shape (simple or bifid) of the central spine on the pleotelson, and the shape and structure of the uropods and pereopods. Giant isopods like Bathynomus giganteus store substantial organic reserves in their midgut gland and fat body, with lipids forming a significant component, particularly in the fat body, where triacylglycerols make up 88% of total lipids. Range Giant isopods have been recorded in the West Atlantic from the US state of Georgia to Brazil, including the Gulf of Mexico and the Caribbean. The four known Atlantic species are B. obtusus, B. miyarei, B. maxeyorum, and B.
giganteus, and the last of these is the only species recorded off the United States. The remaining Bathynomus species are all restricted to the Indo-Pacific. No species occurs in both the Atlantic and the Indo-Pacific. Previous records of B. giganteus from the Indo-Pacific are now considered misidentifications of other species. Giant isopods are unknown from the East Atlantic or East Pacific. The greatest species richness (five species) is found off eastern Australia, but it is possible that other, less well-sampled regions match this figure. In general, the distributions of giant isopods are imperfectly known, and undescribed species may exist. Ecology Giant isopods are important scavengers in the deep-sea benthic environment. They are mainly found from the gloomy sublittoral zone at a depth of to the pitch darkness of the bathyal zone at , where pressures are high and temperatures are very low. A few species from this genus have been reported from shallower depths, notably B. miyarei between , the poorly known B. decemspinosus between , and B. doederleini as shallow as . The depth record for any giant isopod is for B. kensleyi, but this species also occurs as shallow as . Over 80% of B. giganteus are found at a depth between . In regions with both "giant" and "supergiant" species, the former mainly live on the continental slope, while the latter mainly live on the bathyal plain. Although Bathynomus have been recorded in water as warm as , they are primarily found in much colder places. For example, during a survey of the deep-sea fauna of Exuma Sound in the Bahamas, B. giganteus was found to be common in water between , but more abundant towards the lower temperature. In contrast, preliminary studies indicate that B. doederleinii stops feeding when the temperature falls below . This lower temperature limit may explain their absence from temperate and frigid regions, where seas at the depths preferred by Bathynomus often are colder. They are thought to prefer a muddy or clay substrate and lead solitary lives. Although generalist scavengers, these isopods are mostly carnivorous and feed on dead whales, fish, and squid. They may also prey on slow-moving animals such as sea cucumbers, sponges, radiolarians, nematodes, and other zoobenthos, and perhaps even live fish. They are known to attack trawl catches. One giant isopod was filmed attacking a larger dogfish shark in a deepwater trap by latching onto and eating its face. As food is scarce in the deep-ocean biome, giant isopods must take advantage of whatever food is available. They are adapted to long periods of famine and have been known to survive over 5 years without food in captivity. When a significant source of food is encountered, giant isopods gorge themselves to the point that they can barely move. A study examining the digestive system contents of 1651 specimens of B. giganteus found that fish were most commonly eaten, followed by cephalopods and decapods, particularly carideans and galatheids. Giant isopods collected along the east coast of Australia by setting traps show a variation in diversity with water depth: the deeper the water, the fewer the species found, and the larger those species tended to be. The giant isopods found in very deep waters off Australia were compared to those found off Mexico and India.
From the fossil record, Bathynomus is thought to have existed more than 160 million years ago, so it did not evolve independently in all three locations; since then, however, Bathynomus would likely have undergone divergent evolution in the various locations. Yet the giant isopods in all three locations are almost identical in appearance (although some differences are seen, and they are separate species). This reduced phenotypic divergence is linked to the extremely low light levels of their habitat. Reproduction A study of the seasonal abundance of B. giganteus juveniles and adults suggests a peak in reproductive capacity in the spring and winter. This is attributed to a shortage of food during the summer. Mature females develop a brood pouch or marsupium when sexually active, the pouch being formed by overlapping oostegites or brood plates grown from the medial border of the pereiopods. The young isopods emerge from the marsupium as miniatures of the adults, known as mancae. This is not a larval stage; the mancae are fully developed, lacking only the last pair of pereiopods. Human consumption There have been occasional attempts at utilizing giant isopods as a novelty food, for example in East Asian dishes such as ramen. Relative to the animal's total size, there is not very much "meat" to be harvested. The meat is sometimes described as resembling lobster or crab in taste, with a somewhat firmer, chewier texture. Classification The genus currently contains the following species: Bathynomus affinis Bathynomus brucei Bathynomus bruscai †Bathynomus civisi Bathynomus crosnieri Bathynomus decemspinosus Bathynomus doederleini Bathynomus giganteus Bathynomus immanis Bathynomus jamesi Bathynomus kapala Bathynomus keablei Bathynomus kensleyi †Bathynomus kominatoensis Bathynomus lowryi Bathynomus maxeyorum Bathynomus miyarei Bathynomus obtusus Bathynomus pelor Bathynomus propinquus (nomen dubium) Bathynomus raksasa Bathynomus richeri †Bathynomus sismondai †Bathynomus steatopigia †Bathynomus undecimspinosus Bathynomus yucatanensis Bathynomus vaderi Fossilized species Fossilized specimens of Bathynomus are known from as far back as the Early Oligocene (Rupelian) of Italy, with other fossils known from Japan and Spain.
Biology and health sciences
Malacostraca
Animals
26878593
https://en.wikipedia.org/wiki/Australopithecus%20sediba
Australopithecus sediba
Australopithecus sediba is an extinct species of australopithecine recovered from Malapa Cave, Cradle of Humankind, South Africa. It is known from a partial juvenile skeleton, the holotype MH1, and a partial adult female skeleton, the paratype MH2. They date to about 1.98 million years ago in the Early Pleistocene, and coexisted with Paranthropus robustus and Homo ergaster / Homo erectus. Malapa Cave may have been a natural death trap, the base of a long vertical shaft which creatures could accidentally fall into. A. sediba was initially described as a potential human ancestor, and perhaps the progenitor of Homo, but this is contested, and it could also represent a late-surviving population or sister species of A. africanus, which had earlier inhabited the area. MH1 has a brain volume of about 350–440 cc, similar to other australopithecines. The face of MH1 is strikingly similar to that of Homo rather than other australopithecines, with a less pronounced brow ridge, less flared cheek bones, and less prognathism (the amount the face juts out), and there is evidence of a slight chin. However, such characteristics could be due to juvenility and might have been lost with maturity. The teeth are quite small for an australopithecine. MH1 is estimated at tall, which would equate to an adult height of . MH1 and MH2 were estimated to have been about the same weight at . Like other australopithecines, A. sediba is thought to have had a narrow and apelike upper chest, but a broad and humanlike lower chest. Also like other australopithecines, the arm anatomy seems to suggest a degree of climbing and arboreal behaviour. The pelvis indicates A. sediba was capable of a humanlike stride, but the foot points to a peculiar gait not demonstrated in any other hominin, involving hyperpronation of the ankle and a consequent inward rotation of the leg while pushing off. This suite of adaptations may represent a compromise between habitual bipedalism and arboreality. A. sediba seems to have eaten only C3 forest plants such as some grasses and sedges, fruits, leaves, and bark. This contrasts strongly with other early hominins, which ate a mix of C3 and abundant C4 savanna plants, but is similar to modern savanna chimpanzees. No other hominin bears evidence of eating bark as part of its regular diet. Such a generalist diet may have allowed it to occupy a smaller home range than savanna chimps. The Malapa area may have been cooler and more humid than today, featuring closed forests surrounded by more open grasslands. Research history Specimens The first fossil find was a right clavicle, MH1 (UW88-1), in Malapa Cave, Cradle of Humankind, South Africa, discovered by 9-year-old Matthew Berger on 15 August 2008 while exploring the dig site headed by his father, South African palaeoanthropologist Lee Rogers Berger. Further excavation yielded a partial skeleton for MH1, additionally including a partial skull and jawbone fragments, as well as aspects of the arms, fingers, shoulders, ribcage, spine, pelvis, legs, and feet. MH1 is interpreted as having been a juvenile male due to the apparently pronounced development of the brow ridge and canine roots, eversion of the angle of the mandible, and large scarring on the bones. However, anthropologists William Kimbel and Yoel Rak contend that these are unreliable methods of determining sex, and suggest that MH1 is female based on the lack of anterior pillars (columns running alongside the nasal opening down to around the mouth) and a slightly convex subnasal plate, using methods of sex determination for A. africanus.
MH1 was nicknamed "Karabo", which means "answer" in Tswana, by 17-year-old Omphemetse Keepile from St Mary's School, Johannesburg, in a naming contest. She chose this name because, "The fossil represents a solution towards understanding the origins of humankind." Another partial skeleton, the adult MH2, was recovered by Lee on 4 September 2008 with isolated upper teeth, a partial jawbone, a nearly complete right arm, the right scapula, and fragments of the shoulders, right arm, spine, ribs, pelvis, knee joint, and feet. The pubic bone is broad and square, and the muscle scarring on the body is weak to moderate, which suggest that MH2 is female. The presence of species which evolved after 2.36 million years ago and became extinct around 1.5 million years ago indicates the A. sediba layer dates to sometime within this interval during the Early Pleistocene. Uranium–lead dating of a flowstone capping the layer yielded a date of 2.026±0.021 million years ago. Using archaeomagnetic dating, the sediments have a normal magnetic polarity (as opposed to the reverse of the magnetic polarity in modern day) and the only time when this occurred during this interval is between 1.95 and 1.78 million years ago. In 2011, the flowstone was more firmly dated to 1.977±0.002 million years ago again using uranium–lead dating. Taphonomy The cave networks around Malapa comprise long, interconnected cave openings within a area. The Malapa site may have been at the base of an at most cavern system. The cave is at the intersection of a north-northeast and north-northwest chert-filled fracture, and the hominin remains were unearthed in a section on the north-northwest fracture. The layer was exposed by limestone mining in the early 20th century. The cave comprises five sedimentary facies A–E of water-laid sandstone, with A. sediba being recovered from facies D, and more hominin remains from facies E. MH1 and MH2 are separated vertically by at most . Facies D is a , lightly coloured layer overlying flowstone. Small peloids are common, but are fused into large and irregular groups, which indicate they were deposited in a water-logged setting. Peloids may represent faecal matter or soil microbes. The preservation state of MH1 and MH2 indicate they were deposited quickly, were moved very little, and were cemented soon after deposition in a phreatic environment (in a subterranean stream). There is no evidence of scavenging, indicating the area was inaccessible to carnivores. This could all indicate that Malapa Cave was a deathtrap, with inconspicuous cave openings at the surface. Animals may have been lured by the scent of water emanating from the shaft, and carnivores to the scent of dead animals, and then fallen to their deaths. A large debris flow caused the remains to be deposited deeper into the cave along a subterranean stream, perhaps due to a heavy rainstorm. The chamber eventually collapsed and filled with mud. Classification In 2010, Lee and colleagues officially described the species Australopithecus sediba with MH1 as the holotype and MH2 the paratype. The species name "sediba" means "fountain" or "wellspring" in the local Sesotho language. Because A. sediba had many traits in common with Homo ergaster/H. erectus, particularly in the pelvis and legs, the describers postulated that A. sediba was a transitional fossil between Australopithecus and Homo. Dental traits are also suggestive of some close relationship between A. sediba and the ancestor of Homo. 
However, the specimens were found in a stratigraphic unit dating to 1.95–1.78 million years ago, whereas the earliest Homo fossils known at the time dated to 2.33 million years ago (H. habilis from Hadar, Ethiopia). Currently, the oldest Homo specimen is LD 350-1, dating to 2.8–2.75 million years ago, from Ledi-Geraru, Ethiopia. To reconcile the dating discrepancy, the describers also hypothesised that A. sediba evolved from a population of A. africanus (which inhabited the same general region) some time before the Malapa hominins, and that Homo split from A. sediba sometime thereafter. This would imply an 800,000-year ghost lineage between A. africanus and the Malapa hominins. It was also suggested that A. sediba, instead of H. habilis or H. rudolfensis, was the direct ancestor of H. ergaster/H. erectus (the earliest uncontested member of the genus Homo), primarily because the Malapa hominins were dated to 1.98 million years ago in 2011, which at the time predated the earliest representative of H. ergaster/H. erectus. A. sediba is now thought to have been contemporaneous with H. ergaster/H. erectus and Paranthropus robustus in the Cradle of Humankind. Alternatively, A. sediba could also represent a late-surviving morph or sister species of A. africanus unrelated to Homo, which would mean Homo-like traits evolved independently in A. sediba and Homo (homoplasy). The fossil record of early Homo is poorly known and based largely on fragmentary remains, making convincing anatomical comparisons difficult and sometimes unfeasible. A. africanus, A. afarensis, and A. garhi have also been proposed as the true ancestor of Homo, and the matter is much debated. Further, the holotype is a juvenile, which Kimbel and Rak cite in arguing that some of the Homo-like facial characteristics may have been lost with maturity. Phylogenetic analyses in 2023 based on craniodental morphology recovered A. sediba in an unstable, varied position among hominins, so the researchers concluded that adult skeletons of this species are required for appropriate classification. The present classification of australopithecines is in disarray. Australopithecus may be considered a grade taxon whose members are united by their similar physiology rather than by close relations with each other over other hominin genera, and, for the most part, it is largely unclear how any species relates to the others. Anatomy Skull Only the cranial vault of MH1 is preserved; it has a volume of 363 cc. The very back of the brain is estimated to have been 7–10 cc. The australopithecines KNM-ER 23000 (Paranthropus boisei) and Sts 19 (A. africanus), with cerebellar volumes of 40–50 cc, as well as KNM-ER 1813 (H. habilis), KNM-ER 1805 (H. habilis), and KNM-ER 1470 (H. rudolfensis), with volumes of 55–75 cc, were used to estimate the volume of the MH1 cerebellum at about 50 cc. Considering all these, MH1 may have had a brain volume of about 420–440 cc. This is typical for australopithecines. Using trends seen in modern primates between adult and neonate brain size, neonate brain size may have been 153–201 cc, similar to what is presumed for other australopithecines. Brain configuration appears to have been mostly australopithecine-like, but the orbitofrontal cortex appears to have been more humanlike. Overall, A. sediba skull anatomy is most similar to that of A. africanus. However, MH1 has a smaller cranium, a transversely wider cranial vault, more vertically inclined walls of the parietal bone, and more widely spaced temporal lines.
Much like Homo, the brow ridge is less pronounced, the cheekbones are less flared, the face does not jut out as far (less prognathism), and there is a slight chin. However, such characteristics are also found in some A. africanus skulls from Sterkfontein Member 4, which Kimbel and Rak believed could indicate that these Homo-like attributes would have been lost in maturity. Also, if prognathism is measured using the anterior nasal spine instead of the very base of the nose, prognathism in MH1 falls within the range of that seen in A. africanus. The teeth are quite small for an australopithecine, and are more within the range of those of early Homo. However, unlike Homo, the molars progressively increase in size towards the back of the mouth (as opposed to the second molar being the largest), and the cusps are more closely spaced together. The shape of the mandibular ramus (the bar which connects the jaw to the skull) is quite different between MH1 and MH2. That of MH1 is taller and wider; the front and back borders are nearly vertical and parallel, in contrast to the nonparallel borders of MH2 with a concave front border; and the coronoid process of MH1 is angled towards the back with a deep and asymmetrical mandibular notch, whereas MH2 has an uncurved coronoid process with a shallow mandibular notch. Compared to patterns seen in modern great apes, such marked differences exceed what could be explained by sexual dimorphism or the juvenile status of MH1. Skeletally, A. sediba may have been a highly variable species. Torso MH1 and MH2 were estimated to have been roughly the same size, about . This is smaller than many contemporary hominins, but reasonable for an australopithecine. MH1 was about tall, but he was a juvenile at about the same skeletal development as a 12-year-old human child or a 9-year-old chimpanzee. A. sediba, much like earlier and contemporary hominins, appears to have had an ape-like growth rate based on dental development rate, so MH1 may have reached about 85% of its adult size assuming a chimpanzeelike growth trajectory, or 80% assuming a humanlike trajectory. This would equate to roughly . MH1 preserves 4 neck, 6 thoracic, and 2 lumbar vertebrae; and MH2 preserves 2 neck, 7 thoracic, 2 lumbar, and 1 sacral vertebrae. The lordosis (humanlike curvature) and the joints of the neck vertebrae indicate a head posture similar to that of humans. However, the overall anatomy of the neck vertebrae is apelike, and points to a much stiffer neck. A. sediba lacks the humanlike brachial plexus identified in some A. afarensis; in humans, the brachial plexus supplies the nerves innervating the arm and hand muscles, enhancing motor control. Like humans, A. sediba appears to have had a flexible lumbar series comprising 5 vertebrae (as opposed to 6 static vertebrae in non-human apes) and exhibiting lumbar lordosis (human curvature of the spine) consistent with habitual upright posture. However, A. sediba seems to have had a highly mobile lower back and exaggerated lumbar lordosis, which may have been involved in counteracting torques directed inwards while walking in the hyperpronating gait proposed for A. sediba. MH1 preserves 2 upper thoracic, 1 mid-thoracic, and 3 lower thoracic ribs; and MH2 preserves 4 consecutive upper-to-mid-thoracic ribs and 3 lower thoracic ribs joined with the vertebrae. This indicates that A. sediba had an apelike constricted upper chest, but the humanlike anatomy of the pelvis may suggest A. sediba had a broad and humanlike lower chest.
The narrow upper chest would have hindered arm swinging while walking, and would have restricted the rib cage, impeding the heavy breathing needed for fast walking or long-distance running. In contrast, A. sediba seems to have had a humanlike narrow waist, repositioned abdominal external oblique muscles, and wider iliocostalis muscles on the back, which all would improve walking efficiency by counteracting sideward flexion of the torso. The pelvis shares several traits with early Homo and H. ergaster, as well as KNM-ER 3228 from Koobi Fora, Kenya, and OH 28 from Olduvai Gorge, Tanzania, which are unassigned to a species (though they are generally classified as Homo spp.). There was more buttressing along the acetabulum and sacrum, improving hip extension; enlargement of the iliofemoral ligament attachment, shifting the weight behind the centre of rotation of the hip; more buttressing along the acetabulum and iliac blade, improving alternating pelvic tilt; and more distance between the acetabulum and the ischial tuberosity, reducing the moment arm at the hamstrings. This may have allowed a humanlike stride in A. sediba. The hip joint appears to have had a more humanlike pattern of load bearing than the H. habilis specimen OH 62. The birth canal of A. sediba appears to be more gynaecoid (the normal human condition) than those of other australopiths, which are more platypelloid, though A. sediba is not completely gynaecoid, which may be due to smaller neonate brain (and thus head) size. As in humans, the birth canal had an increased diameter sagittally (from front to back) and the pubis bone curled upwards. Upper limbs Like other australopithecines and early Homo, A. sediba had somewhat apelike upper body proportions, with relatively long arms, a high brachial index (forearm to humerus ratio) of 84, and large joint surfaces. It is debated whether the apelike upper limb configuration of australopithecines is indicative of arboreal behaviour or simply a basal trait inherited from the great ape last common ancestor in the absence of major selective pressures to adopt a more humanlike arm anatomy. The shoulders are in a shrugging position, the shoulder blade has a well developed axillary border, and the conoid tubercle (important in muscle attachment around the shoulder joint) is well defined. Muscle scarring patterns on the clavicle indicate a humanlike range of motion. The shoulder blade is most similar to that of orangutans in terms of the size of the glenoid cavity (which forms the shoulder joint) and its angle with the spine, though the shape of the shoulder blade is most similar to humans and chimpanzees. The humerus has a low degree of torsion unlike humans and African apes, which (along with the short clavicle) suggests the shoulder blade was placed farther from the midline as in Homo, though it is positioned higher up the back as in other australopithecines. The apelike qualities of the arms are apparently more marked in A. sediba than in the more ancient A. afarensis, and if A. afarensis is ancestral to A. sediba, this could indicate an adaptive shift towards arboreal behaviour. At the elbow joint, the lateral and medial epicondyles of the humerus are elongated, much like other australopithecines and non-human African apes. The humerus also sports a developed crest at the elbow joint to support the brachioradialis muscle, which flexes the forearm. Like non-human African apes, there is a strong attachment for the biceps on the radius and for the triceps on the ulna.
However, there is less mechanical advantage for the biceps and brachialis. The ulna also supports a strong attachment for the flexor carpi ulnaris muscle. The olecranon fossa is large and deep and there is a prominent trochlear keel, which are important in maintaining stability in the arms while they are extended. The finger bones are long, robust, and curved, and support strong flexor digitorum superficialis muscles important for flexing the fingers. These are sometimes cited as evidence of arboreal behaviour in australopithecines. The hand also features a relatively long thumb and short fingers, much like Homo, which could suggest a precision grip important in creating and using complex stone tools. Lower limbs Like other australopithecines, the ankle, knee, and hip joints indicate habitual bipedalism. The leg bones are quite similar to those of A. afarensis. The ankle is mostly humanlike with perhaps a humanlike Achilles tendon. The talus bone is stout and more like those of non-human apes, and features a medially twisted neck and a low neck torsion angle. It is debated whether A. sediba had a humanlike foot arch or whether the foot was more apelike. The heel bone is inclined at a 45-degree angle, and is markedly angled from front to back, most strongly at the peroneal trochlea. The robust peroneal trochlea indicates strong peroneus muscles, which extend through the calf to the ankle. The foot lacks the lateral plantar tubercle (which may be involved in dissipating forces when the heel hits the ground in a normal human gait) seen in humans and A. afarensis. The gracile body of the heel bone and the robust malleolus (the bony prominence on each side of the ankle) are quite apelike, with less efficient force transfer between the heel bone and the talus, and apelike mobility at the midfoot. The condition in A. sediba is most similar to that seen in gorillas, and the foot may have been functionally equivalent to that of A. africanus. Palaeobiology Diet Analysis of phytoliths (microscopic plant remains) from the dental plaque of both specimens and carbon isotope analysis show a diet of almost exclusively C3 forest plants despite a presumably wide availability of C4 plants in their mixed savanna environment. Such a feeding pattern is also observed in modern savanna chimps and is hypothesised for the Early Pliocene Ardipithecus ramidus, but is quite different from any other early hominin. A total of 38 phytoliths were recovered from two teeth from MH1, of which 15 are consistent with dicots, 9 with monocots, and the other 14 indeterminate. The monocots were probably sourced from C3 grasses and sedges growing in well-watered and shady areas, and other phytoliths were sourced from fruit, leaves, and wood or bark. Though bark is commonly eaten by other primates for its high protein and sugar content, and bark bread has historically been recorded as a famine food, no other hominin is known to have consumed bark regularly. Dental microwear analysis similarly suggests the two Malapa hominins ate hard foods, with complexity values ranging between those of H. erectus and the robust P. robustus. Nonetheless, the jaw does not appear to have been as well adapted for producing high strains compared to other early hominins, which may indicate A. sediba was not as highly dependent on its ability to process mechanically challenging food. The interpretation of A. sediba as a generalist herbivore of C3 forest plants is consistent with it being at least partially arboreal. Such a broad diet may have allowed A.
sediba to have occupied much smaller home ranges than modern savanna chimps, which predominantly consume fruit, as A. sediba was able to fall back on bark and other fracture-resistant foods. Gait While walking, A. sediba may have displayed hyperpronation of the ankle joint, causing an exaggerated transfer of weight inwards during the stance phase. For modern human hyperpronators, the foot is highly inverted during the swing phase, and contact with the ground is first made by the outer border of the foot, causing high torques rotating the entire leg inwards. Similarly, the attachments for the rectus femoris and biceps femoris muscles in A. sediba are consistent with midline-directed strains across the legs, hips, and knees. This mode of walking is not ideal for modern human anatomy, and hyperpronators are at a higher risk of developing plantar fasciitis, shin splints, and tibial stress fractures. To counteract this, A. sediba may have made use of a mobile midfoot as opposed to a stiff humanlike midfoot, which may have prevented overly stressful loading of the ankle. The hyperpronating gait and related suite of adaptations have not been identified in other hominins, and it is unclear why A. sediba would develop this. A mobile midfoot would also be beneficial in extensive climbing behaviour, so hyperpronation may have been a compromise between habitual bipedalism and arboreality. Birth The pelvic inlet for a female A. sediba is estimated to have been long x broad (sagittal x transverse), and since the neonate head size is estimated to have been at longest, the neonate probably entered the pelvic inlet transversely orientated, similar to other hominins. The midplane of the pelvic inlet is constricted to a minimum of , so the neonate may not have needed to be rotated while being birthed. Pelvic inlet dimensions were calculated using a composite reconstruction involving the juvenile male ischium, so the birth canal of an adult female may actually have been larger than calculated. The shoulders are estimated to have been across, so they would not have obstructed birth more than the head would have. Therefore, the neonate would have occupied, at the point of most constriction, about 92.1% of the birth canal, allowing sufficient room for a completely non-rotational birth as is exhibited in non-human apes and possibly other australopithecines (though a semi-rotational birth is also proposed). Though it is possible to pass without any rotation, the midplane expands anteroposteriorly (from front to back), and there would have been more space for the neonate if it rotated so that the longest length of the head lined up with this expansion. Modern humans, in comparison, have a much more laborious and complex birth requiring full rotation of the neonate, as the large brain and thus head size, as well as the rigid shoulders, of the human neonate make it much more difficult to fit through the birth canal. Using an estimate of 145.8–180.4 cc for A. sediba neonate brain size, neonate head size would have been , similar to a chimp neonate. Development The growth trajectory of MH1 seems to have been noticeably different from that of other hominins. The nasomaxillary complex (the bone from the nose to the upper lip) indicates a great degree of bone resorption, most markedly at the tooth roots of the front teeth. This contrasts with A. africanus and A. afarensis, which are depository, reflecting increasing prognathism with age. P.
robustus also features resorption of the upper jaw, but resorption in MH1 extends along the front teeth to the canine fossa near the cheek bones, resulting in a mesognathic (somewhat protrusive) face, as opposed to the flat face of P. robustus. Because resorption occurs so close to the cheek bones, this may explain why MH1 does not present the flaring cheekbones characteristic of A. africanus. Tooth eruption probably did not affect the remodeling of the lower face, as MH1 already had all of its permanent teeth. Nonetheless, smaller cheek tooth size may have permitted a mesognathic face. A. sediba apparently had a diet markedly different from typical early hominin diets, possibly one similar to that of the modern-day olive colobus monkey, which mainly eats young leaves; the two species appear to have similar patterns of facial-bone growth. This may indicate diverging resorption and deposition patterns in A. sediba, reflecting different jaw-loading patterns from other hominins. The margins of the eye sockets of MH1 are curved, whereas they are indented in A. africanus, which may indicate bone deposition in A. sediba in regions where bone resorption occurs in A. africanus. Pathology The right lamina of the sixth thoracic vertebra of MH1 presents a penetrating bone tumour, probably a benign osteoid osteoma. The lesion penetrates deep and is wide, and was still active at the time of death. It did not penetrate the neural canal, so it probably did not cause any neurological complications, and there is no evidence of scoliosis (abnormal curving of the spine). It may have affected movement of the shoulder blade and the upper right quadrant of the back, perhaps causing acute or chronic pain, muscular disturbances, or muscle spasms. Given A. sediba may have required climbing ability, the lesion's position near the insertion for the trapezius, erector spinae, and rhomboid major muscles may have limited normal movement patterns. MH1 represents the earliest diagnosed case of cancer in a hominin by at least 200,000 years, predating the 1.8- to 1.6-million-year-old SK 7923 metatarsal fragment presenting osteosarcoma from Swartkrans, Cradle of Humankind. Tumours are rare in the hominin fossil record, likely due to the generally low incidence rate among primates; early hominins likely had the same incidence rates as modern primates. The juvenile MH1 developing a bone tumour is consistent with the general trend of bone tumours mostly occurring in younger individuals. MH1 and MH2 exhibit perimortem (around the time of death) bone injuries consistent with blunt force trauma. This agrees with the interpretation of the site as the base of a tall shaft, acting as a natural death trap that animals accidentally fell into. MH1 and MH2 may have fallen about onto a sloping pile of gravel, sand, and bat guano, which probably cushioned the fall to some degree. For MH1, perimortem fracturing is most prominent on the jawbone and teeth, though it is possible that these injuries derived from being hit by a falling object in addition to the fall itself. MH2 bears evidence of bracing during injury, with loading through the forearm and hand and impact to the chest; perimortem fracturing is identified on the right side of the body. These are the first deaths in the australopith fossil record confidently not ascribed to predation or natural causes.
Palaeoecology A total of 209 non-hominin fossils were recovered alongside the hominins in facies D and E in 2010, and the taxa identified from these are: the sabre-toothed cat Dinofelis barlowi, the leopard, the African wild cat, the black-footed cat, the brown hyena, the cape fox, the mongooses Atilax mesotes and Mungos, a genet, an African wild dog, a horse, a pig, a klipspringer, a Megalotragus antelope, a large alcelaphine antelope, a relative of the harnessed bushbuck, a relative of the greater kudu, and a hare. Today, the black-footed cat and cape fox are endemic to South African grass-, bush-, and scrublands. Similarly, the brown hyena inhabits dry, open habitats and has never been reported in a closed forest setting. Dinofelis and Atilax, on the other hand, are generally indicators of a closed, wet habitat. This may indicate the area featured a closed habitat as well as grasslands; judging by the home range of the cape fox, both existed within of the site. The coprolite of a carnivore from facies D contained pollen and phytoliths of Podocarpus or Afrocarpus trees, as well as wood fragments from unidentified conifers and dicots. No phytoliths from grasses were found. Today, the Malapa site is a grassland, and Podocarpus and Afrocarpus are found away in the Afromontane forest biome in the canyons above sea level in the Magaliesberg mountain range, where wildfires are less common. This may indicate that Malapa was a cooler, more humid area than today, allowing for enough fire reduction to allow such forest plants to spread that far beyond naturally sheltered areas. Malapa during the Early Pleistocene may have also been at a somewhat lower elevation than today, with the valleys and the Magaliesberg being less pronounced. Australopithecines and early Homo likely preferred cooler conditions than later Homo, as there are no australopithecine sites that were below in elevation at the time of deposition. This would mean that, like chimps, they often inhabited areas with an average diurnal temperature of , dropping to at night. Malapa Cave is currently above sea level. A. sediba lived alongside P. robustus and H. ergaster/H. erectus. Because A. africanus went extinct around this time, it is possible that South Africa was a refugium for Australopithecus until about 2 million years ago, with the beginning of major climatic variability and volatility, and potentially competition with Homo and Paranthropus.
Biology and health sciences
Australopithecines
Biology
24016510
https://en.wikipedia.org/wiki/Airborne%20transmission
Airborne transmission
Airborne transmission or aerosol transmission is transmission of an infectious disease through small particles suspended in the air. Infectious diseases capable of airborne transmission include many of considerable importance both in human and veterinary medicine. The relevant infectious agents may be viruses, bacteria, or fungi, and they may be spread through breathing, talking, coughing, sneezing, raising of dust, spraying of liquids, flushing toilets, or any activities which generate aerosol particles or droplets. Infectious aerosols: physical terminology Aerosol transmission has traditionally been considered distinct from transmission by droplets, but this distinction is no longer used. Respiratory droplets were thought to rapidly fall to the ground after emission, but smaller droplets and aerosols also contain live infectious agents, and can remain in the air longer and travel farther. Individuals generate aerosols and droplets across a wide range of sizes and concentrations, and the amount produced varies widely by person and activity. Larger droplets, greater than 100 μm, usually settle within 2 m. Smaller particles can carry airborne pathogens for extended periods of time. While the concentration of airborne pathogens is greater within 2 m of the source, they can travel farther and accumulate in a room. The traditional size cutoff of 5 μm between airborne particles and respiratory droplets has been discarded, as exhaled particles form a continuum of sizes whose fates depend on environmental conditions in addition to their initial sizes. This erroneous cutoff informed hospital transmission-based precautions for decades. Indoor respiratory secretion transfer data suggest that droplets/aerosols in the 20 μm size range initially travel with the air flow from cough jets and air conditioning like aerosols, but fall out gravitationally at a greater distance as "jet riders". As this size range is most efficiently filtered out in the nasal mucosa, the primordial infection site in COVID-19, aerosols/droplets in this size range may contribute to driving the COVID-19 pandemic. Overview Airborne diseases can be transmitted from one individual to another through the air. The pathogens transmitted may be any kind of microbe, and they may be spread in aerosols, dust or droplets. The aerosols might be generated from sources of infection such as the bodily secretions of an infected individual, or biological wastes. Infectious aerosols may stay suspended in air currents long enough to travel for considerable distances; sneezes, for example, can easily project infectious droplets for dozens of feet (ten or more meters). Airborne pathogens or allergens typically enter the body via the nose, throat, sinuses and lungs. Inhalation of these pathogens affects the respiratory system and can then spread to the rest of the body. Sinus congestion, coughing and sore throats are examples of inflammation of the upper respiratory airway. Air pollution plays a significant role in airborne diseases. Pollutants can influence lung function by increasing airway inflammation. Common infections that spread by airborne transmission include SARS-CoV-2, measles morbillivirus, chickenpox virus, Mycobacterium tuberculosis, influenza virus, enterovirus, norovirus, and less commonly other species of coronavirus, adenovirus, and possibly respiratory syncytial virus.
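The size dependence described above (large droplets settling within a couple of metres, fine aerosols lingering for hours) follows from Stokes' law for small settling spheres, v = ρgd²/(18μ). The following sketch in C is illustrative only: the droplet density, emission height, and air viscosity are assumed round values, and Stokes' law itself holds only for small particles at low Reynolds number.

#include <stdio.h>

int main(void) {
    const double rho = 1000.0;  /* droplet density, kg/m^3 (water, assumed) */
    const double g   = 9.81;    /* gravitational acceleration, m/s^2 */
    const double mu  = 1.8e-5;  /* dynamic viscosity of air, Pa·s (approx.) */
    const double h   = 1.5;     /* assumed emission height, m */
    const double diam_um[] = {1.0, 5.0, 20.0, 100.0};
    for (int i = 0; i < 4; i++) {
        double d = diam_um[i] * 1e-6;              /* diameter, m */
        double v = rho * g * d * d / (18.0 * mu);  /* settling speed, m/s */
        printf("%6.0f um droplet: v = %.2e m/s, falls %.1f m in ~%.0f s\n",
               diam_um[i], v, h, h / v);
    }
    return 0;
}

Under these assumptions a 100 μm droplet falls at roughly 0.3 m/s and reaches the ground in seconds, while a 1 μm aerosol would take many hours, consistent with the behaviour described above.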
Some pathogens which have more than one mode of transmission are also anisotropic, meaning that their different modes of transmission can cause different kinds of diseases, with different levels of severity. Two examples are the bacteria Yersinia pestis (which causes plague) and Francisella tularensis (which causes tularaemia), both of which can cause severe pneumonia if transmitted via the airborne route through inhalation. Poor ventilation enhances transmission by allowing aerosols to spread undisturbed in an indoor space. Crowded rooms are more likely to contain an infected person. The longer a susceptible person stays in such a space, the greater the chance of transmission. Airborne transmission is complex and hard to demonstrate unequivocally, but the Wells-Riley model can be used to make simple estimates of infection probability (a numerical sketch is given below). Some airborne diseases can affect non-humans. For example, Newcastle disease is an airborne avian disease that affects many types of domestic poultry worldwide. It has been suggested that airborne transmission should be classified as being either obligate, preferential, or opportunistic, although there is limited research showing the importance of each of these categories. Obligate airborne infections spread only through aerosols; the most common example of this category is tuberculosis. Preferential airborne infections, such as chicken pox, can be acquired through different routes, but mainly by aerosols. Opportunistic airborne infections such as influenza typically transmit through other routes; however, under favourable conditions, aerosol transmission can occur. Transmission efficiency Environmental factors influence the efficacy of airborne disease transmission; the most evident environmental conditions are temperature and relative humidity. The transmission of airborne diseases is affected by all the factors that influence temperature and humidity, in both meteorological (outdoor) and human (indoor) environments. Circumstances influencing the spread of droplets containing infectious particles can include pH, salinity, wind, air pollution, and solar radiation, as well as human behavior. Airborne infections usually land in the respiratory system, with the agent present in aerosols (infectious particles < 5 μm in diameter). This includes dry particles, often the remnants of evaporated wet particles (droplet nuclei), and wet particles. Relative humidity (RH) plays an important role in the evaporation of droplets and the distance they travel. 30 μm droplets evaporate in seconds. The CDC recommends a minimum of 40% RH indoors to significantly reduce the infectivity of aerosolized virus. An ideal humidity for preventing aerosol respiratory viral transmission at room temperature appears to be between 40% and 60% RH. If the relative humidity goes below 35% RH, infectious virus stays in the air longer. The number of rainy days (more important than total precipitation), mean daily sunshine hours, latitude, and altitude are relevant when assessing the possibility of spread of airborne disease. Some infrequent or exceptional events influence the dissemination of airborne diseases, including tropical storms, hurricanes, typhoons, or monsoons. Climate affects temperature, winds and relative humidity, the main factors affecting the spread, duration and infectiousness of droplets containing infectious particles.
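As noted above, the Wells-Riley model gives simple estimates of airborne infection probability in a shared indoor space: P = 1 − exp(−Iqpt/Q), where I is the number of infectors, q their quanta generation rate, p the breathing rate of a susceptible person, t the exposure time, and Q the room ventilation rate. A minimal sketch in C follows; all parameter values are assumed for illustration, not measured for any particular disease.

#include <stdio.h>
#include <math.h>

/* Wells-Riley infection probability: P = 1 - exp(-I*q*p*t / Q) */
static double wells_riley(double I, double q, double p, double t, double Q) {
    return 1.0 - exp(-(I * q * p * t) / Q);
}

int main(void) {
    const double I = 1.0;   /* infectors present in the room */
    const double q = 10.0;  /* quanta per infector per hour (assumed) */
    const double p = 0.5;   /* breathing rate, m^3/h */
    const double t = 2.0;   /* exposure time, h */
    for (double Q = 50.0; Q <= 400.0; Q *= 2.0)  /* ventilation, m^3/h */
        printf("Q = %5.0f m^3/h -> P(infection) = %4.1f%%\n",
               Q, 100.0 * wells_riley(I, q, p, t, Q));
    return 0;
}

Doubling the ventilation roughly halves the inhaled dose Iqpt/Q, which is one reason ventilation features so prominently among the prevention measures discussed below.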
The influenza virus spreads easily in the Northern Hemisphere winter due to climate conditions that favour the infectiousness of the virus. Isolated weather events decrease the concentration of airborne fungal spores; a few days later, the number of spores increases exponentially. Socioeconomic factors play a minor role in airborne disease transmission. In cities, airborne disease spreads more rapidly than in rural areas and urban outskirts. Rural areas generally favor higher airborne fungal dissemination. Proximity to large bodies of water such as rivers and lakes can enhance airborne disease transmission. A direct association between insufficient ventilation rates and increased COVID-19 transmission has been observed. Prior to COVID-19, standards for ventilation systems focused more on supplying sufficient oxygen to a room, rather than disease-related aspects of air quality. Poor maintenance of air conditioning systems has led to outbreaks of Legionella pneumophila. Hospital-acquired airborne diseases are associated with poorly resourced and poorly maintained medical systems. Air conditioning may reduce transmission by removing contaminated air, but may also contribute to the spread of respiratory secretions inside a room. More recent findings suggest that understanding airflow patterns can be even more crucial than simply increasing air changes per hour. During the COVID-19 pandemic, the common advice was to maximize ventilation, but this may not always be the most effective approach. A room can be well-prepared to prevent the spread of infectious diseases even at a low ACH. This insight could lead to safer building designs and significant energy savings during future pandemics. Prevention A layered risk-management approach to slowing the spread of a transmissible disease attempts to minimize risk through multiple layers of interventions. Each intervention has the potential to reduce risk. A layered approach can include interventions by individuals (e.g. mask wearing, hand hygiene), institutions (e.g. surface disinfection, ventilation, and air filtration measures to control the indoor environment), the medical system (e.g. vaccination) and public health at the population level (e.g. testing, quarantine, and contact tracing). Preventive techniques can include disease-specific immunization as well as nonpharmaceutical interventions such as wearing a respirator and limiting time spent in the presence of infected individuals. Wearing a face mask can lower the risk of airborne transmission to the extent that it limits the transfer of airborne particles between individuals. The type of mask that is effective against airborne transmission depends on the size of the particles. While fluid-resistant surgical masks prevent large droplet inhalation, the smaller particles which form aerosols require a higher level of protection, with filtration masks rated N95 (US) or FFP3 (EU). Use of FFP3 masks by staff managing patients with COVID-19 reduced acquisition of COVID-19 by staff members. Engineering solutions which aim to control or eliminate exposure to a hazard are higher on the hierarchy of control than personal protective equipment (PPE). At the level of physically based engineering interventions, effective ventilation and high frequency air changes, or air filtration through high efficiency particulate filters, reduce detectable levels of virus and other bioaerosols, improving conditions for everyone in an area. Portable air filters, such as those tested by Conway Morris and colleagues,
present a readily deployable solution when existing ventilation is inadequate, for instance in repurposed COVID-19 hospital facilities. The United States Centers for Disease Control and Prevention (CDC) advises the public about vaccination and following careful hygiene and sanitation protocols for airborne disease prevention. Many public health specialists recommend physical distancing (also known as social distancing) to reduce transmission. A 2011 study concluded that vuvuzelas (a type of air horn popular with fans at football games, for example) presented a particularly high risk of airborne transmission, as they spread a much higher number of aerosol particles than, for example, the act of shouting. Exposure does not guarantee infection. The generation of aerosols, adequate transport of aerosols through the air, inhalation by a susceptible host, and deposition in the respiratory tract are all important factors contributing to the overall risk of infection. Furthermore, the infective ability of the virus must be maintained throughout all these stages. In addition, the risk of infection also depends on the competency of the host immune system and the quantity of infectious particles ingested. Antibiotics may be used in dealing with airborne bacterial primary infections, such as pneumonic plague.
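The contribution of ventilation and filtration discussed above can be roughly quantified with a well-mixed room model, in which the aerosol concentration after the source leaves decays as C(t) = C0·exp(−ACH·t), with ACH the effective air changes per hour. This is a deliberate simplification (as noted earlier, airflow patterns matter, not just the change rate), and the ACH values in this C sketch are illustrative.

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Hours needed to remove 99% of suspended aerosol in a well-mixed
       room: solve exp(-ACH * t) = 0.01 for t, i.e. t = ln(100) / ACH. */
    const double ach_values[] = {1.0, 3.0, 6.0, 12.0};
    for (int i = 0; i < 4; i++) {
        double ach = ach_values[i];
        double t99 = log(100.0) / ach;  /* hours */
        printf("ACH = %4.1f -> 99%% aerosol removal in ~%3.0f minutes\n",
               ach, 60.0 * t99);
    }
    return 0;
}

At 6 ACH the model predicts roughly 46 minutes for 99% removal, the same order of magnitude as published clearance guidance for airborne-precaution rooms.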
Biology and health sciences
Concepts
Health
25393281
https://en.wikipedia.org/wiki/Bonding%20in%20solids
Bonding in solids
Solids can be classified according to the nature of the bonding between their atomic or molecular components. The traditional classification distinguishes four kinds of bonding: Covalent bonding, which forms network covalent solids (sometimes called simply "covalent solids") Ionic bonding, which forms ionic solids Metallic bonding, which forms metallic solids Weak intermolecular bonding, which forms molecular solids (sometimes anomalously called "covalent solids") Typical members of these classes have distinctive electron distributions, thermodynamic, electronic, and mechanical properties. In particular, the binding energies of these interactions vary widely. Bonding in solids can be of mixed or intermediate kinds, however, hence not all solids have the typical properties of a particular class, and some can be described as intermediate forms. Basic classes of solids Network covalent solids A network covalent solid consists of atoms held together by a network of covalent bonds (pairs of electrons shared between atoms of similar electronegativity), and hence can be regarded as a single, large molecule. The classic example is diamond; other examples include silicon, quartz and graphite. Properties Network covalent solids typically exhibit high strength (with the exception of graphite), high elastic modulus, high melting point, and brittleness. Their strength, stiffness, and high melting points are consequences of the strength and stiffness of the covalent bonds that hold them together. They are also characteristically brittle because the directional nature of covalent bonds strongly resists the shearing motions associated with plastic flow, and are, in effect, broken when shear occurs. This property results in brittleness for reasons studied in the field of fracture mechanics. Network covalent solids vary from insulating to semiconducting in their behavior, depending on the band gap of the material. Ionic solids A standard ionic solid consists of atoms held together by ionic bonds, that is, by the electrostatic attraction of opposite charges (the result of transferring electrons from atoms with lower electronegativity to atoms with higher electronegativity). Among the ionic solids are compounds formed by alkali and alkaline earth metals in combination with halogens; a classic example is table salt, sodium chloride. Ionic solids are typically of intermediate strength and extremely brittle. Melting points are typically moderately high, but some combinations of molecular cations and anions yield an ionic liquid with a freezing point below room temperature. Vapour pressures in all instances are extraordinarily low; this is a consequence of the large energy required to move a bare charge (or charge pair) from an ionic medium into free space. Metallic solids Metallic solids are held together by a high density of shared, delocalized electrons, resulting in metallic bonding. Classic examples are metals such as copper and aluminum, but some materials are metals in an electronic sense but have negligible metallic bonding in a mechanical or thermodynamic sense (see intermediate forms). Metallic solids have, by definition, no band gap at the Fermi level and hence are conducting. Solids with purely metallic bonding are characteristically ductile and, in their pure forms, have low strength; melting points can be very low (e.g., mercury melts at 234 K (−39 °C)).
These properties are consequences of the non-directional and non-polar nature of metallic bonding, which allows atoms (and planes of atoms in a crystal lattice) to move past one another without disrupting their bonding interactions. Metals can be strengthened by introducing crystal defects (for example, by alloying) that interfere with the motion of dislocations that mediate plastic deformation. Further, some transition metals exhibit directional bonding in addition to metallic bonding; this increases shear strength and reduces ductility, imparting some of the characteristics of a covalent solid (an intermediate case below). Solids of intermediate kinds The four classes of solids permit six pairwise intermediate forms: Ionic to network covalent Covalent and ionic bonding form a continuum, with ionic character increasing with increasing difference in the electronegativity of the participating atoms. Covalent bonding corresponds to sharing of a pair of electrons between two atoms of essentially equal electronegativity (for example, C–C and C–H bonds in aliphatic hydrocarbons). As bonds become more polar, they become increasingly ionic in character. Metal oxides vary along the iono-covalent spectrum. The Si–O bonds in quartz, for example, are polar yet largely covalent, and are considered to be of mixed character. Metallic to network covalent What is in most respects a purely covalent structure can support metallic delocalization of electrons; metallic carbon nanotubes are one example. Transition metals and intermetallic compounds based on transition metals can exhibit mixed metallic and covalent bonding, resulting in high shear strength, low ductility, and elevated melting points; a classic example is tungsten. Molecular to network covalent Materials can be intermediate between molecular and network covalent solids either because of the intermediate organization of their covalent bonds, or because the bonds themselves are of an intermediate kind. Intermediate organization of covalent bonds: Regarding the organization of covalent bonds, recall that classic molecular solids, as stated above, consist of small, non-polar covalent molecules. The example given, paraffin wax, is a member of a family of hydrocarbon molecules of differing chain lengths, with high-density polyethylene at the long-chain end of the series. High-density polyethylene can be a strong material: when the hydrocarbon chains are well aligned, the resulting fibers rival the strength of steel. The covalent bonds in this material form extended structures, but do not form a continuous network. With cross-linking, however, polymer networks can become continuous, and a series of materials spans the range from cross-linked polyethylene, to rigid thermosetting resins, to hydrogen-rich amorphous solids, to vitreous carbon, diamond-like carbons, and ultimately to diamond itself. As this example shows, there can be no sharp boundary between molecular and network covalent solids. Intermediate kinds of bonding: A solid with extensive hydrogen bonding will be considered a molecular solid, yet strong hydrogen bonds can have a significant degree of covalent character. As noted above, covalent and ionic bonds form a continuum between shared and transferred electrons; covalent and weak bonds form a continuum between shared and unshared electrons. In addition, molecules can be polar, or have polar groups, and the resulting regions of positive and negative charge can interact to produce electrostatic bonding resembling that in ionic solids.
Molecular to ionic A large molecule with an ionized group is technically an ion, but its behavior may be largely the result of non-ionic interactions. For example, sodium stearate (the main constituent of traditional soaps) consists entirely of ions, yet it is a soft material quite unlike a typical ionic solid. There is a continuum between ionic solids and molecular solids with little ionic character in their bonding. Metallic to molecular Metallic solids are bound by a high density of shared, delocalized electrons. Although weakly bound molecular components are incompatible with strong metallic bonding, low densities of shared, delocalized electrons can impart varying degrees of metallic bonding and conductivity overlaid on discrete, covalently bonded molecular units, especially in reduced-dimensional systems. Examples include charge transfer complexes. Metallic to ionic The charged components that make up ionic solids cannot exist in the high-density sea of delocalized electrons characteristic of strong metallic bonding. Some molecular salts, however, feature both ionic bonding among molecules and substantial one-dimensional conductivity, indicating a degree of metallic bonding among structural components along the axis of conductivity. Examples include tetrathiafulvalene salts.
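The covalent-ionic continuum described under "Ionic to network covalent" above is often quantified with Pauling's empirical relation, which estimates the fractional ionic character of a bond from the electronegativity difference Δχ of its atoms: f = 1 − exp(−Δχ²/4). The C sketch below is a rough illustration; the electronegativity values are standard Pauling values, but the relation itself is only an empirical estimate, not a physical law.

#include <stdio.h>
#include <math.h>

/* Pauling's estimate of percent ionic character from the
   electronegativity difference of the two bonded atoms. */
static double percent_ionic(double chi_a, double chi_b) {
    double d = chi_a - chi_b;
    return 100.0 * (1.0 - exp(-d * d / 4.0));
}

int main(void) {
    /* Pauling electronegativities: H 2.20, C 2.55, Si 1.90, O 3.44,
       Na 0.93, Cl 3.16 */
    printf("C-H   bond: %5.1f%% ionic (essentially covalent)\n",
           percent_ionic(2.55, 2.20));
    printf("Si-O  bond: %5.1f%% ionic (mixed character, as in quartz)\n",
           percent_ionic(1.90, 3.44));
    printf("Na-Cl bond: %5.1f%% ionic (predominantly ionic)\n",
           percent_ionic(0.93, 3.16));
    return 0;
}

The roughly 45% figure the formula gives for Si–O matches the "mixed character" description of quartz above, while C–H comes out almost purely covalent and Na–Cl predominantly ionic.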
Physical sciences
Basics_2
Physics
8395423
https://en.wikipedia.org/wiki/Chain%20transfer
Chain transfer
In polymer chemistry, chain transfer is a polymerization reaction by which the activity of a growing polymer chain is transferred to another molecule: P• + XR → PX + R• where • is the active center, P is the initial polymer chain, X is the end group, and R is the substituent to which the active center is transferred. Chain transfer reactions reduce the average molecular weight of the final polymer. Chain transfer can be either introduced deliberately into a polymerization (by use of a chain transfer agent) or it may be an unavoidable side-reaction with various components of the polymerization. Chain transfer reactions occur in most forms of addition polymerization, including radical polymerization, ring-opening polymerization, coordination polymerization, and cationic polymerization, as well as anionic polymerization. Types Chain transfer reactions are usually categorized by the nature of the molecule that reacts with the growing chain. Transfer to chain transfer agent. Chain transfer agents have at least one weak chemical bond, which therefore facilitates the chain transfer reaction. Common chain transfer agents include thiols, especially dodecyl mercaptan (DDM), and halocarbons such as carbon tetrachloride. Chain transfer agents are sometimes called modifiers or regulators. Transfer to monomer. Chain transfer to monomer may take place in which the growing polymer chain abstracts an atom from unreacted monomer existing in the reaction medium. Because, by definition, polymerization reactions only take place in the presence of monomer, chain transfer to monomer determines the theoretical maximum molecular weight that can be achieved by a given monomer. Chain transfer to monomer is especially significant in cationic addition polymerization and ring-opening polymerization. Transfer to polymer. Chain transfer may take place with an already existing polymer chain, especially under conditions in which much polymer is present. This often occurs at the end of a radical polymerization, when almost all monomer has been consumed. Branched polymers are formed as monomer adds to the new radical site, which is located along the polymer backbone. The properties of low-density polyethylene are critically determined by the amount of chain transfer to polymer that takes place. Transfer to solvent. In solution polymerization, the solvent can act as a chain transfer agent. Unless the solvent is chosen to be inert, very low molecular weight polymers (oligomers) can result. Historical development Chain transfer was first proposed by Hugh Stott Taylor and William H. Jones in 1930. They were studying the production of polyethylene [(C2H4)n] from ethylene [C2H4] and hydrogen [H2] in the presence of ethyl radicals that had been generated by the thermal decomposition of (Et)2Hg and (Et)4Pb. The observed product mixture could be best explained by postulating "transfer" of radical character from one reactant to another. Flory incorporated the radical transfer concept in his mathematical treatment of vinyl polymerization in 1937. He coined the term "chain transfer" to explain observations that, during polymerization, average polymer chain lengths were usually lower than predicted by rate considerations alone. The first widespread use of chain transfer agents came during World War II in the US Rubber Reserve Company. The "Mutual" recipe for styrene-butadiene rubber was based on the Buna-S recipe, developed by I. G. Farben in the 1930s.
The Buna-S recipe, however, produced a very tough, high molecular weight rubber that required heat processing to break it down and make it processable on standard rubber mills. Researchers at the Standard Oil Development Company and the U.S. Rubber Company discovered that addition of a mercaptan modifier to the recipe not only produced a lower molecular weight and more tractable rubber, but it also increased the polymerization rate. Use of a mercaptan modifier became standard in the Mutual recipe. Although German scientists had become familiar with the actions of chain transfer agents in the 1930s, Germany continued to make unmodified rubber until the end of the war and did not fully exploit their knowledge. Throughout the 1940s and 1950s, progress was made in the understanding of the chain transfer reaction and the behavior of chain transfer agents. Snyder et al. proved that the sulfur from a mercaptan modifier did indeed become incorporated into a polymer chain under the conditions of bulk or emulsion polymerization. A series of papers from Frank R. Mayo (at the U.S. Rubber Co.) laid the foundation for determining the rates of chain transfer reactions. In the early 1950s, workers at DuPont conclusively demonstrated that short and long branching in polyethylene was due to two different mechanisms of chain transfer to polymer. Around the same time, the presence of chain transfer in cationic polymerizations was firmly established. Current activity The nature of chain transfer reactions is currently well understood and is given in standard polymerization textbooks. Since the 1980s, however, a particularly active area of research has been in the various forms of free radical living polymerizations, including catalytic chain transfer polymerization, RAFT, and iodine transfer polymerization (ITP). In these processes, the chain transfer reaction produces a polymer chain with similar chain transfer activity to the original chain transfer agent. Therefore, there is no net loss of chain transfer activity.
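Mayo's rate treatment is commonly summarized by the Mayo equation, which relates the number-average degree of polymerization DPn to its value DPn0 in the absence of transfer and to the chain transfer constant Cs = ktr/kp: 1/DPn = 1/DPn0 + Cs·[S]/[M], where [S] and [M] are the transfer agent and monomer concentrations. The C sketch below is illustrative only; DPn0, Cs, and the concentration ratios are assumed values chosen to show the trend, not data for any specific system.

#include <stdio.h>

int main(void) {
    const double DPn0 = 10000.0; /* degree of polymerization with no transfer agent (assumed) */
    const double Cs   = 15.0;    /* chain transfer constant, e.g. for a thiol (assumed) */
    const double ratio[] = {0.0, 1e-4, 1e-3, 1e-2};  /* [S]/[M] */
    for (int i = 0; i < 4; i++) {
        /* Mayo equation: 1/DPn = 1/DPn0 + Cs*[S]/[M] */
        double DPn = 1.0 / (1.0 / DPn0 + Cs * ratio[i]);
        printf("[S]/[M] = %.0e -> DPn = %6.0f\n", ratio[i], DPn);
    }
    return 0;
}

Even trace amounts of an effective transfer agent collapse the chain length, which is precisely how mercaptan modifiers turned the intractable Buna-S rubber into the processable Mutual recipe described above.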
Physical sciences
Organic reactions
Chemistry
8400335
https://en.wikipedia.org/wiki/Software%20portability
Software portability
Software portability is a design objective for source code to be easily made to run on different platforms. An aid to portability is the generalized abstraction between the application logic and system interfaces. When software with the same functionality is produced for several computing platforms, portability is the key issue for development cost reduction. Strategies Software portability may involve: Transferring installed program files to another computer of basically the same architecture. Reinstalling a program from distribution files on another computer of basically the same architecture. Building executable programs for different platforms from source code; this is what is usually understood by "porting". Similar systems When operating systems of the same family are installed on two computers with processors with similar instruction sets, it is often possible to transfer the files implementing a program between them. In the simplest case, the file or files may simply be copied from one machine to the other. However, in many cases, the software is installed on a computer in a way which depends upon its detailed hardware, software, and setup, with device drivers for particular devices, using installed operating system and supporting software components, and using different drives or directories. In some cases, software, usually described as "portable software", is specifically designed to run on different computers with compatible operating systems and processors, without any machine-dependent installation. Porting is no more than transferring specified directories and their contents. Software installed on portable mass storage devices such as USB sticks can be used on any compatible computer simply by plugging the storage device in, and stores all configuration information on the removable device. Hardware- and software-specific information is often stored in configuration files in specified locations (such as the registry on Windows). Software which is not portable in this sense must be modified much more to support the environment on the destination machine. Different processors As of 2011, the majority of desktop and laptop computers used microprocessors compatible with the 32- and 64-bit x86 instruction sets. Smaller portable devices use processors with different and incompatible instruction sets, such as ARM. The difference between larger and smaller devices is such that detailed software operation is different; an application designed to display suitably on a large screen cannot simply be ported to a pocket-sized smartphone with a tiny screen even if the functionality is similar. Web applications are required to be processor independent, so portability can be achieved by using web programming techniques, such as writing in JavaScript. Such a program can run in a common web browser. Such web applications must, for security reasons, have limited control over the host computer, especially regarding reading and writing files. Non-web programs, installed upon a computer in the normal manner, can have more control, and yet achieve system portability by linking to portable libraries providing the same interface on different systems. Source code portability Software can be compiled and linked from source code for different operating systems and processors if written in a programming language supporting compilation for the platforms. This is usually a task for the program developers; typical users have neither access to the source code nor the required skills.
In open-source environments such as Linux the source code is available to all. In earlier days, source code was often distributed in a standardised format, and could be built into executable code with a standard Make tool for any particular system by moderately knowledgeable users if no errors occurred during the build. Some Linux distributions distribute software to users in source form. In these cases there is usually no need for detailed adaptation of the software for the system; it is distributed in a way which modifies the compilation process to match the system. Effort to port source code Even with seemingly portable languages like C and C++, the effort to port source code can vary considerably. The authors of UNIX/32V (1979) reported that "[t]he (Bourne) shell [...] required by far the largest conversion effort of any supposedly portable program, for the simple reason that it is not portable." Sometimes the effort consists of recompiling the source code, but sometimes it is necessary to rewrite major parts of the software. Many language specifications describe implementation-defined behaviour (e.g., right-shifting a signed integer in C can perform a logical or an arithmetic shift). Operating system functions or third-party libraries might not be available on the target system. Some functions can be available on a target system but exhibit slightly different behavior (for example, a call that fails under Windows with EACCES when it is made on a directory). The program code can also contain unportable things, like the paths of include files, drive letters, or the backslash as a path separator. Implementation-defined details like byte order and the size of an int can also increase the porting effort. In practice, the claim of languages like C and C++ to be "write once, compile anywhere" (WOCA) is arguable.
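A small probe program makes these hazards concrete. The C sketch below compiles on any hosted implementation, but its output (the size of int, the byte order, and the result of right-shifting a negative value) may legitimately differ between platforms, which is exactly the portability concern described above.

#include <stdio.h>
#include <limits.h>

int main(void) {
    /* The size of int is implementation-defined (commonly 4 bytes today). */
    printf("sizeof(int) = %zu bytes, INT_MAX = %d\n", sizeof(int), INT_MAX);

    /* Byte order: inspect the lowest-addressed byte of a known value. */
    unsigned int probe = 1;
    printf("byte order: %s-endian\n",
           *(const unsigned char *)&probe == 1 ? "little" : "big");

    /* Right-shifting a negative signed integer is implementation-defined:
       an arithmetic shift yields -2 here, a logical shift a large
       positive value. */
    int n = -4;
    printf("-4 >> 1 = %d\n", n >> 1);
    return 0;
}

Code that silently assumes one particular answer to any of these probes is the kind that, as noted above, may need major rework rather than a simple recompile when ported.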
Technology
Software development: General
null
1924894
https://en.wikipedia.org/wiki/AND%20gate
AND gate
The AND gate is a basic digital logic gate that implements the logical conjunction (∧) from mathematical logic. AND gates behave according to their truth table. A HIGH output (1) results only if all the inputs to the AND gate are HIGH (1). If any of the inputs to the AND gate are not HIGH, a LOW output (0) results. The function can be extended to any number of inputs by chaining multiple gates together. Symbols There are three symbols for AND gates: the American (ANSI or 'military') symbol and the IEC ('European' or 'rectangular') symbol, as well as the deprecated DIN symbol. Additional inputs can be added as needed. For more information see the Logic gate symbols article. It can also be denoted by the symbol "^" or "&". The AND gate with inputs A and B and output C implements the logical expression C = A·B. This expression may also be denoted as C = A∧B or C = A&B. As of Unicode 16.0.0, the AND gate is also encoded in the Symbols for Legacy Computing Supplement block as . Implementations In logic families like TTL, NMOS, PMOS and CMOS, an AND gate is built from a NAND gate followed by an inverter. In the CMOS implementation above, transistors T1–T4 realize the NAND gate and transistors T5 and T6 the inverter. The need for an inverter makes AND gates less efficient than NAND gates. AND gates can also be made from discrete components and are readily available as integrated circuits in several different logic families. Analytical representation f(a,b) = a·b is the analytical representation of the AND gate. Alternatives If no specific AND gates are available, one can be made from NAND or NOR gates, because NAND and NOR gates are "universal gates", meaning that they can be used to make all the others. AND gates with multiple inputs AND gates with multiple inputs are designated with the same symbol, with more lines leading in. While direct implementations with more than four inputs are possible in logic families like CMOS, these are inefficient. More efficient implementations use a cascade of NAND and NOR gates, as shown in the picture on the right below. This is more efficient than the cascade of AND gates shown on the left.
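A software model makes the truth table and the analytical representation above concrete. The following C sketch treats the inputs as single bits; the bitwise & operator applies the same conjunction, and the product a*b gives the analytical form f(a,b) = a·b.

#include <stdio.h>

/* 2-input AND gate model: output is 1 only when both inputs are 1. */
static int and_gate(int a, int b) {
    return a & b;  /* for single-bit inputs this equals a AND b */
}

int main(void) {
    printf("A B | A AND B | a*b\n");
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d %d |    %d    |  %d\n", a, b, and_gate(a, b), a * b);
    return 0;
}

The loop enumerates all four input combinations, reproducing the truth table: the output and the arithmetic product agree on every row.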
Technology
Digital logic
null
1924901
https://en.wikipedia.org/wiki/OR%20gate
OR gate
The OR gate is a digital logic gate that implements logical disjunction. The OR gate outputs "true" if any of its inputs is "true"; otherwise it outputs "false". The input and output states are normally represented by different voltage levels. Description Any OR gate can be constructed with two or more inputs. It outputs a 1 if any of these inputs are 1, or outputs a 0 only if all inputs are 0. The inputs and outputs are binary digits ("bits") which have two possible logical states. In addition to 1 and 0, these states may be called true and false, high and low, active and inactive, or other such pairs of symbols. Thus it performs a logical disjunction (∨) from mathematical logic. The gate can be represented with the plus sign (+) because it can be used for logical addition. Equivalently, an OR gate finds the maximum between two binary digits, just as the AND gate finds the minimum. Together with the AND gate and the NOT gate, the OR gate is one of three basic logic gates from which any Boolean circuit may be constructed. All other logic gates may be made from these three gates; any function in binary mathematics may be implemented with them. It is sometimes called the inclusive OR gate to distinguish it from XOR, the exclusive OR gate. The behavior of OR is the same as XOR except in the case of a 1 for both inputs. In situations where this never arises (for example, in a full-adder) the two types of gates are interchangeable. This substitution is convenient when a circuit is being implemented using simple integrated circuit chips which contain only one gate type per chip. Symbols There are two logic gate symbols currently representing the OR gate: the American (ANSI or 'military') symbol and the IEC ('European' or 'rectangular') symbol. The DIN symbol is deprecated. The "≥1" on the IEC symbol indicates that the output is activated by at least one active input. As of Unicode 16.0.0, the OR gate is also encoded in the Symbols for Legacy Computing Supplement block as . Hardware description and pinout OR gates are basic logic gates, and are available in the TTL and CMOS IC logic families. The standard 4000 series CMOS IC is the 4071, which includes four independent two-input OR gates. The TTL device is the 7432. There are many offshoots of the original 7432 OR gate, all having the same pinout but different internal architecture, allowing them to operate in different voltage ranges and/or at higher speeds. In addition to the standard 2-input OR gate, 3- and 4-input OR gates are also available. In the CMOS series, these are: 4075: triple 3-input OR gate 4072: dual 4-input OR gate Variations include: 74LS32: quad 2-input OR gate (low power Schottky version) 74HC32: quad 2-input OR gate (high speed CMOS version) - has lower current consumption/wider voltage range 74AC32: quad 2-input OR gate (advanced CMOS version) - similar to 74HC32, but with significantly faster switching speeds and stronger drive 74LVC32: low voltage CMOS version of the same. Implementations Analytical representation f(a,b) = a + b − a·b is the analytical representation of the OR gate. OR gates with many inputs OR gates with multiple inputs are designated with the same symbol, with more lines leading in. While direct implementations with more than three inputs are possible in logic families like CMOS, these are inefficient. More efficient implementations use a cascade of NOR and NAND gates, as shown in the picture below.
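As with the AND gate, a short C sketch can check that the analytical representation above, f(a,b) = a + b − a·b, agrees with the logical OR for every input combination.

#include <stdio.h>

int main(void) {
    /* Compare the bitwise OR with the arithmetic form a + b - a*b
       for all four single-bit input combinations. */
    printf("A B | A OR B | a+b-a*b\n");
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            printf("%d %d |   %d    |    %d\n",
                   a, b, a | b, a + b - a * b);
    return 0;
}

The subtraction of a·b corrects the double counting when both inputs are 1, which is also why the plus sign alone only works as "logical addition" when at most one input is active.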
Alternatives If no specific OR gates are available, one can be made from NAND or NOR gates in the configuration shown in the image below. Any logic gate can be made from a combination of NAND or NOR gates. Wired-OR With active-low open collector logic outputs, as used for control signals in many circuits, an OR function can be produced by wiring together several outputs. This arrangement is called a wired OR. This implementation of an OR function is typically also found in integrated circuits built with N-type-only or P-type-only transistor processes.
Lake stratification
Lake stratification is the tendency of lakes to form separate and distinct thermal layers during warm weather. Typically stratified lakes show three distinct layers: the epilimnion, comprising the top warm layer; the thermocline (or metalimnion), the middle layer, whose depth may change throughout the day; and the colder hypolimnion, extending to the floor of the lake. Every lake has a set mixing regime that is influenced by lake morphometry and environmental conditions. However, human influences in the form of land-use change, rising temperatures, and changing weather patterns have been shown to alter the timing and intensity of stratification in lakes around the globe. Rising air temperatures have the same effect on lake bodies as a physical shift in geographic location, with tropical zones being particularly sensitive. These changes can further alter the fish, zooplankton, and phytoplankton community composition, in addition to creating gradients that alter the availability of dissolved oxygen and nutrients. Definition The thermal stratification of lakes refers to a change in water temperature at different depths in the lake, and is due to the density of water varying with temperature. Cold water is generally denser than warm water, and the epilimnion consists of water that is not as dense as the water in the hypolimnion. However, the temperature of maximum density for freshwater is 4 °C. In temperate regions where lake water warms up and cools through the seasons, a cyclical pattern of overturn occurs that is repeated from year to year as the cold, dense water at the top of the lake sinks (see stable and unstable stratification). For example, in dimictic lakes the lake water turns over during the spring and the fall. This process occurs more slowly in deeper water, and as a result a thermal bar may form. If the stratification of water lasts for extended periods, the lake is meromictic. Heat is transported very slowly between the mixed layers of a stratified lake: heat takes about a month to diffuse just one vertical meter. The interaction between the atmosphere and lakes depends on how solar radiation is distributed, which is why water turbulence, mainly caused by wind stress, can greatly increase the efficiency of heat transfer. In shallow lakes, stratification into epilimnion, metalimnion, and hypolimnion often does not occur, as wind or cooling causes regular mixing throughout the year. These lakes are called polymictic. There is no fixed depth that separates polymictic and stratifying lakes; apart from depth, this is also influenced by turbidity, lake surface area, and climate. The lake mixing regime (e.g. polymictic, dimictic, meromictic) describes the yearly pattern of lake stratification that occurs in most years. However, short-term events can influence lake stratification as well. Heat waves can cause periods of stratification in otherwise mixed, shallow lakes, while mixing events, such as storms or large river discharge, can break down stratification. Weather conditions induce a more rapid response in larger, shallower lakes, so these lakes are more dynamic and less well understood. However, the mixing regimes known to exist in large, shallow lakes are mostly diurnal, and the stratification is easily disturbed.
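The density-temperature relationship that drives stratification can be made concrete with a small Python sketch. It uses a published empirical approximation for the density of pure water; the coefficients follow the McCutcheon et al. (1993) formula and are quoted here as an assumption of this illustration:

def freshwater_density(t_celsius):
    # Approximate density of pure water in kg/m^3, roughly valid for 0-40 degrees C.
    # Coefficients assumed from the McCutcheon et al. (1993) approximation.
    t = t_celsius
    return 1000.0 * (1.0 - (t + 288.9414) / (508929.2 * (t + 68.12963)) * (t - 3.9863) ** 2)

# Density peaks near 4 degrees C, so both colder and warmer water are lighter
# and float above 4-degree water; this permits inverse stratification under
# ice as well as ordinary summer stratification.
for t in (0, 4, 10, 20, 25):
    print(t, round(freshwater_density(t), 3))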
Lake Taihu in China is an example of a large, shallow lake with a diurnal mixing regime: although the lake is very shallow, its water column still stratifies and de-stratifies because solar radiation is absorbed mostly in the turbid upper layer. The tendency for stratification to become disrupted affects the rate of transport and consumption of nutrients, in turn affecting algal growth. Stratification and mixing regimes in Earth's largest lakes are also poorly understood, yet changes in thermal distributions, such as the rising temperatures found over time in Lake Michigan's deep waters, have the ability to significantly alter the largest freshwater ecosystems on the planet. Recent research suggests that seasonally ice-covered dimictic lakes may be described as "cryostratified" or "cryomictic" according to their wintertime stratification regimes. Cryostratified lakes exhibit inverse stratification near the ice surface and have depth-averaged temperatures near 4 °C, while cryomictic lakes have no under-ice thermocline and have depth-averaged winter temperatures closer to 0 °C. Circulation processes during mixing periods move oxygen and other dissolved nutrients, distributing them throughout the body of water. In strongly stratified lakes where benthic organisms are prominent, the oxygen consumed by the respiration of these bottom-dwellers can outpace what little mixing resupplies, resulting in zones of extremely low near-bottom oxygen and nutrient concentrations. This can be harmful to benthic organisms such as shellfish; in the worst cases it can wipe out entire populations. The accumulation of dissolved carbon dioxide in three meromictic lakes in Africa (Lake Nyos and Lake Monoun in Cameroon and Lake Kivu in Rwanda) is potentially dangerous because if one of these lakes is triggered into a limnic eruption, a very large quantity of carbon dioxide can quickly leave the lake and displace the oxygen needed for life by people and animals in the surrounding area. De-stratification In temperate latitudes, many lakes that become stratified during the summer months de-stratify during cooler, windier weather, with surface mixing by wind being a significant driver of this process. This is often referred to as "autumn turn-over". The mixing of the hypolimnion into the mixed water body of the lake recirculates nutrients, particularly phosphorus compounds, trapped in the hypolimnion during the warm weather. It also poses a risk of oxygen sag, as a long-established hypolimnion can be anoxic or very low in oxygen. Lake mixing regimes can shift in response to increasing air temperatures: some dimictic lakes can turn into monomictic lakes, while some monomictic lakes might become meromictic. Many types of aeration equipment have been used to thermally de-stratify lakes, particularly lakes subject to low oxygen or undesirable algal blooms. Natural-resource and environmental managers are often challenged by problems caused by lake and pond thermal stratification. Fish die-offs have been directly associated with thermal gradients, stagnation, and ice cover. Excessive growth of plankton may limit the recreational use of lakes and the commercial use of lake water. With severe thermal stratification in a lake, the quality of drinking water also can be adversely affected.
For fisheries managers, the spatial distribution of fish within a lake is often adversely affected by thermal stratification, which in some cases may indirectly cause large die-offs of recreationally important fish. One commonly used tool to reduce the severity of these lake management problems is to eliminate or lessen thermal stratification through water aeration. Aeration has met with some success, although it has rarely proved to be a panacea. Anthropogenic influences Every lake has a set mixing regime that is influenced by lake morphometry and environmental conditions. However, human influences in the form of land-use change, rising temperatures, and changing weather patterns have been shown to alter the timing and intensity of stratification in lakes around the globe. These changes can further alter the fish, zooplankton, and phytoplankton community composition, in addition to creating gradients that alter the availability of dissolved oxygen and nutrients. There are a number of ways in which changes in human land use influence lake stratification and consequently water conditions. Urban expansion has led to the construction of roads and houses close to previously isolated lakes, sometimes causing increased runoff and pollution. The addition of particulate matter to lake bodies can lower water clarity, resulting in stronger thermal stratification and overall lower average water-column temperatures, which can eventually affect the onset of ice cover. Water quality can also be influenced by the runoff of salt from roads and sidewalks, which often creates a benthic saline layer that interferes with the vertical mixing of surface waters. Further, the saline layer can prevent dissolved oxygen from reaching the bottom sediments, decreasing phosphorus recycling and affecting microbial communities. On a global scale, rising temperatures and changing weather patterns can also affect stratification in lakes. Rising air temperatures have the same effect on lake bodies as a physical shift in geographic location, with tropical zones being particularly sensitive. The intensity and scope of the impact depend on location and lake morphometry, but in some cases can be so extreme as to require a reclassification from monomictic to dimictic (e.g. Great Bear Lake). Globally, lake stratification appears to be becoming more stable, with deeper and steeper thermoclines, and average lake temperature is a main determinant of the stratification response to changing temperatures. Further, surface warming rates are much greater than bottom warming rates, again indicating stronger thermal stratification across lakes. Changes to stratification patterns can also alter the community composition of lake ecosystems. In shallow lakes, temperature increases can alter the diatom community, while in deep lakes the change is reflected in the deep chlorophyll-layer taxa. Changes in mixing patterns and increased nutrient availability can also affect zooplankton species composition and abundance, while decreased nutrient availability can be detrimental to benthic communities and fish habitat. In northern temperate lakes, as climate change continues to cause increased variability in weather patterns as well as in the timing of ice-on and ice-off dates, the resulting year-to-year changes in stratification patterns can also have impacts across multiple trophic levels.
Fluctuations in stratification consistency can accelerate lake deoxygenation, nutrient mineralization, and phosphorus release, with significant consequences for phytoplankton species. Furthermore, these changes in phytoplankton species composition and abundance can lead to adverse effects on the recruitment of fish such as walleye. When these asynchronies between predator and prey populations recur year after year due to changes in stratification, populations may take years to rebound to their "normal" levels. Combined with the typically warmer lake temperatures associated with stratification patterns brought on by climate change, such year-to-year variability in prey populations can be detrimental to cold-water fish species.
Electron paramagnetic resonance
Electron paramagnetic resonance (EPR) or electron spin resonance (ESR) spectroscopy is a method for studying materials that have unpaired electrons. The basic concepts of EPR are analogous to those of nuclear magnetic resonance (NMR), but the spins excited are those of the electrons instead of the atomic nuclei. EPR spectroscopy is particularly useful for studying metal complexes and organic radicals. EPR was first observed at Kazan State University by Soviet physicist Yevgeny Zavoisky in 1944, and was developed independently at the same time by Brebis Bleaney at the University of Oxford. Theory Origin of an EPR signal Every electron has a magnetic moment and spin quantum number s = 1/2, with magnetic components ms = +1/2 or ms = −1/2. In the presence of an external magnetic field with strength B0, the electron's magnetic moment aligns itself either antiparallel (ms = −1/2) or parallel (ms = +1/2) to the field, each alignment having a specific energy due to the Zeeman effect: E = ms ge μB B0, where ge is the electron's so-called g-factor (see also the Landé g-factor), with ge ≈ 2.0023 for the free electron, and μB is the Bohr magneton. Therefore, the separation between the lower and the upper state is ΔE = ge μB B0 for unpaired free electrons. This equation implies (since both ge and μB are constant) that the splitting of the energy levels is directly proportional to the magnetic field's strength, as shown in the diagram below. An unpaired electron can change its spin state by either absorbing or emitting a photon of energy hν such that the resonance condition, hν = ΔE, is obeyed. This leads to the fundamental equation of EPR spectroscopy: hν = ge μB B0. Experimentally, this equation permits a large combination of frequency and magnetic field values, but the great majority of EPR measurements are made with microwaves in the 9000–10000 MHz (9–10 GHz) region, with fields corresponding to about 3500 G (0.35 T). Furthermore, EPR spectra can be generated by either varying the photon frequency incident on a sample while holding the magnetic field constant or doing the reverse. In practice, it is usually the frequency that is kept fixed. A collection of paramagnetic centers, such as free radicals, is exposed to microwaves at a fixed frequency. By increasing the external magnetic field, the gap between the ms = −1/2 and ms = +1/2 energy states is widened until it matches the energy of the microwaves, as represented by the double arrow in the diagram above. At this point the unpaired electrons can move between their two spin states. Since there typically are more electrons in the lower state, due to the Maxwell–Boltzmann distribution (see below), there is a net absorption of energy, and it is this absorption that is monitored and converted into a spectrum. The upper spectrum below is the simulated absorption for a system of free electrons in a varying magnetic field. The lower spectrum is the first derivative of the absorption spectrum. The latter is the most common way to record and publish continuous-wave EPR spectra. For a microwave frequency of 9388.2 MHz, the predicted resonance occurs at a magnetic field of about B0 = hν / (ge μB) = 0.3350 T = 3350 G. Because of electron-nuclear mass differences, the magnetic moment of an electron is substantially larger than the corresponding quantity for any nucleus, so that a much higher electromagnetic frequency is needed to bring about a spin resonance with an electron than with a nucleus at identical magnetic field strengths. For example, for the field of 3350 G shown above, spin resonance occurs near 9388.2 MHz for an electron compared to only about 14.3 MHz for 1H nuclei.
(For NMR spectroscopy, the corresponding resonance equation is hν = gN μN B0, where gN and μN depend on the nucleus under study.) Field modulation As previously mentioned, an EPR spectrum is usually directly measured as the first derivative of the absorption. This is accomplished by using field modulation: a small additional oscillating magnetic field is applied to the external magnetic field at a typical frequency of 100 kHz. By detecting the peak-to-peak amplitude, the first derivative of the absorption is measured. By using phase-sensitive detection, only signals with the same modulation (100 kHz) are detected. This results in higher signal-to-noise ratios. Note that field modulation is unique to continuous-wave EPR measurements; spectra resulting from pulsed experiments are presented as absorption profiles. The same idea underlies the Pound-Drever-Hall technique for frequency locking of lasers to a high-finesse optical cavity. Maxwell–Boltzmann distribution In practice, EPR samples consist of collections of many paramagnetic species, and not single isolated paramagnetic centers. If the population of radicals is in thermodynamic equilibrium, its statistical distribution is described by the Boltzmann distribution: nupper / nlower = exp(−(Eupper − Elower) / kT) = exp(−ΔE / kT) = exp(−hν / kT), where nupper is the number of paramagnetic centers occupying the upper energy state, k is the Boltzmann constant, and T is the thermodynamic temperature. At 298 K, X-band microwave frequencies (ν ≈ 9.75 GHz) give nupper / nlower ≈ 0.998, meaning that the upper energy level has a slightly smaller population than the lower one. Therefore, transitions from the lower to the higher level are more probable than the reverse, which is why there is a net absorption of energy. The sensitivity of the EPR method (i.e., the minimal number of detectable spins Nmin) depends on the photon frequency ν according to Nmin = k1 V / (Q0 kf ν^2 P^(1/2)), where k1 is a constant, V is the sample's volume, Q0 is the unloaded quality factor of the microwave cavity (sample chamber), kf is the cavity filling coefficient, and P is the microwave power in the spectrometer cavity. With kf and P being constants, Nmin ~ (Q0 ν^2)^(−1), i.e., Nmin ~ ν^(−α), where α ≈ 1.5. In practice, α can vary from 0.5 to 4.5 depending on spectrometer characteristics, resonance conditions, and sample size. High sensitivity therefore corresponds to a low detection limit Nmin, and the required parameters are: A high spectrometer frequency, to minimize Nmin as in Eq. 2 (common frequencies are discussed below). A low temperature, to decrease the number of spins in the upper energy level, as shown in Eq. 1. This condition explains why spectra are often recorded on samples at the boiling point of liquid nitrogen or liquid helium. Spectral parameters In real systems, electrons are normally not solitary, but are associated with one or more atoms. There are several important consequences of this: An unpaired electron can gain or lose angular momentum, which can change the value of its g-factor, causing it to differ from ge. This is especially significant for chemical systems with transition-metal ions. Systems with multiple unpaired electrons experience electron–electron interactions that give rise to "fine" structure. This is realized as zero-field splitting and exchange coupling, and can be large in magnitude. The magnetic moment of a nucleus with a non-zero nuclear spin will affect any unpaired electrons associated with that atom. This leads to the phenomenon of hyperfine coupling, analogous to J-coupling in NMR, splitting the EPR resonance signal into doublets, triplets and so forth.
Additional smaller splittings from nearby nuclei are sometimes termed "superhyperfine" coupling. Interactions of an unpaired electron with its environment influence the shape of an EPR spectral line. Line shapes can yield information about, for example, rates of chemical reactions. These effects (g-factor, hyperfine coupling, zero-field splitting, exchange coupling) in an atom or molecule may not be the same for all orientations of an unpaired electron in an external magnetic field. This anisotropy depends upon the electronic structure of the atom or molecule (e.g., free radical) in question, and so can provide information about the atomic or molecular orbital containing the unpaired electron. The g factor Knowledge of the g-factor can give information about a paramagnetic center's electronic structure. An unpaired electron responds not only to a spectrometer's applied magnetic field B0 but also to any local magnetic fields of atoms or molecules. The effective field Beff experienced by an electron is thus written Beff = B0(1 − σ), where σ includes the effects of local fields (σ can be positive or negative). Therefore, the resonance condition (above) is rewritten as follows: hν = ge μB Beff = ge μB B0(1 − σ). The quantity ge(1 − σ) is denoted g and called simply the g-factor, so that the final resonance equation becomes hν = g μB B0. This last equation is used to determine g in an EPR experiment by measuring the field and the frequency at which resonance occurs. If g does not equal ge, the implication is that the ratio of the unpaired electron's spin magnetic moment to its angular momentum differs from the free-electron value. Since an electron's spin magnetic moment is constant (approximately the Bohr magneton), the electron must have gained or lost angular momentum through spin–orbit coupling. Because the mechanisms of spin–orbit coupling are well understood, the magnitude of the change gives information about the nature of the atomic or molecular orbital containing the unpaired electron. In general, the g factor is not a number but a 3×3 matrix. The principal axes of this tensor are determined by the local fields, for example, by the local atomic arrangement around the unpaired spin in a solid or in a molecule. Choosing an appropriate coordinate system (say, x, y, z) allows one to "diagonalize" this tensor, thereby reducing the maximal number of its components from 9 to 3: gxx, gyy and gzz. For a single spin experiencing only the Zeeman interaction with an external magnetic field, the position of the EPR resonance is given by the expression gxxBx + gyyBy + gzzBz. Here Bx, By and Bz are the components of the magnetic field vector in the coordinate system (x, y, z); as the field is rotated, their magnitudes change, and so does the frequency of the resonance. For a large ensemble of randomly oriented but immobile spins (as in a frozen solution or powder), the EPR spectrum consists of three peaks of characteristic shape at frequencies gxxB0, gyyB0 and gzzB0. In the first-derivative spectrum, the low-frequency peak is positive, the high-frequency peak is negative, and the central peak is bipolar. Such situations are commonly observed in powders, and the spectra are therefore called "powder-pattern spectra". In crystals, the number of EPR lines is determined by the number of crystallographically equivalent orientations of the EPR spin (called the "EPR center"). In fluid solutions, where molecules tumble rapidly, the three peaks average to a singlet corresponding to giso, the isotropic g value, where giso = (gxx + gyy + gzz)/3. One elementary step in analyzing an EPR spectrum is to compare giso with the g-factor for the free electron, ge.
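The resonance condition, the Boltzmann population argument, and the g-factor relations above lend themselves to a quick numerical check. A minimal Python sketch (constants are standard physical values to a few significant figures; the function names are illustrative):

import math

H = 6.62607015e-34        # Planck constant, J s
MU_B = 9.2740100783e-24   # Bohr magneton, J/T
K_B = 1.380649e-23        # Boltzmann constant, J/K
G_E = 2.00232             # free-electron g-factor

def resonance_field(nu_hz, g=G_E):
    # Field satisfying h*nu = g*mu_B*B0.
    return H * nu_hz / (g * MU_B)

def g_factor(nu_hz, b0_tesla):
    # g-factor from a measured resonance: g = h*nu / (mu_B * B0).
    return H * nu_hz / (MU_B * b0_tesla)

def population_ratio(nu_hz, temp_k=298.0):
    # Boltzmann ratio n_upper / n_lower = exp(-h*nu / (k*T)).
    return math.exp(-H * nu_hz / (K_B * temp_k))

def g_iso(gxx, gyy, gzz):
    # Isotropic g value: average of the principal components.
    return (gxx + gyy + gzz) / 3.0

print(resonance_field(9.3882e9))   # about 0.335 T (3350 G), as quoted above
print(population_ratio(9.75e9))    # about 0.998 at 298 K, as quoted above
print(g_factor(9.3882e9, 0.3350))  # about 2.0023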
For metal-based radicals, giso is typically well above ge, whereas for organic radicals, giso ≈ ge. The determination of the absolute value of the g factor is challenging due to the lack of a precise estimate of the local magnetic field at the sample location. Therefore, so-called g factor standards are typically measured together with the sample of interest. In the resulting spectrum, the spectral line of the g factor standard is then used as a reference point to determine the g factor of the sample. For the initial calibration of g factor standards, Herb et al. introduced a precise procedure using double resonance techniques based on the Overhauser shift. Hyperfine coupling Since the source of an EPR spectrum is a change in an electron's spin state, the EPR spectrum for a radical (S = 1/2 system) would consist of one line. Greater complexity arises because the spin couples with nearby nuclear spins. The magnitude of the coupling is proportional to the magnetic moment of the coupled nuclei and depends on the mechanism of the coupling. Coupling is mediated by two processes, dipolar (through space) and isotropic (through bond). This coupling introduces additional energy states and, in turn, multi-lined spectra. In such cases, the spacing between the EPR spectral lines indicates the degree of interaction between the unpaired electron and the perturbing nuclei. The hyperfine coupling constant of a nucleus is directly related to the spectral line spacing and, in the simplest cases, is essentially the spacing itself. Two common mechanisms by which electrons and nuclei interact are the Fermi contact interaction and the dipolar interaction. The former applies largely to the case of isotropic interactions (independent of sample orientation in a magnetic field) and the latter to the case of anisotropic interactions (spectra dependent on sample orientation in a magnetic field). Spin polarization is a third mechanism for interactions between an unpaired electron and a nuclear spin, being especially important for π-electron organic radicals, such as the benzene radical anion. The symbols "a" or "A" are used for isotropic hyperfine coupling constants, while "B" is usually employed for anisotropic hyperfine coupling constants. In many cases, the isotropic hyperfine splitting pattern for a radical freely tumbling in a solution (isotropic system) can be predicted. Multiplicity For a radical having M equivalent nuclei, each with a spin of I, the number of EPR lines expected is 2MI + 1. As an example, the methyl radical, CH3, has three 1H nuclei, each with I = 1/2, and so the number of lines expected is 2MI + 1 = 2(3)(1/2) + 1 = 4, which is as observed. For a radical having M1 equivalent nuclei, each with a spin of I1, and a group of M2 equivalent nuclei, each with a spin of I2, the number of lines expected is (2M1I1 + 1)(2M2I2 + 1). As an example, the methoxymethyl radical, H3COCH2, has two equivalent 1H nuclei, each with I = 1/2, and three equivalent 1H nuclei, each with I = 1/2, and so the number of lines expected is (2M1I1 + 1)(2M2I2 + 1) = [2(2)(1/2) + 1][2(3)(1/2) + 1] = 3×4 = 12, again as observed. The above can be extended to predict the number of lines for any number of nuclei. While it is easy to predict the number of lines, the reverse problem, unraveling a complex multi-line EPR spectrum and assigning the various spacings to specific nuclei, is more difficult.
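The multiplicity rule amounts to splitting every line into 2I + 1 equally spaced components, once per equivalent nucleus. A minimal Python sketch of this counting (line positions are offsets from the spectrum centre; the coupling constants in the second example are illustrative values, not measured ones):

def hyperfine_pattern(groups):
    # groups: list of (M, I, a) tuples - M equivalent nuclei of spin I with
    # hyperfine coupling constant a (in gauss).
    # Returns a dict mapping line offset to relative intensity.
    lines = {0.0: 1.0}
    for M, I, a in groups:
        for _ in range(M):
            new = {}
            # Each nucleus splits every existing line into 2I + 1 components.
            offsets = [a * (m - I) for m in range(int(2 * I) + 1)]
            for pos, weight in lines.items():
                for off in offsets:
                    key = round(pos + off, 9)
                    new[key] = new.get(key, 0.0) + weight
            lines = new
    return lines

# Methyl radical: three equivalent 1H (I = 1/2, a = 23 G) -> 4 lines, 1:3:3:1
print(sorted(hyperfine_pattern([(3, 0.5, 23.0)]).items()))
# Two groups of equivalent 1H with different couplings -> 3 x 4 = 12 lines
print(len(hyperfine_pattern([(2, 0.5, 21.0), (3, 0.5, 2.0)])))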
In the often encountered case of I = 1/2 nuclei (e.g., 1H, 19F, 31P), the line intensities produced by a population of radicals, each possessing M equivalent nuclei, will follow Pascal's triangle. For example, the spectrum at the right shows that the three 1H nuclei of the CH3 radical give rise to 2MI + 1 = 2(3)(1/2) + 1 = 4 lines with a 1:3:3:1 ratio. The line spacing gives a hyperfine coupling constant of aH = 23 G for each of the three 1H nuclei. Note again that the lines in this spectrum are first derivatives of absorptions. As a second example, consider the methoxymethyl radical, H3COCH2: the two OCH2 hydrogens give an overall 1:2:1 EPR pattern, each component of which is further split by the three methoxy hydrogens into a 1:3:3:1 pattern, giving a total of 3×4 = 12 lines, a triplet of quartets. A simulation of the observed EPR spectrum is shown and agrees with the 12-line prediction and the expected line intensities. Note that the smaller coupling constant (smaller line spacing) is due to the three methoxy hydrogens, while the larger coupling constant (larger line spacing) is from the two hydrogens bonded directly to the carbon atom bearing the unpaired electron. It is often the case that coupling constants decrease in size with distance from a radical's unpaired electron, but there are some notable exceptions, such as the ethyl radical (CH2CH3). Resonance linewidth definition Resonance linewidths are defined in terms of the magnetic induction B and its corresponding units, and are measured along the x axis of an EPR spectrum, from a line's center to a chosen reference point of the line. These defined widths are called halfwidths and possess some advantages: for asymmetric lines, values of the left and right halfwidth can be given. The halfwidth ΔBh is the distance measured from the line's center to the point at which the absorption has half of its maximal value at the center of the resonance line. The first inclination width ΔB1s is the distance from the center of the line to the point of maximal inclination of the absorption curve. In practice, a full definition of linewidth is used: for symmetric lines, halfwidth ΔB1/2 = 2ΔBh, and full inclination width ΔBmax = 2ΔB1s. Applications EPR/ESR spectroscopy is used in various branches of science, such as biology, chemistry and physics, for the detection and identification of free radicals in the solid, liquid, or gaseous state, and in paramagnetic centers such as F-centers. Chemical reactions EPR is a sensitive, specific method for studying both radicals formed in chemical reactions and the reactions themselves. For example, when ice (solid H2O) is decomposed by exposure to high-energy radiation, radicals such as H, OH, and HO2 are produced. Such radicals can be identified and studied by EPR. Organic and inorganic radicals can be detected in electrochemical systems and in materials exposed to UV light. In many cases, the reactions to make the radicals and the subsequent reactions of the radicals are of interest, while in other cases EPR is used to provide information on a radical's geometry and the orbital of the unpaired electron. EPR is useful in homogeneous catalysis research for the characterization of paramagnetic complexes and reactive intermediates. EPR spectroscopy is a particularly useful tool to investigate their electronic structures, which is fundamental to understanding their reactivity. EPR/ESR spectroscopy can be applied only to systems in which the balance between radical decay and radical formation keeps the free-radical concentration above the detection limit of the spectrometer used.
This can be a particularly severe problem in studying reactions in liquids. An alternative approach is to slow down reactions by studying samples held at cryogenic temperatures, such as 77 K (liquid nitrogen) or 4.2 K (liquid helium). An example of this work is the study of radical reactions in single crystals of amino acids exposed to X-rays, work that sometimes leads to activation energies and rate constants for radical reactions. Medical and biological Medical and biological applications of EPR also exist. Although radicals are very reactive, and so do not normally occur in high concentrations in biology, special reagents have been developed to attach "spin labels", also called "spin probes", to molecules of interest. Specially designed nonreactive radical molecules can attach to specific sites in a biological cell, and EPR spectra then give information on the environment of the spin labels. Spin-labeled fatty acids have been extensively used to study the dynamic organisation of lipids in biological membranes, lipid-protein interactions, and the temperature of the transition from gel to liquid-crystalline phases. Injection of spin-labeled molecules allows for electron resonance imaging of living organisms. A type of dosimetry system has been designed for reference standards and routine use in medicine, based on EPR signals of radicals from irradiated polycrystalline α-alanine (including the alanine deamination radical and the hydrogen abstraction radical). This method is suitable for measuring gamma and X-rays, electrons, protons, and high-linear-energy-transfer (LET) radiation at doses in the 1 Gy to 100 kGy range. EPR can be used to measure microviscosity and micropolarity within drug delivery systems as well as for the characterization of colloidal drug carriers. The study of radiation-induced free radicals in biological substances (for cancer research) poses the additional problem that tissue contains water, and water (due to its electric dipole moment) has a strong absorption band in the microwave region used in EPR spectrometers. Material characterization EPR/ESR spectroscopy is used in geology and archaeology as a dating tool. It can be applied to a wide range of materials such as organic shales, carbonates, sulfates, phosphates, silica or other silicates. When applied to shales, the EPR data correlate with the maturity of the kerogen in the shale. EPR spectroscopy has been used to measure properties of crude oil, such as the asphaltene and vanadium content. The free-radical component of the EPR signal is proportional to the amount of asphaltene in the oil, regardless of any solvents or precipitants that may be present in that oil. When the oil is subjected to a precipitant such as hexane, heptane, or pyridine, however, much of the asphaltene can subsequently be extracted from the oil by gravimetric techniques. The EPR measurement of that extract will then be a function of the polarity of the precipitant that was used. Consequently, it is preferable to apply the EPR measurement directly to the crude. In the case that the measurement is made upstream of a separator (oil production), it may also be necessary to determine the oil fraction within the crude (e.g., if a certain crude contains 80% oil and 20% water, then its EPR signature will be 80% of the signature measured downstream of the separator). EPR has been used by archaeologists for the dating of teeth.
Radiation damage over long periods of time creates free radicals in tooth enamel, which can then be examined by EPR and, after proper calibration, dated. Similarly, material extracted from the teeth of people during dental procedures can be used to quantify their cumulative exposure to ionizing radiation. People (and other mammals) exposed to radiation from the atomic bombs, from the Chernobyl disaster, and from the Fukushima accident have been examined by this method. Radiation-sterilized foods have been examined with EPR spectroscopy, with the aim of developing methods to determine whether a food sample has been irradiated and to what dose. Electrochemistry applications EPR is an important technique in electrochemistry because it detects paramagnetic species and unpaired electrons. The technique has a long history of being coupled to the field, starting with a report in 1958 using EPR to detect free radicals generated via electrochemistry. In an experiment performed by Austen, Given, Ingram, and Peover, solutions of aromatics were electrolyzed and placed into an EPR instrument, resulting in a broad signal response. While this result could not be used for any specific identification, the presence of an EPR signal validated the theory that free-radical species were involved in electron transfer reactions as an intermediate state. Soon after, other groups discovered the possibility of coupling in situ electrolysis with EPR, producing the first resolved spectra of the nitrobenzene anion radical from a mercury electrode sealed within the instrument cavity. Since then, the impact of EPR on the field of electrochemistry has only expanded, serving as a way to monitor free radicals produced by other electrolysis reactions. In more recent years, EPR has also been used within the context of electrochemistry to study redox-flow reactions and batteries. Because of the in situ possibilities, it is possible to construct an electrochemical cell inside the EPR instrument and capture the short-lived intermediates involved, at lower concentrations than NMR requires. Often, NMR and EPR experiments are coupled to get a full picture of the electrochemical reaction over time. It is also possible to determine the concentration of a specific radical species via EPR, as it is proportional to the double integral of the EPR signal referenced to a calibration standard. Specific application examples can be seen in lithium-ion batteries, such as studying sulfate ion formation in Li-S batteries or oxygen-radical formation in Li-O2 batteries via the 4-oxo-TEMP to 4-oxo-TEMPO conversion. Other electrochemical applications of EPR can be found in the context of water-purification reactions and oxygen-reduction reactions. In water-purification reactions, reactive radical species such as singlet oxygen and hydroxyl, oxygen, and hydrogen radicals are consistently present, generated electrochemically in the breakdown of water pollutants. These intermediates are highly reactive and unstable, thus necessitating a technique such as EPR that can identify radical species specifically. Other applications In the field of quantum computing, pulsed EPR is used to control the state of electron spin qubits in materials such as diamond, silicon and gallium arsenide. High-field high-frequency measurements High-field high-frequency EPR measurements are sometimes needed to detect subtle spectroscopic details.
However, for many years the use of electromagnets to produce the needed fields above 1.5 T was impossible, due principally to limitations of traditional magnet materials. The first multifunctional millimeter-band EPR spectrometer with a superconducting solenoid was described in the early 1970s by Y. S. Lebedev's group (Russian Institute of Chemical Physics, Moscow) in collaboration with L. G. Oranski's group (Ukrainian Physics and Technics Institute, Donetsk), which began working in the Institute of Problems of Chemical Physics, Chernogolovka, around 1975. Two decades later, a W-band EPR spectrometer was produced as a small commercial line by the German Bruker company, initiating the expansion of W-band EPR techniques into medium-sized academic laboratories. The EPR waveband is stipulated by the frequency or wavelength of a spectrometer's microwave source (see Table). EPR experiments are often conducted at X and, less commonly, Q bands, mainly due to the ready availability of the necessary microwave components (which were originally developed for radar applications). A second reason for widespread X- and Q-band measurements is that electromagnets can reliably generate fields up to about 1 tesla. However, the low spectral resolution over the g-factor at these wavebands limits the study of paramagnetic centers with comparatively low anisotropic magnetic parameters. Measurements at ν > 40 GHz, in the millimeter wavelength region, offer the following advantages: EPR spectra are simplified due to the reduction of second-order effects at high fields. Increased orientation selectivity and sensitivity in the investigation of disordered systems. The information content and precision of pulse methods, e.g., ENDOR, also increase at high magnetic fields. Accessibility of spin systems with larger zero-field splitting, due to the larger microwave quantum energy hν. Higher spectral resolution over the g-factor, which increases with the irradiation frequency ν and the external magnetic field B0. This is used to investigate the structure, polarity, and dynamics of radical microenvironments in spin-modified organic and biological systems through the spin label and probe method. The figure shows how spectral resolution improves with increasing frequency. Saturation of paramagnetic centers occurs at a comparatively low microwave polarizing field B1, due to the exponential dependence of the number of excited spins on the radiation frequency ν. This effect can be used to study the relaxation and dynamics of paramagnetic centers, as well as superslow motion, in the systems under study. The cross-relaxation of paramagnetic centers decreases dramatically at high magnetic fields, making it easier to obtain more precise and more complete information about the system under study. This was demonstrated experimentally in the study of various biological, polymeric and model systems at D-band EPR. Hardware components Microwave bridge The microwave bridge contains both the microwave source and the detector. Older spectrometers used a vacuum tube called a klystron to generate microwaves, but modern spectrometers use a Gunn diode. Immediately after the microwave source there is an isolator, which serves to attenuate any reflections back to the source that would result in fluctuations in the microwave frequency. The microwave power from the source is then passed through a directional coupler, which splits the microwave power into two paths, one directed towards the cavity and the other towards the reference arm.
Along both paths there is a variable attenuator that facilitates precise control of the flow of microwave power. This in turn allows accurate control over the intensity of the microwaves to which the sample is subjected. On the reference arm, after the variable attenuator there is a phase shifter that sets a defined phase relationship between the reference and reflected signals, which permits phase-sensitive detection. Most EPR spectrometers are reflection spectrometers, meaning that the detector should be exposed only to microwave radiation coming back from the cavity. This is achieved by the use of a device known as a circulator, which directs the microwave radiation (from the branch that is heading towards the cavity) into the cavity. Reflected microwave radiation (after absorption by the sample) is then passed through the circulator towards the detector, ensuring it does not go back to the microwave source. The reference signal and the reflected signal are combined and passed to the detector diode, which converts the microwave power into an electrical current. Reference arm At low powers (less than 1 μW) the diode current is proportional to the microwave power, and the detector is referred to as a square-law detector. At higher power levels (greater than 1 mW) the diode current is proportional to the square root of the microwave power, and the detector is called a linear detector. In order to obtain optimal sensitivity as well as quantitative information, the diode should operate within the linear region. To ensure the detector operates at that level, the reference arm serves to provide a "bias". Magnet In an EPR spectrometer the magnetic assembly includes the magnet with a dedicated power supply as well as a field sensor or regulator such as a Hall probe. EPR spectrometers use one of two types of magnet, the choice being determined by the operating microwave frequency (which determines the range of magnetic field strengths required). The first is an electromagnet, which is generally capable of generating field strengths of up to 1.5 T, making it suitable for measurements at up to Q-band frequencies. To generate field strengths appropriate for W-band and higher-frequency operation, superconducting magnets are employed. The magnetic field is homogeneous across the sample volume and has high stability at static field. Microwave resonator (cavity) The microwave resonator is designed to enhance the microwave magnetic field at the sample in order to induce EPR transitions. It is a metal box with a rectangular or cylindrical shape that resonates with microwaves (like an organ pipe with sound waves). At the resonance frequency of the cavity, microwaves remain inside the cavity and are not reflected back. Resonance means the cavity stores microwave energy, and its ability to do this is given by the quality factor Q, defined by the following equation: Q = 2π × (energy stored) / (energy dissipated per cycle). The higher the value of Q, the higher the sensitivity of the spectrometer. The energy dissipated is the energy lost in one microwave period. Energy may be lost to the side walls of the cavity, as microwaves may generate currents, which in turn generate heat. A consequence of resonance is the creation of a standing wave inside the cavity. Electromagnetic standing waves have their electric and magnetic field components exactly out of phase. This separation is advantageous because the electric field gives rise to non-resonant absorption of the microwaves, which increases the dissipated energy and reduces Q.
To achieve the largest signals, and hence sensitivity, the sample is positioned such that it lies at the magnetic field maximum and the electric field minimum. When the magnetic field strength is such that an absorption event occurs, the value of Q will be reduced due to the extra energy loss. This results in a change of impedance, which serves to stop the cavity from being critically coupled. This means microwaves will now be reflected back to the detector (in the microwave bridge), where an EPR signal is detected. Pulsed electron paramagnetic resonance The dynamics of electron spins are best studied with pulsed measurements. Microwave pulses typically 10–100 ns long are used to control the spins in the Bloch sphere. The spin–lattice relaxation time T1 can be measured with an inversion-recovery experiment. As with pulsed NMR, the Hahn echo is central to many pulsed EPR experiments. A Hahn echo decay experiment can be used to measure the dephasing time, as shown in the animation below. The size of the echo is recorded for different spacings of the two pulses. This reveals the decoherence that is not refocused by the π pulse. In simple cases, an exponential decay is measured, which is described by the T2 time. Pulsed electron paramagnetic resonance can be extended into electron nuclear double resonance spectroscopy (ENDOR), which utilizes waves in the radio frequencies. Since different nuclei coupled to unpaired electrons respond at different frequencies, radio-frequency irradiation is required to drive the nuclear transitions. Since ENDOR yields the coupling resonance between the nuclei and the unpaired electron, the relationship between them can be determined.
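The Hahn echo decay analysis described above can be sketched numerically in Python: synthetic echo amplitudes decaying as exp(−2τ/T2) are generated, and T2 is recovered with a log-linear least-squares fit (all numbers here are made up for illustration, not measured data):

import math
import random

def echo_amplitude(tau_ns, t2_ns=800.0, noise=0.01):
    # Simulated echo size for pulse spacing tau; total evolution time is 2*tau.
    return math.exp(-2.0 * tau_ns / t2_ns) + random.gauss(0.0, noise)

taus = [50.0 * k for k in range(1, 12)]
amps = [echo_amplitude(t) for t in taus]

# Fit ln(A) = -(2*tau)/T2 by ordinary least squares; the slope gives -1/T2.
xs = [2.0 * t for t in taus]
ys = [math.log(max(a, 1e-6)) for a in amps]
n = len(xs)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2)
print("fitted T2 (ns):", -1.0 / slope)   # close to the assumed 800 ns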
Sunda plate
The Sunda plate is a minor tectonic plate straddling the equator in the Eastern Hemisphere on which the majority of Southeast Asia is located. The Sunda plate was formerly considered a part of the Eurasian plate, but GPS measurements have confirmed its independent movement at 10 mm/yr eastward relative to Eurasia. Extent The Sunda plate includes the South China Sea, the Andaman Sea, southern parts of Vietnam, Myanmar, Laos and Thailand, along with Malaysia, Singapore, Cambodia, the southern Philippines, and the islands of Bali, Lombok, West Nusa Tenggara, Borneo, Sumatra, Java, and part of Sulawesi in Indonesia. The Sunda plate is bounded to the east by the Philippine Mobile Belt, the Molucca Sea Collision Zone, the Molucca Sea plate, the Banda Sea plate and the Timor plate; to the south and west by the Australian plate; and to the north by the Burma plate, the Eurasian plate, and the Yangtze plate. The Indo-Australian plate dips beneath the Sunda plate along the Sunda Trench, also known as the Java Trench, which generates frequent earthquakes and tsunamis. The plate margin between the lower Indo-Australian plate and the upper Sunda plate features a unique form of subduction near the island of Timor. The subduction between the two plates started as oceanic crust subducting beneath oceanic crust, but it then transitioned to a continental passive margin subducting beneath an oceanic plate. This rare phenomenon continues because the previously subducted oceanic lithosphere keeps dragging the attached continental margin under the oceanic upper plate. GPS data provide insight into the speed and direction of the colliding Indo-Australian and Sunda plates and the consequences of that collision. These data show that the lower Indo-Australian plate is the main driver of the deformation seen in the nearby Sunda-Banda Arc system. The strain created within this system results in shortening, concentrated most strongly in the forearc and backarc. Active shortening is occurring within the Banda Orogen. The eastern, southern, and western boundaries of the Sunda plate are tectonically complex and seismically active; only the northern boundary is relatively quiescent.
Flavour (particle physics)
In particle physics, flavour or flavor refers to the species of an elementary particle. The Standard Model counts six flavours of quarks and six flavours of leptons. They are conventionally parameterized with flavour quantum numbers that are assigned to all subatomic particles. They can also be described by some of the family symmetries proposed for the quark-lepton generations. Quantum numbers In classical mechanics, a force acting on a point-like particle can only alter the particle's dynamical state, i.e., its momentum, angular momentum, etc. Quantum field theory, however, allows interactions that can alter other facets of a particle's nature, described by non-dynamical, discrete quantum numbers. In particular, the action of the weak force is such that it allows the conversion of quantum numbers describing the mass and electric charge of both quarks and leptons from one discrete type to another. This is known as a flavour change, or flavour transmutation. Due to their quantum description, flavour states may also undergo quantum superposition. In atomic physics the principal quantum number of an electron specifies the electron shell in which it resides, which determines the energy level of the whole atom. Analogously, the five flavour quantum numbers (isospin, strangeness, charm, bottomness and topness) can characterize the quantum state of quarks, by the degree to which they exhibit the six distinct flavours (u, d, c, s, t, b). Composite particles can be created from multiple quarks, forming hadrons, such as mesons and baryons, each possessing unique aggregate characteristics, such as different masses, electric charges, and decay modes. A hadron's overall flavour quantum numbers depend on the numbers of constituent quarks of each particular flavour. Conservation laws All of the various charges discussed above are conserved by the fact that the corresponding charge operators can be understood as generators of symmetries that commute with the Hamiltonian. Thus, the eigenvalues of the various charge operators are conserved. Absolutely conserved quantum numbers in the Standard Model are: electric charge (Q) weak isospin (T3) baryon number (B) lepton number (L) In some theories, such as the grand unified theory, individual baryon and lepton number conservation can be violated, as long as the difference between them (B − L) is conserved (see chiral anomaly). Strong interactions conserve all flavours, but all flavour quantum numbers are violated (changed, non-conserved) by electroweak interactions. Flavour symmetry If there are two or more particles which have identical interactions, then they may be interchanged without affecting the physics. All (complex) linear combinations of these two particles give the same physics, as long as the combinations are orthogonal, or perpendicular, to each other. In other words, the theory possesses symmetry transformations such as (u, d) → M(u, d), where u and d are the two fields (representing the various generations of leptons and quarks, see below), and M is any unitary matrix with a unit determinant. Such matrices form a Lie group called SU(2) (see special unitary group). This is an example of flavour symmetry. In quantum chromodynamics, flavour is a conserved global symmetry. In the electroweak theory, on the other hand, this symmetry is broken, and flavour-changing processes exist, such as quark decay or neutrino oscillations. Flavour quantum numbers Leptons All leptons carry a lepton number L = 1. In addition, leptons carry weak isospin, T3, which is −1/2 for the three charged leptons (i.e.
electron, muon and tau) and +1/2 for the three associated neutrinos. Each doublet of a charged lepton and a neutrino with opposite T3 is said to constitute one generation of leptons. In addition, one defines a quantum number called weak hypercharge, YW, which is −1 for all left-handed leptons. Weak isospin and weak hypercharge are gauged in the Standard Model. Leptons may be assigned the six flavour quantum numbers: electron number, muon number, tau number, and corresponding numbers for the neutrinos (electron neutrino, muon neutrino and tau neutrino). These are conserved in strong and electromagnetic interactions, but violated by weak interactions. Therefore, such flavour quantum numbers are not of great use. A separate quantum number for each generation is more useful: electronic lepton number (+1 for electrons and electron neutrinos), muonic lepton number (+1 for muons and muon neutrinos), and tauonic lepton number (+1 for tau leptons and tau neutrinos). However, even these numbers are not absolutely conserved, as neutrinos of different generations can mix; that is, a neutrino of one flavour can transform into another flavour. The strength of such mixings is specified by a matrix called the Pontecorvo–Maki–Nakagawa–Sakata matrix (PMNS matrix). Quarks All quarks carry a baryon number B = +1/3, and all antiquarks have B = −1/3. They also all carry weak isospin, T3 = ±1/2. The positively charged quarks (up, charm, and top quarks) are called up-type quarks and have T3 = +1/2; the negatively charged quarks (down, strange, and bottom quarks) are called down-type quarks and have T3 = −1/2. Each doublet of up- and down-type quarks constitutes one generation of quarks. For all the quark flavour quantum numbers listed below, the convention is that the flavour charge and the electric charge of a quark have the same sign. Thus any flavour carried by a charged meson has the same sign as its charge. Quarks have the following flavour quantum numbers: The third component of isospin (usually just "isospin") (I3), which has value +1/2 for the up quark and −1/2 for the down quark. Strangeness (S): defined as S = −(ns − ns̄), where ns represents the number of strange quarks (s) and ns̄ represents the number of strange antiquarks (s̄). This quantum number was introduced by Murray Gell-Mann. This definition gives the strange quark a strangeness of −1, for the above-mentioned reason. Charm (C): defined as C = nc − nc̄, where nc represents the number of charm quarks (c) and nc̄ represents the number of charm antiquarks. The charm quark's value is +1. Bottomness (or beauty) (B′): defined as B′ = −(nb − nb̄), where nb represents the number of bottom quarks (b) and nb̄ represents the number of bottom antiquarks. Topness (or truth) (T): defined as T = nt − nt̄, where nt represents the number of top quarks (t) and nt̄ represents the number of top antiquarks. However, because of the extremely short lifetime of the top quark, by the time it can interact strongly it has already decayed to another flavour of quark (usually to a bottom quark). For that reason the top quark does not hadronize, that is, it never forms any meson or baryon. These five quantum numbers, together with baryon number (which is not a flavour quantum number), completely specify the numbers of all six quark flavours separately (as nq − nq̄, i.e., an antiquark is counted with a minus sign). They are conserved by both the electromagnetic and strong interactions (but not the weak interaction).
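Because these quantum numbers are additive over a hadron's valence quarks, they can be tabulated and summed mechanically. A minimal Python sketch of the counting conventions just defined (the hypercharge and electric-charge lines anticipate the standard Gell-Mann–Nishijima relations, which are restated in the next paragraph; the notation with "~" for antiquarks is an invention of this illustration):

QUARKS = {
    # flavour: (B, I3, S, C, B', T); antiquarks negate every entry
    "u": (1/3, +1/2, 0, 0, 0, 0),
    "d": (1/3, -1/2, 0, 0, 0, 0),
    "s": (1/3, 0, -1, 0, 0, 0),
    "c": (1/3, 0, 0, +1, 0, 0),
    "b": (1/3, 0, 0, 0, -1, 0),
    "t": (1/3, 0, 0, 0, 0, +1),
}

def flavour_numbers(quarks):
    # quarks: list like ["u", "u", "d"]; a leading "~" marks an antiquark.
    totals = [0.0] * 6
    for q in quarks:
        anti = q.startswith("~")
        for i, v in enumerate(QUARKS[q.lstrip("~")]):
            totals[i] += -v if anti else v
    B, I3, S, C, Bp, T = totals
    Y = B + S + C + Bp + T      # hypercharge
    Q = I3 + Y / 2.0            # Gell-Mann-Nishijima formula
    return {"B": B, "I3": I3, "S": S, "C": C, "B'": Bp, "T": T, "Y": Y, "Q": Q}

print(flavour_numbers(["u", "u", "d"]))   # proton: B = 1, Q = +1
print(flavour_numbers(["u", "~s"]))       # K+ meson: S = +1, Q = +1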
From them can be built the derived quantum numbers: Hypercharge (Y): Y = B + S + C + B′ + T. Electric charge (Q): Q = I3 + Y/2 (see Gell-Mann–Nishijima formula). The terms "strange" and "strangeness" predate the discovery of the quark, but continued to be used after its discovery for the sake of continuity (i.e. the strangeness of each type of hadron remained the same); the strangeness of antiparticles is referred to as +1, and that of particles as −1, as per the original definition. Strangeness was introduced to explain the rate of decay of newly discovered particles, such as the kaon, and was used in the Eightfold Way classification of hadrons and in subsequent quark models. These quantum numbers are preserved under strong and electromagnetic interactions, but not under weak interactions. For first-order weak decays, that is, processes involving only one quark decay, these quantum numbers (e.g. charm) can only vary by 1: for a decay involving a charmed quark or antiquark either as the incident particle or as a decay byproduct, ΔC = ±1; likewise, for a decay involving a bottom quark or antiquark, ΔB′ = ±1. Since first-order processes are more common than second-order processes (involving two quark decays), this can be used as an approximate "selection rule" for weak decays. A special mixture of quark flavours is an eigenstate of the weak interaction part of the Hamiltonian, so it will interact in a particularly simple way with the W bosons (charged weak interactions violate flavour). On the other hand, a fermion of a fixed mass (an eigenstate of the kinetic and strong interaction parts of the Hamiltonian) is an eigenstate of flavour. The transformation from the former basis to the flavour-eigenstate/mass-eigenstate basis for quarks underlies the Cabibbo–Kobayashi–Maskawa matrix (CKM matrix). This matrix is analogous to the PMNS matrix for neutrinos, and quantifies flavour changes under charged weak interactions of quarks. The CKM matrix allows for CP violation if there are at least three generations. Antiparticles and hadrons Flavour quantum numbers are additive. Hence antiparticles have flavour equal in magnitude to the particle but opposite in sign. Hadrons inherit their flavour quantum numbers from their valence quarks: this is the basis of the classification in the quark model. The relations between the hypercharge, electric charge and other flavour quantum numbers hold for hadrons as well as quarks. Flavour problem The flavour problem (also known as the flavour puzzle) is the inability of current Standard Model flavour physics to explain why the free parameters of particles in the Standard Model have the values they have, and why there are specified values for mixing angles in the PMNS and CKM matrices. These free parameters, the fermion masses and their mixing angles, appear to be specifically tuned. Understanding the reason for such tuning would be the solution to the flavour puzzle. There are very fundamental questions involved in this puzzle, such as why there are three generations of quarks (up-down, charm-strange, and top-bottom) and of leptons (electron, muon, and tau, each with its associated neutrino), as well as how and why the mass and mixing hierarchy arises among different flavours of these fermions. Quantum chromodynamics Quantum chromodynamics (QCD) contains six flavours of quarks. However, their masses differ, and as a result they are not strictly interchangeable with each other. The up and down flavours are close to having equal masses, and the theory of these two quarks possesses an approximate SU(2) symmetry (isospin symmetry).
Chiral symmetry description Under some circumstances (for instance when the quark masses are much smaller than the chiral symmetry breaking scale of 250 MeV), the masses of quarks do not substantially contribute to the system's behavior, and to zeroth approximation the masses of the lightest quarks can be ignored for most purposes, as if they had zero mass. The simplified behavior of flavour transformations can then be successfully modeled as acting independently on the left- and right-handed parts of each quark field. This approximate description of the flavour symmetry is given by the chiral group SU(Nf)L × SU(Nf)R. Vector symmetry description If all quarks have non-zero but equal masses, then this chiral symmetry is broken to the vector symmetry of the "diagonal flavour group" SU(Nf)V, which applies the same transformation to both helicities of the quarks. This reduction of symmetry is a form of explicit symmetry breaking. The strength of explicit symmetry breaking is controlled by the current quark masses in QCD. Even if quarks are massless, chiral flavour symmetry can be spontaneously broken if the vacuum of the theory contains a chiral condensate (as it does in low-energy QCD). This gives rise to an effective mass for the quarks, often identified with the valence quark mass in QCD. Symmetries of QCD Analysis of experiments indicates that the current quark masses of the lighter flavours of quarks are much smaller than the QCD scale, ΛQCD; hence chiral flavour symmetry is a good approximation to QCD for the up, down and strange quarks. The success of chiral perturbation theory and the even more naive chiral models springs from this fact. The valence quark masses extracted from the quark model are much larger than the current quark masses. This indicates that QCD has spontaneous chiral symmetry breaking with the formation of a chiral condensate. Other phases of QCD may break the chiral flavour symmetries in other ways. History Isospin Isospin, strangeness and hypercharge predate the quark model. The first of those quantum numbers, isospin, was introduced as a concept in 1932 by Werner Heisenberg, to explain symmetries of the then newly discovered neutron (symbol n): The masses of the neutron and the proton (symbol p) are almost identical: they are nearly degenerate, and both are thus often referred to as "nucleons", a term that ignores their differences. Although the proton has a positive electric charge, and the neutron is neutral, they are almost identical in all other aspects, and their nuclear binding-force interactions (old name for the residual color force) are so strong compared to the electrical force between them that there is very little point in paying much attention to their differences. The strength of the strong interaction between any pair of nucleons is the same, independent of whether they are interacting as protons or as neutrons. Protons and neutrons were grouped together as nucleons and treated as different states of the same particle, because they both have nearly the same mass and interact in nearly the same way, if the (much weaker) electromagnetic interaction is neglected. Heisenberg noted that the mathematical formulation of this symmetry was in certain respects similar to the mathematical formulation of non-relativistic spin, whence the name "isospin" derives. The neutron and the proton are assigned to the doublet (the spin-1/2, 2, or fundamental representation) of SU(2), with the proton and neutron then being associated with the isospin projections I3 = +1/2 and I3 = −1/2 respectively.
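This doublet structure can be checked with a minimal numerical sketch (an illustration, not part of the original text; it assumes only the standard two-component representation and the Pauli-matrix convention I3 = σz/2):

import numpy as np

# Nucleon doublet: the fundamental (2-dimensional) representation of SU(2)
proton  = np.array([1.0, 0.0])
neutron = np.array([0.0, 1.0])

# Third component of isospin, I3 = sigma_z / 2
I3 = np.array([[0.5,  0.0],
               [0.0, -0.5]])

print(proton @ I3 @ proton)    # +0.5: the proton's isospin projection
print(neutron @ I3 @ neutron)  # -0.5: the neutron's isospin projection

# The raising operator I+ = (sigma_x + i*sigma_y)/2 exchanges flavour,
# turning the neutron state into the proton state
I_plus = np.array([[0.0, 1.0],
                   [0.0, 0.0]])
print(I_plus @ neutron)        # [1. 0.], i.e. the proton state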
The pions are assigned to the triplet (the spin-1, 3, or adjoint representation) of SU(2). Though there is a difference from the theory of spin: the group action does not preserve flavour (in fact, the group action is specifically an exchange of flavour). When constructing a physical theory of nuclear forces, one could simply assume that it does not depend on isospin, although the total isospin should be conserved. The concept of isospin proved useful in classifying hadrons discovered in the 1950s and 1960s (see particle zoo), where particles with similar masses are assigned to an SU(2) isospin multiplet. Strangeness and hypercharge The discovery of strange particles like the kaon led to a new quantum number that was conserved by the strong interaction: strangeness (or equivalently hypercharge). The Gell-Mann–Nishijima formula was identified in 1953; it relates strangeness and hypercharge with isospin and electric charge. The eightfold way and quark model Once the kaons and their property of strangeness became better understood, it started to become clear that these, too, seemed to be a part of an enlarged symmetry that contained isospin as a subgroup. The larger symmetry was named the Eightfold Way by Murray Gell-Mann, and was promptly recognized to correspond to the adjoint representation of SU(3). To better understand the origin of this symmetry, Gell-Mann proposed the existence of up, down and strange quarks, which would belong to the fundamental representation of the SU(3) flavour symmetry. GIM mechanism and charm To explain the observed absence of flavour-changing neutral currents, the GIM mechanism was proposed in 1970; it introduced the charm quark and predicted the J/psi meson. The J/psi meson was indeed found in 1974, which confirmed the existence of charm quarks. This discovery is known as the November Revolution. The flavour quantum number associated with the charm quark became known as charm. Bottomness and topness The bottom and top quarks were predicted in 1973 in order to explain CP violation, which also implied two new flavour quantum numbers: bottomness and topness.
Physical sciences
Quantum numbers
Physics
13046964
https://en.wikipedia.org/wiki/Hippoidea
Hippoidea
Hippoidea is a superfamily of decapod crustaceans known as mole crabs or sand crabs. Ecology Hippoids are adapted to burrowing into sandy beaches, a habit they share with raninid crabs, and the parallel evolution of the two groups is striking. In the family Hippidae, the body is almost ovoid, the first pereiopods have no claws, and the telson is long, none of which are seen in related groups. Unlike most other decapods, sand crabs cannot walk; instead, they use their legs to dig into the sand. Members of the family Hippidae beat their uropods to swim. Apart from the polar regions, hippoids can be found on beaches throughout the world. Larvae of one species have also been found in Antarctic waters, despite the lack of suitable sandy beaches in the Antarctic. Classification Alongside hermit crabs and allies (Paguroidea), squat lobsters and allies (Galatheoidea) and the hairy stone crab (Lomis hirta, Lomisoidea), Hippoidea is one of the four groups that make up the infraorder Anomura. Of the four, Hippoidea is thought to be the most basal, with the other three groups being more closely related to each other than to Hippoidea. The fossil record of sand crabs is sparse, but extends back to the Cretaceous period. Sand crabs are placed in three families (exclusively fossil taxa are marked †): Albuneidae Stimpson, 1858 Albunea Weber, 1795 Austrolepidopa Efford & Haig, 1968 Harryhausenia Boyko, 2004 † Italialbunea Boyko, 2002 † Lepidopa Stimpson, 1858 Leucolepidopa Efford, 1969 Paralbunea Serène, 1977 Paraleucolepidopa Calado, 1996 Praealbunea Fraaije, 2002 † Squillalbunea Boyko, 2002 Stemonopa Efford & Haig, 1968 Zygopa Holthuis, 1961 Blepharipodidae Boyko, 2002 Blepharipoda Randall, 1840 Lophomastix Benedict, 1904 Hippidae Latreille, 1825 Emerita Scopoli, 1777 Hippa Fabricius, 1787 Mastigochirus Miers, 1878
Biology and health sciences
Crabs and hermit crabs
Animals
13049012
https://en.wikipedia.org/wiki/Glider%20%28aircraft%29
Glider (aircraft)
A glider is a fixed-wing aircraft that is supported in flight by the dynamic reaction of the air against its lifting surfaces, and whose free flight does not depend on an engine. Most gliders do not have an engine, although motor-gliders have small engines for extending their flight when necessary by sustaining altitude (normally a sailplane relies on rising air to maintain altitude), with some being powerful enough to take off by self-launch. There is a wide variety of types differing in the construction of their wings, aerodynamic efficiency, location of the pilot, controls and intended purpose. Most exploit meteorological phenomena to maintain or gain height. Gliders are principally used for the air sports of gliding, hang gliding and paragliding. However, some spacecraft have been designed to descend as gliders, and in the past military gliders have been used in warfare. Some simple and familiar types of glider are toys such as paper planes and balsa wood gliders. Etymology Glider is the agent noun form of the verb to glide. It derives from Middle English gliden, which in turn derived from Old English glīdan. The oldest meaning of glide may have denoted a precipitous running or jumping, as opposed to a smooth motion. Scholars are uncertain as to its original derivation, with possible connections to "slide" and "light" having been advanced. History Early pre-modern accounts of flight are in most cases difficult to verify, and it is unclear whether each craft was a glider, kite or parachute and to what degree they were truly controllable. Often the event is only recorded a long time after it allegedly took place. A 17th-century account reports an attempt at flight by the 9th-century poet Abbas Ibn Firnas near Córdoba, Spain, which ended in heavy back injuries. The monk Eilmer of Malmesbury is reported by William of Malmesbury, a fellow monk and historian, to have flown off the roof of his abbey in Malmesbury, England, sometime between 1000 and 1010 AD, gliding some distance before crashing and breaking his legs. According to these reports, both used a set of (feathery) wings, and both blamed their crash on the lack of a tail. Hezârfen Ahmed Çelebi is alleged to have flown a glider with eagle-like wings over the Bosphorus strait from the Galata Tower to the Üsküdar district in Istanbul around 1630–1632. 19th century The first heavier-than-air (i.e. non-balloon) man-carrying aircraft that were based on published scientific principles were Sir George Cayley's series of gliders, which achieved brief wing-borne hops from around 1849. Thereafter gliders were built by pioneers such as Jean Marie Le Bris, John J. Montgomery, Otto Lilienthal, Percy Pilcher, Octave Chanute and Augustus Moore Herring to develop aviation. Lilienthal was the first to make repeated successful flights (eventually totaling over 2,000) and was the first to use rising air to prolong his flight. Using a Montgomery tandem-wing glider launched from a balloon at 4,000 feet, Daniel Maloney was the first to demonstrate high-altitude controlled flight, in 1905. The Wright Brothers developed a series of three manned gliders after preliminary tests with a kite as they worked towards achieving powered flight. They returned to glider testing in 1911 by removing the motor from one of their later designs. Development In the inter-war years, recreational gliding flourished in Germany under the auspices of Rhön-Rossitten.
In the United States, the Schweizer brothers of Elmira, New York, manufactured sport sailplanes to meet the new demand. Sailplanes continued to evolve in the 1930s, and sport gliding has become the main application of gliders. As their performance improved, gliders began to be used to fly cross-country and now regularly fly hundreds or even more than a thousand kilometers in a day, if the weather is suitable. Military gliders were developed during World War II by a number of countries for landing troops. A glider – the Colditz Cock – was even built secretly by POWs as a potential escape method at Oflag IV-C near the end of the war in 1944. Development of flexible-wing hang gliders Foot-launched aircraft had been flown by Lilienthal and at the meetings at the Wasserkuppe in the 1920s. However, the innovation that led to modern hang gliders came in 1951, when Francis Rogallo and Gertrude Rogallo applied for a patent for a fully flexible wing with a stiffening structure. The American space agency NASA began testing various flexible and semi-rigid configurations of this Rogallo wing in 1957 in order to use it as a recovery system for the Gemini space capsules. Charles Richards and Paul Bikle developed the concept, producing a wing that was simple to build and capable of slow flight and a gentle landing. Between 1960 and 1962 Barry Hill Palmer used this concept to make foot-launched hang gliders, followed in 1963 by Mike Burns, who built a kite-hang glider called Skiplane. In 1963, John W. Dickenson began commercial production. Development of paragliders On January 10, 1963, the American Domina Jalbert filed US Patent 3131894 for the Parafoil, which had sectioned cells in an aerofoil shape: an open leading edge and a closed trailing edge, inflated by passage through the air – the ram-air design. The 'Sail Wing' was developed further for the recovery of NASA space capsules by David Barish. Testing was done using ridge lift. After tests on Hunter Mountain, New York in September 1965, he went on to promote "slope soaring" as a summer activity for ski resorts (apparently without great success). NASA originated the term "paraglider" in the early 1960s, and 'paragliding' was first used in the early 1970s to describe the foot-launching of gliding parachutes. Although their use is mainly recreational, unmanned paragliders have also been built for military applications, e.g. the Atair Insect. Recreational types The main application today of glider aircraft is sport and recreation. Sailplane Gliders were developed from the 1920s for recreational purposes. As pilots began to understand how to use rising air, gliders were developed with a high lift-to-drag ratio. These allowed longer glides to the next source of 'lift', and so increased their chances of flying long distances. This gave rise to the popular sport known as gliding, although the term can also be used to refer to merely descending flight. Such gliders designed for soaring are sometimes called sailplanes. Gliders were mainly built of wood and metal, but the majority now use composite materials incorporating glass, carbon and aramid fibres. To minimise drag, these types have a slender fuselage and long narrow wings, i.e. a high aspect ratio. In the beginning, there were huge differences in the appearance of early sailplanes. As technology and materials developed, the aspiration for the perfect balance between lift/drag, climbing ratio and gliding speed made engineers from various producers create similar designs across the world.
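The effect of a high lift-to-drag ratio can be illustrated with a small sketch (illustrative only; the glide-ratio figures below are hypothetical examples, not quoted from this article). In still air, the ground distance a glider can cover is approximately its height multiplied by its lift-to-drag (glide) ratio:

def glide_range_km(height_m, lift_to_drag):
    # Still-air approximation: range = height * (L/D), ignoring wind and rising air
    return height_m * lift_to_drag / 1000.0

# A hypothetical modern sailplane (L/D ~ 40) versus a simple training glider (L/D ~ 10)
print(glide_range_km(1000, 40))  # 40.0 km from a 1000 m release
print(glide_range_km(1000, 10))  # 10.0 km from the same height

This is why a higher lift-to-drag ratio allows "longer glides to the next source of lift", as described above.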
Both single-seat and two-seat gliders are available. Initially, training was done by short 'hops' in primary gliders, which are very basic aircraft with no cockpit and minimal instruments. Since shortly after World War II training has always been done in two-seat dual-control gliders, but high-performance two-seaters are also used to share the workload and the enjoyment of long flights. Originally skids were used for landing, but the majority now land on wheels, often retractable. Some gliders, known as motor gliders, are designed for unpowered flight, but can deploy piston, rotary, jet or electric engines. Gliders are classified by the FAI for competitions into glider competition classes, mainly on the basis of span and flaps. A class of ultralight sailplanes, including some known as microlift gliders and some as 'airchairs', has been defined by the FAI based on a maximum weight. They are light enough to be transported easily, and can be flown without licensing in some countries. Ultralight gliders have performance similar to hang gliders, but offer some additional crash safety, as the pilot can be strapped into an upright seat within a deformable structure. Landing is usually on one or two wheels, which distinguishes these craft from hang gliders. Several commercial ultralight gliders have come and gone, but most current development is done by individual designers and home builders. Hang gliders Unlike a sailplane, a hang glider is capable of being carried, foot-launched and landed solely by the use of the pilot's legs. In the original and still most common designs, Class 1, the pilot is suspended from the center of the flexible wing and controls the aircraft by shifting their weight. Class 2 hang gliders (designated by the FAI as Sub-Class O-2) have a rigid primary structure with movable aerodynamic surfaces, such as spoilers, as the primary method of control. The pilot is often enclosed by means of a fairing. These offer the best performance and are the most expensive. Class 4 hang gliders are unable to demonstrate a consistent ability to safely take off and/or land in nil-wind conditions, but otherwise are capable of being launched and landed by the use of the pilot's legs. Class 5 hang gliders have a rigid primary structure with movable aerodynamic surfaces as the primary method of control and can safely take off and land in nil-wind conditions. No pilot fairings are permitted. In a hang glider the shape of the wing is determined by a structure, and it is this that distinguishes them from the other main type of foot-launched aircraft, paragliders, technically Class 3. Some hang gliders have engines, and are known as powered hang gliders. Due to their commonality of parts, construction and design, they are usually considered by aviation authorities to be hang gliders, even though they may use the engine for the entire flight. Some flexible-wing powered aircraft, ultralight trikes, have a wheeled undercarriage, and so are not hang gliders. Paragliders A paraglider is a free-flying, foot-launched aircraft. The pilot sits in a harness suspended below a fabric wing. Unlike a hang glider, whose wing has a frame, the form of a paraglider wing is created by the pressure of air entering vents or cells in the front of the wing. This is known as a ram-air wing (similar to the smaller parachute design). The paraglider's light and simple design allows it to be packed and carried in a large backpack, and makes it one of the simplest and most economical modes of flight.
Competition-level wings can achieve glide ratios of up to about 10:1 and fly at correspondingly high speeds. Like sailplanes and hang gliders, paragliders use rising air (thermals or ridge lift) to gain height. This process is the basis for most recreational flights and competitions, though aerobatics and 'spot landing' competitions also occur. Launching is often done by jogging down a slope, but winch launches behind a towing vehicle are also used. A paramotor is a paraglider wing powered by a motor attached to the back of the pilot, and is also known as a powered paraglider. A variation of this is the paraplane, which has a motor mounted on a wheeled frame rather than on the pilot's back. Comparison of gliders, hang gliders and paragliders There can be confusion between gliders, hang gliders, and paragliders. Paragliders and hang gliders are both foot-launched glider aircraft, and in both cases the pilot is suspended ("hangs") below the lift surface. "Hang glider" is the term for those where the airframe contains rigid structures, whereas the primary structure of paragliders is supple, consisting mainly of woven material. Military gliders Military gliders were used mainly during the Second World War for carrying troops and heavy equipment (see Glider infantry) to a combat zone, including the British Airspeed Horsa, Russian Polikarpov BDP S-1, American Waco CG-3, Japanese Kokusai Ku-8, and German Junkers Ju 322. These aircraft were towed into the air and most of the way to their target by military transport planes, e.g. the C-47 Dakota, or by bombers that had been relegated to secondary activities, e.g. the Short Stirling. Once released from the tow near the target, they landed as close to the target as possible. Advantages over paratroopers were that heavy equipment could be landed and that the troops were quickly assembled rather than being dispersed over a drop zone. The gliders were treated as disposable, leading to construction from common and inexpensive materials such as wood, though a few were retrieved and re-used. By the time of the Korean War, transport aircraft had also become larger and more efficient, so that even light tanks could be dropped by parachute, causing gliders to fall out of favor. Research aircraft Even after the development of powered aircraft, gliders have been built for research, where the lack of a powerplant reduces complexity and construction costs and speeds development, particularly where new and poorly understood aerodynamic ideas are being tested that might require significant airframe changes. Examples have included delta wings, flying wings, lifting bodies and other unconventional lifting surfaces where existing theories were not sufficiently developed to estimate full-scale characteristics. Unpowered flying wings built for aerodynamic research include the Horten flying wings and the scaled glider version of the Armstrong Whitworth A.W.52 jet-powered flying wing. Lifting bodies were also developed using unpowered prototypes. Although the idea can be dated to Vincent Justus Burnelli in 1921, interest was nearly non-existent until it appeared to be a solution for returning spacecraft. Traditional space capsules have little directional control, while conventionally winged craft cannot handle the stresses of re-entry, whereas a lifting body combines the benefits of both.
The lifting bodies use the fuselage itself to generate lift, without employing the usual thin and flat wing, so as to minimize the drag and structure of a wing for very high supersonic or hypersonic flight, as might be experienced during the re-entry of a spacecraft. Examples of the type are the Northrop HL-10 and Martin Marietta X-24. The NASA Paresev Rogallo flexible-wing glider was built to investigate alternative methods of recovering spacecraft. Although this application was abandoned, publicity inspired hobbyists to adapt the flexible-wing airfoil for modern hang gliders. Rocket gliders Rocket-powered aircraft consume their fuel quickly, and so most must land unpowered unless there is another power source. The first rocket plane was the Lippisch Ente, and later examples include the Messerschmitt Me 163 rocket-powered interceptor. The American series of research aircraft, starting with the Bell X-1 in 1946 up to the North American X-15, spent more time flying unpowered than under power. In the 1960s research was also done on unpowered lifting bodies and on the X-20 Dyna-Soar project, and although the X-20 was cancelled, this research eventually led to the Space Shuttle. NASA's Space Shuttle first flew on April 12, 1981. The Shuttle re-entered at Mach 25 at the end of each spaceflight, landing entirely as a glider. The Space Shuttle and its Soviet equivalent, the Buran shuttle, were by far the fastest ever aircraft. Recent examples of rocket gliders include the privately funded SpaceShipOne, which is intended for sub-orbital flight, and the XCOR EZ-Rocket, which is being used to test engines. Rotary wing Most unpowered rotary-wing aircraft are kites rather than gliders, i.e. they are usually towed behind a car or boat rather than being capable of free flight. These are known as rotor kites. However, rotary-winged gliders, 'gyrogliders', that could descend like an autogyro, using the lift from rotors to reduce the vertical speed, have been investigated. These were evaluated as a method of dropping people or equipment from other aircraft. Unmanned gliders Paper airplane A paper plane, paper aeroplane (UK), paper airplane (US), paper glider, paper dart or dart is a toy aircraft (usually a glider) made out of paper or paperboard; the practice of constructing paper planes is sometimes referred to as aerogami (Japanese: kamihikōki), after origami, the Japanese art of paper folding. Model gliders Model glider aircraft are flying or non-flying models of existing or imaginary gliders, often scaled-down versions of full-size planes, using lightweight materials such as polystyrene, balsa wood, foam and fibreglass. Designs range from simple glider aircraft to accurate scale models, some of which can be very large. Larger outdoor models are usually radio-controlled gliders that are piloted remotely from the ground with a transmitter. These can remain airborne for extended periods by using the lift produced by slopes and thermals. They can be winched into wind by a line attached to a hook under the fuselage with a ring, so that the line will drop when the model is overhead. Other methods of launching include towing aloft using a model powered aircraft, catapult-launching using an elastic bungee cord, and hand-launching. For hand-launching, the newer "discus" style of wing-tip launch has largely supplanted the earlier "javelin" type of launch. Glide bombs A glide bomb is a bomb with aerodynamic surfaces to allow a gliding flightpath rather than a ballistic one.
This allows the bomber aircraft to stand off from the target and launch the bomb from a safe distance. Most types have a remote control system which enables the aircraft to direct the bomb accurately to the target. Glide bombs were developed in Germany from as early as 1915. In World War II they were most successful as anti-shipping weapons. Some air forces today are equipped with gliding devices that can remotely attack airbases with a cluster bomb warhead.
Technology
Aviation
null
4914568
https://en.wikipedia.org/wiki/Busan%20Metro
Busan Metro
The Busan Metro is the urban rail system operated by the Busan Transportation Corporation of Busan, South Korea. The metro network first opened in 1985 with seventeen stations, making Busan the second city in South Korea and third on the Korean Peninsula (after Seoul and Pyongyang) to have a metro system. The Metro itself consists of 4 numbered lines, covering of route and serving 114 stations. Including the BGL and the Donghae Line, the network covers of route and serves 158 stations. All directional signs on the Busan Metro are written in both Korean and English, and the voice announcements in the trains indicating the upcoming station, possible line transfers and the exiting side are all spoken in Korean, followed by English. Station transfer announcements are first in Korean, followed by English, then Mandarin, and finally Japanese. Announcements at stations for arriving trains are in Korean, followed by English, then Japanese and Mandarin. All stations are numbered, and the first numeral of the number is the same as the line number, e.g. station 123 is on Line 1 (a trivial sketch of this rule appears at the end of this section). The Metro map includes information on which station, and which numbered exit from that station, to use for main attractions. Photography in the Busan Metro is permitted. Lines Line 1 Busan Metro Line 1 (1호선) is the north-south route. It is long with 40 stations. The line uses trains that have eight cars each. The total construction cost was 975.1 billion won. Plans for this line were made in 1979. Two years later, in 1981, construction began on the first phase, between Nopo-Dong (now Nopo) and Beomnaegol, which was finished in July 1985. This stretch was long. Further extensions continued southward: an extension from Beomnaegol to Jungang-dong (now Jungang) opened in May 1987; an extension to Seodaeshin-dong (now Seodaeshin) opened in February 1990; and an extension to Shinpyeong opened in June 1994. The extension of the line further into Saha-gu, from Shinpyeong to Dadaepo Beach, was finished in mid-April 2017. Line 2 Busan Metro Line 2 (2호선) crosses Busan from east to west, running along the shores of Haeundae and Gwangalli, and then north toward Yangsan. It is long, serving 43 stations. The line uses trains that have six cars each. Construction on Phase 1 began in 1991, but this route, serving 21 stations between Hopo and Seomyeon, did not open until 30 June 1999. With Phase 2 (planned to be in total), the line was first extended southeast from Seomyeon to Geumnyeonsan on 8 August 2001. The remainder of Phase 2 was implemented in two stages: Line 2 was extended north to Gwangan on 16 January 2002, and finally on 29 August 2002 it was extended east to Jangsan. Phase 3, started in 1998, extends Line 2 north from Hopo further into the city of Yangsan. The phase was originally supposed to add another to the line, with an additional seven stations. On 10 January 2003, Line 2 was extended to the current terminus of Yangsan, but with only three of the originally planned seven stations in operation. Pusan National University Yangsan Campus Station, the fourth station to open in Phase 3, opened on 1 October 2009. The city of Yangsan subsequently gave up on finishing the extension and building the last three stations. In 2014, Munjeon station was renamed Busan International Finance Center–Busan Bank station. An extension of Line 2 towards the eastern extremity of Haeundae-gu is planned. If this extension opens, then 4 new stations will be added to Line 2.
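The station-numbering rule mentioned above can be expressed as a one-line lookup (an illustrative sketch; the function name and the second example number are this example's own):

def line_of_station(station_number):
    # The first digit of a station number is its line number, e.g. 123 -> Line 1
    return int(str(station_number)[0])

print(line_of_station(123))  # 1
print(line_of_station(405))  # 4 (405 used here only as an illustrative number)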
Line 3 Busan Metro Line 3 (3호선) construction began in November 1997. Opening was delayed many times, but Line 3 finally started service on 28 November 2005, with a stretch serving 17 stations. Line 3 uses 4-car trains. The first phase's estimated construction cost was 1,688.6 billion won, with the second phase split off into Line 4. Following the Daegu subway fire in 2003, it was decided during construction to install screen doors on all station platforms on Line 3. This was one of the first lines in Korea and in the world to have screen doors installed at every station. Line 3 significantly improved the metro transportation system by connecting the Suyeong and Yeonsan-dong regions, as well as the Yeonsan-dong and Deokcheon regions. Line 4 Busan Metro Line 4, also called the Bansong Line, is a rubber-tyred metro system that serves north-central and northeastern Busan. The line was originally planned as an extension of Line 3. Using automated guideway transit technology and extending from Minam to Anpyeong, Line 4 includes 14 stations and of route. Originally scheduled to open in 2008, the line opened on 30 March 2011. Of the 14 stations, 8 are underground, 1 is ground-level, and 5 are above ground. Each train operates with 6 cars, though each car on Line 4 is significantly shorter than the cars used on the other lines in the Busan Metro system. Busan–Gimhae LRT (BGL) The Busan–Gimhae Light Rail Transit is a light metro system that connects the city of Busan to the neighboring city of Gimhae. The line opened on 9 September 2011. It is operated by B&G Metro. The line has 21 stations, including two stations, Daejeo and Sasang, where one can transfer to Line 3 and Line 2 respectively. The line serves as inner-city transit for both Busan and Gimhae, an inter-city network linking Gimhae and Busan, and a new way to get to Gimhae International Airport. All of the 21 stations are above ground, and each train has 2 cars. Donghae Line A railway line along the coast is being upgraded for commuter service, with trains every 30 min (15 min peak); it was extended to Taehwagang Station in Ulsan by 2021. Fares A single-ride fare (as of 1 June 2014) is 1300 won for a destination within less than and 1500 won for any other destination. Tickets are sold at ticket vending machines, with most machines accepting 1000-won notes as well as coins. Tickets are to be kept, since they are required to exit the station at the destination, and getting caught "jumping the gate" will result in a hefty fine. The use of a metro pass, either a Hanaro Card (하나로카드) or a Digital Busan Card (디지털부산카드), will offer a fare discount of 10% to adults and 20% to youths aged 13–18. Both the Hanaro and the Digital Busan cards are available in either card format or a more compact, yet slightly more expensive, cell-phone accessory format. The passes are equipped with a microchip and are scanned by laying them against sensor plates at the entrance and exit of stations. This makes them more efficient than magnetic stripe cards, since they can be detected through a wallet or purse. Hanaro Cards are for sale at all stations for 2000 won. All types of passes can have credit added to them at any station at the "Automatic Charge Machine" (교통카드 자동 보충기); the instructions are available in both English and Korean. The passes can also be used to pay for bus fares and for purchases from specially equipped vending machines throughout the city.
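The fare rules above can be summarized in a short illustrative sketch (amounts as of 1 June 2014; the function names and the short_trip flag are this example's own inventions, since the exact distance threshold is not given here):

def single_ride_fare(short_trip):
    # 1300 won for short trips, 1500 won for all other destinations
    return 1300 if short_trip else 1500

def pass_fare(short_trip, rider):
    # Hanaro / Digital Busan card discounts: 10% for adults, 20% for youths aged 13-18
    discount = {"adult": 0.10, "youth": 0.20}[rider]
    return round(single_ride_fare(short_trip) * (1 - discount))

print(single_ride_fare(True))      # 1300
print(pass_fare(True, "adult"))    # 1170
print(pass_fare(False, "youth"))   # 1200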
Proposed improvements and expansions An upgrade to the Gyeongjeon Line is under construction between Bujeon and Masan. The line will have a length of 50 km and 10 stations, and is planned to open in December 2022. As the service will be similar to the Donghae Line, with some characteristics of commuter rail, there are also proposals for these two sections to merge, with a Gyeongjeon-Donghae Line offering service from Masan in Changwon to Taehwagang in Ulsan, passing through Busan. The Donghae Line will be further extended from Taehwagang to Bugulsan, with the extension completed by 2025. Busan Metro Line 5 is a light metro connecting Sasang and Hadan, which is planned for completion in 2023. The line will have 7 stations and a length of 6.9 km. There are further plans for additional expansions of the line to the south-west. A light rail line (Yangsan Metro) that connects Nopo on Line 1 to Yangsan Sports Complex on Line 2 and ends further away in Yangsan is under construction. The line is expected to be completed by 2023. Busan Metro Line 2 will be expanded from Jangsan Station to the East Busan Tourism Complex in Gijang County. DMB service On May 25, 2006, TU Media started to serve the entire metro network with S-DMB service. The S-DMB transmission allows subscribers to receive television and radio on hand-held devices such as cell phones. With an investment of 11 billion won, TU Media installed 530 signal emitters to provide seamless reception throughout the underground system. Network Map
Technology
South Korea
null
4917604
https://en.wikipedia.org/wiki/Corvus
Corvus
Corvus is a widely distributed genus of passerine birds, ranging from medium-sized to large, in the family Corvidae. It includes species commonly known as crows, ravens, and rooks. The species commonly encountered in Europe are the carrion crow, hooded crow, common raven, and rook; those discovered later were named "crow" or "raven" chiefly on the basis of their size, crows generally being smaller. The genus name is Latin for "raven". The 46 or so members of this genus occur on all temperate continents except South America, and on several islands. The Corvus genus makes up a third of the species in the family Corvidae. The members appear to have evolved in Asia from the corvid stock, which had evolved in Australia. The collective name for a group of crows is a "flock" or a "murder". Recent research has found some crow species capable of not only tool use, but also tool construction. Crows are now considered to be among the world's most intelligent animals, with an encephalization quotient equal to that of many non-human primates. Description Medium-to-large species are ascribed to the genus, ranging in size from some small Mexican species to the large common raven and thick-billed raven, which together with the lyrebird represent the larger passerines. These are birds with a robust yet slender appearance, with a small, rounded head and a strong, conical beak, elongated and pointed, with the tip slightly curved downward; the legs are strong and the tail is short and wedge-shaped. The plumage is dominated by shades of black, with some species showing metallic iridescence and others having white or gray areas on the neck or torso. Australian species have light eyes, while the irises of other species are generally dark. Sexual dimorphism is limited. Evolutionary history and systematics The members of the genus Corvus are believed to have evolved in Central Asia and radiated out from there into North America, Africa, Europe, and Australia. The center of diversity of Corvus is within Melanesia, Wallacea, and the island of New Guinea and surrounding islands, with numerous species endemic to islands in the area; other areas with a large number of crow species include South and Southeast Asia, East Africa, and Australia. A high density of endemics is also present in Mexico and the Caribbean. The diversification of Corvus corresponded with a rapid geographic expansion. The radiation of the genus resulted in a rapid expansion of morphological diversity and fast speciation rates, especially around the beginning of the genus' radiation around 10 million years ago. The fossil record of crows is rather dense in Europe, but the relationships among most prehistoric species are not clear. Early Pleistocene fossils of crows indeterminate to the species level are known from the Nihewan Basin of China. The genus was originally described by Carl Linnaeus in his 1758 10th edition of Systema Naturae. The name is derived from the Latin corvus, meaning "raven". The type species is the common raven (Corvus corax); others named by Linnaeus in the same work include the carrion crow (C. corone), hooded crow (C. cornix), rook (C. frugilegus), and two species which have since been moved to other genera, the western jackdaw (now Coloeus monedula) and the Eurasian magpie (now Pica pica). At least 42 extant species are now considered to be members of Corvus, and at least 14 extinct species have been described.
Corvids are found in major cities across the world, and a major increase in the number of crows in urban settings has occurred since the 1900s. Historical records suggest that the population of American crows in North America has been growing steadily since European colonization, spreading east to west with the opening of the frontier. Crows were uncommon in the Pacific Northwest in the 1900s, except in riparian habitats. Populations in the west increased substantially from the late 1800s to the mid-1900s. Crows and ravens spread along with agriculture and urbanization into the western part of North America. Species Behavior Communal roosting Crows gather in large communal roosts numbering between 200 and tens of thousands of individuals during nonbreeding months, particularly in the winter. These gatherings tend to happen near large food sources such as garbage dumps and shopping centers. Play Countless incidents are recorded of corvids at play. Many behaviourists see play as an essential quality in intelligent animals. Calls Crows and the other members of the genus make a wide variety of calls or vocalizations. Crows have also been observed to respond to the calls of other species; presumably, this behavior is learned, because it varies regionally. Crows' vocalizations are complex and poorly understood. Some of the many vocalizations that crows make are a "koww", usually echoed back and forth between birds, a series of "kowws" in discrete units, a long caw followed by a series of short caws (usually made when a bird takes off from a perch), an echo-like "eh-aw" sound, and more. These vocalizations vary by species, and within each species they vary regionally. In many species, the pattern and number of the numerous vocalizations have been observed to change in response to events in the surroundings (e.g. arrival or departure of crows). Foraging Along with other birds, ravens have been known to associate with other animals such as coyotes and wolves. These associations are linked to feeding and hunting. Ravens use their calls to notify these animals when injured prey is near. This interaction is most noticeable in winter, when ravens are associated with wolf packs nearly 100% of the time. As a result of this connection, studies have been conducted on the reaction of prey animals to the call of the raven. In areas where ravens associate with predators, prey animals are more likely to avoid predation by leaving after hearing the call. Crows are also capable of distinguishing between coyotes and wolves and have shown a preference for wolves. This may be because wolves kill larger prey. When hunting, ravens can locate injured animals, like elk, and can call out to wolves to kill them. At times, ravens associate with wolves even when there is no carcass, and can even be seen forming relationships with them. This includes playing with cubs by using sticks, picking at their tails, or flying around them. Ravens have mostly been seen among travelling wolf packs rather than resting wolves, possibly due to the increased likelihood of food. They are also known to trust the wolves in the pack they follow; when encountering a carcass killed by animals other than wolves, they are more hesitant to eat from it. This symbiotic relationship between ravens and wolves is shown to be mutualistic; ravens help wolves find prey, and when the wolves make a kill the ravens can eat too. However, this relationship is not without its faults.
Ravens may sometimes eat more of the prey than the wolf does. This problem has also been linked to wolf pack size, with some researchers suggesting that one of the reasons wolves hunt in larger packs is so that ravens (and other scavengers) get less of the food. Along with contention with wolves, ravens can also bother each other. By feeding off the same carcass, it is possible that some ravens will steal from their conspecifics. This behaviour is related to the ravens' ability to make quick decisions about eating the food immediately or storing it for later, and to their dominance and fighting ability. Intelligence As a group, crows show remarkable examples of intelligence. Natural history books from the 18th century recount an often-repeated but unproven anecdote of "counting crows"—specifically a crow whose ability to count to five (or four in some versions) is established through a logic trap set by a farmer. Crows and ravens often score very highly on intelligence tests. Certain species top the avian IQ scale. Wild hooded crows in Israel have learned to use bread crumbs for bait-fishing. Crows engage in a kind of midair jousting, or aerial "chicken", to establish pecking order. They have been found to engage in activities such as sports, tool use, the ability to hide and store food across seasons, episodic-like memory, and the ability to use individual experience in predicting the behavior of proximal conspecifics. One species, the New Caledonian crow, has also been intensively studied recently because of its ability to manufacture and use tools in the day-to-day search for food. On 5 October 2007, researchers from the University of Oxford presented data acquired by mounting tiny video cameras on the tails of New Caledonian crows. The crows pluck, smooth, and bend twigs and grass stems to procure a variety of foodstuffs. Crows in Queensland have learned how to eat the toxic cane toad by flipping the cane toad on its back and stabbing the throat where the skin is thinner, allowing the crow to access the nontoxic innards; their long beaks ensure that all of the innards can be removed. The western jackdaw and the Eurasian magpie have been found to have a nidopallium about the same relative size as the functionally equivalent neocortex in chimpanzees and humans, and significantly larger than is found in the gibbons. Crows have demonstrated the ability to distinguish individual humans by recognizing facial features. Evidence also suggests they are one of the few nonhuman animals, along with insects like bees and ants, capable of displacement (communication about things that are not immediately present, spatially or temporally). In the Gumyoji Park of Yokohama, Japan, crows have shown the ability both to activate public drinking fountains and to adjust the water flow to appropriate levels for either bathing or drinking. Many studies have been conducted to research the ways in which ravens and corvids learn. Some have concluded that the brains of ravens and crows are comparable in relative size to those of great apes. The encephalization quotient (EQ) helps to show the similarities between a great ape brain and a crow or raven brain. This includes cognitive ability. Though the brains of mammals and birds differ significantly, larger forebrains are seen in corvids than in other birds (except some parrots), especially in areas associated with social learning, planning and decision-making in humans and with complex cognition in apes. Along with tool use, ravens can recognize themselves in a mirror.
This complex cognition can also be extended to socio-cognitive abilities. Studies have been conducted regarding the development and evolution of social abilities in ravens. These results help to show that ravens prefer to form stable relationships with siblings and close social partners as opposed to strangers. The development of social abilities is essential for raven survival, including identifying whether something poses a threat and alerting others nearby of an incoming threat. Diet Crows are omnivorous, and their diets are very diverse. They eat almost any food, including other birds, fruits, nuts, mollusks, earthworms, seeds, frogs, eggs, nestlings, mice, and carrion. The practice of placing scarecrows in grain fields arose from the crow's incessant damaging and scavenging of crops, although crows assist farmers by eating insects otherwise attracted to their crops. Reproduction Crows reach sexual maturity around the age of three years for females and five years for males. Clutch size is around three to nine eggs, and the nesting period lasts between 20 and 40 days. While crows typically mate for life, extra-pair copulation is not unusual, and young from previous years often help nesting pairs protect a nest and feed nestlings. Crow nestlings in urban areas face threats such as entanglement in anthropogenic nesting materials and stunted growth due to poor nutrition. Lifespan and disease Some crows may live to the age of 20, and the oldest known American crow in the wild was almost 30 years old. The oldest documented captive crow died at age 59. The American crow is highly susceptible to the recently introduced North American strain of West Nile virus. American crows typically die within one week of acquiring the disease, and very few survive exposure. Conservation status Two species of crows have been listed as endangered by the U.S. Fish and Wildlife Service: the Hawaiian crow and the Mariana crow. The American crow, despite having had its population reduced by 45% since 1999 by the West Nile virus, is considered a species of least concern. Problems and methods of control Intelligence and social structures make most crow species adaptable and opportunistic. Crows frequently cause damage to crops and property, strew trash, and transfer disease. In densely populated areas around the world, corvids are generally regarded as nuisance animals. Crows are protected in the U.S. under the federal Migratory Bird Treaty Act of 1918, but because of their perceived destructive nature, control of the species is allowed in certain areas. Because of their intelligence, control is often difficult or expensive. Methods for control include hunting, chemical immobilization, harassment and scare tactics, and trapping. Before any measure is used to confine, trap, kill, poison, immobilize, or alter the habits of any wild bird species, a person must check local, state, and federal regulations pertaining to such actions. Hunting In the United States, hunting is allowed under state and federal regulation. Crow hunting is considered a sport in rural areas of the U.S. because the birds are not considered a traditional edible game species. Some cultures do treat various corvid species as a food source. Liability and possible danger to persons and property limit the use of hunting or shooting as control methods in urban areas. Crows' wariness and cunning make harvesting crows in sufficient numbers difficult.
Scare tactics have been the most widely used aversion tactic for crows in areas frequented by humans and domestic animal species. This safe method does not require constant maintenance or manpower to operate or monitor. However, corvids quickly become habituated to most tactics, such as blast cannons, predator decoys, and traditional scarecrows. Greater success has been achieved by adding sound and motion to predator decoys to mimic a distressed crow being caught by a predator such as an owl or hawk. Work is currently being done using multiple aversion techniques in one area. The theory is that multiple techniques used together will confuse the crows, thereby lessening the probability of habituation to stimuli. Trapping Trapping is a rarely used technique in the U.S., but is being used with success in parts of Europe and Australia. The ladder-style trap (e.g., the Australian Crow Trap or Modified Australian Crow Trap) seems to be the most effective of crow-trapping techniques. Ladder traps are constructed in such a way that the unintentional catch of nontarget species is avoided. If a nontarget species is caught, it can be easily released without harm to the bird. The traps are cost-efficient because they are inexpensive and simple to construct, and require little manpower to monitor. The bait used in the traps can also be specific to corvids. Carrion, grains, unshelled raw peanuts, and shiny objects in the trap are effective baits. When removing crows from a ladder trap, one living crow is left as an effective decoy for other crows. Trapping is considered the most humane method for crow removal because the crows can be relocated without harm or stress. However, most wild birds in general have a knack for returning to their home ranges. Other methods Other methods have been used with little or limited success. Lasers have been used successfully to remove large flocks of birds from roost structures in urban areas, but success in keeping crows off roosts has been short-lived. Homeowners can reduce the presence of crows by keeping trash stored in containers, feeding pets indoors, and hanging tin pie-pans or reflective gazing globes around garden areas. As food During the 18th and 19th centuries, crows were hunted for survival by Curonians, a Baltic tribe, when common food was exhausted and changes to the landscape made farming less productive. Fishermen supplemented their diet by gathering coastal bird eggs and preserving crow meat by salting and smoking it. It became a traditional food for poor folk and is documented in the poem "The Seasons" by K. Donelaitis. After the nonhunting policy was lifted by the Prussian government in 1721–1724 and alternative food supplies increased, the practice was forgotten. The tradition re-emerged after World War I; butchered crows, sought after and bought by townsfolk, were common in marketplaces. The hunted crows were not the local birds but the migrating ones; each year during the spring and autumn, crows migrated via the Curonian Spit between Finland and the rest of Europe. In 1943, the government even issued a hunting quota for such activities. Crows were usually caught by attracting them with smoked fish or grains soaked in spirits and then collecting them with nets. It was a job for the elderly or young who were unable to go to sea to fish, and it was common to catch 150 to 200 birds during a hunting day.
Human interaction The common raven and carrion crow have been blamed for killing weak lambs and are often seen eating freshly dead corpses probably killed by other means. The Australian raven has been documented chasing, attacking, and seriously injuring lambs. Rooks have been blamed for eating grain in the UK, and brown-necked ravens for raiding date crops in desert countries. Crows have been shown to have the ability to visually recognize individual humans and to transmit information about "bad" humans by squawking. Crows appear to show appreciation to humans by presenting them with gifts. Cultural depictions In folklore and mythology In Ancient Greece and Rome, myths about crows and jackdaws included the following: An ancient Greek and Roman adage, told by Erasmus, runs, "The swans will sing when the jackdaws are silent," meaning that educated or wise people will speak after the foolish become quiet. The Roman poet Ovid saw the crow as a harbinger of rain (Amores 2,6,34). Pliny noted how the Thessalians, Illyrians, and Lemnians cherished jackdaws for destroying grasshoppers' eggs. The Veneti are fabled to have bribed the jackdaws to spare their crops. Ancient Greek authors tell how a jackdaw, being a social creature, may be caught with a dish of oil into which it falls while looking at its own reflection. In Greek legend, princess Arne was bribed with gold by King Minos of Crete and was punished for her avarice by being transformed into an equally avaricious jackdaw, which still seeks shiny things. In the Bible account at 1 Kings 17:6, ravens are credited with providing Elijah food. In Australian Aboriginal mythology, Crow is a trickster, culture hero, and ancestral being. Legends relating to Crow have been observed in various Aboriginal language groups and cultures across Australia; these commonly include stories relating to Crow's role in the theft of fire, the origin of death, and the killing of Eagle's son. Crows are mentioned often in Buddhism, especially Tibetan disciplines. The Dharmapala (protector of the Dharma) Mahakala is represented by a crow in one of his physical/earthly forms. In the Chaldean Epic of Gilgamesh, Utnapishtim releases a dove and a raven to find land; however, the dove merely circles and returns. Only then does Utnapishtim send forth the raven, which does not return, and Utnapishtim concludes the raven has found land. In Chinese mythology, the world originally had 10 suns, either spiritually embodied as 10 crows or carried by 10 crows; when all 10 decided to rise at once, the effect was devastating to crops, so the gods sent their greatest archer, Houyi, who shot down nine crows and spared only one. In Denmark, the night raven is considered an exorcised spirit. A hole in its left wing denotes where the stake used to exorcise it was driven into the earth. He who looks through the hole will become a night raven himself. In Hinduism, crows are thought of as carriers of information that give omens to people regarding their situations. For example, when a crow crows in front of a person's house, the resident is expected to have special visitors that day. Also, in Hindu literature, crows have great memories, which they use to give information. Symbolism is associated with the crow in the Hindu faith. On a positive note, crows are often associated with the worship of ancestors, because they are believed to embody the souls of the recently deceased. However, many other associations with crows are seen in Hinduism.
Crows are believed to be connected with both the gods and goddesses, particularly the controversial ones such as Sani, the god of the planet Saturn, who uses a crow as his vehicle. In Hindu astrology, it is said that one who has the effect of Sani in their horoscope is angered easily, and may be unable to take control of their future, but is extremely intelligent at the same time. Thus the presence of a crow, the vehicle of Sani, is believed to have similar effects on the homes it lays its eyes on. Whether these effects are positive or negative is a source of debate in Hinduism. Crows are also considered ancestors in Hinduism, and during Śrāddha the practice of offering food or pinda to crows is still in vogue. Crows are associated with Dhumavati, the form of the mother goddess that invokes quarrel and fear. Crows are also fed during the fifteen-day period of Pitru Paksha, which occurs in the autumn season, as an offering and sacrifice to the ancestors. During the time of Pitru Paksha, it is believed that the ancestors descend to Earth from pitra-loka and are able to eat food offered to them by means of a crow. This can also occur during the time of Kumbha, when many Hindus prepare entire vegetarian meals that are eaten solely by the crows and other birds. In Irish mythology, crows are associated with Morrigan, the goddess of war and death. In Islam, the Surat Al-Ma'ida of the Qur'an describes the story of how the crow teaches the son of Adam to cover the dead body of his brother: "Then Allah sent a crow digging a grave in the ground for a dead crow, in order to show him how to bury the corpse of his brother. He cried, 'Alas! Have I even failed to be like this crow and bury the corpse of my brother?' So he became regretful." In Japanese mythology, a three-legged crow called the Yatagarasu is depicted. In Korean mythology, a three-legged crow is known as Samjokgo (hangul: 삼족오; hanja: 三足烏). In Norse mythology, Huginn and Muninn are a pair of common ravens that range over the entire world, Midgard, bringing the god Odin information. In Sweden, ravens are held to be the ghosts of murdered men. In Welsh mythology, the god Brân the Blessed – whose name means "crow" or "raven" – is associated with corvids and death; tradition holds that Bran's severed head is buried under the Tower of London, facing France, a possible genesis for the practice of keeping ravens in the Tower, said to protect the fortunes of Britain. In Cornish folklore, crows – magpies particularly – are associated with death and the "other world", and must be greeted respectfully. The origin of "counting crows" as augury is British; however, the British version rather is to "count magpies", their black-and-white pied colouring alluding to the realms of the living and the dead. In some Native American mythologies, especially those of the Pacific Northwest, the raven is seen as both the Creator of the World and, separately, a trickster god. According to Landnámabók, a mythological account of the discovery of Iceland, Hrafna-Flóki is supposed to have used three ravens to scout for land around 860–870 CE when he came across the island. Experts debate whether the account is historical or mythological. In medieval times, crows were thought to live abnormally long lives. They were also thought to be monogamous throughout their long lives. They were thought to predict the future, anticipate rain and reveal ambushes. Crows were also thought to lead flocks of storks while they crossed the sea to Asia.
In popular culture Literature In Aesop's Fables, the jackdaw embodies stupidity in one tale (starving while waiting for figs on a fig tree to ripen), vanity in another (the jackdaw sought to become king of the birds with borrowed feathers, but was shamed when they fell off), and cunning in yet another (a thirsty crow, knowing that its beak is too short to reach the water in a pitcher and that tipping the pitcher over would spill the water, drops pebbles in so that the water rises within reach). In Ovid's Metamorphoses, retelling Greek myth, the god Apollo became enraged when the crow exposed his lover Coronis' tryst with a mortal, his ire transmuting the crow's feathers from white to black. In the Story of Bhusunda, a chapter of the Yoga Vasistha, a very old sage in the form of a crow, Bhusunda, recalls a succession of epochs in the Earth's history, as described in Hindu cosmology. He survived several destructions of the world, living on a wish-fulfilling tree on Mount Meru. Music Both ravens and crows have commonly featured in the lyrics of heavy metal songs. A 2019 study found that ravens are the birds most frequently mentioned in heavy metal lyrics, with crows fourth (eagles and vultures being second and third).
21009880
https://en.wikipedia.org/wiki/Lupus
Lupus
Lupus, formally called systemic lupus erythematosus (SLE), is an autoimmune disease in which the body's immune system mistakenly attacks healthy tissue in many parts of the body. Symptoms vary among people and may be mild to severe. Common symptoms include painful and swollen joints, fever, chest pain, hair loss, mouth ulcers, swollen lymph nodes, feeling tired, and a red rash, most commonly on the face. Often there are periods of illness, called flares, and periods of remission during which there are few symptoms. People who develop SLE before the age of 18 have a more severe form termed childhood-onset systemic lupus erythematosus. The cause of SLE is not clear. It is thought to involve a combination of genetics and environmental factors. Among identical twins, if one is affected there is a 24% chance the other will also develop the disease. Female sex hormones, sunlight, smoking, vitamin D deficiency, and certain infections are also believed to increase a person's risk. The mechanism involves an immune response by autoantibodies against a person's own tissues. These are most commonly anti-nuclear antibodies, and they result in inflammation. Diagnosis can be difficult and is based on a combination of symptoms and laboratory tests. There are a number of other kinds of lupus erythematosus, including discoid lupus erythematosus, neonatal lupus, and subacute cutaneous lupus erythematosus. There is no cure for SLE, but there are experimental and symptomatic treatments. Treatments may include NSAIDs, corticosteroids, immunosuppressants, hydroxychloroquine, and methotrexate. Although corticosteroids are rapidly effective, long-term use results in side effects. Alternative medicine has not been shown to affect the disease. Men have higher mortality. SLE significantly increases the risk of cardiovascular disease, with this being the most common cause of death. While women with lupus have higher-risk pregnancies, most are successful. Rates of SLE vary between countries from 20 to 70 per 100,000. Women of childbearing age are affected about nine times more often than men. While it most commonly begins between the ages of 15 and 45, a wide range of ages can be affected. Those of African, Caribbean, and Chinese descent are at higher risk than those of European descent. Rates of disease in the developing world are unclear. Lupus is Latin for 'wolf': the disease was so named in the 13th century as the rash was thought to appear like a wolf's bite. Signs and symptoms SLE is one of several diseases known as "the great imitator" because it often mimics or is mistaken for other illnesses. SLE is a classical item in differential diagnosis, because SLE symptoms vary widely and come and go unpredictably. Diagnosis can thus be elusive, with some people having unexplained symptoms of SLE for years before a definitive diagnosis is reached. Common initial and chronic complaints include fever, malaise, joint pains, muscle pains, and fatigue. Because these symptoms are so often seen in association with other diseases, these signs and symptoms are not part of the diagnostic criteria for SLE. When occurring in conjunction with other signs and symptoms, however, they are considered suggestive. While SLE can occur in both males and females, it is found far more often in women, and the symptoms associated with each sex are different. Females tend to have a greater number of relapses, a low white blood cell count, more arthritis, Raynaud syndrome, and psychiatric symptoms.
Males tend to have more seizures, kidney disease, serositis (inflammation of tissues lining the lungs and heart), skin problems, and peripheral neuropathy. Skin As many as 70% of people with lupus have some skin symptoms. The three main categories of lesions are chronic cutaneous (discoid) lupus, subacute cutaneous lupus, and acute cutaneous lupus. People with discoid lupus may exhibit thick, red scaly patches on the skin. Similarly, subacute cutaneous lupus manifests as red, scaly patches of skin but with distinct edges. Acute cutaneous lupus manifests as a rash. Some have the classic malar rash (commonly known as the butterfly rash) associated with the disease. This rash occurs in 30–60% of people with SLE. Hair loss, mouth and nasal ulcers, and lesions on the skin are other possible manifestations. Muscles and bones Medical attention is most commonly sought for joint pain, with the small joints of the hand and wrist usually affected, although all joints are at risk. More than 90 percent of those affected will experience joint or muscle pain at some time during the course of their illness. Unlike rheumatoid arthritis, lupus arthritis is less disabling and usually does not cause severe destruction of the joints. Fewer than ten percent of people with lupus arthritis will develop deformities of the hands and feet. People with SLE are at particular risk of developing osteoarticular tuberculosis. A possible association between rheumatoid arthritis and SLE has been suggested, and SLE may be associated with an increased risk of bone fractures in relatively young women. Blood Anemia is common in children with SLE and develops in about 50% of cases. Low platelet count (thrombocytopenia) and low white blood cell count (leukopenia) may be due to the disease or a side effect of pharmacological treatment. People with SLE may also have antiphospholipid antibody syndrome (a thrombotic disorder), wherein autoantibodies to phospholipids are present in their serum. Abnormalities associated with antiphospholipid antibody syndrome include a paradoxical prolonged partial thromboplastin time (which usually occurs in hemorrhagic disorders) and a positive test for antiphospholipid antibodies; the combination of such findings has earned the term "lupus anticoagulant-positive". Another autoantibody finding in SLE is the anti-cardiolipin antibody, which can cause a false positive test for syphilis. Heart SLE may cause pericarditis (inflammation of the outer lining surrounding the heart), myocarditis (inflammation of the heart muscle), or endocarditis (inflammation of the inner lining of the heart). The endocarditis of SLE is non-infectious and is also called Libman–Sacks endocarditis. It involves either the mitral valve or the tricuspid valve. Atherosclerosis also occurs more often and advances more rapidly than in the general population. Steroids are sometimes prescribed as an anti-inflammatory treatment for lupus; however, they can increase one's risk for heart disease, high cholesterol, and atherosclerosis. Lungs SLE can cause pleuritic pain as well as inflammation of the pleurae known as pleurisy, which can rarely give rise to shrinking lung syndrome involving a reduced lung volume. Other associated lung conditions include pneumonitis, chronic diffuse interstitial lung disease, pulmonary hypertension, pulmonary emboli, and pulmonary hemorrhage. Kidneys Painless passage of blood or protein in the urine may often be the only presenting sign of kidney involvement.
Acute or chronic renal impairment may develop with lupus nephritis, leading to acute or end-stage kidney failure. Because of early recognition and management of SLE with immunosuppressive drugs or corticosteroids, end-stage renal failure occurs in less than 5% of cases, except in the black population, where the risk is many times higher. The histological hallmark of SLE is membranous glomerulonephritis with "wire loop" abnormalities. This finding is due to immune complex deposition along the glomerular basement membrane, leading to a typical granular appearance in immunofluorescence testing. Neuropsychiatric Neuropsychiatric syndromes can result when SLE affects the central or peripheral nervous system. The American College of Rheumatology defines 19 neuropsychiatric syndromes in systemic lupus erythematosus. The diagnosis of neuropsychiatric syndromes concurrent with SLE (now termed NPSLE) is one of the most difficult challenges in medicine, because it can involve so many different patterns of symptoms, some of which may be mistaken for signs of infectious disease or stroke. Headache is a common neurological symptom in people with SLE, although the existence of a specific lupus headache and the optimal approach to headache in SLE cases remain controversial. Other common neuropsychiatric manifestations of SLE include cognitive disorder, mood disorder, cerebrovascular disease, seizures, polyneuropathy, anxiety disorder, psychosis, depression, and in some extreme cases, personality disorders. Steroid psychosis can also occur as a result of treating the disease. SLE can rarely present with intracranial hypertension syndrome, characterized by an elevated intracranial pressure, papilledema, and headache with occasional abducens nerve paresis, absence of a space-occupying lesion or ventricular enlargement, and normal cerebrospinal fluid chemical and hematological constituents. Rarer manifestations are acute confusional state, Guillain–Barré syndrome, aseptic meningitis, autonomic disorder, demyelinating syndrome, mononeuropathy (which might manifest as mononeuritis multiplex), movement disorder (more specifically, chorea), myasthenia gravis, myelopathy, cranial neuropathy, and plexopathy. Neurological disorders contribute to a significant percentage of morbidity and mortality in people with lupus. As a result, the neurological side of lupus is being studied in hopes of reducing morbidity and mortality rates. One aspect of this disease is severe damage to the epithelial cells of the blood–brain barrier. In certain regions, depression affects up to 60% of women with SLE. Eyes Up to one-third of patients report that their eyes are affected. The most common diseases are dry eye syndrome and secondary Sjögren's syndrome, but episcleritis, scleritis, retinopathy (more often affecting both eyes than one), ischemic optic neuropathy, retinal detachment, and secondary angle-closure glaucoma may occur. In addition, the medications used to treat SLE can cause eye disease: long-term glucocorticoid use can cause cataracts and secondary open-angle glaucoma, and long-term hydroxychloroquine treatment can cause vortex keratopathy and maculopathy. Reproductive While most pregnancies have positive outcomes, there is a greater risk of adverse events occurring during pregnancy. SLE causes an increased rate of fetal death in utero and spontaneous abortion (miscarriage). The overall live-birth rate in people with SLE has been estimated to be 72%.
Pregnancy outcome appears to be worse in people with SLE whose disease flares up during pregnancy. Neonatal lupus is the occurrence of SLE symptoms in an infant born from a mother with SLE, most commonly presenting with a rash resembling discoid lupus erythematosus, and sometimes with systemic abnormalities such as heart block or enlargement of the liver and spleen. Neonatal lupus is usually benign and self-limited. Medications for treatment of SLE can carry severe risks for female and male reproduction. Cyclophosphamide (also known as Cytoxan) can lead to infertility by causing premature ovarian insufficiency (POI), the loss of normal function of one's ovaries prior to age forty. Methotrexate can cause termination or deformity in fetuses and is a common abortifacient; for men taking a high dose who plan to father a child, a discontinuation period of six months is recommended before conception. Systemic Fatigue in SLE is probably multifactorial and has been related not only to disease activity or complications such as anemia or hypothyroidism, but also to pain, depression, poor sleep quality, poor physical fitness, and lack of social support. Causes Vitamin D deficiency Some studies have found that vitamin D deficiency (i.e., a low serum level of vitamin D) often occurs in patients with SLE and that its level is particularly low in patients with more active SLE. Furthermore, five studies reported that SLE patients treated with vitamin D had significant reductions in the activity of their disease. However, other studies have found that the levels of vitamin D in SLE are not low, that vitamin D does not reduce SLE activity, and/or that vitamin D levels and responses to vitamin D treatment vary in different patient populations (i.e., vary based on whether the study was conducted on individuals living in Africa or Europe). Because of these conflicting findings, the following middle ground has been proposed for using vitamin D to treat SLE: a) patients with SLE whose 25-hydroxyvitamin D2 plus 25-hydroxyvitamin D3 serum levels are less than 30 ng/ml should be treated with vitamin D to keep these levels at or above 30 ng/ml or, in patients having major SLE-related organ involvement, at 36 to 40 ng/ml; and b) patients with 25-hydroxyvitamin D2 plus 25-hydroxyvitamin D3 levels at or above 30 ng/ml should not be treated with vitamin D unless they have major SLE-related organ involvement, in which case they should be treated to maintain their serum vitamin D levels between 36 and 40 ng/ml. Genetics Studies of identical twins (i.e., twins that develop from the same fertilized egg) and genome-wide association studies have identified numerous genes that can, in rare cases, by themselves promote the development of SLE, particularly childhood-onset SLE (cSLE). These single-gene (also termed monogenic) forms of cSLE (or of a cSLE-like disorder) develop in individuals before they reach 18 years of age. cSLE is typically more severe, and more often lethal, than adult-onset SLE because it often involves SLE-induced neurologic disease, renal failure, and/or the macrophage activation syndrome. Mutations in about 40 genes have been reported to cause cSLE and/or a cSLE-like disease.
These genes include five which, as of February 2024, were classified as inborn errors of immunity genes, i.e., DNASE1L3, TREX1, IFIH1, Tartrate-resistant acid phosphatase, and PRKCD, and 28 other genes, i.e., NEIL3, TMEM173, ADAR1, NRAS, SAMHD1, SOS1, FASLG, the FAS receptor gene, RAG1, RAG2, DNASE1, SHOC2, KRAS, PTPN11, PTEN, BLK, RNASEH2A, RNASEH2B, RNASEH2C, Complement component 1qA, Complement component 1qB, Complement component 1r, Complement component 1s, Complement component 2, Complement component 3, UNC93B1, and the two complement component 4 genes, C4A and C4B. (The C4A and C4B genes code for the complement component 4A and 4B proteins, respectively. These two proteins combine to form the complement component 4 protein, which plays various roles in regulating immune function. Individuals normally have multiple copies of the C4A and C4B genes; those with reduced numbers of one or both genes make low levels of complement component 4 protein and are thereby at risk for developing cSLE or a cSLE-like disorder. Note that mutations in the UNC93B1 gene may cause either cSLE or the chilblain lupus erythematosus form of cSLE.) Mutations in a wide range of other genes do not by themselves cause SLE, but two or more of them may act together, act in concert with environmental factors, or act in some but not other populations (e.g., cause SLE in Chinese but not Europeans) to cause SLE or an SLE-like syndrome, and do so in only a small percentage of cases. The development of a genetically regulated trait or disorder that depends on the inheritance of two or more genes is termed oligogenic or polygenic inheritance. SLE is regarded as a prototype disease due to the significant overlap in its symptoms with other autoimmune diseases. Patients with SLE have higher levels of DNA damage than normal subjects, and several proteins involved in the preservation of genomic stability show polymorphisms, some of which increase the risk for SLE development. Defective DNA repair is a likely mechanism underlying lupus development. Drug-induced SLE Drug-induced lupus erythematosus is a (generally) reversible condition that usually occurs in people being treated for a long-term illness. Drug-induced lupus mimics SLE. However, symptoms of drug-induced lupus generally disappear once the medication that triggered the episode is stopped. While there are no established criteria for diagnosing drug-induced SLE, most authors have agreed on the following definition: the afflicted patient had a sufficient and continuing exposure to the drug, at least one symptom compatible with SLE, no history suggestive of SLE before starting the drug, and resolution of symptoms within weeks or months after stopping intake of the drug. The VigiBase drug safety data repository recorded 12,166 cases of drug-induced SLE between 1968 and 2017. Among the 118 agents implicated, five main classes were most often associated with drug-induced SLE: antiarrhythmic agents such as procainamide or quinidine; antihypertensive agents such as hydralazine, captopril, or acebutolol; antimicrobial agents such as minocycline or isoniazid; anticonvulsants such as carbamazepine or phenytoin; and agents that inhibit the inflammation-inducing actions of interferon or tumor necrosis factor. Non-systemic forms of lupus Discoid (cutaneous) lupus is limited to skin symptoms and is diagnosed by biopsy of the rash on the face, neck, scalp, or arms. Approximately 5% of people with DLE progress to SLE.
Pathophysiology SLE is triggered by environmental factors that remain unknown. In SLE, the body's immune system produces antibodies against self-protein, particularly against proteins in the cell nucleus. These antibody attacks are the immediate cause of SLE. SLE is a chronic inflammatory disease believed to be a type III hypersensitivity response with potential type II involvement. Reticulate and stellate acral pigmentation should be considered a possible manifestation of SLE and high titers of anti-cardiolipin antibodies, or a consequence of therapy. People with SLE have intense polyclonal B-cell activation, with a population shift towards immature B cells. CD27+/IgD− memory B cells are less susceptible to immunosuppression. CD27−/IgD− memory B cells are associated with increased disease activity and renal lupus. T cells, which regulate B-cell responses and infiltrate target tissues, have defects in signaling, adhesion, co-stimulation, gene transcription, and alternative splicing. The cytokines B-lymphocyte stimulator (BLyS), also known as B-cell activating factor (BAFF), interleukin 6, interleukin 17, interleukin 18, type I interferons, and tumor necrosis factor α (TNFα) are involved in the inflammatory process and are potential therapeutic targets. SLE is associated with low C3 levels in the complement system. Cell death signaling Apoptosis is increased in monocytes and keratinocytes. Expression of Fas by B cells and T cells is increased. There are correlations between the apoptotic rates of lymphocytes and disease activity. Necrosis is increased in T lymphocytes. Tingible body macrophages (TBMs) – large phagocytic cells in the germinal centers of secondary lymph nodes – express CD68 protein. These cells normally engulf B cells that have undergone apoptosis after somatic hypermutation. In some people with SLE, significantly fewer TBMs can be found, and these cells rarely contain material from apoptotic B cells. Also, uningested apoptotic nuclei can be found outside of TBMs. This material may present a threat to the tolerization of B cells and T cells. Dendritic cells in the germinal center may endocytose such antigenic material and present it to T cells, activating them. Also, apoptotic chromatin and nuclei may attach to the surfaces of follicular dendritic cells and make this material available for activating other B cells that may have randomly acquired self-protein specificity through somatic hypermutation. Necrosis, a pro-inflammatory form of cell death, is increased in T lymphocytes due to mitochondrial dysfunction, oxidative stress, and depletion of ATP. Clearance deficiency Impaired clearance of dying cells is a potential pathway for the development of this systemic autoimmune disease. This includes deficient phagocytic activity, impaired lysosomal degradation, and scant serum components, in addition to increased apoptosis. SLE is associated with defects in apoptotic clearance and with the damaging effects caused by apoptotic debris. Early apoptotic cells express "eat-me" signals, such as cell-surface exposure of phosphatidylserine, that prompt immune cells to engulf them. Apoptotic cells also express "find-me" signals to attract macrophages and dendritic cells. When apoptotic material is not removed correctly by phagocytes, it is captured instead by antigen-presenting cells, which leads to the development of antinuclear antibodies.
Monocytes isolated from the whole blood of people with SLE show reduced expression of CD44 surface molecules involved in the uptake of apoptotic cells. Most of the monocytes and tingible body macrophages (TBMs), which are found in the germinal centres of lymph nodes, even show a distinctly different morphology; they are smaller or scarce and die earlier. Serum components like complement factors, CRP, and some glycoproteins are, furthermore, decisively important for an efficiently operating phagocytosis. With SLE, these components are often missing, diminished, or inefficient. Macrophages during SLE fail to mature their lysosomes and as a result have impaired degradation of internalized apoptotic debris, which results in chronic activation of Toll-like receptors and permeabilization of the phagolysosomal membrane, allowing activation of cytosolic sensors. In addition, intact apoptotic debris recycles back to the cell membrane and accumulates on the surface of the cell. Recent research has found an association between certain people with lupus (especially those with lupus nephritis) and an impairment in degrading neutrophil extracellular traps (NETs). This impairment was due to DNAse1-inhibiting factors or NET-protecting factors in people's serum, rather than abnormalities in DNAse1 itself. DNAse1 mutations in lupus have so far only been found in some Japanese cohorts. The clearance of early apoptotic cells is an important function in multicellular organisms. If this ability is disturbed, the apoptotic process progresses and finally leads to secondary necrosis of the cells. Necrotic cells release nuclear fragments as potential autoantigens, as well as internal danger signals, inducing maturation of dendritic cells (DCs), since they have lost their membranes' integrity. Increased appearance of apoptotic cells also stimulates inefficient clearance. That leads to the maturation of DCs and also to the presentation of intracellular antigens of late apoptotic or secondary necrotic cells, via MHC molecules. Autoimmunity possibly results from the extended exposure to nuclear and intracellular autoantigens derived from late apoptotic and secondary necrotic cells. B and T cell tolerance for apoptotic cells is abrogated, and the lymphocytes get activated by these autoantigens; inflammation and the production of autoantibodies by plasma cells are initiated. A clearance deficiency in the skin for apoptotic cells has also been observed in people with cutaneous lupus erythematosus (CLE). Germinal centers In healthy conditions, apoptotic lymphocytes are removed in germinal centers (GC) by specialized phagocytes, the tingible body macrophages (TBM), which is why no free apoptotic and potentially autoantigenic material can be seen. In some people with SLE, a buildup of apoptotic debris can be observed in GC because of an ineffective clearance of apoptotic cells. Close to TBM, follicular dendritic cells (FDC) are localised in GC, which attach antigen material to their surface and, in contrast to bone marrow-derived DC, neither take it up nor present it via MHC molecules. Autoreactive B cells can accidentally emerge during somatic hypermutation and migrate into the germinal center light zone. Autoreactive B cells that mature coincidentally normally do not receive survival signals from antigen planted on follicular dendritic cells, and perish by apoptosis. In the case of clearance deficiency, apoptotic nuclear debris accumulates in the light zone of GC and gets attached to FDC.
This serves as a germinal centre survival signal for autoreactive B cells. After migration into the mantle zone, autoreactive B cells require further survival signals from autoreactive helper T cells, which promote the maturation of autoantibody-producing plasma cells and B memory cells. In the presence of autoreactive T cells, a chronic autoimmune disease may be the consequence. Anti-nRNP autoimmunity Anti-nRNP autoantibodies to nRNP A and nRNP C initially targeted restricted, proline-rich motifs. Antibody binding subsequently spread to other epitopes. The similarity and cross-reactivity between the initial targets of nRNP and Sm autoantibodies identifies a likely commonality in cause and a focal point for intermolecular epitope spreading. Others Elevated expression of HMGB1 has been found in the sera of people and mice with systemic lupus erythematosus; high mobility group box 1 (HMGB1) is a nuclear protein participating in chromatin architecture and transcriptional regulation. There is increasing evidence that HMGB1 contributes to the pathogenesis of chronic inflammatory and autoimmune diseases due to its inflammatory and immune-stimulating properties. Diagnosis Laboratory tests Antinuclear antibody (ANA) testing and anti-extractable nuclear antigen (anti-ENA) testing form the mainstay of serologic testing for SLE. ANA testing for lupus is highly sensitive, with the vast majority of individuals with lupus testing positive, but the test is not specific, as a positive result may or may not be indicative of lupus. Several techniques are used to detect ANAs. The most widely used is indirect immunofluorescence (IF). The pattern of fluorescence suggests the type of antibody present in the person's serum. Direct immunofluorescence can detect deposits of immunoglobulins and complement proteins in the skin. When skin not exposed to the sun is tested, a positive direct IF (the so-called lupus band test) is evidence of systemic lupus erythematosus. ANA screening yields positive results in many connective tissue disorders and other autoimmune diseases, and may occur in normal individuals. Subtypes of antinuclear antibodies include anti-Smith and anti-double stranded DNA (anti-dsDNA) antibodies (which are linked to SLE) and anti-histone antibodies (which are linked to drug-induced lupus). Anti-dsDNA antibodies are highly specific for SLE; they are present in 70% of cases, whereas they appear in only 0.5% of people without SLE. Laboratory tests can also help distinguish between closely related connective tissue diseases. A multianalyte panel (MAP) of autoantibodies, including ANA, anti-dsDNA, and anti-Smith, in combination with the measurement of cell-bound complement activation products (CB-CAPs) and an integrated algorithm, has demonstrated 80% diagnostic sensitivity and 86% specificity in differentiating diagnosed SLE from other autoimmune connective tissue diseases. The MAP approach has been further studied in over 40,000 patients tested with either the MAP or the traditional ANA testing strategy (tANA), demonstrating that patients who test MAP positive have up to 6-fold increased odds of receiving a new SLE diagnosis and up to 3-fold increased odds of starting a new SLE medication regimen, as compared to patients testing positive with the tANA approach. Anti-dsDNA antibody titers also tend to reflect disease activity, although not in all cases.
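The interplay of sensitivity, specificity, and prevalence explains why even a highly specific test like anti-dsDNA cannot be interpreted in isolation. The following short calculation is a sketch in Python, applying the figures quoted above (70% sensitivity, 0.5% false-positive rate); the prevalence of 53 per 100,000 is the US estimate quoted in the Epidemiology section and is used here purely for illustration, since pretest probability in a rheumatology clinic would be far higher.

def ppv(sensitivity, false_positive_rate, prevalence):
    # Positive predictive value by Bayes' rule: the fraction of
    # positive test results that are true positives.
    true_pos = sensitivity * prevalence
    false_pos = false_positive_rate * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Anti-dsDNA: present in 70% of SLE cases and 0.5% of people without SLE.
print(f"{ppv(0.70, 0.005, 53 / 100_000):.1%}")  # about 6.9%

At that population prevalence, roughly nine out of ten positive results would come from people without SLE, which is why serologic findings are weighed together with symptoms and other laboratory results rather than read alone.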
Other ANA that may occur in people with SLE are anti-U1 RNP (which also appears in systemic sclerosis and mixed connective tissue disease), SS-A (or anti-Ro) and SS-B (or anti-La), both of which are more common in Sjögren's syndrome. SS-A and SS-B confer a specific risk for heart conduction block in neonatal lupus. Other tests routinely performed in suspected SLE are complement system levels (low levels suggest consumption by the immune system), electrolytes and kidney function (disturbed if the kidney is involved), liver enzymes, and complete blood count. The lupus erythematosus (LE) cell test was commonly used for diagnosis, but it is no longer used because the LE cells are only found in 50–75% of SLE cases and they are also found in some people with rheumatoid arthritis, scleroderma, and drug sensitivities. Because of this, the LE cell test is now performed only rarely and is mostly of historical significance. Diagnostic criteria Some physicians make a diagnosis based on the American College of Rheumatology (ACR) classification criteria. However, these criteria were primarily established for use in scientific research, including selection for randomized controlled trials, which require higher confidence levels. As a result, many people with SLE may not meet the full ACR criteria. Criteria The American College of Rheumatology (ACR) established eleven criteria in 1982, which were revised in 1997, as a classificatory instrument to operationalise the definition of SLE in clinical trials. They were not intended to be used to diagnose individuals and do not do well in that capacity. For the purpose of identifying people for clinical studies, a person has SLE if any 4 out of 11 symptoms are present simultaneously or serially on two separate occasions:
Malar rash (rash on cheeks); sensitivity = 57%; specificity = 96%.
Discoid rash (red, scaly patches on skin that cause scarring); sensitivity = 18%; specificity = 99%.
Serositis: pleurisy (inflammation of the membrane around the lungs) or pericarditis (inflammation of the membrane around the heart); sensitivity = 56%; specificity = 86% (pleural is more sensitive; cardiac is more specific).
Oral ulcers (includes oral or nasopharyngeal ulcers); sensitivity = 27%; specificity = 96%.
Arthritis: nonerosive arthritis of two or more peripheral joints, with tenderness, swelling, or effusion; sensitivity = 86%; specificity = 37%.
Photosensitivity (exposure to ultraviolet light causes rash, or other symptoms of SLE flareups); sensitivity = 43%; specificity = 96%.
Hematologic disorder: hemolytic anemia (low red blood cell count), leukopenia (white blood cell count < 4,000/μL), lymphopenia (< 1,500/μL), or low platelet count (< 100,000/μL) in the absence of an offending drug; sensitivity = 59%; specificity = 89%. Hypocomplementemia is also seen, due either to consumption of C3 and C4 by immune complex-induced inflammation or to congenital complement deficiency, which may predispose to SLE.
Renal disorder: more than 0.5 g per day of protein in urine, or cellular casts seen in urine under a microscope; sensitivity = 51%; specificity = 94%.
Antinuclear antibody test positive; sensitivity = 99%; specificity = 49%.
Immunologic disorder: positive anti-Smith, anti-dsDNA, or antiphospholipid antibody, or a false positive serological test for syphilis; sensitivity = 85%; specificity = 93%. Anti-ssDNA is present in 70% of cases (though it is also positive in rheumatic disease and healthy persons).
Neurologic disorder: seizures or psychosis; sensitivity = 20%; specificity = 98%.
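Because the ACR instrument reduces to a simple threshold over eleven binary findings, the counting rule can be stated compactly. The sketch below, in Python with illustrative criterion names, restates the "4 of 11" logic; it is an illustration of the rule as described above, not a clinical tool.

ACR_CRITERIA = {
    "malar_rash", "discoid_rash", "serositis", "oral_ulcers",
    "arthritis", "photosensitivity", "hematologic_disorder",
    "renal_disorder", "positive_ana", "immunologic_disorder",
    "neurologic_disorder",
}

def meets_acr_classification(findings):
    # `findings` is the set of criteria documented for a person,
    # simultaneously or serially on two separate occasions.
    return len(ACR_CRITERIA & set(findings)) >= 4

# Example: four documented criteria meet the classification threshold.
print(meets_acr_classification(
    {"malar_rash", "arthritis", "positive_ana", "renal_disorder"}))  # True

Note that the sensitivities and specificities quoted above apply to each criterion individually; the composite threshold trades them off to select reasonably homogeneous study populations rather than to diagnose individuals.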
Other than the ACR criteria, people with lupus may also have:
Fever (over 100 °F / 37.7 °C)
Extreme fatigue
Hair loss
Fingers turning white or blue when cold (Raynaud syndrome)
Criteria for individual diagnosis Some people, especially those with antiphospholipid syndrome, may have SLE without four of the above criteria, and SLE may also present with features other than those listed in the criteria. Recursive partitioning has been used to identify more parsimonious criteria. This analysis presented two diagnostic classification trees:
Simplest classification tree: SLE is diagnosed if a person has an immunologic disorder (anti-DNA antibody, anti-Smith antibody, false positive syphilis test, or LE cells) or malar rash; sensitivity = 92%, specificity = 92%.
Full classification tree: uses six criteria; sensitivity = 97%, specificity = 95%.
Other alternative criteria have been suggested, e.g. the St. Thomas' Hospital "alternative" criteria in 1998. Treatment There is no cure for lupus. The treatment of SLE involves preventing flares and reducing their severity and duration when they occur. Treatment can include corticosteroids and anti-malarial drugs. Certain types of lupus nephritis, such as diffuse proliferative glomerulonephritis, require intermittent cytotoxic drugs. These drugs include cyclophosphamide and mycophenolate. Cyclophosphamide increases the risk of developing infections, pancreas problems, high blood sugar, and high blood pressure. Hydroxychloroquine was approved by the FDA for lupus in 1955. Some drugs approved for other diseases are used for SLE 'off-label'. In November 2010, an FDA advisory panel recommended approving belimumab (Benlysta) as a treatment for the pain and flare-ups common in lupus. The drug was approved by the FDA in March 2011. In terms of healthcare utilization and costs, one study found that "patients from the US with SLE, especially individuals with moderate or severe disease, utilize significant healthcare resources and incur high medical costs." Medications Due to the variety of symptoms and organ system involvement with SLE, its severity in an individual must be assessed to successfully treat SLE. Mild or remittent disease may sometimes be safely left untreated. If required, nonsteroidal anti-inflammatory drugs and antimalarials may be used. Medications such as prednisone, mycophenolic acid, and tacrolimus have been used in the past. Disease-modifying antirheumatic drugs Disease-modifying antirheumatic drugs (DMARDs) are used preventively to reduce the incidence of flares, the progress of the disease, and the need for steroid use; when flares occur, they are treated with corticosteroids. DMARDs commonly in use are antimalarials such as hydroxychloroquine and immunosuppressants (e.g. methotrexate and azathioprine). Hydroxychloroquine is an FDA-approved antimalarial used for constitutional, cutaneous, and articular manifestations. Hydroxychloroquine has relatively few side effects, and there is evidence that it improves survival among people who have SLE. Cyclophosphamide is used for severe glomerulonephritis or other organ-damaging complications. Mycophenolic acid is also used for the treatment of lupus nephritis, but it is not FDA-approved for this indication, and the FDA is investigating reports that it may be associated with birth defects when used by pregnant women.
A study involving more than 1,000 people with lupus found a similar risk of serious infection with azathioprine and mycophenolic acid as with newer biological therapies (rituximab and belimumab). Immunosuppressive drugs In more severe cases, medications that modulate the immune system (primarily corticosteroids and immunosuppressants) are used to control the disease and prevent recurrence of symptoms (known as flares). Depending on the dosage, people who require steroids may develop Cushing's syndrome, symptoms of which may include obesity, puffy round face, diabetes mellitus, increased appetite, difficulty sleeping, and osteoporosis. These may subside if and when the large initial dosage is reduced, but long-term use of even low doses can cause elevated blood pressure and cataracts. Numerous new immunosuppressive drugs are being actively tested for SLE. Rather than broadly suppressing the immune system, as corticosteroids do, they target the responses of specific types of immune cells. Some of these drugs are already FDA-approved for the treatment of rheumatoid arthritis; however, due to their high toxicity, their use remains limited. Analgesia Since a large percentage of people with SLE have varying amounts of chronic pain, stronger prescription analgesics (painkillers) may be used if over-the-counter drugs (mainly nonsteroidal anti-inflammatory drugs) do not provide effective relief. Potent NSAIDs such as indomethacin and diclofenac are relatively contraindicated for people with SLE because they increase the risk of kidney failure and heart failure. Pain is typically treated with opioids, varying in potency based on the severity of symptoms. When opioids are used for prolonged periods, drug tolerance, chemical dependency, and addiction may occur. Opiate addiction is not typically a concern, since the condition is unlikely ever to completely disappear; thus, lifelong treatment with opioids is fairly common for chronic pain symptoms, accompanied by the periodic titration typical of any long-term opioid regimen. Intravenous immunoglobulins (IVIGs) Intravenous immunoglobulins may be used to control SLE with organ involvement or vasculitis. It is believed that they reduce antibody production or promote the clearance of immune complexes from the body, even though their mechanism of action is not well understood. Unlike immunosuppressives and corticosteroids, IVIGs do not suppress the immune system, so there is less risk of serious infections with these drugs. Lifestyle changes Avoiding sunlight in SLE is critical, since ultraviolet radiation is known to exacerbate skin manifestations of the disease. Avoiding activities that induce fatigue is also important, since those with SLE fatigue easily and it can be debilitating. These two problems can lead to people becoming housebound for long periods of time. Physical exercise has been shown to help improve fatigue in adults with SLE. Drugs unrelated to SLE should be prescribed only when known not to exacerbate the disease. Occupational exposure to silica, pesticides, and mercury can also worsen the disease. Recommendations for evidence-based non-pharmacological interventions in the management of SLE have been developed by an international task force of clinicians and patients with SLE. Kidney transplantation Kidney transplants are the treatment of choice for end-stage kidney disease, which is one of the complications of lupus nephritis, but recurrence of the full disease occurs in up to 30% of people.
Antiphospholipid syndrome Approximately 20% of people with SLE have clinically significant levels of antiphospholipid antibodies, which are associated with antiphospholipid syndrome. Antiphospholipid syndrome is also related to the onset of neural lupus symptoms in the brain. In this form of the disease, the cause is very different from lupus: thromboses (blood clots or "sticky blood") form in blood vessels, which can prove fatal if they move within the bloodstream. If the thromboses migrate to the brain, they can potentially cause a stroke by blocking the blood supply to the brain. If this disorder is suspected, brain scans are usually required for early detection. These scans can show localized areas of the brain where blood supply has not been adequate. The treatment plan for these people requires anticoagulation. Often, low-dose aspirin is prescribed for this purpose, although for cases involving thrombosis anticoagulants such as warfarin are used. Management of pregnancy While most infants born to mothers who have SLE are healthy, pregnant mothers with SLE should remain under medical care until delivery. SLE in the pregnant mother poses a higher risk of neonatal lupus, intrauterine growth restriction, preterm membrane rupture, preterm birth, and miscarriage. Neonatal lupus is rare, but identification of mothers at the highest risk for complications allows for prompt treatment before or after birth. In addition, SLE can flare up during pregnancy, and proper treatment can maintain the health of the mother longer. Pregnant women known to have anti-Ro (SSA) or anti-La (SSB) antibodies often have echocardiograms during the 16th and 30th weeks of pregnancy to monitor the health of the heart and surrounding vasculature. Contraception and other reliable forms of pregnancy prevention are routinely advised for women with SLE, since getting pregnant during active disease has been found to be harmful. Lupus nephritis, gestational diabetes, and pre-eclampsia are common complications. Prognosis No cure is available for SLE, but there are many treatments for the disease. In the 1950s, most people diagnosed with SLE lived fewer than five years. Today, over 90% survive for more than ten years, and many live relatively symptom-free; 80–90% can expect to live a normal lifespan. Mortality rates are, however, elevated compared to people without SLE. Prognosis is typically worse for men and children than for women; however, if symptoms are present after age 60, the disease tends to run a more benign course. Early mortality, within five years, is due to organ failure or overwhelming infections, both of which can be altered by early diagnosis and treatment. In the late stages, the mortality risk is fivefold that of the normal population, largely attributable to cardiovascular disease from accelerated atherosclerosis, the leading cause of death for people with SLE. To reduce the potential for cardiovascular issues, high blood pressure and high cholesterol should be prevented or treated aggressively. Steroids should be used at the lowest dose for the shortest possible period, and other drugs that can reduce symptoms should be used whenever possible. Epidemiology The global rates of SLE are approximately 20–70 per 100,000 people. In females, the rate is highest between 45 and 64 years of age. The lowest overall rates exist in Iceland and Japan; the highest in the US and France.
However, there is not sufficient evidence to conclude why SLE is less common in some countries than in others; it could be the environmental variability in these countries. For example, different countries receive different levels of sunlight, and exposure to UV rays affects dermatological symptoms of SLE. Certain studies hypothesize that a genetic connection exists between race and lupus which affects disease prevalence. If this is true, the racial composition of countries affects disease and will cause the incidence in a country to change as the racial makeup changes. To understand if this is true, countries with largely homogeneous and racially stable populations should be studied to better understand incidence. Rates of disease in the developing world are unclear. The rate of SLE varies between countries, ethnicities, and sexes, and changes over time. In the United States, one estimate of the rate of SLE is 53 per 100,000; another estimate places the total affected population at 322,000 to over 1 million (98 to over 305 per 100,000). In Northern Europe the rate is about 40 per 100,000 people. SLE occurs more frequently and with greater severity among those of non-European descent. The rate has been found to be as high as 159 per 100,000 among those of Afro-Caribbean descent. Childhood-onset systemic lupus erythematosus generally presents between the ages of 3 and 15 and is four times more common in girls. While the onset and persistence of SLE can show disparities between genders, socioeconomic status also plays a major role. Women with SLE and of lower socioeconomic status have been shown to have higher depression scores, higher body mass index, and more restricted access to medical care than women of higher socioeconomic status with the illness. People with SLE had more self-reported anxiety and depression scores if they were from a lower socioeconomic status. Race There are assertions that race affects the rate of SLE. However, a 2010 review of studies that correlate race and SLE identified several sources of systematic and methodological error, indicating that the connection between race and SLE may be spurious. For example, studies show that social support is a modulating factor which buffers against SLE-related damage and maintains physiological functionality. Studies have not been conducted to determine whether people of different racial backgrounds receive differing levels of social support. If there is a difference, this could act as a confounding variable in studies correlating race and SLE. Another caveat to note when examining studies about SLE is that symptoms are often self-reported. This process introduces additional sources of methodological error. Studies have shown that self-reported data is affected by more than just the patient's experience with the disease: social support, the level of helplessness, and abnormal illness-related behaviors also factor into a self-assessment. Additionally, other factors like the degree of social support that a person receives, socioeconomic status, health insurance, and access to care can contribute to an individual's disease progression. Racial differences in lupus progression have not been found in studies that control for the socioeconomic status (SES) of participants. Studies that control for participants' SES have found that non-white people have more abrupt disease onset compared to white people and that their disease progresses more quickly.
Non-white patients often report more hematological, serosal, neurological, and renal symptoms. However, the severity of symptoms and mortality are both similar in white and non-white patients. Studies that report different rates of disease progression in late-stage SLE are most likely reflecting differences in socioeconomic status and the corresponding access to care. The people who receive medical care have often accrued less disease-related damage and are less likely to be below the poverty line. Additional studies have found that education, marital status, occupation, and income create a social context that affects disease progression. Sex SLE, like many autoimmune diseases, affects females more frequently than males, at a rate of about 9 to 1. Hormonal mechanisms could explain the increased incidence of SLE in females. The onset of SLE could be attributed to the elevated hydroxylation of estrogen and the abnormally decreased levels of androgens in females. In addition, differences in GnRH signalling have also been shown to contribute to the onset of SLE. While females are more likely to relapse than males, the intensity of these relapses is the same for both sexes. In addition to hormonal mechanisms, specific genetic influences found on the X chromosome may also contribute to the development of SLE. The X chromosome carries immunologic genes like CD40L, which can mutate or simply escape silencing by X-chromosome inactivation and contribute to the onset of SLE. A study has shown an association between Klinefelter syndrome and SLE. XXY males with SLE have an abnormal X–Y translocation resulting in the partial triplication of the PAR1 gene region. Research has also implicated XIST, which encodes a long non-coding RNA that coats the inactive member of the pair of X chromosomes in females as part of a ribonucleoprotein complex, as a source of autoimmunity. Changing rate of disease The rate of SLE in the United States increased from 1.0 per 100,000 in 1955 to 7.6 per 100,000 in 1974. Whether the increase is due to better diagnosis or an increased frequency of the disease is unknown. History The history of SLE can be divided into three periods: classical, neoclassical, and modern. In each period, research and documentation advanced the understanding and diagnosis of SLE, leading to its classification as an autoimmune disease in 1851, and to the various diagnostic options and treatments now available to people with SLE. The advances made by medical science in the diagnosis and treatment of SLE have dramatically improved the life expectancy of a person diagnosed with SLE. Etymology There are several explanations ventured for the term lupus erythematosus. Lupus is Latin for "wolf", and in Medieval Latin was also used to refer to a disease of the skin, while "erythematosus" is derived from erythema, Ancient Greek for "redness of the skin". All explanations originate with the reddish, butterfly-shaped malar rash that the disease classically exhibits across the nose and cheeks. The reason the term lupus was used to describe this disease comes from the mid-19th century. Many diseases that caused ulceration or necrosis were given the term "lupus" due to the wound being reminiscent of a wolf's bite. This is similar to the naming of lupus vulgaris, or chronic facial tuberculosis, where the lesions are ragged and punched out and are said to resemble the bite of a wolf. Classical period The classical period began when the disease was first recognized in the Middle Ages.
The term lupus is attributed to the 12th-century Italian physician Rogerius Frugard, who used it to describe ulcerating sores on the legs of people. No formal treatment for the disease existed, and the resources available to physicians to help people were limited. Neoclassical period The neoclassical period began in 1851 when the skin disease now known as discoid lupus was documented by the French physician Pierre Cazenave. Cazenave termed the illness lupus and added the word erythematosus to distinguish this disease from other illnesses that affected the skin but were infectious. Cazenave observed the disease in several people and made very detailed notes to assist others in its diagnosis. He was one of the first to document that lupus affected adults from adolescence into the early thirties and that the facial rash is its most distinguishing feature. Research and documentation of the disease continued in the neoclassical period with the work of Ferdinand von Hebra and his son-in-law, Moritz Kaposi. They documented the physical effects of lupus as well as some insights into the possibility that the disease caused internal trauma. Von Hebra observed that lupus symptoms could last many years and that the disease could go "dormant" after years of aggressive activity and then reappear with symptoms following the same general pattern. These observations led Hebra to term lupus a chronic disease in 1872. Kaposi observed that lupus assumed two forms: the skin lesions (now known as discoid lupus) and a more aggravated form that affected not only the skin but also caused fever, arthritis, and other systemic disorders in people. The latter also presented a rash confined to the face, appearing on the cheeks and across the bridge of the nose; he called this the "butterfly rash". Kaposi also observed that those patients who developed the butterfly rash were often afflicted with another disease such as tuberculosis, anemia, or chlorosis, which often caused death. Kaposi was one of the first people to recognize what is now termed systemic lupus erythematosus in his documentation of the remitting and relapsing nature of the disease and the relationship of skin and systemic manifestations during disease activity. The 19th century's research into lupus continued with the work of Sir William Osler who, in 1895, published the first of his three papers about the internal complications of erythema exudativum multiforme. Not all the patient cases in his paper had SLE, but Osler's work expanded the knowledge of systemic diseases and documented extensive and critical visceral complications for several diseases including lupus. Noting that many people with lupus had a disease that affected not only the skin but many other organs in the body as well, Osler added the word "systemic" to the term lupus erythematosus to distinguish this type of disease from discoid lupus erythematosus. Osler's second paper noted that recurrence is a special feature of the disease and that attacks can be sustained for months or even years. Further study of the disease led to a third paper, published in 1903, documenting afflictions such as arthritis, pneumonia, the inability to form coherent ideas, delirium, and central nervous system damage as all affecting patients diagnosed with SLE. Modern period The modern period, beginning in 1920, saw major developments in research into the cause and treatment of discoid and systemic lupus.
Research conducted in the 1920s and 1930s led to the first detailed pathologic descriptions of lupus and demonstrated how the disease affected the kidney, heart, and lung tissue. A breakthrough was made in 1948 with the discovery of the LE cell (the lupus erythematosus cell – a misnomer, as it occurs with other diseases as well). A team of researchers at the Mayo Clinic discovered that certain white blood cells contained the nucleus of another cell pushing against the white cell's own nucleus. Noting that the invading nucleus was coated with antibody that allowed it to be ingested by a phagocytic or scavenger cell, they named the antibody that causes one cell to ingest another the LE factor, and the cell with two nuclei the LE cell. The LE cell, it was determined, was a part of an anti-nuclear antibody (ANA) reaction; the body produces antibodies against its own tissue. This discovery led to one of the first definitive tests for lupus, since LE cells are found in approximately 60% of all people diagnosed with lupus. The LE cell test is rarely performed as a definitive lupus test today, as LE cells do not always occur in people with SLE and can occur in individuals with other autoimmune diseases. Their presence can help establish a diagnosis but no longer indicates a definitive SLE diagnosis. The discovery of the LE cell led to further research, and this resulted in more definitive tests for lupus. Building on the knowledge that those with SLE had auto-antibodies that would attach themselves to the nuclei of normal cells, causing the immune system to send white blood cells to fight off these "invaders", a test was developed to look for the anti-nuclear antibody (ANA) rather than the LE cell specifically. This ANA test was easier to perform and led not only to a definitive diagnosis of lupus but also of many other related diseases. This discovery led to the understanding of what are now known as autoimmune diseases. To ensure that the person has lupus and not another autoimmune disease, the American College of Rheumatology (ACR) established a list of clinical and immunologic criteria that, in any combination, point to SLE. The criteria include symptoms that the person can identify (e.g. pain) and things that a physician can detect in a physical examination and through laboratory test results. The list was originally compiled in 1971, initially revised in 1982, and further revised and improved in 2009. Medical historians have theorized that people with porphyria (a disease that shares many symptoms with SLE) generated folklore stories of vampires and werewolves, due to the photosensitivity, scarring, hair growth, and brownish-red, porphyrin-stained teeth seen in severe recessive forms of porphyria (or combinations of the disorder, known as dual, homozygous, or compound heterozygous porphyrias). Useful medication for the disease was first found in 1894, when quinine was reported as an effective therapy. Four years later, the use of salicylates in conjunction with quinine was noted to be of still greater benefit. This was the best available treatment until the middle of the twentieth century, when Hench discovered the efficacy of corticosteroids in the treatment of SLE. Research A study called BLISS-76 tested the drug belimumab, a fully human monoclonal anti-BAFF (or anti-BLyS) antibody. BAFF stimulates and extends the life of B lymphocytes, which produce antibodies against foreign and self-protein. It was approved by the FDA in March 2011.
Genetically engineered immune cells are also being studied in animal models of the disease as of 2019. In September 2022, researchers at the University of Erlangen-Nuremberg published promising results using genetically altered immune cells to treat severely ill patients. Four women and one man received transfusions of CAR T cells modified to attack their B cells, eliminating the aberrant ones. The therapy drove the disease into remission in all five patients, who remained off lupus medication for several months after the treatment ended. Famous cases Shannon Boxx, U.S. Olympic team soccer player Nick Cannon, American television host, actor, rapper, and comedian Pumpuang Duangjan, Thai Luk Thung singer Selena Gomez, singer, actress, producer, and businesswoman Sally Hawkins, actress Flannery O'Connor, Southern Gothic novelist and short-story author Michael Jackson, American singer, songwriter, dancer and philanthropist Seal, British singer
Biology and health sciences
Specific diseases
Health
21009963
https://en.wikipedia.org/wiki/Meningitis
Meningitis
Meningitis is acute or chronic inflammation of the protective membranes covering the brain and spinal cord, collectively called the meninges. The most common symptoms are fever, intense headache, vomiting, and neck stiffness. Other symptoms include confusion or altered consciousness, nausea, and an inability to tolerate light or loud noises. Young children often exhibit only nonspecific symptoms, such as irritability, drowsiness, or poor feeding. A non-blanching rash (a rash that does not fade when a glass is rolled over it) may also be present. The inflammation may be caused by infection with viruses, bacteria, fungi or parasites. Non-infectious causes include malignancy (cancer), subarachnoid hemorrhage, chronic inflammatory disease (such as sarcoidosis) and certain drugs. Meningitis can be life-threatening because of the inflammation's proximity to the brain and spinal cord; therefore, the condition is classified as a medical emergency. A lumbar puncture, in which a needle is inserted into the spinal canal to collect a sample of cerebrospinal fluid (CSF), can diagnose or exclude meningitis. Some forms of meningitis are preventable by immunization with the meningococcal, mumps, pneumococcal, and Hib vaccines. Giving antibiotics to people with significant exposure to certain types of meningitis may also be useful for preventing transmission. The first treatment in acute meningitis consists of promptly giving antibiotics and sometimes antiviral drugs. Corticosteroids can be used to prevent complications from excessive inflammation. Meningitis can lead to serious long-term consequences such as deafness, epilepsy, hydrocephalus, or cognitive deficits, especially if not treated quickly. In 2019, meningitis was diagnosed in about 7.7 million people worldwide, of whom 236,000 died, down from 433,000 deaths in 1990. With appropriate treatment, the risk of death in bacterial meningitis is less than 15%. Outbreaks of bacterial meningitis occur between December and June each year in an area of sub-Saharan Africa known as the meningitis belt. Smaller outbreaks may also occur in other areas of the world. The word meningitis comes from the Greek μῆνιγξ (meninx), 'membrane', and the medical suffix -itis, 'inflammation'. Signs and symptoms Clinical features In adults, the most common symptom of meningitis is a severe headache, occurring in almost 90% of cases of bacterial meningitis, followed by neck stiffness (the inability to flex the neck forward passively due to increased neck muscle tone and stiffness). The classic triad of diagnostic signs consists of neck stiffness, sudden high fever, and altered mental status; however, all three features are present in only 44–46% of bacterial meningitis cases. If none of the three signs are present, acute meningitis is extremely unlikely. Other signs commonly associated with meningitis include photophobia (intolerance to bright light) and phonophobia (intolerance to loud noises). Small children often do not exhibit the aforementioned symptoms, and may only be irritable and look unwell. The fontanelle (the soft spot on the top of a baby's head) can bulge in infants aged up to 6 months. Other features that distinguish meningitis from less severe illnesses in young children are leg pain, cold extremities, and an abnormal skin color. Neck stiffness occurs in 70% of cases of bacterial meningitis in adults. Other signs include the presence of a positive Kernig's sign or Brudzinski's sign.
Kernig's sign is assessed with the person lying supine, with the hip and knee flexed to 90 degrees. In a person with a positive Kernig's sign, pain limits passive extension of the knee. A positive Brudzinski's sign occurs when flexion of the neck causes involuntary flexion of the knee and hip. Although Kernig's sign and Brudzinski's sign are both commonly used to screen for meningitis, the sensitivity of these tests is limited. They do, however, have very good specificity for meningitis: the signs rarely occur in other diseases. Another test, known as the "jolt accentuation maneuver", helps determine whether meningitis is present in those reporting fever and headache. The person is asked to rapidly rotate the head horizontally; if this does not make the headache worse, meningitis is unlikely. Other problems can produce symptoms similar to those above, but from non-meningitic causes. This is called meningism or pseudomeningitis. Meningitis caused by the bacterium Neisseria meningitidis (known as "meningococcal meningitis") can be differentiated from meningitis with other causes by a rapidly spreading petechial rash, which may precede other symptoms. The rash consists of numerous small, irregular purple or red spots ("petechiae") on the trunk, lower extremities, mucous membranes, conjunctiva, and (occasionally) the palms of the hands or soles of the feet. The rash is typically non-blanching; the redness does not disappear when pressed with a finger or a glass tumbler. Although this rash is not necessarily present in meningococcal meningitis, it is relatively specific for the disease; it does, however, occasionally occur in meningitis due to other bacteria. Other clues to the cause of meningitis may be the skin signs of hand, foot and mouth disease and genital herpes, both of which are associated with various forms of viral meningitis. Early complications Additional problems may occur in the early stage of the illness. These may require specific treatment, and sometimes indicate severe illness or a worse prognosis. The infection may trigger sepsis, a systemic inflammatory response syndrome of falling blood pressure, fast heart rate, high or abnormally low temperature, and rapid breathing. Very low blood pressure may occur at an early stage, especially but not exclusively in meningococcal meningitis; this may lead to insufficient blood supply to other organs. Disseminated intravascular coagulation, the excessive activation of blood clotting, may obstruct blood flow to organs and paradoxically increase the bleeding risk. Gangrene of limbs can occur in meningococcal disease. Severe meningococcal and pneumococcal infections may result in hemorrhaging of the adrenal glands, leading to Waterhouse-Friderichsen syndrome, which is often fatal. The brain tissue may swell, pressure inside the skull may increase and the swollen brain may herniate through the skull base. This may be noticed by a decreasing level of consciousness, loss of the pupillary light reflex, and abnormal posturing. The inflammation of the brain tissue may also obstruct the normal flow of CSF around the brain (hydrocephalus). Seizures may occur for various reasons; in children, seizures are common in the early stages of meningitis (in 30% of cases) and do not necessarily indicate an underlying cause. Seizures may result from increased pressure and from areas of inflammation in the brain tissue.
Focal seizures (seizures that involve one limb or part of the body), persistent seizures, late-onset seizures and those that are difficult to control with medication indicate a poorer long-term outcome. Inflammation of the meninges may lead to abnormalities of the cranial nerves, a group of nerves arising from the brain stem that supply the head and neck area and which control, among other functions, eye movement, facial muscles, and hearing. Visual symptoms and hearing loss may persist after an episode of meningitis. Inflammation of the brain (encephalitis) or its blood vessels (cerebral vasculitis), as well as the formation of blood clots in the veins (cerebral venous thrombosis), may all lead to weakness, loss of sensation, or abnormal movement or function of the part of the body supplied by the affected area of the brain. Causes Meningitis is typically caused by an infection. Most infections are due to viruses, with others due to bacteria, fungi, and parasites. The parasites responsible are mostly parasitic worms, though rarely parasitic amoebae are implicated. Meningitis may also result from various non-infectious causes. The term aseptic meningitis refers to cases of meningitis in which no bacterial infection can be demonstrated. This type of meningitis is usually caused by viruses, but it may be due to bacterial infection that has already been partially treated, when bacteria disappear from the meninges, or when pathogens infect a space adjacent to the meninges (such as sinusitis). Endocarditis (an infection of the heart valves which spreads small clusters of bacteria through the bloodstream) may cause aseptic meningitis. Aseptic meningitis may also result from infection with spirochetes, a group of bacteria that includes Treponema pallidum (the cause of syphilis) and Borrelia burgdorferi (known for causing Lyme disease), and may also result from cerebral malaria (malaria infecting the brain). Bacterial The types of bacteria that cause bacterial meningitis vary according to the infected individual's age group. In premature babies and newborns up to three months old, common causes are group B streptococci (subtype III, which normally inhabits the vagina and is mainly a cause during the first week of life) and bacteria that normally inhabit the digestive tract, such as Escherichia coli (carrying the K1 antigen). Listeria monocytogenes (serotype IVb) can be contracted when consuming improperly prepared food such as dairy products, produce and deli meats, and may cause meningitis in the newborn. Older children are more commonly affected by Neisseria meningitidis (meningococcus) and Streptococcus pneumoniae (serotypes 6, 9, 14, 18 and 23), and those under five by Haemophilus influenzae type B (in countries that do not offer vaccination). In adults, Neisseria meningitidis and Streptococcus pneumoniae together cause 80% of bacterial meningitis cases. The risk of infection with Listeria monocytogenes is increased in people over 50 years old. The introduction of the pneumococcal vaccine has lowered rates of pneumococcal meningitis in both children and adults. A head injury potentially allows nasal cavity bacteria to enter the meningeal space. Similarly, devices in the brain and meninges, such as cerebral shunts, extraventricular drains or Ommaya reservoirs, carry an increased risk of meningitis. In these cases, people are more likely to be infected with staphylococci, Pseudomonas, and other Gram-negative bacteria. These pathogens are also associated with meningitis in people with an impaired immune system.
An infection in the head and neck area, such as otitis media or mastoiditis, can lead to meningitis in a small proportion of people. Recipients of cochlear implants for hearing loss are at greater risk for pneumococcal meningitis. In rare cases, Enterococcus spp. can be responsible for meningitis, both community- and hospital-acquired, usually as a secondary result of trauma or surgery, or due to intestinal diseases (e.g., strongyloidiasis). Tuberculous meningitis, which is meningitis caused by Mycobacterium tuberculosis, is more common in people from countries in which tuberculosis is endemic, but is also encountered in people with immune problems, such as AIDS. Recurrent bacterial meningitis may be caused by persisting anatomical defects, either congenital or acquired, or by disorders of the immune system. Anatomical defects allow continuity between the external environment and the nervous system. The most common cause of recurrent meningitis is a skull fracture, particularly fractures that affect the base of the skull or extend towards the sinuses and petrous pyramids. Approximately 59% of recurrent meningitis cases are due to such anatomical abnormalities, 36% are due to immune deficiencies (such as complement deficiency, which predisposes especially to recurrent meningococcal meningitis), and 5% are due to ongoing infections in areas adjacent to the meninges. Viral Viruses that cause meningitis include enteroviruses, herpes simplex virus (generally type 2, which produces most genital sores; less commonly type 1), varicella zoster virus (known for causing chickenpox and shingles), mumps virus, HIV, LCMV, arboviruses (acquired from a mosquito or other insect), and the influenza virus. Mollaret's meningitis is a chronic recurrent form of herpes meningitis; it is thought to be caused by herpes simplex virus type 2. Fungal There are a number of risk factors for fungal meningitis, including the use of immunosuppressants (such as after organ transplantation), HIV/AIDS, and the loss of immunity associated with aging. It is uncommon in those with a normal immune system but has occurred with medication contamination. Symptom onset is typically more gradual, with headaches and fever being present for at least a couple of weeks before diagnosis. The most common fungal meningitis is cryptococcal meningitis due to Cryptococcus neoformans. In Africa, cryptococcal meningitis is now the most common cause of meningitis in multiple studies, and it accounts for 20–25% of AIDS-related deaths in Africa. Other less common pathogenic fungi which can cause meningitis include: Coccidioides immitis, Histoplasma capsulatum, Blastomyces dermatitidis, and Candida species. Parasitic A parasitic worm is often assumed to be the cause of eosinophilic meningitis when there is a predominance of eosinophils (a type of white blood cell) found in the cerebrospinal fluid. The most common parasites implicated are Angiostrongylus cantonensis, Gnathostoma spinigerum and Schistosoma, as well as the conditions cysticercosis, toxocariasis, baylisascariasis, paragonimiasis, and a number of rarer infections and noninfective conditions. Rarely, free-living parasitic amoebae can cause naegleriasis, also called amebic meningitis, a type of meningoencephalitis in which not only the meninges are affected but also the brain tissue.
Non-infectious Meningitis may occur as the result of several non-infectious causes: spread of cancer to the meninges (malignant or neoplastic meningitis) and certain drugs (mainly non-steroidal anti-inflammatory drugs, antibiotics and intravenous immunoglobulins). It may also be caused by several inflammatory conditions, such as sarcoidosis (which is then called neurosarcoidosis), connective tissue disorders such as systemic lupus erythematosus, and certain forms of vasculitis (inflammatory conditions of the blood vessel wall), such as Behçet's disease. Epidermoid cysts and dermoid cysts may cause meningitis by releasing irritant matter into the subarachnoid space. Rarely, migraine may cause meningitis, but this diagnosis is usually only made when other causes have been eliminated. Mechanism The meninges comprise three membranes that, together with the cerebrospinal fluid, enclose and protect the brain and spinal cord (the central nervous system). The pia mater is a delicate impermeable membrane that firmly adheres to the surface of the brain, following all the minor contours. The arachnoid mater (so named because of its spider-web-like appearance) is a loosely fitting sac on top of the pia mater. The subarachnoid space separates the arachnoid and pia mater membranes and is filled with cerebrospinal fluid. The outermost membrane, the dura mater, is a thick durable membrane, which is attached to both the arachnoid membrane and the skull. In bacterial meningitis, bacteria reach the meninges by one of two main routes: through the bloodstream (hematogenous spread) or through direct contact between the meninges and either the nasal cavity or the skin. In most cases, meningitis follows invasion of the bloodstream by organisms that live on mucosal surfaces such as the nasal cavity. This is often in turn preceded by viral infections, which break down the normal barrier provided by the mucosal surfaces. Once bacteria have entered the bloodstream, they enter the subarachnoid space in places where the blood–brain barrier is vulnerable – such as the choroid plexus. Meningitis occurs in 25% of newborns with bloodstream infections due to group B streptococci; this phenomenon is much less common in adults. Direct contamination of the cerebrospinal fluid may arise from indwelling devices, skull fractures, or infections of the nasopharynx or the nasal sinuses that have formed a tract with the subarachnoid space (see above); occasionally, congenital defects of the dura mater can be identified. The large-scale inflammation that occurs in the subarachnoid space during meningitis is not a direct result of bacterial infection but can rather largely be attributed to the response of the immune system to the entry of bacteria into the central nervous system. When components of the bacterial cell membrane are identified by the immune cells of the brain (astrocytes and microglia), they respond by releasing large amounts of cytokines, hormone-like mediators that recruit other immune cells and stimulate other tissues to participate in an immune response. The blood–brain barrier becomes more permeable, leading to "vasogenic" cerebral edema (swelling of the brain due to fluid leakage from blood vessels). Large numbers of white blood cells enter the CSF, causing inflammation of the meninges and leading to "interstitial" edema (swelling due to fluid between the cells). 
In addition, the walls of the blood vessels themselves become inflamed (cerebral vasculitis), which leads to decreased blood flow and a third type of edema, "cytotoxic" edema. The three forms of cerebral edema all lead to increased intracranial pressure; together with the lowered blood pressure often encountered in sepsis, this means that it is harder for blood to enter the brain; consequently brain cells are deprived of oxygen and undergo apoptosis (programmed cell death). Administration of antibiotics may initially worsen the process outlined above, by increasing the amount of bacterial cell membrane products released through the destruction of bacteria. Particular treatments, such as the use of corticosteroids, are aimed at dampening the immune system's response to this phenomenon. Diagnosis Diagnosing meningitis as promptly as possible can improve outcomes. There are no specific signs or symptoms that can indicate meningitis, and a lumbar puncture (spinal tap) to examine the cerebrospinal fluid is recommended for diagnosis. Lumbar puncture is contraindicated if there is a mass in the brain (tumor or abscess) or the intracranial pressure (ICP) is elevated, as it may lead to brain herniation. If someone is at risk for either a mass or raised ICP (recent head injury, a known immune system problem, localizing neurological signs, or evidence on examination of a raised ICP), a CT or MRI scan is recommended prior to the lumbar puncture. This applies in 45% of all adult cases. There are no physical tests that can rule out or determine if a person has meningitis. The jolt accentuation test is not specific or sensitive enough to completely rule out meningitis. If someone is suspected of having meningitis, blood tests are performed for markers of inflammation (e.g. C-reactive protein, complete blood count), as well as blood cultures. If a CT or MRI is required before LP, or if LP proves difficult, professional guidelines suggest that antibiotics should be administered first to prevent delay in treatment, especially if the delay may be longer than 30 minutes. Often, CT or MRI scans are performed at a later stage to assess for complications of meningitis. In severe forms of meningitis, monitoring of blood electrolytes may be important; for example, hyponatremia is common in bacterial meningitis. The cause of hyponatremia, however, is controversial and may include dehydration, the syndrome of inappropriate secretion of the antidiuretic hormone (SIADH), or overly aggressive intravenous fluid administration. Lumbar puncture A lumbar puncture is done by positioning the person, usually lying on the side, applying local anesthetic, and inserting a needle into the dural sac (a sac around the spinal cord) to collect cerebrospinal fluid (CSF). When this has been achieved, the "opening pressure" of the CSF is measured using a manometer. The pressure is normally between 6 and 18 cm of water (cmH2O); in bacterial meningitis the pressure is usually elevated. In cryptococcal meningitis, intracranial pressure is markedly elevated. The initial appearance of the fluid may give an indication of the nature of the infection: cloudy CSF indicates higher levels of protein, white and red blood cells and/or bacteria, and therefore may suggest bacterial meningitis. The CSF sample is examined for the presence and types of white blood cells, red blood cells, protein content and glucose level.
Gram staining of the sample may demonstrate bacteria in bacterial meningitis, but absence of bacteria does not exclude bacterial meningitis, as they are only seen in 60% of cases; this figure is reduced by a further 20% if antibiotics were administered before the sample was taken. Gram staining is also less reliable in particular infections such as listeriosis. Microbiological culture of the sample is more sensitive (it identifies the organism in 70–85% of cases), but results can take up to 48 hours to become available. The type of white blood cell predominantly present (see table) indicates whether meningitis is bacterial (usually neutrophil-predominant) or viral (usually lymphocyte-predominant), although at the beginning of the disease this is not always a reliable indicator. Less commonly, eosinophils predominate, suggesting parasitic or fungal etiology, among others. The concentration of glucose in CSF is normally above 40% of that in blood. In bacterial meningitis it is typically lower; the CSF glucose level is therefore divided by the blood glucose (CSF glucose to serum glucose ratio). A ratio ≤0.4 is indicative of bacterial meningitis; in the newborn, glucose levels in CSF are normally higher, and a ratio below 0.6 (60%) is therefore considered abnormal. High levels of lactate in CSF indicate a higher likelihood of bacterial meningitis, as does a higher white blood cell count. If lactate levels are less than 35 mg/dl and the person has not previously received antibiotics, then bacterial meningitis may be ruled out. Various other specialized tests may be used to distinguish between different types of meningitis. A latex agglutination test may be positive in meningitis caused by Streptococcus pneumoniae, Neisseria meningitidis, Haemophilus influenzae, Escherichia coli and group B streptococci; its routine use is not encouraged as it rarely leads to changes in treatment, but it may be used if other tests are not diagnostic. Similarly, the limulus lysate test may be positive in meningitis caused by Gram-negative bacteria, but it is of limited use unless other tests have been unhelpful. Polymerase chain reaction (PCR) is a technique used to amplify small traces of DNA in order to detect the presence of bacterial or viral DNA in cerebrospinal fluid; it is a highly sensitive and specific test, since only trace amounts of the infecting agent's DNA are required. It may identify bacteria in bacterial meningitis and may assist in distinguishing the various causes of viral meningitis (enterovirus, herpes simplex virus 2 and mumps in those not vaccinated for it). Serology (identification of antibodies to viruses) may be useful in viral meningitis. If tuberculous meningitis is suspected, the sample is processed for Ziehl–Neelsen stain, which has a low sensitivity, and tuberculosis culture, which takes a long time to process; PCR is being used increasingly. Diagnosis of cryptococcal meningitis can be made at low cost using an India ink stain of the CSF; however, testing for cryptococcal antigen in blood or CSF is more sensitive. A diagnostic and therapeutic difficulty is "partially treated meningitis", where there are meningitis symptoms after receiving antibiotics (such as for presumptive sinusitis). When this happens, CSF findings may resemble those of viral meningitis, but antibiotic treatment may need to be continued until there is definitive positive evidence of a viral cause (e.g. a positive enterovirus PCR).
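The numeric rules above (the CSF:serum glucose ratio, the lactate threshold, and the predominant cell type) lend themselves to a compact illustration. The following Python sketch merely restates those thresholds as given in the text; the function name and parameters are invented for the example, real interpretation depends on the full clinical context, and this is in no way a diagnostic tool.

# Illustrative sketch only: restates the CSF thresholds described above.
# Invented names; not a diagnostic tool.
def csf_impression(csf_glucose, serum_glucose, csf_lactate_mg_dl,
                   neutrophil_fraction, newborn=False, prior_antibiotics=False):
    """Return a crude textual impression from CSF chemistry and cytology."""
    ratio = csf_glucose / serum_glucose
    # A ratio <= 0.4 points towards bacterial meningitis; newborns normally
    # run higher, so below 0.6 is the abnormal cut-off there.
    low_glucose = (ratio < 0.6) if newborn else (ratio <= 0.4)
    # Lactate under 35 mg/dl argues against bacterial meningitis, provided
    # no antibiotics were given before the sample was taken.
    if csf_lactate_mg_dl < 35 and not prior_antibiotics and not low_glucose:
        return "bacterial meningitis unlikely"
    if low_glucose and neutrophil_fraction > 0.5:
        return "consistent with bacterial meningitis"
    if neutrophil_fraction <= 0.5:
        return "lymphocyte predominance: viral cause more typical"
    return "indeterminate; further testing needed"

print(csf_impression(40, 120, 50, 0.9))  # ratio ~0.33, high lactate, neutrophilic

In this toy example, a glucose ratio of about 0.33 combined with a neutrophil-predominant cell count returns the bacterial impression, matching the rules described in the preceding paragraph.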
Postmortem Meningitis can be diagnosed after death has occurred. The findings from a post mortem are usually a widespread inflammation of the pia mater and arachnoid layers of the meninges. Neutrophil granulocytes tend to have migrated to the cerebrospinal fluid, and the base of the brain, along with the cranial nerves and the spinal cord, may be surrounded with pus – as may the meningeal vessels. Prevention For some causes of meningitis, protection can be provided in the long term through vaccination, or in the short term with antibiotics. Some behavioral measures may also be effective. Behavioral Bacterial and viral meningitis are contagious, but neither is as contagious as the common cold or flu. Both can be transmitted through droplets of respiratory secretions during close contact such as kissing, sneezing or coughing on someone, but bacterial meningitis cannot be spread by merely breathing the air where a person with meningitis has been. Viral meningitis is typically caused by enteroviruses, and is most commonly spread through fecal contamination. The risk of infection can be decreased by changing the behavior that led to transmission. Vaccination Since the 1980s, many countries have included immunization against Haemophilus influenzae type B in their routine childhood vaccination schemes. This has practically eliminated this pathogen as a cause of meningitis in young children in those countries. In the countries in which the disease burden is highest, however, the vaccine is still too expensive. Similarly, immunization against mumps has led to a sharp fall in the number of cases of mumps meningitis, which prior to vaccination occurred in 15% of all cases of mumps. Meningococcus vaccines exist against groups A, B, C, W135 and Y. In countries where the vaccine for meningococcus group C was introduced, cases caused by this pathogen have decreased substantially. A quadrivalent vaccine now exists, combining vaccines against all of these groups except B; immunization with this ACW135Y vaccine is now a visa requirement for taking part in the Hajj. Development of a vaccine against group B meningococci has proved much more difficult, as its surface proteins (which would normally be used to make a vaccine) only elicit a weak response from the immune system, or cross-react with normal human proteins. Still, some countries (New Zealand, Cuba, Norway and Chile) have developed vaccines against local strains of group B meningococci; some have shown good results and are used in local immunization schedules. Two new vaccines, both approved in 2014, are effective against a wider range of group B meningococci strains. In Africa, until recently, the approach for prevention and control of meningococcal epidemics was based on early detection of the disease and emergency reactive mass vaccination of the population at risk with bivalent A/C or trivalent A/C/W135 polysaccharide vaccines, though the introduction of MenAfriVac (a meningococcus group A vaccine) has demonstrated effectiveness in young people and has been described as a model for product development partnerships in resource-limited settings. Routine vaccination against Streptococcus pneumoniae with the pneumococcal conjugate vaccine (PCV), which is active against seven common serotypes of this pathogen, significantly reduces the incidence of pneumococcal meningitis. The pneumococcal polysaccharide vaccine, which covers 23 strains, is only administered to certain groups (e.g. those who have had a splenectomy, the surgical removal of the spleen); it does not elicit a significant immune response in all recipients, e.g.
small children. Childhood vaccination with Bacillus Calmette-Guérin has been reported to significantly reduce the rate of tuberculous meningitis, but its waning effectiveness in adulthood has prompted a search for a better vaccine. Antibiotics Short-term antibiotic prophylaxis is another method of prevention, particularly of meningococcal meningitis. In cases of meningococcal meningitis, preventative treatment of close contacts with antibiotics (e.g. rifampicin, ciprofloxacin or ceftriaxone) can reduce their risk of contracting the condition, but does not protect against future infections. Resistance to rifampicin has been noted to increase after use, which has caused some to recommend considering other agents. While antibiotics are frequently used in an attempt to prevent meningitis in those with a basilar skull fracture, there is not enough evidence to determine whether this is beneficial or harmful. This applies to those with or without a CSF leak. Management Meningitis is potentially life-threatening and has a high mortality rate if untreated; delay in treatment has been associated with a poorer outcome. Thus, treatment with broad-spectrum antibiotics should not be delayed while confirmatory tests are being conducted. If meningococcal disease is suspected in primary care, guidelines recommend that benzylpenicillin be administered before transfer to hospital. Intravenous fluids should be administered if hypotension (low blood pressure) or shock are present. It is not clear whether intravenous fluid should be given routinely or whether this should be restricted. Given that meningitis can cause a number of early severe complications, regular medical review is recommended to identify these complications early and to admit the person to an intensive care unit if deemed necessary. Mechanical ventilation may be needed if the level of consciousness is very low, or if there is evidence of respiratory failure. If there are signs of raised intracranial pressure, measures to monitor the pressure may be taken; this would allow the optimization of the cerebral perfusion pressure and various treatments to decrease the intracranial pressure with medication (e.g. mannitol). Seizures are treated with anticonvulsants. Hydrocephalus (obstructed flow of CSF) may require insertion of a temporary or long-term drainage device, such as a cerebral shunt. Osmotic therapy with glycerol has an unclear effect on mortality but may decrease hearing problems. Bacterial meningitis Antibiotics Empiric antibiotics (treatment without exact diagnosis) should be started immediately, even before the results of the lumbar puncture and CSF analysis are known. The choice of initial treatment depends largely on the kind of bacteria that cause meningitis in a particular place and population. For instance, in the United Kingdom, empirical treatment consists of a third-generation cefalosporin such as cefotaxime or ceftriaxone. In the US, where resistance to cefalosporins is increasingly found in streptococci, addition of vancomycin to the initial treatment is recommended. Chloramphenicol, either alone or in combination with ampicillin, however, appears to work equally well. Empirical therapy may be chosen on the basis of the person's age, whether the infection was preceded by a head injury, whether the person has undergone recent neurosurgery and whether or not a cerebral shunt is present. In young children and those over 50 years of age, as well as those who are immunocompromised, the addition of ampicillin is recommended to cover Listeria monocytogenes.
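The empirical choices just described amount to a small decision table: a baseline third-generation cefalosporin, vancomycin added where cefalosporin-resistant streptococci are a concern (as in the US), and ampicillin added for those at risk of Listeria. The Python sketch below restates only what the text says, purely to illustrate the shape of that logic; the age cut-off for "young children" is an arbitrary assumption made for the example, the function name is invented, and none of this is clinical guidance.

# Toy restatement of the empirical-therapy factors described above.
# Not clinical guidance: real regimens depend on local resistance
# patterns, allergies, dosing and much else omitted here.
def empiric_regimen(age_years, immunocompromised=False, region="UK"):
    regimen = ["third-generation cefalosporin (e.g. cefotaxime or ceftriaxone)"]
    if region == "US":
        # Cefalosporin-resistant streptococci are increasingly found in the US.
        regimen.append("vancomycin")
    if age_years < 5 or age_years > 50 or immunocompromised:
        # A "young children" cut-off of 5 is assumed for this sketch only.
        # Ampicillin is added in these groups to cover Listeria monocytogenes.
        regimen.append("ampicillin")
    return regimen

print(empiric_regimen(age_years=67, region="US"))
# ['third-generation cefalosporin (e.g. cefotaxime or ceftriaxone)',
#  'vancomycin', 'ampicillin']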
Once the Gram stain results become available, and the broad type of bacterial cause is known, it may be possible to change the antibiotics to those likely to deal with the presumed group of pathogens. The results of the CSF culture generally take longer to become available (24–48 hours). Once they do, empiric therapy may be switched to specific antibiotic therapy targeted to the specific causative organism and its sensitivities to antibiotics. For an antibiotic to be effective in meningitis, it must not only be active against the pathogenic bacterium but also reach the meninges in adequate quantities; some antibiotics have inadequate penetrance and therefore have little use in meningitis. Most of the antibiotics used in meningitis have not been tested directly on people with meningitis in clinical trials. Rather, the relevant knowledge has mostly derived from laboratory studies in rabbits. Tuberculous meningitis requires prolonged treatment with antibiotics. While tuberculosis of the lungs is typically treated for six months, those with tuberculous meningitis are typically treated for a year or longer. Fluid therapy Fluids given intravenously are an essential part of the treatment of bacterial meningitis. There is no difference in terms of mortality or acute severe neurological complications between children given a maintenance regimen and those given a restricted-fluid regimen, but the evidence favors the maintenance regimen in terms of the emergence of chronic severe neurological complications. Steroids Additional treatment with corticosteroids (usually dexamethasone) has shown some benefits, such as a reduction of hearing loss and better short-term neurological outcomes in adolescents and adults from high-income countries with low rates of HIV. Some research has found reduced rates of death, while other research has not. Corticosteroids also appear to be beneficial in those with tuberculous meningitis, at least in those who are HIV negative. Professional guidelines therefore recommend that dexamethasone or a similar corticosteroid be started just before the first dose of antibiotics is given and continued for four days. Given that most of the benefit of the treatment is confined to those with pneumococcal meningitis, some guidelines suggest that dexamethasone be discontinued if another cause for meningitis is identified. The likely mechanism is suppression of overactive inflammation. Additional treatment with corticosteroids has a different role in children than in adults. Though the benefit of corticosteroids has been demonstrated in adults as well as in children from high-income countries, their use in children from low-income countries is not supported by the evidence; the reason for this discrepancy is not clear. Even in high-income countries, the benefit of corticosteroids is only seen when they are given prior to the first dose of antibiotics, and is greatest in cases of H. influenzae meningitis, the incidence of which has decreased dramatically since the introduction of the Hib vaccine. Thus, corticosteroids are recommended in the treatment of pediatric meningitis if the cause is H. influenzae, and only if given prior to the first dose of antibiotics; other uses are controversial.
Adjuvant therapies In addition to the primary therapy of antibiotics and corticosteroids, other adjuvant therapies are under development or are sometimes used to try to improve survival from bacterial meningitis and reduce the risk of neurological problems. Examples of adjuvant therapies that have been trialed include acetaminophen, immunoglobulin therapy, heparin, pentoxifylline, and a mononucleotide mixture with succinic acid. It is not clear if any of these therapies are helpful or worsen outcomes in people with acute bacterial meningitis. Viral meningitis Viral meningitis typically only requires supportive therapy; most viruses responsible for causing meningitis are not amenable to specific treatment. Viral meningitis tends to run a more benign course than bacterial meningitis. Herpes simplex virus and varicella zoster virus may respond to treatment with antiviral drugs such as aciclovir, but there are no clinical trials that have specifically addressed whether this treatment is effective. Mild cases of viral meningitis can be treated at home with conservative measures such as fluids, bedrest, and analgesics. Fungal meningitis Fungal meningitis, such as cryptococcal meningitis, is treated with long courses of high-dose antifungals, such as amphotericin B and flucytosine. Raised intracranial pressure is common in fungal meningitis, and frequent (ideally daily) lumbar punctures to relieve the pressure are recommended, or alternatively a lumbar drain. Prognosis Untreated, bacterial meningitis is almost always fatal. According to the WHO, bacterial meningitis has an overall mortality rate of 16.7% (with treatment). Viral meningitis, in contrast, tends to resolve spontaneously and is rarely fatal. With treatment, mortality (risk of death) from bacterial meningitis depends on the age of the person and the underlying cause. Of newborns, 20–30% may die from an episode of bacterial meningitis. This risk is much lower in older children, whose mortality is about 2%, but rises again to about 19–37% in adults. Risk of death is predicted by various factors apart from age, such as the pathogen and the time it takes for the pathogen to be cleared from the cerebrospinal fluid, the severity of the generalized illness, a decreased level of consciousness or an abnormally low count of white blood cells in the CSF. Meningitis caused by H. influenzae and meningococci has a better prognosis than cases caused by group B streptococci, coliforms and S. pneumoniae. In adults, too, meningococcal meningitis has a lower mortality (3–7%) than pneumococcal disease. In children, there are several potential disabilities which may result from damage to the nervous system, including sensorineural hearing loss, epilepsy, learning and behavioral difficulties, and decreased intelligence. These occur in about 15% of survivors. Some of the hearing loss may be reversible. In adults, 66% of all cases emerge without disability. The main problems are deafness (in 14%) and cognitive impairment (in 10%). Tuberculous meningitis in children continues to be associated with a significant risk of death even with treatment (19%), and a significant proportion of the surviving children have ongoing neurological problems. Just over a third of all cases survive with no problems. Epidemiology Although meningitis is a notifiable disease in many countries, the exact incidence rate is unknown. In 2013 meningitis resulted in 303,000 deaths – down from 464,000 deaths in 1990.
In 2010 it was estimated that meningitis resulted in 420,000 deaths, excluding cryptococcal meningitis. Bacterial meningitis occurs in about 3 people per 100,000 annually in Western countries. Population-wide studies have shown that viral meningitis is more common, at 10.9 per 100,000, and occurs more often in the summer. In Brazil, the rate of bacterial meningitis is higher, at 45.8 per 100,000 annually. Sub-Saharan Africa has been plagued by large epidemics of meningococcal meningitis for over a century, leading to it being labeled the "meningitis belt". Epidemics typically occur in the dry season (December to June), and an epidemic wave can last two to three years, dying out during the intervening rainy seasons. Attack rates of 100–800 cases per 100,000 are encountered in this area, which is poorly served by medical care. These cases are predominantly caused by meningococci. The largest epidemic ever recorded in history swept across the entire region in 1996–1997, causing over 250,000 cases and 25,000 deaths. Meningococcal disease occurs in epidemics in areas where many people live together for the first time, such as army barracks during mobilization, university and college campuses, and the annual Hajj pilgrimage. Although the pattern of epidemic cycles in Africa is not well understood, several factors have been associated with the development of epidemics in the meningitis belt. They include: medical conditions (immunological susceptibility of the population), demographic conditions (travel and large population displacements), socioeconomic conditions (overcrowding and poor living conditions), climatic conditions (drought and dust storms), and concurrent infections (acute respiratory infections). There are significant differences in the local distribution of causes for bacterial meningitis. For instance, while N. meningitidis groups B and C cause most disease episodes in Europe, group A is found in Asia and continues to predominate in Africa, where it causes most of the major epidemics in the meningitis belt, accounting for about 80% to 85% of documented meningococcal meningitis cases. History Some suggest that Hippocrates may have recognized the existence of meningitis, and it seems that meningism was known to pre-Renaissance physicians such as Avicenna. The description of tuberculous meningitis, then called "dropsy in the brain", is often attributed to Edinburgh physician Sir Robert Whytt in a posthumous report that appeared in 1768, although the link with tuberculosis and its pathogen was not made until the next century. It appears that epidemic meningitis is a relatively recent phenomenon. The first recorded major outbreak occurred in Geneva in 1805. Several other epidemics in Europe and the United States were described shortly afterward, and the first report of an epidemic in Africa appeared in 1840. African epidemics became much more common in the 20th century, starting with a major epidemic sweeping Nigeria and Ghana in 1905–1908. The first report of bacterial infection underlying meningitis was by the Austrian bacteriologist Anton Weichselbaum, who in 1887 described the meningococcus. Mortality from meningitis was very high (over 90%) in early reports. In 1906, antiserum was produced in horses; this was developed further by the American scientist Simon Flexner and markedly decreased mortality from meningococcal disease. In 1944, penicillin was first reported to be effective in meningitis.
The introduction in the late 20th century of Haemophilus vaccines led to a marked fall in cases of meningitis associated with this pathogen, and in 2002, evidence emerged that treatment with steroids could improve the prognosis of bacterial meningitis.
Biology and health sciences
Non-infectious disease
21010263
https://en.wikipedia.org/wiki/Sickle%20cell%20disease
Sickle cell disease
Sickle cell disease (SCD), also simply called sickle cell, is a group of hemoglobin-related blood disorders that are typically inherited. The most common type is known as sickle cell anemia. Sickle cell anemia results in an abnormality in the oxygen-carrying protein haemoglobin found in red blood cells. This leads to the red blood cells adopting an abnormal sickle-like shape under certain circumstances; with this shape, they are unable to deform as they pass through capillaries, causing blockages. Problems in sickle cell disease typically begin around 5 to 6 months of age. A number of health problems may develop, such as attacks of pain (known as a sickle cell crisis) in joints, anemia, swelling in the hands and feet, bacterial infections, dizziness and stroke. The probability of severe symptoms, including long-term pain, increases with age. Without treatment, people with SCD rarely reach adulthood, but with good healthcare, median life expectancy is between 58 and 66 years. All of the major organs are affected by sickle cell disease. The liver, heart, kidneys, gallbladder, eyes, bones, and joints can be damaged from the abnormal functions of the sickle cells and their inability to flow effectively through the small blood vessels. Sickle cell disease occurs when a person inherits two abnormal copies of the β-globin gene that makes haemoglobin, one from each parent. Several subtypes exist, depending on the exact mutation in each haemoglobin gene. An attack can be set off by temperature changes, stress, dehydration, and high altitude. A person with a single abnormal copy does not usually have symptoms and is said to have sickle cell trait. Such people are also referred to as carriers. Diagnosis is by a blood test, and some countries test all babies at birth for the disease. Diagnosis is also possible during pregnancy. The care of people with sickle cell disease may include infection prevention with vaccination and antibiotics, high fluid intake, folic acid supplementation, and pain medication. Other measures may include blood transfusion and the medication hydroxycarbamide (hydroxyurea). In 2023, new gene therapies were approved involving the genetic modification and replacement of blood-forming stem cells in the bone marrow. SCD is estimated to affect about 7.7 million people worldwide, directly causing an estimated 34,000 annual deaths and contributing to a further 376,000 deaths. About 80% of sickle cell disease cases are believed to occur in Sub-Saharan Africa. It also occurs to a lesser degree in parts of India, Southern Europe, West Asia, North Africa and among people of African origin (sub-Saharan) living in other parts of the world. The condition was first described in the medical literature by American physician James B. Herrick in 1910. In 1949, its genetic transmission was determined by E. A. Beet and J. V. Neel. In 1954, it was established that carriers of the abnormal gene have some degree of protection against malaria. Signs and symptoms Signs of sickle cell disease usually begin in early childhood. The severity of symptoms can vary from person to person, as can the frequency of crisis events. Sickle cell disease may lead to various acute and chronic complications, several of which have a high mortality rate. First events When SCD presents within the first year of life, the most common problem is an episode of pain and swelling in the child's hands and feet, known as dactylitis or "hand-foot syndrome."
Pallor, jaundice, and fatigue can also be early signs due to the anaemia resulting from sickle cell disease. In children older than 2 years, the most common initial presentation is a painful episode of a generalized or variable nature, while a slightly less common presentation involves acute chest pain. Dactylitis is rare or almost never occurs in children over the age of 2. Critical events Vaso-occlusive crisis Also termed "sickle cell crisis" or "sickling crisis", the vaso-occlusive crisis (VOC) manifests principally as extreme pain, most often affecting the chest, back, legs and/or arms. The underlying cause is sickle-shaped red blood cells that obstruct capillaries and restrict blood flow to an organ, resulting in ischaemia, pain, necrosis, and often organ damage. The frequency, severity, and duration of these crises vary considerably. Milder crises can be managed with nonsteroidal anti-inflammatory drugs. For more severe crises, patients may require inpatient management for intravenous opioids. Vaso-occlusive crises involving organs such as the penis or lungs are considered emergencies and treated with red blood cell transfusions. A VOC can be triggered by anything which causes blood vessels to constrict; this includes physical or mental stress, cold, and dehydration. "After HbS deoxygenates in the capillaries, it takes some time (seconds) for HbS polymerization and the subsequent flexible-to-rigid transformation. If the transit time of RBC through the microvasculature is longer than the polymerization time, sickled RBC will lodge in the microvasculature." Splenic sequestration crisis The spleen is especially prone to damage in SCD due to its role as a blood filter. A splenic sequestration crisis, also known as a spleen crisis, is a medical emergency that occurs when sickled red blood cells block the spleen's filter mechanism, causing the spleen to swell and fill with blood. The accumulation of red blood cells in the spleen results in a sudden drop in circulating hemoglobin and potentially life-threatening anemia. Symptoms include pain on the left side, a swollen spleen (which can be detected by palpation), fatigue, dizziness, irritability, rapid heartbeat, or pale skin. It most commonly affects young children; the median age of first occurrence is 1.4 years. By the age of 5 years, repeated instances of sequestration cause scarring and eventual atrophy of the spleen. Treatment is supportive, with blood transfusion if hemoglobin levels fall too low. Full or partial splenectomy may be necessary. The long-term consequence of a loss of spleen function is increased susceptibility to bacterial infections. Acute chest syndrome Acute chest syndrome is caused by a VOC which affects the lungs, possibly triggered by infection or by emboli which have circulated from other organs. Symptoms include wheezing, chest pain, fever, pulmonary infiltrate (visible on x-ray), and hypoxemia. After the sickling crisis (see above), it is the second-most common cause of hospitalization and it accounts for about 25% of deaths in patients with SCD. Most cases present with vaso-occlusive crises and then develop acute chest syndrome. Aplastic crisis Aplastic crises are instances of an acute worsening of the patient's baseline anaemia, producing a pale appearance, fast heart rate, and fatigue. This crisis is normally triggered by parvovirus B19, which directly affects production of red blood cells by invading the red cell precursors and multiplying in and destroying them.
Parvovirus infection almost completely prevents red blood cell production for two to three days (red cell aplasia). In normal individuals, this is of little consequence, but the shortened red cell life of SCD patients results in an abrupt, life-threatening situation. Reticulocyte count drops dramatically during the disease (causing reticulocytopenia), red cell production lapses, and the rapid destruction of existing red cells leads to acute and severe anemia. This crisis takes four to seven days to resolve. Most patients can be managed supportively; some need a blood transfusion. Complications Sickle cell anaemia can lead to various complications including: Increased risk of severe bacterial infections, due to loss of functioning spleen tissue. These infections are typically caused by bacteria such as Streptococcus pneumoniae and Haemophilus influenzae. Daily penicillin prophylaxis is the most commonly used treatment during childhood, with some haematologists continuing treatment indefinitely. Patients benefit from routine vaccination for S. pneumoniae. Stroke can result from blockage of blood vessels in the brain, causing numbness, confusion, or weakness which may be long-lasting. Silent stroke causes no immediate symptoms, but is associated with damage to the brain. Silent stroke is probably five times as common as symptomatic stroke. About 10–15% of children with SCD have strokes, with silent strokes predominating in the younger patients. Cholelithiasis (gallstones) and cholecystitis may result from excessive bilirubin production and precipitation due to prolonged haemolysis. Avascular necrosis (aseptic bone necrosis) of the hip and other major joints may occur as a result of ischaemia. Priapism and infarction of the penis. Osteomyelitis (bacterial bone infection), as a result of damage to the spleen, commonly caused by either Staphylococcus aureus or species of Salmonella. Chronic kidney failure due to sickle-cell nephropathy, which manifests itself with hypertension, protein loss in the urine, loss of red blood cells in urine and worsened anaemia. If it progresses to end-stage kidney failure, it carries a poor prognosis. Leg ulcers are relatively common in SCD and can be disabling. In the eyes, background retinopathy, proliferative retinopathy, vitreous haemorrhages, and retinal detachments can result in blindness. Regular annual eye checks are recommended. During pregnancy, intrauterine growth restriction, spontaneous abortion, and pre-eclampsia. Chronic pain: Even in the absence of acute vaso-occlusive pain, many patients have unreported chronic pain. Pulmonary hypertension (increased pressure in the pulmonary artery) can lead to strain on the right ventricle and a risk of heart failure; typical symptoms are shortness of breath, decreased exercise tolerance, and episodes of syncope. 21% of children and 30% of adults have evidence of pulmonary hypertension when tested; this is associated with reduced walking distance and increased mortality. Cardiomyopathy and left ventricular diastolic dysfunction caused by fibrosis or scarring of cardiac tissues. This also contributes to pulmonary hypertension, decreased exercise capacity, and arrhythmias. Genetics Hemoglobin is an oxygen-binding protein, found in erythrocytes, which transports oxygen from the lungs (or in the fetus, from the placenta) to the tissues. Each molecule of hemoglobin comprises 4 protein subunits, referred to as globins.
Normally, humans have: hemoglobin F (fetal hemoglobin, HbF), consisting of two alpha (α-globin) and two gamma (γ-globin) chains. This dominates during development of the fetus and until about 6 weeks of age; afterwards, haemoglobin A dominates throughout life. Hemoglobin A (adult hemoglobin, HbA) consists of two alpha and two beta (β-globin) chains. This is the most common human hemoglobin tetramer, accounting for over 97% of the total red blood cell hemoglobin in normal adults. Hemoglobin A2 (HbA2) is a second form of adult hemoglobin and is composed of two alpha and two delta (δ-globin) chains. This hemoglobin typically makes up 1–3% of hemoglobin in adults. β-globin is encoded by the HBB gene on human chromosome 11; mutations in this gene produce variants of the protein which are implicated in abnormal hemoglobins. The mutation which causes sickle cell disease results in an abnormal hemoglobin known as hemoglobin S (HbS), which replaces HbA in adults. The human genome contains a pair of genes for β-globin; in people with sickle cell disease, both genes are affected and the erythropoietic cells in the bone marrow will only create HbS. In people with sickle cell trait, only one gene is abnormal; erythropoiesis generates a mixture of normal HbA and sickle HbS. The person has very few if any symptoms of sickle cell disease but carries the gene and can pass it on to their children. Sickle cell disease has an autosomal recessive pattern of inheritance from parents. Both copies of the affected gene must carry the same mutation (the homozygous condition) for a person to be affected by an autosomal recessive disorder. An affected person usually has unaffected parents who each carry one mutated gene and one normal gene (the heterozygous condition) and are referred to as genetic carriers; they may not have any symptoms. When both parents have the sickle cell trait, any given child has a 25% chance of sickle cell disease, a 25% chance of no sickle cell alleles, and a 50% chance of the heterozygous condition. There are several different haplotypes of the sickle cell gene mutation, indicating that it probably arose spontaneously in different geographic areas. The variants are known as Cameroon, Senegal, Benin, Bantu, and Saudi-Asian. These are clinically important because some, e.g. the Senegal and Saudi-Asian variants, are associated with higher HbF levels and tend to produce milder disease. The gene defect is a single nucleotide mutation of the β-globin gene, which results in glutamate being substituted by valine at position 6 of the β-globin chain. Hemoglobin with this mutation is referred to as HbS, as opposed to the normal adult HbA. Under conditions of normal oxygen concentration, this causes no apparent effects on the structure of haemoglobin or its ability to transport oxygen around the body. However, the deoxy form of HbS has an exposed hydrophobic patch which causes HbS molecules to join to form long inflexible chains. Under conditions of low oxygen concentration in the bloodstream, such as exercise, stress, altitude or dehydration, HbS polymerization forms fibrous precipitates within the red blood cell. In people homozygous for the sickle cell mutation, the presence of long-chain polymers of HbS distorts the shape of the red blood cell from a smooth, doughnut-like shape to the sickle shape, making it fragile and susceptible to blocking or breaking within capillaries.
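The 25%/50%/25% split quoted earlier in this section is simply the enumeration of the four equally likely combinations of parental alleles, the cross that a Punnett square depicts. A minimal Python sketch, with the allele symbols A (normal β-globin) and S (sickle) chosen purely for illustration:

# Enumerates the four equally likely allele combinations when both
# parents carry one normal beta-globin allele (A) and one sickle
# allele (S) -- the cross behind the 25%/50%/25% figures above.
from itertools import product
from collections import Counter

parent1 = ["A", "S"]   # carrier (sickle cell trait)
parent2 = ["A", "S"]   # carrier (sickle cell trait)

outcomes = Counter("".join(sorted(pair)) for pair in product(parent1, parent2))
total = sum(outcomes.values())
labels = {"AA": "unaffected, no sickle allele",
          "AS": "sickle cell trait (carrier)",
          "SS": "sickle cell disease"}
for genotype, count in sorted(outcomes.items()):
    print(f"{genotype} ({labels[genotype]}): {count}/{total} = {count/total:.0%}")
# AA (unaffected, no sickle allele): 1/4 = 25%
# AS (sickle cell trait (carrier)): 2/4 = 50%
# SS (sickle cell disease): 1/4 = 25%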
In people heterozygous for HbS (carriers of sickle cell disease), the polymerisation problems are minor because the normal allele can produce half of the haemoglobin. Sickle cell carriers have symptoms only if they are deprived of oxygen (for example, at altitude) or while severely dehydrated. Malaria SCD is most prevalent in areas which have historically been associated with endemic malaria. The sickle cell trait provides a carrier with a survival advantage against malaria fatality over people with normal hemoglobin in regions where malaria is endemic. Infection with the malaria parasite affects asymptomatic carriers of the abnormal hemoglobin gene differently from patients with full SCD. Carriers (heterozygous for the gene) who catch malaria are less likely to suffer from severe symptoms than people with normal hemoglobin. SCD patients (homozygous for the gene) are similarly less likely to become infected with malaria; however, once infected, they are more likely to develop severe and life-threatening anemia. The impact of sickle cell anemia on malaria immunity illustrates some evolutionary trade-offs that have occurred because of endemic malaria. Although the shorter life expectancy for those with the homozygous condition would tend to disfavour the trait's survival, the trait is preserved in malaria-prone regions because of the benefits provided by the heterozygous form; an example of natural selection. Due to the adaptive advantage of the heterozygote, the disease is still prevalent, especially among people with recent ancestry in malaria-stricken areas, such as Africa, the Mediterranean, India, and the Middle East. Malaria was historically endemic to southern Europe, but it was declared eradicated in the mid-20th century, with the exception of rare sporadic cases. The malaria parasite has a complex lifecycle and spends part of it in red blood cells. There are two mechanisms which protect sickle cell carriers from malaria. One is that the parasite is hindered from growing and reproducing in a carrier's red blood cells; the other is that a carrier's red cells show signs of damage when infected, and are detected and destroyed as they pass through the spleen. Pathophysiology Under conditions of low oxygen concentration, HbS polymerises to form long strands within the red blood cell (RBC). These strands distort the shape of the cell and after a few seconds cause it to adopt an abnormal, inflexible sickle-like shape. This process reverses when the oxygen concentration is raised and the cells resume their normal biconcave disc shape. If sickling takes place in the venous system, after blood has passed through the capillaries, it has no effect on the organs and the RBCs can unsickle when they become oxygenated in the lungs. Repeated switching between sickle and normal shapes damages the membrane of the RBC so that it eventually becomes permanently sickled. Normal red blood cells are quite elastic and have a biconcave disc shape, which allows the cells to deform to pass through capillaries. In sickle cell disease, low oxygen tension promotes red blood cell sickling, and repeated episodes of sickling damage the cell membrane and decrease the cell's elasticity. These cells fail to return to normal shape when normal oxygen tension is restored. As a consequence, these rigid blood cells are unable to deform as they pass through narrow capillaries, leading to vessel occlusion and ischaemia. Cells which have become sickled are detected as they pass through the spleen and are destroyed.
In young children with SCD, the accumulation of sickled cells in the spleen can result in a splenic sequestration crisis, in which the spleen becomes engorged with blood, depriving the general circulation of blood cells and leading to severe anemia. The spleen initially becomes noticeably swollen, but the lack of healthy blood flow through the organ culminates in scarring of the splenic tissue and eventually death of the organ, generally before the age of 5 years.
The anaemia of the illness itself is caused by haemolysis, the destruction of the red cells because of their shape. Although the bone marrow attempts to compensate by creating new red cells, it cannot match the rate of destruction: healthy red blood cells typically function for 90–120 days, but sickled cells last only 10–20 days. The rapid breakdown of RBCs in SCD releases free heme into the bloodstream in quantities exceeding the capacity of the body's protective mechanisms. Although heme is an essential component of hemoglobin, it is also a potent oxidative molecule. Free heme is also an alarmin: a signal of tissue damage or infection which triggers defensive responses in the body and increases the risk of inflammation and vaso-occlusive events.
Diagnosis
Prenatal and newborn screening
Checking for SCD begins during pregnancy, with a prenatal screening questionnaire which includes, among other things, a consideration of health issues in the child's parents and close relatives. During pregnancy, genetic testing can be done on either a blood sample from the fetus or a sample of amniotic fluid. During the first trimester of pregnancy, chorionic villus sampling (CVS) is another technique used for SCD prenatal diagnosis. A routine heel prick test, in which a small sample of blood is collected a few days after birth, is used to check conclusively for SCD as well as other inherited conditions.
Tests
Where SCD is suspected, a number of tests can be used. Often a simpler, cheaper test is applied first, with a more complex test such as DNA analysis used to confirm a positive result. Two tests are specific for SCD:
A blood smear is a thin layer of blood smeared on a glass microscope slide and then stained in such a way as to allow the various blood cells to be examined microscopically. This technique can be used to visually detect sickled cells; however, it does not detect sickle cell carriers.
A solubility test relies on the fact that HbS is less soluble than normal hemoglobin (HbA); it is highly reliable, but does not distinguish between full SCD and carrier status.
Other tests can be used for SCD as well as for other hemoglobinopathies:
Hemoglobin electrophoresis is a test that can detect different types of hemoglobin. Hemoglobin is extracted from the red cells, then introduced into a porous gel and subjected to an electrical field. This separates the normal and abnormal types of hemoglobin, which can then be identified and quantified; because the glutamate-to-valine substitution removes a negatively charged residue, HbS migrates differently from HbA in the electric field.
Isoelectric focusing (IEF) is a technique that can be used to diagnose sickle cell disease and other hemoglobinopathies. The technique separates molecules based on their isoelectric point, the pH at which they have no net electrical charge. IEF uses an electric field to separate and identify different types of hemoglobin, which become focused into sharp stationary bands. The technique can distinguish many types of abnormal hemoglobin.
High-performance liquid chromatography (HPLC) is reliable, fully automated, and able to distinguish most types of sickle cell disease, including heterozygous forms. The method separates and quantifies hemoglobin fractions by measuring their rate of flow through a column of adsorbent material.
DNA analysis uses the polymerase chain reaction (PCR) to amplify small samples of DNA. Variants of PCR used to diagnose SCD include the amplification-refractory mutation system (ARMS) and allele-specific recombinase polymerase amplification. These tests can identify subtypes of SCD as well as combination hemoglobinopathies.
Genetic counseling
Genetic counseling is the process by which people with a hereditary disorder are advised of the probability of transmitting it and the ways in which this may be prevented or ameliorated. People who are known carriers of the disease, or at risk of having a child with sickle cell anemia, may undergo genetic counseling. Genetic counselors work with families to discuss the benefits, limitations, and logistics of genetic testing options, as well as the potential impact of testing and test results on the individual. Counseling is best given before a child is conceived, and a number of possible courses may be suggested. These include adoption, the use of eggs or sperm from a healthy donor, and in-vitro fertilisation (IVF) combined with pre-implantation genetic diagnosis of the embryos.
Treatment
Management
A number of precautions can help reduce the risk of developing a sickling crisis. Lifestyle measures include maintaining good hydration and avoiding physical stress or exhaustion. Since sickling can be triggered by low oxygen levels, people with SCD should avoid high altitudes, such as high mountains or flying in unpressurised aircraft. People with SCD should also avoid alcohol and smoking, as alcohol can cause dehydration and smoking can trigger acute chest syndrome. Stress can likewise trigger a sickle cell crisis, so relaxation techniques such as breathing exercises can help.
Pneumococcal infection is a leading cause of death among children with SCD; penicillin is recommended daily during the first 5 years of life in order to minimise the risk of infection. Dietary supplementation of folic acid is sometimes recommended, on the basis that it facilitates the creation of new red blood cells and may reduce anemia; however, a 2016 Cochrane review found that "the effect of supplementation on anaemia and any symptoms of anaemia remains unclear" due to a lack of medical evidence. People with SCD are advised to receive all vaccinations recommended by health authorities, in order to avoid serious infections which might trigger a sickling crisis.
Hydroxyurea, the first drug approved for the treatment of SCD, has been shown to decrease the number and severity of attacks and possibly to increase survival time. It achieves this, in part, by reactivating fetal hemoglobin production in place of the hemoglobin S that causes sickling. Hydroxyurea also lowers the expression of adhesion molecules on endothelial and red blood cells, which lowers the chance of small-vessel blockages, and it encourages the release of nitric oxide, which enhances blood flow and inhibits the formation of clots. Hydroxyurea had previously been used as a chemotherapy agent, and some concern exists that long-term use may be harmful. A Cochrane review in 2022 found a weak evidence base for its use in SCD.
Voxelotor received accelerated approval as a treatment for SCD in the United States in 2019, and was approved by the European Medicines Agency (EMA) in 2021. In trials, it had shown disease-modifying potential by increasing hemoglobin levels and decreasing indicators of hemolysis. However, following an increased risk of vaso-occlusive crises and death observed in registries and clinical trials, the manufacturer, Pfizer, withdrew it from the market worldwide.
Blood transfusion
A simple, or top-up, transfusion is a procedure in which healthy blood cells from a donor are infused into the patient's bloodstream. It benefits the patient by alleviating anemia and increasing oxygen levels in the tissues, reducing the risk of sickling and relieving sickling symptoms. A simple transfusion can be used to treat SCD when hemoglobin levels drop too low, or to prepare for an operation or pregnancy. It can also be used to protect against long-term complications, or to reduce the risk of stroke.
An exchange transfusion is a procedure in which blood is removed from the body, then processed to extract the sickled cells, which are replaced by healthy red blood cells from a donor. The treated blood, including white cells and plasma, is then returned to the patient. Exchange transfusions are likely to be needed in an emergency, in severe cases of SCD, or to support a mother during pregnancy.
Stroke prevention
Transcranial Doppler ultrasound (TCD) can identify children with sickle cell disease who are at high risk of stroke. The test detects blood vessels partially obstructed by sickled cells by measuring the velocity of blood flow into the brain: for a given volumetric flow, velocity varies inversely with the cross-sectional area of the vessel, so a high blood-flow velocity indicates narrowing of the arteries. In children, preventive RBC transfusion therapy has been shown to reduce the risk of a first stroke or silent stroke when transcranial Doppler ultrasonography shows abnormal cerebral blood flow. In those who have sustained a prior stroke, it also reduces the risk of recurrent strokes and additional silent strokes.
Vaso-occlusive crisis
Most people with sickle cell disease have intensely painful episodes called vaso-occlusive crises (VOC), although their frequency, severity, and duration vary tremendously. In a VOC, circulation in the blood vessels is obstructed by sickled red blood cells, causing ischemic injury to the tissues, inflammation, and pain. Recurrent episodes may cause irreversible organ damage. The most common and obvious symptom of a VOC is pain, which may be felt anywhere in the body but most commonly in the limbs and back, and which varies from mild to severe. Home treatment options include bed rest, hydration, and pain control using over-the-counter medication such as paracetamol or ibuprofen. More severe cases may require prescription opioids such as codeine or morphine for pain control.
In 2019, crizanlizumab, a monoclonal antibody targeting P-selectin, was approved in the United States to reduce the frequency of vaso-occlusive crises in those aged 16 years and older. It had also been approved in the UK and Europe, but in both cases the authorisation was subsequently withdrawn because of poor evidence of its effectiveness.
Acute chest syndrome
Acute chest syndrome is caused by vaso-occlusion occurring in the lungs. As with a VOC, treatment includes pain control and hydration. Antibiotics are required because there is a severe risk of pulmonary infection, and oxygen supplementation is given for hypoxia. Blood transfusion may also be required, or exchange transfusion in severe cases.
Treating avascular necrosis
When treating avascular necrosis of the bone in people with sickle cell disease, the aim is to reduce or stop the pain and maintain joint mobility. Treatment options include resting the joint, physical therapy, pain-relief medicine, joint-replacement surgery, or bone grafting.
Psychological therapy
Psychological therapies such as patient education, cognitive therapy, behavioural therapy, and psychodynamic psychotherapy, which aim to complement current medical treatments, require further research to determine their effectiveness.
Stem cell treatments
Hematopoietic stem cells (HSCs) are cells in the bone marrow that can develop into all types of blood cells, including red blood cells, white blood cells, and platelets. There are two possible ways to treat SCD and some other hemoglobinopathies by targeting HSCs. Since 1991, a small number of patients have received bone marrow transplants from healthy matched donors, although this procedure carries a high level of risk. More recently, it has become possible to use CRISPR gene-editing technology to modify the patient's own HSCs in a way that reduces or eliminates the production of sickle hemoglobin HbS and replaces it with a non-sickling form of hemoglobin.
All stem cell treatments involve myeloablation of the patient's bone marrow in order to remove the HSCs containing the faulty gene. This requires high doses of chemotherapy agents, with side effects such as sickness and tiredness. A long hospital stay is necessary after infusion of the replacement HSCs, while the cells take up residence in the bone marrow and start to make red blood cells with the stable form of hemoglobin.
Gene therapy
Gene therapy was first trialled in 2014 on a single patient, followed by clinical trials in which a number of patients were successfully treated. In 2023, both exagamglogene autotemcel (Casgevy) and lovotibeglogene autotemcel (Lyfgenia) were approved for the treatment of sickle cell disease. In October 2024, Kendric Cromer became the first person in the USA to receive commercially approved gene therapy for SCD, and was discharged from Children's National Hospital.
Both Casgevy and Lyfgenia work by first harvesting the patient's HSCs and then modifying their DNA in the laboratory. In parallel, the patient's bone marrow undergoes a myeloablation procedure to destroy the remaining HSCs. The treated cells are then infused back into the patient, where they colonise the bone marrow and eventually resume production of blood cells. Casgevy works by editing the BCL11A gene, which normally inhibits the production of hemoglobin F (fetal hemoglobin) in adults; the edit increases production of HbF, which is not prone to sickling. Lyfgenia introduces a new gene for T87Q-globin, which coexists with the sickling beta-globin but reduces the incidence of sickling.
Hematopoietic stem cell transplantation
Hematopoietic stem cell transplantation (HSCT) involves replacing the dysfunctional stem cells of a person with sickle cell disease with healthy cells from a well-matched donor.
Finding a well-matched donor is essential to the success of the procedure. Suitable donor sources include umbilical cord blood, human leukocyte antigen (HLA)-matched relatives, and HLA-matched donors unrelated to the person being treated. Risks associated with HSCT include graft-versus-host disease, failure of the graft, and other toxicity related to the transplant.
Prognosis
Sickle cell disease is most prevalent in sub-Saharan Africa. In areas without healthcare infrastructure, it is estimated that between 50% and 90% of children born with the disease die before the age of 5 years. In contrast, life expectancy in the United States in 2010–2020 was 43 years, and in the UK 67 years.
Epidemiology
The HbS gene can be found in every ethnic group. The highest frequency of sickle cell disease is found in tropical regions, particularly sub-Saharan Africa, tribal regions of India, and the Middle East. About 80% of sickle cell disease cases are believed to occur in sub-Saharan Africa. Migration of substantial populations from these high-prevalence areas to low-prevalence countries in Europe has increased dramatically in recent decades, and in some European countries sickle cell disease has now overtaken more familiar genetic conditions such as haemophilia and cystic fibrosis. In 2015, it resulted in about 114,800 deaths.
Sickle cell disease occurs more commonly among people whose ancestors lived in tropical and subtropical sub-Saharan regions where malaria is or was common. Where malaria is common, carrying a single sickle cell allele (the trait) confers a heterozygote advantage: people with one of the two alleles show less severe symptoms when infected with malaria. The condition is inherited in an autosomal recessive pattern, meaning both copies of the gene in each cell carry the mutation; the parents each carry one copy of the mutated gene, but typically do not show signs and symptoms of the condition.
Africa
Three-quarters of sickle cell cases occur in Africa. A WHO report from 2006 estimated that around 2% of newborns in Nigeria were affected by sickle cell anaemia, giving a total of 150,000 affected children born every year in Nigeria alone. The carrier frequency ranges between 10% and 40% across equatorial Africa, decreasing to 1–2% on the North African coast and less than 1% in South Africa. Uganda has the fifth-highest sickle cell disease burden in Africa; one study indicates that 20,000 babies are born there with sickle cell disease each year, with a sickle cell trait frequency of 13.3% and a disease frequency of 0.7%.
United States
The number of people with the disease in the United States is about 100,000 (one in 3,300), mostly Americans of sub-Saharan African descent. In the United States, about one in every 365 African-American children and one in every 16,300 Hispanic-American children have sickle cell anaemia. The life expectancy of men with SCD is approximately 42 years, while women live approximately six years longer. An additional two million people are carriers of the sickle cell trait. Most infants with SCD born in the United States are identified by routine neonatal screening; as of 2016, all 50 states include screening for sickle cell disease as part of their newborn screen. The newborn's blood is sampled through a heel prick and sent to a lab for testing. The baby must have been feeding for a minimum of 24 hours before the heel-prick test can be done.
Some states also require a second blood test when the baby is two weeks old, to confirm the results. Sickle cell anemia is the most common genetic disorder among African Americans: approximately 8% are carriers and 1 in 375 are born with the disease. Patient advocates for sickle cell disease have complained that it receives less government and private research funding than similar rare diseases such as cystic fibrosis, with researcher Elliott Vichinsky saying this shows racial discrimination or the role of wealth in health care advocacy. Overall, without considering race, approximately 1.5% of infants born in the United States carry at least one copy of the mutant (disease-causing) gene.
France
As a result of population growth in the African-Caribbean regions of overseas France and immigration from North and sub-Saharan Africa to mainland France, sickle cell disease has become a major health problem in France. It is now the most common genetic disease in the country, with an overall birth prevalence of one in 2,415 in mainland France, ahead of phenylketonuria (one in 10,862), congenital hypothyroidism (one in 3,132), congenital adrenal hyperplasia (one in 19,008) and cystic fibrosis (one in 5,014) for the same reference period. Since 2000, neonatal screening for SCD has been performed at the national level for all newborns defined as "at risk" for SCD based on ethnic origin, defined as those born to parents originating from sub-Saharan Africa, North Africa, the Mediterranean area (southern Italy, Greece, and Turkey), the Arabian peninsula, the French overseas islands, and the Indian subcontinent.
United Kingdom
In the United Kingdom, between 12,000 and 15,000 people are thought to have sickle cell disease, with an estimated 250,000 carriers of the condition in England alone. Because the number of carriers is only estimated, all newborn babies in the UK receive a routine blood test to screen for the condition. Since many adults in high-risk groups do not know whether they are carriers, pregnant women and both partners in a couple are offered screening so they can get counselling if they have the sickle cell trait. In addition, blood donors from high-risk groups are also screened to confirm whether they are carriers and whether their blood filters properly. Donors who are found to be carriers are informed, and their blood, while often used for those of the same ethnic group, is not used for those with sickle cell disease who require a blood transfusion.
West Asia
In Saudi Arabia, about 4.2% of the population carry the sickle cell trait and 0.26% have sickle cell disease. The highest prevalence is in the Eastern province, where approximately 17% of the population carry the gene and 1.2% have sickle cell disease. In 2005, Saudi Arabia introduced a mandatory premarital test, including haemoglobin electrophoresis, which aimed to decrease the incidence of SCD and thalassemia.
In Bahrain, a study published in 1998 that covered about 56,000 people in hospitals found that 2% of newborns had sickle cell disease, 18% of the surveyed people had the sickle cell trait, and 24% were carriers of the gene mutation causing the disease. The country began screening all pregnant women in 1992, and newborns were tested if the mother was a carrier. In 2004, a law was passed requiring couples planning to marry to undergo free premarital counseling. These programs were accompanied by public education campaigns.
India and Nepal
Sickle cell disease is common in some ethnic groups of central India, where the prevalence ranges from 9.4% to 22.2% in endemic areas of Madhya Pradesh, Rajasthan, and Chhattisgarh. It is also endemic among the Tharu people of Nepal and India, who nevertheless have a sevenfold lower rate of malaria despite living in a malaria-infested zone.
Caribbean islands
In Jamaica, 10% of the population carry the sickle cell gene, making it the most prevalent genetic disorder in the country.
History
The first modern report of sickle cell disease may have been in 1846, in a discussion of the autopsy of an executed runaway slave; the key finding was the absence of the spleen. Reportedly, African slaves in the United States exhibited resistance to malaria, but were prone to leg ulcers. The abnormal characteristics of the red blood cells, which later lent their name to the condition, were first described in 1910 by Ernest E. Irons (1877–1959), intern to the Chicago cardiologist and professor of medicine James B. Herrick (1861–1954). Irons saw "peculiar elongated and sickle-shaped" cells in the blood of Walter Clement Noel, a 20-year-old first-year dental student from Grenada, who had been admitted to the Chicago Presbyterian Hospital in December 1904 with anaemia. Noel was readmitted several times over the next three years for "muscular rheumatism" and "bilious attacks", but completed his studies and returned to the capital of Grenada (St. George's) to practice dentistry. He died of pneumonia in 1916 and is buried in the Catholic cemetery at Sauteurs in the north of Grenada.
Shortly after the report by Herrick, another case appeared in the Virginia Medical Semi-Monthly with the same title, "Peculiar Elongated and Sickle-Shaped Red Blood Corpuscles in a Case of Severe Anemia", describing a patient admitted to the University of Virginia Hospital on 15 November 1910. The name "sickle cell anemia" was first used by Verne Mason in 1922. Childhood problems related to sickle cell disease were not reported until the 1930s, despite the fact that they cannot have been uncommon in African-American populations.
Memphis physician Lemuel Diggs, a prolific researcher into sickle cell disease, first introduced the distinction between sickle cell disease and sickle cell trait in 1933, although the genetic characteristics were not elucidated until 1949, by James V. Neel and E.A. Beet. In the same year, Linus Pauling described the unusual chemical behaviour of haemoglobin S and attributed this to an abnormality in the molecule itself; the molecular change in HbS was described in 1956 by Vernon Ingram. The late 1940s and early 1950s saw further understanding of the link between malaria and sickle cell disease. In 1954, the introduction of haemoglobin electrophoresis allowed the discovery of particular subtypes, such as HbSC disease.
Large-scale natural history studies and further intervention studies were introduced in the 1970s and 1980s, leading to widespread use of prophylaxis against pneumococcal infections, among other interventions. Bill Cosby's Emmy-winning 1972 TV movie, To All My Friends on Shore, depicted the story of the parents of a child with sickle cell disease. The 1990s saw the development of hydroxycarbamide, and reports of cure through bone marrow transplantation appeared in 2007. Some old texts refer to the condition as drepanocytosis.
Society and culture
United States
Sickle cell disease is frequently contested as a disability.
Effective 15 September 2017, the U.S. Social Security Administration issued a Policy Interpretation Ruling providing background information on sickle cell disease and a description of how Social Security evaluates the disease during its adjudication process for disability claims.
In the US, there are stigmas surrounding SCD that discourage people with SCD from receiving necessary care. These stigmas mainly affect people of African American and Latin American ancestries, according to the National Heart, Lung, and Blood Institute. People with SCD experience the impact of these stigmas on multiple aspects of life, including social and psychological well-being. Studies have shown that those with SCD frequently feel they must keep their diagnosis a secret to avoid discrimination in the workplace and among peers in relationships. In the 1960s, the US government supported initiatives for workplace screening for genetic diseases in an attempt to protect people with SCD; the intention was that employees would not be placed in environments that could be harmful and trigger sickle cell crises.
Uganda
Uganda has the fifth-highest sickle cell disease (SCD) burden in the world. In Uganda, social stigma exists for those with sickle cell disease because of a lack of general knowledge of the disease. The gap in knowledge surrounding sickle cell disease is particularly noted among adolescents and young adults, owing to culturally sanctioned secrecy about the disease. While most people have heard of the disease in general terms, a large portion of the population is misinformed about how SCD is diagnosed or inherited, and those who are informed about the disease typically learned about it from family or friends rather than from health professionals. Failure to provide the public with information about sickle cell disease results in a population with a poor understanding of its causes, symptoms, and prevention. The physical and social differences that arise in those with sickle cell disease, such as jaundice, stunted physical growth, and delayed sexual maturity, can also make them targets of bullying, rejection, and stigma.
Rate of sickle cell disease in Uganda
The data compiled on sickle cell disease in Uganda has not been updated since the early 1970s; the deficiency of data is due to a lack of government research funds, even though Ugandans die daily from SCD. Available data show a sickle cell trait frequency of 20% of the population in Uganda, implying that 66 million people are at risk of having a child who has sickle cell disease. It is also estimated that about 25,000 Ugandans are born each year with SCD, that 80% of them do not live past five years of age, and that SCD contributes 25% of the child mortality rate in Uganda. The Bamba people of south-west Uganda carry the gene at a trait frequency of 45%, the highest recorded in the world. The Sickle Cell Clinic in Mulago is the only sickle cell disease clinic in the country and sees on average 200 patients a day.
Misconceptions about sickle cell disease
The stigma around the disease is particularly bad in regions of the country that are less affected. For example, Eastern Ugandans tend to be more knowledgeable about the disease than Western Ugandans, who are more likely to believe that sickle cell disease is a punishment from God or the result of witchcraft.
Other misconceptions about SCD include the belief that it is caused by environmental factors when, in reality, SCD is a genetic disease. There have been efforts throughout Uganda to address these social misconceptions. In 2013, the Uganda Sickle Cell Rescue Foundation was established to spread awareness of sickle cell disease and combat the social stigma attached to it. In addition to this organization's efforts, there is a need to include sickle cell disease education in existing community health education programs in order to reduce the stigmatization of the disease in Uganda.
Social isolation of people with sickle cell disease
The deeply rooted stigma of SCD causes families to hide their members' sick status for fear of being labeled, cursed, or left out of social events. Sometimes in Uganda, when it is confirmed that a family member has sickle cell disease, intimate relationships with all members of the family are avoided. The stigmatization and social isolation that people with sickle cell disease tend to experience are often the consequence of popular misconceptions that people with SCD should not socialize with those free of the disease. This mentality robs people with SCD of the right to participate freely in community activities like everyone else.
SCD-related stigma and social isolation in schools, especially, can make life extremely difficult for young people living with sickle cell disease. For school-aged children living with SCD, the stigma they face can lead to peer rejection: exclusion from social groups or gatherings. Peer rejection often causes the excluded individual emotional distress and may result in academic underperformance, avoidance of school, and occupational failure later in life. This social isolation is also likely to damage the self-esteem and overall quality of life of people with SCD.
Mothers of children with sickle cell disease tend to receive disproportionate amounts of stigma from their peers and family members. These women are often blamed for their child's diagnosis of SCD, especially if SCD is not present in earlier generations, because of the suspicion that the child's poor health was caused by the mother's failure to implement preventative health measures or to promote a healthy environment for her child. The reliance on theories about environmental factors to place blame on the mother reflects many Ugandans' poor knowledge of how the disease is acquired, as it is determined by genetics, not environment. Mothers of children with sickle cell disease are also often left with very limited resources to safeguard their futures against the stigma of SCD; this lack of access to resources results from their subordinate roles within familial structures, as well as the class disparities that hinder many mothers' ability to meet additional childcare costs and responsibilities. Women living with SCD who become pregnant often face extreme discrimination and discouragement in Uganda.
These women are frequently branded by their peers as irresponsible for having a baby, or even for engaging in sex, while living with sickle cell disease. The criticism and judgement these women receive, not only from healthcare professionals but also from their families, often leaves them feeling alone, depressed, anxious, ashamed, and with very little social support. Most pregnant women with SCD also go on to be single mothers, as it is common for them to be left by male partners who claim they were unaware of their partner's SCD status. Not only does this abandonment cause the women emotional distress, but the resulting low level of parental support is linked to depressive symptoms and an overall lower quality of life for the child once they are born.
United Kingdom
In 2021, many patients were found to be afraid to visit hospitals, such was the level of ignorance among staff, and so purchased pain relief to treat themselves outside the NHS. They often waited a long time for pain relief and were sometimes suspected of "drug-seeking" behaviour. Delays to treatment, failure to inform the hospital haematology team, and poor pain management had caused deaths. Specialist haematology staff preferred to work in bigger teaching hospitals, leading to shortages of expertise elsewhere. In 2021, the NHS initiated its first new treatment for sickle cell disease in 20 years: crizanlizumab, a drug given via infusion drips, which reduces the number of A&E visits by those affected. The treatment can be accessed, via consultants, at any of ten new hubs set up around the country. In the same year, however, an All-Party Parliamentary Group produced a report on sickle cell and thalassaemia entitled "No-one is listening".
Partly in response to this, on 19 June 2022, World Sickle Cell Day, the NHS launched a campaign called "Can you tell it's sickle cell?". The campaign had twin aims: to increase awareness of the key signs and symptoms of the blood disorder, so that people would be as alert to the signs of a sickle cell crisis as they are to an imminent heart attack or stroke, and to set up a new training programme to help paramedics, accident and emergency staff, carers, and the general public care effectively for those in crisis.
Biology and health sciences
Specific diseases
Health
846000
https://en.wikipedia.org/wiki/Environmental%20degradation
Environmental degradation
Environmental degradation is the deterioration of the environment through depletion of resources such as air, water and soil quality; the destruction of ecosystems; habitat destruction; the extinction of wildlife; and pollution. It is defined as any change or disturbance to the environment perceived to be deleterious or undesirable. Environmental degradation amplifies the impact of environmental issues, leaving lasting damage to the environment. It is one of the ten threats officially identified by the High-level Panel on Threats, Challenges and Change of the United Nations. The United Nations International Strategy for Disaster Reduction defines environmental degradation as "the reduction of the capacity of the environment to meet social and ecological objectives, and needs".
Environmental degradation comes in many types. When natural habitats are destroyed or natural resources are depleted, the environment is degraded. Degradation may be direct and readily visible, such as deforestation, or it may be caused by more indirect processes, such as the build-up of plastic pollution over time or the accumulation of greenhouse gases that pushes the climate system past tipping points. Efforts to counteract this problem include environmental protection and environmental resources management. Mismanagement that leads to degradation can also lead to environmental conflict, where communities organize in opposition to the forces that mismanaged the environment.
Biodiversity loss
Scientists assert that human activity has pushed the Earth into a sixth mass extinction event. The loss of biodiversity has been attributed in particular to human overpopulation, continued human population growth, and overconsumption of natural resources by the world's wealthy. A 2020 report by the World Wildlife Fund found that human activity – specifically overconsumption, population growth and intensive farming – has destroyed 68% of vertebrate wildlife since 1970. The Global Assessment Report on Biodiversity and Ecosystem Services, published by the United Nations' IPBES in 2019, posits that roughly one million species of plants and animals face extinction from anthropogenic causes, such as expanding human land use for industrial agriculture and livestock rearing, along with overfishing.
Since the establishment of agriculture over 11,000 years ago, humans have altered roughly 70% of the Earth's land surface, with the global biomass of vegetation reduced by half and terrestrial animal communities seeing a decline in biodiversity of greater than 20% on average. A 2021 study found that just 3% of the planet's terrestrial surface is ecologically and faunally intact, meaning areas with healthy populations of native animal species and little to no human footprint; many of these intact ecosystems are in areas inhabited by indigenous peoples. With 3.2 billion people affected globally, degradation affects over 30% of the world's land area and 40% of land in developing countries. The implications of these losses for human livelihoods and wellbeing have raised serious concerns.
With regard to the agriculture sector, for example, The State of the World's Biodiversity for Food and Agriculture, published by the Food and Agriculture Organization of the United Nations in 2019, states that "countries report that many species that contribute to vital ecosystem services, including pollinators, the natural enemies of pests, soil organisms and wild food species, are in decline as a consequence of the destruction and degradation of habitats, overexploitation, pollution and other threats" and that "key ecosystems that deliver numerous services essential to food and agriculture, including supply of freshwater, protection against hazards and provision of habitat for species such as fish and pollinators, are declining."
Impacts of environmental degradation on women's livelihoods
On the way biodiversity loss and ecosystem degradation affect livelihoods, the Food and Agriculture Organization of the United Nations also finds that, in contexts of degraded lands and ecosystems in rural areas, both girls and women bear heavier workloads. Women's livelihoods, health, food and nutrition security, access to water and energy, and coping abilities are all disproportionately affected by environmental degradation. Environmental pressures and shocks, particularly in rural areas, force women to deal with the aftermath, greatly increasing their load of unpaid care work. As limited natural resources grow even scarcer due to climate change, women and girls must also walk further to collect food, water or firewood, which heightens their risk of being subjected to gender-based violence: longer journeys to obtain primary necessities mean greater exposure to the risks of human trafficking, rape, and sexual violence.
Water degradation
One major component of environmental degradation is the depletion of fresh water on Earth. Only approximately 2.5% of all water on Earth is fresh water, the rest being salt water. Some 69% of that fresh water is frozen in the ice caps of Antarctica and Greenland, so only about 30% of the 2.5% (roughly 0.75% of all water on Earth) is available for consumption. Fresh water is an exceptionally important resource, since life on Earth is ultimately dependent on it: water transports nutrients, minerals and chemicals within the biosphere to all forms of life, sustains both plants and animals, and moulds the surface of the Earth through the transportation and deposition of materials.
The current top three uses of fresh water account for 95% of its consumption: approximately 85% is used for irrigation of farmland, golf courses, and parks; 6% for domestic purposes such as indoor bathing and outdoor garden and lawn use; and 4% for industrial purposes such as processing, washing, and cooling in manufacturing centres. It is estimated that one in three people worldwide already face water shortages, almost one-fifth of the world's population live in areas of physical water scarcity, and almost one quarter live in a developing country that lacks the necessary infrastructure to use water from available rivers and aquifers. Water scarcity is a growing problem due to many anticipated pressures, including population growth, increased urbanization, higher standards of living, and climate change. Industrial and domestic sewage, pesticides, fertilizers, plankton blooms, silt, oils, chemical residues, radioactive material, and other pollutants are some of the most frequent water pollutants.
These have a hugely negative impact on water and can cause degradation at various levels.
Climate change and temperature
Climate change affects the Earth's water supply in a large number of ways. The mean global temperature is predicted to rise in the coming years due to a number of forces affecting the climate, and the amount of atmospheric carbon dioxide (CO2) will rise; both of these will influence water resources. Evaporation depends strongly on temperature and moisture availability, which can ultimately affect the amount of water available to replenish groundwater supplies. Transpiration from plants can be affected by a rise in atmospheric CO2, which can decrease their use of water, but can also raise their use of water through possible increases in leaf area. Temperature rise can shorten the winter snow season and increase the intensity of snow melt, leading to sharper peak runoff, which affects soil moisture, flood and drought risks, and storage capacities, depending on the area.
Warmer winter temperatures cause a decrease in snowpack, which can result in diminished water resources during summer. This is especially important at mid-latitudes and in mountain regions that depend on glacial runoff to replenish their river systems and groundwater supplies, making these areas increasingly vulnerable to water shortages over time: an increase in temperature will initially produce a rapid rise in meltwater from glaciers in the summer, followed by a retreat of the glaciers and a decrease in the melt, and consequently in the water supply, each year as the glaciers shrink.
Thermal expansion of water and increased melting of oceanic glaciers from rising temperatures lead to a rise in sea level. This can affect the freshwater supply of coastal areas as well: as river mouths and deltas with higher salinity get pushed further inland, the intrusion of saltwater increases the salinity of reservoirs and aquifers. Sea-level rise may also be driven by the depletion of groundwater, as climate change can affect the hydrologic cycle in a number of ways. Uneven distributions of increased temperatures and increased precipitation around the globe result in water surpluses and deficits, but a global decrease in groundwater suggests a rise in sea level, even after meltwater and thermal expansion are accounted for, which can provide a positive feedback to the problems sea-level rise causes for the fresh-water supply.
A rise in air temperature results in a rise in water temperature, which is also very significant in water degradation, as warmer water is more susceptible to bacterial growth. An increase in water temperature can also greatly affect ecosystems, because of species' sensitivity to temperature, and by inducing changes in a body of water's self-purification system through decreased amounts of dissolved oxygen.
Climate change and precipitation
A rise in global temperatures is also predicted to correlate with an increase in global precipitation, but a decline in water quality is probable because of increased runoff, floods, soil erosion, and mass movement of land: while the water will carry more nutrients, it will also carry more contaminants.
While most of the attention on climate change is directed towards global warming and the greenhouse effect, some of the most severe effects of climate change are likely to come from changes in precipitation, evapotranspiration, runoff, and soil moisture. It is generally expected that, on average, global precipitation will increase, with some areas receiving increases and some decreases. Climate models show that while some regions should expect an increase in precipitation, such as the tropics and higher latitudes, other areas are expected to see a decrease, such as the subtropics; this will ultimately cause a latitudinal variation in water distribution. The areas receiving more precipitation are also expected to receive this increase during their winter and to become drier during their summer, creating even more variation in the distribution of precipitation. Naturally, the distribution of precipitation across the planet is very uneven, causing constant variations in water availability from place to place.
Changes in precipitation affect the timing and magnitude of floods and droughts, shift runoff processes, and alter groundwater recharge rates. Vegetation patterns and growth rates will be directly affected by shifts in precipitation amount and distribution, which will in turn affect agriculture as well as natural ecosystems. Decreased precipitation will deprive areas of water, causing water tables to fall and the reservoirs of wetlands, rivers, and lakes to empty. In addition, a possible increase in evaporation and evapotranspiration may result, depending on the accompanying rise in temperature. Groundwater reserves will be depleted, and the remaining water has a greater chance of being of poor quality from saline or contaminants on the land surface.
Climate change is also driving a very high rate of land degradation, causing increased desertification and nutrient-deficient soils. The threat of land degradation is growing and has been characterized as a major global threat: according to the Global Assessment of Land Degradation and Improvement (GLADA), a quarter of the land area around the globe can now be classed as degraded. Land degradation is estimated to affect the lives of 1.5 billion people, and 15 billion tons of fertile soil are lost every year to anthropogenic activities and climate change.
Population growth
The human population on Earth is expanding rapidly, which, together with even more rapid economic growth, is the main cause of the degradation of the environment. Humanity's appetite for resources is disrupting the environment's natural equilibrium. Production industries vent smoke into the atmosphere and discharge chemicals that pollute water resources. The smoke includes detrimental gases such as carbon monoxide and sulphur dioxide, and these pollutants accumulate in layers in the atmosphere. Organic compounds such as chlorofluorocarbons (CFCs) have generated an opening in the ozone layer, which admits higher levels of ultraviolet radiation, putting the globe at risk.
The available fresh water being affected by the climate is also being stretched across an ever-increasing global population. It is estimated that almost a quarter of the global population live in an area that is using more than 20% of its renewable water supply; water use will rise with population, while the water supply is also being aggravated by decreases in streamflow and groundwater caused by climate change.
Even though some areas may see an increase in freshwater supply from an unevenly distributed increase in precipitation, increased use of the water supply is expected. An increased population means increased withdrawals from the water supply for domestic, agricultural, and industrial uses, the largest of these being agriculture, believed to be the major non-climate driver of environmental change and water deterioration. The next 50 years will likely be the last period of rapid agricultural expansion, but the larger and wealthier population over this time will demand more agriculture. Population increase over the last two decades, at least in the United States, has also been accompanied by a shift from rural to urban areas, which concentrates the demand for water in certain areas and puts stress on the fresh water supply from industrial and human contaminants. Urbanization causes overcrowding and increasingly unsanitary living conditions, especially in developing countries, which in turn exposes an increasing number of people to disease. About 79% of the world's population live in developing countries, which lack access to sanitary water and sewer systems, giving rise to disease and deaths from contaminated water and increased numbers of disease-carrying insects.
Agriculture
Agriculture is dependent on available soil moisture, which is directly affected by climate dynamics, with precipitation being the input to this system and various processes being the outputs, such as evapotranspiration, surface runoff, drainage, and percolation into groundwater. Changes in climate, especially the changes in precipitation and evapotranspiration predicted by climate models, will directly affect soil moisture, surface runoff, and groundwater recharge. In areas with decreasing precipitation, as predicted by the climate models, soil moisture may be substantially reduced. With this in mind, agriculture in most areas already requires irrigation, which depletes fresh water supplies both through the physical use of the water and through the degradation agriculture causes to the water. Irrigation increases salt and nutrient content in areas that would not normally be affected, and damages streams and rivers through damming and the removal of water. Fertilizer enters both human and livestock waste streams that eventually enter groundwater, while nitrogen, phosphorus, and other chemicals from fertilizer can acidify both soils and water.
Certain agricultural demands may increase more than others as the global population grows wealthier, and rising demand for meat in particular is expected to help double global food demand by 2050, which directly affects the global supply of fresh water. Cows need water to drink: more if the temperature is high and humidity is low, and more if the production system is extensive, since finding food takes more effort. Water is needed in the processing of the meat, and also in the production of feed for the livestock. Manure can contaminate bodies of freshwater, and slaughterhouses, depending on how well they are managed, contribute waste such as blood, fat, hair, and other bodily contents to supplies of fresh water. The transfer of water from agricultural to urban and suburban use raises concerns about agricultural sustainability, rural socioeconomic decline, food security, an increased carbon footprint from imported food, and a decreased foreign trade balance.
The depletion of fresh water, in more specific and populated areas, increases fresh water scarcity among the population and makes populations susceptible to economic, social, and political conflict in a number of ways: rising sea levels force migration from coastal areas to areas farther inland, pushing populations closer together and breaching borders and other geographical patterns, while agricultural surpluses and deficits arising from the availability of water induce trade problems and affect the economies of certain areas. Climate change is an important cause of involuntary migration and forced displacement. According to the Food and Agriculture Organization of the United Nations, global greenhouse gas emissions from animal agriculture exceed those from transportation.
Water management
Water management is the process of planning, developing, and managing water resources, across all water applications, in terms of both quantity and quality. It is supported and guided by institutions, infrastructure, incentives, and information systems.
The depletion of fresh water has stimulated increased efforts in water management. While water management systems are often flexible, adaptation to new hydrologic conditions may be very costly. Preventative approaches are necessary to avoid the high costs of inefficiency and the need to rehabilitate water supplies, and innovations to decrease overall demand may be important in planning for water sustainability. Water supply systems, as they exist now, were based on the assumptions of the current climate, and were built to accommodate existing river flows and flood frequencies. Reservoirs are operated based on past hydrologic records, and irrigation systems on historical temperature, water availability, and crop water requirements; these may not be a reliable guide to the future. Re-examining engineering designs, operations, optimizations, and planning, as well as re-evaluating legal, technical, and economic approaches to managing water resources, are very important for the future of water management in response to water degradation. Another approach is water privatization; despite its economic and cultural effects, service quality and the overall quality of the water can be more easily controlled and distributed. A rational, sustainable approach requires limits to overexploitation and pollution, and efforts in conservation.
Consumption increases
As the world's population increases, so does its demand for natural resources, and with increased production comes more damage to the environments and ecosystems in which those resources are housed. According to the United Nations' population growth predictions, there could be up to 170 million more births by 2070. The need for more fuel, energy, food, buildings, and water sources grows with the number of people on the planet.
Deforestation
As the need for new agricultural areas and road construction increases, deforestation continues. Deforestation is the removal of a forest or stand of trees from land that is converted to non-forest use. Since the 1960s, nearly 50% of tropical forests have been destroyed, and the process is not limited to tropical forest areas: Europe's forests are also being damaged by livestock, insects, diseases, invasive species, and other human activities. Much of the world's terrestrial biodiversity is found in the different types of forests.
Tearing down these areas for increased consumption directly decreases the biodiversity of the plant and animal species native to them. Along with destroying habitats and ecosystems, shrinking the world's forests adds to the amount of carbon dioxide in the atmosphere: by removing forested areas, we reduce the carbon reservoirs available, leaving only the largest ones, the atmosphere and the oceans. While one of the biggest drivers of deforestation is agricultural use for the world's food supply, removing trees from landscapes also increases erosion rates, making it harder to produce crops in the affected soils.
Physical sciences
Earth science basics: General
Earth science
847308
https://en.wikipedia.org/wiki/Spelt
Spelt
Spelt (Triticum spelta), also known as dinkel wheat, is a species of wheat. It is a relict crop, eaten in Central Europe and northern Spain. It is high in protein and may be considered a health food. Spelt was cultivated from the Neolithic period onwards and was a staple food in parts of Europe from the Bronze Age to the Middle Ages. It is used in baking, and is made into bread, pasta, and beer. It is sometimes considered a subspecies of the closely related common wheat (T. aestivum), in which case its botanical name is Triticum aestivum subsp. spelta. It is a hexaploid, most likely a hybrid of wheat and emmer.
Description
Spelt is a species of Triticum, a large stout grass similar to bread wheat. Its flowering spike is more slender than that of bread wheat; when ripe, it bends somewhat from the vertical. The spike is roughly four-edged. The axis of the spike is brittle and divided into segments; it shatters into separate segments when fully ripe. Spelt differs from bread wheat in that each seed (a caryopsis, botanically a fruit with its wall fused to the single seed inside) stays fully encapsulated by its husk.
Confusion with other wheats
Especially in the context of descriptions of ancient cultures, the English word spelt has sometimes been used for grains that were not T. spelta, but other species of hulled wheat such as T. dicoccum (emmer wheat) or T. monococcum (einkorn wheat, also known as "little spelt", in French "petit épeautre"). This confusion may arise either from the mistranslation of words in other languages that can denote hulled wheat in general (such as Italian farro, which can denote any of emmer, spelt or einkorn; spelt is sometimes distinguished as "large farro", emmer as "medium farro", and einkorn as "little farro"), or from changing opinions about which actual species of wheat are described in texts written in ancient languages. Thus, the meaning of the ancient Greek word transliterated as zeiá is either uncertain or vague, and it has been argued to denote einkorn or emmer rather than spelt. Likewise, the ancient Roman grain often translated as "spelt" was in fact emmer. Similarly, references to the cultivation of spelt wheat in Biblical times in ancient Egypt and Mesopotamia are incorrect: they result from confusion with emmer wheat.
Evolution
Hybridisation and polyploidy
Like common wheat, spelt is a hexaploid wheat species, which means it has six sets of chromosomes. It is derived from a hybridisation event between a domesticated tetraploid wheat, such as durum wheat, and another grass species, which increased the number of sets of chromosomes. Genetic evidence indicates an initial hybridisation of a domesticated tetraploid wheat with the diploid wild goat-grass Aegilops tauschii; the arithmetic of this genome combination is sketched below. The evidence further shows that spelt could have arisen as the result of a second hybridisation, this time of bread wheat and emmer wheat, giving rise to European spelt.
The spelt genome continues to influence the breeding of modern hexaploid bread wheat through recent hybridisation. Spelt, being closely related to bread wheat, is a likely source of alleles to increase wheat's genetic diversity, and so improve crop yields. Analysis of the Oberkulmer cultivar of spelt found 40 alleles that could contribute to increased yield. Among the differences were spelt's larger grain size, greater fertility of tillers, and longer fruiting spikes. Among the alleles identified in spelt is an effector-triggered resistance gene for powdery mildew.
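As a brief illustration of the ploidy arithmetic described above, using the conventional wheat genome letters A, B and D (a standard labelling from wheat genetics, not given in the text itself):

$$\underbrace{\mathrm{AABB}}_{\substack{\text{tetraploid wheat}\\ 2n=4x=28}} \;\times\; \underbrace{\mathrm{DD}}_{\substack{\text{goat-grass }Ae.~tauschii\\ 2n=2x=14}} \;\longrightarrow\; \underbrace{\mathrm{AABBDD}}_{\substack{\text{hexaploid spelt / bread wheat}\\ 2n=6x=42}}$$

Each letter denotes one set of seven chromosomes, so the hexaploid carries six sets (6 × 7 = 42 chromosomes in somatic cells).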
History of cultivation Spelt has been cultivated since approximately 5000 BCE. Archaeological remains from the fifth millennium BCE have been found in the north of Iraq and in Transcaucasia, north-east of the Black Sea. Much more evidence comes from Europe. Remains of spelt have been found in Denmark, Germany, and Poland from the later Neolithic (dating from 2500–1700 BCE). Evidence of spelt has been found across central Europe from the Bronze Age. In the south of Germany and Switzerland in the Iron Age (750–15 BCE), it was a major type of wheat, while by 500 BCE, it had in addition become widespread in the south of Britain. There is evidence that spelt cultivation increased in Iron Age Britain as damp regions of the country with heavy soils tolerated by spelt were being settled. In the Middle Ages, spelt was cultivated in parts of Switzerland, Tyrol, Germany, northern France and the southern Low Countries. Spelt became a major crop in Europe in the 9th century CE, possibly because it is more suitable for storage and because, being husked, it is more adaptable to cold climates. Spelt was introduced to the United States in the 1890s. In the 20th century, spelt was replaced by bread wheat in almost all areas where it was still grown. The organic farming movement revived its popularity somewhat toward the end of the 20th century, as spelt requires less fertilizer. Since the beginning of the 21st century, spelt has become a common wheat substitute for making artisanal loaves of bread, pasta, and flakes. By 2014, the grain was popular in the UK, Kazakhstan, and Ukraine; shortages were reported even though spelt was grown in those countries. Nutrition A reference serving of uncooked spelt provides substantial food energy and is a rich source (20% or more of the Daily Value, DV) of protein, dietary fiber, several B vitamins, and numerous dietary minerals. Highest nutrient contents include manganese (143% DV), phosphorus (57% DV), and niacin (46% DV). Spelt contains about 70% total carbohydrates, including 11% as dietary fibre, and is low in fat. Spelt contains gluten, and is therefore suitable for baking, but this component makes it unsuitable for people with gluten-related disorders, such as celiac disease. In comparison to hard red winter wheat, spelt has a more soluble protein matrix characterized by a higher gliadin:glutenin ratio. Products In Germany and Austria, spelt loaves and rolls (Dinkelbrot) are widely available in bakeries. The unripe spelt grains are dried and eaten as Grünkern ("green grain"). In some countries, spelt may be considered a health food; for example, in Australia it is grown organically for the health food market. Dutch jenever makers sometimes distil with spelt, while beer brewed from spelt exists in Bavaria and Belgium.
Biology and health sciences
Grains
Plants
847879
https://en.wikipedia.org/wiki/Age%20of%20the%20universe
Age of the universe
In physical cosmology, the age of the universe is the time elapsed since the Big Bang: 13.8 billion years. Astronomers have two different approaches to determine the age of the universe. One is based on a particle physics model of the early universe called Lambda-CDM, matched to measurements of distant, and thus old, features like the cosmic microwave background. The other is based on the distance and relative velocity of a series or "ladder" of different kinds of stars, making it depend on local measurements late in the history of the universe. These two methods give slightly different values for the Hubble constant, which is then used in a formula to calculate the age. The range of the estimate is also within the range of the estimate for the oldest observed star in the universe. History In the 18th century, the concept that the age of Earth was millions, if not billions, of years began to appear. Nonetheless, most scientists throughout the 19th century and into the first decades of the 20th century presumed that the universe itself was in a steady state and eternal, possibly with stars coming and going but no changes occurring at the largest scale known at the time. The first scientific theories indicating that the age of the universe might be finite were the studies of thermodynamics, formalized in the mid-19th century. The concept of entropy dictates that if the universe (or any other closed system) were infinitely old, then everything inside would be at the same temperature, and thus there would be no stars and no life. No scientific explanation for this contradiction was put forth at the time. In 1915, Albert Einstein published the theory of general relativity and in 1917 constructed the first cosmological model based on his theory. In order to remain consistent with a steady-state universe, Einstein added what was later called a cosmological constant to his equations. Einstein's model of a static universe was proved unstable by Arthur Eddington. The first direct observational hint that the universe was not static but expanding came from the observations of 'recession velocities', mostly by Vesto M. Slipher, combined with distances to the 'nebulae' (galaxies) by Edwin Hubble in a work published in 1929. Earlier in the 20th century, Hubble and others resolved individual stars within certain nebulae, thus determining that they were galaxies, similar to, but external to, the Milky Way Galaxy. In addition, these galaxies were very large and very far away. Spectra taken of these distant galaxies showed a red shift in their spectral lines presumably caused by the Doppler effect, thus indicating that these galaxies were moving away from the Earth. In addition, the farther away these galaxies seemed to be (the dimmer they appeared), the greater was their redshift, and thus the faster they seemed to be moving away. This was the first direct evidence that the universe is not static but expanding. The first estimate of the age of the universe came from the calculation of when all of the objects must have started speeding out from the same point. Hubble's initial value for the universe's age was very low, as the galaxies were assumed to be much closer than later observations found them to be. The first reasonably accurate measurement of the rate of expansion of the universe, a numerical value now known as the Hubble constant, was made in 1958 by astronomer Allan Sandage. His measured value for the Hubble constant came very close to the value range generally accepted today.
Sandage, like Einstein, did not believe his own results at the time of discovery. Sandage proposed new theories of cosmogony to explain this discrepancy. This issue was more or less resolved by improvements in the theoretical models used for estimating the ages of stars. As of 2024, using the latest models for stellar evolution, the estimated age of the oldest known star is close to the age of the universe itself. The discovery of cosmic microwave background radiation announced in 1965 finally brought an effective end to the remaining scientific uncertainty over the expanding universe. It was a chance result from work by two teams less than 60 miles apart. In 1964, Arno Penzias and Robert Woodrow Wilson were trying to detect radio wave echoes with a supersensitive antenna. The antenna persistently detected a low, steady, mysterious noise in the microwave region that was evenly spread over the sky, and was present day and night. After testing, they became certain that the signal did not come from the Earth, the Sun, or the Milky Way galaxy, but from outside the Milky Way, but could not explain it. At the same time another team, Robert H. Dicke, Jim Peebles, and David Wilkinson, were attempting to detect low-level noise that might be left over from the Big Bang and could prove whether the Big Bang theory was correct. The two teams realized that the detected noise was in fact radiation left over from the Big Bang, and that this was strong evidence that the theory was correct. Since then, a great deal of other evidence has strengthened and confirmed this conclusion, and refined the estimated age of the universe to its current figure. The space probes WMAP, launched in 2001, and Planck, launched in 2009, produced data that determines the Hubble constant and the age of the universe independent of galaxy distances, removing the largest source of error. Definition Experimental observations confirm the expansion of the universe according to Hubble's law. Since the universe is expanding, the equation for that expansion can be "run backwards" to its starting point. The Lambda-CDM concordance model describes the expansion of the universe from a very uniform, hot, dense primordial state to its present state over a span of about 13.77 billion years of cosmological time. This model is well understood theoretically and strongly supported by recent high-precision astronomical observations such as WMAP. The International Astronomical Union uses the term "age of the universe" to mean the duration of the Lambda-CDM expansion, or equivalently, the time elapsed within the currently observable universe since the Big Bang. The expansion rate at any time is called the Hubble parameter $H(a)$, which is modeled as $H(a) = H_0 \sqrt{\Omega_m a^{-3} + \Omega_r a^{-4} + \Omega_\Lambda}$, where $a$ is the scale factor and $\Omega_m$, $\Omega_r$, and $\Omega_\Lambda$ are density parameters, with $\Omega_m$ for mass (baryons and cold dark matter), $\Omega_r$ for radiation (photons plus relativistic neutrinos), and $\Omega_\Lambda$ for dark energy. The value $H_0$, called the Hubble constant, is the Hubble parameter today ($a = 1$) and it has units of inverse time. The age of the universe is then defined as $t_0 = \int_0^1 \frac{da}{a\,H(a)} = \frac{1}{H_0}\int_0^1 \frac{da}{a\sqrt{\Omega_m a^{-3} + \Omega_r a^{-4} + \Omega_\Lambda}}$. The dimensionless integral is close to 1, so the Hubble time $1/H_0$ is close to the age of the universe.
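To make this definition concrete, the age integral can be evaluated numerically in a few lines of Python; this is a minimal sketch, assuming illustrative Planck-like parameter values (H0 = 67.7 km/s/Mpc, Omega_m = 0.31, Omega_r of roughly 9e-5) rather than any exact fit discussed later in this article:

import numpy as np
from scipy.integrate import quad

# Illustrative flat Lambda-CDM parameters (assumed, Planck-like values,
# not taken from this article's data tables):
H0 = 67.7                 # Hubble constant in km/s/Mpc
omega_m = 0.31            # matter: baryons + cold dark matter
omega_r = 9.0e-5          # radiation: photons + relativistic neutrinos
omega_l = 1.0 - omega_m - omega_r  # dark energy, assuming a flat universe

KM_PER_MPC = 3.0857e19    # kilometres per megaparsec
SEC_PER_GYR = 3.156e16    # seconds per billion years

def E(a):
    """Dimensionless Hubble rate H(a)/H0 in the Lambda-CDM model."""
    return np.sqrt(omega_m * a**-3 + omega_r * a**-4 + omega_l)

# Dimensionless age integral F = H0 * t0 (the correction factor)
F, _ = quad(lambda a: 1.0 / (a * E(a)), 1e-10, 1.0)

hubble_time_gyr = (KM_PER_MPC / H0) / SEC_PER_GYR  # 1/H0 in Gyr
print("correction factor F ~ %.3f" % F)                     # about 0.95
print("Hubble time 1/H0    ~ %.2f Gyr" % hubble_time_gyr)   # about 14.4 Gyr
print("age of universe t0  ~ %.2f Gyr" % (F * hubble_time_gyr))  # about 13.8 Gyr

Run as written, this prints an age of roughly 13.8 billion years, in line with the value quoted above; the dimensionless factor F it computes is the "integral close to 1" mentioned in the definition.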
Observational limits Since the universe must be at least as old as the oldest things in it, there are a number of observations that put a lower limit on the age of the universe; these include the temperature of the coolest white dwarfs, which gradually cool as they age, and the dimmest turnoff point of main sequence stars in clusters (lower-mass stars spend a greater amount of time on the main sequence, so the lowest-mass stars that have evolved away from the main sequence set a minimum age). Before the incorporation of dark energy in the model of cosmic expansion, the computed age was awkwardly less than the estimated ages of the oldest observed astronomical objects. This connection can be used in reverse: the oldest objects found constrain the values of the density parameter for dark energy. Cosmological parameters The problem of determining the age of the universe is closely tied to the problem of determining the values of the cosmological parameters. Today this is largely carried out in the context of the ΛCDM model, where the universe is assumed to contain normal (baryonic) matter, cold dark matter, radiation (including both photons and neutrinos), and a cosmological constant. The fractional contribution of each to the current energy density of the universe is given by the density parameters $\Omega_m$, $\Omega_r$, and $\Omega_\Lambda$. The full ΛCDM model is described by a number of other parameters, but for the purpose of computing its age these three, along with the Hubble parameter $H_0$, are the most important. With accurate measurements of these parameters, the age of the universe can be determined by using the Friedmann equation. This equation relates the rate of change in the scale factor $a(t)$ to the matter content of the universe. Turning this relation around, we can calculate the change in time per change in scale factor and thus calculate the total age of the universe by integrating this formula. The age $t_0$ is then given by an expression of the form $t_0 = \frac{1}{H_0} F(\Omega_m, \Omega_r, \Omega_\Lambda)$, where $H_0$ is the Hubble parameter and the function $F$ depends only on the fractional contribution to the universe's energy content that comes from various components. The first observation that one can make from this formula is that it is the Hubble parameter that controls the age of the universe, with a correction arising from the matter and energy content. So a rough estimate of the age of the universe comes from the Hubble time, the inverse of the Hubble parameter. With a value for $H_0$ of around 69 km/s/Mpc, the Hubble time evaluates to $1/H_0 \approx 14.2$ billion years. To get a more accurate number, the correction function $F$ must be computed. In general this must be done numerically, and the results for a range of cosmological parameter values are shown in the figure. For the Planck values $(\Omega_m, \Omega_\Lambda) = (0.3086, 0.6914)$, shown by the box in the upper left corner of the figure, this correction factor is about $F = 0.956$. For a flat universe without any cosmological constant, shown by the star in the lower right corner, $F = \tfrac{2}{3}$ is much smaller, and thus the universe is younger for a fixed value of the Hubble parameter. To make this figure, $\Omega_r$ is held constant (roughly equivalent to holding the cosmic microwave background temperature constant) and the curvature density parameter is fixed by the value of the other three. Apart from the Planck satellite, the Wilkinson Microwave Anisotropy Probe (WMAP) was instrumental in establishing an accurate age of the universe, though other measurements must be folded in to gain an accurate number.
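To see where the matter-only value of the correction factor comes from, the case $(\Omega_m, \Omega_\Lambda) = (1, 0)$ with radiation neglected can be worked out in closed form; this is a standard textbook calculation, included here only as a check on the figures quoted above:

$$ t_0 = \frac{1}{H_0}\int_0^1 \frac{da}{a\sqrt{a^{-3}}} = \frac{1}{H_0}\int_0^1 a^{1/2}\,da = \frac{1}{H_0}\left[\tfrac{2}{3}\,a^{3/2}\right]_0^1 = \frac{2}{3H_0}, $$

so $F = 2/3 \approx 0.67$ for a flat matter-only universe, well below the $F \approx 0.96$ of the Planck parameters, which is why such a universe is younger for the same Hubble constant.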
CMB measurements are very good at constraining the matter content $\Omega_m$ and the curvature parameter $\Omega_k$. They are not as sensitive to $\Omega_\Lambda$ directly, partly because the cosmological constant becomes important only at low redshift. The most accurate determinations of the Hubble parameter $H_0$ are currently believed to come from measured brightnesses and redshifts of distant Type Ia supernovae. Combining these measurements leads to the generally accepted value for the age of the universe quoted above. The cosmological constant makes the universe "older" for fixed values of the other parameters. This is significant, since before the cosmological constant became generally accepted, the Big Bang model had difficulty explaining why globular clusters in the Milky Way appeared to be far older than the age of the universe as calculated from the Hubble parameter and a matter-only universe. Introducing the cosmological constant allows the universe to be older than these clusters, as well as explaining other features that the matter-only cosmological model could not. From redshift observations To derive the age of the universe from redshift, numerical integration or a closed-form solution involving the special Gaussian hypergeometric function 2F1 may be used; the age at redshift $z$ is $t(z) = \int_z^\infty \frac{dz'}{(1+z')\,H(z')}$. Lookback time is the age of the observation subtracted from the present age of the universe: $t_L(z) = t_0 - t(z)$. WMAP NASA's Wilkinson Microwave Anisotropy Probe (WMAP) project's nine-year data release in 2012 estimated the age of the universe to be (13.772 ± 0.059) × 10⁹ years (13.772 billion years, with an uncertainty of plus or minus 59 million years). This age is based on the assumption that the project's underlying model is correct; other methods of estimating the age of the universe could give different ages. Assuming an extra background of relativistic particles, for example, can enlarge the error bars of the WMAP constraint by one order of magnitude. This measurement is made by using the location of the first acoustic peak in the microwave background power spectrum to determine the size of the decoupling surface (size of the universe at the time of recombination). The light travel time to this surface (depending on the geometry used) yields a reliable age for the universe. Assuming the validity of the models used to determine this age, the residual accuracy yields a margin of error near one per cent. Planck In 2015, the Planck Collaboration estimated the age of the universe to be about 13.8 billion years, slightly higher than but within the uncertainties of the earlier number derived from the WMAP data. Planck reported the age of the universe (in Ga) and the Hubble constant within 68% confidence limits for the base ΛCDM model, for six combinations of likelihoods: TT + lowP; TT + lowP + lensing; TT + lowP + lensing + ext; TT, TE, EE + lowP; TT, TE, EE + lowP + lensing; and TT, TE, EE + lowP + lensing + ext. Legend: TT, TE, EE: Planck cosmic microwave background (CMB) power spectra; lowP: Planck polarization data in the low-ℓ likelihood; lensing: CMB lensing reconstruction; ext: external data (BAO + JLA + H0), where BAO is baryon acoustic oscillations, JLA is the Joint Light-curve Analysis, and H0 is the Hubble constant. In 2018, the Planck Collaboration updated its estimate for the age of the universe to 13.787 ± 0.020 billion years. Assumption of strong priors Calculating the age of the universe is accurate only if the assumptions built into the models being used to estimate it are also accurate.
This is referred to as strong priors and essentially involves stripping the potential errors in other parts of the model to render the accuracy of actual observational data directly into the concluded result. The age given is thus accurate to the specified error, since this represents the error in the instrument used to gather the raw data input into the model. The age of the universe based on the best fit to Planck 2018 data alone is 13.787 ± 0.020 billion years. This number represents an accurate "direct" measurement of the age of the universe, in contrast to other methods that typically involve Hubble's law and the age of the oldest stars in globular clusters. It is possible to use different methods for determining the same parameter (in this case, the age of the universe) and arrive at different answers with no overlap in the "errors". To best avoid the problem, it is common to show two sets of uncertainties: one related to the actual measurement and the other related to the systematic errors of the model being used. An important component of the analysis of data used to determine the age of the universe (e.g. from Planck) therefore is to use a Bayesian statistical analysis, which normalizes the results based upon the priors (i.e. the model). This quantifies any uncertainty in the accuracy of a measurement due to a particular model used.
Physical sciences
Physical cosmology
Astronomy
848451
https://en.wikipedia.org/wiki/Feminine%20hygiene
Feminine hygiene
Feminine hygiene products are personal care products used for women's hygiene during menstruation, vaginal discharge, or other bodily functions related to the vulva and vagina. Products that are used during menstruation may also be called menstrual hygiene products, including menstrual pads, tampons, pantyliners, menstrual cups, menstrual sponges and period panties. Feminine hygiene products also include products meant to cleanse the vulva or vagina, such as douches, feminine wipes, and soap. Feminine hygiene products are either disposable or reusable. Sanitary napkins, tampons, and pantyliners are disposable feminine hygiene products. Menstrual cups, cloth menstrual pads, period panties, and sponges are reusable feminine hygiene products. Types Menstrual hygiene products Disposable: Menstrual pad: Made of absorbent material that is worn on the inside of underwear to absorb a heavier menstrual flow. They are made of cellulose and are available in many different absorbencies and lengths. They may have wings and/or an adhesive backing to hold the pad in place. Pantyliner: Similar to a menstrual pad, they are smaller, thinner and used for lighter periods, intermittent bleeding and vaginal discharge, or as a supplement to a tampon. Tampon: Inserted inside the vagina to absorb menstrual blood; can also be used while swimming; available in different levels of absorbency. Reusable: Menstrual cup: Made of silicone, natural rubber, or plastic; it is inserted inside the vagina to catch blood and/or uterine lining. Most are reusable: they are emptied when full and can be washed or boiled. Cloth menstrual pad: Worn inside underwear; can be made of materials such as cotton, flannel or terry cloth. Period underwear (also known as period panties): Can refer to either underwear that keeps pads in place, or absorbent underwear that can take the place of tampons and pads. Menstrual sponge: Inserted like a tampon or cup and worn inside the body. Towel: A large reusable piece of cloth, most often used at night (if nothing else is available), placed between the legs to absorb menstrual flow. Cleansing products Douches: A fluid used to flush out the inside of the vagina. Feminine wipes: A moist, sometimes scented cloth used to wipe the vulva. Feminine hygiene products that are meant to cleanse may lead to allergic reactions and irritation, as the vagina naturally flushes out bacteria. Many health professionals advise against douching because it can change the balance of vaginal flora and acidity. Research shows that the vagina's features allow it to naturally defend itself from harmful microorganisms. The innate defense mechanisms against vulvovaginal infections encompass the normal vaginal flora, acidic vaginal pH, and vaginal discharge. Resident bacteria play a crucial role in maintaining an acidic pH and outcompeting external pathogens for adhesion to the vaginal mucosa. Additionally, these bacteria defend against pathogens by generating antimicrobial compounds like bacteriocin. In vitro analysis of vaginal fluids from five women demonstrated activity against non-resident bacterial species, including Escherichia coli and Group B Streptococcus. This protection against Group B Streptococcus holds particular significance for pregnant women, as it commonly colonizes the vagina via the gastrointestinal tract, elevating the risk of preterm delivery, neonatal meningitis, and fetal death. Moreover, it may lead to asymptomatic bacteriuria, urinary tract infections, upper genital tract infections, and postpartum endometritis.
Environmental impact The environmental impact of each product varies enormously. In a lifetime, a menstruating person in developed countries can use between 5,000 and 15,000 pads and tampons, creating about 400 pounds of packaging. Worst in terms of environmental impact are tampons, as they usually contain plastic or a blend of cotton, rayon and synthetic fibers, are packaged in paper and plastic, and may have plastic strings and plastic applicators, which cannot be recycled. The most environmentally friendly option is considered to be the menstrual cup. Health issues The different products may carry different health risks, some of which might be proven, others speculative. Toxic shock syndrome: A rare illness that may occur when tampons are worn for long periods of time; it is not directly caused by tampon use itself but by toxins from bacteria of the Streptococcus pyogenes or Staphylococcus aureus type. Irritation: Can be caused by fragrances, neomycin (adhesive on pads), tea tree oil, benzocaine. Inflammation can also be a risk associated with some products. Yeast infection: An infection caused by a fungus. Bacterial vaginosis: Overgrowth of naturally occurring bacteria in the vagina that leads to a type of vaginal inflammation. The imbalance of bacteria from its natural state has been connected to bacterial vaginosis. Bacterial vaginosis manifests as a uniform white/gray layer on the vaginal walls and vulva, accompanied by a fishy odor and a vaginal pH exceeding 4.5. The challenge of recurrence arises from the adaptive mechanisms of the bacteria and the inadequate re-establishment of normal vaginal flora. Exposure to chemicals: Some period underwear companies (like Thinx, Ruby Love, and Knix) are facing class action lawsuits for products containing harmful toxins like per- and polyfluoroalkyl substances (PFAS), which may be linked to adverse health outcomes like cancer. The vulvovaginal area The vulva acts as the initial defense line, shielding the genital tract from infections. Often, contaminants accumulate in the folds of the vulva, and factors like increased moisture, sweating, menstruation, and hormonal fluctuations can impact the growth and balance of microbial species, potentially leading to odor and vulvovaginal infections. Distinct from other skin areas, vulvar skin exhibits variations in hydration, friction, permeability, and visible irritation. It is more susceptible to topical agents compared to forearm skin due to increased hydration, occlusion, and friction. The non-keratinized vulvar vestibule is likely more permeable than keratinized skin. Notably, genital skin is unique, with a thin stratum corneum and large hair follicles, making it easier for microbes and substances to permeate. The vagina, a fibromuscular canal extending from its external opening in the vulva to the cervix, is primarily composed of smooth muscle covered by a non-keratinized epithelial lining. This lining, until menopause, remains thick, kept moist by fluid from the vaginal wall and mucus from cervical and vestibular glands. Vaginal discharge From before reaching puberty until after menopause, women typically experience a natural and healthy occurrence of vaginal discharge. This discharge comprises bacteria, desquamated epithelial cells shedding from the vaginal walls, along with mucus and fluid (plasma) produced by the cervix and vagina. Throughout the menstrual cycle, the quantity and consistency of the discharge undergo variations.
At the start and end of the cycle, when estrogen levels are low, the discharge is dense, adhesive, and unwelcoming to sperm. As estrogen levels increase before ovulation, the discharge gradually becomes clearer, more liquid, and stretchier. Discrepancies in various ethnic groups Feminine hygiene presents discrepancies in various ethnic groups. Differences in feminine hygiene practices are often associated with varying cultural beliefs and religious customs. Research indicates that Afro-Caribbean immigrants, in contrast to Caucasian women, are more inclined to cleanse the vulva with bubble bath or antiseptic. This practice aligns with the belief in the necessity of thorough body cleansing for health and well-being. Among Orthodox Jewish women, a ritual bath known as mikveh is performed after menstrual periods or childbirth to achieve ritual purity. In the Muslim faith, both men and women partake in a bathing ritual called full ablution (ghusl) after sexual intercourse or menstruation as a purification practice. In regions like Mozambique and South Africa, certain women opt for internal cleansing of their vaginas using substances such as lemon juice, saltwater, or vinegar with the intention of eliminating vaginal discharge and "treating" sexually transmitted diseases. A research study involving 500 women in Iran revealed a notable association between bacterial vaginosis and inadequate menstrual and vaginal hygiene practices. Additionally, findings from a household survey conducted by Anand et al. indicated that women employing unhygienic methods during menstruation—excluding sanitary pads or locally prepared napkins—were 1.04 times more likely to report symptoms of reproductive tract infections. Furthermore, these women were 1.3 times more likely to experience abnormal vaginal discharge, encompassing symptoms like itching, vulvar irritation, lower abdomen pain, pain during urination or defecation, and low back pain. In another investigation, findings revealed that women engaging in the use of bubble bath on the vulva exhibited a twofold increase in the likelihood of experiencing bacterial vaginosis, in contrast to those who refrained from using this product. Furthermore, the occurrence of bacterial vaginosis was three times greater among women who applied antiseptic solutions to the vulva or within the vagina. Additionally, the frequency of bacterial vaginosis was six times higher in women utilizing a douching agent. When it comes to bacterial vaginosis, African American women are 2.9 times more likely to be diagnosed with bacterial vaginosis compared to women of European ancestry, possibly due to variations in their "normal" vaginal flora. Menstrual hygiene: the adolescent girl To observe menstrual hygiene in adolescent girls, a cross-sectional study conducted in a secondary school in Singur, West Bengal, involved 160 girls. The findings were published in 2008 and revealed that a significant portion of respondents became aware of menstruation before menarche, with mothers being the primary source of information. While the majority recognized menstruation as a physiological process, knowledge and usage of sanitary pads were limited. Most girls employed soap and water for cleaning purposes, and a considerable percentage observed various restrictions during menstruation. Among the 160 respondents, 108 (67%) girls were aware of menstruation before experiencing menarche. Mothers were the primary source of information for 60 (37.5%) girls.
A majority, 138 (86%), considered menstruation a physiological process. Only 78 (48%) girls were familiar with the use of sanitary pads during menstruation. In terms of practices, merely 18 (11.25%) girls used sanitary pads during menstruation. For cleaning purposes, 156 (97%) girls utilized both soap and water. Regarding restrictions, 136 (85%) girls adhered to various restrictions during menstruation. Society and culture According to the World Health Organization, as of 2018 there are about 1.9 billion women who are of reproductive age. In low-income countries, women's choices of menstrual hygiene materials are often limited by the costs, availability and social norms. Not only are women's choices limited but, according to the WHO and UNICEF, 780 million people do not have access to improved water sources and about 2.5 billion people lack access to improved sanitation. The lack of proper hygiene makes it harder for women to manage feminine hygiene. Costs and tax Tampon tax is a shorthand for sales tax charged on tampons, pads, and menstrual cups. The cost of these commercial products for menstrual management is considered to be unacceptably high for many low-income women. At least half a million women across the world do not have enough money to adequately afford these products. This can result in missing days of school or even dropping out entirely in the worst cases. In some jurisdictions, similar necessities like medical devices and toilet paper are not taxed. Several initiatives worldwide advocate eliminating the tax altogether. In some countries, such petitions have already been successful (for example, parts of the UK and the United States). The UK abolished the 5% minimum VAT imposed on sanitary products on 1 January 2021; previously, whilst the UK was a member of the European Union, EU law prohibited it, as any EU member state, from removing the 5% VAT imposed on sanitary products. Access to products in prisons The Federal Bureau of Prisons in the United States announced that women in its facilities would be guaranteed free menstrual pads and tampons. Section 411 of the First Step Act, which was passed on May 22, 2018, states, "The Director of the Bureau of Prisons shall make the healthcare products described in subsection (c) available to prisoners for free, in a quantity that is appropriate to the healthcare needs of each prisoner". Other social views Some girls and women may view tampons and menstrual cups as affecting their virginity even though they have not engaged in sexual intercourse. For those with autism, using pads before menstruation begins may help reduce sensory issues associated with menstrual hygiene products. Prior education and practice may help familiarize an individual with body changes and the process of using products associated with menstruation. Menstruation may occur despite paralysis; product use depends on the individual's personal preference. History In ancient Egypt, the Roman Empire and Indonesia, various natural materials – wool, grass, papyrus – were used as tampons. In ancient Japan, the tampon was made of paper, was held in place by a special binder, and was changed up to 12 times a day. In 18th-century Sweden, women in common society were not known to use feminine hygiene products, and visible period stains on clothing did not attract much attention. A common expression for menstruation during this period was to "wear the clothes" or to "wear the särk", a chemise-like undergarment.
It is likely that pieces of cloth or special rags were used to collect the menstrual fluid. However, there are few records of menstrual pads from the pre-industrial era. As artifacts, the various types of menstrual pads have not been preserved or survived in any particular sense, as the cloths used were discarded when they became worn out or the need for them ceased with menopause. However, as technology evolved, commercial hygiene products were introduced in the form of the menstrual pad, also known as the sanitary napkin. In Sweden, this happened at the end of the 19th century and has been linked to an increased focus on cleanliness, personal hygiene and health that occurred in the early part of the 20th century in the wake of urbanization. By the end of the 19th century, the first commercial sanitary napkin had also been introduced on the American market by Johnson & Johnson. It was a variant of the menstrual pad made of flannel. Advertisements and product information for sanitary pads are the primary source of knowledge about the history of sanitary pads. Early 20th-century commercial products Sanitary napkins could be made of woven cotton, knitted or crocheted and filled with rags. They could be homemade for personal use or mass-produced and sold, such as in towns that had a textile industry. The menstrual receptacle was the very earliest hygiene product to be launched as menstrual protection in Sweden, as early as 1879. It was made of rubber, like many of the hygiene articles of the time, and resembled a bowl-shaped casing that would sit on the outside of the abdomen. The menstrual receptacle is not considered to have gained much popularity. The first half of the 20th century also saw the development of early intravaginal menstrual products similar to the menstrual cup, with an early patent dating from 1903. Menstrual belts were another form that menstrual protection took, and they began to appear in the late 19th century. They were made so that the pad itself was contained in a special holder that was fastened around the waist with a belt. The pads in these designs are referred to as "suction pads" in Swedish patent documents, such as the "Suction pad for menstruation" patent from 1889. The price for a menstrual belt could be between 2.75 and 3.50 SEK, and pads had to be purchased for about 4–5 SEK, depending on the size of the pack. From the price information available, menstrual protection was likely a costly purchase that was not available to everyone. The sanitary belt can be seen as a modern version of the menstrual belt, but more like a girdle. The function of the belt is to hold the pad in place while giving the user greater freedom of motion. In Sweden, the product was introduced in the 1940s and was in use until the 1960s. In the 1970s, the adhesive strip on the underside of the pad was introduced, allowing it to be attached to the underwear and held in place without the use of a girdle, safety pin or belt. Historical types of menstrual hygiene products include the cup, the pad, the panty, sponges, sheep's wool, underwear, raw cotton, the sanitary belt and napkin holder, crocheted sanitary napkins, clouts, beltless napkins, baby diapers, adult diapers, plants, and, in some cases, nothing at all.
Biology and health sciences
Health and fitness
null
848457
https://en.wikipedia.org/wiki/Arapaima
Arapaima
The arapaima, pirarucu, or paiche is any large species of bonytongue in the genus Arapaima native to the Amazon and Essequibo basins of South America. Arapaima is the type genus of the subfamily Arapaiminae within the family Osteoglossidae. They are among the world's largest freshwater fish, reaching as much as 3 m (9.8 ft) in length. They are an important food fish. They have declined in the native range due to overfishing and habitat loss. In contrast, arapaima have been introduced to several tropical regions outside the native range (within South America and elsewhere), where they are sometimes considered invasive species. In Kerala, India, arapaima escaped from aquaculture ponds after floods in 2018. Its Portuguese name, pirarucu, derives from the Tupi language words pira and urucum, meaning "red fish". Arapaima was traditionally regarded as a monotypic genus, but later, several species were distinguished. As a consequence of this taxonomic confusion, most earlier studies were done using the name A. gigas, but this species is only known from old museum specimens and the exact native range is unclear. The regularly seen and studied species is A. arapaima, although a small number of A. leptosoma also have been recorded in the aquarium trade. The remaining species are virtually unknown: A. agassizii from old detailed drawings (the type specimen itself was lost during World War II bombings) and A. mapae from the type specimen. Taxonomy FishBase recognizes four species in the genus. In addition to these, evidence suggests that a fifth species, A. arapaima, should be recognized (this being the widespread, well-known species, otherwise included in A. gigas). Arapaima agassizii (Valenciennes, 1847) (Agassiz's arapaima) Arapaima gigas (Schinz, 1822) (pirarucu, arapaima) Arapaima leptosoma D. J. Stewart, 2013 (slender arapaima; Solimoes arapaima) Arapaima mapae (Valenciennes, 1847) (Mapa arapaima) These fish are widely dispersed and do not migrate, which leads scientists to suppose that more species are waiting to be discovered in the depths that the Amazon Basin harbors. Sites such as these offer the likelihood of diversity. Morphology Arapaima can reach lengths of more than 2 m (6 ft 7 in), in some exceptional cases even exceeding 2.5 m (8 ft 2 in) and over 100 kg (220 lb). The maximum recorded weight for the species is 200 kg (440 lb), while the longest recorded length verified was 3.07 m (10 ft 1 in). Anecdotal reports suggest that specimens as long as 4.57 m (15 ft 0 in) exist, but verification is deemed impossible, and thus considered questionable. As a result of overfishing, arapaima of more than 2 m (6 ft 7 in) are seldom found in the wild. The arapaima is torpedo-shaped, with large, blackish-green scales and red markings. It is streamlined and sleek, with its dorsal and anal fins set near its tail. Arapaima scales have a mineralised, hard, outer layer with a corrugated surface under which lie several layers of collagen fibres in a Bouligand-type arrangement. In a structure similar to plywood, the fibres in each successive layer are oriented at large angles to those in the previous layer, increasing toughness. The hard, corrugated surface of the outer layer and the tough internal collagen layers work synergistically to contribute to their ability to flex and deform while providing strength and protection—a solution that allows the fish to remain mobile while heavily armored. The arapaima has a fundamental dependence on surface air to breathe.
In addition to gills, it has a modified and enlarged swim bladder, composed of lung-like tissue, which enables it to extract oxygen from the air. Ecology The diet of the arapaima consists of fish, crustaceans, fruits, seeds, insects, and small land animals that walk near the shore. The fish is an air breather, using its labyrinth organ, which is rich in blood vessels and opens into the fish's mouth, an advantage in the oxygen-deprived water that is often found in the Amazon River. This fish is able to survive in oxbow lakes with dissolved oxygen as low as 0.5 ppm. In the wetlands of the Araguaia, one of the most important refuges for this species, it is the top predator in such lakes during the low-water season, when the lakes are isolated from the rivers and oxygen levels drop, rendering its prey lethargic and vulnerable. Arapaima may leap out of the water if they feel constrained by their environment or harassed. Life history/behavior Reproduction Due to its geographic range, the arapaima's lifecycle is greatly affected by seasonal flooding. Various pictures show slightly different coloring, owing to colour changes when the fish reproduce. The arapaima lays its eggs when water levels are low or beginning to rise. They build a nest about 50 cm (20 in) wide and 15 cm (5.9 in) deep, usually in muddy-bottomed areas. As the water rises, the eggs hatch and the offspring have the flood season from May to August in which to prosper, such that yearly spawning is regulated seasonally. Breeding The arapaima male is a mouthbrooder, like the related Osteoglossum genus, meaning the young are protected in his mouth until they are older. The female arapaima helps to protect the male and the young by circling them and fending off potential predators. In his book Three Singles to Adventure, naturalist Gerald Durrell reported that in British Guyana, female arapaima had been seen secreting a white substance from a gland in the head, and that their young were seemingly feeding on the substance. Evolution Some 23-million-year-old fossils of arapaima, or a very similar species, have been found in the Miocene Villavieja Formation of Colombia. Museum specimens are found in France, England, the United States, Brazil, Guyana, Ecuador and Peru. This makes them some of the oldest known species of freshwater fish. Relation to humans Arapaima is exploited in many ways by local human populations. Its tongue is thought to have medicinal qualities in South America. It is dried and combined with guarana bark, which is grated and mixed into water. Doses are given to kill intestinal worms. The bony tongue is used to scrape cylinders of dried guarana, an ingredient in some beverages, and the bony scales are used as nail files. Arapaima produce boneless steaks and are considered a delicacy. In the Amazon region, locals often salt and dry the meat, rolling it into a cigar-style package that is then tied and can be stored without rotting, which is important in a region with little refrigeration. Arapaima are referred to as the "cod of the Amazon", and can be prepared in the same way as traditional salted cod. Designers have begun using the skin of the arapaima as leather to make jackets, shoes, and handbags, and to cover furniture. In July 2009, villagers around Kenyir Lake in Terengganu, Malaysia, reported sighting A. gigas. The "Kenyir monster", or "dragon fish" as the locals call it, was claimed to be responsible for the mysterious drowning of two men on 17 June.
In August 2018, the India Times reported that arapaima had been spotted in the Chalakudy River, following floods in Kerala; their presence in India is attributed to illegal importation for fish farming. The arapaima is depicted on both the flag and the seal of the Department of Ucayali, Peru. Fishing Wild arapaima are harpooned or caught in large nets. Since the arapaima needs to surface to breathe air, traditional arapaima fishermen harpoon them and then club them to death. An individual fish can yield a substantial quantity of meat. The arapaima was introduced for fishing in Thailand and Malaysia. Fishing in Thailand can be done in several lakes, where large specimens are often landed and then released. On 14 May 2020, a 30 kg specimen was found floating in the river in the Angkor Wat area, Krovanh village, Sangkat Norkor Thom, Siem Reap, Cambodia; the locals said it was a rare fish, not commonly seen in this area. With catch-and-release, after the fish is landed it must be held for 5 minutes until it takes a breath. The fish has a large blood vessel running down its spine, so lifting the fish clear of the water for trophy shots can rupture this vessel, causing death. Aquaculture In 2013, Whole Foods began selling farm-raised arapaima in the United States as a cheaper alternative to halibut or Chilean sea bass. In Thailand, the only legal breeding farm is located in Tambon Phrong Maduea, Amphoe Mueang Nakhon Pathom, Nakhon Pathom Province. This farm has been approved by both the Department of Fisheries and CITES since early 2018, and has been exporting arapaima worldwide as an aquarium fish. Conservation Arapaima are particularly vulnerable to overfishing because of their size and because they must surface periodically to breathe. Some 7000 tons per year were taken from 1918 to 1924, the height of commercial arapaima fishing; demand led to farming of the fish by native ribeirinhos. As efforts at restricting catches were largely unsuccessful, arapaima fishing was banned outright in Brazil in 1996, due to declining populations. Indeed, a 2014 study found that the fish were depleted or overexploited at 93% of the sites examined and well-managed or unfished at only 7%; the fish appeared to be extirpated at 19% of these sites. The status of the arapaima population in the Amazon River Basin is unknown, hence it is listed on the IUCN Red List as data deficient. Conducting a population census in so large an area is difficult, as is monitoring catches in a trade that was once largely unregulated. Since 1999, both subsistence and commercial fishing have been permitted in specially designated areas under a sophisticated sustainable management strategy. This approach has led to massive recovery of once-depleted stocks; in a sampling of 10 areas conducted using traditional counting methods, the population was found to have grown from 2,500 in 1999 to over 170,000 in 2017. Colombia only bans fishing and consumption of the arapaima between October 1 and March 15, during the breeding season.
Biology and health sciences
Fishes
null
848460
https://en.wikipedia.org/wiki/Reaper-binder
Reaper-binder
The reaper-binder, or binder, is a farm implement that improved upon the simple reaper. The binder was invented in 1872 by Charles Baxter Withington, a jeweler from Janesville, Wisconsin. In addition to cutting the small-grain crop, a binder also 'binds' the stems into bundles or sheaves. These sheaves are usually then 'shocked' into A-shaped conical stooks, resembling small tipis, to allow the grain to dry for several days before being picked up and threshed. Withington's original binder used wire to tie the bundles. There were problems with using wire, and it was not long before William Deering invented a binder that successfully used twine and a knotter (invented in 1858 by John Appleby). Early binders were horse-drawn, their cutting and tying mechanisms powered by a bull wheel, which, through the traction of being pulled forward, creates the rotational force that operates the machine's mechanical components. Later models were tractor-drawn and some were tractor-powered. (This mechanical power transfer is commonly provided by a PTO, or power take-off, device.) Binders have a reel and a sickle bar, like a modern grain head for a combine harvester. The cut stems fall onto a canvas bed which conveys them to the binding mechanism. This mechanism bundles the stems of grain and ties the bundle with string to form a sheaf. Once tied, the sheaf is discharged from the side of the binder, to be picked up by the 'stookers'. With the replacement of the threshing machine by the combine harvester, the binder has become almost obsolete. Some grain crops such as oats are now cut and formed into windrows with a swather. With other grain crops, such as wheat, the grain is now mostly cut and threshed by a combine in a single operation, but the much lighter binder is still in use in small fields or mountain areas too steep or inaccessible for heavy combines. Reaper-binders were in wide use in the People's Republic of Poland, but farmers often could not operate them due to shortages of twine and a lack of replacement parts. This was such a regular occurrence that baling twine (sznurek do snopowiązałek) remains a symbol of the dysfunction of the communist economy in the cultural memory of Poland.
Technology
Farm and garden machinery
null
849220
https://en.wikipedia.org/wiki/Greenstone%20belt
Greenstone belt
Greenstone belts are zones of variably metamorphosed mafic to ultramafic volcanic sequences with associated sedimentary rocks that occur within Archaean and Proterozoic cratons between granite and gneiss bodies. The name comes from the green hue imparted by the colour of the metamorphic minerals within the mafic rocks: the typical green minerals are chlorite, actinolite, and other green amphiboles. Greenstone belts also often contain ore deposits of gold, silver, copper, zinc, and lead. A greenstone belt is typically several dozens to several thousands of kilometres long. Typically, a greenstone belt within the greater volume of otherwise homogeneous granite-gneiss within a craton contains a significantly larger degree of heterogeneity and complication, and forms a tectonic marker far more distinct than the much more voluminous and homogeneous granites. Additionally, a greenstone belt contains far more information on tectonic and metamorphic events, deformations, and paleogeologic conditions than the surrounding granites and gneisses, because the vast majority of greenstones are interpreted as altered basalts and other volcanic or sedimentary rocks. As such, understanding the nature and origin of greenstone belts is the most fruitful way of studying Archaean geological history. Nature and formation Greenstone belts have been interpreted as having formed at ancient oceanic spreading centers and island arc terranes. Greenstone belts are primarily formed of volcanic rocks, dominated by basalt, with minor sedimentary rocks inter-leaving the volcanic formations. Through time, the degree of sediment contained within greenstone belts has risen, and the amount of ultramafic rock (either as layered intrusions or as volcanic komatiite) has decreased. There is also a change in the structure and relationship of greenstone belts to their basements: in the Archaean, there is little clear relationship, if any, between the basalt-peridotite sheets of a greenstone belt and the granites they abut; in the Proterozoic, greenstone belts sit upon granite-gneiss basements and/or other greenstone belts; and in the Phanerozoic, clear examples of island arc volcanism, arc sedimentation and ophiolite sequences become more dominant. This change in nature is interpreted as a response to the maturing of plate tectonic processes throughout the Earth's geological history. Archaean plate tectonics did not take place on mature crust, and as such the presence of thrust-in allochthonous greenstone belts is expected. By the Proterozoic, magmatism was occurring around cratons with established sedimentary sources and little recycling of the crust, allowing preservation of more sediments. By the Phanerozoic, extensive continental cover and lower heat flow from the mantle have seen greater preservation of sediments and greater influence of continental masses. Greenstones, aside from containing basalts, also give rise to several types of metamorphic rocks which are used synonymously with 'metabasalt' et cetera; greenschist, whiteschist and blueschist are all terms spawned from the study of greenstone belts. The West African early Proterozoic greenstone belts are similar to the Archean greenstone belts. These similarities include a decrease in the amount of ultramafic and mafic rocks moving up the stratigraphic column, in addition to an increase in pyroclastic, felsic and/or andesitic rocks. Also, the rock successions tend to have clastics in the upper portion and tholeiitic suites in the lower.
Calc-alkaline dikes are common in these suites. Distribution Archaean greenstones are found in the Slave craton, northern Canada, the Pilbara craton and Yilgarn Craton, Western Australia, the Gawler Craton in South Australia, and in the Wyoming Craton in the US. Examples are found in South and Eastern Africa, namely the Kaapvaal craton and also in the cratonic core of Madagascar, as well as West Africa and Brazil, northern Scandinavia and the Kola Peninsula (see Baltic Shield). Proterozoic greenstones occur sandwiched between the Pilbara and Yilgarn cratons in Australia, and adjoining the Gawler Craton and within the extensive Proterozoic mobile belts of Australia, within West Africa, throughout the metamorphic complexes surrounding the Archaean core of Madagascar, the eastern United States, northern Canada and northern Scandinavia. The Abitibi greenstone belt in Ontario and Quebec is one of the largest Archean greenstone belts in the world. In Antarctica, the Proterozoic-aged Fisher Massif closely resembles the composition and structure of a greenstone belt. One of the best known greenstone belts in the world is the South African Barberton greenstone belt, where gold was first discovered in South Africa. The Barberton Greenstone Belt was first uniquely identified by Prof. Carl Anhaeusser at the University of the Witwatersrand, Johannesburg. His work in mapping and detailing the characteristics of the Barberton Greenstone Belt has been used as a primer for other greenstone belts around the world. He noted the existence of pillow lavas, indicating lava being rapidly cooled in water, as well as the spinifex textures created by crystals formed in rapidly cooling environments, namely water. List of greenstone belts Africa Barberton Greenstone Belt (South Africa) Giyani Greenstone Belt (South Africa) Pietersburg greenstone belt (South Africa) Gwanda Greenstone Belt (Zimbabwe) Kilimafedha Greenstone Belt (East Africa) Lake Victoria Greenstone Belt (East Africa) Boromo-Goren Greenstone Belt (West Africa) Hounde Greenstone Belt (Burkina Faso) Boromo Greenstone Belt (Burkina Faso) Asia Taishan greenstone belt (East Asia) Ramagiri-Hungund greenstone belt (Dharwar Craton), India Babina greenstone belt (Bundelkhand craton), India Mauranipur greenstone belt (Bundelkhand craton), India Iron Ore Group, East Indian Shield, India Europe Kostomuksha greenstone belt (Russia) Central Lapland Greenstone Belt (Lapland, Finland) Kuhmo-Suomussalmi Greenstone Belt, Finland Mauken greenstone belt (Norway) North America Abitibi greenstone belt (Quebec/Ontario, Canada) Bird River greenstone belt (Manitoba, Canada) Elmers Rock greenstone belt (Wyoming, USA) Flin Flon greenstone belt (Manitoba/Saskatchewan, Canada) Hope Bay greenstone belt (in the western portion of Kivalliq Region, Nunavut, Canada) Hunt River greenstone belt (Newfoundland and Labrador, Canada) Isua greenstone belt (Southwestern Greenland) Nuvvuagittuq greenstone belt (Quebec, Canada) Pecos greenstone belt (New Mexico, US) Rattlesnake Hills greenstone belt (Wyoming, US) Seminoe Mountains greenstone belt (Wyoming, US) South Pass greenstone belt (Wyoming, US) Temagami Greenstone Belt (Ontario, Canada) Yellowknife greenstone belt (Northwest Territories, Canada) South America Rio-das-Velhas greenstone belt (Minas Gerais, Brazil) Piumhi greenstone belt (Minas Gerais, Brazil) Rio-Itapicuru greenstone belt (Bahia, Brazil) Mundo Novo greenstone belt (Bahia, Brazil) Umburanas greenstone belt (Bahia, Brazil) Crixás greenstone belt (Goiás, Brazil) Faina greenstone belt
(Goiás, Brazil) Guarinos greenstone belt (Goiás, Brazil) Pilar-de-Goiás greenstone belt (Goiás, Brazil) Northern Guiana Shield greenstone belt (Venezuela, Guyana, Suriname and French Guiana) Australia Harris greenstone belt (Australia) Jack Hills greenstone belt (Australia) Norseman-Wiluna greenstone belt (Australia) Southern Cross greenstone belt (Australia) Yandal Greenstone Belt (Australia) Yalgoo-Singleton greenstone belt (Australia)
Physical sciences
Stratigraphy
Earth science
849375
https://en.wikipedia.org/wiki/Relativistic%20Heavy%20Ion%20Collider
Relativistic Heavy Ion Collider
The Relativistic Heavy Ion Collider (RHIC) is the first and one of only two operating heavy-ion colliders, and the only spin-polarized proton collider ever built. Located at Brookhaven National Laboratory (BNL) in Upton, New York, and used by an international team of researchers, it is the only operating particle collider in the US. By using RHIC to collide ions traveling at relativistic speeds, physicists study the primordial form of matter that existed in the universe shortly after the Big Bang. By colliding spin-polarized protons, the spin structure of the proton is explored. RHIC is as of 2019 the second-highest-energy heavy-ion collider in the world, with nucleon energies for collisions reaching 100 GeV for gold ions and 250 GeV for protons. On November 7, 2010, the Large Hadron Collider (LHC) began colliding heavy ions of lead at higher energies than RHIC. The LHC operating time for ions (lead–lead and lead–proton collisions) is limited to about one month per year. In 2010, RHIC physicists published results of temperature measurements from earlier experiments which concluded that temperatures in excess of 345 MeV (4 terakelvin or 7 trillion degrees Fahrenheit) had been achieved in gold ion collisions, and that these collision temperatures resulted in the breakdown of "normal matter" and the creation of a liquid-like quark–gluon plasma. In January 2020, the US Department of Energy Office of Science selected the eRHIC design for the future Electron–Ion Collider (EIC), building on the existing RHIC facility at BNL. The accelerator RHIC is an intersecting storage ring particle accelerator. Two independent rings (arbitrarily denoted as "Blue" and "Yellow") circulate heavy ions and/or polarized protons in opposite directions and allow a virtually free choice of colliding positively charged particles (the eRHIC upgrade will allow collisions between positively and negatively charged particles). The RHIC double storage ring is hexagonally shaped and has a circumference of 3,834 m (2.4 mi), with curved edges in which stored particles are deflected and focused by 1,740 superconducting magnets using niobium-titanium conductors. The dipole magnets operate at 3.45 T. The six interaction points (between the particles circulating in the two rings) are in the middle of the six relatively straight sections, where the two rings cross, allowing the particles to collide. The interaction points are enumerated by clock positions, with the injection near 6 o'clock. Two large experiments, STAR and sPHENIX, are located at 6 and 8 o'clock respectively. The sPHENIX experiment is the newest experiment to be built at RHIC, replacing PHENIX at the 8 o'clock position. A particle passes through several stages of boosters before it reaches the RHIC storage ring. The first stage for ions is the electron beam ion source (EBIS), while for protons, the linear accelerator (Linac) is used. As an example, gold nuclei leaving the EBIS have a kinetic energy of about 2 MeV per nucleon and an electric charge Q = +32 (32 of 79 electrons stripped from the gold atom). The particles are then accelerated by the Booster synchrotron to about 100 MeV per nucleon, which injects the projectile, now with Q = +77, into the Alternating Gradient Synchrotron (AGS), before they finally reach 8.86 GeV per nucleon and are injected in a Q = +79 state (no electrons left) into the RHIC storage ring over the AGS-to-RHIC Transfer Line (AtR). To date, the types of particle combinations explored at RHIC include p + p, d + Au, Cu + Cu, Cu + Au, Au + Au, and U + U, among others.
The projectiles typically travel at a speed of 99.995% of the speed of light. For collisions, the center-of-mass energy is typically 200 GeV per nucleon pair, and has been as low as 7.7 GeV per nucleon pair. An average luminosity of 2×10^26 cm−2·s−1 was targeted during planning. The average luminosity of the collider has since reached 87×10^26 cm−2·s−1, 44 times the design value. The heavy-ion luminosity is substantially increased through stochastic cooling. One unique characteristic of RHIC is its capability to collide polarized protons, and RHIC holds the record for the highest-energy polarized proton beams. Polarized protons are injected into RHIC and preserve their polarization throughout the energy ramp. This is a difficult task that is accomplished with the aid of corkscrew magnets called "Siberian snakes" (in RHIC, a chain of 4 helical dipole magnets), which induce the magnetic field to spiral along the direction of the beam. Run-9 achieved a center-of-mass energy of 500 GeV on 12 February 2009. In Run-13 the average luminosity of the collider reached 160×10^30 cm−2·s−1, with a time- and intensity-averaged polarization of 52%. AC dipoles have been used in non-linear machine diagnostics for the first time at RHIC. The experiments There are two detectors currently operating at RHIC: STAR (6 o'clock, near the AGS-to-RHIC Transfer Line) and sPHENIX (8 o'clock), the successor to PHENIX. PHOBOS (10 o'clock) completed its operation in 2005, and BRAHMS (2 o'clock) in 2006. Of the two larger detectors, STAR is aimed at the detection of hadrons with its system of time projection chambers covering a large solid angle in a conventionally generated solenoidal magnetic field, while PHENIX was further specialized in detecting rare and electromagnetic particles, using a partial-coverage detector system in a superconductively generated axial magnetic field. The smaller detectors had larger pseudorapidity coverage: PHOBOS had the largest pseudorapidity coverage of all the detectors and was tailored for bulk particle multiplicity measurements, while BRAHMS was designed for momentum spectroscopy, in order to study so-called "small-x" and saturation physics. There is an additional experiment, PP2PP (now part of STAR), investigating spin dependence in p + p scattering. The spokespersons for each of the experiments are: STAR: Frank Geurts (Rice University) and Lijuan Ruan (Brookhaven National Laboratory) PHENIX: Yasuyuki Akiba (Riken) sPHENIX: Gunter Roland (MIT) and David Morrison (Brookhaven National Laboratory) Current results For the experimental objective of creating and studying the quark–gluon plasma, RHIC has the unique ability to provide baseline measurements for itself. This baseline consists of lower-energy and lower-mass-number projectile combinations that do not produce the densities of 200 GeV Au + Au collisions, such as the p + p and d + Au collisions of the earlier runs and the Cu + Cu collisions of Run-5. Using this approach, important results of the measurement of the hot QCD matter created at RHIC are: Collective anisotropy, or elliptic flow. The major part of the particles with lower momenta is emitted following the angular distribution dn/dφ ∝ 1 + 2v2(pT)·cos(2φ), where pT is the transverse momentum and φ the azimuthal angle with respect to the reaction plane. This is a direct result of the elliptic shape of the nuclear overlap region during the collision and of the hydrodynamical properties of the matter created.
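As a minimal illustration of this decomposition (synthetic angles, not RHIC data; the reaction-plane angle is fixed at zero and v2 = 0.06 is an assumed, merely typical magnitude), the Python sketch below generates angles from the distribution above and recovers v2 from the event average of cos 2φ, using the identity v2 = ⟨cos 2(φ − Ψ)⟩ with Ψ = 0.

```python
import random, math

def sample_phi(v2, n):
    """Sample angles from dn/dphi ∝ 1 + 2*v2*cos(2*phi) by rejection sampling."""
    out = []
    fmax = 1.0 + 2.0 * v2  # maximum of the (unnormalized) distribution
    while len(out) < n:
        phi = random.uniform(-math.pi, math.pi)
        if random.uniform(0.0, fmax) < 1.0 + 2.0 * v2 * math.cos(2.0 * phi):
            out.append(phi)
    return out

true_v2 = 0.06                      # illustrative magnitude only
angles = sample_phi(true_v2, 200_000)
# v2 = <cos 2(phi - Psi)>, with the reaction-plane angle Psi set to 0 here
est_v2 = sum(math.cos(2 * p) for p in angles) / len(angles)
print(f"true v2 = {true_v2:.3f}, estimated v2 = {est_v2:.3f}")
```

In a real analysis the reaction plane is not known and must itself be estimated event by event, which is the main complication this toy example omits.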
Jet quenching. In heavy-ion collision events, a parton scattered with high transverse momentum pT can serve as a probe of the hot QCD matter, since it loses energy while traveling through the medium. Experimentally, the quantity RAA (A being the mass number), the quotient of the observed jet yield in A + A collisions and Nbin × the yield in p + p collisions, shows a strong damping with increasing A, which is an indication of the new properties of the hot QCD matter created. Color glass condensate saturation. The Balitsky–Fadin–Kuraev–Lipatov (BFKL) dynamics, which result from a resummation of large logarithmic terms ln(1/x) in deep inelastic scattering at small Bjorken x, saturate at a unitarity limit that scales with Npart/2, the number of participant nucleon pairs in a collision (as opposed to the number of binary collisions). The observed charged multiplicity follows the expected dependence on the number of participant pairs, supporting the predictions of the color glass condensate model. For a detailed discussion, see e.g. Dmitri Kharzeev et al.; for an overview of color glass condensates, see e.g. Iancu & Venugopalan. Particle ratios. The particle ratios predicted by statistical models allow the calculation of parameters such as the temperature at chemical freeze-out, Tch, and the hadron chemical potential. The experimental value of Tch varies somewhat with the model used, with most authors giving 160 MeV < Tch < 180 MeV, very close to the expected QCD phase-transition value of approximately 170 MeV obtained by lattice QCD calculations (see e.g. Karsch). While in the first years theorists were eager to claim that RHIC had discovered the quark–gluon plasma (e.g. Gyulassy & McLerran), the experimental groups were more careful not to jump to conclusions, citing various variables still in need of further measurement. The present results show that the matter created is a fluid with a viscosity near the quantum limit, but unlike a weakly interacting plasma (a widespread yet quantitatively unfounded expectation of how quark–gluon plasma would look). A recent overview of the physics results is provided by the RHIC Experimental Evaluations 2004, a community-wide effort of the RHIC experiments to evaluate the current data in the context of its implications for the formation of a new state of matter. These results are from the first three years of data collection at RHIC. New results were published in Physical Review Letters on February 16, 2010, reporting the first hints of symmetry transformations and suggesting that bubbles formed in the aftermath of the collisions created at RHIC may break parity symmetry, which normally characterizes interactions between quarks and gluons. The RHIC physicists announced new temperature measurements for these experiments of up to 4 trillion kelvins, the highest temperature ever achieved in a laboratory, described as a recreation of the conditions that existed during the birth of the Universe.
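The temperature figures quoted here and in the introduction are related by nothing more than Boltzmann's constant; the short Python check below is illustrative only and uses no input beyond the 345 MeV value quoted earlier.

```python
# Convert a temperature expressed as an energy (MeV) to kelvin and Fahrenheit.
K_B = 8.617333262e-11  # Boltzmann constant in MeV per kelvin

def mev_to_kelvin(t_mev):
    return t_mev / K_B

t_k = mev_to_kelvin(345.0)          # the RHIC measurement quoted above
t_f = t_k * 9.0 / 5.0 - 459.67      # kelvin to degrees Fahrenheit
print(f"345 MeV ~ {t_k:.2e} K ~ {t_f:.2e} °F")
# prints roughly 4.00e+12 K and 7.21e+12 °F,
# i.e. the "4 terakelvin or 7 trillion degrees Fahrenheit" quoted in the text
```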
Possible closure under flat nuclear science budget scenarios In late 2012, the Nuclear Science Advisory Committee (NSAC) was asked to advise the Department of Energy's Office of Science and the National Science Foundation on how to implement the nuclear science long-range plan written in 2007 if future nuclear science budgets continued to provide no growth over the following four years. In a narrowly decided vote, the NSAC committee showed a slight preference, based on non-science-related considerations, for shutting down RHIC rather than canceling the construction of the Facility for Rare Isotope Beams (FRIB). By October 2015, the budget situation had improved, and RHIC continued operations into the next decade. The future RHIC began operation in 2000 and until November 2010 was the highest-energy heavy-ion collider in the world. The Large Hadron Collider (LHC) of CERN, while used mainly for colliding protons, operates with heavy ions for about one month per year and has operated at about 25 times higher energies per nucleon. As of 2018, RHIC and the LHC are the only operating hadron colliders in the world. Due to its longer operating time per year, a greater number of colliding ion species and collision energies can be studied at RHIC. In addition, and unlike the LHC, RHIC is able to accelerate spin-polarized protons, which leaves RHIC as the world's highest-energy accelerator for studying spin-polarized proton structure. A major upgrade is the Electron–Ion Collider (EIC): the addition of an 18 GeV high-intensity electron beam facility, allowing electron–ion collisions. At least one new detector will have to be built to study the collisions. A review was published by Abhay Deshpande et al. in 2005. On January 9, 2020, it was announced by Paul Dabbar, undersecretary of the US Department of Energy Office of Science, that the BNL eRHIC design had been selected for the future Electron–Ion Collider (EIC) in the United States. In addition to the site selection, it was announced that the BNL EIC had acquired CD-0 (mission need) from the Department of Energy. Critics of high-energy experiments Before RHIC started operation, critics postulated that the extremely high energies could produce catastrophic scenarios, such as the creation of a black hole, a transition into a different quantum mechanical vacuum (see false vacuum), or the creation of strange matter that is more stable than ordinary matter. These hypotheses are complex, but many predict that the Earth would be destroyed in a time frame from seconds to millennia, depending on the theory considered. However, the fact that objects of the Solar System (e.g., the Moon) have been bombarded with cosmic particles of significantly higher energies than those of RHIC and other man-made colliders for billions of years, without any harm to the Solar System, was among the most striking arguments that these hypotheses were unfounded. The other main controversial issue was a demand by critics that physicists reasonably exclude the probability of such a catastrophic scenario. Physicists cannot demonstrate experimental and astrophysical constraints giving zero probability of catastrophic events, nor can they guarantee that tomorrow Earth will not be struck by a "doomsday" cosmic ray; they can only calculate an upper limit for the likelihood of such events, whose result would be the same destructive scenarios described above, although obviously not caused by humans. According to this argument of upper limits, RHIC would still modify the chance for the Earth's survival by an infinitesimal amount. Concerns were raised in connection with the RHIC particle accelerator, both in the mainstream media and in the popular science press. The risk of a doomsday scenario was indicated by Martin Rees, with respect to RHIC, as being at least a 1 in 50,000,000 chance. With regard to the production of strangelets, Frank Close, professor of physics at the University of Oxford, indicated that "the chance of this happening is like you winning the major prize on the lottery 3 weeks in succession; the problem is that people believe it is possible to win the lottery 3 weeks in succession."
After detailed studies, scientists reached such conclusions as "beyond reasonable doubt, heavy-ion experiments at RHIC will not endanger our planet" and that there is "powerful empirical evidence against the possibility of dangerous strangelet production". The debate started in 1999 with an exchange of letters in Scientific American between Walter L. Wagner and F. Wilczek, in response to a previous article by M. Mukerjee. The media attention unfolded with an article in the UK Sunday Times of July 18, 1999, by J. Leake, closely followed by articles in the U.S. media. The controversy mostly ended with the report of a committee convened by the director of Brookhaven National Laboratory, J. H. Marburger, ostensibly ruling out the catastrophic scenarios depicted. However, the report left open the possibility that relativistic cosmic-ray impact products might behave differently while transiting the Earth compared with "at rest" RHIC products, and the possibility that high-energy proton collisions with the Earth or the Moon might differ qualitatively from gold-on-gold collisions at RHIC. Wagner subsequently tried to stop full-energy collisions at RHIC by filing federal lawsuits in San Francisco and New York, but without success. The New York suit was dismissed on the technicality that the San Francisco suit was the preferred forum. The San Francisco suit was dismissed, but with leave to refile if additional information was developed and presented to the court. On March 17, 2005, the BBC published an article implying that researcher Horaţiu Năstase believes black holes have been created at RHIC. However, the original papers of H. Năstase and the New Scientist article cited by the BBC state that the correspondence of the hot dense QCD matter created at RHIC to a black hole holds only in the sense of a correspondence between QCD scattering in Minkowski space and scattering in the AdS5 × X5 space in AdS/CFT; in other words, the two are similar mathematically. Therefore, RHIC collisions might be described by mathematics relevant to theories of quantum gravity within AdS/CFT, but the described physical phenomena are not the same. Financial information The RHIC project was sponsored by the United States Department of Energy, Office of Science, Office of Nuclear Physics. It had a line-item budget of 616.6 million U.S. dollars. For fiscal year 2006, the operational budget was reduced by 16.1 million U.S. dollars from the previous year, to 115.5 million U.S. dollars. Though operation under the fiscal year 2006 federal budget cut was uncertain, a key portion of the operational cost (13 million U.S. dollars) was contributed privately by a group close to Renaissance Technologies of East Setauket, New York. In fiction The novel Cosm by the American author Gregory Benford takes place at RHIC. The science-fiction plot follows the main character Alicia Butterworth, a physicist at the BRAHMS experiment, after a new universe is created in RHIC by accident while running with uranium ions. The zombie-apocalypse novel The Rising by the American author Brian Keene referenced the media concerns about activating RHIC raised by the article in The Sunday Times of July 18, 1999, by J. Leake. As revealed very early in the story, side effects of the collider experiments at RHIC (located at "Havenbrook National Laboratories") were the cause of the zombie uprising in the novel and its sequel, City of the Dead.
In the Rayloria's Memory novel series by the American author Othello Gooden Jr., beginning with Raylorian Dawn, it is noted that each lunar city and its space station is powered by an RHIC.
Physical sciences
Devices
Physics
849376
https://en.wikipedia.org/wiki/Transition%20state
Transition state
In chemistry, the transition state of a chemical reaction is a particular configuration along the reaction coordinate. It is defined as the state corresponding to the highest potential energy along this reaction coordinate. It is often marked with the double dagger (‡) symbol. One example is the transition state that occurs during the SN2 reaction of bromoethane with a hydroxide anion. The activated complex of a reaction can refer either to the transition state or to other states along the reaction coordinate between reactants and products, especially those close to the transition state. According to transition state theory, once the reactants have passed through the transition state configuration, they always continue on to form products. History of concept The concept of a transition state has been important in many theories of the rates at which chemical reactions occur. This started with transition state theory (also referred to as activated complex theory), which was first developed around 1935 by Eyring, Evans and Polanyi, and which introduced basic concepts in chemical kinetics that are still used today. Explanation A collision between reactant molecules may or may not result in a successful reaction. The outcome depends on factors such as the relative kinetic energy, relative orientation, and internal energy of the molecules. Even if the collision partners form an activated complex, they are not bound to go on and form products; instead, the complex may fall apart back into the reactants. Observing transition states Because the structure of the transition state is a first-order saddle point on the potential energy surface, the population of species in a reaction that are exactly at the transition state is negligible. Since a first-order saddle point is a maximum along one direction of the potential energy surface, there is always a lower-energy structure into which the transition state can decay. This is sometimes expressed by stating that the transition state has a fleeting existence, with species maintaining the transition-state structure only on the time scale of vibrations of chemical bonds (femtoseconds). However, cleverly designed spectroscopic techniques can probe as close to the transition state as the time resolution of the technique allows. Femtochemical IR spectroscopy was developed for exactly that reason, and it is possible to probe molecular structure extremely close to the transition point. Often, reactive intermediates along the reaction coordinate lie not much lower in energy than a transition state, making it difficult to distinguish between the two. Determining the geometry of a transition state Transition state structures can be determined by searching for first-order saddle points on the potential energy surface (PES) of the chemical species of interest. A first-order saddle point is a critical point of index one, that is, a position on the PES corresponding to a minimum in all directions except one. This is further described in the article geometry optimization.
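A minimal numerical sketch of such a search, on a toy two-dimensional double-well surface rather than a real molecular PES (the function, starting point, and Newton-on-the-gradient scheme are all illustrative choices, not a production algorithm):

```python
import numpy as np

# Toy double-well surface f(x, y) = (x**2 - 1)**2 + y**2:
# minima at (±1, 0), first-order saddle at (0, 0).
def grad(p):
    x, y = p
    return np.array([4 * x * (x**2 - 1), 2 * y])

def hess(p):
    x, _ = p
    return np.array([[12 * x**2 - 4, 0.0],
                     [0.0, 2.0]])

# Newton's method on the gradient converges to the nearest stationary
# point, whether it is a minimum or a saddle.
p = np.array([0.2, 0.3])  # starting guess near the barrier top
for _ in range(50):
    p = p - np.linalg.solve(hess(p), grad(p))
    if np.linalg.norm(grad(p)) < 1e-10:
        break

eigenvalues = np.linalg.eigvalsh(hess(p))
index = int(np.sum(eigenvalues < 0))  # number of negative Hessian eigenvalues
print(f"stationary point: {p}, Hessian eigenvalues: {eigenvalues}")
print("first-order saddle (transition state)" if index == 1 else "not a TS")
```

Quantum-chemistry programs perform the analogous search on an ab initio PES with more robust algorithms (such as eigenvector following), but the classification step, counting negative Hessian eigenvalues, is the same.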
The Hammond–Leffler postulate The Hammond–Leffler postulate states that the structure of the transition state more closely resembles either the products or the starting material, depending on which is higher in enthalpy. A transition state that resembles the reactants more than the products is said to be early, while one that resembles the products more than the reactants is said to be late. Thus, the Hammond–Leffler postulate predicts a late transition state for an endothermic reaction and an early transition state for an exothermic reaction. A dimensionless reaction coordinate that quantifies the lateness of a transition state can be used to test the validity of the Hammond–Leffler postulate for a particular reaction. The structure–correlation principle The structure–correlation principle states that structural changes occurring along the reaction coordinate can reveal themselves in the ground state as deviations of bond distances and angles from their normal values. According to this theory, if one particular bond length increases on reaching the transition state, then this bond is already longer in the ground state than in a comparable compound that does not share this transition state. One demonstration of this principle is found in a pair of bicyclic compounds. One is a bicyclo[2.2.2]octene, which at 200 °C extrudes ethylene in a retro-Diels–Alder reaction. Compared to a reference compound that, lacking the alkene group, cannot undergo this reaction, the bridgehead carbon–carbon bond length of the octene is expected to be shorter if the theory holds, because on approaching the transition state this bond gains double-bond character. For these two compounds the prediction holds up, based on X-ray crystallography. Implications for enzymatic catalysis One way that enzymatic catalysis proceeds is by stabilizing the transition state through electrostatics. Lowering the energy of the transition state allows a greater fraction of the starting material to attain the energy needed to cross the activation barrier and proceed to product.
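The kinetic consequence of such transition-state stabilization follows directly from the Eyring equation, k = (kB·T/h)·exp(−ΔG‡/RT). The sketch below uses hypothetical barrier heights chosen only for illustration, not data for any real enzyme.

```python
import math

# Physical constants
K_B = 1.380649e-23    # Boltzmann constant, J/K
H   = 6.62607015e-34  # Planck constant, J*s
R   = 8.314462618     # gas constant, J/(mol*K)

def eyring_rate(delta_g_act, temperature=298.15):
    """First-order rate constant (1/s) from the Eyring equation.
    delta_g_act: free energy of activation in J/mol (illustrative values only)."""
    return (K_B * temperature / H) * math.exp(-delta_g_act / (R * temperature))

uncatalyzed = eyring_rate(90_000)  # hypothetical 90 kJ/mol barrier
catalyzed   = eyring_rate(60_000)  # enzyme assumed to stabilize the TS by 30 kJ/mol
print(f"uncatalyzed k = {uncatalyzed:.2e} 1/s")
print(f"catalyzed   k = {catalyzed:.2e} 1/s")
print(f"rate enhancement = {catalyzed / uncatalyzed:.1e}")  # ~1.8e5 at 298 K
```

Since RT·ln 10 ≈ 5.7 kJ/mol at room temperature, each 5.7 kJ/mol of transition-state stabilization buys roughly a factor of ten in rate, which is why even modest electrostatic stabilization translates into large catalytic accelerations.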
Physical sciences
Kinetics
Chemistry