Dataset columns: id (string, 2–8 characters), url (string, 31–117 characters), title (string, 1–71 characters), text (string, 153–118k characters), topic (4 classes), section (string, 4–49 characters), sublist (9 classes).
1419
https://en.wikipedia.org/wiki/Adiabatic%20process
Adiabatic process
An adiabatic process is a type of thermodynamic process that occurs without transferring heat between the thermodynamic system and its environment. Unlike an isothermal process, an adiabatic process transfers energy to the surroundings only as work and/or mass flow. As a key concept in thermodynamics, the adiabatic process supports the theory that explains the first law of thermodynamics. The opposite term to "adiabatic" is diabatic. Some chemical and physical processes occur too rapidly for energy to enter or leave the system as heat, allowing a convenient "adiabatic approximation". For example, the adiabatic flame temperature uses this approximation to calculate the upper limit of flame temperature by assuming combustion loses no heat to its surroundings. In meteorology, adiabatic expansion and cooling of moist air, which can be triggered by winds flowing up and over a mountain for example, can cause the water vapor pressure to exceed the saturation vapor pressure. Expansion and cooling beyond the saturation vapor pressure is often idealized as a pseudo-adiabatic process whereby excess vapor instantly precipitates into water droplets. The change in temperature of air undergoing pseudo-adiabatic expansion differs from that of air undergoing adiabatic expansion because latent heat is released by precipitation.

Description

A process without transfer of heat to or from a system, so that Q = 0, is called adiabatic, and such a system is said to be adiabatically isolated. The simplifying assumption frequently made is that a process is adiabatic. For example, the compression of a gas within a cylinder of an engine is assumed to occur so rapidly that on the time scale of the compression process, little of the system's energy can be transferred out as heat to the surroundings. Even though the cylinders are not insulated and are quite conductive, that process is idealized to be adiabatic. The same can be said to be true for the expansion process of such a system. The assumption of adiabatic isolation is useful and often combined with other such idealizations to calculate a good first approximation of a system's behaviour. For example, according to Laplace, when sound travels in a gas, there is no time for heat conduction in the medium, and so the propagation of sound is adiabatic. For such an adiabatic process, the modulus of elasticity (Young's modulus) can be expressed as E = γP, where γ is the ratio of specific heats at constant pressure and at constant volume (γ = Cp/Cv) and P is the pressure of the gas.

Various applications of the adiabatic assumption

For a closed system, one may write the first law of thermodynamics as ΔU = Q − W, where ΔU denotes the change of the system's internal energy, Q the quantity of energy added to it as heat, and W the work done by the system on its surroundings.

If the system has such rigid walls that work cannot be transferred in or out (W = 0), and the walls are not adiabatic and energy is added in the form of heat (Q > 0), and there is no phase change, then the temperature of the system will rise.

If the system has such rigid walls that pressure–volume work cannot be done, but the walls are adiabatic (Q = 0), and energy is added as isochoric (constant-volume) work in the form of friction or the stirring of a viscous fluid within the system (W < 0), and there is no phase change, then the temperature of the system will rise.
If the system walls are adiabatic (Q = 0) but not rigid (W ≠ 0), and, in a fictive idealized process, energy is added to the system in the form of frictionless, non-viscous pressure–volume work (W < 0), and there is no phase change, then the temperature of the system will rise. Such a process is called an isentropic process and is said to be "reversible". Ideally, if the process were reversed the energy could be recovered entirely as work done by the system. If the system contains a compressible gas and is reduced in volume, the uncertainty of the position of the gas is reduced, and seemingly would reduce the entropy of the system, but the temperature of the system will rise as the process is isentropic (ΔS = 0). Should the work be added in such a way that friction or viscous forces are operating within the system, then the process is not isentropic, and if there is no phase change, then the temperature of the system will rise, the process is said to be "irreversible", and the work added to the system is not entirely recoverable in the form of work.

If the walls of a system are not adiabatic, and energy is transferred in as heat, entropy is transferred into the system with the heat. Such a process is neither adiabatic nor isentropic, having Q > 0 and ΔS > 0 according to the second law of thermodynamics.

Naturally occurring adiabatic processes are irreversible (entropy is produced). The transfer of energy as work into an adiabatically isolated system can be imagined as being of two idealized extreme kinds. In one such kind, no entropy is produced within the system (no friction, viscous dissipation, etc.), and the work is only pressure–volume work (denoted by P dV). In nature, this ideal kind occurs only approximately because it demands an infinitely slow process and no sources of dissipation. The other extreme kind of work is isochoric work (dV = 0), for which energy is added as work solely through friction or viscous dissipation within the system. A stirrer that transfers energy to a viscous fluid of an adiabatically isolated system with rigid walls, without phase change, will cause a rise in temperature of the fluid, but that work is not recoverable. Isochoric work is irreversible. The second law of thermodynamics observes that a natural process, of transfer of energy as work, always consists at least of isochoric work and often both of these extreme kinds of work. Every natural process, adiabatic or not, is irreversible, with ΔS > 0, as friction or viscosity are always present to some extent.

Adiabatic compression and expansion

The adiabatic compression of a gas causes a rise in temperature of the gas. Adiabatic expansion against pressure, or a spring, causes a drop in temperature. In contrast, free expansion is an isothermal process for an ideal gas. Adiabatic compression occurs when the pressure of a gas is increased by work done on it by its surroundings, e.g., a piston compressing a gas contained within a cylinder and raising the temperature, where in many practical situations heat conduction through walls can be slow compared with the compression time. This finds practical application in diesel engines, which rely on the lack of heat dissipation during the compression stroke to elevate the fuel vapor temperature sufficiently to ignite it. Adiabatic compression occurs in the Earth's atmosphere when an air mass descends, for example, in a katabatic wind, foehn wind, or chinook wind flowing downhill over a mountain range. When a parcel of air descends, the pressure on the parcel increases.
Because of this increase in pressure, the parcel's volume decreases and its temperature increases as work is done on the parcel of air, thus increasing its internal energy, which manifests itself by a rise in the temperature of that mass of air. The parcel of air can only slowly dissipate the energy by conduction or radiation (heat), and to a first approximation it can be considered adiabatically isolated and the process an adiabatic process.

Adiabatic expansion occurs when the pressure on an adiabatically isolated system is decreased, allowing it to expand in size, thus causing it to do work on its surroundings. When the pressure applied on a parcel of gas is reduced, the gas in the parcel is allowed to expand; as the volume increases, the temperature falls as its internal energy decreases. Adiabatic expansion occurs in the Earth's atmosphere with orographic lifting and lee waves, and this can form pilei or lenticular clouds. Due in part to adiabatic expansion in mountainous areas, snowfall infrequently occurs in some parts of the Sahara desert. Adiabatic expansion does not have to involve a fluid. One technique used to reach very low temperatures (thousandths and even millionths of a degree above absolute zero) is adiabatic demagnetisation, where the change in magnetic field on a magnetic material is used to provide adiabatic expansion. Also, the contents of an expanding universe can be described (to first order) as an adiabatically expanding fluid. (See heat death of the universe.) Rising magma also undergoes adiabatic expansion before eruption, particularly significant in the case of magmas that rise quickly from great depths such as kimberlites. In the Earth's convecting mantle (the asthenosphere) beneath the lithosphere, the mantle temperature is approximately an adiabat. The slight decrease in temperature with shallowing depth is due to the decrease in pressure the shallower the material is in the Earth. Such temperature changes can be quantified using the ideal gas law, or the hydrostatic equation for atmospheric processes. In practice, no process is truly adiabatic. Many processes rely on a large difference in time scales of the process of interest and the rate of heat dissipation across a system boundary, and thus are approximated by using an adiabatic assumption. There is always some heat loss, as no perfect insulators exist.

Ideal gas (reversible process)

The mathematical equation for an ideal gas undergoing a reversible (i.e., no entropy generation) adiabatic process can be represented by the polytropic process equation P V^γ = constant, where P is pressure, V is volume, and γ is the adiabatic index or heat capacity ratio, defined as γ = Cp/Cv = (f + 2)/f. Here Cp is the specific heat for constant pressure, Cv is the specific heat for constant volume, and f is the number of degrees of freedom (3 for a monatomic gas, 5 for a diatomic gas or a gas of linear molecules such as carbon dioxide). For a monatomic ideal gas, γ = 5/3, and for a diatomic gas (such as nitrogen and oxygen, the main components of air), γ = 7/5. Note that the above formula is only applicable to classical ideal gases (that is, gases far above absolute zero temperature) and not Bose–Einstein or Fermi gases. One can also use the ideal gas law to rewrite the above relationship between P and V as P^(1−γ) T^γ = constant, where T is the absolute or thermodynamic temperature.
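Collected in display form, and reconstructed here from the relations quoted above together with the ideal gas law PV = nRT (the symbols follow the definitions just given), the reversible adiabatic relations for a classical ideal gas are:

```latex
% Reversible (isentropic) adiabatic relations for a classical ideal gas,
% gamma = C_p / C_v = (f + 2)/f is the heat capacity ratio.
\begin{aligned}
P V^{\gamma} &= \text{constant}, \\
T V^{\gamma - 1} &= \text{constant}, \\
P^{\,1-\gamma}\, T^{\gamma} &= \text{constant},
\qquad \gamma = \frac{C_p}{C_v} = \frac{f + 2}{f}.
\end{aligned}
```

The three forms are equivalent: each follows from P V^γ = constant by eliminating P or V with the ideal gas law.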
Example of adiabatic compression

The compression stroke in a gasoline engine can be used as an example of adiabatic compression. The model assumptions are: the uncompressed volume of the cylinder is one litre (1 L = 1000 cm3 = 0.001 m3); the gas within is air consisting of molecular nitrogen and oxygen only (thus a diatomic gas with 5 degrees of freedom, and so γ = 7/5 = 1.4); the compression ratio of the engine is 10:1 (that is, the 1 L volume of uncompressed gas is reduced to 0.1 L by the piston); and the uncompressed gas is at approximately room temperature and pressure (a warm room temperature of ~27 °C, or 300 K, and a pressure of 1 bar = 100 kPa, i.e. typical sea-level atmospheric pressure), so the adiabatic constant for this example is P1 V1^γ = 100,000 Pa × (0.001 m3)^1.4 ≈ 6.31 Pa·m^4.2.

The gas is now compressed to a 0.1 L (0.0001 m3) volume, which we assume happens quickly enough that no heat enters or leaves the gas through the walls. The adiabatic constant remains the same, but with the resulting pressure P2 unknown: P2 × (0.0001 m3)^1.4 ≈ 6.31 Pa·m^4.2. We can now solve for the final pressure: P2 = P1 (V1/V2)^γ = 100 kPa × 10^1.4 ≈ 2.51 × 10^6 Pa, or 25.1 bar. This pressure increase is more than a simple 10:1 compression ratio would indicate; this is because the gas is not only compressed, but the work done to compress the gas also increases its internal energy, which manifests itself by a rise in the gas temperature and an additional rise in pressure above what would result from a simplistic calculation of 10 times the original pressure.

We can solve for the temperature of the compressed gas in the engine cylinder as well, using the ideal gas law, PV = nRT (n is the amount of gas in moles and R the gas constant for that gas). Our initial conditions being 100 kPa of pressure, 1 L volume, and 300 K of temperature, our experimental constant is nR = P1V1/T1 = (100,000 Pa × 0.001 m3) / 300 K ≈ 0.333 Pa·m3/K. We know the compressed gas has V2 = 0.1 L and P2 ≈ 2.51 × 10^6 Pa, so we can solve for temperature: T2 = P2V2/(nR) ≈ (2.51 × 10^6 Pa × 0.0001 m3) / (0.333 Pa·m3/K) ≈ 753 K. That is a final temperature of 753 K, or 479 °C, or 896 °F, well above the ignition point of many fuels. This is why a high-compression engine requires fuels specially formulated to not self-ignite (which would cause engine knocking when operated under these conditions of temperature and pressure), or why a supercharger with an intercooler, which provides a pressure boost with a lower temperature rise, would be advantageous. A diesel engine operates under even more extreme conditions, with compression ratios of 16:1 or more being typical, in order to provide a very high gas pressure, which ensures immediate ignition of the injected fuel.
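As a quick sanity check of the figures above, the same numbers can be reproduced with a few lines of Python. This is a minimal sketch, not part of the original article; the variable names are arbitrary and the physics is only the ideal gas law plus P V^γ = constant.

```python
# Adiabatic compression of air in a gasoline-engine cylinder (sketch).
# Assumptions from the worked example above: diatomic ideal gas (gamma = 7/5),
# 1 L at 100 kPa and 300 K, compressed 10:1 with no heat exchange.

gamma = 7 / 5                      # heat capacity ratio for a diatomic ideal gas
P1, V1, T1 = 100e3, 1e-3, 300.0    # Pa, m^3, K
V2 = 1e-4                          # m^3 (10:1 compression ratio)

adiabatic_constant = P1 * V1**gamma      # ~6.31 Pa*m^4.2
P2 = adiabatic_constant / V2**gamma      # P * V**gamma is conserved
nR = P1 * V1 / T1                        # from the ideal gas law P V = n R T
T2 = P2 * V2 / nR                        # temperature after compression

print(f"P2 = {P2 / 1e5:.1f} bar")   # -> about 25.1 bar
print(f"T2 = {T2:.1f} K")           # -> about 753.6 K, the ~753 K quoted above
```

Equivalently, T2 = T1 (V1/V2)^(γ−1) gives the same temperature without computing the pressure first.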
Adiabatic free expansion of a gas

For an adiabatic free expansion of an ideal gas, the gas is contained in an insulated container and then allowed to expand into a vacuum. Because there is no external pressure for the gas to expand against, the work done by or on the system is zero. Since this process does not involve any heat transfer or work, the first law of thermodynamics then implies that the net internal energy change of the system is zero. For an ideal gas, the temperature remains constant because the internal energy only depends on temperature in that case. Since at constant temperature the entropy of an ideal gas increases with volume (by nR ln(V2/V1)), the entropy increases in this case; therefore this process is irreversible.

Derivation of P–V relation for adiabatic compression and expansion

The definition of an adiabatic process is that heat transfer to the system is zero, δQ = 0. Then, according to the first law of thermodynamics,

dU + δW = δQ = 0,     (a1)

where dU is the change in the internal energy of the system and δW is work done by the system. Any work (δW) done must be done at the expense of internal energy U, since no heat δQ is being supplied from the surroundings. Pressure–volume work δW done by the system is defined as

δW = P dV.     (a2)

However, P does not remain constant during an adiabatic process but instead changes along with V. It is desired to know how the values of dP and dV relate to each other as the adiabatic process proceeds. For an ideal gas (recall the ideal gas law PV = nRT) the internal energy is given by

U = α n R T = α P V,     (a3)

where α is the number of degrees of freedom divided by 2, R is the universal gas constant and n is the number of moles in the system (a constant). Differentiating equation (a3) yields

dU = α n R dT = α d(PV) = α (P dV + V dP).     (a4)

Equation (a4) is often expressed as dU = n Cv dT because Cv = αR. Now substitute equations (a2) and (a4) into equation (a1) to obtain −P dV = α P dV + α V dP; factorize −P dV:

−(α + 1) P dV = α V dP;

and divide both sides by PV:

−(α + 1) dV/V = α dP/P.

After integrating the left and right sides from V1 to V2 and from P1 to P2 and changing the sides respectively,

ln(P2/P1) = −((α + 1)/α) ln(V2/V1).

Exponentiate both sides, substitute (α + 1)/α with γ, the heat capacity ratio, and eliminate the negative sign to obtain

P2/P1 = (V1/V2)^γ.

Therefore, P1 V1^γ = P2 V2^γ, and in general P V^γ = constant.

Derivation of discrete formula and work expression

The change in internal energy of a system, measured from state 1 to state 2, is equal to

ΔU = α n R (T2 − T1) = α (P2V2 − P1V1).     (b1)

At the same time, the work done by the pressure–volume changes as a result of this process is equal to

W = ∫ from V1 to V2 of P dV.     (b2)

Since we require the process to be adiabatic, the following equation needs to be true:

ΔU + W = 0.     (b3)

By the previous derivation,

P V^γ = constant = P1 V1^γ.     (b4)

Rearranging (b4) gives P = P1 V1^γ / V^γ. Substituting this into (b2) gives

W = ∫ from V1 to V2 of P1 V1^γ V^(−γ) dV.

Integrating, we obtain the expression for work,

W = P1 V1^γ (V2^(1−γ) − V1^(1−γ)) / (1 − γ).

Substituting P2 V2^γ = P1 V1^γ in the second term,

W = (P2 V2 − P1 V1) / (1 − γ).

Using the ideal gas law and assuming a constant molar quantity (as often happens in practical cases),

W = n R (T2 − T1) / (1 − γ).

By the continuous formula, P2/P1 = (V2/V1)^(−γ), or P2 = P1 (V2/V1)^(−γ). Substituting into the previous expression for W,

W = P1 V1 ((V2/V1)^(1−γ) − 1) / (1 − γ).

Substituting this expression and (b1) in (b3) gives

α P1 V1 ((V2/V1)^(1−γ) − 1) + P1 V1 ((V2/V1)^(1−γ) − 1) / (1 − γ) = 0.

Simplifying, α(γ − 1) = 1, i.e. γ = (α + 1)/α = (f + 2)/f, consistent with the definition of the heat capacity ratio used above.

Graphing adiabats

An adiabat is a curve of constant entropy on a P–V diagram. Some properties of adiabats on a P–V diagram are as follows. These properties may be read from the classical behaviour of ideal gases, except in the region where PV becomes small (low temperature), where quantum effects become important. Every adiabat asymptotically approaches both the V axis and the P axis (just like isotherms). Each adiabat intersects each isotherm exactly once. An adiabat looks similar to an isotherm, except that during an expansion an adiabat loses more pressure than an isotherm, so it has a steeper inclination (more vertical). If isotherms are concave towards the north-east direction (45° from the V axis), then adiabats are concave towards the east north-east (31° from the V axis). If adiabats and isotherms are graphed at regular intervals of entropy and temperature, respectively (like altitude on a contour map), then as the eye moves towards the axes (towards the south-west), it sees the density of isotherms stay constant, but it sees the density of adiabats grow. The exception is very near absolute zero, where the density of adiabats drops sharply and they become rare (see Nernst's theorem).
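The statement above that an adiabat is steeper than an isotherm through the same point can be made quantitative with a one-line comparison of slopes; this is a reconstruction using only the relations already derived, not text from the original article:

```latex
% Slopes on the P-V diagram through a common point (P, V):
%   isotherm:  P V       = const  =>  (dP/dV)_T = -P/V
%   adiabat:   P V^gamma = const  =>  (dP/dV)_S = -gamma P/V
\left(\frac{\partial P}{\partial V}\right)_{S}
  = -\gamma\,\frac{P}{V}
  = \gamma \left(\frac{\partial P}{\partial V}\right)_{T},
\qquad \gamma > 1 .
```

Since γ > 1, at any point the adiabat falls γ times faster than the isotherm, which is why adiabats appear "more vertical" on the diagram.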
Etymology The term adiabatic () is an anglicization of the Greek term ἀδιάβατος "impassable" (used by Xenophon of rivers). It is used in the thermodynamic sense by Rankine (1866), and adopted by Maxwell in 1871 (explicitly attributing the term to Rankine). The etymological origin corresponds here to an impossibility of transfer of energy as heat and of transfer of matter across the wall. The Greek word ἀδιάβατος is formed from privative ἀ- ("not") and διαβατός, "passable", in turn deriving from διά ("through"), and βαῖνειν ("to walk, go, come"). Furthermore, in atmospheric thermodynamics, a diabatic process is one in which heat is exchanged. An adiabatic process is the opposite – a process in which no heat is exchanged. Conceptual significance in thermodynamic theory The adiabatic process has been important for thermodynamics since its early days. It was important in the work of Joule because it provided a way of nearly directly relating quantities of heat and work. Energy can enter or leave a thermodynamic system enclosed by walls that prevent mass transfer only as heat or work. Therefore, a quantity of work in such a system can be related almost directly to an equivalent quantity of heat in a cycle of two limbs. The first limb is an isochoric adiabatic work process increasing the system's internal energy; the second, an isochoric and workless heat transfer returning the system to its original state. Accordingly, Rankine measured quantity of heat in units of work, rather than as a calorimetric quantity. In 1854, Rankine used a quantity that he called "the thermodynamic function" that later was called entropy, and at that time he wrote also of the "curve of no transmission of heat", which he later called an adiabatic curve. Besides its two isothermal limbs, Carnot's cycle has two adiabatic limbs. For the foundations of thermodynamics, the conceptual importance of this was emphasized by Bryan, by Carathéodory, and by Born. The reason is that calorimetry presupposes a type of temperature as already defined before the statement of the first law of thermodynamics, such as one based on empirical scales. Such a presupposition involves making the distinction between empirical temperature and absolute temperature. Rather, the definition of absolute thermodynamic temperature is best left till the second law is available as a conceptual basis. In the eighteenth century, the law of conservation of energy was not yet fully formulated or established, and the nature of heat was debated. One approach to these problems was to regard heat, measured by calorimetry, as a primary substance that is conserved in quantity. By the middle of the nineteenth century, it was recognized as a form of energy, and the law of conservation of energy was thereby also recognized. The view that eventually established itself, and is currently regarded as right, is that the law of conservation of energy is a primary axiom, and that heat is to be analyzed as consequential. In this light, heat cannot be a component of the total energy of a single body because it is not a state variable but, rather, a variable that describes a transfer between two bodies. The adiabatic process is important because it is a logical ingredient of this current view. Divergent usages of the word adiabatic This present article is written from the viewpoint of macroscopic thermodynamics, and the word adiabatic is used in this article in the traditional way of thermodynamics, introduced by Rankine. 
It is pointed out in the present article that, for example, if a compression of a gas is rapid, then there is little time for heat transfer to occur, even when the gas is not adiabatically isolated by a definite wall. In this sense, a rapid compression of a gas is sometimes approximately or loosely said to be adiabatic, though often far from isentropic, even when the gas is not adiabatically isolated by a definite wall. Some authors, like Pippard, recommend using "adiathermal" to refer to processes where no heat-exchange occurs (such as Joule expansion), and "adiabatic" to reversible quasi-static adiathermal processes (so that rapid compression of a gas is not "adiabatic"). And Laidler has summarized the complicated etymology of "adiabatic". Quantum mechanics and quantum statistical mechanics, however, use the word adiabatic in a very different sense, one that can at times seem almost opposite to the classical thermodynamic sense. In quantum theory, the word adiabatic can mean something perhaps near isentropic, or perhaps near quasi-static, but the usage of the word is very different between the two disciplines. On the one hand, in quantum theory, if a perturbative element of compressive work is done almost infinitely slowly (that is to say quasi-statically), it is said to have been done adiabatically. The idea is that the shapes of the eigenfunctions change slowly and continuously, so that no quantum jump is triggered, and the change is virtually reversible. While the occupation numbers are unchanged, nevertheless there is change in the energy levels of one-to-one corresponding, pre- and post-compression, eigenstates. Thus a perturbative element of work has been done without heat transfer and without introduction of random change within the system. For example, Max Born writes On the other hand, in quantum theory, if a perturbative element of compressive work is done rapidly, it changes the occupation numbers and energies of the eigenstates in proportion to the transition moment integral and in accordance with time-dependent perturbation theory, as well as perturbing the functional form of the eigenstates themselves. In that theory, such a rapid change is said not to be adiabatic, and the contrary word diabatic is applied to it. Recent research suggests that the power absorbed from the perturbation corresponds to the rate of these non-adiabatic transitions. This corresponds to the classical process of energy transfer in the form of heat, but with the relative time scales reversed in the quantum case. Quantum adiabatic processes occur over relatively long time scales, while classical adiabatic processes occur over relatively short time scales. It should also be noted that the concept of 'heat' (in reference to the quantity of thermal energy transferred) breaks down at the quantum level, and the specific form of energy (typically electromagnetic) must be considered instead. The small or negligible absorption of energy from the perturbation in a quantum adiabatic process provides a good justification for identifying it as the quantum analogue of adiabatic processes in classical thermodynamics, and for the reuse of the term. In classical thermodynamics, such a rapid change would still be called adiabatic because the system is adiabatically isolated, and there is no transfer of energy as heat. The strong irreversibility of the change, due to viscosity or other entropy production, does not impinge on this classical usage. 
Thus for a mass of gas, in macroscopic thermodynamics, words are so used that a compression is sometimes loosely or approximately said to be adiabatic if it is rapid enough to avoid significant heat transfer, even if the system is not adiabatically isolated. But in quantum statistical theory, a compression is not called adiabatic if it is rapid, even if the system is adiabatically isolated in the classical thermodynamic sense of the term. The words are used differently in the two disciplines, as stated just above.
Physical sciences
Thermodynamics
Physics
1422
https://en.wikipedia.org/wiki/Amide
Amide
In organic chemistry, an amide, also known as an organic amide or a carboxamide, is a compound with the general formula R−C(=O)−NR′R″, where R, R′, and R″ represent any group, typically organyl groups or hydrogen atoms. The amide group is called a peptide bond when it is part of the main chain of a protein, and an isopeptide bond when it occurs in a side chain, as in asparagine and glutamine. It can be viewed as a derivative of a carboxylic acid (R−COOH) with the hydroxyl group (−OH) replaced by an amine group (−NR′R″); or, equivalently, an acyl (alkanoyl) group (R−C(=O)−) joined to an amine group. Common examples of amides are formamide (HC(=O)NH2), acetamide (CH3C(=O)NH2), benzamide (C6H5C(=O)NH2), and dimethylformamide (HC(=O)N(CH3)2). Some uncommon examples of amides are N-chloroacetamide (CH3C(=O)NHCl) and chloroformamide (ClC(=O)NH2). Amides are qualified as primary, secondary, and tertiary according to the number of carbon atoms bonded to the nitrogen atom.

Nomenclature

The core of amides is called the amide group (specifically, the carboxamide group). In the usual nomenclature, one adds the term "amide" to the stem of the parent acid's name. For instance, the amide derived from acetic acid is named acetamide (CH3CONH2). IUPAC recommends ethanamide, but this and related formal names are rarely encountered. When the amide is derived from a primary or secondary amine, the substituents on nitrogen are indicated first in the name. Thus, the amide formed from dimethylamine and acetic acid is N,N-dimethylacetamide (CH3CONMe2, where Me = CH3). Usually even this name is simplified to dimethylacetamide. Cyclic amides are called lactams; they are necessarily secondary or tertiary amides.

Applications

Amides are pervasive in nature and technology. Proteins and important plastics like nylons, aramids, Twaron, and Kevlar are polymers whose units are connected by amide groups (polyamides); these linkages are easily formed, confer structural rigidity, and resist hydrolysis. Amides include many other important biological compounds, as well as many drugs like paracetamol, penicillin and LSD. Low-molecular-weight amides, such as dimethylformamide, are common solvents.

Structure and bonding

The lone pair of electrons on the nitrogen atom is delocalized into the carbonyl group, thus forming a partial double bond between nitrogen and carbon. In fact the O, C and N atoms have molecular orbitals occupied by delocalized electrons, forming a conjugated system. Consequently, the three bonds of the nitrogen in amides are not pyramidal (as in the amines) but planar. This planar restriction prevents rotation about the C–N linkage and thus has important consequences for the mechanical properties of bulk material of such molecules, and also for the configurational properties of macromolecules built by such bonds. The inability to rotate distinguishes amide groups from ester groups, which allow rotation and thus create more flexible bulk material. The C−C(=O)NR2 core of amides is planar. The C=O distance is shorter than the C−N distance by almost 10%. The structure of an amide can also be described as a resonance between two alternative structures: neutral (A) and zwitterionic (B). It is estimated that for acetamide, structure A makes a 62% contribution to the structure, while structure B makes a 28% contribution (these figures do not sum to 100% because there are additional less-important resonance forms that are not depicted). There is also a hydrogen bond present between the hydrogen and nitrogen atoms in the active groups. Resonance is largely prevented in the very strained quinuclidone.
In their IR spectra, amides exhibit a moderately intense νCO band near 1650 cm−1. The energy of this band is about 60 cm−1 lower than for the νCO of esters and ketones. This difference reflects the contribution of the zwitterionic resonance structure. Basicity Compared to amines, amides are very weak bases. While the conjugate acid of an amine has a pKa of about 9.5, the conjugate acid of an amide has a pKa around −0.5. Therefore, compared to amines, amides do not have acid–base properties that are as noticeable in water. This relative lack of basicity is explained by the withdrawing of electrons from the amine by the carbonyl. On the other hand, amides are much stronger bases than carboxylic acids, esters, aldehydes, and ketones (their conjugate acids' pKas are between −6 and −10). The proton of a primary or secondary amide does not dissociate readily; its pKa is usually well above 15. Conversely, under extremely acidic conditions, the carbonyl oxygen can become protonated with a pKa of roughly −1. It is not only because of the positive charge on the nitrogen but also because of the negative charge on the oxygen gained through resonance. Hydrogen bonding and solubility Because of the greater electronegativity of oxygen than nitrogen, the carbonyl (C=O) is a stronger dipole than the N–C dipole. The presence of a C=O dipole and, to a lesser extent a N–C dipole, allows amides to act as H-bond acceptors. In primary and secondary amides, the presence of N–H dipoles allows amides to function as H-bond donors as well. Thus amides can participate in hydrogen bonding with water and other protic solvents; the oxygen atom can accept hydrogen bonds from water and the N–H hydrogen atoms can donate H-bonds. As a result of interactions such as these, the water solubility of amides is greater than that of corresponding hydrocarbons. These hydrogen bonds also have an important role in the secondary structure of proteins. The solubilities of amides and esters are roughly comparable. Typically amides are less soluble than comparable amines and carboxylic acids since these compounds can both donate and accept hydrogen bonds. Tertiary amides, with the important exception of N,N-dimethylformamide, exhibit low solubility in water. Reactions Amides do not readily participate in nucleophilic substitution reactions. Amides are stable to water, and are roughly 100 times more stable towards hydrolysis than esters. Amides can, however, be hydrolyzed to carboxylic acids in the presence of acid or base. The stability of amide bonds has biological implications, since the amino acids that make up proteins are linked with amide bonds. Amide bonds are resistant enough to hydrolysis to maintain protein structure in aqueous environments but are susceptible to catalyzed hydrolysis. Primary and secondary amides do not react usefully with carbon nucleophiles. Instead, Grignard reagents and organolithiums deprotonate an amide N-H bond. Tertiary amides do not experience this problem, and react with carbon nucleophiles to give ketones; the amide anion (NR2−) is a very strong base and thus a very poor leaving group, so nucleophilic attack only occurs once. When reacted with carbon nucleophiles, N,N-dimethylformamide (DMF) can be used to introduce a formyl group. Here, phenyllithium 1 attacks the carbonyl group of DMF 2, giving tetrahedral intermediate 3. Because the dimethylamide anion is a poor leaving group, the intermediate does not collapse and another nucleophilic addition does not occur. 
Upon acidic workup, the alkoxide is protonated to give 4, then the amine is protonated to give 5. Elimination of a neutral molecule of dimethylamine and loss of a proton give benzaldehyde, 6.

Hydrolysis

Amides hydrolyse in hot alkali as well as under strongly acidic conditions. Acidic conditions yield the carboxylic acid and the ammonium ion, while basic hydrolysis yields the carboxylate ion and ammonia. The protonation of the initially generated amine under acidic conditions and the deprotonation of the initially generated carboxylic acid under basic conditions render these processes non-catalytic and irreversible. Electrophiles other than protons react with the carbonyl oxygen. This step often precedes hydrolysis, which is catalyzed by both Brønsted acids and Lewis acids. Peptidase enzymes and some synthetic catalysts often operate by attachment of electrophiles to the carbonyl oxygen.

Synthesis

From carboxylic acids and related compounds: Amides are usually prepared by coupling a carboxylic acid with an amine. The direct reaction generally requires high temperatures to drive off the water: RCOOH + R′R″NH → RC(=O)NR′R″ + H2O. Esters are far superior substrates relative to carboxylic acids. Further "activated" derivatives, both acid chlorides (Schotten–Baumann reaction) and anhydrides (Lumière–Barbier method), react with amines to give amides, e.g. RC(=O)Cl + R′R″NH → RC(=O)NR′R″ + HCl. Peptide synthesis uses coupling agents such as HATU, HOBt, or PyBOP.

From nitriles: The hydrolysis of nitriles is conducted on an industrial scale to produce fatty amides. Laboratory procedures are also available.

Specialty routes: Many specialized methods also yield amides. A variety of reagents, e.g. tris(2,2,2-trifluoroethyl) borate, have been developed for specialized applications.
Physical sciences
Carbon–nitrogen bond
null
1453
https://en.wikipedia.org/wiki/ALGOL
ALGOL
ALGOL (; short for "Algorithmic Language") is a family of imperative computer programming languages originally developed in 1958. ALGOL heavily influenced many other languages and was the standard method for algorithm description used by the Association for Computing Machinery (ACM) in textbooks and academic sources for more than thirty years. In the sense that the syntax of most modern languages is "Algol-like", it was arguably more influential than three other high-level programming languages among which it was roughly contemporary: FORTRAN, Lisp, and COBOL. It was designed to avoid some of the perceived problems with FORTRAN and eventually gave rise to many other programming languages, including PL/I, Simula, BCPL, B, Pascal, Ada, and C. ALGOL introduced code blocks and the begin...end pairs for delimiting them. It was also the first language implementing nested function definitions with lexical scope. Moreover, it was the first programming language which gave detailed attention to formal language definition and through the Algol 60 Report introduced Backus–Naur form, a principal formal grammar notation for language design. There were three major specifications, named after the years they were first published: ALGOL 58 – originally proposed to be called IAL, for International Algebraic Language. ALGOL 60 – first implemented as X1 ALGOL 60 in 1961. Revised 1963. ALGOL 68 – introduced new elements including flexible arrays, slices, parallelism, operator identification. Revised 1973. ALGOL 68 is substantially different from ALGOL 60 and was not well received, so reference to "Algol" is generally understood to mean ALGOL 60 and its dialects. History ALGOL was developed jointly by a committee of European and American computer scientists in a meeting in 1958 at the Swiss Federal Institute of Technology in Zurich (cf. ALGOL 58). It specified three different syntaxes: a reference syntax, a publication syntax, and an implementation syntax, syntaxes that permitted it to use different keyword names and conventions for decimal points (commas vs periods) for different languages. ALGOL was used mostly by research computer scientists in the United States and in Europe; commercial applications were hindered by the absence of standard input/output facilities in its description, and the lack of interest in the language by large computer vendors (other than Burroughs Corporation). ALGOL 60 did however become the standard for the publication of algorithms and had a profound effect on future language development. John Backus developed the Backus normal form method of describing programming languages specifically for ALGOL 58. It was revised and expanded by Peter Naur for ALGOL 60, and at Donald Knuth's suggestion renamed Backus–Naur form. Peter Naur: "As editor of the ALGOL Bulletin I was drawn into the international discussions of the language and was selected to be member of the European language design group in November 1959. In this capacity I was the editor of the ALGOL 60 report, produced as the result of the ALGOL 60 meeting in Paris in January 1960." The following people attended the meeting in Paris (from 11 to 16 January): Friedrich Ludwig Bauer, Peter Naur, Heinz Rutishauser, Klaus Samelson, Bernard Vauquois, Adriaan van Wijngaarden, and Michael Woodger (from Europe) John Warner Backus, Julien Green, Charles Katz, John McCarthy, Alan Jay Perlis, and Joseph Henry Wegstein (from the US). 
Alan Perlis gave a vivid description of the meeting: "The meetings were exhausting, interminable, and exhilarating. One became aggravated when one's good ideas were discarded along with the bad ones of others. Nevertheless, diligence persisted during the entire period. The chemistry of the 13 was excellent." Legacy A significant contribution of the ALGOL 58 Report was to provide standard terms for programming concepts: statement, declaration, type, label, primary, block, and others. ALGOL 60 inspired many languages that followed it. Tony Hoare remarked: "Here is a language so far ahead of its time that it was not only an improvement on its predecessors but also on nearly all its successors." The Scheme programming language, a variant of Lisp that adopted the block structure and lexical scope of ALGOL, also adopted the wording "Revised Report on the Algorithmic Language Scheme" for its standards documents in homage to ALGOL. Properties ALGOL 60 as officially defined had no I/O facilities; implementations defined their own in ways that were rarely compatible with each other. In contrast, ALGOL 68 offered an extensive library of transput (input/output) facilities. ALGOL 60 allowed for two evaluation strategies for parameter passing: the common call-by-value, and call-by-name. Call-by-name has certain effects in contrast to call-by-reference. For example, without specifying the parameters as value or reference, it is impossible to develop a procedure that will swap the values of two parameters if the actual parameters that are passed in are an integer variable and an array that is indexed by that same integer variable. Think of passing a pointer to swap(i, A[i]) in to a function. Now that every time swap is referenced, it is reevaluated. Say i := 1 and A[i] := 2, so every time swap is referenced it will return the other combination of the values ([1,2], [2,1], [1,2] and so on). A similar situation occurs with a random function passed as actual argument. Call-by-name is known by many compiler designers for the interesting "thunks" that are used to implement it. Donald Knuth devised the "man or boy test" to separate compilers that correctly implemented "recursion and non-local references." This test contains an example of call-by-name. ALGOL 68 was defined using a two-level grammar formalism invented by Adriaan van Wijngaarden and which bears his name. Van Wijngaarden grammars use a context-free grammar to generate an infinite set of productions that will recognize a particular ALGOL 68 program; notably, they are able to express the kind of requirements that in many other programming language standards are labelled "semantics" and have to be expressed in ambiguity-prone natural language prose, and then implemented in compilers as ad hoc code attached to the formal language parser. Examples and portability Code sample comparisons ALGOL 60 (The way the bold text has to be written depends on the implementation, e.g. 'INTEGER'—quotation marks included—for integer. This is known as stropping.) procedure Absmax(a) Size:(n, m) Result:(y) Subscripts:(i, k); value n, m; array a; integer n, m, i, k; real y; comment The absolute greatest element of the matrix a, of size n by m, is copied to y, and the subscripts of this element to i and k; begin integer p, q; y := 0; i := k := 1; for p := 1 step 1 until n do for q := 1 step 1 until m do if abs(a[p, q]) > y then begin y := abs(a[p, q]); i := p; k := q end end Absmax Here is an example of how to produce a table using Elliott 803 ALGOL. 
FLOATING POINT ALGOL TEST' BEGIN REAL A,B,C,D' READ D' FOR A:= 0.0 STEP D UNTIL 6.3 DO BEGIN PRINT ,££L??' B := SIN(A)' C := COS(A)' PRINT PUNCH(3),,,A,B,C' END END' ALGOL 68 The following code samples are ALGOL 68 versions of the above ALGOL 60 code samples. ALGOL 68 implementations used ALGOL 60's approaches to stropping. In ALGOL 68's case tokens with the bold typeface are reserved words, types (modes) or operators. proc abs max = ([,]real a, ref real y, ref int i, k)real: comment The absolute greatest element of the matrix a, of size ⌈a by 2⌈a is transferred to y, and the subscripts of this element to i and k; comment begin real y := 0; i := ⌊a; k := 2⌊a; for p from ⌊a to ⌈a do for q from 2⌊a to 2⌈a do if abs a[p, q] > y then y := abs a[p, q]; i := p; k := q fi od od; y end # abs max # Note: lower (⌊) and upper (⌈) bounds of an array, and array slicing, are directly available to the programmer. floating point algol68 test: ( real a,b,c,d;   # printf – sends output to the file stand out. # # printf($p$); – selects a new page # printf(($pg$,"Enter d:")); read(d);   for step from 0 while a:=step*d; a <= 2*pi do printf($l$); # $l$ - selects a new line. # b := sin(a); c := cos(a); printf(($z-d.6d$,a,b,c)) # formats output with 1 digit before and 6 after the decimal point. # od ) Timeline: Hello world The variations and lack of portability of the programs from one implementation to another is easily demonstrated by the classic hello world program. ALGOL 58 (IAL) ALGOL 58 had no I/O facilities. ALGOL 60 family Since ALGOL 60 had no I/O facilities, there is no portable hello world program in ALGOL. The next three examples are in Burroughs Extended Algol. The first two direct output at the interactive terminal they are run on. The first uses a character array, similar to C. The language allows the array identifier to be used as a pointer to the array, and hence in a REPLACE statement. A simpler program using an inline format: An even simpler program using the Display statement. Note that its output would end up at the system console ('SPO'): An alternative example, using Elliott Algol I/O is as follows. Elliott Algol used different characters for "open-string-quote" and "close-string-quote", represented here by and . Below is a version from Elliott 803 Algol (A104). The standard Elliott 803 used five-hole paper tape and thus only had upper case. The code lacked any quote characters so £ (UK Pound Sign) was used for open quote and ? (Question Mark) for close quote. Special sequences were placed in double quotes (e.g. ££L?? produced a new line on the teleprinter). HIFOLKS' BEGIN PRINT £HELLO WORLD£L??' END' The ICT 1900 series Algol I/O version allowed input from paper tape or punched card. Paper tape 'full' mode allowed lower case. Output was to a line printer. The open and close quote characters were represented using '(' and ')' and spaces by %. 'BEGIN' WRITE TEXT('('HELLO%WORLD')'); 'END' ALGOL 68 ALGOL 68 code was published with reserved words typically in lowercase, but bolded or underlined. begin printf(($gl$,"Hello, world!")) end In the language of the "Algol 68 Report" the input/output facilities were collectively called the "Transput". Timeline of ALGOL special characters The ALGOLs were conceived at a time when character sets were diverse and evolving rapidly; also, the ALGOLs were defined so that only uppercase letters were required. 
1960: IFIP – The Algol 60 language and report included several mathematical symbols which are available on modern computers and operating systems, but, unfortunately, were unsupported on most computing systems at the time. For instance: ×, ÷, ≤, ≥, ≠, ¬, ∨, ∧, ⊂, ≡, ␣ and ⏨. 1961 September: ASCII – The ASCII character set, then in an early stage of development, had the \ (Back slash) character added to it in order to support ALGOL's Boolean operators /\ and \/. 1962: ALCOR – This character set included the unusual "᛭" runic cross character for multiplication and the "⏨" Decimal Exponent Symbol for floating point notation. 1964: GOST – The 1964 Soviet standard GOST 10859 allowed the encoding of 4-bit, 5-bit, 6-bit and 7-bit characters in ALGOL. 1968: The "Algol 68 Report" – used extant ALGOL characters, and further adopted →, ↓, ↑, □, ⌊, ⌈, ⎩, ⎧, ○, ⊥, and ¢ characters which can be found on the IBM 2741 keyboard with typeball (or golf ball) print heads inserted (such as the APL golf ball). These became available in the mid-1960s while ALGOL 68 was being drafted. The report was translated into Russian, German, French, and Bulgarian, and allowed programming in languages with larger character sets, e.g., Cyrillic alphabet of the Soviet BESM-4. All ALGOL's characters are also part of the Unicode standard and most of them are available in several popular fonts. 2009 October: Unicode – The ⏨ (Decimal Exponent Symbol) for floating point notation was added to Unicode 5.2 for backward compatibility with historic Buran programme ALGOL software. ALGOL implementations To date there have been at least 70 augmentations, extensions, derivations and sublanguages of Algol 60. The Burroughs dialects included special Bootstrapping dialects such as ESPOL and NEWP. The latter is still used for Unisys MCP system software.
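The call-by-name behaviour described under Properties (the swap(i, A[i]) problem) can be imitated outside ALGOL. The sketch below is a hypothetical illustration in Python, not ALGOL: each name parameter is passed as a pair of closures ("thunks") that re-evaluate the actual argument expression on every access. All names in it are invented for the example.

```python
# Hypothetical Python sketch: imitating ALGOL 60 call-by-name with thunks.
# Each name parameter is a (getter, setter) pair; the argument expression is
# re-evaluated at every use inside the procedure, as call-by-name requires.

def swap(get_x, set_x, get_y, set_y):
    temp = get_x()        # temp := x
    set_x(get_y())        # x := y   (argument expression re-evaluated here)
    set_y(temp)           # y := temp (and re-evaluated again here)

env = {"i": 1, "A": [0, 2, 9]}   # pretend 1-based array: A[1] = 2, A[2] = 9

def set_i(v): env["i"] = v
def set_Ai(v): env["A"][env["i"]] = v

# Attempt to swap i and A[i] "by name":
swap(lambda: env["i"], set_i,
     lambda: env["A"][env["i"]], set_Ai)

print(env["i"], env["A"])   # -> 2 [0, 2, 1]
# The swap fails: once i has become 2, the assignment meant for A[1] lands in A[2].
```

With call-by-value or call-by-reference the index expression would be evaluated only once and the swap would succeed; this is exactly the distinction that makes a general swap procedure impossible under ALGOL 60's call-by-name rule.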
Technology
"Historical" languages
null
1461
https://en.wikipedia.org/wiki/Apollo%20program
Apollo program
The Apollo program, also known as Project Apollo, was the United States human spaceflight program led by NASA, which succeeded in landing the first men on the Moon in 1969, following Project Mercury, which put the first Americans in space. It was conceived in 1960 as a three-person spacecraft during President Dwight D. Eisenhower's administration. Apollo was later dedicated to President John F. Kennedy's national goal for the 1960s of "landing a man on the Moon and returning him safely to the Earth" in an address to Congress on May 25, 1961. It was the third US human spaceflight program to fly, preceded by Project Gemini conceived in 1961 to extend spaceflight capability in support of Apollo. Kennedy's goal was accomplished on the Apollo 11 mission when astronauts Neil Armstrong and Buzz Aldrin landed their Apollo Lunar Module (LM) on July 20, 1969, and walked on the lunar surface, while Michael Collins remained in lunar orbit in the command and service module (CSM), and all three landed safely on Earth in the Pacific Ocean on July 24. Five subsequent Apollo missions also landed astronauts on the Moon, the last, Apollo 17, in December 1972. In these six spaceflights, twelve people walked on the Moon. Apollo ran from 1961 to 1972, with the first crewed flight in 1968. It encountered a major setback in 1967 when an Apollo 1 cabin fire killed the entire crew during a prelaunch test. After the first successful landing, sufficient flight hardware remained for nine follow-on landings with a plan for extended lunar geological and astrophysical exploration. Budget cuts forced the cancellation of three of these. Five of the remaining six missions achieved successful landings, but the Apollo 13 landing had to be aborted after an oxygen tank exploded en route to the Moon, crippling the CSM. The crew barely managed a safe return to Earth by using the lunar module as a "lifeboat" on the return journey. Apollo used the Saturn family of rockets as launch vehicles, which were also used for an Apollo Applications Program, which consisted of Skylab, a space station that supported three crewed missions in 1973–1974, and the Apollo–Soyuz Test Project, a joint United States-Soviet Union low Earth orbit mission in 1975. Apollo set several major human spaceflight milestones. It stands alone in sending crewed missions beyond low Earth orbit. Apollo 8 was the first crewed spacecraft to orbit another celestial body, and Apollo 11 was the first crewed spacecraft to land humans on one. Overall, the Apollo program returned of lunar rocks and soil to Earth, greatly contributing to the understanding of the Moon's composition and geological history. The program laid the foundation for NASA's subsequent human spaceflight capability and funded construction of its Johnson Space Center and Kennedy Space Center. Apollo also spurred advances in many areas of technology incidental to rocketry and human spaceflight, including avionics, telecommunications, and computers. Name The program was named after Apollo, the Greek god of light, music, and the Sun, by NASA manager Abe Silverstein, who later said, "I was naming the spacecraft like I'd name my baby." Silverstein chose the name at home one evening, early in 1960, because he felt "Apollo riding his chariot across the Sun was appropriate to the grand scale of the proposed program". The context of this was that the program focused at its beginning mainly on developing an advanced crewed spacecraft, the Apollo command and service module, succeeding the Mercury program. 
A lunar landing became the focus of the program only in 1961. Thereafter Project Gemini instead followed the Mercury program to test and study advanced crewed spaceflight technology. Background Origin and spacecraft feasibility studies The Apollo program was conceived during the Eisenhower administration in early 1960, as a follow-up to Project Mercury. While the Mercury capsule could support only one astronaut on a limited Earth orbital mission, Apollo would carry three. Possible missions included ferrying crews to a space station, circumlunar flights, and eventual crewed lunar landings. In July 1960, NASA Deputy Administrator Hugh L. Dryden announced the Apollo program to industry representatives at a series of Space Task Group conferences. Preliminary specifications were laid out for a spacecraft with a mission module cabin separate from the command module (piloting and reentry cabin), and a propulsion and equipment module. On August 30, a feasibility study competition was announced, and on October 25, three study contracts were awarded to General Dynamics/Convair, General Electric, and the Glenn L. Martin Company. Meanwhile, NASA performed its own in-house spacecraft design studies led by Maxime Faget, to serve as a gauge to judge and monitor the three industry designs. Political pressure builds In November 1960, John F. Kennedy was elected president after a campaign that promised American superiority over the Soviet Union in the fields of space exploration and missile defense. Up to the election of 1960, Kennedy had been speaking out against the "missile gap" that he and many other senators said had developed between the Soviet Union and the United States due to the inaction of President Eisenhower. Beyond military power, Kennedy used aerospace technology as a symbol of national prestige, pledging to make the US not "first but, first and, first if, but first period". Despite Kennedy's rhetoric, he did not immediately come to a decision on the status of the Apollo program once he became president. He knew little about the technical details of the space program, and was put off by the massive financial commitment required by a crewed Moon landing. When Kennedy's newly appointed NASA Administrator James E. Webb requested a 30 percent budget increase for his agency, Kennedy supported an acceleration of NASA's large booster program but deferred a decision on the broader issue. On April 12, 1961, Soviet cosmonaut Yuri Gagarin became the first person to fly in space, reinforcing American fears about being left behind in a technological competition with the Soviet Union. At a meeting of the US House Committee on Science and Astronautics one day after Gagarin's flight, many congressmen pledged their support for a crash program aimed at ensuring that America would catch up. Kennedy was circumspect in his response to the news, refusing to make a commitment on America's response to the Soviets. On April 20, Kennedy sent a memo to Vice President Lyndon B. Johnson, asking Johnson to look into the status of America's space program, and into programs that could offer NASA the opportunity to catch up. Johnson responded approximately one week later, concluding that "we are neither making maximum effort nor achieving results necessary if this country is to reach a position of leadership." His memo concluded that a crewed Moon landing was far enough in the future that it was likely the United States would achieve it first. 
On May 25, 1961, twenty days after the first US crewed spaceflight Freedom 7, Kennedy proposed the crewed Moon landing in a Special Message to the Congress on Urgent National Needs: NASA expansion At the time of Kennedy's proposal, only one American had flown in space—less than a month earlier—and NASA had not yet sent an astronaut into orbit. Even some NASA employees doubted whether Kennedy's ambitious goal could be met. By 1963, Kennedy even came close to agreeing to a joint US-USSR Moon mission, to eliminate duplication of effort. With the clear goal of a crewed landing replacing the more nebulous goals of space stations and circumlunar flights, NASA decided that, in order to make progress quickly, it would discard the feasibility study designs of Convair, GE, and Martin, and proceed with Faget's command and service module design. The mission module was determined to be useful only as an extra room, and therefore unnecessary. They used Faget's design as the specification for another competition for spacecraft procurement bids in October 1961. On November 28, 1961, it was announced that North American Aviation had won the contract, although its bid was not rated as good as the Martin proposal. Webb, Dryden and Robert Seamans chose it in preference due to North American's longer association with NASA and its predecessor. Landing humans on the Moon by the end of 1969 required the most sudden burst of technological creativity, and the largest commitment of resources ($25 billion; $ in US dollars) ever made by any nation in peacetime. At its peak, the Apollo program employed 400,000 people and required the support of over 20,000 industrial firms and universities. On July 1, 1960, NASA established the Marshall Space Flight Center (MSFC) in Huntsville, Alabama. MSFC designed the heavy lift-class Saturn launch vehicles, which would be required for Apollo. Manned Spacecraft Center It became clear that managing the Apollo program would exceed the capabilities of Robert R. Gilruth's Space Task Group, which had been directing the nation's crewed space program from NASA's Langley Research Center. So Gilruth was given authority to grow his organization into a new NASA center, the Manned Spacecraft Center (MSC). A site was chosen in Houston, Texas, on land donated by Rice University, and Administrator Webb announced the conversion on September 19, 1961. It was also clear NASA would soon outgrow its practice of controlling missions from its Cape Canaveral Air Force Station launch facilities in Florida, so a new Mission Control Center would be included in the MSC. In September 1962, by which time two Project Mercury astronauts had orbited the Earth, Gilruth had moved his organization to rented space in Houston, and construction of the MSC facility was under way, Kennedy visited Rice to reiterate his challenge in a famous speech: The MSC was completed in September 1963. It was renamed by the US Congress in honor of Lyndon B. Johnson soon after his death in 1973. Launch Operations Center It also became clear that Apollo would outgrow the Canaveral launch facilities in Florida. The two newest launch complexes were already being built for the Saturn I and IB rockets at the northernmost end: LC-34 and LC-37. But an even bigger facility would be needed for the mammoth rocket required for the crewed lunar mission, so land acquisition was started in July 1961 for a Launch Operations Center (LOC) immediately north of Canaveral at Merritt Island. 
The design, development and construction of the center was conducted by Kurt H. Debus, a member of Wernher von Braun's original V-2 rocket engineering team. Debus was named the LOC's first Director. Construction began in November 1962. Following Kennedy's death, President Johnson issued an executive order on November 29, 1963, to rename the LOC and Cape Canaveral in honor of Kennedy. The LOC included Launch Complex 39, a Launch Control Center, and a Vertical Assembly Building (VAB). in which the space vehicle (launch vehicle and spacecraft) would be assembled on a mobile launcher platform and then moved by a crawler-transporter to one of several launch pads. Although at least three pads were planned, only two, designated AandB, were completed in October 1965. The LOC also included an Operations and Checkout Building (OCB) to which Gemini and Apollo spacecraft were initially received prior to being mated to their launch vehicles. The Apollo spacecraft could be tested in two vacuum chambers capable of simulating atmospheric pressure at altitudes up to , which is nearly a vacuum. Organization Administrator Webb realized that in order to keep Apollo costs under control, he had to develop greater project management skills in his organization, so he recruited George E. Mueller for a high management job. Mueller accepted, on the condition that he have a say in NASA reorganization necessary to effectively administer Apollo. Webb then worked with Associate Administrator (later Deputy Administrator) Seamans to reorganize the Office of Manned Space Flight (OMSF). On July 23, 1963, Webb announced Mueller's appointment as Deputy Associate Administrator for Manned Space Flight, to replace then Associate Administrator D. Brainerd Holmes on his retirement effective September 1. Under Webb's reorganization, the directors of the Manned Spacecraft Center (Gilruth), Marshall Space Flight Center (von Braun), and the Launch Operations Center (Debus) reported to Mueller. Based on his industry experience on Air Force missile projects, Mueller realized some skilled managers could be found among high-ranking officers in the U.S. Air Force, so he got Webb's permission to recruit General Samuel C. Phillips, who gained a reputation for his effective management of the Minuteman program, as OMSF program controller. Phillips's superior officer Bernard A. Schriever agreed to loan Phillips to NASA, along with a staff of officers under him, on the condition that Phillips be made Apollo Program Director. Mueller agreed, and Phillips managed Apollo from January 1964, until it achieved the first human landing in July 1969, after which he returned to Air Force duty. Charles Fishman, in One Giant Leap, estimated the number of people and organizations involved into the Apollo program as "410,000 men and women at some 20,000 different companies contributed to the effort". Choosing a mission mode Once Kennedy had defined a goal, the Apollo mission planners were faced with the challenge of designing a spacecraft that could meet it while minimizing risk to human life, limiting cost, and not exceeding limits in possible technology and astronaut skill. Four possible mission modes were considered: Direct Ascent: The spacecraft would be launched as a unit and travel directly to the lunar surface, without first going into lunar orbit. A Earth return ship would land all three astronauts atop a descent propulsion stage, which would be left on the Moon. 
This design would have required development of the extremely powerful Saturn C-8 or Nova launch vehicle to carry a payload to the Moon.
Earth Orbit Rendezvous (EOR): Multiple rocket launches (up to 15 in some plans) would carry parts of the Direct Ascent spacecraft and propulsion units for translunar injection (TLI). These would be assembled into a single spacecraft in Earth orbit.
Lunar Surface Rendezvous: Two spacecraft would be launched in succession. The first, an automated vehicle carrying propellant for the return to Earth, would land on the Moon, to be followed some time later by the crewed vehicle. Propellant would have to be transferred from the automated vehicle to the crewed vehicle.
Lunar Orbit Rendezvous (LOR): This turned out to be the winning configuration, which achieved the goal with Apollo 11 on July 20, 1969: a single Saturn V launched a spacecraft that was composed of an Apollo command and service module which remained in orbit around the Moon and a two-stage Apollo Lunar Module spacecraft which was flown by two astronauts to the surface, flown back to dock with the command module and was then discarded. Landing the smaller spacecraft on the Moon, and returning an even smaller part to lunar orbit, minimized the total mass to be launched from Earth, but this was the last method initially considered because of the perceived risk of rendezvous and docking.
In early 1961, direct ascent was generally the mission mode in favor at NASA. Many engineers feared that rendezvous and docking, maneuvers that had not been attempted in Earth orbit, would be nearly impossible in lunar orbit. LOR advocates including John Houbolt at Langley Research Center emphasized the important weight reductions that were offered by the LOR approach. Throughout 1960 and 1961, Houbolt campaigned for the recognition of LOR as a viable and practical option. Bypassing the NASA hierarchy, he sent a series of memos and reports on the issue to Associate Administrator Robert Seamans; while acknowledging that he spoke "somewhat as a voice in the wilderness", Houbolt pleaded that LOR should not be discounted in studies of the question. Seamans's establishment of an ad hoc committee headed by his special technical assistant Nicholas E. Golovin in July 1961, to recommend a launch vehicle to be used in the Apollo program, represented a turning point in NASA's mission mode decision. This committee recognized that the chosen mode was an important part of the launch vehicle choice, and recommended in favor of a hybrid EOR-LOR mode. Its consideration of LOR—as well as Houbolt's ceaseless work—played an important role in publicizing the workability of the approach. In late 1961 and early 1962, members of the Manned Spacecraft Center began to come around to support LOR, including the newly hired deputy director of the Office of Manned Space Flight, Joseph Shea, who became a champion of LOR. The engineers at Marshall Space Flight Center (MSFC), who were heavily invested in direct ascent, took longer to become convinced of LOR's merits, but their conversion was announced by Wernher von Braun at a briefing on June 7, 1962. But even after NASA reached internal agreement, it was far from smooth sailing.
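The weight argument behind LOR can be made concrete with the Tsiolkovsky rocket equation. The sketch below is illustrative only: the specific impulse, delta-v figures, and vehicle masses are assumed round numbers rather than Apollo design values, and staging details are ignored. It simply shows why sending a small lander down to the surface and back, instead of the entire Earth-return ship, sharply reduces the mass that must be placed in lunar orbit, and hence launched from Earth.

```python
from math import exp

# Tsiolkovsky rocket equation: m_initial = m_final * exp(dv / (Isp * g0)).
# All numeric values below are assumed, round illustrative figures.
G0 = 9.81            # m/s^2
ISP = 311.0          # s, typical storable-propellant engine (assumed)
DV_DESCENT = 2100.0  # m/s, lunar orbit to surface (rough figure)
DV_ASCENT = 1900.0   # m/s, surface back to lunar orbit (rough figure)

def initial_mass(final_mass_kg: float, dv: float) -> float:
    """Mass needed before a burn of dv to end up with final_mass_kg afterwards."""
    return final_mass_kg * exp(dv / (ISP * G0))

def mass_to_land_and_return(dry_return_mass_kg: float) -> float:
    """Mass required in lunar orbit if this vehicle must land and then re-ascend."""
    mass_before_ascent = initial_mass(dry_return_mass_kg, DV_ASCENT)
    return initial_mass(mass_before_ascent, DV_DESCENT)

# Direct ascent: the whole Earth-return ship (say 15 t dry, assumed) goes down and up.
direct = mass_to_land_and_return(15_000)
# LOR: only a small lander (say 2.5 t dry, assumed) makes the round trip,
# while the return ship waits in lunar orbit.
lor = mass_to_land_and_return(2_500) + 15_000

print(f"Mass required in lunar orbit, direct ascent: {direct / 1000:.1f} t")
print(f"Mass required in lunar orbit, LOR:           {lor / 1000:.1f} t")
```

With these assumed numbers the LOR arrangement needs less than half the mass in lunar orbit of the direct-ascent case, which is the kind of saving Houbolt's memos emphasized.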
Kennedy's science advisor Jerome Wiesner, who had expressed his opposition to human spaceflight to Kennedy before the President took office, and had opposed the decision to land people on the Moon, hired Golovin, who had left NASA, to chair his own "Space Vehicle Panel", ostensibly to monitor, but actually to second-guess NASA's decisions on the Saturn V launch vehicle and LOR by forcing Shea, Seamans, and even Webb to defend themselves, delaying its formal announcement to the press on July 11, 1962, and forcing Webb to still hedge the decision as "tentative". Wiesner kept up the pressure, even making the disagreement public during a two-day September visit by the President to Marshall Space Flight Center. Wiesner blurted out "No, that's no good" in front of the press, during a presentation by von Braun. Webb jumped in and defended von Braun, until Kennedy ended the squabble by stating that the matter was "still subject to final review". Webb held firm and issued a request for proposal to candidate Lunar Excursion Module (LEM) contractors. Wiesner finally relented, unwilling to settle the dispute once and for all in Kennedy's office, because of the President's involvement with the October Cuban Missile Crisis, and fear of Kennedy's support for Webb. NASA announced the selection of Grumman as the LEM contractor in November 1962. Space historian James Hansen concludes that: The LOR method had the advantage of allowing the lander spacecraft to be used as a "lifeboat" in the event of a failure of the command ship. Some documents prove this theory was discussed before and after the method was chosen. In 1964 an MSC study concluded, "The LM [as lifeboat]... was finally dropped, because no single reasonable CSM failure could be identified that would prohibit use of the SPS." Ironically, just such a failure happened on Apollo 13 when an oxygen tank explosion left the CSM without electrical power. The lunar module provided propulsion, electrical power and life support to get the crew home safely. Spacecraft Faget's preliminary Apollo design employed a cone-shaped command module, supported by one of several service modules providing propulsion and electrical power, sized appropriately for the space station, cislunar, and lunar landing missions. Once Kennedy's Moon landing goal became official, detailed design began of a command and service module (CSM) in which the crew would spend the entire direct-ascent mission and lift off from the lunar surface for the return trip, after being soft-landed by a larger landing propulsion module. The final choice of lunar orbit rendezvous changed the CSM's role to the translunar ferry used to transport the crew, along with a new spacecraft, the Lunar Excursion Module (LEM, later shortened to LM (Lunar Module) but still pronounced ) which would take two individuals to the lunar surface and return them to the CSM. Command and service module The command module (CM) was the conical crew cabin, designed to carry three astronauts from launch to lunar orbit and back to an Earth ocean landing. It was the only component of the Apollo spacecraft to survive without major configuration changes as the program evolved from the early Apollo study designs. Its exterior was covered with an ablative heat shield, and had its own reaction control system (RCS) engines to control its attitude and steer its atmospheric entry path. Parachutes were carried to slow its descent to splashdown. The module was tall, in diameter, and weighed approximately . 
A cylindrical service module (SM) supported the command module, with a service propulsion engine and an RCS with propellants, and a fuel cell power generation system with liquid hydrogen and liquid oxygen reactants. A high-gain S-band antenna was used for long-distance communications on the lunar flights. On the extended lunar missions, an orbital scientific instrument package was carried. The service module was discarded just before reentry. The module was long and in diameter. The initial lunar flight version weighed approximately fully fueled, while a later version designed to carry a lunar orbit scientific instrument package weighed just over . North American Aviation won the contract to build the CSM, and also the second stage of the Saturn V launch vehicle for NASA. Because the CSM design was started early before the selection of lunar orbit rendezvous, the service propulsion engine was sized to lift the CSM off the Moon, and thus was oversized to about twice the thrust required for translunar flight. Also, there was no provision for docking with the lunar module. A 1964 program definition study concluded that the initial design should be continued as Block I which would be used for early testing, while Block II, the actual lunar spacecraft, would incorporate the docking equipment and take advantage of the lessons learned in Block I development. Apollo Lunar Module The Apollo Lunar Module (LM) was designed to descend from lunar orbit to land two astronauts on the Moon and take them back to orbit to rendezvous with the command module. Not designed to fly through the Earth's atmosphere or return to Earth, its fuselage was designed totally without aerodynamic considerations and was of an extremely lightweight construction. It consisted of separate descent and ascent stages, each with its own engine. The descent stage contained storage for the descent propellant, surface stay consumables, and surface exploration equipment. The ascent stage contained the crew cabin, ascent propellant, and a reaction control system. The initial LM model weighed approximately , and allowed surface stays up to around 34 hours. An extended lunar module (ELM) weighed over , and allowed surface stays of more than three days. The contract for design and construction of the lunar module was awarded to Grumman Aircraft Engineering Corporation, and the project was overseen by Thomas J. Kelly. Launch vehicles Before the Apollo program began, Wernher von Braun and his team of rocket engineers had started work on plans for very large launch vehicles, the Saturn series, and the even larger Nova series. In the midst of these plans, von Braun was transferred from the Army to NASA and was made Director of the Marshall Space Flight Center. The initial direct ascent plan to send the three-person Apollo command and service module directly to the lunar surface, on top of a large descent rocket stage, would require a Nova-class launcher, with a lunar payload capability of over . The June 11, 1962, decision to use lunar orbit rendezvous enabled the Saturn V to replace the Nova, and the MSFC proceeded to develop the Saturn rocket family for Apollo. Since Apollo, like Mercury, used more than one launch vehicle for space missions, NASA used spacecraft-launch vehicle combination series numbers: AS-10x for Saturn I, AS-20x for Saturn IB, and AS-50x for Saturn V (compare Mercury-Redstone 3, Mercury-Atlas 6) to designate and plan all missions, rather than numbering them sequentially as in Project Gemini. 
This was changed by the time human flights began. Little Joe II Since Apollo, like Mercury, would require a launch escape system (LES) in case of a launch failure, a relatively small rocket was required for qualification flight testing of this system. A rocket bigger than the Little Joe used by Mercury would be required, so the Little Joe II was built by General Dynamics/Convair. After an August 1963 qualification test flight, four LES test flights (A-001 through 004) were made at the White Sands Missile Range between May 1964 and January 1966. Saturn I Saturn I, the first US heavy lift launch vehicle, was initially planned to launch partially equipped CSMs in low Earth orbit tests. The S-I first stage burned RP-1 with liquid oxygen (LOX) oxidizer in eight clustered Rocketdyne H-1 engines, to produce of thrust. The S-IV second stage used six liquid hydrogen-fueled Pratt & Whitney RL-10 engines with of thrust. The S-V third stage flew inactively on Saturn I four times. The first four Saturn I test flights were launched from LC-34, with only the first stage live, carrying dummy upper stages filled with water. The first flight with a live S-IV was launched from LC-37. This was followed by five launches of boilerplate CSMs (designated AS-101 through AS-105) into orbit in 1964 and 1965. The last three of these further supported the Apollo program by also carrying Pegasus satellites, which verified the safety of the translunar environment by measuring the frequency and severity of micrometeorite impacts. In September 1962, NASA planned to launch four crewed CSM flights on the Saturn I from late 1965 through 1966, concurrent with Project Gemini. The payload capacity would have severely limited the systems which could be included, so the decision was made in October 1963 to use the uprated Saturn IB for all crewed Earth orbital flights. Saturn IB The Saturn IB was an upgraded version of the Saturn I. The S-IB first stage increased the thrust to by uprating the H-1 engine. The second stage replaced the S-IV with the S-IVB-200, powered by a single J-2 engine burning liquid hydrogen fuel with LOX, to produce of thrust. A restartable version of the S-IVB was used as the third stage of the Saturn V. The Saturn IB could send over into low Earth orbit, sufficient for a partially fueled CSM or the LM. Saturn IB launch vehicles and flights were designated with an AS-200 series number, "AS" indicating "Apollo Saturn" and the "2" indicating the second member of the Saturn rocket family. Saturn V Saturn V launch vehicles and flights were designated with an AS-500 series number, "AS" indicating "Apollo Saturn" and the "5" indicating Saturn V. The three-stage Saturn V was designed to send a fully fueled CSM and LM to the Moon. It was in diameter and stood tall with its lunar payload. Its capability grew to for the later advanced lunar landings. The S-IC first stage burned RP-1/LOX for a rated thrust of , which was upgraded to . The second and third stages burned liquid hydrogen; the third stage was a modified version of the S-IVB, with thrust increased to and capability to restart the engine for translunar injection after reaching a parking orbit. Astronauts NASA's director of flight crew operations during the Apollo program was Donald K. "Deke" Slayton, one of the original Mercury Seven astronauts who was medically grounded in September 1962 due to a heart murmur. Slayton was responsible for making all Gemini and Apollo crew assignments. 
Thirty-two astronauts were assigned to fly missions in the Apollo program. Twenty-four of these left Earth's orbit and flew around the Moon between December 1968 and December 1972 (three of them twice). Half of the 24 walked on the Moon's surface, though none of them returned to it after landing once. One of the moonwalkers was a trained geologist. Of the 32, Gus Grissom, Ed White, and Roger Chaffee were killed during a ground test in preparation for the Apollo 1 mission. The Apollo astronauts were chosen from the Project Mercury and Gemini veterans, plus from two later astronaut groups. All missions were commanded by Gemini or Mercury veterans. Crews on all development flights (except the Earth orbit CSM development flights) through the first two landings on Apollo 11 and Apollo 12 included at least two (sometimes three) Gemini veterans. Harrison Schmitt, a geologist, was the first NASA scientist astronaut to fly in space, and landed on the Moon on the last mission, Apollo 17. Schmitt participated in the lunar geology training of all of the Apollo landing crews. NASA awarded all 32 of these astronauts its highest honor, the Distinguished Service Medal, given for "distinguished service, ability, or courage", and personal "contribution representing substantial progress to the NASA mission". The medals were awarded posthumously to Grissom, White, and Chaffee in 1969, then to the crews of all missions from Apollo 8 onward. The crew that flew the first Earth orbital test mission, Apollo 7 (Walter M. Schirra, Donn Eisele, and Walter Cunningham), were awarded the lesser NASA Exceptional Service Medal because of discipline problems with the flight director's orders during their flight. In October 2008, the NASA Administrator decided to award them the Distinguished Service Medals. For Schirra and Eisele, the award was posthumous.
Lunar mission profile
The first lunar landing mission was planned to proceed:
Profile variations
The first three lunar missions (Apollo 8, Apollo 10, and Apollo 11) used a free return trajectory, keeping a flight path coplanar with the lunar orbit, which would allow a return to Earth in case the SM engine failed to make lunar orbit insertion. Landing site lighting conditions on later missions dictated a lunar orbital plane change, which required a course change maneuver soon after TLI, and eliminated the free-return option. After Apollo 12 placed the second of several seismometers on the Moon, the jettisoned LM ascent stages on Apollo 12 and later missions were deliberately crashed on the Moon at known locations to induce vibrations in the Moon's structure. The only exceptions to this were the Apollo 13 LM, which burned up in the Earth's atmosphere, and Apollo 16, where a loss of attitude control after jettison prevented making a targeted impact. As another active seismic experiment, the S-IVBs on Apollo 13 and subsequent missions were deliberately crashed on the Moon instead of being sent to solar orbit. Starting with Apollo 13, descent orbit insertion was to be performed using the service module engine instead of the LM engine, in order to allow a greater fuel reserve for landing. This was actually done for the first time on Apollo 14, since the Apollo 13 mission was aborted before landing.
Development history
Uncrewed flight tests
Two Block I CSMs were launched from LC-34 on suborbital flights in 1966 with the Saturn IB. The first, AS-201, launched on February 26, reached an altitude of and splashed down downrange in the Atlantic Ocean.
The second, AS-202 on August 25, reached altitude and was recovered downrange in the Pacific Ocean. These flights validated the service module engine and the command module heat shield. A third Saturn IB test, AS-203 launched from pad 37, went into orbit to support design of the S-IVB upper stage restart capability needed for the Saturn V. It carried a nose cone instead of the Apollo spacecraft, and its payload was the unburned liquid hydrogen fuel, the behavior of which engineers measured with temperature and pressure sensors, and a TV camera. This flight occurred on July 5, before AS-202, which was delayed because of problems getting the Apollo spacecraft ready for flight.
Preparation for crewed flight
Two crewed orbital Block I CSM missions were planned: AS-204 and AS-205. The Block I crew positions were titled Command Pilot, Senior Pilot, and Pilot. The Senior Pilot would assume navigation duties, while the Pilot would function as a systems engineer. The astronauts would wear a modified version of the Gemini spacesuit. After an uncrewed LM test flight AS-206, a crew would fly the first Block II CSM and LM in a dual mission known as AS-207/208, or AS-278 (each spacecraft would be launched on a separate Saturn IB). The Block II crew positions were titled Commander, Command Module Pilot, and Lunar Module Pilot. The astronauts would begin wearing a new Apollo A6L spacesuit, designed to accommodate lunar extravehicular activity (EVA). The traditional visor helmet was replaced with a clear "fishbowl" type for greater visibility, and the lunar surface EVA suit would include a water-cooled undergarment. Deke Slayton, the grounded Mercury astronaut who became director of flight crew operations for the Gemini and Apollo programs, selected the first Apollo crew in January 1966, with Grissom as Command Pilot, White as Senior Pilot, and rookie Donn F. Eisele as Pilot. But Eisele dislocated his shoulder twice aboard the KC-135 weightlessness training aircraft, and had to undergo surgery on January 27. Slayton replaced him with Chaffee. NASA announced the final crew selection for AS-204 on March 21, 1966, with the backup crew consisting of Gemini veterans James McDivitt and David Scott, with rookie Russell L. "Rusty" Schweickart. Mercury/Gemini veteran Wally Schirra, Eisele, and rookie Walter Cunningham were announced on September 29 as the prime crew for AS-205. In December 1966, the AS-205 mission was canceled, since the validation of the CSM would be accomplished on the 14-day first flight, and AS-205 would have been devoted to space experiments and contribute no new engineering knowledge about the spacecraft. Its Saturn IB was allocated to the dual mission, now redesignated AS-205/208 or AS-258, planned for August 1967. McDivitt, Scott and Schweickart were promoted to the prime AS-258 crew, and Schirra, Eisele and Cunningham were reassigned as the Apollo 1 backup crew.
Program delays
The spacecraft for the AS-202 and AS-204 missions were delivered by North American Aviation to the Kennedy Space Center with long lists of equipment problems which had to be corrected before flight; these delays caused the launch of AS-202 to slip behind AS-203, and eliminated hopes the first crewed mission might be ready to launch as soon as November 1966, concurrently with the last Gemini mission. Eventually, the planned AS-204 flight date was pushed to February 21, 1967.
North American Aviation was prime contractor not only for the Apollo CSM, but for the Saturn V S-II second stage as well, and delays in this stage pushed the first uncrewed Saturn V flight AS-501 from late 1966 to November 1967. (The initial assembly of AS-501 had to use a dummy spacer spool in place of the stage.) The problems with North American were severe enough in late 1965 to cause Manned Space Flight Administrator George Mueller to appoint program director Samuel Phillips to head a "tiger team" to investigate North American's problems and identify corrections. Phillips documented his findings in a December 19 letter to NAA president Lee Atwood, with a strongly worded letter by Mueller, and also gave a presentation of the results to Mueller and Deputy Administrator Robert Seamans. Meanwhile, Grumman was also encountering problems with the Lunar Module, eliminating hopes it would be ready for crewed flight in 1967, not long after the first crewed CSM flights.
Apollo 1 fire
Grissom, White, and Chaffee decided to name their flight Apollo 1 as a motivational focus on the first crewed flight. They trained and conducted tests of their spacecraft at North American, and in the altitude chamber at the Kennedy Space Center. A "plugs-out" test was planned for January, which would simulate a launch countdown on LC-34 with the spacecraft transferring from pad-supplied to internal power. If successful, this would be followed by a more rigorous countdown simulation test closer to the February 21 launch, with both spacecraft and launch vehicle fueled. The plugs-out test began on the morning of January 27, 1967, and immediately was plagued with problems. First, the crew noticed a strange odor in their spacesuits which delayed the sealing of the hatch. Then, communications problems frustrated the astronauts and forced a hold in the simulated countdown. During this hold, an electrical fire began in the cabin and spread quickly in the high pressure, 100% oxygen atmosphere. Pressure rose high enough from the fire that the cabin inner wall burst, allowing the fire to erupt onto the pad area and frustrating attempts to rescue the crew. The astronauts were asphyxiated before the hatch could be opened. NASA immediately convened an accident review board, overseen by both houses of Congress. While the determination of responsibility for the accident was complex, the review board concluded that "deficiencies existed in command module design, workmanship and quality control". At the insistence of NASA Administrator Webb, North American removed Harrison Storms as command module program manager. Webb also reassigned Apollo Spacecraft Program Office (ASPO) Manager Joseph Francis Shea, replacing him with George Low. To remedy the causes of the fire, changes were made in the Block II spacecraft and operational procedures, the most important of which were use of a nitrogen/oxygen mixture instead of pure oxygen before and during launch, and removal of flammable cabin and space suit materials. The Block II design already called for replacement of the Block I plug-type hatch cover with a quick-release, outward opening door. NASA discontinued the crewed Block I program, using the Block I spacecraft only for uncrewed Saturn V flights. Crew members would also exclusively wear modified, fire-resistant A7L Block II space suits, and would be designated by the Block II titles, regardless of whether a LM was present on the flight or not.
Uncrewed Saturn V and LM tests
On April 24, 1967, Mueller published an official Apollo mission numbering scheme, using sequential numbers for all flights, crewed or uncrewed. The sequence would start with Apollo 4 to cover the first three uncrewed flights while retiring the Apollo 1 designation to honor the crew, per their widows' wishes. In September 1967, Mueller approved a sequence of mission types which had to be successfully accomplished in order to achieve the crewed lunar landing. Each step had to be successfully accomplished before the next ones could be performed, and it was unknown how many tries of each mission would be necessary; therefore letters were used instead of numbers. The A missions were uncrewed Saturn V validation; B was uncrewed LM validation using the Saturn IB; C was crewed CSM Earth orbit validation using the Saturn IB; D was the first crewed CSM/LM flight (this replaced AS-258, using a single Saturn V launch); E would be a higher Earth orbit CSM/LM flight; F would be the first lunar mission, testing the LM in lunar orbit but without landing (a "dress rehearsal"); and G would be the first crewed landing. The list of types covered follow-on lunar exploration to include H lunar landings, I for lunar orbital survey missions, and J for extended-stay lunar landings. The delay in the CSM caused by the fire enabled NASA to catch up on human-rating the LM and Saturn V. Apollo 4 (AS-501) was the first uncrewed flight of the Saturn V, carrying a Block I CSM on November 9, 1967. The capability of the command module's heat shield to survive a trans-lunar reentry was demonstrated by using the service module engine to ram it into the atmosphere at higher than the usual Earth-orbital reentry speed. Apollo 5 (AS-204) was the first uncrewed test flight of the LM in Earth orbit, launched from pad 37 on January 22, 1968, by the Saturn IB that would have been used for Apollo 1. The LM engines were successfully test-fired and restarted, despite a computer programming error which cut short the first descent stage firing. The ascent engine was fired in abort mode, known as a "fire-in-the-hole" test, where it was lit simultaneously with jettison of the descent stage. Although Grumman wanted a second uncrewed test, George Low decided the next LM flight would be crewed. This was followed on April 4, 1968, by Apollo 6 (AS-502) which carried a CSM and a LM Test Article as ballast. The intent of this mission was to achieve trans-lunar injection, followed closely by a simulated direct-return abort, using the service module engine to achieve another high-speed reentry. The Saturn V experienced pogo oscillation, a problem caused by non-steady engine combustion, which damaged fuel lines in the second and third stages. Two S-II engines shut down prematurely, but the remaining engines were able to compensate. The damage to the third stage engine was more severe, preventing it from restarting for trans-lunar injection. Mission controllers were able to use the service module engine to essentially repeat the flight profile of Apollo 4. Based on the good performance of Apollo 6 and identification of satisfactory fixes to the Apollo 6 problems, NASA declared the Saturn V ready to fly crew, canceling a third uncrewed test.
Crewed development missions
Apollo 7, launched from LC-34 on October 11, 1968, was the C mission, crewed by Schirra, Eisele, and Cunningham. It was an 11-day Earth-orbital flight which tested the CSM systems.
Apollo 8 was planned to be the D mission in December 1968, crewed by McDivitt, Scott and Schweickart, launched on a Saturn V instead of two Saturn IBs. In the summer it had become clear that the LM would not be ready in time. Rather than waste the Saturn V on another simple Earth-orbiting mission, ASPO Manager George Low suggested the bold step of sending Apollo 8 to orbit the Moon instead, deferring the D mission to the next mission in March 1969, and eliminating the E mission. This would keep the program on track. The Soviet Union had sent two tortoises, mealworms, wine flies, and other lifeforms around the Moon on September 15, 1968, aboard Zond 5, and it was believed they might soon repeat the feat with human cosmonauts. The decision was not announced publicly until successful completion of Apollo 7. Gemini veterans Frank Borman and Jim Lovell, and rookie William Anders captured the world's attention by making ten lunar orbits in 20 hours, transmitting television pictures of the lunar surface on Christmas Eve, and returning safely to Earth. The following March, LM flight, rendezvous and docking were successfully demonstrated in Earth orbit on Apollo 9, and Schweickart tested the full lunar EVA suit with its portable life support system (PLSS) outside the LM. The F mission was successfully carried out on Apollo 10 in May 1969 by Gemini veterans Thomas P. Stafford, John Young and Eugene Cernan. Stafford and Cernan took the LM to within of the lunar surface. The G mission was achieved on Apollo 11 in July 1969 by an all-Gemini veteran crew consisting of Neil Armstrong, Michael Collins and Buzz Aldrin. Armstrong and Aldrin performed the first landing at the Sea of Tranquility at 20:17:40 UTC on July 20, 1969. They spent a total of 21 hours, 36 minutes on the surface, and spent 2 hours, 31 minutes outside the spacecraft, walking on the surface, taking photographs, collecting material samples, and deploying automated scientific instruments, while continuously sending black-and-white television back to Earth. The astronauts returned safely on July 24.
Production lunar landings
In November 1969, Charles "Pete" Conrad became the third person to step onto the Moon, which he did while speaking more informally than had Armstrong: Conrad and rookie Alan L. Bean made a precision landing of Apollo 12 within walking distance of the Surveyor 3 uncrewed lunar probe, which had landed in April 1967 on the Ocean of Storms. The command module pilot was Gemini veteran Richard F. Gordon Jr. Conrad and Bean carried the first lunar surface color television camera, but it was damaged when accidentally pointed into the Sun. They made two EVAs totaling 7 hours and 45 minutes. On one, they walked to the Surveyor, photographed it, and removed some parts which they returned to Earth. The contracted batch of 15 Saturn Vs was enough for lunar landing missions through Apollo 20. Shortly after Apollo 11, NASA publicized a preliminary list of eight more planned landing sites after Apollo 12, with plans to increase the mass of the CSM and LM for the last five missions, along with the payload capacity of the Saturn V. These final missions would combine the I and J types in the 1967 list, allowing the CMP to operate a package of lunar orbital sensors and cameras while his companions were on the surface, and allowing them to stay on the Moon for over three days. These missions would also carry the Lunar Roving Vehicle (LRV), increasing the exploration area and allowing televised liftoff of the LM.
Also, the Block II spacesuit was revised for the extended missions to allow greater flexibility and visibility for driving the LRV. The success of the first two landings allowed the remaining missions to be crewed with a single veteran as commander, with two rookies. Apollo 13 launched Lovell, Jack Swigert, and Fred Haise in April 1970, headed for the Fra Mauro formation. But two days out, a liquid oxygen tank exploded, disabling the service module and forcing the crew to use the LM as a "lifeboat" to return to Earth. Another NASA review board was convened to determine the cause, which turned out to be a combination of damage to the tank in the factory, and a subcontractor not making a tank component according to updated design specifications. Apollo was grounded again for the remainder of 1970 while the oxygen tank was redesigned and an extra one was added.
Mission cutbacks
About the time of the first landing in 1969, it was decided to use an existing Saturn V to launch the Skylab orbital laboratory pre-built on the ground, replacing the original plan to construct it in orbit from several Saturn IB launches; this eliminated Apollo 20. NASA's yearly budget also began to shrink in light of the successful landing, and NASA also had to make funds available for the development of the upcoming Space Shuttle. By 1971, the decision was made to also cancel missions 18 and 19. The two unused Saturn Vs became museum exhibits at the John F. Kennedy Space Center on Merritt Island, Florida, George C. Marshall Space Flight Center in Huntsville, Alabama, Michoud Assembly Facility in New Orleans, Louisiana, and Lyndon B. Johnson Space Center in Houston, Texas. The cutbacks forced mission planners to reassess the original planned landing sites in order to achieve the most effective geological sample and data collection from the remaining four missions. Apollo 15 had been planned to be the last of the H series missions, but since there would be only two subsequent missions left, it was changed to the first of three J missions. Apollo 13's Fra Mauro mission was reassigned to Apollo 14, commanded in February 1971 by Mercury veteran Alan Shepard, with Stuart Roosa and Edgar Mitchell. This time the mission was successful. Shepard and Mitchell spent 33 hours and 31 minutes on the surface, and completed two EVAs totalling 9 hours 24 minutes, which was a record for the longest EVA by a lunar crew at the time. In August 1971, just after conclusion of the Apollo 15 mission, President Richard Nixon proposed canceling the two remaining lunar landing missions, Apollo 16 and 17. Office of Management and Budget Deputy Director Caspar Weinberger was opposed to this, and persuaded Nixon to keep the remaining missions.
Extended missions
Apollo 15 was launched on July 26, 1971, with David Scott, Alfred Worden and James Irwin. Scott and Irwin landed on July 30 near Hadley Rille, and spent just under two days, 19 hours on the surface. In over 18 hours of EVA, they collected about of lunar material. Apollo 16 landed in the Descartes Highlands on April 20, 1972. The crew was commanded by John Young, with Ken Mattingly and Charles Duke. Young and Duke spent just under three days on the surface, with a total of over 20 hours EVA. Apollo 17 was the last mission of the Apollo program, landing in the Taurus–Littrow region in December 1972. Eugene Cernan commanded Ronald E. Evans and NASA's first scientist-astronaut, geologist Harrison H. Schmitt.
Schmitt was originally scheduled for Apollo 18, but the lunar geological community lobbied for his inclusion on the final lunar landing. Cernan and Schmitt stayed on the surface for just over three days and spent just over 23 hours of total EVA. Canceled missions Several missions were planned for but were canceled before details were finalized. Mission summary Source: Apollo by the Numbers: A Statistical Reference (Orloff 2004). Samples returned The Apollo program returned over of lunar rocks and soil to the Lunar Receiving Laboratory in Houston. Today, 75% of the samples are stored at the Lunar Sample Laboratory Facility built in 1979. The rocks collected from the Moon are extremely old compared to rocks found on Earth, as measured by radiometric dating techniques. They range in age from about 3.2 billion years for the basaltic samples derived from the lunar maria, to about 4.6 billion years for samples derived from the highlands crust. As such, they represent samples from a very early period in the development of the Solar System, that are largely absent on Earth. One important rock found during the Apollo Program is dubbed the Genesis Rock, retrieved by astronauts David Scott and James Irwin during the Apollo 15 mission. This anorthosite rock is composed almost exclusively of the calcium-rich feldspar mineral anorthite, and is believed to be representative of the highland crust. A geochemical component called KREEP was discovered by Apollo 12, which has no known terrestrial counterpart. KREEP and the anorthositic samples have been used to infer that the outer portion of the Moon was once completely molten (see lunar magma ocean). Almost all the rocks show evidence of impact process effects. Many samples appear to be pitted with micrometeoroid impact craters, which is never seen on Earth rocks, due to the thick atmosphere. Many show signs of being subjected to high-pressure shock waves that are generated during impact events. Some of the returned samples are of impact melt (materials melted near an impact crater.) All samples returned from the Moon are highly brecciated as a result of being subjected to multiple impact events. From analyses of the composition of the returned lunar samples, it is now believed that the Moon was created through the impact of a large astronomical body with Earth. Costs Apollo cost $25.4 billion or approximately $257 billion (2023) using improved cost analysis. Of this amount, $20.2 billion ($ adjusted) was spent on the design, development, and production of the Saturn family of launch vehicles, the Apollo spacecraft, spacesuits, scientific experiments, and mission operations. The cost of constructing and operating Apollo-related ground facilities, such as the NASA human spaceflight centers and the global tracking and data acquisition network, added an additional $5.2 billion ($ adjusted). The amount grows to $28 billion ($280 billion adjusted) if the costs for related projects such as Project Gemini and the robotic Ranger, Surveyor, and Lunar Orbiter programs are included. NASA's official cost breakdown, as reported to Congress in the Spring of 1973, is as follows: Accurate estimates of human spaceflight costs were difficult in the early 1960s, as the capability was new and management experience was lacking. Preliminary cost analysis by NASA estimated $7 billion – $12 billion for a crewed lunar landing effort. NASA Administrator James Webb increased this estimate to $20 billion before reporting it to Vice President Johnson in April 1961. 
Project Apollo was a massive undertaking, representing the largest research and development project in peacetime. At its peak, it employed over 400,000 employees and contractors around the country and accounted for more than half of NASA's total spending in the 1960s. After the first Moon landing, public and political interest waned, including that of President Nixon, who wanted to rein in federal spending. NASA's budget could not sustain Apollo missions which cost, on average, $445 million ($ adjusted) each while simultaneously developing the Space Shuttle. The final fiscal year of Apollo funding was 1973. Apollo Applications Program Looking beyond the crewed lunar landings, NASA investigated several post-lunar applications for Apollo hardware. The Apollo Extension Series (Apollo X) proposed up to 30 flights to Earth orbit, using the space in the Spacecraft Lunar Module Adapter (SLA) to house a small orbital laboratory (workshop). Astronauts would continue to use the CSM as a ferry to the station. This study was followed by design of a larger orbital workshop to be built in orbit from an empty S-IVB Saturn upper stage and grew into the Apollo Applications Program (AAP). The workshop was to be supplemented by the Apollo Telescope Mount, which could be attached to the ascent stage of the lunar module via a rack. The most ambitious plan called for using an empty S-IVB as an interplanetary spacecraft for a Venus fly-by mission. The S-IVB orbital workshop was the only one of these plans to make it off the drawing board. Dubbed Skylab, it was assembled on the ground rather than in space, and launched in 1973 using the two lower stages of a Saturn V. It was equipped with an Apollo Telescope Mount. Skylab's last crew departed the station on February 8, 1974, and the station itself re-entered the atmosphere in 1979 after development of the Space Shuttle was delayed too long to save it. The Apollo–Soyuz program also used Apollo hardware for the first joint nation spaceflight, paving the way for future cooperation with other nations in the Space Shuttle and International Space Station programs. Recent observations In 2008, Japan Aerospace Exploration Agency's SELENE probe observed evidence of the halo surrounding the Apollo 15 Lunar Module blast crater while orbiting above the lunar surface. Beginning in 2009, NASA's robotic Lunar Reconnaissance Orbiter, while orbiting above the Moon, photographed the remnants of the Apollo program left on the lunar surface, and each site where crewed Apollo flights landed. All of the U.S. flags left on the Moon during the Apollo missions were found to still be standing, with the exception of the one left during the Apollo 11 mission, which was blown over during that mission's lift-off from the lunar surface; the degree to which these flags retain their original colors remains unknown. The flags cannot be seen through a telescope from Earth. In a November 16, 2009, editorial, The New York Times opined: Legacy Science and engineering The Apollo program has been described as the greatest technological achievement in human history. Apollo stimulated many areas of technology, leading to over 1,800 spinoff products as of 2015, including advances in the development of cordless power tools, fireproof materials, heart monitors, solar panels, digital imaging, and the use of liquid methane as fuel. 
The flight computer design used in both the lunar and command modules was, along with the Polaris and Minuteman missile systems, the driving force behind early research into integrated circuits (ICs). By 1963, Apollo was using 60 percent of the United States' production of ICs. The crucial difference between the requirements of Apollo and the missile programs was Apollo's much greater need for reliability. While the Navy and Air Force could work around reliability problems by deploying more missiles, the political and financial cost of failure of an Apollo mission was unacceptably high. Technologies and techniques required for Apollo were developed by Project Gemini. The Apollo project was enabled by NASA's adoption of new advances in semiconductor electronic technology, including metal–oxide–semiconductor field-effect transistors (MOSFETs) in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit chips in the Apollo Guidance Computer (AGC). Cultural impact The crew of Apollo 8 sent the first live televised pictures of the Earth and the Moon back to Earth, and read from the creation story in the Book of Genesis, on Christmas Eve 1968. An estimated one-quarter of the population of the world saw—either live or delayed—the Christmas Eve transmission during the ninth orbit of the Moon, and an estimated one-fifth of the population of the world watched the live transmission of the Apollo 11 moonwalk. The Apollo program also affected environmental activism in the 1970s due to photos taken by the astronauts. The most well known include Earthrise, taken by William Anders on Apollo 8, and The Blue Marble, taken by the Apollo 17 astronauts. The Blue Marble was released during a surge in environmentalism, and became a symbol of the environmental movement as a depiction of Earth's frailty, vulnerability, and isolation amid the vast expanse of space. According to The Economist, Apollo succeeded in accomplishing President Kennedy's goal of taking on the Soviet Union in the Space Race by accomplishing a singular and significant achievement, to demonstrate the superiority of the free-market system. The publication noted the irony that in order to achieve the goal, the program required the organization of tremendous public resources within a vast, centralized government bureaucracy. Apollo 11 broadcast data restoration project Prior to Apollo 11's 40th anniversary in 2009, NASA searched for the original videotapes of the mission's live televised moonwalk. After an exhaustive three-year search, it was concluded that the tapes had probably been erased and reused. A new digitally remastered version of the best available broadcast television footage was released instead. 
Depictions on film
Documentaries
Numerous documentary films cover the Apollo program and the Space Race, including:
Footprints on the Moon (1969)
Moonwalk One (1970)
The Greatest Adventure (1978)
For All Mankind (1989)
Moon Shot (1994 miniseries)
"Moon" from the BBC miniseries The Planets (1999)
Magnificent Desolation: Walking on the Moon 3D (2005)
The Wonder of It All (2007)
In the Shadow of the Moon (2007)
When We Left Earth: The NASA Missions (2008 miniseries)
Moon Machines (2008 miniseries)
James May on the Moon (2009)
NASA's Story (2009 miniseries)
Apollo 11 (2019)
Chasing the Moon (2019 miniseries)
Docudramas
Some missions have been dramatized:
Apollo 13 (1995)
Apollo 11 (1996)
From the Earth to the Moon (1998)
The Dish (2000)
Space Race (2005)
Moonshot (2009)
First Man (2018)
Fictional
The Apollo program has been the focus of several works of fiction, including:
Apollo 18 (2011), a horror film released to negative reviews.
Men in Black 3 (2012), a science fiction/comedy film in which Agent J, played by Will Smith, goes back to the Apollo 11 launch in 1969 to ensure that a global protection system is launched into space.
For All Mankind (2019), a TV series depicting an alternate history in which the Soviet Union was the first country to successfully land a man on the Moon.
Indiana Jones and the Dial of Destiny (2023), the fifth Indiana Jones film, in which Jürgen Voller, a NASA member and ex-Nazi involved with the Apollo program, wants to time travel. The New York City parade for the Apollo 11 crew is portrayed as a plot point.
1525
https://en.wikipedia.org/wiki/Aspirin
Aspirin
Aspirin is the genericized trademark for acetylsalicylic acid (ASA), a nonsteroidal anti-inflammatory drug (NSAID) used to reduce pain, fever, and inflammation, and as an antithrombotic. Specific inflammatory conditions that aspirin is used to treat include Kawasaki disease, pericarditis, and rheumatic fever. Aspirin is also used long-term to help prevent further heart attacks, ischaemic strokes, and blood clots in people at high risk. For pain or fever, effects typically begin within 30 minutes. Aspirin works similarly to other NSAIDs but also suppresses the normal functioning of platelets. One common adverse effect is an upset stomach. More significant side effects include stomach ulcers, stomach bleeding, and worsening asthma. Bleeding risk is greater among those who are older, drink alcohol, take other NSAIDs, or are on other blood thinners. Aspirin is not recommended in the last part of pregnancy. It is not generally recommended in children with infections because of the risk of Reye syndrome. High doses may result in ringing in the ears. A precursor to aspirin found in the bark of the willow tree (genus Salix) has been used for its health effects for at least 2,400 years. In 1853, chemist Charles Frédéric Gerhardt treated the medicine sodium salicylate with acetyl chloride to produce acetylsalicylic acid for the first time. Over the next 50 years, other chemists, mostly of the German company Bayer, established the chemical structure and devised more efficient production methods. Felix Hoffmann (or Arthur Eichengrün) of Bayer was the first to produce acetylsalicylic acid in a pure, stable form in 1897. By 1899, Bayer had dubbed this drug Aspirin and was selling it globally. Aspirin is available without medical prescription as a proprietary or generic medication in most jurisdictions. It is one of the most widely used medications globally, with an estimated (50 to 120 billion pills) consumed each year, and is on the World Health Organization's List of Essential Medicines. In 2022, it was the 36th most commonly prescribed medication in the United States, with more than 16million prescriptions. Brand vs. generic name In 1897, scientists at the Bayer company began studying acetylsalicylic acid as a less-irritating replacement medication for common salicylate medicines. By 1899, Bayer had named it "Aspirin" and was selling it around the world. Aspirin's popularity grew over the first half of the 20th century, leading to competition between many brands and formulations. The word Aspirin was Bayer's brand name; however, its rights to the trademark were lost or sold in many countries. The name is ultimately a blend of the prefix a(cetyl) + spir Spiraea, the meadowsweet plant genus from which the acetylsalicylic acid was originally derived at Bayer + -in, the common chemical suffix. Chemical properties Aspirin decomposes rapidly in solutions of ammonium acetate or the acetates, carbonates, citrates, or hydroxides of the alkali metals. It is stable in dry air, but gradually hydrolyses in contact with moisture to acetic and salicylic acids. In solution with alkalis, the hydrolysis proceeds rapidly and the clear solutions formed may consist entirely of acetate and salicylate. Like flour mills, factories producing aspirin tablets must control the amount of the powder that becomes airborne inside the building, because the powder-air mixture can be explosive. 
The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit in the United States of 5mg/m3 (time-weighted average). In 1989, the Occupational Safety and Health Administration (OSHA) set a legal permissible exposure limit for aspirin of 5mg/m3, but this was vacated by the AFL-CIO v. OSHA decision in 1993. Synthesis The synthesis of aspirin is classified as an esterification reaction. Salicylic acid is treated with acetic anhydride, an acid derivative, causing a chemical reaction that turns salicylic acid's hydroxyl group into an ester group (R-OH → R-OCOCH3). This process yields aspirin and acetic acid, which is considered a byproduct of this reaction. Small amounts of sulfuric acid (and occasionally phosphoric acid) are almost always used as a catalyst. This method is commonly demonstrated in undergraduate teaching labs. Reaction between acetic acid and salicylic acid can also form aspirin but this esterification reaction is reversible and the presence of water can lead to hydrolysis of the aspirin. So, an anhydrous reagent is preferred. Reaction mechanism Formulations containing high concentrations of aspirin often smell like vinegar because aspirin can decompose through hydrolysis in moist conditions, yielding salicylic and acetic acids. Physical properties Aspirin, an acetyl derivative of salicylic acid, is a white, crystalline, weakly acidic substance that melts at , and decomposes around . Its acid dissociation constant (pKa) is 3.5 at . Polymorphism Polymorphism, or the ability of a substance to form more than one crystal structure, is important in the development of pharmaceutical ingredients. Many drugs receive regulatory approval for only a single crystal form or polymorph. Until 2005, there was only one proven polymorph of aspirin (Form I), though the existence of another polymorph was debated since the 1960s, and one report from 1981 reported that when crystallized in the presence of aspirin anhydride, the diffractogram of aspirin has weak additional peaks. Though at the time it was dismissed as mere impurity, it was, in retrospect, Form II aspirin. Form II was reported in 2005, found after attempted co-crystallization of aspirin and levetiracetam from hot acetonitrile. In form I, pairs of aspirin molecules form centrosymmetric dimers through the acetyl groups with the (acidic) methyl proton to carbonyl hydrogen bonds. In form II, each aspirin molecule forms the same hydrogen bonds, but with two neighbouring molecules instead of one. With respect to the hydrogen bonds formed by the carboxylic acid groups, both polymorphs form identical dimer structures. The aspirin polymorphs contain identical 2-dimensional sections and are therefore more precisely described as polytypes. Pure Form II aspirin could be prepared by seeding the batch with aspirin anhydrate in 15% weight. Form III was reported in 2015 by compressing form I above 2 GPa, but it reverts back to Form I when pressure is removed. Form IV was reported in 2017. It is stable at ambient conditions. Mechanism of action Discovery of the mechanism In 1971, British pharmacologist John Robert Vane, then employed by the Royal College of Surgeons in London, showed aspirin suppressed the production of prostaglandins and thromboxanes. For this discovery he was awarded the 1982 Nobel Prize in Physiology or Medicine, jointly with Sune Bergström and Bengt Ingemar Samuelsson. 
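As an aside to the esterification described under Synthesis above, the theoretical yield of such a reaction follows from simple mole arithmetic. The sketch below is a generic teaching-lab style calculation, not a procedure from this article: the 2.0 g starting mass and the 85% recovery figure are assumed example values, while the molar masses are standard textbook figures.

```python
# Theoretical-yield arithmetic for the salicylic acid + acetic anhydride esterification.
# Starting mass and recovery fraction are made-up example values.
M_SALICYLIC_ACID = 138.12  # g/mol, C7H6O3
M_ASPIRIN = 180.16         # g/mol, C9H8O4 (one mole formed per mole of salicylic acid)

def theoretical_yield(mass_salicylic_g: float) -> float:
    """Maximum aspirin mass obtainable when acetic anhydride is in excess."""
    moles = mass_salicylic_g / M_SALICYLIC_ACID
    return moles * M_ASPIRIN

start = 2.0                        # g of salicylic acid (example value)
theory = theoretical_yield(start)  # about 2.6 g of aspirin
actual = 0.85 * theory             # assume 85% recovery after crystallization
print(f"Theoretical yield: {theory:.2f} g; at 85% recovery: {actual:.2f} g")
```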
Prostaglandins and thromboxanes Aspirin's ability to suppress the production of prostaglandins and thromboxanes is due to its irreversible inactivation of the cyclooxygenase (COX; officially known as prostaglandin-endoperoxide synthase, PTGS) enzyme required for prostaglandin and thromboxane synthesis. Aspirin acts as an acetylating agent where an acetyl group is covalently attached to a serine residue in the active site of the COX enzyme (Suicide inhibition). This makes aspirin different from other NSAIDs (such as diclofenac and ibuprofen), which are reversible inhibitors. Low-dose aspirin use irreversibly blocks the formation of thromboxane A2 in platelets, producing an inhibitory effect on platelet aggregation during the lifetime of the affected platelet (8–9 days). This antithrombotic property makes aspirin useful for reducing the incidence of heart attacks in people who have had a heart attack, unstable angina, ischemic stroke or transient ischemic attack. 40mg of aspirin a day is able to inhibit a large proportion of maximum thromboxane A2 release provoked acutely, with the prostaglandin I2 synthesis being little affected; however, higher doses of aspirin are required to attain further inhibition. Prostaglandins, local hormones produced in the body, have diverse effects, including the transmission of pain information to the brain, modulation of the hypothalamic thermostat, and inflammation. Thromboxanes are responsible for the aggregation of platelets that form blood clots. Heart attacks are caused primarily by blood clots, and low doses of aspirin are seen as an effective medical intervention to prevent a second acute myocardial infarction. COX-1 and COX-2 inhibition At least two different types of cyclooxygenases, COX-1 and COX-2, are acted on by aspirin. Aspirin irreversibly inhibits COX-1 and modifies the enzymatic activity of COX-2. COX-2 normally produces prostanoids, most of which are proinflammatory. Aspirin-modified COX-2 (aka prostaglandin-endoperoxide synthase 2 or PTGS2) produces epi-lipoxins, most of which are anti-inflammatory. Newer NSAID drugs, COX-2 inhibitors (coxibs), have been developed to inhibit only COX-2, with the intent to reduce the incidence of gastrointestinal side effects. Several COX-2 inhibitors, such as rofecoxib (Vioxx), have been withdrawn from the market, after evidence emerged that COX-2 inhibitors increase the risk of heart attack and stroke. Endothelial cells lining the microvasculature in the body are proposed to express COX-2, and, by selectively inhibiting COX-2, prostaglandin production (specifically, PGI2; prostacyclin) is downregulated with respect to thromboxane levels, as COX-1 in platelets is unaffected. Thus, the protective anticoagulative effect of PGI2 is removed, increasing the risk of thrombus and associated heart attacks and other circulatory problems. Since platelets have no DNA, they are unable to synthesize new COX-1 once aspirin has irreversibly inhibited the enzyme, an important difference as compared with reversible inhibitors. 
Furthermore, aspirin, while inhibiting the ability of COX-2 to form pro-inflammatory products such as the prostaglandins, converts this enzyme's activity from a prostaglandin-forming cyclooxygenase to a lipoxygenase-like enzyme: aspirin-treated COX-2 metabolizes a variety of polyunsaturated fatty acids to hydroperoxy products, which are then further metabolized to specialized proresolving mediators such as the aspirin-triggered lipoxins (15-epi-lipoxin A4/B4), aspirin-triggered resolvins, and aspirin-triggered maresins. These mediators possess potent anti-inflammatory activity. It is proposed that this aspirin-triggered transition of COX-2 from cyclooxygenase to lipoxygenase activity and the consequential formation of specialized proresolving mediators contributes to the anti-inflammatory effects of aspirin.
Additional mechanisms
Aspirin has been shown to have at least three additional modes of action. It uncouples oxidative phosphorylation in cartilaginous (and hepatic) mitochondria by diffusing from the inner membrane space as a proton carrier back into the mitochondrial matrix, where it ionizes once again to release protons. Aspirin buffers and transports the protons. When high doses are given, it may actually cause fever, owing to the heat released from the electron transport chain, as opposed to the antipyretic action of aspirin seen with lower doses. In addition, aspirin induces the formation of NO-radicals in the body, which have been shown in mice to have an independent mechanism of reducing inflammation. The resulting reduced leukocyte adhesion is an important step in the immune response to infection; however, evidence is insufficient to show aspirin helps to fight infection. More recent data also suggest salicylic acid and its derivatives modulate signalling through NF-κB. NF-κB, a transcription factor complex, plays a central role in many biological processes, including inflammation. Aspirin is readily broken down in the body to salicylic acid, which itself has anti-inflammatory, antipyretic, and analgesic effects. In 2012, salicylic acid was found to activate AMP-activated protein kinase, which has been suggested as a possible explanation for some of the effects of both salicylic acid and aspirin. The acetyl portion of the aspirin molecule has its own targets. Acetylation of cellular proteins is a well-established phenomenon in the regulation of protein function at the post-translational level. Aspirin is able to acetylate several other targets in addition to COX isoenzymes. These acetylation reactions may explain many hitherto unexplained effects of aspirin.
Formulations
Aspirin is produced in many formulations, with some differences in effect. In particular, aspirin can cause gastrointestinal bleeding, and formulations are sought which deliver the benefits of aspirin while mitigating harmful bleeding. Formulations may be combined (e.g., buffered + vitamin C). Common forms include:
Tablets, typically of about 75–100 mg and 300–320 mg of immediate-release aspirin (IR-ASA).
Dispersible tablets.
Enteric-coated tablets.
Buffered formulations containing aspirin with one of many buffering agents.
Formulations of aspirin with vitamin C (ASA-VitC).
A phospholipid-aspirin complex liquid formulation, PL-ASA; the phospholipid coating was being trialled to determine whether it caused less gastrointestinal damage.
Pharmacokinetics
Acetylsalicylic acid is a weak acid, and very little of it is ionized in the stomach after oral administration.
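That pH dependence can be made quantitative with the Henderson–Hasselbalch relation, using the pKa of 3.5 quoted above. The snippet below is a minimal sketch; the stomach and intestinal pH values are typical textbook figures assumed here for illustration, not measurements from this article.

```python
# Ionized fraction of a weak acid from the Henderson-Hasselbalch relation:
# fraction ionized = 1 / (1 + 10**(pKa - pH)).
PKA_ASPIRIN = 3.5  # value quoted in the Physical properties section

def fraction_ionized(ph: float, pka: float = PKA_ASPIRIN) -> float:
    """Fraction of the acid present in the ionized (deprotonated) form at a given pH."""
    return 1.0 / (1.0 + 10 ** (pka - ph))

for site, ph in [("stomach (pH ~1.5, assumed)", 1.5), ("small intestine (pH ~6.0, assumed)", 6.0)]:
    print(f"{site}: {fraction_ionized(ph):.1%} ionized")
```

At gastric pH nearly all of the drug is in the un-ionized form, consistent with the statement above, while at intestinal pH the ionized fraction dominates.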
Acetylsalicylic acid is quickly absorbed through the cell membrane in the acidic conditions of the stomach. The increased pH and larger surface area of the small intestine causes aspirin to be absorbed more slowly there, as more of it is ionized. Owing to the formation of concretions, aspirin is absorbed much more slowly during overdose, and plasma concentrations can continue to rise for up to 24 hours after ingestion. About 50–80% of salicylate in the blood is bound to human serum albumin, while the rest remains in the active, ionized state; protein binding is concentration-dependent. Saturation of binding sites leads to more free salicylate and increased toxicity. The volume of distribution is 0.1–0.2 L/kg. Acidosis increases the volume of distribution because of enhancement of tissue penetration of salicylates. As much as 80% of therapeutic doses of salicylic acid is metabolized in the liver. Conjugation with glycine forms salicyluric acid, and with glucuronic acid to form two different glucuronide esters. The conjugate with the acetyl group intact is referred to as the acyl glucuronide; the deacetylated conjugate is the phenolic glucuronide. These metabolic pathways have only a limited capacity. Small amounts of salicylic acid are also hydroxylated to gentisic acid. With large salicylate doses, the kinetics switch from first-order to zero-order, as metabolic pathways become saturated and renal excretion becomes increasingly important. Salicylates are excreted mainly by the kidneys as salicyluric acid (75%), free salicylic acid (10%), salicylic phenol (10%), and acyl glucuronides (5%), gentisic acid (< 1%), and 2,3-dihydroxybenzoic acid. When small doses (less than 250mg in an adult) are ingested, all pathways proceed by first-order kinetics, with an elimination half-life of about 2.0 h to 4.5 h. When higher doses of salicylate are ingested (more than 4 g), the half-life becomes much longer (15 h to 30 h), because the biotransformation pathways concerned with the formation of salicyluric acid and salicyl phenolic glucuronide become saturated. Renal excretion of salicylic acid becomes increasingly important as the metabolic pathways become saturated, because it is extremely sensitive to changes in urinary pH. A 10- to 20-fold increase in renal clearance occurs when urine pH is increased from 5 to 8. The use of urinary alkalinization exploits this particular aspect of salicylate elimination. It was found that short-term aspirin use in therapeutic doses might precipitate reversible acute kidney injury when the patient was ill with glomerulonephritis or cirrhosis. Aspirin for some patients with chronic kidney disease and some children with congestive heart failure was contraindicated. History Medicines made from willow and other salicylate-rich plants appear in clay tablets from ancient Sumer as well as the Ebers Papyrus from ancient Egypt. Hippocrates referred to the use of salicylic tea to reduce fevers around 400 BC, and willow bark preparations were part of the pharmacopoeia of Western medicine in classical antiquity and the Middle Ages. Willow bark extract became recognized for its specific effects on fever, pain, and inflammation in the mid-eighteenth century after the Rev Edward Stone of Chipping Norton, Oxfordshire, noticed that the bitter taste of willow bark resembled the taste of the bark of the cinchona tree, known as "Peruvian bark", which was used successfully in Peru to treat a variety of ailments. 
Stone experimented with preparations of powdered willow bark on people in Chipping Norton for five years and found it to be as effective as Peruvian bark and a cheaper domestic version. In 1763 he sent a report of his findings to the Royal Society in London. By the nineteenth century, pharmacists were experimenting with and prescribing a variety of chemicals related to salicylic acid, the active component of willow extract. In 1853, chemist Charles Frédéric Gerhardt treated sodium salicylate with acetyl chloride to produce acetylsalicylic acid for the first time; in the second half of the 19th century, other academic chemists established the compound's chemical structure and devised more efficient methods of synthesis. In 1897, scientists at the drug and dye firm Bayer began investigating acetylsalicylic acid as a less-irritating replacement for standard common salicylate medicines, and identified a new way to synthesize it. That year, Felix Hoffmann (or Arthur Eichengrün) of Bayer was the first to produce acetylsalicylic acid in a pure, stable form. By 1899, Bayer had dubbed this drug Aspirin and was selling it globally. The word Aspirin was Bayer's brand name, rather than the generic name of the drug; however, Bayer's rights to the trademark were lost or sold in many countries. Aspirin's popularity grew over the first half of the 20th century leading to fierce competition with the proliferation of aspirin brands and products. Aspirin's popularity declined after the development of acetaminophen/paracetamol in 1956 and ibuprofen in 1962. In the 1960s and 1970s, John Vane and others discovered the basic mechanism of aspirin's effects, while clinical trials and other studies from the 1960s to the 1980s established aspirin's efficacy as an anti-clotting agent that reduces the risk of clotting diseases. The initial large studies on the use of low-dose aspirin to prevent heart attacks that were published in the 1970s and 1980s helped spur reform in clinical research ethics and guidelines for human subject research and US federal law, and are often cited as examples of clinical trials that included only men, but from which people drew general conclusions that did not hold true for women. Aspirin sales revived considerably in the last decades of the 20th century, and remain strong in the 21st century with widespread use as a preventive treatment for heart attacks and strokes. Trademark Bayer lost its trademark for Aspirin in the United States and some other countries in actions taken between 1918 and 1921 because it had failed to use the name for its own product correctly and had for years allowed the use of "Aspirin" by other manufacturers without defending the intellectual property rights. Today, aspirin is a generic trademark in many countries. Aspirin, with a capital "A", remains a registered trademark of Bayer in Germany, Canada, Mexico, and in over 80 other countries, for acetylsalicylic acid in all markets, but using different packaging and physical aspects for each. Compendial status United States Pharmacopeia British Pharmacopoeia Medical use Aspirin is used in the treatment of a number of conditions, including fever, pain, rheumatic fever, and inflammatory conditions, such as rheumatoid arthritis, pericarditis, and Kawasaki disease. Lower doses of aspirin have also been shown to reduce the risk of death from a heart attack, or the risk of stroke in people who are at high risk or who have cardiovascular disease, but not in elderly people who are otherwise healthy. 
There is evidence that aspirin is effective at preventing colorectal cancer, though the mechanisms of this effect are unclear. Pain Aspirin is an effective analgesic for acute pain, although it is generally considered inferior to ibuprofen because aspirin is more likely to cause gastrointestinal bleeding. Aspirin is generally ineffective for those pains caused by muscle cramps, bloating, gastric distension, or acute skin irritation. As with other NSAIDs, combinations of aspirin and caffeine provide slightly greater pain relief than aspirin alone. Effervescent formulations of aspirin relieve pain faster than aspirin in tablets, which makes them useful for the treatment of migraines. Topical aspirin may be effective for treating some types of neuropathic pain. Aspirin, either by itself or in a combined formulation, effectively treats certain types of a headache, but its efficacy may be questionable for others. Secondary headaches, meaning those caused by another disorder or trauma, should be promptly treated by a medical provider. Among primary headaches, the International Classification of Headache Disorders distinguishes between tension headache (the most common), migraine, and cluster headache. Aspirin or other over-the-counter analgesics are widely recognized as effective for the treatment of tension headaches. Aspirin, especially as a component of an aspirin/paracetamol/caffeine combination, is considered a first-line therapy in the treatment of migraine, and comparable to lower doses of sumatriptan. It is most effective at stopping migraines when they are first beginning. Fever Like its ability to control pain, aspirin's ability to control fever is due to its action on the prostaglandin system through its irreversible inhibition of COX. Although aspirin's use as an antipyretic in adults is well established, many medical societies and regulatory agencies, including the American Academy of Family Physicians, the American Academy of Pediatrics, and the Food and Drug Administration, strongly advise against using aspirin for the treatment of fever in children because of the risk of Reye's syndrome, a rare but often fatal illness associated with the use of aspirin or other salicylates in children during episodes of viral or bacterial infection. Because of the risk of Reye's syndrome in children, in 1986, the US Food and Drug Administration (FDA) required labeling on all aspirin-containing medications advising against its use in children and teenagers. Inflammation Aspirin is used as an anti-inflammatory agent for both acute and long-term inflammation, as well as for the treatment of inflammatory diseases, such as rheumatoid arthritis. Heart attacks and strokes Aspirin is an important part of the treatment of those who have had a heart attack. It is generally not recommended for routine use by people with no other health problems, including those over the age of 70. The 2009 Antithrombotic Trialists' Collaboration published in Lancet evaluated the efficacy and safety of low dose aspirin in secondary prevention. In those with prior ischaemic stroke or acute myocardial infarction, daily low dose aspirin was associated with a 19% relative risk reduction of serious cardiovascular events (non-fatal myocardial infarction, non-fatal stroke, or vascular death). This did come at the expense of a 0.19% absolute risk increase in gastrointestinal bleeding; however, the benefits outweigh the hazard risk in this case. 
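To make the relative and absolute figures above comparable, the sketch below converts a relative risk reduction into an absolute risk reduction and a number needed to treat (NNT), alongside a number needed to harm (NNH) derived from an absolute risk increase. The 2% baseline annual event rate is a hypothetical value chosen only for illustration; the 19% and 0.19% figures are the ones cited above.

```python
def risk_summary(baseline_rate, relative_risk_reduction, absolute_risk_increase):
    # Absolute risk reduction depends on the assumed baseline event rate.
    arr = baseline_rate * relative_risk_reduction   # absolute risk reduction
    nnt = 1.0 / arr                                 # patients treated to prevent one event
    nnh = 1.0 / absolute_risk_increase              # patients treated per one extra bleed
    return arr, nnt, nnh

# Hypothetical 2% baseline rate of serious cardiovascular events, combined with
# the 19% relative risk reduction and the 0.19% absolute increase in
# gastrointestinal bleeding cited above.
arr, nnt, nnh = risk_summary(0.02, 0.19, 0.0019)
print(f"absolute risk reduction: {arr:.2%}, NNT: {nnt:.0f}, NNH: {nnh:.0f}")
```

Under this assumed baseline, roughly one serious cardiovascular event would be prevented for every 263 people treated, at the cost of roughly one additional gastrointestinal bleed per 526 people treated; the actual balance depends on each population's baseline risks.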
Data from previous trials have suggested that weight-based dosing of aspirin has greater benefits in the primary prevention of cardiovascular outcomes. However, more recent trials were unable to replicate similar outcomes using low-dose aspirin in people with low body weight (<70 kg) in the specific subsets of the population studied (i.e., elderly and diabetic populations), and more evidence is required on the effect of high-dose aspirin in people with high body weight (≥70 kg). After percutaneous coronary interventions (PCIs), such as the placement of a coronary artery stent, a U.S. Agency for Healthcare Research and Quality guideline recommends that aspirin be taken indefinitely. Frequently, aspirin is combined with an ADP receptor inhibitor, such as clopidogrel, prasugrel, or ticagrelor, to prevent blood clots. This is called dual antiplatelet therapy (DAPT). Guidance on the duration of DAPT in United States and European Union guidelines followed the CURE and PRODIGY studies. In 2020, a systematic review and network meta-analysis by Khan et al. showed promising benefits of short-term (<6 months) DAPT followed by P2Y12 inhibitors in selected patients, as well as benefits of extended-term (>12 months) DAPT in high-risk patients. In conclusion, the optimal duration of DAPT after PCI should be personalized by weighing each patient's risk of ischemic events against their risk of bleeding events, with consideration of multiple patient-related and procedure-related factors. Moreover, aspirin should be continued indefinitely after DAPT is complete. The status of aspirin for the primary prevention of cardiovascular disease is conflicting and inconsistent: guidelines have shifted away from the broad recommendations of earlier decades, and some newer trials referenced in clinical guidelines show less benefit from adding aspirin alongside antihypertensive and cholesterol-lowering therapies. The ASCEND study demonstrated that in high-bleeding-risk diabetics with no prior cardiovascular disease, there is no overall clinical benefit (12% decrease in risk of ischaemic events vs. 29% increase in GI bleeding) of low-dose aspirin in preventing serious vascular events over a period of 7.4 years. Similarly, the results of the ARRIVE study also showed no benefit of the same dose of aspirin in reducing the time to first cardiovascular outcome in patients with moderate risk of cardiovascular disease over a period of five years. Aspirin has also been suggested as a component of a polypill for prevention of cardiovascular disease. Complicating the use of aspirin for prevention is the phenomenon of aspirin resistance. For people who are resistant, aspirin's efficacy is reduced. Some authors have suggested testing regimens to identify people who are resistant to aspirin. The United States Preventive Services Task Force (USPSTF) has determined that there is a "small net benefit" for patients aged 40–59 with a 10% or greater 10-year cardiovascular disease (CVD) risk, and "no net benefit" for patients aged over 60. The net benefit was determined by balancing the reduction in risk of heart attacks and ischaemic strokes from taking aspirin against the increased risk of gastrointestinal bleeding, intracranial bleeding, and hemorrhagic strokes. 
Their recommendations state that age changes the balance of risks and benefits: the benefit of aspirin is greatest when it is started at a younger age, while the risk of bleeding, although small, increases with age, particularly for adults over 60, and can be compounded by other risk factors such as diabetes and a history of gastrointestinal bleeding. As a result, the USPSTF suggests that "people ages 40 to 59 who are at higher risk for CVD should decide with their clinician whether to start taking aspirin; people 60 or older should not start taking aspirin to prevent a first heart attack or stroke." Primary prevention guidelines made by the American College of Cardiology and the American Heart Association state that aspirin might be considered for patients aged 40–69 with a higher risk of atherosclerotic CVD and without an increased bleeding risk, while aspirin would not be recommended for patients aged over 70 or adults of any age with an increased bleeding risk. They state that a CVD risk estimation and a risk discussion should take place before starting aspirin, and that aspirin should be used "infrequently in the routine primary prevention of (atherosclerotic CVD) because of lack of net benefit". The European Society of Cardiology has made similar recommendations, considering aspirin only for patients aged less than 70 at high or very high CVD risk, without any clear contraindications, on a case-by-case basis that weighs both ischemic risk and bleeding risk. Cancer prevention Aspirin may reduce the overall risk of both getting cancer and dying from cancer. There is substantial evidence that it lowers the risk of colorectal cancer (CRC), but aspirin must be taken for at least 10–20 years to see this benefit. It may also slightly reduce the risk of endometrial cancer and prostate cancer. Some conclude that the benefits are greater than the risks due to bleeding in those at average risk; others find it unclear whether the benefits are greater than the risks. Given this uncertainty, the 2007 United States Preventive Services Task Force (USPSTF) guidelines on this topic recommended against the use of aspirin for prevention of CRC in people with average risk. Nine years later, however, the USPSTF issued a grade B recommendation for the use of low-dose aspirin (75 to 100 mg/day) "for the primary prevention of CVD [cardiovascular disease] and CRC in adults 50 to 59 years of age who have a 10% or greater 10-year CVD risk, are not at increased risk for bleeding, have a life expectancy of at least 10 years, and are willing to take low-dose aspirin daily for at least 10 years". A meta-analysis of studies through 2019 found an association between taking aspirin and a lower risk of cancer of the colorectum, esophagus, and stomach. In 2021, the U.S. Preventive Services Task Force raised questions about the use of aspirin in cancer prevention. It noted the results of the 2018 ASPREE (Aspirin in Reducing Events in the Elderly) trial, in which the risk of cancer-related death was higher in the aspirin-treated group than in the placebo group. Psychiatry Bipolar disorder Aspirin, along with several other agents with anti-inflammatory properties, has been repurposed as an add-on treatment for depressive episodes in subjects with bipolar disorder, in light of the possible role of inflammation in the pathogenesis of severe mental disorders. A 2022 systematic review concluded that aspirin exposure reduced the risk of depression in a pooled cohort of three studies (HR 0.624, 95% CI: 0.0503, 1.198, P=0.033). 
However, further high-quality, longer-duration, double-blind randomized controlled trials (RCTs) are needed to determine whether aspirin is an effective add-on treatment for bipolar depression. Thus, notwithstanding the biological rationale, the clinical perspectives of aspirin and anti-inflammatory agents in the treatment of bipolar depression remain uncertain. Dementia Although cohort and longitudinal studies have shown low-dose aspirin has a greater likelihood of reducing the incidence of dementia, numerous randomized controlled trials have not validated this. Schizophrenia Some researchers have speculated the anti-inflammatory effects of aspirin may be beneficial for schizophrenia. Small trials have been conducted but evidence remains lacking. Other uses Aspirin is a first-line treatment for the fever and joint-pain symptoms of acute rheumatic fever. The therapy often lasts for one to two weeks, and is rarely indicated for longer periods. After fever and pain have subsided, the aspirin is no longer necessary, since it does not decrease the incidence of heart complications and residual rheumatic heart disease. Naproxen has been shown to be as effective as aspirin and less toxic, but due to the limited clinical experience, naproxen is recommended only as a second-line treatment. Along with rheumatic fever, Kawasaki disease remains one of the few indications for aspirin use in children in spite of a lack of high quality evidence for its effectiveness. Low-dose aspirin supplementation has moderate benefits when used for prevention of pre-eclampsia. This benefit is greater when started in early pregnancy. Aspirin has also demonstrated anti-tumoral effects, via inhibition of the PTTG1 gene, which is often overexpressed in tumors. Resistance For some people, aspirin does not have as strong an effect on platelets as for others, an effect known as aspirin-resistance or insensitivity. One study has suggested women are more likely to be resistant than men, and a different, aggregate study of 2,930 people found 28% were resistant. A study in 100 Italian people found, of the apparent 31% aspirin-resistant subjects, only 5% were truly resistant, and the others were noncompliant. Another study of 400 healthy volunteers found no subjects who were truly resistant, but some had "pseudoresistance, reflecting delayed and reduced drug absorption". Meta-analysis and systematic reviews have concluded that laboratory confirmed aspirin resistance confers increased rates of poorer outcomes in cardiovascular and neurovascular diseases. Although the majority of research conducted has surrounded cardiovascular and neurovascular, there is emerging research into the risk of aspirin resistance after orthopaedic surgery where aspirin is used for venous thromboembolism prophylaxis. Aspirin resistance in orthopaedic surgery, specifically after total hip and knee arthroplasties, is of interest as risk factors for aspirin resistance are also risk factors for venous thromboembolisms and osteoarthritis; the sequelae of requiring a total hip or knee arthroplasty. Some of these risk factors include obesity, advancing age, diabetes mellitus, dyslipidemia and inflammatory diseases. Dosages Adult aspirin tablets are produced in standardised sizes, which vary slightly from country to country, for example 300mg in Britain and 325mg in the United States. Smaller doses are based on these standards, e.g., 75mg and 81mg tablets. 
The 81 mg tablets are commonly called "baby aspirin" or "baby-strength", because they were originally (but are no longer) intended to be administered to infants and children. The slight difference in dosage between the 75 mg and the 81 mg tablets has no medical significance. The dose required for benefit appears to depend on a person's weight. For those weighing less than 70 kg, low-dose aspirin is effective for preventing cardiovascular disease; for patients above this weight, higher doses are required. In general, for adults, doses are taken four times a day for fever or arthritis, with doses near the maximal daily dose used historically for the treatment of rheumatic fever. For the prevention of myocardial infarction (MI) in someone with documented or suspected coronary artery disease, much lower doses are taken once daily. March 2009 recommendations from the USPSTF on the use of aspirin for the primary prevention of coronary heart disease encourage men aged 45–79 and women aged 55–79 to use aspirin when the potential benefit of a reduction in MI for men or stroke for women outweighs the potential harm of an increase in gastrointestinal hemorrhage. The WHI study of postmenopausal women found that aspirin resulted in a 25% lower risk of death from cardiovascular disease and a 14% lower risk of death from any cause, though there was no significant difference between 81 mg and 325 mg aspirin doses. The 2021 ADAPTABLE study also showed no significant difference in cardiovascular events or major bleeding between 81 mg and 325 mg doses of aspirin in patients (both men and women) with established cardiovascular disease. Low-dose aspirin use was also associated with a trend toward lower risk of cardiovascular events, and lower aspirin doses (75 or 81 mg/day) may optimize efficacy and safety for people requiring aspirin for long-term prevention. In children with Kawasaki disease, aspirin is taken at dosages based on body weight, initially four times a day for up to two weeks and then at a lower dose once daily for a further six to eight weeks. Adverse effects In October 2020, the US Food and Drug Administration (FDA) required the drug label to be updated for all nonsteroidal anti-inflammatory medications to describe the risk of kidney problems in unborn babies that result in low amniotic fluid. The FDA recommends avoiding NSAIDs in pregnant women at 20 weeks or later in pregnancy. One exception to the recommendation is the use of low-dose (81 mg) aspirin at any point in pregnancy under the direction of a health care professional. Contraindications Aspirin should not be taken by people who are allergic to ibuprofen or naproxen, or who have salicylate intolerance or a more generalized drug intolerance to NSAIDs, and caution should be exercised in those with asthma or NSAID-precipitated bronchospasm. Owing to its effect on the stomach lining, manufacturers recommend that people with peptic ulcers, mild diabetes, or gastritis seek medical advice before using aspirin. Even if none of these conditions is present, the risk of stomach bleeding is still increased when aspirin is taken with alcohol or warfarin. People with hemophilia or other bleeding tendencies should not take aspirin or other salicylates. Aspirin is known to cause hemolytic anemia in people who have the genetic disease glucose-6-phosphate dehydrogenase deficiency, particularly in large doses and depending on the severity of the disease. Use of aspirin during dengue fever is not recommended owing to increased bleeding tendency. 
Aspirin taken at doses of ≤325 mg and ≤100 mg per day for ≥2 days can increase the odds of suffering a gout attack by 81% and 91%, respectively. This effect may potentially be worsened by high-purine diets, diuretics, and kidney disease, but is eliminated by the urate-lowering drug allopurinol. Daily low-dose aspirin does not appear to worsen kidney function. In people with moderate chronic kidney disease (CKD) who do not have established cardiovascular disease, aspirin may reduce cardiovascular risk without significantly increasing the risk of bleeding. Aspirin should not be given to children or adolescents under the age of 16 to control cold or influenza symptoms, as this has been linked with Reye's syndrome. Gastrointestinal Aspirin increases the risk of upper gastrointestinal bleeding. An enteric coating may be applied during manufacturing to prevent the release of aspirin in the stomach and thereby reduce gastric harm, but enteric coating does not reduce the risk of gastrointestinal bleeding. Enteric-coated aspirin may also not be as effective at reducing blood clot risk. Combining aspirin with other NSAIDs has been shown to further increase the risk of gastrointestinal bleeding. Using aspirin in combination with clopidogrel or warfarin also increases the risk of upper gastrointestinal bleeding. Blockade of COX-1 by aspirin apparently results in the upregulation of COX-2 as part of a gastric defense. There is no clear evidence that simultaneous use of a COX-2 inhibitor with aspirin increases the risk of gastrointestinal injury. "Buffering" is an additional method intended to mitigate gastrointestinal bleeding, for example by preventing aspirin from concentrating in the walls of the stomach, although the benefits of buffered aspirin are disputed. Almost any buffering agent used in antacids can be used; Bufferin, for example, uses magnesium oxide. Other preparations use calcium carbonate. Gas-forming agents in effervescent tablet and powder formulations can also double as buffering agents, one example being sodium bicarbonate, used in Alka-Seltzer. Taking vitamin C with aspirin has been investigated as a method of protecting the stomach lining. In trials, vitamin C-releasing aspirin (ASA-VitC) or a buffered aspirin formulation containing vitamin C was found to cause less stomach damage than aspirin alone. Retinal vein occlusion It is a widespread practice among eye specialists (ophthalmologists) to prescribe aspirin as an add-on medication for patients with retinal vein occlusion (RVO), such as central retinal vein occlusion (CRVO) and branch retinal vein occlusion (BRVO). The reason for this widespread use is aspirin's proven effectiveness in major systemic venous thrombotic disorders, and it has been assumed that it may be similarly beneficial in the various types of retinal vein occlusion. However, a large-scale investigation based on data from nearly 700 patients showed "that aspirin or other antiplatelet aggregating agents or anticoagulants adversely influence the visual outcome in patients with CRVO and hemi-CRVO, without any evidence of protective or beneficial effect". Several expert groups, including the Royal College of Ophthalmologists, have recommended against the use of antithrombotic drugs (including aspirin) for patients with RVO. Central effects Based on experiments in rats, large doses of salicylate, a metabolite of aspirin, cause temporary tinnitus (ringing in the ears) via actions on the arachidonic acid and NMDA receptor cascades. 
Reye's syndrome Reye's syndrome, a rare but severe illness characterized by acute encephalopathy and fatty liver, can occur when children or adolescents are given aspirin for a fever or other illness or infection. From 1981 to 1997, 1,207 cases of Reye's syndrome in people younger than 18 were reported to the US Centers for Disease Control and Prevention (CDC). Of these, 93% reported being ill in the three weeks preceding the onset of Reye's syndrome, most commonly with a respiratory infection, chickenpox, or diarrhea. Salicylates were detectable in 81.9% of children for whom test results were reported. After the association between Reye's syndrome and aspirin was reported, and safety measures to prevent it (including a Surgeon General's warning and changes to the labeling of aspirin-containing drugs) were implemented, aspirin use by children declined considerably in the United States, as did the number of reported cases of Reye's syndrome; a similar decline was found in the United Kingdom after warnings against pediatric aspirin use were issued. The US Food and Drug Administration recommends that aspirin (or aspirin-containing products) should not be given to anyone under the age of 12 who has a fever, and the UK National Health Service recommends that children under 16 years of age should not take aspirin unless it is on the advice of a doctor. Skin For a small number of people, taking aspirin can result in symptoms including hives, swelling, and headache. Aspirin can exacerbate symptoms among those with chronic hives, or create acute symptoms of hives. These responses can be due to allergic reactions to aspirin, or more often to its inhibition of the COX-1 enzyme. Skin reactions may also be tied to systemic contraindications, as seen with NSAID-precipitated bronchospasm or in those with atopy. Aspirin and other NSAIDs, such as ibuprofen, may delay the healing of skin wounds. Earlier findings from two small, low-quality trials suggested a benefit of aspirin (alongside compression therapy) on venous leg ulcer healing time and leg ulcer size; however, larger, more recent studies of higher quality have been unable to corroborate these outcomes. As such, further research is required to clarify the role of aspirin in this context. Other adverse effects Aspirin can induce swelling of skin tissues in some people. In one study, angioedema appeared one to six hours after ingesting aspirin in some of the participants. However, when the aspirin was taken alone, it did not cause angioedema in these people; angioedema appeared only when the aspirin had been taken in combination with another NSAID. Aspirin causes an increased risk of cerebral microbleeds, which appear on MRI scans as hypointense (dark) patches of 5 to 10 mm or smaller. A study of a group with a mean aspirin dosage of 270 mg per day estimated an average absolute risk increase in intracerebral hemorrhage (ICH) of 12 events per 10,000 persons. In comparison, the estimated absolute risk reduction was 137 myocardial infarction events per 10,000 persons and 39 ischemic stroke events per 10,000 persons. In cases where ICH has already occurred, aspirin use results in higher mortality, with a dose of about 250 mg per day resulting in a relative risk of death within three months after the ICH of around 2.5 (95% confidence interval 1.3 to 4.6). 
Aspirin and other NSAIDs can cause abnormally high blood levels of potassium by inducing a hyporeninemic hypoaldosteronism state via inhibition of prostaglandin synthesis; however, these agents do not typically cause hyperkalemia by themselves in the setting of normal renal function and a euvolemic state. Use of low-dose aspirin before a surgical procedure has been associated with an increased risk of bleeding events in some patients; however, ceasing aspirin prior to surgery has also been associated with an increase in major adverse cardiac events. An analysis of multiple studies found a three-fold increase in adverse events such as myocardial infarction in patients who ceased aspirin prior to surgery. The analysis found that the risk depends on the type of surgery being performed and the patient's indication for aspirin use. On 9 July 2015, the US Food and Drug Administration (FDA) toughened warnings of increased heart attack and stroke risk associated with nonsteroidal anti-inflammatory drugs (NSAIDs). Aspirin is an NSAID but is not affected by the new warnings. Overdose Aspirin overdose can be acute or chronic. In acute poisoning, a single large dose is taken; in chronic poisoning, higher than normal doses are taken over a period of time. Acute overdose has a mortality rate of 2%. Chronic overdose is more commonly lethal, with a mortality rate of 25%; chronic overdose may be especially severe in children. Toxicity is managed with a number of potential treatments, including activated charcoal, intravenous dextrose and normal saline, sodium bicarbonate, and dialysis. The diagnosis of poisoning usually involves measurement of plasma salicylate, the active metabolite of aspirin, by automated spectrophotometric methods. Plasma salicylate levels generally range from 30 to 100 mg/L after usual therapeutic doses, 50–300 mg/L in people taking high doses, and 700–1400 mg/L following acute overdose. Salicylate is also produced as a result of exposure to bismuth subsalicylate, methyl salicylate, and sodium salicylate. Interactions Aspirin is known to interact with other drugs. For example, acetazolamide and ammonium chloride are known to enhance the intoxicating effect of salicylates, and alcohol also increases the gastrointestinal bleeding associated with these types of drugs. Aspirin is known to displace a number of drugs from protein-binding sites in the blood, including the antidiabetic drugs tolbutamide and chlorpropamide, warfarin, methotrexate, phenytoin, probenecid, valproic acid (as well as interfering with beta oxidation, an important part of valproate metabolism), and other NSAIDs. Corticosteroids may also reduce the concentration of aspirin. Other NSAIDs, such as ibuprofen and naproxen, may reduce the antiplatelet effect of aspirin, although limited evidence suggests this may not result in a reduced cardioprotective effect. Analgesic doses of aspirin decrease the sodium loss induced by spironolactone in the urine; however, this does not reduce the antihypertensive effects of spironolactone. Furthermore, antiplatelet doses of aspirin are deemed too small to produce an interaction with spironolactone. Aspirin is known to compete with penicillin G for renal tubular secretion. Aspirin may also inhibit the absorption of vitamin C. Research The ISIS-2 trial demonstrated that aspirin, at a dose of 160 mg daily for one month, decreased mortality in the first five weeks by 21% among participants with a suspected myocardial infarction. 
A single daily dose of 324 mg of aspirin for 12 weeks has a highly protective effect against acute myocardial infarction and death in men with unstable angina. Bipolar disorder Aspirin has been repurposed as an add-on treatment for depressive episodes in subjects with bipolar disorder. However, meta-analytic evidence is based on very few studies and does not suggest any efficacy of aspirin in the treatment of bipolar depression. Thus, notwithstanding the biological rationale, the clinical perspectives of aspirin and anti-inflammatory agents in the treatment of bipolar depression remain uncertain. Infectious diseases Several studies have investigated the anti-infective properties of aspirin for bacterial, viral, and parasitic infections. Aspirin was demonstrated to limit platelet activation induced by Staphylococcus aureus and Enterococcus faecalis and to reduce streptococcal adhesion to heart valves. In patients with tuberculous meningitis, the addition of aspirin reduced the risk of new cerebral infarction [RR = 0.52 (0.29–0.92)]. A role for aspirin against bacterial and fungal biofilms is also supported by growing evidence. Cancer prevention Evidence from observational studies on the effect of aspirin in breast cancer prevention was conflicting; a randomized controlled trial showed that aspirin had no significant effect in reducing breast cancer, so further studies are needed to clarify the effect of aspirin in cancer prevention. In gardening There are many anecdotal reports that aspirin can improve plant growth and resistance, though most research has involved salicylic acid rather than aspirin itself. Veterinary medicine Aspirin is sometimes used in veterinary medicine as an anticoagulant or to relieve pain associated with musculoskeletal inflammation or osteoarthritis. Aspirin should be given to animals only under the direct supervision of a veterinarian, as adverse effects, including gastrointestinal issues, are common. An aspirin overdose in any species may result in salicylate poisoning, characterized by hemorrhaging, seizures, coma, and even death. Dogs are better able to tolerate aspirin than cats are. Cats metabolize aspirin slowly because they lack the glucuronide conjugates that aid in the excretion of aspirin, making it potentially toxic if dosing is not spaced out properly. No clinical signs of toxicosis occurred when cats were given 25 mg/kg of aspirin every 48 hours for 4 weeks, but the recommended dose for relief of pain and fever and for treating blood clotting diseases in cats is 10 mg/kg every 48 hours to allow time for metabolization.
Biology and health sciences
Drugs and pharmacology
null
1537
https://en.wikipedia.org/wiki/Acupuncture
Acupuncture
Acupuncture is a form of alternative medicine and a component of traditional Chinese medicine (TCM) in which thin needles are inserted into the body. Acupuncture is a pseudoscience; the theories and practices of TCM are not based on scientific knowledge, and it has been characterized as quackery. There is a range of acupuncture technological variants that originated in different philosophies, and techniques vary depending on the country in which it is performed. However, it can be divided into two main foundational philosophical applications and approaches; the first being the modern standardized form called eight principles TCM and the second being an older system that is based on the ancient Daoist wuxing, better known as the five elements or phases in the West. Acupuncture is most often used to attempt pain relief, though acupuncturists say that it can also be used for a wide range of other conditions. Acupuncture is typically used in combination with other forms of treatment. The global acupuncture market was worth US$24.55 billion in 2017. The market was led by Europe with a 32.7% share, followed by Asia-Pacific with a 29.4% share and the Americas with a 25.3% share. It was estimated in 2021 that the industry would reach a market size of US$55 billion by 2023. The conclusions of trials and systematic reviews of acupuncture generally provide no good evidence of benefit, which suggests that it is not an effective method of healthcare. Acupuncture is generally safe when done by appropriately trained practitioners using clean needle techniques and single-use needles. When properly delivered, it has a low rate of mostly minor adverse effects. When accidents and infections do occur, they are associated with neglect on the part of the practitioner, particularly in the application of sterile techniques. A review conducted in 2013 stated that reports of infection transmission increased significantly in the preceding decade. The most frequently reported adverse events were pneumothorax and infections. Since serious adverse events continue to be reported, it is recommended that acupuncturists be trained sufficiently to reduce the risk. Scientific investigation has not found any histological or physiological evidence for traditional Chinese concepts such as qi, meridians, and acupuncture points, and many modern practitioners no longer support the existence of qi or meridians, which was a major part of early belief systems. Acupuncture is believed to have originated around 100 BC in China, around the time The Inner Classic of Huang Di (Huangdi Neijing) was published, though some experts suggest it could have been practiced earlier. Over time, conflicting claims and belief systems emerged about the effect of lunar, celestial and earthly cycles, yin and yang energies, and a body's "rhythm" on the effectiveness of treatment. Acupuncture fluctuated in popularity in China due to changes in the country's political leadership and the preferential use of rationalism or scientific medicine. Acupuncture spread first to Korea in the 6th century AD, then to Japan through medical missionaries, and then to Europe, beginning with France. In the 20th century, as it spread to the United States and Western countries, spiritual elements of acupuncture that conflicted with scientific knowledge were sometimes abandoned in favor of simply tapping needles into acupuncture points. Clinical practice Acupuncture is a form of alternative medicine. 
It is used most commonly for pain relief, though it is also used to treat a wide range of conditions. Acupuncture is generally only used in combination with other forms of treatment. For example, the American Society of Anesthesiologists states it may be considered in the treatment of nonspecific, noninflammatory low back pain only in conjunction with conventional therapy. Acupuncture is the insertion of thin needles into the skin. According to the Mayo Foundation for Medical Education and Research (Mayo Clinic), a typical session entails lying still while approximately five to twenty needles are inserted; for the majority of cases, the needles will be left in place for ten to twenty minutes. It can be associated with the application of heat, pressure, or laser light. Classically, acupuncture is individualized and based on philosophy and intuition, and not on scientific research. There is also a non-invasive therapy developed in early 20th-century Japan using an elaborate set of instruments other than needles for the treatment of children ( or ). Clinical practice varies depending on the country. A comparison of the average number of patients treated per hour found significant differences between China (10) and the United States (1.2). Chinese herbs are often used. There is a diverse range of acupuncture approaches, involving different philosophies. Although various different techniques of acupuncture practice have emerged, the method used in traditional Chinese medicine (TCM) seems to be the most widely adopted in the US. Traditional acupuncture involves needle insertion, moxibustion, and cupping therapy, and may be accompanied by other procedures such as feeling the pulse and other parts of the body and examining the tongue. Traditional acupuncture involves the belief that a "life force" (qi) circulates within the body in lines called meridians. The main methods practiced in the UK are TCM and Western medical acupuncture. The term Western medical acupuncture is used to indicate an adaptation of TCM-based acupuncture which focuses less on TCM. The Western medical acupuncture approach involves using acupuncture after a medical diagnosis. Limited research has compared the contrasting acupuncture systems used in various countries for determining different acupuncture points, and thus there is no defined standard for acupuncture points. In traditional acupuncture, the acupuncturist decides which points to treat by observing and questioning the patient to make a diagnosis according to the tradition used. In TCM, the four diagnostic methods are: inspection, auscultation and olfaction, inquiring, and palpation. Inspection focuses on the face and particularly on the tongue, including analysis of the tongue size, shape, tension, color and coating, and the absence or presence of teeth marks around the edge. Auscultation and olfaction involve listening for particular sounds, such as wheezing, and observing body odor. Inquiring involves focusing on the "seven inquiries": chills and fever; perspiration; appetite, thirst and taste; defecation and urination; pain; sleep; and menses and leukorrhea. Palpation is focusing on feeling the body for tender points and feeling the pulse. Needles The most common mechanism of stimulation of acupuncture points employs penetration of the skin by thin metal needles, which are manipulated manually or the needle may be further stimulated by electrical stimulation (electroacupuncture). 
Acupuncture needles are typically made of stainless steel, making them flexible and preventing them from rusting or breaking. Needles are usually disposed of after each use to prevent contamination. Reusable needles, when used, should be sterilized between applications. In many areas, only sterile, single-use acupuncture needles are allowed, including the State of California, USA. Needles vary in length, with shorter needles used near the face and eyes and longer needles in areas with thicker tissues; needle diameters also vary, with thicker needles used on more robust patients. Thinner needles may be flexible and require tubes for insertion. The tip of the needle should not be made too sharp, to prevent breakage, although blunt needles cause more pain. Apart from the usual filiform needle, other needle types include three-edged needles and the Nine Ancient Needles. Japanese acupuncturists use extremely thin needles that are used superficially, sometimes without penetrating the skin, and surrounded by a guide tube (a 17th-century invention adopted in China and the West). Korean acupuncture uses copper needles and has a greater focus on the hand. Needling technique Insertion The skin is sterilized and needles are inserted, frequently with a plastic guide tube. Needles may be manipulated in various ways, including spinning, flicking, or moving up and down relative to the skin. Since most pain is felt in the superficial layers of the skin, a quick insertion of the needle is recommended. Often the needles are stimulated by hand in order to cause a dull, localized, aching sensation that is called de qi, as well as "needle grasp", a tugging feeling felt by the acupuncturist and generated by a mechanical interaction between the needle and skin. Acupuncture can be painful. The acupuncturist's skill level may influence the painfulness of the needle insertion; a sufficiently skilled practitioner may be able to insert the needles without causing any pain. The de qi sensation ("arrival of qi") refers to a claimed sensation of numbness, distension, or electrical tingling at the needling site. If these sensations are not observed, inaccurate location of the acupoint, improper depth of needle insertion, or inadequate manual manipulation is blamed. If de qi is not immediately observed upon needle insertion, various manual manipulation techniques are often applied to promote it (such as "plucking", "shaking", or "trembling"). Once de qi is observed, techniques might be used which attempt to "influence" it; for example, through certain manipulations the de qi can allegedly be conducted from the needling site towards more distant sites of the body. Other techniques aim at "tonifying" or "sedating" qi; the former techniques are used in deficiency patterns, the latter in excess patterns. De qi is more important in Chinese acupuncture, while Western and Japanese patients may not consider it a necessary part of the treatment. 
Moxibustion could be direct (the cone was placed directly on the skin and allowed to burn the skin, producing a blister and eventually a scar), or indirect (either a cone of moxa was placed on a slice of garlic, ginger or other vegetable, or a cylinder of moxa was held above the skin, close enough to either warm or burn it). Cupping therapy is an ancient Chinese form of alternative medicine in which a local suction is created on the skin; practitioners believe this mobilizes blood flow in order to promote healing. Tui na is a TCM method of attempting to stimulate the flow of qi by various bare-handed techniques that do not involve needles. Electroacupuncture is a form of acupuncture in which acupuncture needles are attached to a device that generates continuous electric pulses (this has been described as "essentially transdermal electrical nerve stimulation [TENS] masquerading as acupuncture"). Fire needle acupuncture also known as fire needling is a technique which involves quickly inserting a flame-heated needle into areas on the body. Sonopuncture is a stimulation of the body similar to acupuncture using sound instead of needles. This may be done using purpose-built transducers to direct a narrow ultrasound beam to a depth of 6–8 centimetres at acupuncture meridian points on the body. Alternatively, tuning forks or other sound emitting devices are used. Acupuncture point injection is the injection of various substances (such as drugs, vitamins or herbal extracts) into acupoints. This technique combines traditional acupuncture with injection of what is often an effective dose of an approved pharmaceutical drug, and proponents claim that it may be more effective than either treatment alone, especially for the treatment of some kinds of chronic pain. However, a 2016 review found that most published trials of the technique were of poor value due to methodology issues and larger trials would be needed to draw useful conclusions. Auriculotherapy, commonly known as ear acupuncture, auricular acupuncture, or auriculoacupuncture, is considered to date back to ancient China. It involves inserting needles to stimulate points on the outer ear. The modern approach was developed in France during the early 1950s. There is no scientific evidence that it can cure disease; the evidence of effectiveness is negligible. Scalp acupuncture, developed in Japan, is based on reflexological considerations regarding the scalp. Koryo hand acupuncture, developed in Korea, centers around assumed reflex zones of the hand. Medical acupuncture attempts to integrate reflexological concepts, the trigger point model, and anatomical insights (such as dermatome distribution) into acupuncture practice, and emphasizes a more formulaic approach to acupuncture point location. Cosmetic acupuncture is the use of acupuncture in an attempt to reduce wrinkles on the face. Bee venom acupuncture is a treatment approach of injecting purified, diluted bee venom into acupoints. Veterinary acupuncture is the use of acupuncture on domesticated animals. Efficacy , many thousands of papers had been published on the efficacy of acupuncture for the treatment of various adult health conditions, but there was no robust evidence it was beneficial for anything, except shoulder pain and fibromyalgia. 
For Science-Based Medicine, Steven Novella wrote that the overall pattern of evidence was reminiscent of that for homeopathy, compatible with the hypothesis that most, if not all, benefits were due to the placebo effect, and strongly suggestive that acupuncture had no beneficial therapeutic effects at all. Research methodology and challenges Sham acupuncture and research It is difficult but not impossible to design rigorous research trials for acupuncture. Due to acupuncture's invasive nature, one of the major challenges in efficacy research is in the design of an appropriate placebo control group. For efficacy studies to determine whether acupuncture has specific effects, "sham" forms of acupuncture where the patient, practitioner, and analyst are blinded seem the most acceptable approach. Sham acupuncture uses non-penetrating needles or needling at non-acupuncture points, e.g. inserting needles on meridians not related to the specific condition being studied, or in places not associated with meridians. The under-performance of acupuncture in such trials may indicate that therapeutic effects are due entirely to non-specific effects, or that the sham treatments are not inert, or that systematic protocols yield less than optimal treatment. A 2014 review in Nature Reviews Cancer found that "contrary to the claimed mechanism of redirecting the flow of qi through meridians, researchers usually find that it generally does not matter where the needles are inserted, how often (that is, no dose-response effect is observed), or even if needles are actually inserted. In other words, "sham" or "placebo" acupuncture generally produces the same effects as "real" acupuncture and, in some cases, does better." A 2013 meta-analysis found little evidence that the effectiveness of acupuncture on pain (compared to sham) was modified by the location of the needles, the number of needles used, the experience or technique of the practitioner, or by the circumstances of the sessions. The same analysis also suggested that the number of needles and sessions is important, as greater numbers improved the outcomes of acupuncture compared to non-acupuncture controls. There has been little systematic investigation of which components of an acupuncture session may be important for any therapeutic effect, including needle placement and depth, type and intensity of stimulation, and number of needles used. The research seems to suggest that needles do not need to stimulate the traditionally specified acupuncture points or penetrate the skin to attain an anticipated effect (e.g. psychosocial factors). A response to "sham" acupuncture in osteoarthritis may be used in the elderly, but placebos have usually been regarded as deception and thus unethical. However, some physicians and ethicists have suggested circumstances for applicable uses for placebos such as it might present a theoretical advantage of an inexpensive treatment without adverse reactions or interactions with drugs or other medications. As the evidence for most types of alternative medicine such as acupuncture is far from strong, the use of alternative medicine in regular healthcare can present an ethical question. Using the principles of evidence-based medicine to research acupuncture is controversial, and has produced different results. Some research suggests acupuncture can alleviate pain but the majority of research suggests that acupuncture's effects are mainly due to placebo. Evidence suggests that any benefits of acupuncture are short-lasting. 
There is insufficient evidence to support use of acupuncture compared to mainstream medical treatments. Acupuncture is not better than mainstream treatment in the long term. The use of acupuncture has been criticized owing to there being little scientific evidence for explicit effects, or the mechanisms for its supposed effectiveness, for any condition that is discernible from placebo. Acupuncture has been called "theatrical placebo", and David Gorski argues that when acupuncture proponents advocate "harnessing of placebo effects" or work on developing "meaningful placebos", they essentially concede it is little more than that. Publication bias Publication bias is cited as a concern in the reviews of randomized controlled trials of acupuncture. A 1998 review of studies on acupuncture found that trials originating in China, Japan, Hong Kong, and Taiwan were uniformly favourable to acupuncture, as were ten out of eleven studies conducted in Russia. A 2011 assessment of the quality of randomized controlled trials on traditional Chinese medicine, including acupuncture, concluded that the methodological quality of most such trials (including randomization, experimental control, and blinding) was generally poor, particularly for trials published in Chinese journals (though the quality of acupuncture trials was better than the trials testing traditional Chinese medicine remedies). The study also found that trials published in non-Chinese journals tended to be of higher quality. Chinese authors use more Chinese studies, which have been demonstrated to be uniformly positive. A 2012 review of 88 systematic reviews of acupuncture published in Chinese journals found that less than half of these reviews reported testing for publication bias, and that the majority of these reviews were published in journals with impact factors of zero. A 2015 study comparing pre-registered records of acupuncture trials with their published results found that it was uncommon for such trials to be registered before the trial began. This study also found that selective reporting of results and changing outcome measures to obtain statistically significant results was common in this literature. Scientist Steven Salzberg identifies acupuncture and Chinese medicine generally as a focus for "fake medical journals" such as the Journal of Acupuncture and Meridian Studies and Acupuncture in Medicine. Safety Adverse events Acupuncture is generally safe when administered by an experienced, appropriately trained practitioner using clean-needle technique and sterile single-use needles. When improperly delivered it can cause adverse effects. Accidents and infections are associated with infractions of sterile technique or neglect on the part of the practitioner. To reduce the risk of serious adverse events after acupuncture, acupuncturists should be trained sufficiently. A 2009 overview of Cochrane reviews found acupuncture is not effective for a wide range of conditions. People with serious spinal disease, such as cancer or infection, are not good candidates for acupuncture. Contraindications to acupuncture (conditions that should not be treated with acupuncture) include coagulopathy disorders (e.g. hemophilia and advanced liver disease), warfarin use, severe psychiatric disorders (e.g. psychosis), and skin infections or skin trauma (e.g. burns). Further, electroacupuncture should be avoided at the spot of implanted electrical devices (such as pacemakers). 
A 2011 systematic review of systematic reviews (internationally and without language restrictions) found that serious complications following acupuncture continue to be reported. Between 2000 and 2009, ninety-five cases of serious adverse events, including five deaths, were reported. Many such events are not inherent to acupuncture but are due to malpractice of acupuncturists. This might be why such complications have not been reported in surveys of adequately trained acupuncturists. Most such reports originate from Asia, which may reflect the large number of treatments performed there or a relatively higher number of poorly trained Asian acupuncturists. Many serious adverse events were reported from developed countries. These included Australia, Austria, Canada, Croatia, France, Germany, Ireland, the Netherlands, New Zealand, Spain, Sweden, Switzerland, the UK, and the US. The number of adverse effects reported from the UK appears particularly unusual, which may indicate less under-reporting in the UK than other countries. Reports included 38 cases of infections and 42 cases of organ trauma. The most frequent adverse events included pneumothorax, and bacterial and viral infections. A 2013 review found (without restrictions regarding publication date, study type or language) 295 cases of infections; mycobacterium was the pathogen in at least 96%. Likely sources of infection include towels, hot packs or boiling tank water, and reusing reprocessed needles. Possible sources of infection include contaminated needles, reusing personal needles, a person's skin containing mycobacterium, and reusing needles at various sites in the same person. Although acupuncture is generally considered a safe procedure, a 2013 review stated that the reports of infection transmission increased significantly in the prior decade, including those of mycobacterium. Although it is recommended that practitioners of acupuncture use disposable needles, the reuse of sterilized needles is still permitted. It is also recommended that thorough control practices for preventing infection be implemented and adapted. English-language A 2013 systematic review of the English-language case reports found that serious adverse events associated with acupuncture are rare, but that acupuncture is not without risk. Between 2000 and 2011 the English-language literature from 25 countries and regions reported 294 adverse events. The majority of the reported adverse events were relatively minor, and the incidences were low. For example, a prospective survey of 34,000 acupuncture treatments found no serious adverse events and 43 minor ones, a rate of 1.3 per 1000 interventions. Another survey found there were 7.1% minor adverse events, of which 5 were serious, amid 97,733 acupuncture patients. The most common adverse effect observed was infection (e.g. mycobacterium), and the majority of infections were bacterial in nature, caused by skin contact at the needling site. Infection has also resulted from skin contact with unsterilized equipment or with dirty towels in an unhygienic clinical setting. Other adverse complications included five reported cases of spinal cord injuries (e.g. migrating broken needles or needling too deeply), four brain injuries, four peripheral nerve injuries, five heart injuries, seven other organ and tissue injuries, bilateral hand edema, epithelioid granuloma, pseudolymphoma, argyria, pustules, pancytopenia, and scarring due to hot-needle technique. 
Adverse reactions from acupuncture, which are unusual and uncommon in typical acupuncture practice, included syncope, galactorrhoea, bilateral nystagmus, pyoderma gangrenosum, hepatotoxicity, eruptive lichen planus, and spontaneous needle migration. A 2013 systematic review found 31 cases of vascular injuries caused by acupuncture, three causing death. Two died from pericardial tamponade and one was from an aortoduodenal fistula. The same review found vascular injuries were rare, bleeding and pseudoaneurysm were most prevalent. A 2011 systematic review (without restriction in time or language), aiming to summarize all reported case of cardiac tamponade after acupuncture, found 26 cases resulting in 14 deaths, with little doubt about cause in most fatal instances. The same review concluded that cardiac tamponade was a serious, usually fatal, though theoretically avoidable complication following acupuncture, and urged training to minimize risk. A 2012 review found that a number of adverse events were reported after acupuncture in the UK's National Health Service (NHS), 95% of which were not severe, though miscategorization and under-reporting may alter the total figures. From January 2009 to December 2011, 468 safety incidents were recognized within the NHS organizations. The adverse events recorded included retained needles (31%), dizziness (30%), loss of consciousness/unresponsive (19%), falls (4%), bruising or soreness at needle site (2%), pneumothorax (1%) and other adverse side effects (12%). Acupuncture practitioners should know, and be prepared to be responsible for, any substantial harm from treatments. Some acupuncture proponents argue that the long history of acupuncture suggests it is safe. However, there is an increasing literature on adverse events (e.g. spinal-cord injury). Acupuncture seems to be safe in people getting anticoagulants, assuming needles are used at the correct location and depth, but studies are required to verify these findings. Chinese, Korean, and Japanese-language A 2010 systematic review of the Chinese-language literature found numerous acupuncture-related adverse events, including pneumothorax, fainting, subarachnoid hemorrhage, and infection as the most frequent, and cardiovascular injuries, subarachnoid hemorrhage, pneumothorax, and recurrent cerebral hemorrhage as the most serious, most of which were due to improper technique. Between 1980 and 2009, the Chinese-language literature reported 479 adverse events. Prospective surveys show that mild, transient acupuncture-associated adverse events ranged from 6.71% to 15%. In a study with 190,924 patients, the prevalence of serious adverse events was roughly 0.024%. Another study showed a rate of adverse events requiring specific treatment of 2.2%, 4,963 incidences among 229,230 patients. Infections, mainly hepatitis, after acupuncture are reported often in English-language research, though are rarely reported in Chinese-language research, making it plausible that acupuncture-associated infections have been underreported in China. Infections were mostly caused by poor sterilization of acupuncture needles. 
Other adverse events included spinal epidural hematoma (in the cervical, thoracic and lumbar spine), chylothorax, injuries of abdominal organs and tissues, injuries in the neck region, injuries to the eyes, including orbital hemorrhage, traumatic cataract, injury of the oculomotor nerve and retinal puncture, hemorrhage to the cheeks and the hypoglottis, peripheral motor-nerve injuries and subsequent motor dysfunction, local allergic reactions to metal needles, stroke, and cerebral hemorrhage after acupuncture. A causal link between acupuncture and the adverse events cardiac arrest, pyknolepsy, shock, fever, cough, thirst, aphonia, leg numbness, and sexual dysfunction remains uncertain. The same review concluded that acupuncture can be considered inherently safe when practiced by properly trained practitioners, but the review also stated there is a need to find effective strategies to minimize the health risks. Between 1999 and 2010, the Korean-language literature contained reports of 1104 adverse events. Between the 1980s and 2002, the Japanese-language literature contained reports of 150 adverse events. Children and pregnancy Although acupuncture has been practiced for thousands of years in China, its use in pediatrics in the United States did not become common until the early 2000s. In 2007, the National Health Interview Survey (NHIS) conducted by the National Center For Health Statistics (NCHS) estimated that approximately 150,000 children had received acupuncture treatment for a variety of conditions. In 2008, a study determined that the use of acupuncture-needle treatment on children was "questionable" due to the possibility of adverse side-effects and the pain manifestation differences in children versus adults. The study also includes warnings against practicing acupuncture on infants, as well as on children who are over-fatigued, very weak, or have over-eaten. When used on children, acupuncture is considered safe when administered by well-trained, licensed practitioners using sterile needles; however, a 2011 review found there was limited research to draw definite conclusions about the overall safety of pediatric acupuncture. The same review found 279 adverse events, 25 of them serious. The adverse events were mostly mild in nature (e.g., bruising or bleeding). The prevalence of mild adverse events ranged from 10.1% to 13.5%, an estimated 168 incidences among 1,422 patients. On rare occasions adverse events were serious (e.g. cardiac rupture or hemoptysis); many might have been a result of substandard practice. The incidence of serious adverse events was 5 per one million, which included children and adults. When used during pregnancy, the majority of adverse events caused by acupuncture were mild and transient, with few serious adverse events. The most frequent mild adverse event was needling or unspecified pain, followed by bleeding. Although two deaths (one stillbirth and one neonatal death) were reported, there was a lack of acupuncture-associated maternal mortality. Limiting the evidence as certain, probable or possible in the causality evaluation, the estimated incidence of adverse events following acupuncture in pregnant women was 131 per 10,000. Although acupuncture is not contraindicated in pregnant women, some specific acupuncture points are particularly sensitive to needle insertion; these spots, as well as the abdominal region, should be avoided during pregnancy. 
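The incidence figures quoted above mix units (percentage ranges, raw counts per cohort, events per million, and events per 10,000). As a minimal illustrative sketch, using only the counts already cited in this section and hypothetical variable names, the following Python snippet shows how such raw counts convert into comparable percentage rates:

# Illustrative only: convert the adverse-event counts quoted above into comparable rates.
# All figures come from the text of this section; nothing here is new data.

def rate_percent(events: int, patients: int) -> float:
    """Return the event rate as a percentage of treated patients."""
    return 100.0 * events / patients

mild_pediatric = rate_percent(168, 1_422)      # ~11.8%, within the reported 10.1%-13.5% range
serious_any_age = rate_percent(5, 1_000_000)   # 5 per one million treatments = 0.0005%
pregnancy = rate_percent(131, 10_000)          # 131 per 10,000 = 1.31%

print(f"mild (pediatric): {mild_pediatric:.1f}%")
print(f"serious: {serious_any_age:.4f}%")
print(f"pregnancy: {pregnancy:.2f}%")

Expressed this way, the figures are directly comparable: mild events in children occur on the order of one in ten treatments, while the serious-event estimate quoted above is several orders of magnitude rarer.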
Moxibustion and cupping Four adverse events associated with moxibustion were bruising, burns and cellulitis, spinal epidural abscess, and large superficial basal cell carcinoma. Ten adverse events were associated with cupping. The minor ones were keloid scarring, burns, and bullae; the serious ones were acquired hemophilia A, stroke following cupping on the back and neck, factitious panniculitis, reversible cardiac hypertrophy, and iron deficiency anemia. Risk of forgoing conventional medical care As with other alternative medicines, unethical or naïve practitioners may induce patients to exhaust financial resources by pursuing ineffective treatment. Professional ethics codes set by accrediting organizations such as the National Certification Commission for Acupuncture and Oriental Medicine require practitioners to make "timely referrals to other health care professionals as may be appropriate." Stephen Barrett states that there is a "risk that an acupuncturist whose approach to diagnosis is not based on scientific concepts will fail to diagnose a dangerous condition". Conceptual basis Traditional Acupuncture is a substantial part of traditional Chinese medicine (TCM). Early acupuncture beliefs relied on concepts that are common in TCM, such as a life force energy called qi. Qi was believed to flow from the body's primary organs (zang-fu organs) to the "superficial" body tissues of the skin, muscles, tendons, bones, and joints, through channels called meridians. Acupuncture points where needles are inserted are mainly (but not always) found at locations along the meridians. Acupuncture points not found along a meridian are called extraordinary points and those with no designated site are called points. In TCM, disease is generally perceived as a disharmony or imbalance in energies such as yin, yang, qi, xuĕ, zàng-fǔ, meridians, and of the interaction between the body and the environment. Therapy is based on which "pattern of disharmony" can be identified. For example, some diseases are believed to be caused by meridians being invaded with an excess of wind, cold, and damp. In order to determine which pattern is at hand, practitioners examine things like the color and shape of the tongue, the relative strength of pulse-points, the smell of the breath, the quality of breathing, or the sound of the voice. TCM and its concept of disease does not strongly differentiate between the cause and effect of symptoms. Purported scientific basis Many within the scientific community consider acupuncture to be quackery and pseudoscience, having no effect other than as "theatrical placebo". David Gorski has argued that of all forms of quackery, acupuncture has perhaps gained most acceptance among physicians and institutions. Academics Massimo Pigliucci and Maarten Boudry describe acupuncture as a "borderlands science" lying between science and pseudoscience. Rationalizations of traditional medicine It is a generally held belief within the acupuncture community that acupuncture points and meridians structures are special conduits for electrical signals, but no research has established any consistent anatomical structure or function for either acupuncture points or meridians. Human tests to determine whether electrical continuity was significantly different near meridians than other places in the body have been inconclusive. Scientific research has not supported the existence of qi, meridians, or yin and yang. 
A Nature editorial described TCM as "fraught with pseudoscience", with the majority of its treatments having no logical mechanism of action. Quackwatch states that "TCM theory and practice are not based upon the body of knowledge related to health, disease, and health care that has been widely accepted by the scientific community. TCM practitioners disagree among themselves about how to diagnose patients and which treatments should go with which diagnoses. Even if they could agree, the TCM theories are so nebulous that no amount of scientific study will enable TCM to offer rational care." Academic discussions of acupuncture still make reference to pseudoscientific concepts such as qi and meridians despite the lack of scientific evidence. Release of endorphins or adenosine Some modern practitioners support the use of acupuncture to treat pain, but have abandoned the use of qi, meridians, yin, yang and other mystical energies as an explanatory frameworks. The use of qi as an explanatory framework has been decreasing in China, even as it becomes more prominent during discussions of acupuncture in the US. Many acupuncturists attribute pain relief to the release of endorphins when needles penetrate, but no longer support the idea that acupuncture can affect a disease. Some studies suggest acupuncture causes a series of events within the central nervous system, and that it is possible to inhibit acupuncture's analgesic effects with the opioid antagonist naloxone. Mechanical deformation of the skin by acupuncture needles appears to result in the release of adenosine. The anti-nociceptive effect of acupuncture may be mediated by the adenosine A1 receptor. A 2014 review in Nature Reviews Cancer analyzed mouse studies that suggested acupuncture relieves pain via the local release of adenosine, which then triggered nearby A1 receptors. The review found that in those studies, because acupuncture "caused more tissue damage and inflammation relative to the size of the animal in mice than in humans, such studies unnecessarily muddled a finding that local inflammation can result in the local release of adenosine with analgesic effect." History Origins Acupuncture, along with moxibustion, is one of the oldest practices of traditional Chinese medicine. Most historians believe the practice began in China, though there are some conflicting narratives on when it originated. Academics David Ramey and Paul Buell said the exact date acupuncture was founded depends on the extent to which dating of ancient texts can be trusted and the interpretation of what constitutes acupuncture. Acupressure therapy was prevalent in India. Once Buddhism spread to China, the acupressure therapy was also integrated into common medical practice in China and it came to be known as acupuncture. The major points of Indian acupressure and Chinese acupuncture are similar to each other. According to an article in Rheumatology, the first documentation of an "organized system of diagnosis and treatment" for acupuncture was in Inner Classic of Huang Di (Huangdi Neijing) from about 100 BC. Gold and silver needles found in the tomb of Liu Sheng from around 100 BC are believed to be the earliest archaeological evidence of acupuncture, though it is unclear if that was their purpose. According to Plinio Prioreschi, the earliest known historical record of acupuncture is the Shiji ("Records of the Grand Historian"), written by a historian around 100 BC. It is believed that this text was documenting what was established practice at that time. 
Alternative theories The 5,000-year-old mummified body of Ötzi the Iceman was found with 15 groups of tattoos, many of which were located at points on the body where acupuncture needles are used for abdominal or lower back problems. Evidence from the body suggests Ötzi had these conditions. This has been cited as evidence that practices similar to acupuncture may have been practised elsewhere in Eurasia during the early Bronze Age; however, The Oxford Handbook of the History of Medicine calls this theory "speculative". It is considered unlikely that acupuncture was practised before 2000 BC. Acupuncture may have been practised during the Neolithic era, near the end of the Stone Age, using sharpened stones called Bian shi. Many Chinese texts from later eras refer to sharp stones called "plen", which means "stone probe", that may have been used for acupuncture purposes. The ancient Chinese medical text, Huangdi Neijing, indicates that sharp stones were believed at-the-time to cure illnesses at or near the body's surface, perhaps because of the short depth a stone could penetrate. However, it is more likely that stones were used for other medical purposes, such as puncturing a growth to drain its pus. The Mawangdui texts, which are believed to be from the 2nd century BC, mention the use of pointed stones to open abscesses, and moxibustion, but not for acupuncture. It is also speculated that these stones may have been used for bloodletting, due to the ancient Chinese belief that illnesses were caused by demons within the body that could be killed or released. It is likely bloodletting was an antecedent to acupuncture. According to historians Lu Gwei-djen and Joseph Needham, there is substantial evidence that acupuncture may have begun around 600 BC. Some hieroglyphs and pictographs from that era suggests acupuncture and moxibustion were practised. However, historians Lu and Needham said it was unlikely a needle could be made out of the materials available in China during this time period. It is possible that bronze was used for early acupuncture needles. Tin, copper, gold and silver are also possibilities, though they are considered less likely, or to have been used in fewer cases. If acupuncture was practised during the Shang dynasty (1766 to 1122 BC), organic materials like thorns, sharpened bones, or bamboo may have been used. Once methods for producing steel were discovered, it would replace all other materials, since it could be used to create a very fine, but sturdy needle. Lu and Needham noted that all the ancient materials that could have been used for acupuncture and which often produce archaeological evidence, such as sharpened bones, bamboo or stones, were also used for other purposes. An article in Rheumatology said that the absence of any mention of acupuncture in documents found in the tomb of Mawangdui from 198 BC suggest that acupuncture was not practised by that time. Belief systems Several different and sometimes conflicting belief systems emerged regarding acupuncture. This may have been the result of competing schools of thought. Some ancient texts referred to using acupuncture to cause bleeding, while others mixed the ideas of blood-letting and spiritual ch'i energy. Over time, the focus shifted from blood to the concept of puncturing specific points on the body, and eventually to balancing Yin and Yang energies as well. According to David Ramey, no single "method or theory" was ever predominantly adopted as the standard. 
At the time, scientific knowledge of medicine was not yet developed, especially because in China dissection of the deceased was forbidden, preventing the development of basic anatomical knowledge. It is not certain when specific acupuncture points were introduced, but the autobiography of Bian Que from around 400–500 BC references inserting needles at designated areas. Bian Que believed there was a single acupuncture point at the top of one's skull that he called the point "of the hundred meetings." Texts dated to be from 156 to 186 BC document early beliefs in channels of life force energy called meridians that would later be an element in early acupuncture beliefs. Ramey and Buell said the "practice and theoretical underpinnings" of modern acupuncture were introduced in The Yellow Emperor's Classic (Huangdi Neijing) around 100 BC. It introduced the concept of using acupuncture to manipulate the flow of life energy (qi) in a network of meridian (channels) in the body. The network concept was made up of acu-tracts, such as a line down the arms, where it said acupoints were located. Some of the sites acupuncturists use needles at today still have the same names as those given to them by the Yellow Emperor's Classic. Numerous additional documents were published over the centuries introducing new acupoints. By the 4th century AD, most of the acupuncture sites in use today had been named and identified. Early development in China Establishment and growth In the first half of the 1st century AD, acupuncturists began promoting the belief that acupuncture's effectiveness was influenced by the time of day or night, the lunar cycle, and the season. The 'science of the yin-yang cycles' ( ) was a set of beliefs that curing diseases relied on the alignment of both heavenly () and earthly () forces that were attuned to cycles like that of the sun and moon. There were several different belief systems that relied on a number of celestial and earthly bodies or elements that rotated and only became aligned at certain times. According to Needham and Lu, these "arbitrary predictions" were depicted by acupuncturists in complex charts and through a set of special terminology. Acupuncture needles during this period were much thicker than most modern ones and often resulted in infection. Infection is caused by a lack of sterilization, but at that time it was believed to be caused by use of the wrong needle, or needling in the wrong place, or at the wrong time. Later, many needles were heated in boiling water, or in a flame. Sometimes needles were used while they were still hot, creating a cauterizing effect at the injection site. Nine needles were recommended in the Great Compendium of Acupuncture and Moxibustion from 1601, which may have been because of an ancient Chinese belief that nine was a magic number. Other belief systems were based on the idea that the human body operated on a rhythm and acupuncture had to be applied at the right point in the rhythm to be effective. In some cases a lack of balance between Yin and Yang were believed to be the cause of disease. In the 1st century AD, many of the first books about acupuncture were published and recognized acupuncturist experts began to emerge. The Zhen Jiu Jia Yi Jing, which was published in the mid-3rd century, became the oldest acupuncture book that is still in existence in the modern era. Other books like the Yu Gui Zhen Jing, written by the Director of Medical Services for China, were also influential during this period, but were not preserved. 
In the mid-7th century, Sun Simiao published acupuncture-related diagrams and charts that established standardized methods for finding acupuncture sites on people of different sizes and categorized acupuncture sites in a set of modules. Acupuncture became more established in China as improvements in paper led to the publication of more acupuncture books. The Imperial Medical Service and the Imperial Medical College, which both supported acupuncture, became more established and created medical colleges in every province. The public was also exposed to stories about royal figures being cured of their diseases by prominent acupuncturists. By the time the Great Compendium of Acupuncture and Moxibustion was published during the Ming dynasty (1368–1644 AD), most of the acupuncture practices used in the modern era had been established. Decline By the end of the Song dynasty (1279 AD), acupuncture had lost much of its status in China. It became rarer in the following centuries, and was associated with less prestigious professions like alchemy, shamanism, midwifery and moxibustion. Additionally, by the 18th century, scientific rationality was becoming more popular than traditional superstitious beliefs. By 1757 a book documenting the history of Chinese medicine called acupuncture a "lost art". Its decline was attributed in part to the popularity of prescriptions and medications, as well as its association with the lower classes. In 1822, the Chinese Emperor signed a decree excluding the practice of acupuncture from the Imperial Medical Institute. He said it was unfit for practice by gentlemen-scholars. In China acupuncture was increasingly associated with lower-class, illiterate practitioners. It was restored for a time, but banned again in 1929 in favor of science-based medicine. Although acupuncture declined in China during this time period, it was also growing in popularity in other countries. International expansion Korea is believed to be the first country in Asia that acupuncture spread to outside of China. Within Korea there is a legend that acupuncture was developed by emperor Dangun, though it is more likely to have been brought into Korea from a Chinese colonial prefecture in 514 AD. Acupuncture use was commonplace in Korea by the 6th century. It spread to Vietnam in the 8th and 9th centuries. As Vietnam began trading with Japan and China around the 9th century, it was influenced by their acupuncture practices as well. China and Korea sent "medical missionaries" that spread traditional Chinese medicine to Japan, starting around 219 AD. In 553, several Korean and Chinese citizens were appointed to re-organize medical education in Japan and they incorporated acupuncture as part of that system. Japan later sent students back to China and established acupuncture as one of five divisions of the Chinese State Medical Administration System. Acupuncture began to spread to Europe in the second half of the 17th century. Around this time the surgeon-general of the Dutch East India Company met Japanese and Chinese acupuncture practitioners and later encouraged Europeans to further investigate it. He published the first in-depth description of acupuncture for the European audience and created the term "acupuncture" in his 1683 work De Acupunctura. France was an early adopter in the West due to the influence of Jesuit missionaries, who brought the practice to French clinics in the 16th century.
The French doctor Louis Berlioz (the father of the composer Hector Berlioz) is usually credited with being the first to experiment with the procedure in Europe in 1810, before publishing his findings in 1816. By the 19th century, acupuncture had become commonplace in many areas of the world. Americans and Britons began showing interest in acupuncture in the early 19th century, although interest waned by mid-century. Western practitioners abandoned acupuncture's traditional beliefs in spiritual energy, pulse diagnosis, and the cycles of the moon, sun or the body's rhythm. Diagrams of the flow of spiritual energy, for example, conflicted with the West's own anatomical diagrams. It adopted a new set of ideas for acupuncture based on tapping needles into nerves. In Europe it was speculated that acupuncture may allow or prevent the flow of electricity in the body, as electrical pulses were found to make a frog's leg twitch after death. The West eventually created a belief system based on Travell trigger points that were believed to inhibit pain. They were in the same locations as China's spiritually identified acupuncture points, but under a different nomenclature. The first elaborate Western treatise on acupuncture was published in 1683 by Willem ten Rhijne. Modern era In China, the popularity of acupuncture rebounded in 1949 when Mao Zedong took power and sought to unite China behind traditional cultural values. It was also during this time that many Eastern medical practices were consolidated under the name traditional Chinese medicine (TCM). New practices were adopted in the 20th century, such as using a cluster of needles, electrified needles, or leaving needles inserted for up to a week. A lot of emphasis developed on using acupuncture on the ear. Acupuncture research organizations such as the International Society of Acupuncture were founded in the 1940s and 1950s and acupuncture services became available in modern hospitals. China, where acupuncture was believed to have originated, was increasingly influenced by Western medicine. Meanwhile, acupuncture grew in popularity in the US. The US Congress created the Office of Alternative Medicine in 1992 and the National Institutes of Health (NIH) declared support for acupuncture for some conditions in November 1997. In 1999, the National Center for Complementary and Alternative Medicine was created within the NIH. Acupuncture became the most popular alternative medicine in the US. Politicians from the Chinese Communist Party said acupuncture was superstitious and conflicted with the party's commitment to science. Communist Party Chairman Mao Zedong later reversed this position, arguing that the practice was based on scientific principles. During the Cultural Revolution, disbelief in acupuncture anesthesia was subjected to ruthless political repression. In 1971, New York Times reporter James Reston published an article on his acupuncture experiences in China, which led to more investigation of and support for acupuncture. The US President Richard Nixon visited China in 1972. During one part of the visit, the delegation was shown a patient undergoing major surgery while fully awake, ostensibly receiving acupuncture rather than anesthesia. 
Later it was found that the patients selected for the surgery had both a high pain tolerance and received heavy indoctrination before the operation; these demonstration cases were also frequently receiving morphine surreptitiously through an intravenous drip that observers were told contained only fluids and nutrients. One patient receiving open heart surgery while awake was ultimately found to have received a combination of three powerful sedatives as well as large injections of a local anesthetic into the wound. After the National Institute of Health expressed support for acupuncture for a limited number of conditions, adoption in the US grew further. In 1972 the first legal acupuncture center in the US was established in Washington DC and in 1973 the American Internal Revenue Service allowed acupuncture to be deducted as a medical expense. In 2006, a BBC documentary Alternative Medicine filmed a patient undergoing open heart surgery allegedly under acupuncture-induced anesthesia. It was later revealed that the patient had been given a cocktail of anesthetics. In 2010, UNESCO inscribed "acupuncture and moxibustion of traditional Chinese medicine" on the UNESCO Intangible Cultural Heritage List following China's nomination. Adoption Acupuncture is most heavily practiced in China and is popular in the US, Australia, and Europe. In Switzerland, acupuncture has become the most frequently used alternative medicine since 2004. In the United Kingdom, a total of 4 million acupuncture treatments were administered in 2009. Acupuncture is used in most pain clinics and hospices in the UK. An estimated 1 in 10 adults in Australia used acupuncture in 2004. In Japan, it is estimated that 25 percent of the population will try acupuncture at some point, though in most cases it is not covered by public health insurance. Users of acupuncture in Japan are more likely to be elderly and to have a limited education. Approximately half of users surveyed indicated a likelihood to seek such remedies in the future, while 37% did not. Less than one percent of the US population reported having used acupuncture in the early 1990s. By the early 2010s, more than 14 million Americans reported having used acupuncture as part of their health care. In the US, acupuncture is increasingly () used at academic medical centers, and is usually offered through CAM centers or anesthesia and pain management services. Examples include those at Harvard University, Stanford University, Johns Hopkins University, and UCLA. CDC clinical practice guidelines from 2022 list acupuncture among the types of complementary and alternative medicines physicians should consider in preference to opioid prescription for certain kinds of pain. The use of acupuncture in Germany increased by 20% in 2007, after the German acupuncture trials supported its efficacy for certain uses. In 2011, there were more than one million users, and insurance companies have estimated that two-thirds of German users are women. As a result of the trials, German public health insurers began to cover acupuncture for chronic low back pain and osteoarthritis of the knee, but not tension headache or migraine. This decision was based in part on socio-political reasons. Some insurers in Germany chose to stop reimbursement of acupuncture because of the trials. For other conditions, insurers in Germany were not convinced that acupuncture had adequate benefits over usual care or sham treatments. 
Highlighting the results of the placebo group, researchers refused to accept a placebo therapy as efficient. Regulation There are various government and trade association regulatory bodies for acupuncture in the United Kingdom, the United States, Saudi Arabia, Australia, New Zealand, Japan, Canada, and in European countries and elsewhere. The World Health Organization recommends that an acupuncturist receive 200 hours of specialized training if they are a physician and 2,500 hours for non-physicians before being licensed or certified; many governments have adopted similar standards. In Hong Kong, the practice of acupuncture is regulated by the Chinese Medicine Council, which was formed in 1999 by the Legislative Council. It includes a licensing exam, registration, and degree courses approved by the board. Canada has acupuncture licensing programs in the provinces of British Columbia, Ontario, Alberta and Quebec; standards set by the Chinese Medicine and Acupuncture Association of Canada are used in provinces without government regulation. Regulation in the US began in the 1970s in California, which was eventually followed by every state but Wyoming and Idaho. Licensing requirements vary greatly from state to state. The needles used in acupuncture are regulated in the US by the Food and Drug Administration. In some states acupuncture is regulated by a board of medical examiners, while in others by the board of licensing, health or education. In Japan, acupuncturists are licensed by the Minister of Health, Labour and Welfare after passing an examination and graduating from a technical school or university. In Australia, the Chinese Medicine Board of Australia regulates acupuncture, among other Chinese medical traditions, and restricts the use of titles like 'acupuncturist' to registered practitioners only. In New Zealand, acupuncture was included in the governmental Accident Compensation Corporation (ACC) Act in 1990. This inclusion granted qualified and professionally registered acupuncturists the ability to provide subsidised care and treatment to citizens, residents, and temporary visitors for work- or sports-related injuries that occurred within New Zealand. The two bodies for the regulation of acupuncture and attainment of ACC treatment provider status in New Zealand are Acupuncture NZ and The New Zealand Acupuncture Standards Authority. At least 28 countries in Europe have professional associations for acupuncturists. In France, the Académie Nationale de Médecine (National Academy of Medicine) has regulated acupuncture since 1955.
Biology and health sciences
Alternative and traditional medicine
null
1542
https://en.wikipedia.org/wiki/Amaranth
Amaranth
Amaranthus is a cosmopolitan group of more than 50 species which make up the genus of annual or short-lived perennial plants collectively known as amaranths. Some of the better known names include "prostrate pigweed" and "love lies bleeding". Some amaranth species are cultivated as leaf vegetables, pseudocereals, and ornamental plants. Catkin-like cymes of densely-packed flowers grow in summer or fall. Amaranth varies in flower, leaf, and stem color with a range of striking pigments from the spectrum of maroon to crimson and can grow longitudinally from tall with a cylindrical, succulent, fibrous stem that is hollow with grooves and bracteoles when mature. There are approximately 75 species in the genus, 10 of which are dioecious and native to North America, and the remaining 65 are monoecious species that are endemic to every continent (except Antarctica) from tropical lowlands to the Himalayas. Members of this genus share many characteristics and uses with members of the closely related genus Celosia. Amaranth grain is collected from the genus. The leaves of some species are also eaten. Names and etymology Amaranthus comes from the name of this plant in Ancient Greek, , "amaranth, immortal", a noun formed from the privative prefix , "without", and the verb , "to consume, to exhaust". Indeed, the amaranth has a reputation for not withering, in particular because its calyx remains persistent, and for this reason it represents a symbol of immortality. Some species are used in dry bouquets. The form (with H) comes from an erroneous association with the Greek etymon (lat. ) meaning , found in the name of many plants (agapanthus, for example). Its denominations in the languages of the peoples cultivating it since ancient times in America are in Nahuatl, , in Quechua, or in Maya, ahparie in Purépecha, in Huichol, and guegui in Tarahumara. Description Amaranth is a herbaceous plant or shrub that is either annual or perennial across the genus. Flowers vary interspecifically from the presence of 3 or 5 tepals and stamens, whereas a 7-porate pollen grain structure remains consistent across the family. Species across the genus contain concentric rings of vascular bundles, and fix carbon efficiently with a C4 photosynthetic pathway. Leaves are approximately and of oval or elliptical shape, and are either opposite or alternate across species, although most leaves are whole and simple with entire margins. Amaranth has a primary root with deeper spreading secondary fibrous root structures. Inflorescences are in the form of a large panicle that varies from terminal to axillary, and in color and sex. The tassel of the inflorescence is either erect or bent and varies in width and length between species. Flowers are radially symmetric and either bisexual or unisexual with very small, bristly perianth and pointy bracts. Species in this genus are either monoecious (e.g. A. hybridus) or dioecious (e.g. A. palmeri). Fruits are in the form of capsules referred to as a unilocular pyxidium that opens at maturity. The top (operculum) of the unilocular pyxidium releases the urn that contains the seed. Seeds are circular in form, from 1 to 1.5 millimeters in diameter, and range in color, with a shiny, smooth seed coat. The panicle is harvested 200 days after cultivation with approximately 1,000 to 3,000 seeds harvested per gram. Chemistry Amaranth grain contains phytochemicals that are not defined as nutrients and may be antinutrient factors, such as polyphenols, saponins, tannins, and oxalates.
These compounds are reduced in content and antinutrient effect by cooking. Taxonomy Amaranthus shows a wide variety of morphological diversity among and even within certain species. Amaranthus is part of the Amaranthaceae, which is part of the larger grouping of the Caryophyllales. Although the family (Amaranthaceae) is distinctive, the genus has few distinguishing characters among the 75 species present across six continents. This complicates taxonomy, and Amaranthus has generally been considered among systematists to be a "difficult" genus that hybridizes often. In 1955, Sauer classified the genus into two subgenera, differentiating only between monoecious and dioecious species: Acnida (L.) Aellen ex K.R. Robertson and Amaranthus. Although this classification was widely accepted, further infrageneric classification was (and still is) needed to differentiate this widely diverse group. Mosyakin and Robertson (1996) later divided the genus into three subgenera: Acnida, Amaranthus, and Albersia. Support for the addition of the subgenus Albersia rests on its indehiscent fruits coupled with three elliptic to linear tepals, which are exclusive characters of members of this subgenus. The classification of these groups is further supported with a combination of floral characters, reproductive strategies, geographic distribution, and molecular evidence. The phylogenies of Amaranthus using maximum parsimony and Bayesian analysis of nuclear and chloroplast genes suggest five clades within the genus: Dioecious/Pumilus, Hybridus, Galapagos, Eurasian/South African, Australian (ESA), and ESA + South American. Amaranthus includes three recognised subgenera and 75 species, although species numbers are questionable due to hybridisation and species concepts. Infrageneric classification focuses on inflorescence, flower characters and whether a species is monoecious/dioecious, as in Sauer's (1955) suggested classification. Bracteole morphology present on the stem is used for taxonomic classification of Amaranth. Wild species have longer bracteoles compared to cultivated species. A modified infrageneric classification of Amaranthus includes three subgenera: Acnida, Amaranthus, and Albersia, with the taxonomy further differentiated by sections within each of the subgenera. There is near certainty that A. hypochondriacus is the common ancestor to the cultivated grain species; however, the subsequent series of domestication events remains unclear. There have been opposing hypotheses of a single as opposed to multiple domestication events of the three grain species. There is evidence of phylogenetic and geographical support for clear groupings that indicate separate domestication events in South America and Central America. A. hybridus may derive from South America, whereas A. caudatus, A. hypochondriacus, and A. quitensis are native to Central and North America.
Species Species include: Amaranthus acanthochiton – greenstripe Amaranthus acutilobus – a synonym of Amaranthus viridis Amaranthus albus – white pigweed, tumble pigweed Amaranthus anderssonii Amaranthus arenicola – sandhill amaranth Amaranthus australis – southern amaranth Amaranthus bigelovii – Bigelow's amaranth Amaranthus blitoides – mat amaranth, prostrate amaranth, prostrate pigweed Amaranthus blitum – purple amaranth Amaranthus brownii – Brown's amaranth Amaranthus californicus – California amaranth, California pigweed Amaranthus cannabinus – tidal-marsh amaranth Amaranthus caudatus – love-lies-bleeding, pendant amaranth, tassel flower, quilete Amaranthus chihuahuensis – Chihuahuan amaranth Amaranthus crassipes – spreading amaranth Amaranthus crispus – crispleaf amaranth Amaranthus cruentus – purple amaranth, red amaranth, Mexican grain amaranth Amaranthus deflexus – large-fruit amaranth Amaranthus dubius – spleen amaranth, khada sag Amaranthus fimbriatus – fringed amaranth, fringed pigweed Amaranthus floridanus – Florida amaranth Amaranthus furcatus Amaranthus graecizans Amaranthus grandiflorus Amaranthus greggii – Gregg's amaranth Amaranthus hybridus – smooth amaranth, smooth pigweed, red amaranth Amaranthus hypochondriacus – Prince-of-Wales feather, prince's feather Amaranthus interruptus – Australian amaranth Amaranthus minimus Amaranthus mitchellii Amaranthus muricatus – African amaranth Amaranthus obcordatus – Trans-Pecos amaranth Amaranthus palmeri – Palmer's amaranth, Palmer pigweed, careless weed Amaranthus polygonoides – tropical amaranth Amaranthus powellii – green amaranth, Powell amaranth, Powell pigweed Amaranthus pringlei – Pringle's amaranth Amaranthus pumilus – seaside amaranth Amaranthus quitensis - Mucronate Amaranth Amaranthus retroflexus – red-root amaranth, redroot pigweed, common amaranth Amaranthus saradhiana - purpal stem amaranth, green leaf amaranth Amaranthus scleranthoides – variously Amaranthus sclerantoides Amaranthus scleropoides – bone-bract amaranth Amaranthus spinosus – spiny amaranth, prickly amaranth, thorny amaranth Amaranthus standleyanus Amaranthus thunbergii – Thunberg's amaranth Amaranthus torreyi – Torrey's amaranth Amaranthus tricolor – Joseph's-coat Amaranthus tuberculatus – rough-fruit amaranth, tall waterhemp Amaranthus viridis – slender amaranth, green amaranth Amaranthus watsonii – Watson's amaranth Amaranthus wrightii – Wright's amaranth Etymology "Amaranth" derives from Greek (), "unfading", with the Greek word for "flower", (), factoring into the word's development as amaranth, the unfading flower. Amarant is an archaic variant. The name was first applied to the related Celosia (Amaranthus and Celosia share long-lasting dried flowers), as Amaranthus plants were not yet known in Europe. Ecology Amaranth weed species have an extended period of germination, rapid growth, and high rates of seed production, and have been causing problems for farmers since the mid-1990s. This is partially due to the reduction in tillage, reduction in herbicidal use and the evolution of herbicidal resistance in several species where herbicides have been applied more often. The following 9 species of Amaranthus are considered invasive and noxious weeds in the U.S. and Canada: A. albus, A. blitoides, A. hybridus, A. palmeri, A. powellii, A. retroflexus, A. spinosus, A. tuberculatus, and A. viridis. A new herbicide-resistant strain of A. palmeri has appeared; it is glyphosate-resistant and so cannot be killed by herbicides using the chemical. 
Also, this plant can survive in tough conditions. The species Amaranthus palmeri (Palmer amaranth) causes the greatest reduction in soybean yields and has the potential to reduce yields by 17-68% in field experiments. Palmer amaranth is among the "top five most troublesome weeds" in the southeast of the United States and has already evolved resistances to dinitroaniline herbicides and acetolactate synthase inhibitors. This makes the proper identification of Amaranthus species at the seedling stage essential for agriculturalists. Proper weed control needs to be applied before the species successfully colonizes in the crop field and causes significant yield reductions. An evolutionary lineage of around 90 species within the genus has acquired the carbon fixation pathway, which increases their photosynthetic efficiency. This probably occurred in the Miocene. Uses All parts of the plant are considered edible, though some may have sharp spines that need to be removed before consumption. Amaranth is high in oxalates, but this may be partially offset by its high calcium content. Nutrition Uncooked amaranth grain by weight is 12% water, 65% carbohydrates (including 7% dietary fiber), 14% protein, and 7% fat (table). A reference serving of uncooked amaranth grain provides of food energy, and is a rich source (20% or more of the Daily Value, DV) of protein, dietary fiber, pantothenic acid, vitamin B6, folate, and several dietary minerals (table). Uncooked amaranth is particularly rich in manganese (159% DV), phosphorus (80% DV), magnesium (70% DV), iron (59% DV), and selenium (34% DV). Amaranth has a high oxalate content. Cooking decreases its nutritional value substantially across all nutrients, with only dietary minerals remaining at moderate levels. Cooked amaranth leaves are a rich source of vitamin A, vitamin C, calcium, and manganese, with moderate levels of folate, iron, magnesium, and potassium. Amaranth does not contain gluten. History The native range of the genus is cosmopolitan. In pre-Hispanic times, amaranth was cultivated by the Aztec and their tributary communities in a quantity very similar to maize. Known to the Aztecs as , amaranth is thought to have represented up to 80% of their energy consumption before the Spanish conquest. Another important use of amaranth throughout Mesoamerica was in ritual drinks and foods. To this day, amaranth grains are toasted much like popcorn and mixed with honey, molasses, or chocolate to make a treat called , meaning "joy" in Spanish. While all species are believed to be native to the Americas, several have been cultivated and introduced to warm regions worldwide. Amaranth's cosmopolitan distribution makes it one of many plants providing evidence of pre-Columbian oceanic contact. The earliest archeological evidence for amaranth in the Old World was found in an excavation in Narhan, India, dated to 1000–800 BCE. Because of its importance as a symbol of indigenous culture, its palatability, ease of cooking, and a protein that is particularly well-suited to human nutritional needs, interest in amaranth seeds (especially A. cruentus and A. hypochondriacus) revived in the 1970s. It was recovered in Mexico from wild varieties and is now commercially cultivated. It is a popular snack in Mexico, sometimes mixed with chocolate or puffed rice, and its use has spread to Europe and other parts of North America. Seed Several species are raised for amaranth "grain" in Asia and the Americas. 
Amaranth and its relative quinoa are considered pseudocereals because of their similarities to cereals in flavor and cooking. The spread of Amaranthus is the result of a joint effort of human expansion, adaptation, and fertilization strategies. Grain amaranth has been used for food by humans in several ways. The grain can be ground into a flour for use like other grain flours. It can be popped like popcorn, or flaked like oatmeal. Seeds of amaranth grain dating from 4,500 years ago have been found in Antofagasta de la Sierra Department, Catamarca, in the southern Puna desert of northern Argentina, with evidence suggesting earlier use. Archeological evidence of seeds from A. hypochondriacus and A. cruentus found in a cave in Tehuacán, Mexico, suggests amaranth was part of Aztec civilization in the 1400s. Ancient amaranth grains still used include the three species Amaranthus caudatus, A. cruentus, and A. hypochondriacus. Evidence from single-nucleotide polymorphisms and chromosome structure supports A. hypochondriacus as the common ancestor of the three grain species. It has been proposed as an inexpensive native crop that could be cultivated by indigenous people in rural areas for several reasons: A small amount of seed plants a large area (seeding rate 1 kg/ha). Yields are high compared to the seeding rate: 1,000 kg or more per hectare. It is easily harvested and easily processed, post-harvest, as there are no hulls to remove. Its seeds are a source of protein. It has rich content of the dietary minerals calcium, magnesium, phosphorus, and potassium. In cooked and edible forms, amaranth retains adequate content of several dietary minerals. It is easy to cook. Boil in water with twice the amount of water as grain by volume (or 2.4 times as much water by weight). Amaranth seed can also be popped one tablespoon at a time in a hot pan without oil, shaken every few seconds to avoid burning. It grows fast and, in three cultivated species, the large seedheads can weigh up to 1 kg and contain a half-million small seeds. In the United States, the amaranth crop is mostly used for seed production. Most amaranth in American food products starts as a ground flour, blended with wheat or other flours to create cereals, crackers, cookies, bread or other baked products. Utilization studies show that amaranth can be blended with other flours at levels above 50% without affecting functional properties or taste, yet most commercial products use amaranth only as a minor portion of their ingredients, even though they are marketed as "amaranth" products. Leaves, roots, and stems Amaranth species are cultivated and consumed as a leaf vegetable in many parts of the world. Four species of Amaranthus are documented as cultivated vegetables in eastern Asia: Amaranthus cruentus, Amaranthus blitum, Amaranthus dubius, and Amaranthus tricolor. Asia In Indonesia and Malaysia, leaf amaranth is called (although the word has since been loaned to refer to spinach, in a different genus). In the Philippines, the Ilocano word for the plant is ; the Tagalog word for the plant is or . In Uttar Pradesh and Bihar in India, it is called and is a popular red leafy vegetable (referred to in the class of vegetable preparations called ). It is called chua in the Kumaun area of Uttarakhand, where it is a popular red-green vegetable. In Karnataka in India, it is called (). It is used to prepare curries such as hulee, palya, majjigay-hulee, and so on.
In Kerala, it is called cheera and is consumed by stir-frying the leaves with spices and red chili peppers to make a dish called cheera thoran. In Tamil Nadu, it is called and is regularly consumed as a favourite dish, where the greens are steamed and mashed with light seasoning of salt, red chili pepper, and cumin. It is called . In the states of Andhra Pradesh and Telangana and other Telugu speaking regions of the country, this leaf is called as "Thotakura" and is cooked as a standalone curry, added as a part of mix leafy vegetable curry or added in preparation of a popular dal called () in (Telugu). In Maharashtra, it is called and is available in both red and white colour. In Orissa, it is called , it is used to prepare , in which the leaf is fried with chili and onions. In West Bengal, the green variant is called () and the red variant is called (). In China, the leaves and stems are used as a stir-fry vegetable, or in soups. In Vietnam, it is called and is used to make soup. Two species are popular as edible vegetable in Vietnam: (Amaranthus tricolor) and or (Amaranthus viridis). Africa A traditional food plant in Africa, amaranth has the potential to improve nutrition, boost food security, foster rural development and support sustainable land care. In Bantu regions of Uganda and western Kenya, it is known as doodo or litoto. It is also known among the Kalenjin as a drought crop (chepkerta). In Lingala (spoken in the Congo), it is known as or . In Nigeria, it is a common vegetable and goes with all Nigerian starch dishes. It is known in Yoruba as , a short form of (meaning "make the husband fat"), or (meaning "we have money left over for fish"). In Botswana, it is referred to as morug and cooked as a staple green vegetable. Europe In Greece, purple amaranth (Amaranthus blitum) is a popular dish called , or . It is boiled, then served with olive oil and lemon juice like a salad, sometimes alongside fried fish. Greeks stop harvesting the plant (which also grows wild) when it starts to bloom at the end of August. Americas In Brazil, green amaranth was, and to a degree still is, often considered an invasive species as all other species of amaranth (except the generally imported A. caudatus cultivar), though some have traditionally appreciated it as a leaf vegetable, under the names of or , which is consumed cooked, generally accompanying the staple food, rice and beans. In the Caribbean, the leaves are called bhaji in Trinidad and callaloo in Jamaica, and are sautéed with onions, garlic, and tomatoes, or sometimes used in a soup called pepperpot soup. Oil Making up about 5% of the total fatty acids of amaranth, squalene is extracted as a vegetable-based alternative to the more expensive shark oil for use in dietary supplements and cosmetics. Dyes The flowers of the 'Hopi Red Dye' amaranth were used by the Hopi (a tribe in the western United States) as the source of a deep red dye. Also a synthetic dye was named "amaranth" for its similarity in color to the natural amaranth pigments known as betalains. This synthetic dye is also known as Red No. 2 in North America and E123 in the European Union. Ornamentals The genus also contains several well-known ornamental plants, such as Amaranthus caudatus (love-lies-bleeding), a vigorous, hardy annual with dark purplish flowers crowded in handsome drooping spikes. Another Indian annual, A. hypochondriacus (prince's feather), has deeply veined, lance-shaped leaves, purple on the under face, and deep crimson flowers densely packed on erect spikes. 
Amaranths are recorded as food plants for some Lepidoptera (butterfly and moth) species including the nutmeg moth and various case-bearer moths of the genus Coleophora: C. amaranthella, C. enchorda (feeds exclusively on Amaranthus), C. immortalis (feeds exclusively on Amaranthus), C. lineapulvella, and C. versurella (recorded on A. spinosus). Culture Diego Durán described the festivities for the Aztec god . The Aztec month of (7 December to 26 December) was dedicated to . People decorated their homes and trees with paper flags; ritual races, processions, dances, songs, prayers, and finally human sacrifices were held. This was one of the more important Aztec festivals, and the people prepared for the whole month. They fasted or ate very little; a statue of the god was made out of amaranth seeds and honey, and at the end of the month, it was cut into small pieces so everybody could eat a piece of the god. After the Spanish conquest, cultivation of amaranth was outlawed, while some of the festivities were subsumed into the Christmas celebration. Amaranth is associated with longevity and, poetically, with death and immortality. Amaranth garlands were used in the mourning of Achilles. John Milton's Paradise Lost portrays a showy amaranth in the Garden of Eden, "remov'd from Heav'n" when it blossoms because the flowers "shade the fountain of life". He describes amaranth as "immortal" in reference to the flowers that generally do not wither and retain bright reddish tones of color, even when deceased; referred to in one species as "love-lies-bleeding." Gallery
Biology and health sciences
Caryophyllales
null
1634
https://en.wikipedia.org/wiki/Aquaculture
Aquaculture
Aquaculture (less commonly spelled aquiculture), also known as aquafarming, is the controlled cultivation ("farming") of aquatic organisms such as fish, crustaceans, mollusks, algae and other organisms of value such as aquatic plants (e.g. lotus). Aquaculture involves cultivating freshwater, brackish water, and saltwater populations under controlled or semi-natural conditions and can be contrasted with commercial fishing, which is the harvesting of wild fish. Aquaculture is also a practice used for restoring and rehabilitating marine and freshwater ecosystems. Mariculture, commonly known as marine farming, is aquaculture in seawater habitats and lagoons, as opposed to freshwater aquaculture. Pisciculture is a type of aquaculture that consists of fish farming to obtain fish products as food. Aquaculture can also be defined as the breeding, growing, and harvesting of fish and other aquatic organisms, also known as farming in water. It is a source of food and commercial products, helps to create healthier habitats, and is used to rebuild the populations of endangered aquatic species. Rising demand for seafood has driven technology that expands the farming of fish into coastal marine waters and the open ocean. Aquaculture can be conducted in completely artificial facilities built on land (onshore aquaculture), as in the case of fish tanks, ponds, aquaponics or raceways, where living conditions rely on human control of factors such as water quality (oxygen), feed, and temperature. Alternatively, it can be conducted in well-sheltered shallow waters near the shore of a body of water (inshore aquaculture), where the cultivated species are subjected to relatively more naturalistic environments, or in fenced/enclosed sections of open water away from the shore (offshore aquaculture), where the species are cultured in cages, racks or bags and are exposed to more diverse natural conditions such as water currents (such as ocean currents), diel vertical migration and nutrient cycles. According to the Food and Agriculture Organization (FAO), aquaculture "is understood to mean the farming of aquatic organisms including fish, molluscs, crustaceans and aquatic plants. Farming implies some form of intervention in the rearing process to enhance production, such as regular stocking, feeding, protection from predators, etc. Farming also implies individual or corporate ownership of the stock being cultivated." The reported output from global aquaculture operations in 2019 was over 120 million tonnes valued at US$274 billion; by 2022, it had risen to 130.9 million tonnes, valued at US$312.8 billion. However, there are issues with the reliability of the reported figures. Further, in current aquaculture practice, products from several kilograms of wild fish are used to produce one kilogram of a piscivorous fish like salmon. Plant- and insect-based feeds are also being developed to help reduce the use of wild fish for aquaculture feed. Particular kinds of aquaculture include fish farming, shrimp farming, oyster farming, mariculture, pisciculture, algaculture (such as seaweed farming), and the cultivation of ornamental fish. Particular methods include aquaponics and integrated multi-trophic aquaculture, both of which integrate fish farming and aquatic plant farming. The FAO describes aquaculture as one of the industries most directly affected by climate change and its impacts. 
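The wild-fish input mentioned above is often summarized as a "fish-in, fish-out" (FIFO) ratio, discussed further under Impacts on wild fish below. As a rough, non-authoritative sketch of how such a ratio can be computed, the Python fragment below uses one common convention; the function name, yield fractions, and feed quantities are illustrative assumptions, not figures from this article.

# Hedged illustration of a "fish-in, fish-out" (FIFO) calculation.
# All numbers below are hypothetical placeholders for demonstration only.
def fifo_ratio(fishmeal_used_kg, fish_oil_used_kg,
               fishmeal_yield, fish_oil_yield, farmed_output_kg):
    """Wild-fish input implied by feed use, divided by farmed-fish output.

    fishmeal_yield and fish_oil_yield are the assumed fractions of a whole
    wild fish recovered as meal and oil.
    """
    wild_fish_for_meal = fishmeal_used_kg / fishmeal_yield
    wild_fish_for_oil = fish_oil_used_kg / fish_oil_yield
    # One convention counts whichever ingredient implies the larger
    # wild-fish requirement, since meal and oil come from the same fish.
    wild_fish_equivalent = max(wild_fish_for_meal, wild_fish_for_oil)
    return wild_fish_equivalent / farmed_output_kg

# Hypothetical figures per tonne of farmed salmon:
print(round(fifo_ratio(300, 200, 0.22, 0.05, 1000), 2))  # prints 4.0

Other accounting conventions (for example, dividing by the sum of meal- and oil-derived wild fish) give different values, which is one reason published FIFO figures vary.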
Some forms of aquaculture have negative impacts on the environment, such as through nutrient pollution or disease transfer to wild populations. Overview Harvest stagnation in wild fisheries and overexploitation of popular marine species, combined with a growing demand for high-quality protein, encouraged aquaculturists to domesticate other marine species. At the outset of modern aquaculture, many were optimistic that a "Blue Revolution" could take place in aquaculture, just as the Green Revolution of the 20th century had revolutionized agriculture. Although land animals had long been domesticated, most seafood species were still caught from the wild. Concerned about the impact of growing demand for seafood on the world's oceans, prominent ocean explorer Jacques Cousteau wrote in 1973: "With earth's burgeoning human populations to feed, we must turn to the sea with new understanding and new technology." About 430 (97%) of the species cultured were domesticated during the 20th and 21st centuries, of which an estimated 106 came in the decade to 2007. Given the long-term importance of agriculture, to date, only 0.08% of known land plant species and 0.0002% of known land animal species have been domesticated, compared with 0.17% of known marine plant species and 0.13% of known marine animal species. Domestication typically involves about a decade of scientific research. Domesticating aquatic species involves fewer risks to humans than do land animals, which took a large toll in human lives. Most major human diseases originated in domesticated animals, including diseases such as smallpox and diphtheria, that like most infectious diseases, move to humans from animals. No human pathogens of comparable virulence have yet emerged from marine species. Biological control methods to manage parasites are already being used, such as cleaner fish (e.g. lumpsuckers and wrasse) to control sea lice populations in salmon farming. Models are being used to help with spatial planning and siting of fish farms in order to minimize impact. The decline in wild fish stocks has increased the demand for farmed fish. However, finding alternative sources of protein and oil for fish feed is necessary so the aquaculture industry can grow sustainably; otherwise, it represents a great risk for the over-exploitation of forage fish. Aquaculture production now exceeds capture fishery production and together the relative GDP contribution has ranged from 0.01 to 10%. Singling out aquaculture's relative contribution to GDP, however, is not easily derived due to lack of data. Another recent issue following the banning in 2008 of organotins by the International Maritime Organization is the need to find environmentally friendly, but still effective, compounds with antifouling effects. Many new natural compounds are discovered every year, but producing them on a large enough scale for commercial purposes is almost impossible. It is highly probable that future developments in this field will rely on microorganisms, but greater funding and further research is needed to overcome the lack of knowledge in this field. Species groups Aquatic plants Microalgae, also referred to as phytoplankton, microphytes, or planktonic algae, constitute the majority of cultivated algae. Macroalgae commonly known as seaweed also have many commercial and industrial uses, but due to their size and specific requirements, they are not easily cultivated on a large scale and are most often taken in the wild. 
In 2016, aquaculture was the source of 96.5 percent by volume of the total 31.2 million tonnes of wild-collected and cultivated aquatic plants combined. Global production of farmed aquatic plants, overwhelmingly dominated by seaweeds, grew in output volume from 13.5 million tonnes in 1995 to just over 30 million tonnes in 2016. Seaweed farming Fish The farming of fish is the most common form of aquaculture. It involves raising fish commercially in tanks, fish ponds, or ocean enclosures, usually for food. A facility that releases juvenile fish into the wild for recreational fishing or to supplement a species' natural numbers is generally referred to as a fish hatchery. Worldwide, the most important fish species used in fish farming are, in order, carp, salmon, tilapia, and catfish. In the Mediterranean, young bluefin tuna are netted at sea and towed slowly towards the shore. They are then interned in offshore pens (sometimes made from floating HDPE pipe) where they are further grown for the market. In 2009, researchers in Australia managed for the first time to coax southern bluefin tuna to breed in landlocked tanks. Southern bluefin tuna are also caught in the wild and fattened in grow-out sea cages in southern Spencer Gulf, South Australia. A similar process is used in the salmon-farming section of this industry; juveniles are taken from hatcheries and a variety of methods are used to aid them in their maturation. For example, as stated above, some of the most important fish species in the industry, salmon, can be grown using a cage system. This is done by having netted cages, preferably in open water that has a strong flow, and feeding the salmon a special food mixture that aids their growth. This process allows for year-round growth of the fish, thus a higher harvest during the correct seasons. An additional method, known sometimes as sea ranching, has also been used within the industry. Sea ranching involves raising fish in a hatchery for a brief time and then releasing them into marine waters for further development, whereupon the fish are recaptured when they have matured. Crustaceans Commercial shrimp farming began in the 1970s, and production grew steeply thereafter. Global production reached more than 1.6 million tonnes in 2003, worth about US$9 billion. About 75% of farmed shrimp is produced in Asia, in particular in China and Thailand. The other 25% is produced mainly in Latin America, where Brazil is the largest producer. Thailand is the largest exporter. Shrimp farming has changed from its traditional, small-scale form in Southeast Asia into a global industry. Technological advances have led to ever higher densities per unit area, and broodstock is shipped worldwide. Virtually all farmed shrimp are penaeids (i.e., shrimp of the family Penaeidae), and just two species of shrimp, the Pacific white shrimp and the giant tiger prawn, account for about 80% of all farmed shrimp. These industrial monocultures are very susceptible to disease, which has decimated shrimp populations across entire regions. Increasing ecological problems, repeated disease outbreaks, and pressure and criticism from both nongovernmental organizations and consumer countries led to changes in the industry in the late 1990s and generally stronger regulations. In 1999, governments, industry representatives, and environmental organizations initiated a program aimed at developing and promoting more sustainable farming practices through the Seafood Watch program. 
Freshwater prawn farming shares many characteristics with, including many problems with, marine shrimp farming. Unique problems are introduced by the developmental lifecycle of the main species, the giant river prawn. The global annual production of freshwater prawns (excluding crayfish and crabs) in 2007 was about 460,000 tonnes, exceeding 1.86 billion dollars. Additionally, China produced about 370,000 tonnes of Chinese river crab. In addition astaciculture is the freshwater farming of crayfish (mostly in the US, Australia, and Europe). Molluscs Aquacultured shellfish include various oyster, mussel, and clam species. These bivalves are filter and/or deposit feeders, which rely on ambient primary production rather than inputs of fish or other feed. As such, shellfish aquaculture is generally perceived as benign or even beneficial. Depending on the species and local conditions, bivalve molluscs are either grown on the beach, on longlines, or suspended from rafts and harvested by hand or by dredging. In May 2017 a Belgian consortium installed the first of two trial mussel farms on a wind farm in the North Sea. Abalone farming began in the late 1950s and early 1960s in Japan and China. Since the mid-1990s, this industry has become increasingly successful. Overfishing and poaching have reduced wild populations to the extent that farmed abalone now supplies most abalone meat. Sustainably farmed molluscs can be certified by Seafood Watch and other organizations, including the World Wildlife Fund (WWF). WWF initiated the "Aquaculture Dialogues" in 2004 to develop measurable and performance-based standards for responsibly farmed seafood. In 2009, WWF co-founded the Aquaculture Stewardship Council with the Dutch Sustainable Trade Initiative to manage the global standards and certification programs. After trials in 2012, a commercial "sea ranch" was set up in Flinders Bay, Western Australia, to raise abalone. The ranch is based on an artificial reef made up of 5000 () separate concrete units called abitats (abalone habitats). The 900 kg abitats can host 400 abalone each. The reef is seeded with young abalone from an onshore hatchery. The abalone feed on seaweed that has grown naturally on the habitats, with the ecosystem enrichment of the bay also resulting in growing numbers of dhufish, pink snapper, wrasse, and Samson fish, among other species. Brad Adams, from the company, has emphasised the similarity to wild abalone and the difference from shore-based aquaculture. "We're not aquaculture, we're ranching, because once they're in the water they look after themselves." Other groups Other groups include aquatic reptiles, amphibians, and miscellaneous invertebrates, such as echinoderms and jellyfish. They are separately graphed at the top right of this section, since they do not contribute enough volume to show clearly on the main graph. Commercially harvested echinoderms include sea cucumbers and sea urchins. In China, sea cucumbers are farmed in artificial ponds as large as . Global fish production Global fish production peaked at about 171 million tonnes in 2016, with aquaculture representing 47 percent of the total and 53 percent if non-food uses (including reduction to fishmeal and fish oil) are excluded. With capture fishery production relatively static since the late 1980s, aquaculture has been responsible for the continuing growth in the supply of fish for human consumption. 
Global aquaculture production (including aquatic plants) in 2016 was 110.2 million tonnes, with the first-sale value estimated at US$244 billion. Three years later, in 2019 the reported output from global aquaculture operations was over 120 million tonnes valued at US$274 billion and by 2022 it had reached 130.9 million tonnes, valued at USD 312.8 billion. For the first time, aquaculture surpassed capture fisheries in aquatic animal production with 94.4 million tonnes, representing 51 percent of the world total and a record 57 percent of the production destined for human consumption. In 2022 most aquaculture workers were in Asia (95%), followed by Africa (3%) and Latin America and the Caribbean (2%). The contribution of aquaculture to the global production of capture fisheries and aquaculture combined has risen continuously, reaching 46.8 percent in 2016, up from 25.7 percent in 2000. With 5.8 percent annual growth rate during the period 2001–2016, aquaculture continues to grow faster than other major food production sectors, but it no longer has the high annual growth rates experienced in the 1980s and 1990s. In 2012, the total world production of fisheries was 158 million tonnes, of which aquaculture contributed 66.6 million tonnes, about 42%. The growth rate of worldwide aquaculture has been sustained and rapid, averaging about 8% per year for over 30 years, while the take from wild fisheries has been essentially flat for the last decade. The aquaculture market reached $86 billion in 2009. Aquaculture is an especially important economic activity in China. Between 1980 and 1997, the Chinese Bureau of Fisheries reports, aquaculture harvests grew at an annual rate of 16.7%, jumping from 1.9 million tonnes to nearly 23 million tonnes. In 2005, China accounted for 70% of world production. Aquaculture is also currently one of the fastest-growing areas of food production in the U.S. About 90% of all U.S. shrimp consumption is farmed and imported. In recent years, salmon aquaculture has become a major export in southern Chile, especially in Puerto Montt, Chile's fastest-growing city. A United Nations report titled The State of the World Fisheries and Aquaculture released in May 2014 maintained fisheries and aquaculture support the livelihoods of some 60 million people in Asia and Africa. FAO estimates that in 2016, overall, women accounted for nearly 14 percent of all people directly engaged in the fisheries and aquaculture primary sector. In 2021, global fish production reached 182 million tonnes, with approximately equal amounts coming from capture (91.2 million tonnes) and aquaculture (90.9 million tonnes). Aquaculture has experienced rapid growth in recent decades, increasing almost sevenfold from 1990 to 2021. Over-reporting by China China overwhelmingly dominates the world in reported aquaculture output, reporting a total output which is double that of the rest of the world put together. However, there are some historical issues with the accuracy of China's returns. In 2001, scientists Reg Watson and Daniel Pauly expressed concerns that China was over reporting its catch from wild fisheries in the 1990s. They said that made it appear that the global catch since 1988 was increasing annually by 300,000 tonnes, whereas it was really shrinking annually by 350,000 tonnes. Watson and Pauly suggested this may have been related to Chinese policies where state entities that monitored the economy were also tasked with increasing output. 
Also, until more recently, the promotion of Chinese officials was based on production increases from their own areas. China disputed this claim. The official Xinhua News Agency quoted Yang Jian, director general of the Agriculture Ministry's Bureau of Fisheries, as saying that China's figures were "basically correct". However, the FAO accepted there were issues with the reliability of China's statistical returns, and for a period treated data from China, including the aquaculture data, apart from the rest of the world. Aquacultural methods Mariculture Mariculture is the cultivation of marine organisms in seawater, variously in sheltered coastal waters ("inshore"), open ocean ("offshore"), and on land ("onshore"). Farmed species include algae, from microalgae (such as phytoplankton) to macroalgae (such as seaweed); shellfish, such as shrimp, lobster, oysters, and clams; and marine finfish. Channel catfish (Ictalurus punctatus), hard clams (Mercenaria mercenaria) and Atlantic salmon (Salmo salar) are prominent in U.S. mariculture. Mariculture may consist of raising the organisms on or in artificial enclosures such as in floating netted enclosures for salmon, and on racks or in floating cages for oysters. In the case of enclosed salmon, they are fed by the operators; oysters on racks filter feed on naturally available food. Abalone have been farmed on an artificial reef consuming seaweed which grows naturally on the reef units. Integrated Integrated multi-trophic aquaculture (IMTA) is a practice in which the byproducts (wastes) from one species are recycled to become inputs (fertilizers, food) for another. Fed aquaculture (for example, fish, shrimp) is combined with inorganic extractive and organic extractive (for example, shellfish) aquaculture to create balanced systems for environmental sustainability (biomitigation), economic stability (product diversification and risk reduction) and social acceptability (better management practices). "Multi-trophic" refers to the incorporation of species from different trophic or nutritional levels in the same system. This is one potential distinction from the age-old practice of aquatic polyculture, which could simply be the co-culture of different fish species from the same trophic level. In this case, these organisms may all share the same biological and chemical processes, with few synergistic benefits, which could potentially lead to significant shifts in the ecosystem. Some traditional polyculture systems may, in fact, incorporate a greater diversity of species, occupying several niches, as extensive cultures (low intensity, low management) within the same pond. A working IMTA system can result in greater total production based on mutual benefits to the co-cultured species and improved ecosystem health, even if the production of individual species is lower than in a monoculture over a short-term period. Sometimes the term "integrated aquaculture" is used to describe the integration of monocultures through water transfer. For all intents and purposes, however, the terms "IMTA" and "integrated aquaculture" differ only in their degree of descriptiveness. Aquaponics, fractionated aquaculture, integrated agriculture-aquaculture systems, integrated peri-urban-aquaculture systems, and integrated fisheries-aquaculture systems are other variations of the IMTA concept. 
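The nutrient-recycling idea behind IMTA can be sketched as a simple mass balance: waste from the fed species becomes an input for the extractive species, lowering net discharge. The following Python fragment is a toy illustration only; every coefficient (retention and uptake fractions) is a made-up placeholder, not a measured value or a standard model.

# Toy nitrogen budget for an integrated multi-trophic system (illustrative only).
def net_nitrogen_discharge(feed_n_kg, fish_retention=0.35,
                           shellfish_uptake=0.20, seaweed_uptake=0.30):
    """Nitrogen leaving the system after the fed and extractive components.

    fish_retention: assumed fraction of feed nitrogen retained in fish biomass.
    shellfish_uptake / seaweed_uptake: assumed fractions of the remaining
    waste nitrogen captured by organic and inorganic extractive species.
    """
    waste_n = feed_n_kg * (1.0 - fish_retention)
    after_shellfish = waste_n * (1.0 - shellfish_uptake)
    after_seaweed = after_shellfish * (1.0 - seaweed_uptake)
    return after_seaweed

monoculture = net_nitrogen_discharge(100, shellfish_uptake=0, seaweed_uptake=0)
imta = net_nitrogen_discharge(100)
print(round(monoculture, 1), round(imta, 1))  # 65.0 vs 36.4 kg N discharged

The point of the sketch is structural rather than numerical: each extractive trophic level multiplies the waste stream by a factor below one, which is why IMTA systems are described as balanced for biomitigation.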
Urban aquaculture Netting materials Various materials, including nylon, polyester, polypropylene, polyethylene, plastic-coated welded wire, rubber, patented rope products (Spectra, Thorn-D, Dyneema), galvanized steel and copper are used for netting in aquaculture fish enclosures around the world. All of these materials are selected for a variety of reasons, including design feasibility, material strength, cost, and corrosion resistance. Recently, copper alloys have become important netting materials in aquaculture because they are antimicrobial (i.e., they destroy bacteria, viruses, fungi, algae, and other microbes) and they therefore prevent biofouling (i.e., the undesirable accumulation, adhesion, and growth of microorganisms, plants, algae, tubeworms, barnacles, mollusks, and other organisms). By inhibiting microbial growth, copper alloy aquaculture cages avoid costly net changes that are necessary with other materials. The resistance of organism growth on copper alloy nets also provides a cleaner and healthier environment for farmed fish to grow and thrive. Technology Uncrewed vessels, like ROVs and AUVs, are now being used in aquaculture in various ways, such as site planning, cage or net inspection, environmental monitoring, disaster assessment, and risk reduction. The use of uncrewed vessels aims to increase the safety, efficiency, and accuracy of aquaculture operations. Aquaculture is a multi-million-dollar business that relies on net and cage maintenance. Inspections used to be conducted by divers manually inspecting the nets, but uncrewed vessels are now being used to conduct faster and more efficient inspections. Biofloc technology is also used to simultaneously improve water quality and generate bacterial biomass as food for the cultured animals. Issues If performed without consideration for potential local environmental impacts, aquaculture in inland waters can result in more environmental damage than wild fisheries, though with less waste produced per kg on a global scale. Local concerns with aquaculture in inland waters may include waste handling, side-effects of antibiotics, competition between farmed and wild animals, and the potential introduction of invasive plant and animal species, or foreign pathogens, particularly if unprocessed fish are used to feed more marketable carnivorous fish. If non-local live feeds are used, aquaculture may introduce exotic plants or animals with disastrous effects. Improvements in methods resulting from advances in research and the availability of commercial feeds have reduced some of these concerns since their greater prevalence in the 1990s and 2000s. Fish waste is organic and composed of nutrients necessary in all components of aquatic food webs. In-ocean aquaculture often produces much higher than normal fish waste concentrations. The waste collects on the ocean bottom, damaging or eliminating bottom-dwelling life. Waste can also decrease dissolved oxygen levels in the water column, putting further pressure on wild animals. An alternative to adding food to the ecosystem is the installation of artificial reef structures to increase the habitat niches available, without the need to add any more than ambient feed and nutrients. This has been used in the "ranching" of abalone in Western Australia. Impacts on wild fish Some carnivorous and omnivorous farmed fish species are fed wild forage fish. 
Although carnivorous farmed fish represented only 13 percent of aquaculture production by weight in 2000, they represented 34 percent of aquaculture production by value. Farming of carnivorous species like salmon and shrimp leads to a high demand for forage fish to match the nutrition they get in the wild. Fish do not actually produce omega-3 fatty acids, but instead accumulate them from either consuming microalgae that produce these fatty acids, as is the case with forage fish like herring and sardines, or, as is the case with fatty predatory fish, like salmon, by eating prey fish that have accumulated omega-3 fatty acids from microalgae. To satisfy this requirement, more than 50 percent of the world fish oil production is fed to farmed salmon. Farmed salmon consume more wild fish than they generate as a final product, although the efficiency of production is improving. To produce one kilogram of farmed salmon, products from several kilograms of wild fish are fed to them – this can be described as the "fish-in-fish-out" (FIFO) ratio. In 1995, salmon had a FIFO ratio of 7.5 (meaning 7.5 kilograms of wild fish feed were required to produce one kilogram of salmon); by 2006 the ratio had fallen to 4.9. Additionally, a growing share of fish oil and fishmeal comes from residues (byproducts of fish processing), rather than dedicated whole fish. In 2012, 34 percent of fish oil and 28 percent of fishmeal came from residues. However, fishmeal and oil from residues instead of whole fish have a different composition with more ash and less protein, which may limit their potential use for aquaculture. As the salmon farming industry expands, it requires more wild forage fish for feed, at a time when seventy-five percent of the world's monitored fisheries are already near to or have exceeded their maximum sustainable yield. The industrial-scale extraction of wild forage fish for salmon farming then impacts the survivability of the wild predator fish that rely on them for food. An important step in reducing the impact of aquaculture on wild fish is shifting carnivorous species to plant-based feeds. Salmon feeds, for example, have gone from containing only fishmeal and oil to containing 40 percent plant protein. The USDA has also experimented with using grain-based feeds for farmed trout. When properly formulated (and often mixed with fishmeal or oil), plant-based feeds can provide proper nutrition and similar growth rates in carnivorous farmed fish. Another impact aquaculture production can have on wild fish is the risk of fish escaping from coastal pens, where they can interbreed with their wild counterparts, diluting wild genetic stocks. Escaped fish can become invasive, out-competing native species. Animal welfare As with the farming of terrestrial animals, social attitudes influence the need for humane practices and regulations in farmed marine animals. Under the guidelines advised by the Farm Animal Welfare Council, good animal welfare means both fitness and a sense of well-being in the animal's physical and mental state. This can be defined by the Five Freedoms: Freedom from hunger and thirst Freedom from discomfort Freedom from pain, disease, or injury Freedom to express normal behaviour Freedom from fear and distress However, the controversial issue in aquaculture is whether fish and farmed marine invertebrates are actually sentient, or have the perception and awareness to experience suffering. 
Although no evidence of this has been found in marine invertebrates, recent studies conclude that fish do have the necessary receptors (nociceptors) to sense noxious stimuli and so are likely to experience states of pain, fear and stress. Consequently, welfare in aquaculture is directed at vertebrates, finfish in particular. Common welfare concerns Welfare in aquaculture can be impacted by a number of issues such as stocking densities, behavioural interactions, disease and parasitism. A major problem in determining the cause of impaired welfare is that these issues are often all interrelated and influence each other at different times. Optimal stocking density is often defined by the carrying capacity of the stocked environment and the amount of individual space needed by the fish, which is very species-specific. Although behavioural interactions such as shoaling may mean that high stocking densities are beneficial to some species, in many cultured species high stocking densities may be of concern. Crowding can constrain normal swimming behaviour, as well as increase aggressive and competitive behaviours such as cannibalism, feed competition, territoriality and dominance/subordination hierarchies. This potentially increases the risk of tissue damage due to abrasion from fish-to-fish contact or fish-to-cage contact. Fish can suffer reductions in food intake and food conversion efficiency. In addition, high stocking densities can result in water flow being insufficient, creating inadequate oxygen supply and waste product removal. Dissolved oxygen is essential for fish respiration and concentrations below critical levels can induce stress and even lead to asphyxiation. Ammonia, a nitrogen excretion product, is highly toxic to fish at accumulated levels, particularly when oxygen concentrations are low. Many of these interactions and effects cause stress in the fish, which can be a major factor in facilitating fish disease. For many parasites, infestation depends on the host's degree of mobility, the density of the host population and vulnerability of the host's defence system. Sea lice are the primary parasitic problem for finfish in aquaculture, high numbers causing widespread skin erosion and haemorrhaging, gill congestion, and increased mucus production. There are also a number of prominent viral and bacterial pathogens that can have severe effects on internal organs and nervous systems. Improving welfare The key to improving welfare of marine cultured organisms is to reduce stress to a minimum, as prolonged or repeated stress can cause a range of adverse effects. Attempts to minimise stress can occur throughout the culture process. Understanding and providing the required environmental enrichment can be vital for reducing stress and can benefit aquaculture objectives such as improved growth, body condition, and reduced damage from aggression. During grow-out it is important to keep stocking densities at appropriate levels specific to each species, as well as separating size classes and grading to reduce aggressive behavioural interactions. Keeping nets and cages clean can assist positive water flow to reduce the risk of water degradation. Not surprisingly, disease and parasitism can have a major effect on fish welfare, and it is important for farmers not only to manage infected stock but also to apply disease prevention measures. However, prevention methods, such as vaccination, can also induce stress because of the extra handling and injection. 
Other methods include adding antibiotics to feed, adding chemicals into water for treatment baths and biological control, such as using cleaner wrasse to remove lice from farmed salmon. Many steps are involved in transport, including capture, food deprivation to reduce faecal contamination of transport water, transfer to transport vehicle via nets or pumps, plus transport and transfer to the delivery location. During transport, water needs to be maintained at a high quality, with regulated temperature, sufficient oxygen and minimal waste products. In some cases, anaesthetics may be used in small doses to calm fish before transport. Aquaculture is sometimes part of an environmental rehabilitation program or serves as an aid in conserving endangered species. Coastal ecosystems Aquaculture is becoming a significant threat to coastal ecosystems. About 20 percent of mangrove forests have been destroyed since 1980, partly due to shrimp farming. An extended cost–benefit analysis of the total economic value of shrimp aquaculture built on mangrove ecosystems found that the external costs were much higher than the external benefits. Over four decades, of Indonesian mangroves have been converted to shrimp farms. Most of these farms are abandoned within a decade because of the toxin build-up and nutrient loss. Pollution from sea cage aquaculture Salmon farms are typically sited in pristine coastal ecosystems which they then pollute. A farm with 200,000 salmon discharges more fecal waste than a city of 60,000 people. This waste is discharged directly into the surrounding aquatic environment, untreated, often containing antibiotics and pesticides. There is also an accumulation of heavy metals on the benthos (seafloor) near the salmon farms, particularly copper and zinc. In 2016, mass fish kill events impacted salmon farms along Chile's coast and the wider ecology. Increases in aquaculture production and its associated effluent were considered to be possible contributing factors to fish and molluscan mortality. Sea cage aquaculture is responsible for nutrient enrichment of the waters in which the cages are established. This results from fish wastes and uneaten feed inputs. Elements of most concern are nitrogen and phosphorus which can promote algal growth, including harmful algal blooms which can be toxic to fish. Flushing times, current speeds, distance from the shore and water depth are important considerations when locating sea cages in order to minimize the impacts of nutrient enrichment on coastal ecosystems. The extent of the effects of pollution from sea-cage aquaculture varies depending on where the cages are located, which species are kept, how densely cages are stocked and what the fish are fed. Important species-specific variables include the species' food conversion ratio (FCR) and nitrogen retention. Freshwater ecosystems Whole-lake experiments carried out at the Experimental Lakes Area in Ontario, Canada, have displayed the potential for cage aquaculture to cause numerous changes in freshwater ecosystems. Following the initiation of an experimental rainbow trout cage farm in a small boreal lake, dramatic reductions in Mysis concentrations associated with a decrease in dissolved oxygen were observed. Significant increases in ammonium and total phosphorus, a driver for eutrophication in freshwater systems, were measured in the hypolimnion of the lake. 
Annual phosphorus inputs from aquaculture waste exceeded that of natural inputs from atmospheric deposition and inflows, and phytoplankton biomass showed a fourfold annual increase following the initiation of the experimental farm. Genetic modification A type of salmon called the AquAdvantage salmon has been genetically modified for faster growth, although it has not been approved for commercial use, due to controversy. The altered salmon incorporates a growth hormone from a Chinook salmon that allows it to reach full size in 16–28 months, instead of the normal 36 months for Atlantic salmon, while consuming 25 percent less feed. The U.S. Food and Drug Administration reviewed the AquAdvantage salmon in a draft environmental assessment and determined that it "would not have a significant impact (FONSI) on the U.S. environment." Fish diseases, parasites and vaccines A major difficulty for aquaculture is the tendency towards monoculture and the associated risk of widespread disease. Aquaculture is also associated with environmental risks; for instance, shrimp farming has caused the destruction of important mangrove forests throughout southeast Asia. In the 1990s, disease wiped out China's farmed Farrer's scallop and white shrimp and required their replacement by other species. Needs of the aquaculture sector in vaccines Aquaculture has an average annual growth rate of 9.2%; however, the success and continued expansion of the fish farming sector are highly dependent on the control of fish pathogens, including a wide range of viruses, bacteria, fungi, and parasites. In 2014, it was estimated that these parasites cost the global salmon farming industry up to 400 million euros. This represents 6–10% of the production value of the affected countries, but it can go up to 20% (Fisheries and Oceans Canada, 2014). Since pathogens quickly spread within a population of cultured fish, their control is vital for the sector. Historically, antibiotics were used against bacterial epizootics, but the production of animal protein has to be sustainable, which means that preventive measures acceptable from a biological and environmental point of view should be used to keep disease problems in aquaculture at an acceptable level. This, together with the efficacy of vaccines, resulted in an immediate and lasting reduction in the use of antibiotics in the 1990s. Early immersion vaccines for fish were effective against vibriosis but proved ineffective against furunculosis, which led to the arrival of injectable vaccines: first water-based and later oil-based formulations, which are much more effective (Sommerset, 2005). Development of new vaccines High mortality among caged farmed fish, debates around DNA injection vaccines (which are effective but raise questions about safety and side effects), and societal expectations for cleaner and safer fish all drive research on new vaccine vectors. Several initiatives are financed by the European Union to develop a rapid and cost-effective approach to using bacteria in feed to make vaccines, in particular using lactic acid bacteria whose DNA has been modified (Boudinot, 2006). In fact, vaccinating farmed fish by injection is time-consuming and costly, so vaccines can be administered orally or by immersion by being added to feed or directly into water. This allows many individuals to be vaccinated at the same time while limiting the associated handling and stress. 
Indeed, many tests are necessary because the antigens of the vaccines must be adapted to each species and must not exceed a certain level of variability, or they will have no effect. For example, tests have been done with two species: Lepeophtheirus salmonis (from which the antigens were collected) and Caligus rogercresseyi (which was vaccinated with the antigens); although the homology between the two species is substantial, the level of variability made the protection ineffective (Fisheries and Oceans Canada, 2014). Recent vaccine developments in aquaculture There are 24 vaccines available for fish and one for lobsters. The first vaccine was used in the USA against enteric redmouth disease in 1976. Nowadays, 19 companies and a number of smaller stakeholders produce vaccines for aquaculture. Novel approaches offer a way forward to prevent the roughly 10% of aquaculture production lost to disease. Genetically modified vaccines are not being used in the EU due to societal concerns and regulations. Meanwhile, DNA vaccines are now authorised in the EU. There are challenges in fish vaccine development, such as weak immune responses due to a lack of potent adjuvants. Scientists are considering microdose application in the future. But there are also opportunities in aquaculture vaccinology due to the low cost of the technology, changing regulations, and novel antigen expression and delivery systems. In Norway, a subunit vaccine (VP2 peptide) against infectious pancreatic necrosis is being used. In Canada, a licensed DNA vaccine against infectious hematopoietic necrosis has been launched for industry use. Fish have large mucosal surfaces, so the preferred routes are immersion, intraperitoneal injection, and oral delivery. Nanoparticle systems are being developed for delivery purposes. The common antibodies produced are IgM and IgT. Normally, a booster is not required in fish, because a booster produces more memory cells rather than an increased level of antibodies. mRNA vaccines are an alternative to DNA vaccines because they are safer, stable, easily produced at large scale, and suited to mass immunization. Recently they have been used in cancer prevention and therapeutics. Studies in rabies have shown that efficacy depends on dose and route of administration. These approaches are still in their infancy. Economic gains In 2014, farmed fish overtook wild-caught fish in the supply of food for human consumption. This means there is a huge demand for vaccines for disease prevention. The reported annual loss of fish amounts to more than US$10 billion, resulting from approximately 10% of all farmed fish dying from infectious diseases. These high annual losses increase the demand for vaccines. Even though there are about 24 traditionally used vaccines, there is still demand for more vaccines. The breakthrough of DNA vaccines has lowered the cost of vaccines. The alternatives to vaccines are antibiotics and chemotherapy, which are more expensive and have bigger drawbacks. DNA vaccines have become the most cost-efficient method of preventing infectious diseases. This bodes well for DNA vaccines becoming the new standard, both for fish and for vaccines more generally. Salinization/acidification of soils Sediment from abandoned aquaculture farms can remain hypersaline, acidic and eroded. This material can remain unusable for aquaculture purposes for long periods thereafter. Various chemical treatments, such as adding lime, can aggravate the problem by modifying the physicochemical characteristics of the sediment. 
Plastic pollution Aquaculture produces a range of marine debris, depending on the product and location. The most frequently documented type of plastic is expanded polystyrene (EPS), used extensively in floats and sea cage collars (MEPC 2020). Other common waste items include cage nets and plastic harvest bins. A review of aquaculture as a source of marine litter in the North, Baltic and Mediterranean Seas identified 64 different items, 19 of which were unique to aquaculture. Estimates of the amount of aquaculture waste entering the oceans vary widely, depending on the methodologies used. For example, in the European Economic Area, loss estimates have varied from a low of 3,000 tonnes to 41,000 tonnes per year. Ecological benefits While some forms of aquaculture can be devastating to ecosystems, such as shrimp farming in mangroves, other forms can be beneficial. Shellfish aquaculture adds substantial filter feeding capacity to an environment, which can significantly improve water quality. A single oyster can filter 15 gallons of water a day, removing microscopic algal cells. By removing these cells, shellfish are removing nitrogen and other nutrients from the system and either retaining them or releasing them as waste which sinks to the bottom. When these shellfish are harvested, the nitrogen they retained is completely removed from the system. Raising and harvesting kelp and other macroalgae directly removes nutrients such as nitrogen and phosphorus. Repackaging these nutrients can relieve eutrophic, or nutrient-rich, conditions known for their low dissolved oxygen which can decimate species diversity and abundance of marine life. Removing algal cells from the water also increases light penetration, allowing plants such as eelgrass to reestablish themselves and further increase oxygen levels. Aquaculture in an area can provide crucial ecological functions for its inhabitants. Shellfish beds or cages can provide habitat structure. This structure can be used as shelter by invertebrates, small fish or crustaceans to potentially increase their abundance and maintain biodiversity. Increased shelter raises stocks of prey fish and small crustaceans by increasing recruitment opportunities, in turn providing more prey for higher trophic levels. One study estimated that 10 square meters of oyster reef could enhance an ecosystem's biomass by 2.57 kg. Herbivorous shellfish will also be preyed on. This moves energy directly from primary producers to higher trophic levels, potentially skipping multiple energetically costly trophic steps and thereby increasing biomass in the ecosystem. Seaweed farming is a carbon-negative crop, with a high potential for climate change mitigation. The IPCC Special Report on the Ocean and Cryosphere in a Changing Climate recommends "further research attention" as a mitigation tactic. Regenerative ocean farming is a polyculture farming system that grows a mix of seaweeds and shellfish while sequestering carbon, decreasing nitrogen in the water and increasing oxygen, helping to regenerate and restore local habitat like reef ecosystems. Prospects Global wild fisheries are in decline, with valuable habitat such as estuaries in critical condition. The aquaculture or farming of piscivorous fish, like salmon, does not help the problem because they need to eat products from other fish, such as fish meal and fish oil. Studies have shown that salmon farming has major negative impacts on wild salmon, as well as the forage fish that need to be caught to feed them. 
Fish that are higher on the food chain are less efficient sources of food energy. Apart from fish and shrimp, some aquaculture undertakings, such as seaweed and filter-feeding bivalve mollusks like oysters, clams, mussels and scallops, are relatively benign and even environmentally restorative. Filter-feeders filter pollutants as well as nutrients from the water, improving water quality. Seaweeds extract nutrients such as inorganic nitrogen and phosphorus directly from the water, and filter-feeding mollusks can extract nutrients as they feed on particulates, such as phytoplankton and detritus. Some profitable aquaculture cooperatives promote sustainable practices. New methods lessen the risk of biological and chemical pollution through minimizing fish stress, fallowing netpens, and applying integrated pest management. Vaccines are being used more and more to reduce antibiotic use for disease control. Onshore recirculating aquaculture systems, facilities using polyculture techniques, and properly sited facilities (for example, offshore areas with strong currents) are examples of ways to manage negative environmental effects. Recirculating aquaculture systems (RAS) recycle water by circulating it through filters to remove fish waste and food and then recirculating it back into the tanks. This saves water, and the waste gathered can be used in compost or, in some cases, treated and used on land. While RAS was developed with freshwater fish in mind, scientists associated with the Agricultural Research Service have found a way to rear saltwater fish using RAS in low-salinity waters. Although saltwater fish are raised in off-shore cages or caught with nets in water that typically has a salinity of 35 parts per thousand (ppt), scientists were able to produce healthy pompano, a saltwater fish, in tanks with a salinity of only 5 ppt. Commercializing low-salinity RAS is predicted to have positive environmental and economic effects. Unwanted nutrients from the fish food would not be added to the ocean, and the risk of transmitting diseases between wild and farm-raised fish would be greatly reduced. The price of expensive saltwater fish, such as the pompano and cobia used in the experiments, would be reduced. However, before any of this can be done, researchers must study every aspect of the fish's lifecycle, including the amount of ammonia and nitrate the fish will tolerate in the water, what to feed the fish during each stage of its lifecycle, the stocking rate that will produce the healthiest fish, etc. Some 16 countries now use geothermal energy for aquaculture, including China, Israel, and the United States. In California, for example, 15 fish farms produce tilapia, bass, and catfish with warm water from underground. This warmer water enables fish to grow all year round and mature more quickly. Collectively these California farms produce 4.5 million kilograms of fish each year. Global goals The UN Sustainable Development Goal 14 ("life below water"), Target 14.7 includes aquaculture: "By 2030, increase the economic benefits to small island developing states and least developed countries from the sustainable use of marine resources, including through sustainable management of fisheries, aquaculture and tourism". Aquaculture's contribution to GDP is not included in SDG Target 14.7 but methods for quantifying this have been explored by FAO. 
National laws, regulations, and management Laws governing aquaculture practices vary greatly by country and are often not closely regulated or easily traceable. In the United States, land-based and nearshore aquaculture is regulated at the federal and state levels; however, no national laws govern offshore aquaculture in U.S. exclusive economic zone waters. In June 2011, the Department of Commerce and National Oceanic and Atmospheric Administration released national aquaculture policies to address this issue and "to meet the growing demand for healthy seafood, to create jobs in coastal communities, and restore vital ecosystems." Large aquaculture facilities (i.e. those producing per year) which discharge wastewater are required to obtain permits pursuant to the Clean Water Act. Facilities that produce at least of fish, molluscs or crustaceans a year are subject to specific national discharge standards. Other permitted facilities are subject to effluent limitations that are developed on a case-by-case basis. By country History The Gunditjmara, a local Aboriginal Australian people in south-western Victoria, Australia, may have raised short-finned eels as early as about 4,580 BCE. Evidence indicates they developed about of volcanic floodplains in the vicinity of Lake Condah into a complex of channels and dams, and used woven traps to capture eels, and to preserve them to eat all year round. The local Budj Bim Cultural Landscape, a World Heritage Site, is one of the oldest known aquaculture sites in the world. Oral tradition in China tells of the culture of the common carp, Cyprinus carpio, as long ago as 2000–2100 BCE (around 4,000 years BP), but the earliest significant evidence lies in the literature, in the earliest monograph on fish culture called The Classic of Fish Culture, by Fan Li, written around 475 BCE ( BP). Another ancient Chinese guide to aquaculture, written by Yang Yu Jing around 460 BCE, shows that carp farming was becoming more sophisticated. The Jiahu site in China shows circumstantial archeological evidence of possibly the oldest aquaculture location, dating from 6200 BCE (about 8,200 years BP), but this is speculative. When the waters subsided after river floods, some fish, mainly carp, were trapped in lakes. Early aquaculturists fed their brood using nymphs and silkworm faeces, and ate them. Ancient Egyptians might have farmed fish (especially gilt-head bream) from Lake Bardawil about 1,500 BCE (about 3,500 BP), and they traded them with Canaan. Gim cultivation is the oldest aquaculture in Korea. Early cultivation methods used bamboo or oak sticks; newer methods utilizing nets replaced them in the 19th century. Floating rafts have been used for mass production since the 1920s. Japanese people cultivated seaweed by providing bamboo poles and, later, nets and oyster shells to serve as anchoring-surfaces for spores. Romans bred fish in ponds and farmed oysters in coastal lagoons before 100 CE. In medieval Europe, early Christian monasteries adopted Roman aquacultural practices. Aquaculture spread because people away from coasts and big rivers were otherwise dependent on fish, which required salting in order to be preserved. Fish was an important food source in medieval Europe, when on average 150 days per year were days of fasting and abstinence, and meat was prohibited. Improvements in transportation during the 19th century made fresh fish easily available and inexpensive, even in inland areas, rendering aquaculture less popular. 
The 15th-century fishponds of the Trebon Basin in the present-day Czech Republic are maintained as a tentative UNESCO World Heritage Site. Samoans practised "a traditional form of giant clam ranching". Hawaiians constructed oceanic fish ponds. A remarkable example is the "Menehune" fishpond dating from at least 1,000 years ago, at Alekoko. Legend records its construction by the mythical Menehune dwarf-people. In the first half of the 18th century, German Stephan Ludwig Jacobi experimented with external fertilization of brown trout and salmon. He wrote an article "" (On the Artificial Production of Trout and Salmon) summarizing his findings, earning him a reputation as the founder of artificial fish-rearing. By the latter decades of the 18th century, oyster-farming had begun in estuaries along the Atlantic Coast of North America. The word "aquaculture" appeared in an 1855 newspaper article in reference to the harvesting of ice. It also appeared in descriptions of the terrestrial agricultural practice of sub-irrigation in the late-19th century before becoming associated primarily with the cultivation of aquatic plant- and animal-species. (The Oxford English Dictionary records the common modern usage of "aquaculture" from 1887; and that of "aquiculture" from 1867.) In 1859, Stephen Ainsworth of West Bloomfield, New York, began experiments with brook trout. By 1864, Seth Green had established a commercial fish-hatching operation at Caledonia Springs, near Rochester, New York. By 1866, with the involvement of W. W. Fletcher of Concord, Massachusetts, artificial fish-hatcheries operated in both Canada and the United States. When the Dildo Island fish hatchery opened in Newfoundland in 1889, it was the largest and most advanced in the world. The word "aquaculture" was used in descriptions of the hatchery's experiments with cod and lobster in 1890. By the 1920s, the American Fish Culture Company of Carolina, Rhode Island, founded in the 1870s, was one of the leading producers of trout. During the 1940s, they perfected the method of manipulating the day- and night-cycle of fish so that they could be artificially spawned year-round. Californians harvested wild kelp and attempted to manage supply around 1900, later labeling it a wartime resource.
Technology
Forms
null
1635
https://en.wikipedia.org/wiki/Kolmogorov%20complexity
Kolmogorov complexity
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963, and it is a generalization of classical information theory. The notion of Kolmogorov complexity can be used to state and prove impossibility results akin to Cantor's diagonal argument, Gödel's incompleteness theorem, and Turing's halting problem. In particular, no program P computing a lower bound for each text's Kolmogorov complexity can return a value essentially larger than P's own length (see section ); hence no single program can compute the exact Kolmogorov complexity for infinitely many texts. Kolmogorov complexity is the length of the ultimately compressed version of a file (i.e., anything which can be put in a computer). Formally, it is the length of a shortest program from which the file can be reconstructed. While Kolmogorov complexity is uncomputable, various approaches have been proposed and reviewed. Definition Intuition Consider the following two strings of 32 lowercase letters and digits: abababababababababababababababab, and 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7 The first string has a short English-language description, namely "write ab 16 times", which consists of 17 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, i.e., "write 4c1j5b2p0cv4w1x8rx2y39umgw5q85s7", which has 38 characters. Hence the operation of writing the first string can be said to have "less complexity" than writing the second. More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below). It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings like the abab example above, whose Kolmogorov complexity is small relative to the string's size, are not considered to be complex. The Kolmogorov complexity can be defined for any mathematical object, but for simplicity the scope of this article is restricted to strings. We must first specify a description language for strings. Such a description language can be based on any computer programming language, such as Lisp, Pascal, or Java. If P is a program which outputs a string x, then P is a description of x. The length of the description is just the length of P as a character string, multiplied by the number of bits in a character (e.g., 7 for ASCII). We could, alternatively, choose an encoding for Turing machines, where an encoding is a function which associates to each Turing Machine M a bitstring <M>. If M is a Turing Machine which, on input w, outputs string x, then the concatenated string <M> w is a description of x. For theoretical analysis, this approach is more suited for constructing detailed formal proofs and is generally preferred in the research literature. In this article, an informal approach is discussed. 
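Since Kolmogorov complexity itself is uncomputable, a general-purpose compressor only gives a crude upper-bound proxy for it; still, such a proxy makes the intuition about the two strings above concrete. The following Python sketch (an illustration, not a definition) compares the compressed sizes of the regular and the irregular string.

# Rough illustration only: compressed length is an upper-bound proxy for
# Kolmogorov complexity, not the quantity itself (which is uncomputable).
import zlib

s1 = b"ab" * 16                               # highly regular string
s2 = b"4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"      # no obvious pattern

for s in (s1, s2):
    print(len(s), len(zlib.compress(s, 9)))
# The regular string compresses to far fewer bytes than the irregular one,
# mirroring the short "write ab 16 times" description given above.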
Any string s has at least one description. For example, the second string above is output by the pseudo-code: function GenerateString2() return "4c1j5b2p0cv4w1x8rx2y39umgw5q85s7" whereas the first string is output by the (much shorter) pseudo-code: function GenerateString1() return "ab" × 16 If a description d(s) of a string s is of minimal length (i.e., using the fewest bits), it is called a minimal description of s, and the length of d(s) (i.e. the number of bits in the minimal description) is the Kolmogorov complexity of s, written K(s). Symbolically, K(s) = |d(s)|. The length of the shortest description will depend on the choice of description language; but the effect of changing languages is bounded (a result called the invariance theorem). Plain Kolmogorov complexity C There are two definitions of Kolmogorov complexity: plain and prefix-free. The plain complexity is the minimal description length of any program, and is denoted C(x), while the prefix-free complexity is the minimal description length of any program encoded in a prefix-free code, and is denoted K(x). The plain complexity is more intuitive, but the prefix-free complexity is easier to study. By default, all equations hold only up to an additive constant. For example, f(x) = g(x) really means that f(x) = g(x) + O(1), that is, there exists a constant c such that |f(x) − g(x)| ≤ c for all x. Let U be a computable function mapping finite binary strings to binary strings. It is a universal function if, and only if, for any computable f, we can encode the function in a "program" s_f, such that U(s_f x) = f(x) for all x. We can think of U as a program interpreter, which takes in an initial segment describing the program, followed by data that the program should process. One problem with plain complexity is that C(xy) is not bounded above by C(x) + C(y) plus a constant, because intuitively speaking, there is no general way to tell where to divide an output string just by looking at the concatenated string. We can divide it by specifying the length of x or y, but that would take O(min(log |x|, log |y|)) extra symbols. Indeed, for any constant c there exist strings x and y such that C(xy) ≥ C(x) + C(y) + c. Typically, inequalities with plain complexity have a logarithmic term like O(min(log |x|, log |y|)) on one side, whereas the same inequalities with prefix-free complexity have only O(1). The main problem with plain complexity is that there is something extra sneaked into a program. A program not only represents something with its code, but also represents its own length. In particular, a program of length n implicitly encodes the number n simply by its own length, information that would otherwise cost about log2 n extra bits. Stated in another way, it is as if we are using a termination symbol to denote where a word ends, and so we are not using 2 symbols, but 3. To fix this defect, we introduce the prefix-free Kolmogorov complexity. Prefix-free Kolmogorov complexity K A prefix-free code is a subset of the finite binary strings such that given any two different words in the set, neither is a prefix of the other. The benefit of a prefix-free code is that we can build a machine that reads words from the code forward in one direction, and as soon as it reads the last symbol of the word, it knows that the word is finished, and does not need to backtrack or use a termination symbol. Define a prefix-free Turing machine to be a Turing machine that comes with a prefix-free code, such that the Turing machine can read any string from the code in one direction, and stop reading as soon as it reads the last symbol. Afterwards, it may compute on a work tape and write to a write tape, but it cannot move its read-head anymore. This gives us the following formal way to describe K.
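To make the prefix-free idea concrete, here is a small illustrative Python sketch of a self-delimiting code. The particular format (double each bit of the payload's length, terminate the length field with "01", then append the payload) mirrors the digit-doubling length code used later in this article, but the exact format is an assumption of this sketch, not the article's fixed code:

def encode(payload: str) -> str:
    length_bits = format(len(payload), "b")
    doubled = "".join(bit * 2 for bit in length_bits)  # e.g. 100 -> 110000
    return doubled + "01" + payload                    # "01" terminates the length field

def decode_one(stream: str) -> tuple[str, str]:
    """Read exactly one codeword from the front of the stream; return (payload, rest)."""
    i, length_bits = 0, ""
    while stream[i:i + 2] != "01":                     # pairs of equal bits carry the length
        length_bits += stream[i]
        i += 2
    n = int(length_bits, 2)
    start = i + 2
    return stream[start:start + n], stream[start + n:]

stream = encode("1101") + encode("00") + encode("111")
words = []
while stream:
    word, stream = decode_one(stream)
    words.append(word)
print(words)   # ['1101', '00', '111'] -- no separators or backtracking were needed

Because each codeword announces its own end, words can be read left to right from a single concatenated stream, which is exactly the property the prefix-free Turing machine relies on.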
Fix a prefix-free universal Turing machine, with three tapes: a read tape infinite in one direction, a work tape infinite in two directions, and a write tape infinite in one direction. The machine can read from the read tape in one direction only (no backtracking), and write to the write tape in one direction only. It can read and write the work tape in both directions. The work tape and write tape start with all zeros. The read tape starts with an input prefix code, followed by all zeros. Let S be the prefix-free code on binary strings used by the universal Turing machine U. Note that some universal Turing machines may not be programmable with prefix codes. We must pick only a prefix-free universal Turing machine. The prefix-free complexity of a string x is the length of the shortest codeword p in S that makes the machine output x: K(x) = min { |p| : p ∈ S, U(p) = x }. Invariance theorem Informal treatment There are some description languages which are optimal, in the following sense: given any description of an object in a description language, said description may be used in the optimal description language with a constant overhead. The constant depends only on the languages involved, not on the description of the object, nor the object being described. Here is an example of an optimal description language. A description will have two parts: The first part describes another description language. The second part is a description of the object in that language. In more technical terms, the first part of a description is a computer program (specifically: a compiler for the object's language, written in the description language), with the second part being the input to that computer program which produces the object as output. The invariance theorem follows: Given any description language L, the optimal description language is at least as efficient as L, with some constant overhead. Proof: Any description D in L can be converted into a description in the optimal language by first describing L as a computer program P (part 1), and then using the original description D as input to that program (part 2). The total length of this new description D′ is (approximately): |D′| = |P| + |D| The length of P is a constant that doesn't depend on D. So, there is at most a constant overhead, regardless of the object described. Therefore, the optimal language is universal up to this additive constant. A more formal treatment Theorem: If K1 and K2 are the complexity functions relative to Turing complete description languages L1 and L2, then there is a constant c – which depends only on the languages L1 and L2 chosen – such that ∀s. −c ≤ K1(s) − K2(s) ≤ c. Proof: By symmetry, it suffices to prove that there is some constant c such that for all strings s, K1(s) ≤ K2(s) + c. Now, suppose there is a program in the language L1 which acts as an interpreter for L2: function InterpretLanguage(string p) where p is a program in L2. The interpreter is characterized by the following property: Running InterpretLanguage on input p returns the result of running p. Thus, if P is a program in L2 which is a minimal description of s, then InterpretLanguage(P) returns the string s. The length of this description of s is the sum of: The length of the program InterpretLanguage, which we can take to be the constant c. The length of P, which by definition is K2(s). This proves the desired upper bound. History and context Algorithmic information theory is the area of computer science that studies Kolmogorov complexity and other complexity measures on strings (or other data structures).
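The two-part construction behind the invariance theorem can be illustrated with a short Python sketch. Everything here is an assumption of the example: the toy "repeat language" L2, its "token*count" format, and the helper names are inventions for illustration, not the article's languages. A description in the combined language consists of a fixed interpreter for L2 plus an L2 description, so the overhead over the L2 description is the constant size of the interpreter:

INTERPRETER_L2 = '''
def run_L2(desc):
    # L2 descriptions look like "token*count", e.g. "ab*16"
    token, count = desc.split("*")
    return token * int(count)
'''

def describe_in_optimal_language(l2_description: str) -> str:
    # part 1: the interpreter program; part 2: the data it should process
    return INTERPRETER_L2 + "\n" + l2_description

d = "ab*16"                                   # description of the string in L2
d_prime = describe_in_optimal_language(d)

overhead = len(d_prime) - len(d)              # constant: does not depend on d
print("overhead in characters:", overhead)

# executing the two-part description reproduces the object
namespace = {}
exec(INTERPRETER_L2, namespace)
print(namespace["run_L2"](d) == "ab" * 16)    # True

Whatever L2 description d is chosen, the penalty for going through the combined language is the same fixed interpreter length, which is the constant overhead the theorem refers to.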
The concept and theory of Kolmogorov complexity is based on a crucial theorem first discovered by Ray Solomonoff, who published it in 1960, describing it in "A Preliminary Report on a General Theory of Inductive Inference" as part of his invention of algorithmic probability. He gave a more complete description in his 1964 publications, "A Formal Theory of Inductive Inference," Part 1 and Part 2 in Information and Control. Andrey Kolmogorov later independently published this theorem in Problems Inform. Transmission in 1965. Gregory Chaitin also presents this theorem in J. ACM – Chaitin's paper was submitted October 1966 and revised in December 1968, and cites both Solomonoff's and Kolmogorov's papers. The theorem says that, among algorithms that decode strings from their descriptions (codes), there exists an optimal one. This algorithm, for all strings, allows codes as short as allowed by any other algorithm up to an additive constant that depends on the algorithms, but not on the strings themselves. Solomonoff used this algorithm and the code lengths it allows to define a "universal probability" of a string on which inductive inference of the subsequent digits of the string can be based. Kolmogorov used this theorem to define several functions of strings, including complexity, randomness, and information. When Kolmogorov became aware of Solomonoff's work, he acknowledged Solomonoff's priority. For several years, Solomonoff's work was better known in the Soviet Union than in the Western World. The general consensus in the scientific community, however, was to associate this type of complexity with Kolmogorov, who was concerned with randomness of a sequence, while Algorithmic Probability became associated with Solomonoff, who focused on prediction using his invention of the universal prior probability distribution. The broader area encompassing descriptional complexity and probability is often called Kolmogorov complexity. The computer scientist Ming Li considers this an example of the Matthew effect: "...to everyone who has, more will be given..." There are several other variants of Kolmogorov complexity or algorithmic information. The most widely used one is based on self-delimiting programs, and is mainly due to Leonid Levin (1974). An axiomatic approach to Kolmogorov complexity based on Blum axioms (Blum 1967) was introduced by Mark Burgin in a paper presented for publication by Andrey Kolmogorov. In the late 1990s and early 2000s, methods developed to approximate Kolmogorov complexity relied on popular compression algorithms like LZW, which made it difficult or impossible to provide any estimate for short strings, until a method based on algorithmic probability was introduced, offering the only alternative to compression-based methods. Basic results We write K(x, y) for K((x, y)), where (x, y) means some fixed way to code for a tuple of strings x and y. Inequalities We omit additive factors of O(1). Theorem. C(x) ≤ K(x), and K(x) ≤ C(x) + 2 log2 C(x). Proof. Take any program for the universal Turing machine used to define plain complexity, and convert it to a prefix-free program by first coding the length of the program in binary, then converting the length to prefix-free coding. For example, suppose the program has length 9; since 9 is 1001 in binary, we double each digit of this length (giving 11000011), then add a termination code, and prepend the result to the program itself. The prefix-free universal Turing machine can then read in any program for the other machine as follows: the first part programs the machine to simulate the other machine, and is a constant overhead O(1).
The second part (the prefix-free encoding of the original program's length) has length about 2 log2 C(x). The third part (the original program itself) has length C(x). Theorem: There exists a constant c such that for all x, C(x) ≤ |x| + c. More succinctly, C(x) ≤ |x|. Similarly, K(x) ≤ |x| + 2 log2 |x|, and K(x | |x|) ≤ |x|. Proof. For the plain complexity, just write a program that simply copies the input to the output. For the prefix-free complexity, we need to first describe the length of the string, before writing out the string itself. Theorem. (extra information bounds, subadditivity) C(x | y) ≤ C(x) ≤ C(x, y), and K(x, y) ≤ K(x) + K(y | x) ≤ K(x) + K(y). Note that there is no simple way to compare the complexity of a concatenation with the complexities of its parts in the other direction: there are strings such that the whole string is easy to describe, but its substrings are very hard to describe. Theorem. (symmetry of information) K(x, y) = K(x) + K(y | x, K(x)) = K(y) + K(x | y, K(y)). Proof. One side is simple. For the other side, we need to use a counting argument. Theorem. (information non-increase) For any computable function f, we have K(f(x)) ≤ K(x) + K(f). Proof. Program the Turing machine to read two subsequent programs, one describing the function and one describing the string. Then run both programs on the work tape to produce f(x), and write it out. Uncomputability of Kolmogorov complexity A naive attempt at a program to compute K At first glance it might seem trivial to write a program which can compute K(s) for any s, such as the following: function KolmogorovComplexity(string s) for i = 1 to infinity: for each string p of length exactly i if isValidProgram(p) and evaluate(p) == s return i This program iterates through all possible programs (by iterating through all possible strings and only considering those which are valid programs), starting with the shortest. Each program is executed to find the result produced by that program, comparing it to the input s. If the result matches then the length of the program is returned. However this will not work because some of the programs p tested will not terminate, e.g. if they contain infinite loops. There is no way to avoid all of these programs by testing them in some way before executing them due to the non-computability of the halting problem. What is more, no program at all can compute the function K, be it ever so sophisticated. This is proven in the following. Formal proof of uncomputability of K Theorem: There exist strings of arbitrarily large Kolmogorov complexity. Formally: for each natural number n, there is a string s with K(s) ≥ n. Proof: Otherwise all of the infinitely many possible finite strings could be generated by the finitely many programs with a complexity below n bits. Theorem: K is not a computable function. In other words, there is no program which takes any string s as input and produces the integer K(s) as output. The following proof by contradiction uses a simple Pascal-like language to denote programs; for sake of proof simplicity assume its description (i.e. an interpreter) to have a fixed length of m bits. Assume for contradiction there is a program function KolmogorovComplexity(string s) which takes as input a string s and returns K(s). All programs are of finite length so, for sake of proof simplicity, assume it to be k bits. Now, consider the following program, whose length is m + k bits plus a small constant for the search loop: function GenerateComplexString() for i = 1 to infinity: for each string s of length exactly i if KolmogorovComplexity(s) ≥ 8000000000 return s Using KolmogorovComplexity as a subroutine, the program tries every string, starting with the shortest, until it returns a string with Kolmogorov complexity at least 8000000000 bits, i.e. a string that cannot be produced by any program shorter than 8000000000 bits. However, the overall length of the above program that produced s is only m + k bits plus a small constant — far less than 8000000000 for the assumed sizes — which is a contradiction.
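The failure mode of the naive search can be seen directly in a small experiment. The sketch below is entirely illustrative: the four-command toy language and all names are inventions of this example, not anything from the article. It brute-forces the shortest toy-language program that prints a target string; because some programs never halt, it imposes a step cap, and therefore it only ever computes an upper bound rather than the true minimum.

from itertools import product

# Toy, obviously non-universal language: 'A' appends "a", 'B' appends "b",
# 'D' doubles the output so far, and 'L' loops forever.
def run(program: str, max_steps: int = 1000) -> str | None:
    out, steps, i = "", 0, 0
    while i < len(program):
        steps += 1
        if steps > max_steps:          # give up: possibly a non-halting program
            return None
        cmd = program[i]
        if cmd == "A":
            out += "a"
        elif cmd == "B":
            out += "b"
        elif cmd == "D":
            out += out
        elif cmd == "L":
            continue                   # never advances: an infinite loop
        i += 1
    return out

def complexity_upper_bound(target: str, max_len: int = 8) -> int | None:
    """Brute-force search, shortest programs first (cf. the naive pseudocode).
    The step cap makes this only an upper bound: a slow or non-halting program
    that would eventually produce `target` may be skipped."""
    for length in range(1, max_len + 1):
        for cand in product("ABDL", repeat=length):
            if run("".join(cand)) == target:
                return length
    return None

print(complexity_upper_bound("abababab"))   # 4, e.g. the program "ABDD"
print(complexity_upper_bound("aab"))        # 3, the program "AAB"

In a universal language the same loop would have to execute arbitrary programs, and no preliminary test can weed out the non-halting ones, which is exactly where the halting problem enters.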
(If the code of KolmogorovComplexity is shorter, the contradiction remains. If it is longer, the constant used in GenerateComplexString can always be changed appropriately.) The above proof uses a contradiction similar to that of the Berry paradox: "The smallest positive integer that cannot be defined in fewer than twenty English words". It is also possible to show the non-computability of K by reduction from the non-computability of the halting problem H, since K and H are Turing-equivalent. There is a corollary, humorously called the "full employment theorem" in the programming language community, stating that there is no perfect size-optimizing compiler. Chain rule for Kolmogorov complexity The chain rule for Kolmogorov complexity states that K(X,Y) = K(X) + K(Y|X) + O(log K(X,Y)). It states that the shortest program that reproduces X and Y is no more than a logarithmic term larger than a program to reproduce X and a program to reproduce Y given X. Using this statement, one can define an analogue of mutual information for Kolmogorov complexity. Compression It is straightforward to compute upper bounds for K(s) – simply compress the string s with some method, implement the corresponding decompressor in the chosen language, concatenate the decompressor to the compressed string, and measure the length of the resulting string – concretely, the size of a self-extracting archive in the given language. A string s is compressible by a number c if it has a description whose length does not exceed |s| − c bits. This is equivalent to saying that K(s) ≤ |s| − c. Otherwise, s is incompressible by c. A string incompressible by 1 is said to be simply incompressible – by the pigeonhole principle, which applies because every compressed string maps to only one uncompressed string, incompressible strings must exist, since there are 2^n bit strings of length n, but only 2^n − 1 shorter strings, that is, strings of length less than n (i.e. with length 0, 1, ..., n − 1). For the same reason, most strings are complex in the sense that they cannot be significantly compressed – their K(s) is not much smaller than |s|, the length of s in bits. To make this precise, fix a value of n. There are 2^n bitstrings of length n. The uniform probability distribution on the space of these bitstrings assigns exactly equal weight 2^(−n) to each string of length n. Theorem: With the uniform probability distribution on the space of bitstrings of length n, the probability that a string is incompressible by c is at least 1 − 2^(−c+1) + 2^(−n). To prove the theorem, note that the number of descriptions of length not exceeding n − c is given by the geometric series: 1 + 2 + 2^2 + ... + 2^(n−c) = 2^(n−c+1) − 1. There remain at least 2^n − 2^(n−c+1) + 1 bitstrings of length n that are incompressible by c. To determine the probability, divide by 2^n. Chaitin's incompleteness theorem By the above theorem, most strings are complex in the sense that they cannot be described in any significantly "compressed" way. However, it turns out that the fact that a specific string is complex cannot be formally proven, if the complexity of the string is above a certain threshold. The precise formalization is as follows. First, fix a particular axiomatic system S for the natural numbers. The axiomatic system has to be powerful enough so that, to certain assertions A about complexity of strings, one can associate a formula FA in S.
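The compression-based upper bound is easy to try in practice. The sketch below is an assumption-laden illustration (zlib is one arbitrary choice of compressor, and the accounting ignores the constant-size decompressor that would complete a self-extracting description); it also shows the incompressibility theorem at work, since a random string essentially refuses to shrink:

import os
import zlib

regular = b"ab" * 5000               # highly regular: a tiny description exists
random_looking = os.urandom(10000)   # incompressible with overwhelming probability

for name, s in [("regular", regular), ("random", random_looking)]:
    upper_bound = len(zlib.compress(s, 9))   # plus a constant for a decompressor
    print(f"{name}: |s| = {len(s)} bytes, compressed upper bound ≈ {upper_bound} bytes")

The regular string compresses to a small fraction of its length, giving a useful upper bound on K(s); the random one typically compresses to no less than its original size, exactly as the counting argument predicts for most strings.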
This association must have the following property: If FA is provable from the axioms of S, then the corresponding assertion A must be true. This "formalization" can be achieved based on a Gödel numbering. Theorem: There exists a constant L (which only depends on S and on the choice of description language) such that there does not exist a string s for which the statement K(s) ≥ L (as formalized in S) can be proven within S. Proof Idea: The proof of this result is modeled on a self-referential construction used in Berry's paradox. We first obtain a program which enumerates the proofs within S, and we specify a procedure P which takes as an input an integer L and prints the strings x which occur within proofs within S of the statement K(x) ≥ L. By then setting L to be greater than the length of this procedure P, we obtain a contradiction: the statement K(x) ≥ L asserts that x cannot be printed by any program shorter than L, and yet x is printed by running the procedure P on input L, a description whose total length (the length of P plus the few bits needed to specify L) is less than L once L is large enough. So it is not possible for the proof system S to prove K(x) ≥ L for L arbitrarily large; in particular, not for L larger than the length of the procedure P (which is finite). Proof: We can find an effective enumeration of all the formal proofs in S by some procedure function NthProof(int n) which takes as input n and outputs some proof. This function enumerates all proofs. Some of these are proofs for formulas we do not care about here, since every possible proof in the language of S is produced for some n. Some of these are complexity formulas of the form K(s) ≥ n where s and n are constants in the language of S. There is a procedure function NthProofProvesComplexityFormula(int n) which determines whether the nth proof actually proves a complexity formula K(s) ≥ L. The strings s, and the integer L in turn, are computable by the procedures: function StringNthProof(int n) function ComplexityLowerBoundNthProof(int n) Consider the following procedure: function GenerateProvablyComplexString(int n) for i = 1 to infinity: if NthProofProvesComplexityFormula(i) and ComplexityLowerBoundNthProof(i) ≥ n return StringNthProof(i) Given an n, this procedure tries every proof until it finds a string and a proof in the formal system S of the formula K(s) ≥ L for some L ≥ n; if no such proof exists, it loops forever. Finally, consider the program consisting of all these procedure definitions, and a main call: GenerateProvablyComplexString(n0) where the constant n0 will be determined later on. The overall program length can be expressed as U+log2(n0), where U is some constant and log2(n0) represents the length of the integer value n0, under the reasonable assumption that it is encoded in binary digits. We will choose n0 to be greater than the program length, that is, such that n0 > U+log2(n0). This is clearly true for n0 sufficiently large, because the left hand side grows linearly in n0 whilst the right hand side grows logarithmically in n0 up to the fixed constant U. Then no proof of the form "K(s)≥L" with L≥n0 can be obtained in S, as can be seen by an indirect argument: If ComplexityLowerBoundNthProof(i) could return a value ≥n0, then the loop inside GenerateProvablyComplexString would eventually terminate, and that procedure would return a string s with K(s) ≥ n0 (the proof found in S is of a true statement), while at the same time s is produced by the above program of length U+log2(n0) < n0, so that K(s) < n0. This is a contradiction, Q.E.D. As a consequence, the above program, with the chosen value of n0, must loop forever. Similar ideas are used to prove the properties of Chaitin's constant.
Minimum message length The minimum message length principle of statistical and inductive inference and machine learning was developed by C.S. Wallace and D.M. Boulton in 1968. MML is Bayesian (i.e. it incorporates prior beliefs) and information-theoretic. It has the desirable properties of statistical invariance (i.e. the inference transforms with a re-parametrisation, such as from polar coordinates to Cartesian coordinates), statistical consistency (i.e. even for very hard problems, MML will converge to the underlying model) and efficiency (i.e. the MML model will converge to the true underlying model about as quickly as is possible). C.S. Wallace and D.L. Dowe (1999) showed a formal connection between MML and algorithmic information theory (or Kolmogorov complexity). Kolmogorov randomness Kolmogorov randomness defines a string (usually of bits) as being random if and only if every computer program that can produce that string is at least as long as the string itself. To make this precise, a universal computer (or universal Turing machine) must be specified, so that "program" means a program for this universal machine. A random string in this sense is "incompressible" in that it is impossible to "compress" the string into a program that is shorter than the string itself. For every universal computer, there is at least one algorithmically random string of each length. Whether a particular string is random, however, depends on the specific universal computer that is chosen. This is because a universal computer can have a particular string hard-coded in itself, and a program running on this universal computer can then simply refer to this hard-coded string using a short sequence of bits (i.e. much shorter than the string itself). This definition can be extended to define a notion of randomness for infinite sequences from a finite alphabet. These algorithmically random sequences can be defined in three equivalent ways. One way uses an effective analogue of measure theory; another uses effective martingales. The third way defines an infinite sequence to be random if the prefix-free Kolmogorov complexity of its initial segments grows quickly enough — there must be a constant c such that the complexity of an initial segment of length n is always at least n−c. This definition, unlike the definition of randomness for a finite string, is not affected by which universal machine is used to define prefix-free Kolmogorov complexity. Relation to entropy For dynamical systems, entropy rate and algorithmic complexity of the trajectories are related by a theorem of Brudno: for almost every trajectory, the algorithmic complexity rate equals the entropy rate of the system. It can be shown that for the output of Markov information sources, Kolmogorov complexity is related to the entropy of the information source. More precisely, the Kolmogorov complexity of the output of a Markov information source, normalized by the length of the output, converges almost surely (as the length of the output goes to infinity) to the entropy of the source. Theorem. (Theorem 14.2.5) The conditional Kolmogorov complexity of a binary string x1...xn of length n containing k ones satisfies K(x1...xn | n) ≤ n·H(k/n) + O(log n), where H is the binary entropy function (not to be confused with the entropy rate). Halting problem The Kolmogorov complexity function is equivalent to deciding the halting problem. If we have a halting oracle, then the Kolmogorov complexity of a string can be computed by simply trying every halting program, in lexicographic order, until one of them outputs the string. The other direction is much more involved.
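The connection between complexity and entropy can be probed empirically. The sketch below is illustrative only: it uses zlib as a crude, computable stand-in for the true (uncomputable) K, and an i.i.d. Bernoulli source rather than a general Markov source; both choices are assumptions of this example. For long outputs, the compressed size per symbol comes reasonably close to, and essentially cannot beat, the source entropy H(p):

import math
import random
import zlib

def binary_entropy(p: float) -> float:
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

random.seed(0)
n, p = 200_000, 0.1
bits = bytes(1 if random.random() < p else 0 for _ in range(n))  # one symbol per byte

# pack 8 symbols per byte so the compressor sees the raw bit stream
packed = bytes(sum(bits[i + j] << j for j in range(8)) for i in range(0, n, 8))
compressed_bits_per_symbol = 8 * len(zlib.compress(packed, 9)) / n
print(f"entropy H({p})  = {binary_entropy(p):.3f} bits/symbol")
print(f"zlib estimate ≈ {compressed_bits_per_symbol:.3f} bits/symbol")

Repeating the experiment with p = 0.5 shows the other extreme: the stream is already at one bit per symbol of entropy, and no compressor can do better.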
It shows that given a Kolmogorov complexity function, we can construct a function f such that f(n) ≥ BB(n) for all large n, where BB is the Busy Beaver shift function. By modifying the function at the finitely many lower values of n, we get an upper bound on BB, which solves the halting problem. Consider this program, which computes f: it takes n as input and uses K. List all strings of length at most 2n. For each such string x, enumerate all (prefix-free) programs of length K(x) until one of them outputs x. Record its runtime. Output the largest runtime recorded; this is f(n). We prove by contradiction that f(n) ≥ BB(n) for all large n. Suppose, then, that f(n) < BB(n) for some large n. Let p be a Busy Beaver of length n. Consider this (prefix-free) program, which takes no input: Run the program p, and record its runtime BB(n). Generate all programs with length at most 2n. Run every one of them for up to BB(n) steps. Note the outputs of those that have halted. Output the string with the lowest lexicographic order that has not been output by any of those. Let the string output by this program be x0. The program has length about n + 2 log2 n plus a constant, where n comes from the length of the Busy Beaver p, about 2 log2 n comes from using the (prefix-free) Elias delta code for the number n, and the constant comes from the rest of the program. Therefore, K(x0) ≤ n + 2 log2 n + O(1) ≤ 2n for all big n. Further, since there are only so many possible programs with length at most 2n, we have |x0| ≤ 2n by the pigeonhole principle. By assumption, f(n) < BB(n), so every string of length at most 2n has a minimal program with runtime less than BB(n). Thus, the string x0 has a minimal program with runtime less than BB(n). Further, that program has length at most 2n. This contradicts how x0 was constructed. Universal probability Fix a universal Turing machine U, the same one used to define the (prefix-free) Kolmogorov complexity. Define the (prefix-free) universal probability of a string x to be P(x) = Σ 2^(−|p|), where the sum ranges over all programs p with U(p) = x. In other words, it is the probability that, given a uniformly random binary stream as input, the universal Turing machine would halt after reading a certain prefix of the stream, and output x. Note: U(p) = x does not mean that the input stream is p, but that the universal Turing machine would halt at some point after reading the initial segment p, without reading any further input, and that, when it halts, it has written x to the output tape. Theorem. (Theorem 14.11.1) K(x) = −log2 P(x) + O(1); that is, a string's universal probability is essentially 2^(−K(x)). Conditional versions The conditional Kolmogorov complexity K(x | y) of two strings x and y is, roughly speaking, defined as the Kolmogorov complexity of x given y as an auxiliary input to the procedure. There is also a length-conditional complexity K(x | |x|), which is the complexity of x given the length of x as known/input. Time-bounded complexity Time-bounded Kolmogorov complexity is a modified version of Kolmogorov complexity where the space of programs to be searched for a solution is confined to only programs that can run within some pre-defined number of steps. It is hypothesised that the possibility of the existence of an efficient algorithm for determining approximate time-bounded Kolmogorov complexity is related to the question of whether true one-way functions exist.
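The intuition behind conditional complexity — describing x is cheaper when a related string y is already available — can be illustrated with a compressor that accepts a preset dictionary. The sketch below is an assumption of this example (zlib's preset-dictionary feature as a stand-in for "y given as auxiliary input", and made-up strings), not a formal definition of K(x | y):

import zlib

y = (b"Kolmogorov complexity measures the length of the shortest program that "
     b"outputs a given string. It was introduced independently by Solomonoff, "
     b"Kolmogorov and Chaitin, and it underlies algorithmic information theory, "
     b"universal inductive inference and algorithmic randomness.")
x = y.replace(b"given string", b"given text")   # x is a small edit of y

def compressed_len(data: bytes, dictionary: bytes | None = None) -> int:
    if dictionary is None:
        return len(zlib.compress(data, 9))
    c = zlib.compressobj(9, zlib.DEFLATED, 15, 9, zlib.Z_DEFAULT_STRATEGY, dictionary)
    return len(c.compress(data) + c.flush())

print("upper bound on K(x):    ", compressed_len(x))
print("upper bound on K(x | y):", compressed_len(x, dictionary=y))  # typically much smaller

With y supplied as side information the compressor can copy long matches from it, so the conditional description of x shrinks dramatically, mirroring the inequality K(x | y) ≤ K(x) up to a constant.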
Mathematics
Complexity theory
null
1650
https://en.wikipedia.org/wiki/Aloe
Aloe
Aloe (; also written Aloë) is a genus containing over 650 species of flowering succulent plants. The most widely known species is Aloe vera, or "true aloe". It is called this because it is cultivated as the standard source for assorted pharmaceutical purposes. Other species, such as Aloe ferox, are also cultivated or harvested from the wild for similar applications. The APG IV system (2016) places the genus in the family Asphodelaceae, subfamily Asphodeloideae. Within the subfamily it may be placed in the tribe Aloeae. In the past, it has been assigned to the family Aloaceae (now included in the Asphodeloidae) or to a broadly circumscribed family Liliaceae (the lily family). The plant Agave americana, which is sometimes called "American aloe", belongs to the Asparagaceae, a different family. The genus is native to tropical and southern Africa, Madagascar, Jordan, the Arabian Peninsula, and various islands in the Indian Ocean (Mauritius, Réunion, Comoros, etc.). A few species have also become naturalized in other regions (Mediterranean, India, Australia, North and South America, Hawaiian Islands, etc.). Etymology The genus name Aloe is derived from the Arabic word alloeh, meaning "bitter and shiny substance" or from Hebrew ahalim, plural of ahal. Description Most Aloe species have a rosette of large, thick, fleshy leaves. Aloe flowers are tubular, frequently yellow, orange, pink, or red, and are borne, densely clustered and pendant, at the apex of simple or branched, leafless stems. Many species of Aloe appear to be stemless, with the rosette growing directly at ground level; other varieties may have a branched or unbranched stem from which the fleshy leaves spring. They vary in color from grey to bright-green and are sometimes striped or mottled. Some aloes native to South Africa are tree-like (arborescent). Systematics The APG IV system (2016) places the genus in the family Asphodelaceae, subfamily Asphodeloideae. In the past it has also been assigned to the families Liliaceae and Aloeaceae, as well as the family Asphodelaceae sensu stricto, before this was merged into the Asphodelaceae sensu lato. The circumscription of the genus has varied widely. Many genera, such as Lomatophyllum, have been brought into synonymy. Species at one time placed in Aloe, such as Agave americana, have been moved to other genera. Molecular phylogenetic studies, particularly from 2010 onwards, suggested that as then circumscribed, Aloe was not monophyletic and should be divided into more tightly defined genera. In 2014, John Charles Manning and coworkers produced a phylogeny in which Aloe was divided into six genera: Aloidendron, Kumara, Aloiampelos, Aloe, Aristaloe and Gonialoe. Species Over 600 species are accepted in the genus Aloe, plus even more synonyms and unresolved species, subspecies, varieties, and hybrids. Some of the accepted species are: Aloe aculeata Pole-Evans Aloe africana Mill. Aloe albida (Stapf) Reynolds Aloe albiflora Guillaumin Aloe arborescens Mill. Aloe arenicola Reynolds Aloe argenticauda Merxm. & Giess Aloe bakeri Scott-Elliot Aloe ballii Reynolds Aloe ballyi Reynolds Aloe brevifolia Mill. Aloe broomii Schönland Aloe buettneri A.Berger Aloe camperi Schweinf. Aloe capitata Baker Aloe comosa Marloth & A.Berger Aloe cooperi Baker Aloe corallina Verd. Aloe dewinteri Giess ex Borman & Hardy Aloe erinacea D.S.Hardy Aloe excelsa A.Berger Aloe ferox Mill. Aloe forbesii Balf.f. Aloe helenae Danguy Aloe hereroensis Engl. Aloe inermis Forssk. 
Aloe inyangensis Christian Aloe jawiyon S.J.Christie, D.P.Hannon & Oakman ex A.G.Mill. Aloe jucunda Reynolds Aloe khamiesensis Pillans Aloe kilifiensis Christian Aloe maculata All. Aloe marlothii A.Berger Aloe mubendiensis Christian Aloe namibensis Giess Aloe nyeriensis Christian & I.Verd. Aloe pearsonii Schönland Aloe peglerae Schönland Aloe perfoliata L. Aloe perryi Baker Aloe petricola Pole-Evans Aloe polyphylla Pillans Aloe rauhii Reynolds Aloe reynoldsii Letty Aloe scobinifolia Reynolds & Bally Aloe sinkatana Reynolds Aloe squarrosa Baker ex Balf.f. Aloe striata Haw. Aloe succotrina Lam. Aloe suzannae Decary Aloe thraskii Baker Aloe vera (L.) Burm.f. Aloe viridiflora Reynolds Aloe wildii (Reynolds) Reynolds In addition to the species and hybrids between species within the genus, several hybrids with other genera have been created in cultivation, such as between Aloe and Gasteria (× Gasteraloe), and between Aloe and Astroloba (×Aloloba). Uses Aloe species are frequently cultivated as ornamental plants both in gardens and in pots. Many aloe species are highly decorative and are valued by collectors of succulents. Aloe vera is used both internally and externally on humans as folk or alternative medicine. The Aloe species is known for its medicinal and cosmetic properties. Around 75% of Aloe species are used locally for medicinal uses. The plants can also be made into types of special soaps or used in other skin care products (see natural skin care). Numerous cultivars with mixed or uncertain parentage are grown. Of these, Aloe 'Lizard Lips' has gained the Royal Horticultural Society's Award of Garden Merit. Aloe variegata has been planted on graves in the belief that this ensures eternal life. Historical uses Historical use of various aloe species is well documented. Documentation of the clinical effectiveness is available, although relatively limited. Of the 500+ species, only a few were used traditionally as herbal medicines, Aloe vera again being the most commonly used species. Also included are A. perryi and A. ferox. The Ancient Greeks and Romans used Aloe vera to treat wounds. In the Middle Ages, the yellowish liquid found inside the leaves was favored as a purgative. Unprocessed aloe that contains aloin is generally used as a laxative, whereas processed juice does not usually contain significant aloin. According to Cancer Research UK, a potentially deadly product called T-UP is made of concentrated aloe, and promoted as a cancer cure. They say "there is currently no evidence that aloe products can help to prevent or treat cancer in humans". Aloin in OTC laxative products On May 9, 2002, the US Food and Drug Administration issued a final rule banning the use of aloin, the yellow sap of the aloe plant, for use as a laxative ingredient in over-the-counter drug products. Most aloe juices today do not contain significant aloin. Chemical properties According to W. A. Shenstone, two classes of aloins are recognized: (1) nataloins, which yield picric and oxalic acids with nitric acid, and do not give a red coloration with nitric acid; and (2) barbaloins, which yield aloetic acid (C7H2N3O5), chrysammic acid (C7H2N2O6), picric and oxalic acids with nitric acid, being reddened by the acid. This second group may be divided into a-barbaloins, obtained from Barbados Aloe, and reddened in the cold, and b-barbaloins, obtained from Aloe Socotrina and Zanzibar Aloe, reddened by ordinary nitric acid only when warmed or by fuming acid in the cold. 
Nataloin (2C17H13O7·H2O) forms bright-yellow scales, barbaloin (C17H18O7) prismatic crystals. Aloe species are used in essential oils as a safety measure to dilute the solution before they are applied to the skin. Flavoring Aloe perryi, A. barbadensis, A. ferox, and hybrids of this species with A. africana and A. spicata are listed as natural flavoring substances in the US government Electronic Code of Federal Regulations. Aloe socotrina is said to be used in yellow Chartreuse. Gallery
Biology and health sciences
Monocots
null
1680
https://en.wikipedia.org/wiki/Amaryllis
Amaryllis
Amaryllis is the only genus in the subtribe Amaryllidinae (tribe Amaryllideae). It is a small genus of flowering bulbs, with two species. The better known of the two, Amaryllis belladonna, is a native of the Western Cape region of South Africa, particularly the rocky southwest area between the Olifants River Valley and Knysna. For many years there was confusion among botanists over the generic names Amaryllis and Hippeastrum, one result of which is that the common name 'amaryllis' is mainly used for cultivars of the genus Hippeastrum, widely sold in the winter months for their ability to bloom indoors. Plants of the genus Amaryllis are known as belladonna lily, Jersey lily, naked lady, amarillo, Easter lily in Southern Australia or, in South Africa, March lily due to their propensity to flower around March. This is one of numerous genera with the common name 'lily' due to their flower shape and growth habit. However, they are only distantly related to the true lily, Lilium. In the Victorian language of flowers, amaryllis means "love, beauty, and determination", and can also represent hope and achievement. Description Amaryllis is a bulbous plant, with each bulb being in diameter. It has several strap-shaped, hysteranthous, green leaves with midrib, long and broad, arranged in two rows. Each bulb produces one or two leafless, stout, persistent and erect stems tall, each of which bears at the top a cluster of two to twelve zygomorphic, funnel-shaped flowers without a tube. Each flower is in diameter with six spreading tepals (three outer sepals, three inner petals, with similar appearance to each other). The usual color is white with crimson veins, but pink or purple also occur naturally. Stamens are very shortly connate basally, declinate, unequal. Style is declinate, stigma is three-lobed. Ovules are approx. 8 per locule. Seeds are compressed-globose, white to pink. The number of chromosomes is 2n = 22. Taxonomy The single genus is in subtribe Amaryllidinae, in the tribe Amaryllideae. The taxonomy of the genus has been controversial. In 1753 Carl Linnaeus created the name Amaryllis belladonna, the type species of the genus Amaryllis. At the time both South African and South American plants were placed in the same genus; subsequently they were separated into two different genera. The key question is whether Linnaeus's type was a South African plant or a South American plant. If the latter, Amaryllis would be the correct name for the genus Hippeastrum, and a different name would have to be used for the genus discussed here. Alan W. Meerow et al. have briefly summarized the debate, which took place from 1938 onwards and involved botanists on both sides of the Atlantic. The outcome was a decision by the 14th International Botanical Congress in 1987 that Amaryllis should be a conserved name (i.e. correct regardless of priority) and ultimately based on a specimen of the South African Amaryllis belladonna from the Clifford Herbarium at the Natural History Museum in London. Species Amaryllis has two accepted species, both native to the Cape Provinces of South Africa: Amaryllis belladonna – south-west Cape Provinces; introduced into many parts of the world, including California, Great Britain, Australia and New Zealand Amaryllis paradisicola – west Cape Provinces Phylogeny Amaryllidinae are placed within Amaryllideae. Etymology The name Amaryllis is taken from a shepherdess in Virgil's pastoral Eclogues.
Although the 1987 decision settled the question of the scientific name of the genus, the common name 'amaryllis' continues to be used differently. Bulbs sold as amaryllis and described as "ready to bloom for the holidays" belong to the allied genus Hippeastrum. The common name "naked lady" comes from the plant's pattern of flowering when the foliage has died down. This name is also used for other bulbs with a similar growth and flowering pattern; some of these have their own widely used and accepted common names, such as the resurrection lily (Lycoris squamigera). Habitat In areas of its native habitat with mountainous fynbos, flowering tends to be suppressed until after bush fires as dense overhead vegetation prevents growth. In more open sandy areas of the Western Cape, the plant flowers annually. Plants tend to be very localized in dense concentrations due to the seeds' large size and heavy weight. Strong winds shake loose the seeds, which fall to ground and immediately start to germinate, aided by the first winter rains. Ecology The leaves are produced in the autumn or early spring in warm climates depending on the onset of rain and eventually die down by late spring. The bulb is then dormant until late summer. The plant is not frost-tolerant, nor does it do well in tropical environments since they require a dry resting period between leaf growth and flower spike production. One or two leafless stems arise from the bulb in the dry ground in late summer (March in its native habitat and August in USDA zone 7). The plant has a symbiotic relationship with carpenter bees. It is also visited by noctuid moths at night. The relative importance of these insects as pollinators has not yet been established; however, carpenter bees are thought to be the main pollinators of amaryllis on the Cape Peninsula. The plant's main parasite is the lily borer Brithys crini and/or Diaphone eumela. Cultivation Amaryllis belladonna was introduced into cultivation at the beginning of the eighteenth century. It reproduces slowly by either bulb division or seeds and has gradually naturalized from plantings in urban and suburban areas throughout the lower elevations and coastal areas in much of the West Coast of the US since these environments mimic their native South African habitat. Hardiness zones 6–8. It is also naturalized in Australia. There is an Amaryllis belladonna hybrid which was bred in the 1800s in Australia. No one knows the exact species it was crossed with to produce color variations of white, cream, peach, magenta and nearly red hues. The hybrids were crossed back onto the original Amaryllis belladonna and with each other to produce naturally seed-bearing crosses that come in a very wide range of flower sizes, shapes, stem heights and intensities of pink. Pure white varieties with bright green stems were bred as well. The hybrids are quite distinct in that the many shades of pink also have stripes, veining, darkened edges, white centers and light yellow centers, also setting them apart from the original light pink. In addition, the hybrids often produce flowers in a fuller circle rather than the "side-facing" habit of the "old-fashioned" pink. The hybrids are able to adapt to year-round watering and fertilization but can also tolerate completely dry summer conditions if need be. A. belladonna has gained the Royal Horticultural Society's Award of Garden Merit. Amaryllis belladonna has been crossed in cultivation with Crinum moorei to produce a hybrid called × Amarcrinum, which has named cultivars. 
Hybrids said to be between Amaryllis belladonna and Brunsvigia josephinae have been called × Amarygia. Neither hybrid genus name is accepted by the World Checklist of Selected Plant Families.
Biology and health sciences
Monocots
null
1776
https://en.wikipedia.org/wiki/Arthritis
Arthritis
Arthritis is a general medical term used to describe a disorder that affects joints. Symptoms generally include joint pain and stiffness. Other symptoms may include redness, warmth, swelling, and decreased range of motion of the affected joints. In certain types of arthritis, other organs such as the skin are also affected. Onset can be gradual or sudden. There are several types of arthritis. The most common forms are osteoarthritis (most commonly seen in weightbearing joints) and rheumatoid arthritis. Osteoarthritis usually occurs as an individual ages and often affects the hips, knees, shoulders, and fingers. Rheumatoid arthritis is an autoimmune disorder that often affects the hands and feet. Other types of arthritis include gout, lupus, and septic arthritis. These are inflammatory types of rheumatic disease. Early treatment for arthritis commonly includes resting the affected joint and conservative measures such as heating or icing. Weight loss and exercise may also be useful to reduce the force across a weightbearing joint. Medication intervention for symptoms depends on the form of arthritis; options may include anti-inflammatory medications such as ibuprofen and paracetamol (acetaminophen). With severe cases of arthritis, joint replacement surgery may be necessary. Osteoarthritis is the most common form of arthritis, affecting more than 3.8% of people, while rheumatoid arthritis is the second most common, affecting about 0.24% of people. In Australia about 15% of people are affected by arthritis, while in the United States more than 20% have a type of arthritis. Overall, arthritis becomes more common with age. Arthritis is a common reason people are unable to carry out their work and can result in decreased ability to complete activities of daily living. The term arthritis is derived from arthr- (meaning 'joint') and -itis (meaning 'inflammation'). Classification There are several diseases where joint pain is the most prominent symptom. Generally when a person has "arthritis" it means that they have one of the following diseases: Hemarthrosis Osteoarthritis Rheumatoid arthritis Gout and pseudo-gout Septic arthritis Ankylosing spondylitis Juvenile idiopathic arthritis Still's disease Psoriatic arthritis Joint pain can also be a symptom of other diseases. In this case, the person may not have arthritis and instead have one of the following diseases: Psoriasis Reactive arthritis Ehlers–Danlos syndrome Iron overload Hepatitis Lyme disease Sjögren's disease Hashimoto's thyroiditis Celiac disease Non-celiac gluten sensitivity Inflammatory bowel disease (including Crohn's disease and ulcerative colitis) Henoch–Schönlein purpura Hyperimmunoglobulinemia D with recurrent fever Sarcoidosis Whipple's disease TNF receptor associated periodic syndrome Granulomatosis with polyangiitis (and many other vasculitis syndromes) Familial Mediterranean fever Systemic lupus erythematosus An undifferentiated arthritis is an arthritis that does not fit into well-known clinical disease categories, possibly being an early stage of a definite rheumatic disease. Signs and symptoms Pain in varying severity is a common symptom in most types of arthritis. Other symptoms include swelling, joint stiffness, redness, and aching around the joint(s).
Arthritic disorders like lupus and rheumatoid arthritis can affect other organs in the body, leading to a variety of symptoms including: Inability to use the hand or walk Stiffness in one or more joints Rash or itch Malaise and fatigue Weight loss Poor sleep Muscle aches and pains Tenderness Difficulty moving the joint Causes Some common risk factors that can increase the chances of developing osteoarthritis include obesity, prior injury to the joint, type of joint, and muscle strength. The risk factors with the strongest association for developing inflammatory arthritis such as rheumatoid arthritis are the female sex, a family history of rheumatoid arthritis, age, obesity, previous joint damage from an injury, and exposure to tobacco smoke. Risk factors There are common risk factors that increase a person's chance of developing arthritis later in adulthood. Some of these are modifiable while others are not. Smoking has been linked to an increased susceptibility of developing arthritis, particularly rheumatoid arthritis. Diagnosis Diagnosis is made by clinical examination from an appropriate health professional, and may be supported by tests such as radiologic imaging and blood tests, depending on the type of suspected arthritis. Pain patterns may vary depending on the arthritis type and the location. Rheumatoid arthritis is generally worse in the morning and associated with stiffness lasting over 30 minutes. Important features of diagnosis are rate of onset, pattern of joint involvement, symmetry of symptoms, early morning stiffness, tenderness, locking of joint with inactivity, aggravating and relieving factors, and other systemic symptoms. Physical examination may include checking joints, evaluating gait, examination of skin for dermatological findings and symptoms of pulmonary inflammation. Physical examination may confirm the diagnosis or may indicate systemic disease. Chest radiographs are often used to follow progression or help assess severity. Screening blood tests for suspected arthritis include: rheumatoid factor, antinuclear factor (ANF), extractable nuclear antigen, and specific antibodies. Rheumatoid arthritis patients often have elevated erythrocyte sedimentation rate (ESR, also known as sed rate) or C-reactive protein (CRP) levels, which indicates the presence of an inflammatory process in the body. Anti-cyclic citrullinated peptide (anti-CCP) antibodies and rheumatoid factor (RF) are two more common blood tests when assessing for rheumatoid arthritis. Imaging tests like X-rays are commonly utilized to diagnose and monitor arthritis. Other imaging tests for rheumatoid arthritis that may be considered include computed tomography (CT) scanning, positron emission tomography (PET) scanning, bone scanning, and dual-energy X-ray absorptiometry (DEXA). Osteoarthritis Osteoarthritis (OA) is the most common form of arthritis. It affects humans and other animals, notably dogs, but also occurs in cats and horses. It can affect both the larger (ie. knee, hip, shoulder, etc.) and the smaller joints (ie. fingers, toes, foot, etc.) of the body. The disease is caused by daily wear and tear of the joint. This process can progress more rapidly as a result of injury to the joint. Osteoarthritis is caused by the break down of the smooth surface between two bones, known as cartilage, which can eventually lead to the two opposing bones coming in direct contact and eroding one another. 
OA symptoms typically begin with minor pain during physical activity, but can eventually progress to be present at rest. The pain can be debilitating and prevent one from doing activities that they would normally do as part of their daily routine. OA typically affects the weight-bearing joints, such as the back, knee and hip, due to the mechanical nature of this disease process. Unlike rheumatoid arthritis, osteoarthritis is most commonly a disease of the elderly. The strongest predictor of osteoarthritis is increased age, likely due to the declining ability of chondrocytes to maintain the structural integrity of cartilage. More than 30 percent of women have some degree of osteoarthritis by age 65. One of the primary tools for diagnosing OA is X-ray imaging of the joint. Findings on X-ray that are consistent with OA include joint space narrowing (due to cartilage breakdown), bone spurs, sclerosis, and bone cysts. Rheumatoid arthritis Rheumatoid arthritis (RA) is a disorder in which the body's own immune system starts to attack body tissues, specifically the cartilage at the ends of bones, known as articular cartilage. The attack is not only directed at the joint but also at many other parts of the body. RA often affects joints in the fingers, wrists, knees and elbows, is symmetrical (appears on both sides of the body), and can lead to severe progressive deformity in a matter of years if not adequately treated. RA usually begins earlier in life than OA and commonly affects people aged 20 and above. In children, the disorder can present with a skin rash, fever, pain, disability, and limitations in daily activities. With earlier diagnosis and appropriate aggressive treatment, many individuals can obtain control of their symptoms, leading to a better quality of life compared to those without treatment. One of the main triggers of bone erosion in the joints in rheumatoid arthritis is inflammation of the synovium (lining of the joint capsule), caused in part by the production of pro-inflammatory cytokines and receptor activator of nuclear factor kappa B ligand (RANKL), a cell surface protein present in Th17 cells and osteoblasts. Osteoclast activity can be directly induced by osteoblasts through the RANK/RANKL mechanism. Lupus Lupus is an autoimmune collagen vascular disorder that can present with severe arthritis. Other features of lupus include a skin rash, extreme photosensitivity, hair loss, kidney problems, lung fibrosis and constant joint pain. Gout Gout is caused by deposition of uric acid crystals in the joints, causing inflammation. There is also an uncommon form of gouty arthritis caused by the formation of rhomboid crystals of calcium pyrophosphate, known as pseudogout. In the early stages, gouty arthritis usually occurs in one joint, but with time, it can occur in many joints and be quite crippling. The joints in gout can often become swollen and lose function. Gouty arthritis can become particularly painful and potentially debilitating when gout cannot successfully be treated. When uric acid levels and gout symptoms cannot be controlled with standard gout medicines that decrease the production of uric acid (e.g., allopurinol) or increase uric acid elimination from the body through the kidneys (e.g., probenecid), this can be referred to as refractory chronic gout. Comparison of types Other Infectious arthritis is another severe form of arthritis that is sometimes referred to as septic arthritis.
It presents with symptoms of infection that can include sudden onset of chills, fever, and joint pain. The condition is caused by bacteria that spread through the bloodstream from elsewhere in the body, infect a joint, and begin to erode cartilage. Infectious arthritis must be diagnosed and treated promptly to prevent irreversible joint damage. Only about 1% of cases of infectious arthritis are due to any of a wide variety of viruses. The virus SARS-CoV-2, which causes COVID-19, has been added to the list of viruses that can cause infectious arthritis. SARS-CoV-2 causes reactive arthritis rather than local septic arthritis. Psoriasis can develop into psoriatic arthritis. With psoriatic arthritis, most individuals develop the skin symptoms first and then the joint-related symptoms. The typical features are continuous joint pain, stiffness and swelling, as in other forms of arthritis. The disease recurs, with periods of remission, but there is no known cure for the disorder. Treatment currently revolves around decreasing autoimmune attacks with immune-suppressing medications. A small percentage develop a severely painful and destructive form of arthritis which destroys the small joints in the hands and can lead to permanent disability and loss of hand function. Treatment There is no known cure for arthritis and rheumatic diseases. Treatment options vary depending on the type of arthritis and include physical therapy, exercise and diet, orthopedic bracing, and oral and topical medications. Joint replacement surgery may be required to repair damage, restore function, or relieve pain. Physical therapy In general, studies have shown that physical exercise of the affected joint can noticeably improve long-term pain relief. Furthermore, exercise of the arthritic joint is encouraged to maintain the health of the particular joint and the overall body of the person. Individuals with arthritis can benefit from both physical and occupational therapy. In arthritis the joints become stiff and the range of movement can be limited. Physical therapy has been shown to significantly improve function, decrease pain, and delay the need for surgical intervention in advanced cases. Exercise prescribed by a physical therapist has been shown to be more effective than medications in treating osteoarthritis of the knee. Exercise often focuses on improving muscle strength, endurance and flexibility. In some cases, exercises may be designed to train balance. Occupational therapy can provide assistance with activities. Assistive technology is used to aid a person with a disability by reducing physical barriers and improving the use of the damaged body part, typically after an amputation. Assistive technology devices can be customized to the patient or bought commercially. Medications There are several types of medications that are used for the treatment of arthritis. Treatment typically begins with medications that have the fewest side effects, with further medications being added if these are insufficiently effective. Depending on the type of arthritis, the medications that are given may be different. For example, the first-line treatment for osteoarthritis is acetaminophen (paracetamol), while for inflammatory arthritis it involves non-steroidal anti-inflammatory drugs (NSAIDs) like ibuprofen. Opioids and NSAIDs may be less well tolerated. However, topical NSAIDs may have better safety profiles than oral NSAIDs.
For more severe cases of osteoarthritis, intra-articular corticosteroid injections may also be considered. The drugs to treat rheumatoid arthritis (RA) range from corticosteroids to monoclonal antibodies given intravenously. Due to the autoimmune nature of RA, treatments may include not only pain medications and anti-inflammatory drugs, but also another category of drugs called disease-modifying antirheumatic drugs (DMARDs). csDMARDs, TNF biologics and tsDMARDs are specific kinds of DMARDs that are recommended for treatment. Treatment with DMARDs is designed to slow down the progression of RA by initiating an adaptive immune response, in part by CD4+ T helper (Th) cells, specifically Th17 cells. Th17 cells are present in higher quantities at the site of bone destruction in joints and produce inflammatory cytokines associated with inflammation, such as interleukin-17 (IL-17). Surgery A number of surgical interventions have been incorporated in the treatment of arthritis since the 1950s. The primary surgical treatment option of arthritis is joint replacement surgery known as arthroplasty. Common joints that are replaced due to arthritis include the shoulder, hip, and knee. Arthroscopic surgery for osteoarthritis of the knee provides no additional benefit to patients when compared to optimized physical and medical therapy. Joint replacement surgery can last anywhere from 15-30 years depending on the patient. Following joint replacement surgery, patients can expect to get back to several physical activities including those such as swimming, tennis, and golf. Adaptive aids People with hand arthritis can have trouble with simple activities of daily living tasks (ADLs), such as turning a key in a lock or opening jars, as these activities can be cumbersome and painful. There are adaptive aids or assistive devices (ADs) available to help with these tasks, but they are generally more costly than conventional products with the same function. It is now possible to 3-D print adaptive aids, which have been released as open source hardware to reduce patient costs. Adaptive aids can significantly help arthritis patients and the vast majority of those with arthritis need and use them. Alternative medicine Further research is required to determine if transcutaneous electrical nerve stimulation (TENS) for knee osteoarthritis is effective for controlling pain. Low level laser therapy may be considered for relief of pain and stiffness associated with arthritis. Evidence of benefit is tentative. Pulsed electromagnetic field therapy (PEMFT) has tentative evidence supporting improved functioning but no evidence of improved pain in osteoarthritis. The FDA has not approved PEMFT for the treatment of arthritis. In Canada, PEMF devices are legally licensed by Health Canada for the treatment of pain associated with arthritic conditions. Epidemiology Arthritis is predominantly a disease of the elderly, but children can also be affected by the disease. Arthritis is more common in women than men at all ages and affects all races, ethnic groups and cultures. In the United States a CDC survey based on data from 2013 to 2015 showed 54.4 million (22.7%) adults had self-reported doctor-diagnosed arthritis, and 23.7 million (43.5% of those with arthritis) had arthritis-attributable activity limitation (AAAL). With an aging population, this number is expected to increase. 
Adults with co-morbid conditions, such as heart disease, diabetes, and obesity, were seen to have a higher-than-average prevalence of doctor-diagnosed arthritis (49.3%, 47.1%, and 30.6% respectively). Disability due to musculoskeletal disorders increased by 45% from 1990 to 2010. Of these, osteoarthritis is the fastest-increasing major health condition. Among the many reports on the increased prevalence of musculoskeletal conditions, data from Africa are lacking and underestimated. A systematic review assessed the prevalence of arthritis in Africa and included twenty population-based and seven hospital-based studies. The majority of studies, twelve, were from South Africa. Nine studies were well-conducted, eleven studies were of moderate quality, and seven studies were conducted poorly. The results of the systematic review were as follows: Rheumatoid arthritis: 0.1% in Algeria (urban setting); 0.6% in Democratic Republic of Congo (urban setting); 2.5% and 0.07% in urban and rural settings in South Africa respectively; 0.3% in Egypt (rural setting), 0.4% in Lesotho (rural setting) Osteoarthritis: 55.1% in South Africa (urban setting); ranged from 29.5 to 82.7% in South Africans aged 65 years and older Knee osteoarthritis has the highest prevalence of all types of osteoarthritis, with 33.1% in rural South Africa Ankylosing spondylitis: 0.1% in South Africa (rural setting) Psoriatic arthritis: 4.4% in South Africa (urban setting) Gout: 0.7% in South Africa (urban setting) Juvenile idiopathic arthritis: 0.3% in Egypt (urban setting) History Evidence of osteoarthritis and potentially inflammatory arthritis has been discovered in dinosaurs. The first known traces of human arthritis date back as far as 4500 BC. In early reports, arthritis was frequently referred to as the most common ailment of prehistoric peoples. It was noted in skeletal remains of Native Americans found in Tennessee and parts of what is now Olathe, Kansas. Evidence of arthritis has been found throughout history, from Ötzi, a mummy found along the border of modern Italy and Austria, to Egyptian mummies. In 1715, William Musgrave published the second edition of his most important medical work, De arthritide symptomatica, which concerned arthritis and its effects. Augustin Jacob Landré-Beauvais, a 28-year-old resident physician at Salpêtrière Asylum in France, was the first person to describe the symptoms of rheumatoid arthritis. Though Landré-Beauvais' classification of rheumatoid arthritis as a relative of gout was inaccurate, his dissertation encouraged others to further study the disease. John Charnley completed the first hip replacement (total hip arthroplasty) in England to treat arthritis in the 1960s. Society and culture Arthritis is the most common cause of disability in the United States. More than 20 million individuals with arthritis have severe limitations in function on a daily basis. Absenteeism and frequent visits to the physician are common in individuals who have arthritis. Arthritis can make it difficult for individuals to be physically active and some become homebound. It is estimated that the total cost of arthritis cases is close to $100 billion, of which almost 50% is from lost earnings. Terminology The term is derived from arthr- and -itis, the latter suffix having come to be associated with inflammation. The word arthritides is the plural form of arthritis, and denotes the collective group of arthritis-like conditions.
Biology and health sciences
Non-infectious disease
null
1778
https://en.wikipedia.org/wiki/Acetylene
Acetylene
Acetylene (systematic name: ethyne) is the chemical compound with the formula C2H2 and structure HC≡CH. It is a hydrocarbon and the simplest alkyne. This colorless gas is widely used as a fuel and a chemical building block. It is unstable in its pure form and thus is usually handled as a solution. Pure acetylene is odorless, but commercial grades usually have a marked odor due to impurities such as divinyl sulfide and phosphine. As an alkyne, acetylene is unsaturated because its two carbon atoms are bonded together in a triple bond. The carbon–carbon triple bond places all four atoms in the same straight line, with CCH bond angles of 180°. Discovery Acetylene was discovered in 1836 by Edmund Davy, who identified it as a "new carburet of hydrogen". It was an accidental discovery while attempting to isolate potassium metal. By heating potassium carbonate with carbon at very high temperatures, he produced a residue of what is now known as potassium carbide (K2C2), which reacted with water to release the new gas. It was rediscovered in 1860 by French chemist Marcellin Berthelot, who coined the name acétylène. Berthelot's empirical formula for acetylene (C4H2), as well as the alternative name "quadricarbure d'hydrogène" (hydrogen quadricarbide), were incorrect because many chemists at that time used the wrong atomic mass for carbon (6 instead of 12). Berthelot was able to prepare this gas by passing vapours of organic compounds (methanol, ethanol, etc.) through a red-hot tube and collecting the effluent. He also found that acetylene was formed by sparking electricity through mixed cyanogen and hydrogen gases. Berthelot later obtained acetylene directly by passing hydrogen between the poles of a carbon arc. Preparation Partial combustion of hydrocarbons Since the 1950s, acetylene has mainly been manufactured by the partial combustion of methane (idealized overall reaction: 2 CH4 + 3/2 O2 → C2H2 + 3 H2O) in the US, much of the EU, and many other countries. It is also recovered as a side product in the production of ethylene by cracking of hydrocarbons. Approximately 400,000 tonnes were produced by this method in 1983. Its presence in ethylene is usually undesirable because of its explosive character and its ability to poison Ziegler–Natta catalysts. It is selectively hydrogenated into ethylene, usually using Pd–Ag catalysts. Dehydrogenation of alkanes The heaviest alkanes in petroleum and natural gas are cracked into lighter molecules, which are dehydrogenated at high temperature. The last of these reactions, the dehydrogenative coupling of methane (2 CH4 → C2H2 + 3 H2), is implemented in the process of anaerobic decomposition of methane by microwave plasma. Carbochemical method The first acetylene was produced by Edmund Davy in 1836, via potassium carbide. Acetylene was historically produced by hydrolysis (reaction with water) of calcium carbide: CaC2 + 2 H2O → C2H2 + Ca(OH)2. This reaction was discovered by Friedrich Wöhler in 1862, but a suitable commercial-scale production method that allowed acetylene to be put into wider use was not found until 1892, by the Canadian inventor Thomas Willson, while he was searching for a viable commercial production method for aluminum. As late as the early 21st century, China, Japan, and Eastern Europe produced acetylene primarily by this method. As of 2013, the use of this technology had declined worldwide, with the notable exception of China and its emphasis on a coal-based chemical industry. Otherwise, oil has increasingly supplanted coal as the chief source of reduced carbon. Calcium carbide production requires high temperatures, ~2000 °C, necessitating the use of an electric arc furnace.
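As a back-of-the-envelope illustration of the carbide route's stoichiometry, the short sketch below (not part of the source; the molar masses are standard reference values) estimates the theoretical mass of acetylene obtainable per kilogram of calcium carbide.

```python
# Theoretical yield for CaC2 + 2 H2O -> C2H2 + Ca(OH)2.
# Molar masses in g/mol are standard values, not taken from the article.
M_CaC2 = 40.08 + 2 * 12.011      # calcium carbide, ~64.10
M_C2H2 = 2 * 12.011 + 2 * 1.008  # acetylene, ~26.04

kg_c2h2_per_kg_cac2 = M_C2H2 / M_CaC2
print(f"~{kg_c2h2_per_kg_cac2:.2f} kg of acetylene per kg of CaC2")  # ~0.41
```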
In the US, this process was an important part of the late-19th century revolution in chemistry enabled by the massive hydroelectric power project at Niagara Falls. Bonding In terms of valence bond theory, in each carbon atom the 2s orbital hybridizes with one 2p orbital, forming an sp hybrid. The other two 2p orbitals remain unhybridized. One sp hybrid orbital from each carbon overlaps with that of the other to form a strong σ valence bond between the carbons, while the remaining sp orbital on each carbon bonds to a hydrogen atom, also by a σ bond. The two unhybridized 2p orbitals on each carbon form a pair of weaker π bonds. Since acetylene is a linear symmetrical molecule, it possesses the D∞h point group. Physical properties Changes of state At atmospheric pressure, acetylene cannot exist as a liquid and does not have a melting point. The triple point on the phase diagram corresponds to the melting point (−80.8 °C) at the minimal pressure at which liquid acetylene can exist (1.27 atm). At temperatures below the triple point, solid acetylene can change directly to the vapour (gas) by sublimation. The sublimation point at atmospheric pressure is −84.0 °C. Other At room temperature, the solubility of acetylene in acetone is 27.9 g per kg. For the same amount of dimethylformamide (DMF), the solubility is 51 g. At 20.26 bar, the solubility increases to 689.0 and 628.0 g for acetone and DMF, respectively. These solvents are used in pressurized gas cylinders. Applications Welding Approximately 20% of acetylene is supplied by the industrial gases industry for oxyacetylene gas welding and cutting due to the high temperature of the flame. Combustion of acetylene with oxygen produces an extremely hot flame, releasing 11.8 kJ/g. Oxygen with acetylene is the hottest-burning common gas mixture; acetylene is the third-hottest natural chemical flame, after those of dicyanoacetylene and cyanogen. Oxy-acetylene welding was a popular welding process in previous decades. The development and advantages of arc-based welding processes have made oxy-fuel welding nearly extinct for many applications. Acetylene usage for welding has dropped significantly. On the other hand, oxy-acetylene welding equipment is quite versatile – not only because the torch is preferred for some sorts of iron or steel welding (as in certain artistic applications), but also because it lends itself easily to brazing, braze-welding, metal heating (for annealing or tempering, bending or forming), the loosening of corroded nuts and bolts, and other applications. Bell Canada cable-repair technicians still use portable acetylene-fuelled torch kits as a soldering tool for sealing lead sleeve splices in manholes and in some aerial locations. Oxyacetylene welding may also be used in areas where electricity is not readily accessible. Oxyacetylene cutting is used in many metal fabrication shops. For use in welding and cutting, the working pressures must be controlled by a regulator, since above a certain pressure, if subjected to a shockwave (caused, for example, by a flashback), acetylene decomposes explosively into hydrogen and carbon. Chemicals Acetylene is useful for many processes, but few are conducted on a commercial scale. One of the major chemical applications is the ethynylation of formaldehyde. Acetylene adds to aldehydes and ketones to form α-ethynyl alcohols; with formaldehyde, the reaction gives butynediol, with propargyl alcohol as the by-product. Copper acetylide is used as the catalyst. In addition to ethynylation, acetylene reacts with carbon monoxide to give acrylic acid or acrylic esters.
Metal catalysts are required. These derivatives form products such as acrylic fibers, glasses, paints, resins, and polymers. Except in China, use of acetylene as a chemical feedstock has declined by 70% from 1965 to 2007 owing to cost and environmental considerations. In China, acetylene is a major precursor to vinyl chloride. Historical uses Prior to the widespread use of petrochemicals, coal-derived acetylene was a building block for several industrial chemicals. Thus acetylene can be hydrated to give acetaldehyde, which in turn can be oxidized to acetic acid. Processes leading to acrylates were also commercialized. Almost all of these processes became obsolete with the availability of petroleum-derived ethylene and propylene. Niche applications In 1881, the Russian chemist Mikhail Kucherov described the hydration of acetylene to acetaldehyde using catalysts such as mercury(II) bromide. Before the advent of the Wacker process, this reaction was conducted on an industrial scale. The polymerization of acetylene with Ziegler–Natta catalysts produces polyacetylene films. Polyacetylene, a chain of CH centres with alternating single and double bonds, was one of the first discovered organic semiconductors. Its reaction with iodine produces a highly electrically conducting material. Although such materials are not useful, these discoveries led to the developments of organic semiconductors, as recognized by the Nobel Prize in Chemistry in 2000 to Alan J. Heeger, Alan G MacDiarmid, and Hideki Shirakawa. In the 1920s, pure acetylene was experimentally used as an inhalation anesthetic. Acetylene is sometimes used for carburization (that is, hardening) of steel when the object is too large to fit into a furnace. Acetylene is used to volatilize carbon in radiocarbon dating. The carbonaceous material in an archeological sample is treated with lithium metal in a small specialized research furnace to form lithium carbide (also known as lithium acetylide). The carbide can then be reacted with water, as usual, to form acetylene gas to feed into a mass spectrometer to measure the isotopic ratio of carbon-14 to carbon-12. Acetylene combustion produces a strong, bright light and the ubiquity of carbide lamps drove much acetylene commercialization in the early 20th century. Common applications included coastal lighthouses, street lights, and automobile and mining headlamps. In most of these applications, direct combustion is a fire hazard, and so acetylene has been replaced, first by incandescent lighting and many years later by low-power/high-lumen LEDs. Nevertheless, acetylene lamps remain in limited use in remote or otherwise inaccessible areas and in countries with a weak or unreliable central electric grid. Natural occurrence The energy richness of the C≡C triple bond and the rather high solubility of acetylene in water make it a suitable substrate for bacteria, provided an adequate source is available. A number of bacteria living on acetylene have been identified. The enzyme acetylene hydratase catalyzes the hydration of acetylene to give acetaldehyde: Acetylene is a moderately common chemical in the universe, often associated with the atmospheres of gas giants. One curious discovery of acetylene is on Enceladus, a moon of Saturn. Natural acetylene is believed to form from catalytic decomposition of long-chain hydrocarbons at temperatures of and above. 
Since such temperatures are highly unlikely on such a small distant body, this discovery is potentially suggestive of catalytic reactions within that moon, making it a promising site to search for prebiotic chemistry. Reactions Vinylation reactions In vinylation reactions, H−X compounds add across the triple bond. Alcohols and phenols add to acetylene to give vinyl ethers. Thiols give vinyl thioethers. Similarly, vinylpyrrolidone and vinylcarbazole are produced industrially by vinylation of 2-pyrrolidone and carbazole. The hydration of acetylene is a vinylation reaction, but the resulting vinyl alcohol isomerizes to acetaldehyde. The reaction is catalyzed by mercury salts. This reaction once was the dominant technology for acetaldehyde production, but it has been displaced by the Wacker process, which affords acetaldehyde by oxidation of ethylene, a cheaper feedstock. A similar situation applies to the conversion of acetylene to the valuable vinyl chloride by hydrochlorination versus the oxychlorination of ethylene. Vinyl acetate is used instead of acetylene for some vinylations, which are more accurately described as transvinylations. Higher esters of vinyl acetate have been used in the synthesis of vinyl formate. Organometallic chemistry Acetylene and its derivatives (2-butyne, diphenylacetylene, etc.) form complexes with transition metals. Its bonding to the metal is somewhat similar to that of ethylene complexes. These complexes are intermediates in many catalytic reactions, such as alkyne trimerisation to benzene, tetramerization to cyclooctatetraene, and carbonylation to hydroquinone under basic conditions. Metal acetylides are also common. Copper(I) acetylide and silver acetylide can be formed in aqueous solutions with ease due to a favorable solubility equilibrium. Acid-base reactions With a pKa of 25, acetylene can be deprotonated by a superbase to form an acetylide: HC≡CH + RM → RH + HC≡CM. Various organometallic and inorganic reagents are effective. Hydrogenation Acetylene can be semihydrogenated to ethylene, providing a feedstock for a variety of polyethylene plastics. Halogens add to the triple bond. Safety and handling Acetylene is not especially toxic, but when generated from calcium carbide (CaC2), it can contain toxic impurities such as traces of phosphine and arsine, which give it a distinct garlic-like smell. It is also highly flammable, as are most light hydrocarbons, hence its use in welding. Its most singular hazard is associated with its intrinsic instability, especially when it is pressurized: under certain conditions acetylene can react in an exothermic addition-type reaction to form a number of products, typically benzene and/or vinylacetylene, possibly in addition to carbon and hydrogen. Consequently, acetylene, if initiated by intense heat or a shockwave, can decompose explosively if the absolute pressure of the gas exceeds a modest threshold. Most regulators and pressure gauges on equipment report gauge pressure, and the safe limit for acetylene is therefore 101 kPa gauge, or 15 psig. It is therefore supplied and stored dissolved in acetone or dimethylformamide (DMF), contained in a gas cylinder with a porous filling, which renders it safe to transport and use, given proper handling. Acetylene cylinders should be used in the upright position to avoid withdrawing acetone during use.
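The distinction between absolute and gauge pressure in the preceding paragraph can be made concrete with a small sketch; the absolute threshold of roughly 2 atm used below is an assumption chosen to be consistent with the quoted 101 kPa gauge / 15 psig figure, not a value taken from the text.

```python
# Convert an assumed absolute decomposition threshold (~2 atm) to gauge pressure.
ATM_KPA = 101.325                  # standard atmosphere in kPa
absolute_limit_kpa = 2 * ATM_KPA   # assumed absolute threshold, ~202.65 kPa

gauge_kpa = absolute_limit_kpa - ATM_KPA   # what a regulator's gauge would read
gauge_psi = gauge_kpa / 6.894757           # kPa -> psi

print(f"{gauge_kpa:.0f} kPa gauge ~= {gauge_psi:.1f} psig")   # ~101 kPa, ~14.7 psig
```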
Information on safe storage of acetylene in upright cylinders is provided by OSHA, the Compressed Gas Association, the United States Mine Safety and Health Administration (MSHA), EIGA, and other agencies. Copper catalyses the decomposition of acetylene, and as a result acetylene should not be transported in copper pipes. Cylinders should be stored in an area segregated from oxidizers to avoid an exacerbated reaction in case of fire or leakage. Acetylene cylinders should not be stored in confined spaces, enclosed vehicles, garages, or buildings, to avoid unintended leakage leading to an explosive atmosphere. In the US, the National Electrical Code (NEC) requires consideration for hazardous areas, including those where acetylene may be released during accidents or leaks. Consideration may include electrical classification and the use of listed Group A electrical components in the US. Further information on determining the areas requiring special consideration is in NFPA 497. In Europe, ATEX also requires consideration for hazardous areas where flammable gases may be released during accidents or leaks.
Physical sciences
Aliphatic hydrocarbons
Chemistry
1786
https://en.wikipedia.org/wiki/Arabic%20numerals
Arabic numerals
The ten Arabic numerals (0, 1, 2, 3, 4, 5, 6, 7, 8, and 9) are the most commonly used symbols for writing numbers. The term often also implies a positional notation number with a decimal base, in particular when contrasted with Roman numerals. However the symbols are also used to write numbers in other bases, such as octal, as well as non-numerical information such as trademarks or license plate identifiers. They are also called Western Arabic numerals, Western digits, European digits, Ghubār numerals, or Hindu–Arabic numerals due to positional notation (but not these digits) originating in India. The Oxford English Dictionary uses lowercase Arabic numerals while using the fully capitalized term Arabic Numerals for Eastern Arabic numerals. In contemporary society, the terms digits, numbers, and numerals often implies only these symbols, although it can only be inferred from context. Europeans first learned of Arabic numerals , though their spread was a gradual process. After Italian scholar Fibonacci of Pisa encountered the numerals in the Algerian city of Béjaïa, his 13th-century work became crucial in making them known in Europe. However, their use was largely confined to Northern Italy until the invention of the printing press in the 15th century. European trade, books, and colonialism subsequently helped popularize the adoption of Arabic numerals around the world. The numerals are used worldwide—significantly beyond the contemporary spread of the Latin alphabet—and have become common in the writing systems where other numeral systems existed previously, such as Chinese and Japanese numerals. History Origin Positional decimal notation including a zero symbol was developed in India, using symbols visually distinct from those that would eventually enter into international use. As the concept spread, the sets of symbols used in different regions diverged over time. The immediate ancestors of the digits now commonly called "Arabic numerals" were introduced to Europe in the 10th century by Arabic speakers of Spain and North Africa, with digits at the time in wide use from Libya to Morocco. In the east from Egypt to Iraq and the Arabian Peninsula, the Arabs were using the Eastern Arabic numerals or "Mashriki" numerals: ٠, ١, ٢, ٣, ٤, ٥, ٦, ٧, ٨, ٩. Al-Nasawi wrote in the early 11th century that mathematicians had not agreed on the form of the numerals, but most of them had agreed to train themselves with the forms now known as Eastern Arabic numerals. The oldest specimens of the written numerals available are from Egypt and date to 873–874 AD. They show three forms of the numeral "2" and two forms of the numeral "3", and these variations indicate the divergence between what later became known as the Eastern Arabic numerals and the Western Arabic numerals. The Western Arabic numerals came to be used in the Maghreb and Al-Andalus from the 10th century onward. Some amount of consistency in the Western Arabic numeral forms endured from the 10th century, found in a Latin manuscript of Isidore of Seville's from 976 and the Gerbertian abacus, into the 12th and 13th centuries, in early manuscripts of translations from the city of Toledo. Calculations were originally performed using a dust board (, Latin: ), which involved writing symbols with a stylus and erasing them. The use of the dust board appears to have introduced a divergence in terminology as well: whereas the Hindu reckoning was called in the east, it was called 'calculation with dust' in the west. 
The numerals themselves were referred to in the west as 'dust figures' or 'dust letters'. Al-Uqlidisi later invented a system of calculations with ink and paper 'without board and erasing' (). A popular myth claims that the symbols were designed to indicate their numeric value through the number of angles they contained, but there is no contemporary evidence of this, and the myth is difficult to reconcile with any digits past 4. Adoption and spread The first mentions of the numerals from 1 to 9 in the West are found in the 976 , an illuminated collection of various historical documents covering a period from antiquity to the 10th century in Hispania. Other texts show that numbers from 1 to 9 were occasionally supplemented by a placeholder known as , represented as a circle or wheel, reminiscent of the eventual symbol for zero. The Arabic term for zero is (), transliterated into Latin as , which became the English word cipher. From the 980s, Gerbert of Aurillac (later Pope Sylvester II) used his position to spread knowledge of the numerals in Europe. Gerbert studied in Barcelona in his youth. He was known to have requested mathematical treatises concerning the astrolabe from Lupitus of Barcelona after he had returned to France. The reception of Arabic numerals in the West was gradual and lukewarm, as other numeral systems circulated in addition to the older Roman numbers. As a discipline, the first to adopt Arabic numerals as part of their own writings were astronomers and astrologists, evidenced from manuscripts surviving from mid-12th-century Bavaria. Reinher of Paderborn (1140–1190) used the numerals in his calendrical tables to calculate the dates of Easter more easily in his text . Italy Leonardo Fibonacci was a Pisan mathematician who had studied in the Pisan trading colony of Bugia, in what is now Algeria, and he endeavored to promote the numeral system in Europe with his 1202 book : When my father, who had been appointed by his country as public notary in the customs at Bugia acting for the Pisan merchants going there, was in charge, he summoned me to him while I was still a child, and having an eye to usefulness and future convenience, desired me to stay there and receive instruction in the school of accounting. There, when I had been introduced to the art of the Indians' nine symbols through remarkable teaching, knowledge of the art very soon pleased me above all else and I came to understand it. The s analysis highlighting the advantages of positional notation was widely influential. Likewise, Fibonacci's use of the Béjaïa digits in his exposition ultimately led to their widespread adoption in Europe. Fibonacci's work coincided with the European commercial revolution of the 12th and 13th centuries centered in Italy. Positional notation facilitated complex calculations (such as currency conversion) to be completed more quickly than was possible with the Roman system. In addition, the system could handle larger numbers, did not require a separate reckoning tool, and allowed the user to check their work without repeating the entire procedure. Late medieval Italian merchants did not stop using Roman numerals or other reckoning tools: instead, Arabic numerals were adopted for use in addition to their preexisting methods. Europe By the late 14th century, only a few texts using Arabic numerals appeared outside of Italy. 
This suggests that the use of Arabic numerals in commercial practice, and the significant advantage they conferred, remained a virtual Italian monopoly until the late 15th century. This may in part have been due to language barriers: although Fibonacci's book was written in Latin, the Italian abacus traditions were predominantly written in Italian vernaculars that circulated in the private collections of abacus schools or individuals. The European acceptance of the numerals was accelerated by the invention of the printing press, and they became widely known during the 15th century. Their use grew steadily in other centers of finance and trade, such as Lyon. Early evidence of their use in Britain includes, in England, an equal hour horary quadrant from 1396, a 1445 inscription on the tower of Heathfield Church, Sussex, a 1448 inscription on a wooden lych-gate of Bray Church, Berkshire, and a 1487 inscription on the belfry door at Piddletrenthide church, Dorset; and, in Scotland, a 1470 inscription on the tomb of the first Earl of Huntly in Elgin Cathedral. In central Europe, the King of Hungary, Ladislaus the Posthumous, started the use of Arabic numerals, which appear for the first time in a royal document of 1456. By the mid-16th century, they had been widely adopted in Europe, and by 1800 had almost completely replaced the use of counting boards and Roman numerals in accounting. Roman numerals were mostly relegated to niche uses such as years and numbers on clock faces. Russia Prior to the introduction of Arabic numerals, Cyrillic numerals, derived from the Cyrillic alphabet, were used by South and East Slavs. The system was used in Russia as late as the early 18th century, although it was formally replaced in official use by Peter the Great in 1699. Reasons for Peter's switch from the alphanumerical system are believed to go beyond a surface-level desire to imitate the West. Historian Peter Brown makes arguments for sociological, militaristic, and pedagogical reasons for the change. At a broad, societal level, Russian merchants, soldiers, and officials increasingly came into contact with counterparts from the West and became familiar with the communal use of Arabic numerals. Peter also covertly travelled throughout Northern Europe from 1697 to 1698 during his Grand Embassy and was likely informally exposed to Western mathematics during this time. The Cyrillic system was found to be inferior for calculating practical kinematic values, such as the trajectories and parabolic flight patterns of artillery. With its use, it was difficult to keep pace with Arabic numerals in the growing field of ballistics, whereas Western mathematicians such as John Napier had been publishing on the topic since 1614. China The Chinese Shang dynasty numerals from the 14th century BC predate the Indian Brahmi numerals by over 1000 years and show substantial similarity to the Brahmi numerals. Similar to the modern Arabic numerals, the Shang dynasty numeral system was also decimal-based and positional. While positional Chinese numeral systems such as the counting rod system and Suzhou numerals had been in use prior to the introduction of modern Arabic numerals, the externally developed system was eventually introduced to medieval China by the Hui people. In the early 17th century, European-style Arabic numerals were introduced by Spanish and Portuguese Jesuits. Encoding The ten Arabic numerals are encoded in virtually every character set designed for electric, radio, and digital communication, such as Morse code.
They are encoded in ASCII (and therefore in Unicode encodings) at positions 0x30 to 0x39. Masking all but the four least-significant binary digits gives the value of the decimal digit, a design decision facilitating the digitization of text onto early computers. EBCDIC used a different offset, but also possessed the aforementioned masking property.
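The masking property described above can be illustrated with a short sketch (Python, not part of the article); the EBCDIC digit codes shown for comparison start at 0xF0 rather than 0x30, but share the same low four bits.

```python
# Recover each digit's numeric value by keeping only the four least-significant bits.
for ch in "0123456789":
    ascii_code = ord(ch)                       # ASCII/Unicode code point: 0x30 .. 0x39
    ebcdic_code = 0xF0 + (ascii_code - 0x30)   # corresponding EBCDIC code: 0xF0 .. 0xF9
    assert ascii_code & 0x0F == ebcdic_code & 0x0F == int(ch)
    print(f"{ch}: ASCII {ascii_code:#04x}, EBCDIC {ebcdic_code:#04x}, value {ascii_code & 0x0F}")
```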
Mathematics
Language
null
1797
https://en.wikipedia.org/wiki/Acre
Acre
The acre is a unit of land area used in the British imperial and the United States customary systems. It is traditionally defined as the area of one chain by one furlong (66 by 660 feet), which is exactly equal to 10 square chains, 1/640 of a square mile, 4,840 square yards, or 43,560 square feet, and approximately 4,047 m2, or about 40% of a hectare. Based upon the international yard and pound agreement of 1959, an acre may be declared as exactly 4,046.8564224 square metres. The acre is sometimes abbreviated ac but is usually spelled out as the word "acre". Traditionally, in the Middle Ages, an acre was conceived of as the area of land that could be ploughed by one man using a team of eight oxen in one day. The acre is still a statutory measure in the United States. Both the international acre and the US survey acre are in use, but they differ by only four parts per million (see below). The most common use of the acre is to measure tracts of land. The acre is used in many established and former Commonwealth of Nations countries by custom. In a few, it continues as a statute measure, although not since 2010 in the UK, and not for decades in Australia, New Zealand, and South Africa. In many places where it is not a statute measure, it is still lawful to "use for trade" if given as supplementary information and is not used for land registration. Description One acre equals 1/640 (0.0015625) of a square mile, 4,840 square yards, 43,560 square feet, or about 4,047 square metres (see below). While all modern variants of the acre contain 4,840 square yards, there are alternative definitions of a yard, so the exact size of an acre depends upon the particular yard on which it is based. Originally, an acre was understood as a strip of land sized at forty perches (660 ft, or 1 furlong) long and four perches (66 ft) wide; this may have also been understood as an approximation of the amount of land a yoke of oxen could plough in one day (a furlong being "a furrow long"). A square enclosing one acre is approximately 69.57 yards, or 208 feet 9 inches (about 63.6 m), on a side. As a unit of measure, an acre has no prescribed shape; any area of 43,560 square feet is an acre. US survey acres In the international yard and pound agreement of 1959, the United States and five countries of the Commonwealth of Nations defined the international yard to be exactly 0.9144 metre. The US authorities decided that, while the refined definition would apply nationally in all other respects, the US survey foot (and thus the survey acre) would continue 'until such a time as it becomes desirable and expedient to readjust [it]'. By inference, an "international acre" may be calculated as exactly 4,046.8564224 square metres, but it does not have a basis in any international agreement. Both the international acre and the US survey acre contain 1/640 of a square mile, or 4,840 square yards, but alternative definitions of a yard are used (see survey foot and survey yard), so the exact size of an acre depends upon the yard upon which it is based. The US survey acre is about 4,046.872 square metres; its exact value is based on an inch defined by 1 metre = 39.37 inches exactly, as established by the Mendenhall Order of 1893. Surveyors in the United States use both international and survey feet, and consequently, both varieties of acre.
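As a quick check of the figures above, the sketch below (not part of the article) derives both acres from their foot definitions – the international foot of exactly 0.3048 m and the US survey foot of 1200/3937 m – and reproduces the roughly four-parts-per-million difference between them.

```python
# Derive the international and US survey acres (in square metres) exactly.
from fractions import Fraction

SQ_FT_PER_ACRE = 43_560

international_foot = Fraction(3048, 10000)   # 0.3048 m exactly
survey_foot = Fraction(1200, 3937)           # US survey foot, exact by definition

international_acre = SQ_FT_PER_ACRE * international_foot ** 2
survey_acre = SQ_FT_PER_ACRE * survey_foot ** 2

print(float(international_acre))     # 4046.8564224
print(float(survey_acre))            # ~4046.8726
print(float((survey_acre - international_acre) / international_acre) * 1e6)  # ~4 ppm
```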
Since the difference between the US survey acre and international acre (0.016 square metres, 160 square centimetres or 24.8 square inches), is only about a quarter of the size of an A4 sheet or US letter, it is usually not important which one is being discussed. Areas are seldom measured with sufficient accuracy for the different definitions to be detectable. In October 2019, the US National Geodetic Survey and the National Institute of Standards and Technology announced their joint intent to end the "temporary" continuance of the US survey foot, mile, and acre units (as permitted by their 1959 decision, above), with effect from the end of 2022. Spanish acre The Puerto Rican cuerda () is sometimes called the "Spanish acre" in the continental United States. Use The acre is commonly used in many current and former Commonwealth countries by custom, and in a few it continues as a statute measure. These include Antigua and Barbuda, American Samoa, The Bahamas, Belize, the British Virgin Islands, Canada, the Cayman Islands, Dominica, the Falkland Islands, Grenada, Ghana, Guam, the Northern Mariana Islands, Jamaica, Montserrat, Samoa, Saint Lucia, St. Helena, St. Kitts and Nevis, St. Vincent and the Grenadines, Turks and Caicos, the United Kingdom, the United States and the US Virgin Islands. Republic of Ireland In the Republic of Ireland, the hectare is legally used under European units of measurement directives; however, the acre (the same standard statute as used in the UK, not the old Irish acre, which was of a different size) is still widely used, especially in agriculture. Indian subcontinent In India, residential plots are measured in square feet or square metre, while agricultural land is measured in acres. In Sri Lanka, the division of an acre into 160 perches or 4 roods is common. In Pakistan, residential plots are measured in (20 = 1  = 605 sq yards) and open/agriculture land measurement is in acres (8 = 1 acre) and (25 acres = 1 = 200 ), and . United Kingdom Its use as a primary unit for trade in the United Kingdom ceased to be permitted from 1 October 1995, due to the 1994 amendment of the Weights and Measures Act, where it was replaced by the hectare though its use as a supplementary unit continues to be permitted indefinitely. This was with the exemption of Land registration, which records the sale and possession of land, in 2010 HM Land Registry ended its exemption. The measure is still used to communicate with the public and informally (non-contract) by the farming and property industries. Equivalence to other units of area 1 international acre is equal to the following metric units: 0.40468564224 hectare (A square with 100 m sides has an area of 1 hectare.) 4,046.8564224 square metres (or a square with approximately 63.61 m sides) 1 United States survey acre is equal to: 0.404687261 hectare 4,046.87261 square metres (1 square kilometre is equal to 247.105 acres) 1 acre (both variants) is equal to the following customary units: 66 feet × 660 feet (43,560 square feet) 10 square chains (1 chain = 66 feet = 22 yards = 4 rods = 100 links) 1 acre is approximately 208.71 feet × 208.71 feet (a square) 4,840 square yards 43,560 square feet 160 perches. 
A perch is equal to a square rod (1 square rod is 0.00625 acre); 4 roods; a furlong by a chain (furlong 220 yards, chain 22 yards); 40 rods by 4 rods, 160 rods2 (historically fencing was often sold in 40 rod lengths); 1/640 (0.0015625) square mile (1 square mile is equal to 640 acres). Perhaps the easiest way for US residents to envision an acre is as a rectangle measuring 88 yards by 55 yards (1/10 of 880 yards by 1/16 of 880 yards), about the size of a standard American football field. To be more exact, one acre is 90.75% of a 100-yd-long by 53.33-yd-wide American football field (without the end zone). The full field, including the end zones, covers about 1.32 acres. For residents of other countries, the acre might be envisioned as rather more than half of a football pitch. Historical origin The word acre is derived from Norman French, attested for the first time in a text from Fécamp in 1006 with the meaning of "agrarian measure". Acre dates back to the Old Scandinavian akr, "cultivated field, ploughed land", which is perpetuated in Icelandic and Faroese "field (wheat)", Norwegian and Swedish, and Danish "field", and is cognate with words in German, Dutch, Latin, Sanskrit, and Greek. In English, an obsolete variant spelling was aker. According to the Act on the Composition of Yards and Perches, dating from around 1300, an acre is "40 perches [rods] in length and four in breadth", meaning 220 yards by 22 yards. As detailed in the diagram, an acre was roughly the amount of land tillable by a yoke of oxen in one day. Before the introduction of the metric system, many countries in Europe used their own official acres. In France, the traditional unit of area was the arpent carré, a measure based on the Roman system of land measurement. The acre was used only in Normandy (and neighbouring places outside its traditional borders), but its value varied greatly across Normandy, ranging from 3,632 to 9,725 square metres, with 8,172 square metres being the most frequent value. Even inside the same part of Normandy, for instance in the pays de Caux, farmers (still in the 20th century) distinguished between two acres, one of 68 ares 66 centiares and another of 56 to 65 ca. The Normandy acre was usually divided into 4 roods and 160 square perches, like the English acre. The Normandy acre was equal to 1.6 arpents, the unit of area more commonly used in Northern France outside of Normandy. In Canada, the Paris arpent used in Quebec before the metric system was adopted is sometimes called the "French acre" in English, even though the Paris arpent and the Normandy acre were two very different units of area in ancient France (the Paris arpent became the unit of area of French Canada, whereas the Normandy acre was never used in French Canada). In Germany, the Netherlands, and Eastern Europe the traditional unit of area was the morgen. Like the acre, the morgen was a unit of ploughland, representing a strip that could be ploughed by one man and an ox or horse in a morning. There were many variants of the morgen, differing between the different German territories. It was also used in Old Prussia, in the Balkans, Norway, and Denmark. Statutory values for the acre were enacted in England, and subsequently the United Kingdom, by acts of Edward I, Edward III, Henry VIII, George IV, and Queen Victoria; the British Weights and Measures Act of 1878 defined it as containing 4,840 square yards.
Historically, the size of farms and landed estates in the United Kingdom was usually expressed in acres (or acres, roods, and perches), even if the number of acres was so large that it might conveniently have been expressed in square miles. For example, a certain landowner might have been said to own 32,000 acres of land, not 50 square miles of land. The acre is related to the square mile, with 640 acres making up one square mile. One mile is 5,280 feet (1,760 yards). In western Canada and the western United States, divisions of land area were typically based on the square mile, and fractions thereof. If the square mile is divided into quarters, each quarter has a side length of 1/2 mile (880 yards) and is 1/4 square mile in area, or 160 acres. These subunits are typically then again divided into quarters, with each side being 1/4 mile long, and being 1/16 of a square mile in area, or 40 acres. In the United States, farmland was typically divided as such, and the phrase "the back 40" refers to the 40-acre parcel at the back of the farm. Most of the Canadian Prairie Provinces and the US Midwest are on square-mile grids for surveying purposes. Legacy units Customary acre – The customary acre was roughly similar to the Imperial acre, but it was subject to considerable local variation similar to the variation in carucates, virgates, bovates, nooks, and farundels. These may have been multiples of the customary acre, rather than the statute acre. Builder's acre – a rounded, even figure used in US real-estate development to simplify the math and for marketing. It is nearly 10% smaller than a survey acre, and the discrepancy has led to lawsuits alleging misrepresentation. Feddan – a Middle Eastern measurement unit. Scottish acre = 1.3 Imperial acres (5,080 m2, an obsolete Scottish measurement) Irish acre = Cheshire acre = Stremma or Greek acre ≈ 10,000 square Greek feet, but now set at exactly 1,000 square metres (a similar unit was the zeugarion) Dunam or Turkish acre ≈ 1,600 square Turkish paces, but now set at exactly 1,000 square metres (a similar unit was the çift) Actus quadratus or Roman acre ≈ 14,400 square Roman feet (about 1,260 square metres) God's Acre – a synonym for a churchyard. Long acre – the grass strip on either side of a road that may be used for illicit grazing. Town acre – a term used in the early 19th century in the planning of towns on a grid plan, such as Adelaide, South Australia, and Wellington, New Plymouth, and Nelson in New Zealand. The land was divided into plots of an Imperial acre, and these became known as town acres.
Physical sciences
Area
null
1800
https://en.wikipedia.org/wiki/Adenosine%20triphosphate
Adenosine triphosphate
Adenosine triphosphate (ATP) is a nucleoside triphosphate that provides energy to drive and support many processes in living cells, such as muscle contraction, nerve impulse propagation, and chemical synthesis. Found in all known forms of life, it is often referred to as the "molecular unit of currency" for intracellular energy transfer. When consumed in a metabolic process, ATP converts either to adenosine diphosphate (ADP) or to adenosine monophosphate (AMP). Other processes regenerate ATP. It is also a precursor to DNA and RNA, and is used as a coenzyme. An average adult human processes around 50 kilograms (about 100 moles) of ATP daily. From the perspective of biochemistry, ATP is classified as a nucleoside triphosphate, which indicates that it consists of three components: a nitrogenous base (adenine), the sugar ribose, and the triphosphate. Structure ATP consists of an adenine attached by its N9 nitrogen atom to the 1′ carbon atom of a sugar (ribose), which in turn is attached at its 5′ carbon atom to a triphosphate group. In its many reactions related to metabolism, the adenine and sugar groups remain unchanged, but the triphosphate is converted to di- and monophosphate, giving respectively the derivatives ADP and AMP. The three phosphoryl groups are labeled as alpha (α), beta (β), and, for the terminal phosphate, gamma (γ). In neutral solution, ionized ATP exists mostly as ATP4−, with a small proportion of ATP3−. Metal cation binding Polyanionic and featuring a potentially chelating polyphosphate group, ATP binds metal cations with high affinity. The binding constant for Mg2+ is high. The binding of a divalent cation, almost always magnesium, strongly affects the interaction of ATP with various proteins. Due to the strength of the ATP-Mg2+ interaction, ATP exists in the cell mostly as a complex with Mg2+ bonded to the phosphate oxygen centers. A second magnesium ion is critical for ATP binding in the kinase domain. The presence of Mg2+ regulates kinase activity. It is interesting from an RNA world perspective that ATP can carry a Mg2+ ion that catalyzes RNA polymerization. Chemical properties Salts of ATP can be isolated as colorless solids. ATP is stable in aqueous solutions between pH 6.8 and 7.4 (in the absence of catalysts). At more extreme pH levels, it rapidly hydrolyses to ADP and phosphate. Living cells maintain the ratio of ATP to ADP at a point ten orders of magnitude from equilibrium, with ATP concentrations fivefold higher than the concentration of ADP. In the context of biochemical reactions, the P-O-P bonds are frequently referred to as high-energy bonds. Reactive aspects The hydrolysis of ATP into ADP and inorganic phosphate, ATP(aq) + H2O(l) → ADP(aq) + [HPO4]2−(aq) + H+(aq), releases enthalpy. The value may differ under physiological conditions if the reactant and products are not exactly in these ionization states. The values of the free energy released by cleaving either a phosphate (Pi) or a pyrophosphate (PPi) unit from ATP at standard state concentrations of 1 mol/L at pH 7 are: ATP + H2O → ADP + Pi, ΔG°' = −30.5 kJ/mol (−7.3 kcal/mol); ATP + H2O → AMP + PPi, ΔG°' = −45.6 kJ/mol (−10.9 kcal/mol). These abbreviated equations at a pH near 7 can be written more explicitly (R = adenosyl): [RO-P(O)2-O-P(O)2-O-PO3]4− + H2O → [RO-P(O)2-O-PO3]3− + [HPO4]2− + H+; [RO-P(O)2-O-P(O)2-O-PO3]4− + H2O → [RO-PO3]2− + [HO3P-O-PO3]3− + H+. At cytoplasmic conditions, where the ADP/ATP ratio is 10 orders of magnitude from equilibrium, the ΔG is around −57 kJ/mol.
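The jump from the standard −30.5 kJ/mol to roughly −57 kJ/mol under cellular conditions follows from the concentration dependence of the free energy, ΔG = ΔG°' + RT ln([ADP][Pi]/[ATP]). The sketch below (not from the source) evaluates this relation at illustrative, assumed concentrations; real cytoplasmic values vary by cell type and push the result further toward −57 kJ/mol.

```python
# Free energy of ATP hydrolysis away from standard conditions.
import math

R = 8.314            # gas constant, J/(mol*K)
T = 310.0            # ~37 degrees C, in kelvin
dG0_prime = -30.5e3  # standard transformed free energy, J/mol (from the text)

# Illustrative concentrations in mol/L (assumed, not from the article).
ATP, ADP, Pi = 5e-3, 0.5e-3, 5e-3

dG = dG0_prime + R * T * math.log((ADP * Pi) / ATP)
print(f"{dG / 1000:.1f} kJ/mol")   # roughly -50 kJ/mol with these numbers
```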
Along with pH, the free energy change of ATP hydrolysis is also associated with Mg2+ concentration, from ΔG°' = −35.7 kJ/mol at a Mg2+ concentration of zero, to ΔG°' = −31 kJ/mol at [Mg2+] = 5 mM. Higher concentrations of Mg2+ decrease free energy released in the reaction due to binding of Mg2+ ions to negatively charged oxygen atoms of ATP at pH 7. Production from AMP and ADP Production, aerobic conditions A typical intracellular concentration of ATP may be 1–10 μmol per gram of tissue in a variety of eukaryotes. The dephosphorylation of ATP and rephosphorylation of ADP and AMP occur repeatedly in the course of aerobic metabolism. ATP can be produced by a number of distinct cellular processes; the three main pathways in eukaryotes are (1) glycolysis, (2) the citric acid cycle/oxidative phosphorylation, and (3) beta-oxidation. The overall process of oxidizing glucose to carbon dioxide, the combination of pathways 1 and 2, known as cellular respiration, produces about 30 equivalents of ATP from each molecule of glucose. ATP production by a non-photosynthetic aerobic eukaryote occurs mainly in the mitochondria, which comprise nearly 25% of the volume of a typical cell. Glycolysis In glycolysis, glucose and glycerol are metabolized to pyruvate. Glycolysis generates two equivalents of ATP through substrate phosphorylation catalyzed by two enzymes, phosphoglycerate kinase (PGK) and pyruvate kinase. Two equivalents of nicotinamide adenine dinucleotide (NADH) are also produced, which can be oxidized via the electron transport chain and result in the generation of additional ATP by ATP synthase. The pyruvate generated as an end-product of glycolysis is a substrate for the Krebs Cycle. Glycolysis is viewed as consisting of two phases with five steps each. In phase 1, "the preparatory phase", glucose is converted to 2 d-glyceraldehyde-3-phosphate (g3p). One ATP is invested in Step 1, and another ATP is invested in Step 3. Steps 1 and 3 of glycolysis are referred to as "Priming Steps". In Phase 2, two equivalents of g3p are converted to two pyruvates. In Step 7, two ATP are produced. Also, in Step 10, two further equivalents of ATP are produced. In Steps 7 and 10, ATP is generated from ADP. A net of two ATPs is formed in the glycolysis cycle. The glycolysis pathway is later associated with the Citric Acid Cycle which produces additional equivalents of ATP. Regulation In glycolysis, hexokinase is directly inhibited by its product, glucose-6-phosphate, and pyruvate kinase is inhibited by ATP itself. The main control point for the glycolytic pathway is phosphofructokinase (PFK), which is allosterically inhibited by high concentrations of ATP and activated by high concentrations of AMP. The inhibition of PFK by ATP is unusual since ATP is also a substrate in the reaction catalyzed by PFK; the active form of the enzyme is a tetramer that exists in two conformations, only one of which binds the second substrate fructose-6-phosphate (F6P). The protein has two binding sites for ATP – the active site is accessible in either protein conformation, but ATP binding to the inhibitor site stabilizes the conformation that binds F6P poorly. A number of other small molecules can compensate for the ATP-induced shift in equilibrium conformation and reactivate PFK, including cyclic AMP, ammonium ions, inorganic phosphate, and fructose-1,6- and -2,6-biphosphate. 
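The ATP bookkeeping for glycolysis described above can be reduced to a small tally (a sketch, not from the source; the enzyme names in the comments are the standard assignments for those steps).

```python
# ATP invested and produced per molecule of glucose in glycolysis.
# Phase 2 counts are doubled because two G3P molecules pass through it.
atp_changes = {
    "step 1 (hexokinase, priming)": -1,
    "step 3 (phosphofructokinase, priming)": -1,
    "step 7 (phosphoglycerate kinase)": +2,
    "step 10 (pyruvate kinase)": +2,
}
print(sum(atp_changes.values()))   # net of 2 ATP per glucose
```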
Citric acid cycle In the mitochondrion, pyruvate is oxidized by the pyruvate dehydrogenase complex to the acetyl group, which is fully oxidized to carbon dioxide by the citric acid cycle (also known as the Krebs cycle). Every "turn" of the citric acid cycle produces two molecules of carbon dioxide, one equivalent of ATP guanosine triphosphate (GTP) through substrate-level phosphorylation catalyzed by succinyl-CoA synthetase, as succinyl-CoA is converted to succinate, three equivalents of NADH, and one equivalent of FADH2. NADH and FADH2 are recycled (to NAD+ and FAD, respectively) by oxidative phosphorylation, generating additional ATP. The oxidation of NADH results in the synthesis of 2–3 equivalents of ATP, and the oxidation of one FADH2 yields between 1–2 equivalents of ATP. The majority of cellular ATP is generated by this process. Although the citric acid cycle itself does not involve molecular oxygen, it is an obligately aerobic process because O2 is used to recycle the NADH and FADH2. In the absence of oxygen, the citric acid cycle ceases. The generation of ATP by the mitochondrion from cytosolic NADH relies on the malate-aspartate shuttle (and to a lesser extent, the glycerol-phosphate shuttle) because the inner mitochondrial membrane is impermeable to NADH and NAD+. Instead of transferring the generated NADH, a malate dehydrogenase enzyme converts oxaloacetate to malate, which is translocated to the mitochondrial matrix. Another malate dehydrogenase-catalyzed reaction occurs in the opposite direction, producing oxaloacetate and NADH from the newly transported malate and the mitochondrion's interior store of NAD+. A transaminase converts the oxaloacetate to aspartate for transport back across the membrane and into the intermembrane space. In oxidative phosphorylation, the passage of electrons from NADH and FADH2 through the electron transport chain releases the energy to pump protons out of the mitochondrial matrix and into the intermembrane space. This pumping generates a proton motive force that is the net effect of a pH gradient and an electric potential gradient across the inner mitochondrial membrane. Flow of protons down this potential gradient – that is, from the intermembrane space to the matrix – yields ATP by ATP synthase. Three ATP are produced per turn. Although oxygen consumption appears fundamental for the maintenance of the proton motive force, in the event of oxygen shortage (hypoxia), intracellular acidosis (mediated by enhanced glycolytic rates and ATP hydrolysis), contributes to mitochondrial membrane potential and directly drives ATP synthesis. Most of the ATP synthesized in the mitochondria will be used for cellular processes in the cytosol; thus it must be exported from its site of synthesis in the mitochondrial matrix. ATP outward movement is favored by the membrane's electrochemical potential because the cytosol has a relatively positive charge compared to the relatively negative matrix. For every ATP transported out, it costs 1 H+. Producing one ATP costs about 3 H+. Therefore, making and exporting one ATP requires 4H+. The inner membrane contains an antiporter, the ADP/ATP translocase, which is an integral membrane protein used to exchange newly synthesized ATP in the matrix for ADP in the intermembrane space. Regulation The citric acid cycle is regulated mainly by the availability of key substrates, particularly the ratio of NAD+ to NADH and the concentrations of calcium, inorganic phosphate, ATP, ADP, and AMP. 
Citrate – the ion that gives its name to the cycle – is a feedback inhibitor of citrate synthase and also inhibits PFK, providing a direct link between the regulation of the citric acid cycle and glycolysis. Beta oxidation In the presence of air and various cofactors and enzymes, fatty acids are converted to acetyl-CoA. The pathway is called beta-oxidation. Each cycle of beta-oxidation shortens the fatty acid chain by two carbon atoms and produces one equivalent each of acetyl-CoA, NADH, and FADH2. The acetyl-CoA is metabolized by the citric acid cycle to generate ATP, while the NADH and FADH2 are used by oxidative phosphorylation to generate ATP. Dozens of ATP equivalents are generated by the beta-oxidation of a single long acyl chain. Regulation In oxidative phosphorylation, the key control point is the reaction catalyzed by cytochrome c oxidase, which is regulated by the availability of its substrate – the reduced form of cytochrome c. The amount of reduced cytochrome c available is directly related to the amounts of the other substrates. Thus, a high ratio of [NADH] to [NAD+] or a high ratio of [ADP][Pi] to [ATP] implies a high amount of reduced cytochrome c and a high level of cytochrome c oxidase activity. An additional level of regulation is introduced by the transport rates of ATP and NADH between the mitochondrial matrix and the cytoplasm. Ketosis Ketone bodies can be used as fuels, yielding 22 ATP and 2 GTP molecules per acetoacetate molecule when oxidized in the mitochondria. Ketone bodies are transported from the liver to other tissues, where acetoacetate and beta-hydroxybutyrate can be reconverted to acetyl-CoA to produce reducing equivalents (NADH and FADH2), via the citric acid cycle. Ketone bodies cannot be used as fuel by the liver, because the liver lacks the enzyme β-ketoacyl-CoA transferase, also called thiolase. Acetoacetate in low concentrations is taken up by the liver and undergoes detoxification through the methylglyoxal pathway, which ends with lactate. Acetoacetate in high concentrations is absorbed by cells other than those in the liver and enters a different pathway via 1,2-propanediol. Though the pathway follows a different series of steps requiring ATP, 1,2-propanediol can be turned into pyruvate. Production, anaerobic conditions Fermentation is the metabolism of organic compounds in the absence of air. It involves substrate-level phosphorylation in the absence of a respiratory electron transport chain. The equation for the reaction of glucose to form lactic acid is: C6H12O6 + 2 ADP + 2 Pi → 2 CH3CH(OH)COOH + 2 ATP + 2 H2O. Anaerobic respiration is respiration in the absence of O2. Prokaryotes can utilize a variety of electron acceptors. These include nitrate, sulfate, and carbon dioxide. ATP replenishment by nucleoside diphosphate kinases ATP can also be synthesized through several so-called "replenishment" reactions catalyzed by the enzyme families of nucleoside diphosphate kinases (NDKs), which use other nucleoside triphosphates as a high-energy phosphate donor, and the ATP:guanido-phosphotransferase family. ATP production during photosynthesis In plants, ATP is synthesized in the thylakoid membrane of the chloroplast. The process is called photophosphorylation. The "machinery" is similar to that in mitochondria except that light energy is used to pump protons across a membrane to produce a proton-motive force. ATP synthesis by ATP synthase then proceeds exactly as in oxidative phosphorylation.
Some of the ATP produced in the chloroplasts is consumed in the Calvin cycle, which produces triose sugars. ATP recycling The total quantity of ATP in the human body is about 0.1 mol/L. The majority of ATP is recycled from ADP by the aforementioned processes. Thus, at any given time, the total amount of ATP + ADP remains fairly constant. The energy used by human cells in an adult requires the hydrolysis of 100 to 150 moles of ATP daily, which means a human will typically use their body weight worth of ATP over the course of the day. Each equivalent of ATP is recycled 1000–1500 times during a single day, at approximately 9×10^20 molecules per second. Biochemical functions Intracellular signaling ATP is involved in signal transduction by serving as a substrate for kinases, enzymes that transfer phosphate groups. Kinases are the most common ATP-binding proteins. They share a small number of common folds. Phosphorylation of a protein by a kinase can activate a cascade such as the mitogen-activated protein kinase cascade. ATP is also a substrate of adenylate cyclase, most commonly in G protein-coupled receptor signal transduction pathways, and is transformed into the second messenger cyclic AMP, which is involved in triggering calcium signals by the release of calcium from intracellular stores. This form of signal transduction is particularly important in brain function, although it is involved in the regulation of a multitude of other cellular processes. DNA and RNA synthesis ATP is one of four monomers required in the synthesis of RNA. The process is promoted by RNA polymerases. A similar process occurs in the formation of DNA, except that ATP is first converted to the deoxyribonucleotide dATP. Like many condensation reactions in nature, DNA replication and DNA transcription also consume ATP. Amino acid activation in protein synthesis Aminoacyl-tRNA synthetase enzymes consume ATP in the attachment of tRNA to amino acids, forming aminoacyl-tRNA complexes. Aminoacyl transferase binds AMP-amino acid to tRNA. The coupling reaction proceeds in two steps: aa + ATP ⟶ aa-AMP + PPi, followed by aa-AMP + tRNA ⟶ aa-tRNA + AMP. The amino acid is coupled to the penultimate nucleotide at the 3′-end of the tRNA (the A in the sequence CCA) via an ester bond. ATP binding cassette transporter Transporting chemicals out of a cell against a gradient is often associated with ATP hydrolysis. Transport is mediated by ATP binding cassette transporters. The human genome encodes 48 ABC transporters, which are used for exporting drugs, lipids, and other compounds. Extracellular signalling and neurotransmission Cells secrete ATP to communicate with other cells in a process called purinergic signalling. ATP serves as a neurotransmitter in many parts of the nervous system, modulates ciliary beating, affects vascular oxygen supply, etc. ATP is either secreted directly across the cell membrane through channel proteins or is pumped into vesicles, which then fuse with the membrane. Cells detect ATP using the purinergic receptor proteins P2X and P2Y. ATP has been shown to be a critically important signalling molecule for microglia–neuron interactions in the adult brain, as well as during brain development. Furthermore, tissue-injury-induced ATP signalling is a major factor in rapid microglial phenotype changes. Muscle contraction ATP fuels muscle contractions. Muscle contractions are regulated by signaling pathways, with different muscle types regulated by specific pathways and stimuli based on their particular function.
However, in all muscle types, contraction is performed by the proteins actin and myosin. ATP is initially bound to myosin. When ATPase hydrolyzes the bound ATP into ADP and inorganic phosphate, myosin is positioned in a way that it can bind to actin. Myosin bound by ADP and Pi forms cross-bridges with actin, and the subsequent release of ADP and Pi releases energy as the power stroke. The power stroke causes the actin filament to slide past the myosin filament, shortening the muscle and causing a contraction. Another ATP molecule can then bind to myosin, releasing it from actin and allowing this process to repeat. Protein solubility ATP has recently been proposed to act as a biological hydrotrope and has been shown to affect proteome-wide solubility. Abiogenic origins Acetyl phosphate (AcP), a precursor to ATP, can readily be synthesized at modest yields from thioacetate at pH 7 and 20 °C and at pH 8 and 50 °C, although acetyl phosphate is less stable at warmer temperatures and under alkaline conditions than under cooler and acidic-to-neutral conditions. It was unable to promote polymerization of ribonucleotides and amino acids and was only capable of phosphorylating organic compounds. It was shown that AcP can promote the aggregation and stabilization of AMP in the presence of Na+, and that aggregation of nucleotides could promote polymerization above 75 °C in the absence of Na+. It is possible that polymerization promoted by AcP could occur at mineral surfaces. It was shown that ADP can only be phosphorylated to ATP by AcP, and that other nucleoside triphosphates were not produced by AcP. This might explain why all lifeforms use ATP to drive biochemical reactions. ATP analogues Biochemistry laboratories often use in vitro studies to explore ATP-dependent molecular processes. ATP analogs are also used in X-ray crystallography to determine a protein structure in complex with ATP, often together with other substrates. Enzyme inhibitors of ATP-dependent enzymes such as kinases are needed to examine the binding sites and transition states involved in ATP-dependent reactions. Most useful ATP analogs cannot be hydrolyzed as ATP would be; instead, they trap the enzyme in a structure closely related to the ATP-bound state. Adenosine 5′-(γ-thiotriphosphate) is an extremely common ATP analog in which one of the gamma-phosphate oxygens is replaced by a sulfur atom; this anion is hydrolyzed at a dramatically slower rate than ATP itself and functions as an inhibitor of ATP-dependent processes. In crystallographic studies, hydrolysis transition states are modeled by the bound vanadate ion. Caution is warranted in interpreting the results of experiments using ATP analogs, since some enzymes can hydrolyze them at appreciable rates at high concentration. Medical use ATP is used intravenously for some heart-related conditions. History ATP was discovered in 1929 by Karl Lohmann and Jendrassik and, independently, by Cyrus Fiske and Yellapragada Subba Rao of Harvard Medical School, with both teams competing against each other to find an assay for phosphorus. It was proposed to be the intermediary between energy-yielding and energy-requiring reactions in cells by Fritz Albert Lipmann in 1941. It was first synthesized in the laboratory by Alexander Todd in 1948, and he was awarded the Nobel Prize in Chemistry in 1957 partly for this work. The 1978 Nobel Prize in Chemistry was awarded to Peter Dennis Mitchell for the discovery of the chemiosmotic mechanism of ATP synthesis. The 1997 Nobel Prize in Chemistry was divided, one half jointly to Paul D.
Boyer and John E. Walker "for their elucidation of the enzymatic mechanism underlying the synthesis of adenosine triphosphate (ATP)" and the other half to Jens C. Skou "for the first discovery of an ion-transporting enzyme, Na+, K+ -ATPase."
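The turnover figures quoted in the ATP recycling section above can be sanity-checked with a short back-of-the-envelope calculation. The sketch below is purely illustrative and not part of the article's sources: it treats the quoted 0.1 and 100–150 figures as mole amounts (as the stated 1000–1500-fold recycling ratio implies) and assumes an approximate molar mass of 507 g/mol for ATP.

```python
# Illustrative sanity check of the ATP turnover figures quoted above.
# Assumptions (not from the article): ATP molar mass ~507 g/mol; the 0.1 and
# 100-150 figures are treated as mole amounts, as the recycling ratio implies.

AVOGADRO = 6.022e23        # molecules per mole
ATP_MOLAR_MASS_G = 507.18  # g/mol, approximate
SECONDS_PER_DAY = 86_400

body_pool_mol = 0.1        # quoted total ATP pool
daily_turnover_mol = 150   # upper end of the quoted 100-150 mol/day range

recycling_per_day = daily_turnover_mol / body_pool_mol              # 1500 cycles
daily_mass_kg = daily_turnover_mol * ATP_MOLAR_MASS_G / 1000        # ~76 kg, roughly body weight
rate_per_second = daily_turnover_mol * AVOGADRO / SECONDS_PER_DAY   # ~1e21 molecules/s

print(recycling_per_day, round(daily_mass_kg), f"{rate_per_second:.1e}")
```

For 150 mol/day the per-second rate comes out near 1×10^21; the article's figure of about 9×10^20 molecules/s corresponds to a turnover of roughly 130 mol/day, the same order of magnitude.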
Biology and health sciences
Biochemistry and molecular biology
null
1805
https://en.wikipedia.org/wiki/Antibiotic
Antibiotic
An antibiotic is a type of antimicrobial substance active against bacteria. It is the most important type of antibacterial agent for fighting bacterial infections, and antibiotic medications are widely used in the treatment and prevention of such infections. They may either kill or inhibit the growth of bacteria. A limited number of antibiotics also possess antiprotozoal activity. Antibiotics are not effective against viruses such as the ones which cause the common cold or influenza. Drugs which inhibit growth of viruses are termed antiviral drugs or antivirals. Antibiotics are also not effective against fungi. Drugs which inhibit growth of fungi are called antifungal drugs. Sometimes, the term antibiotic—literally "opposing life", from the Greek roots ἀντι anti, "against" and βίος bios, "life"—is broadly used to refer to any substance used against microbes, but in the usual medical usage, antibiotics (such as penicillin) are those produced naturally (by one microorganism fighting another), whereas non-antibiotic antibacterials (such as sulfonamides and antiseptics) are fully synthetic. However, both classes have the same effect of killing or preventing the growth of microorganisms, and both are included in antimicrobial chemotherapy. "Antibacterials" include bactericides, bacteriostatics, antibacterial soaps, and chemical disinfectants, whereas antibiotics are an important class of antibacterials used more specifically in medicine and sometimes in livestock feed. Antibiotics have been used since ancient times. Many civilizations used topical application of moldy bread, with many references to its beneficial effects arising from ancient Egypt, Nubia, China, Serbia, Greece, and Rome. The first person to directly document the use of molds to treat infections was John Parkinson (1567–1650). Antibiotics revolutionized medicine in the 20th century. Synthetic antibiotic chemotherapy as a science and development of antibacterials began in Germany with Paul Ehrlich in the late 1880s. Alexander Fleming (1881–1955) discovered modern day penicillin in 1928, the widespread use of which proved significantly beneficial during wartime. The first sulfonamide and the first systemically active antibacterial drug, Prontosil, was developed by a research team led by Gerhard Domagk in 1932 or 1933 at the Bayer Laboratories of the IG Farben conglomerate in Germany. However, the effectiveness and easy access to antibiotics have also led to their overuse and some bacteria have evolved resistance to them. Antimicrobial resistance (AMR), a naturally occurring process, is driven largely by the misuse and overuse of antimicrobials. Yet, at the same time, many people around the world do not have access to essential antimicrobials. The World Health Organization has classified AMR as a widespread "serious threat [that] is no longer a prediction for the future, it is happening right now in every region of the world and has the potential to affect anyone, of any age, in any country". Each year, nearly 5 million deaths are associated with AMR globally. Global deaths attributable to AMR numbered 1.27 million in 2019. Etymology The term 'antibiosis', meaning "against life", was introduced by the French bacteriologist Jean Paul Vuillemin as a descriptive name of the phenomenon exhibited by these early antibacterial drugs. Antibiosis was first described in 1877 in bacteria when Louis Pasteur and Robert Koch observed that an airborne bacillus could inhibit the growth of Bacillus anthracis. 
These drugs were later renamed antibiotics by Selman Waksman, an American microbiologist, in 1947. The term antibiotic was first used in 1942 by Selman Waksman and his collaborators in journal articles to describe any substance produced by a microorganism that is antagonistic to the growth of other microorganisms in high dilution. This definition excluded substances that kill bacteria but that are not produced by microorganisms (such as gastric juices and hydrogen peroxide). It also excluded synthetic antibacterial compounds such as the sulfonamides. In current usage, the term "antibiotic" is applied to any medication that kills bacteria or inhibits their growth, regardless of whether that medication is produced by a microorganism or not. The term "antibiotic" derives from anti + βιωτικός (biōtikos), "fit for life, lively", which comes from βίωσις (biōsis), "way of life", and that from βίος (bios), "life". The term "antibacterial" derives from Greek ἀντί (anti), "against" + βακτήριον (baktērion), diminutive of βακτηρία (baktēria), "staff, cane", because the first bacteria to be discovered were rod-shaped. Usage Medical uses Antibiotics are used to treat or prevent bacterial infections, and sometimes protozoan infections. (Metronidazole is effective against a number of parasitic diseases). When an infection is suspected of being responsible for an illness but the responsible pathogen has not been identified, an empiric therapy is adopted. This involves the administration of a broad-spectrum antibiotic based on the signs and symptoms presented and is initiated pending laboratory results that can take several days. When the responsible pathogenic microorganism is already known or has been identified, definitive therapy can be started. This will usually involve the use of a narrow-spectrum antibiotic. The choice of antibiotic given will also be based on its cost. Identification is critically important as it can reduce the cost and toxicity of the antibiotic therapy and also reduce the possibility of the emergence of antimicrobial resistance. To avoid surgery, antibiotics may be given for non-complicated acute appendicitis. Antibiotics may be given as a preventive measure and this is usually limited to at-risk populations such as those with a weakened immune system (particularly in HIV cases to prevent pneumonia), those taking immunosuppressive drugs, cancer patients, and those having surgery. Their use in surgical procedures is to help prevent infection of incisions. They have an important role in dental antibiotic prophylaxis where their use may prevent bacteremia and consequent infective endocarditis. Antibiotics are also used to prevent infection in cases of neutropenia particularly cancer-related. The use of antibiotics for secondary prevention of coronary heart disease is not supported by current scientific evidence, and may actually increase cardiovascular mortality, all-cause mortality and the occurrence of stroke. Routes of administration There are many different routes of administration for antibiotic treatment. Antibiotics are usually taken by mouth. In more severe cases, particularly deep-seated systemic infections, antibiotics can be given intravenously or by injection. Where the site of infection is easily accessed, antibiotics may be given topically in the form of eye drops onto the conjunctiva for conjunctivitis or ear drops for ear infections and acute cases of swimmer's ear. 
Topical use is also one of the treatment options for some skin conditions including acne and cellulitis. Advantages of topical application include achieving a high and sustained concentration of antibiotic at the site of infection, reducing the potential for systemic absorption and toxicity, and reducing the total volume of antibiotic required, thereby also reducing the risk of antibiotic misuse. Topical antibiotics applied over certain types of surgical wounds have been reported to reduce the risk of surgical site infections. However, there are certain general causes for concern with topical administration of antibiotics. Some systemic absorption of the antibiotic may occur; the quantity of antibiotic applied is difficult to dose accurately, and there is also the possibility of local hypersensitivity reactions or contact dermatitis occurring. It is recommended to administer antibiotics as soon as possible, especially in life-threatening infections. Many emergency departments stock antibiotics for this purpose. Global consumption Antibiotic consumption varies widely between countries. The WHO report on surveillance of antibiotic consumption published in 2018 analysed 2015 data from 65 countries. As measured in defined daily doses per 1,000 inhabitants per day, Mongolia had the highest consumption, with a rate of 64.4, while Burundi had the lowest at 4.4. Amoxicillin and amoxicillin/clavulanic acid were the most frequently consumed. Side effects Antibiotics are screened for any negative effects before their approval for clinical use, and are usually considered safe and well tolerated. However, some antibiotics have been associated with a wide range of adverse side effects, from mild to very severe, depending on the type of antibiotic used, the microbes targeted, and the individual patient. Side effects may reflect the pharmacological or toxicological properties of the antibiotic or may involve hypersensitivity or allergic reactions. Adverse effects range from fever and nausea to major allergic reactions, including photodermatitis and anaphylaxis. Common side effects of oral antibiotics include diarrhea, caused by disruption of the species composition of the intestinal flora, which can result, for example, in overgrowth of pathogenic bacteria such as Clostridioides difficile. Taking probiotics during the course of antibiotic treatment can help prevent antibiotic-associated diarrhea. Antibacterials can also affect the vaginal flora, and may lead to overgrowth of yeast species of the genus Candida in the vulvo-vaginal area. Additional side effects can result from interaction with other drugs, such as the possibility of tendon damage from the administration of a quinolone antibiotic with a systemic corticosteroid. Some antibiotics may also damage the mitochondrion, a bacteria-derived organelle found in eukaryotic, including human, cells. Mitochondrial damage causes oxidative stress in cells and has been suggested as a mechanism for side effects from fluoroquinolones. Antibiotics are also known to affect chloroplasts. Interactions Birth control pills There are few well-controlled studies on whether antibiotic use increases the risk of oral contraceptive failure. The majority of studies indicate that antibiotics do not interfere with birth control pills; for example, clinical studies suggest that the failure rate of contraceptive pills caused by antibiotics is very low (about 1%).
Situations that may increase the risk of oral contraceptive failure include non-compliance (forgetting to take the pill), vomiting, diarrhea, and gastrointestinal disorders or interpatient variability in oral contraceptive absorption that affect ethinylestradiol serum levels in the blood. Women with menstrual irregularities may be at higher risk of failure and should be advised to use backup contraception during antibiotic treatment and for one week after its completion. If patient-specific risk factors for reduced oral contraceptive efficacy are suspected, backup contraception is recommended. In cases where antibiotics have been suggested to affect the effectiveness of birth control pills, such as for the broad-spectrum antibiotic rifampicin, these cases may be due to an increase in the activity of hepatic enzymes causing increased breakdown of the pill's active ingredients. Effects on the intestinal flora, which might result in reduced absorption of estrogens in the colon, have also been suggested, but such suggestions have been inconclusive and controversial. Clinicians have recommended that extra contraceptive measures be applied during therapies using antibiotics that are suspected to interact with oral contraceptives. More studies on the possible interactions between antibiotics and birth control pills (oral contraceptives) are required, as well as careful assessment of patient-specific risk factors for potential oral contraceptive pill failure, prior to dismissing the need for backup contraception. Alcohol Interactions between alcohol and certain antibiotics may occur and may cause side effects and decreased effectiveness of antibiotic therapy. While moderate alcohol consumption is unlikely to interfere with many common antibiotics, there are specific types of antibiotics with which alcohol consumption may cause serious side effects. Therefore, the potential risks of side effects and reduced effectiveness depend on the type of antibiotic administered. Antibiotics such as metronidazole, tinidazole, cephamandole, latamoxef, cefoperazone, cefmenoxime, and furazolidone cause a disulfiram-like chemical reaction with alcohol by inhibiting its breakdown by acetaldehyde dehydrogenase, which may result in vomiting, nausea, and shortness of breath. In addition, the efficacy of doxycycline and erythromycin succinate may be reduced by alcohol consumption. Other effects of alcohol on antibiotic activity include altered activity of the liver enzymes that break down the antibiotic compound. Pharmacodynamics The successful outcome of antimicrobial therapy with antibacterial compounds depends on several factors. These include host defense mechanisms, the location of infection, and the pharmacokinetic and pharmacodynamic properties of the antibacterial. The bactericidal activity of antibacterials may depend on the bacterial growth phase, and it often requires ongoing metabolic activity and division of bacterial cells. These findings are based on laboratory studies, and antibacterials have also been shown to eliminate bacterial infections in clinical settings. Since the activity of an antibacterial frequently depends on its concentration, in vitro characterization of antibacterial activity commonly includes the determination of the minimum inhibitory concentration and minimum bactericidal concentration of an antibacterial. To predict clinical outcome, the antimicrobial activity of an antibacterial is usually combined with its pharmacokinetic profile, and several pharmacological parameters are used as markers of drug efficacy.
Combination therapy In important infectious diseases, including tuberculosis, combination therapy (i.e., the concurrent application of two or more antibiotics) has been used to delay or prevent the emergence of resistance. In acute bacterial infections, antibiotics as part of combination therapy are prescribed for their synergistic effects to improve treatment outcome as the combined effect of both antibiotics is better than their individual effect. Fosfomycin has the highest number of synergistic combinations among antibiotics and is almost always used as a partner drug. Methicillin-resistant Staphylococcus aureus infections may be treated with a combination therapy of fusidic acid and rifampicin. Antibiotics used in combination may also be antagonistic and the combined effects of the two antibiotics may be less than if one of the antibiotics was given as a monotherapy. For example, chloramphenicol and tetracyclines are antagonists to penicillins. However, this can vary depending on the species of bacteria. In general, combinations of a bacteriostatic antibiotic and bactericidal antibiotic are antagonistic. In addition to combining one antibiotic with another, antibiotics are sometimes co-administered with resistance-modifying agents. For example, β-lactam antibiotics may be used in combination with β-lactamase inhibitors, such as clavulanic acid or sulbactam, when a patient is infected with a β-lactamase-producing strain of bacteria. Classes Antibiotics are commonly classified based on their mechanism of action, chemical structure, or spectrum of activity. Most target bacterial functions or growth processes. Those that target the bacterial cell wall (penicillins and cephalosporins) or the cell membrane (polymyxins), or interfere with essential bacterial enzymes (rifamycins, lipiarmycins, quinolones, and sulfonamides) have bactericidal activities, killing the bacteria. Protein synthesis inhibitors (macrolides, lincosamides, and tetracyclines) are usually bacteriostatic, inhibiting further growth (with the exception of bactericidal aminoglycosides). Further categorization is based on their target specificity. "Narrow-spectrum" antibiotics target specific types of bacteria, such as gram-negative or gram-positive, whereas broad-spectrum antibiotics affect a wide range of bacteria. Following a 40-year break in discovering classes of antibacterial compounds, four new classes of antibiotics were introduced to clinical use in the late 2000s and early 2010s: cyclic lipopeptides (such as daptomycin), glycylcyclines (such as tigecycline), oxazolidinones (such as linezolid), and lipiarmycins (such as fidaxomicin). Production With advances in medicinal chemistry, most modern antibacterials are semisynthetic modifications of various natural compounds. These include, for example, the beta-lactam antibiotics, which include the penicillins (produced by fungi in the genus Penicillium), the cephalosporins, and the carbapenems. Compounds that are still isolated from living organisms are the aminoglycosides, whereas other antibacterials—for example, the sulfonamides, the quinolones, and the oxazolidinones—are produced solely by chemical synthesis. Many antibacterial compounds are relatively small molecules with a molecular weight of less than 1000 daltons. Since the first pioneering efforts of Howard Florey and Chain in 1939, the importance of antibiotics, including antibacterials, to medicine has led to intense research into producing antibacterials at large scales. 
Following screening of antibacterials against a wide range of bacteria, production of the active compounds is carried out using fermentation, usually in strongly aerobic conditions. Resistance Antimicrobial resistance (AMR or AR) is a naturally occurring process. AMR is driven largely by the misuse and overuse of antimicrobials. Yet, at the same time, many people around the world do not have access to essential antimicrobials. The emergence of antibiotic-resistant bacteria is a common phenomenon mainly caused by the overuse and misuse of antibiotics, and it represents a threat to health globally. Each year, nearly 5 million deaths are associated with AMR globally. Emergence of resistance often reflects evolutionary processes that take place during antibiotic therapy. The antibiotic treatment may select for bacterial strains with physiologically or genetically enhanced capacity to survive high doses of antibiotics. Under certain conditions, it may result in preferential growth of resistant bacteria, while growth of susceptible bacteria is inhibited by the drug. For example, antibacterial selection for strains having previously acquired antibacterial-resistance genes was demonstrated in 1943 by the Luria–Delbrück experiment. Antibiotics such as penicillin and erythromycin, which used to have a high efficacy against many bacterial species and strains, have become less effective due to the increased resistance of many bacterial strains. Resistance may take the form of biodegradation of pharmaceuticals, such as sulfamethazine-degrading soil bacteria introduced to sulfamethazine through medicated pig feces. The survival of bacteria often results from an inheritable resistance, but the growth of resistance to antibacterials also occurs through horizontal gene transfer. Horizontal transfer is more likely to happen in locations of frequent antibiotic use. Antibacterial resistance may impose a biological cost, thereby reducing the fitness of resistant strains, which can limit the spread of antibacterial-resistant bacteria, for example, in the absence of antibacterial compounds. Additional mutations, however, may compensate for this fitness cost and can aid the survival of these bacteria. Paleontological data show that both antibiotics and antibiotic resistance mechanisms are ancient. Useful antibiotic targets are those for which mutations negatively impact bacterial reproduction or viability. Several molecular mechanisms of antibacterial resistance exist. Intrinsic antibacterial resistance may be part of the genetic makeup of bacterial strains. For example, an antibiotic target may be absent from the bacterial genome. Acquired resistance results from a mutation in the bacterial chromosome or the acquisition of extra-chromosomal DNA. Antibacterial-producing bacteria have evolved resistance mechanisms that have been shown to be similar to, and may have been transferred to, antibacterial-resistant strains. The spread of antibacterial resistance often occurs through vertical transmission of mutations during growth and by genetic recombination of DNA by horizontal genetic exchange. For instance, antibacterial resistance genes can be exchanged between different bacterial strains or species via plasmids that carry these resistance genes. Plasmids that carry several different resistance genes can confer resistance to multiple antibacterials. Cross-resistance to several antibacterials may also occur when a resistance mechanism encoded by a single gene conveys resistance to more than one antibacterial compound.
Antibacterial-resistant strains and species, sometimes referred to as "superbugs", now contribute to the emergence of diseases that were, for a while, well controlled. For example, emergent bacterial strains causing tuberculosis that are resistant to previously effective antibacterial treatments pose many therapeutic challenges. Every year, nearly half a million new cases of multidrug-resistant tuberculosis (MDR-TB) are estimated to occur worldwide. For example, NDM-1 is a newly identified enzyme conveying bacterial resistance to a broad range of beta-lactam antibacterials. The United Kingdom's Health Protection Agency has stated that "most isolates with NDM-1 enzyme are resistant to all standard intravenous antibiotics for treatment of severe infections." On 26 May 2016, an E. coli "superbug" was identified in the United States resistant to colistin, "the last line of defence" antibiotic. In recent years, even anaerobic bacteria, historically considered less concerning in terms of resistance, have demonstrated high rates of antibiotic resistance, particularly Bacteroides, for which resistance rates to penicillin have been reported to exceed 90%. Misuse Per The ICU Book, "The first rule of antibiotics is to try not to use them, and the second rule is try not to use too many of them." Inappropriate antibiotic treatment and overuse of antibiotics have contributed to the emergence of antibiotic-resistant bacteria. However, potential harm from antibiotics extends beyond selection of antimicrobial resistance and their overuse is associated with adverse effects for patients themselves, seen most clearly in critically ill patients in Intensive care units. Self-prescribing of antibiotics is an example of misuse. Many antibiotics are frequently prescribed to treat symptoms or diseases that do not respond to antibiotics or that are likely to resolve without treatment. Also, incorrect or suboptimal antibiotics are prescribed for certain bacterial infections. The overuse of antibiotics, like penicillin and erythromycin, has been associated with emerging antibiotic resistance since the 1950s. Widespread usage of antibiotics in hospitals has also been associated with increases in bacterial strains and species that no longer respond to treatment with the most common antibiotics. Common forms of antibiotic misuse include excessive use of prophylactic antibiotics in travelers and failure of medical professionals to prescribe the correct dosage of antibiotics on the basis of the patient's weight and history of prior use. Other forms of misuse include failure to take the entire prescribed course of the antibiotic, incorrect dosage and administration, or failure to rest for sufficient recovery. Inappropriate antibiotic treatment, for example, is their prescription to treat viral infections such as the common cold. One study on respiratory tract infections found "physicians were more likely to prescribe antibiotics to patients who appeared to expect them". Multifactorial interventions aimed at both physicians and patients can reduce inappropriate prescription of antibiotics. The lack of rapid point of care diagnostic tests, particularly in resource-limited settings is considered one of the drivers of antibiotic misuse. Several organizations concerned with antimicrobial resistance are lobbying to eliminate the unnecessary use of antibiotics. The issues of misuse and overuse of antibiotics have been addressed by the formation of the US Interagency Task Force on Antimicrobial Resistance. 
This task force aims to actively address antimicrobial resistance, and is coordinated by the US Centers for Disease Control and Prevention, the Food and Drug Administration (FDA), and the National Institutes of Health, as well as other US agencies. A non-governmental organization campaign group is Keep Antibiotics Working. In France, an "Antibiotics are not automatic" government campaign started in 2002 and led to a marked reduction of unnecessary antibiotic prescriptions, especially in children. The emergence of antibiotic resistance has prompted restrictions on their use in the UK in 1970 (Swann report 1969), and the European Union has banned the use of antibiotics as growth-promotional agents since 2003. Moreover, several organizations (including the World Health Organization, the National Academy of Sciences, and the U.S. Food and Drug Administration) have advocated restricting the amount of antibiotic use in food animal production. However, commonly there are delays in regulatory and legislative actions to limit the use of antibiotics, attributable partly to resistance against such regulation by industries using or selling antibiotics, and to the time required for research to test causal links between their use and resistance to them. Two federal bills (S.742 and H.R. 2562) aimed at phasing out nontherapeutic use of antibiotics in US food animals were proposed, but have not passed. These bills were endorsed by public health and medical organizations, including the American Holistic Nurses' Association, the American Medical Association, and the American Public Health Association. Despite pledges by food companies and restaurants to reduce or eliminate meat that comes from animals treated with antibiotics, the purchase of antibiotics for use on farm animals has been increasing every year. There has been extensive use of antibiotics in animal husbandry. In the United States, the question of emergence of antibiotic-resistant bacterial strains due to use of antibiotics in livestock was raised by the US Food and Drug Administration (FDA) in 1977. In March 2012, the United States District Court for the Southern District of New York, ruling in an action brought by the Natural Resources Defense Council and others, ordered the FDA to revoke approvals for the use of antibiotics in livestock, which violated FDA regulations. Studies have shown that common misconceptions about the effectiveness and necessity of antibiotics to treat common mild illnesses contribute to their overuse. Other forms of antibiotic-associated harm include anaphylaxis, drug toxicity most notably kidney and liver damage, and super-infections with resistant organisms. Antibiotics are also known to affect mitochondrial function, and this may contribute to the bioenergetic failure of immune cells seen in sepsis. They also alter the microbiome of the gut, lungs, and skin, which may be associated with adverse effects such as Clostridioides difficile associated diarrhoea. Whilst antibiotics can clearly be lifesaving in patients with bacterial infections, their overuse, especially in patients where infections are hard to diagnose, can lead to harm via multiple mechanisms. History Before the early 20th century, treatments for infections were based primarily on medicinal folklore. Mixtures with antimicrobial properties that were used in treatments of infections were described over 2,000 years ago. 
Many ancient cultures, including the ancient Egyptians and ancient Greeks, used specially selected mold and plant materials to treat infections. Nubian mummies studied in the 1990s were found to contain significant levels of tetracycline. The beer brewed at that time was conjectured to have been the source. The use of antibiotics in modern medicine began with the discovery of synthetic antibiotics derived from dyes. Various Essential oils have been shown to have anti-microbial properties. Along with this, the plants from which these oils have been derived can be used as niche anti-microbial agents. Synthetic antibiotics derived from dyes Synthetic antibiotic chemotherapy as a science and development of antibacterials began in Germany with Paul Ehrlich in the late 1880s. Ehrlich noted certain dyes would colour human, animal, or bacterial cells, whereas others did not. He then proposed the idea that it might be possible to create chemicals that would act as a selective drug that would bind to and kill bacteria without harming the human host. After screening hundreds of dyes against various organisms, in 1907, he discovered a medicinally useful drug, the first synthetic antibacterial organoarsenic compound salvarsan, now called arsphenamine. This heralded the era of antibacterial treatment that was begun with the discovery of a series of arsenic-derived synthetic antibiotics by both Alfred Bertheim and Ehrlich in 1907. Ehrlich and Bertheim had experimented with various chemicals derived from dyes to treat trypanosomiasis in mice and spirochaeta infection in rabbits. While their early compounds were too toxic, Ehrlich and Sahachiro Hata, a Japanese bacteriologist working with Ehrlich in the quest for a drug to treat syphilis, achieved success with the 606th compound in their series of experiments. In 1910, Ehrlich and Hata announced their discovery, which they called drug "606", at the Congress for Internal Medicine at Wiesbaden. The Hoechst company began to market the compound toward the end of 1910 under the name Salvarsan, now known as arsphenamine. The drug was used to treat syphilis in the first half of the 20th century. In 1908, Ehrlich received the Nobel Prize in Physiology or Medicine for his contributions to immunology. Hata was nominated for the Nobel Prize in Chemistry in 1911 and for the Nobel Prize in Physiology or Medicine in 1912 and 1913. The first sulfonamide and the first systemically active antibacterial drug, Prontosil, was developed by a research team led by Gerhard Domagk in 1932 or 1933 at the Bayer Laboratories of the IG Farben conglomerate in Germany, for which Domagk received the 1939 Nobel Prize in Physiology or Medicine. Sulfanilamide, the active drug of Prontosil, was not patentable as it had already been in use in the dye industry for some years. Prontosil had a relatively broad effect against Gram-positive cocci, but not against enterobacteria. Research was stimulated apace by its success. The discovery and development of this sulfonamide drug opened the era of antibacterials. Penicillin and other natural antibiotics Observations about the growth of some microorganisms inhibiting the growth of other microorganisms have been reported since the late 19th century. These observations of antibiosis between microorganisms led to the discovery of natural antibacterials. Louis Pasteur observed, "if we could intervene in the antagonism observed between some bacteria, it would offer perhaps the greatest hopes for therapeutics". 
In 1874, physician Sir William Roberts noted that cultures of the mould Penicillium glaucum that is used in the making of some types of blue cheese did not display bacterial contamination. In 1895 Vincenzo Tiberio, Italian physician, published a paper on the antibacterial power of some extracts of mold. In 1897, doctoral student Ernest Duchesne submitted a dissertation, "" (Contribution to the study of vital competition in micro-organisms: antagonism between moulds and microbes), the first known scholarly work to consider the therapeutic capabilities of moulds resulting from their anti-microbial activity. In his thesis, Duchesne proposed that bacteria and moulds engage in a perpetual battle for survival. Duchesne observed that E. coli was eliminated by Penicillium glaucum when they were both grown in the same culture. He also observed that when he inoculated laboratory animals with lethal doses of typhoid bacilli together with Penicillium glaucum, the animals did not contract typhoid. Duchesne's army service after getting his degree prevented him from doing any further research. Duchesne died of tuberculosis, a disease now treated by antibiotics. In 1928, Sir Alexander Fleming postulated the existence of penicillin, a molecule produced by certain moulds that kills or stops the growth of certain kinds of bacteria. Fleming was working on a culture of disease-causing bacteria when he noticed the spores of a green mold, Penicillium rubens, in one of his culture plates. He observed that the presence of the mould killed or prevented the growth of the bacteria. Fleming postulated that the mould must secrete an antibacterial substance, which he named penicillin in 1928. Fleming believed that its antibacterial properties could be exploited for chemotherapy. He initially characterised some of its biological properties, and attempted to use a crude preparation to treat some infections, but he was unable to pursue its further development without the aid of trained chemists. Ernst Chain, Howard Florey and Edward Abraham succeeded in purifying the first penicillin, penicillin G, in 1942, but it did not become widely available outside the Allied military before 1945. Later, Norman Heatley developed the back extraction technique for efficiently purifying penicillin in bulk. The chemical structure of penicillin was first proposed by Abraham in 1942 and then later confirmed by Dorothy Crowfoot Hodgkin in 1945. Purified penicillin displayed potent antibacterial activity against a wide range of bacteria and had low toxicity in humans. Furthermore, its activity was not inhibited by biological constituents such as pus, unlike the synthetic sulfonamides. (see below) The development of penicillin led to renewed interest in the search for antibiotic compounds with similar efficacy and safety. For their successful development of penicillin, which Fleming had accidentally discovered but could not develop himself, as a therapeutic drug, Chain and Florey shared the 1945 Nobel Prize in Medicine with Fleming. Florey credited René Dubos with pioneering the approach of deliberately and systematically searching for antibacterial compounds, which had led to the discovery of gramicidin and had revived Florey's research in penicillin. In 1939, coinciding with the start of World War II, Dubos had reported the discovery of the first naturally derived antibiotic, tyrothricin, a compound of 20% gramicidin and 80% tyrocidine, from Bacillus brevis. 
It was one of the first commercially manufactured antibiotics and was very effective in treating wounds and ulcers during World War II. Gramicidin, however, could not be used systemically because of toxicity. Tyrocidine also proved too toxic for systemic usage. Research results obtained during that period were not shared between the Axis and the Allied powers during World War II, and access remained limited during the Cold War. Late 20th century During the mid-20th century, the number of new antibiotic substances introduced for medical use increased significantly. From 1935 to 1968, 12 new classes were launched. However, after this, the number of new classes dropped markedly, with only two new classes introduced between 1969 and 2003. Antibiotic pipeline Both the WHO and the Infectious Diseases Society of America report that the weak antibiotic pipeline does not match bacteria's increasing ability to develop resistance. The Infectious Diseases Society of America report noted that the number of new antibiotics approved for marketing per year had been declining and identified seven antibiotics against the Gram-negative bacilli currently in phase 2 or phase 3 clinical trials. However, these drugs did not address the entire spectrum of resistance of Gram-negative bacilli. According to the WHO, fifty-one new therapeutic entities (antibiotics, including combinations) were in phase 1–3 clinical trials as of May 2017. Antibiotics targeting multidrug-resistant Gram-positive pathogens remain a high priority. A few antibiotics have received marketing authorization in the last seven years. The cephalosporin ceftaroline and the lipoglycopeptides oritavancin and telavancin have been approved for the treatment of acute bacterial skin and skin structure infection and community-acquired bacterial pneumonia. The lipoglycopeptide dalbavancin and the oxazolidinone tedizolid have also been approved for use for the treatment of acute bacterial skin and skin structure infection. The first in a new class of narrow-spectrum macrocyclic antibiotics, fidaxomicin, has been approved for the treatment of C. difficile colitis. New cephalosporin/β-lactamase inhibitor combinations that have also been approved include ceftazidime-avibactam and ceftolozane-tazobactam for complicated urinary tract infection and intra-abdominal infection. Possible improvements include clarification of clinical trial regulations by the FDA. Furthermore, appropriate economic incentives could persuade pharmaceutical companies to invest in this endeavor. In the US, the Antibiotic Development to Advance Patient Treatment (ADAPT) Act was introduced with the aim of fast-tracking the drug development of antibiotics to combat the growing threat of 'superbugs'. Under this Act, the FDA can approve antibiotics and antifungals treating life-threatening infections based on smaller clinical trials. The CDC will monitor the use of antibiotics and the emerging resistance, and publish the data. The FDA antibiotics labeling process, 'Susceptibility Test Interpretive Criteria for Microbial Organisms' or 'breakpoints', will provide accurate data to healthcare professionals. According to Allan Coukell, senior director for health programs at The Pew Charitable Trusts, "By allowing drug developers to rely on smaller datasets, and clarifying FDA's authority to tolerate a higher level of uncertainty for these drugs when making a risk/benefit calculation, ADAPT would make the clinical trials more feasible."
Replenishing the antibiotic pipeline and developing other new therapies Because antibiotic-resistant bacterial strains continue to emerge and spread, there is a constant need to develop new antibacterial treatments. Current strategies include traditional chemistry-based approaches such as natural product-based drug discovery, newer chemistry-based approaches such as drug design, traditional biology-based approaches such as immunoglobulin therapy, and experimental biology-based approaches such as phage therapy, fecal microbiota transplants, antisense RNA-based treatments, and CRISPR-Cas9-based treatments. Natural product-based antibiotic discovery Most of the antibiotics in current use are natural products or natural product derivatives, and bacterial, fungal, plant and animal extracts are being screened in the search for new antibiotics. Organisms may be selected for testing based on ecological, ethnomedical, genomic, or historical rationales. Medicinal plants, for example, are screened on the basis that they are used by traditional healers to prevent or cure infection and may therefore contain antibacterial compounds. Also, soil bacteria are screened on the basis that, historically, they have been a very rich source of antibiotics (with 70 to 80% of antibiotics in current use derived from the actinomycetes). In addition to screening natural products for direct antibacterial activity, they are sometimes screened for the ability to suppress antibiotic resistance and antibiotic tolerance. For example, some secondary metabolites inhibit drug efflux pumps, thereby increasing the concentration of antibiotic able to reach its cellular target and decreasing bacterial resistance to the antibiotic. Natural products known to inhibit bacterial efflux pumps include the alkaloid lysergol, the carotenoids capsanthin and capsorubin, and the flavonoids rotenone and chrysin. Other natural products, this time primary metabolites rather than secondary metabolites, have been shown to eradicate antibiotic tolerance. For example, glucose, mannitol, and fructose reduce antibiotic tolerance in Escherichia coli and Staphylococcus aureus, rendering them more susceptible to killing by aminoglycoside antibiotics. Natural products may be screened for the ability to suppress bacterial virulence factors too. Virulence factors are molecules, cellular structures and regulatory systems that enable bacteria to evade the body's immune defenses (e.g. urease, staphyloxanthin), move towards, attach to, and/or invade human cells (e.g. type IV pili, adhesins, internalins), coordinate the activation of virulence genes (e.g. quorum sensing), and cause disease (e.g. exotoxins). Examples of natural products with antivirulence activity include the flavonoid epigallocatechin gallate (which inhibits listeriolysin O), the quinone tetrangomycin (which inhibits staphyloxanthin), and the sesquiterpene zerumbone (which inhibits Acinetobacter baumannii motility). Immunoglobulin therapy Antibodies (anti-tetanus immunoglobulin) have been used in the treatment and prevention of tetanus since the 1910s, and this approach continues to be a useful way of controlling bacterial diseases. The monoclonal antibody bezlotoxumab, for example, has been approved by the US FDA and EMA for recurrent Clostridioides difficile infection, and other monoclonal antibodies are in development (e.g. AR-301 for the adjunctive treatment of S. aureus ventilator-associated pneumonia). 
Antibody treatments act by binding to and neutralizing bacterial exotoxins and other virulence factors. Phage therapy Phage therapy is under investigation as a method of treating antibiotic-resistant strains of bacteria. Phage therapy involves infecting bacterial pathogens with viruses. Bacteriophages and their host ranges are extremely specific for certain bacteria, thus, unlike antibiotics, they do not disturb the host organism's intestinal microbiota. Bacteriophages, also known as phages, infect and kill bacteria primarily during lytic cycles. Phages insert their DNA into the bacterium, where it is transcribed and used to make new phages, after which the cell will lyse, releasing new phage that are able to infect and destroy further bacteria of the same strain. The high specificity of phage protects "good" bacteria from destruction. Some disadvantages to the use of bacteriophages also exist, however. Bacteriophages may harbour virulence factors or toxic genes in their genomes and, prior to use, it may be prudent to identify genes with similarity to known virulence factors or toxins by genomic sequencing. In addition, the oral and IV administration of phages for the eradication of bacterial infections poses a much higher safety risk than topical application. Also, there is the additional concern of uncertain immune responses to these large antigenic cocktails. There are considerable regulatory hurdles that must be cleared for such therapies. Despite numerous challenges, the use of bacteriophages as a replacement for antimicrobial agents against MDR pathogens that no longer respond to conventional antibiotics, remains an attractive option. Fecal microbiota transplants Fecal microbiota transplants involve transferring the full intestinal microbiota from a healthy human donor (in the form of stool) to patients with C. difficile infection. Although this procedure has not been officially approved by the US FDA, its use is permitted under some conditions in patients with antibiotic-resistant C. difficile infection. Cure rates are around 90%, and work is underway to develop stool banks, standardized products, and methods of oral delivery. Fecal microbiota transplantation has also been used more recently for inflammatory bowel diseases. Antisense RNA-based treatments Antisense RNA-based treatment (also known as gene silencing therapy) involves (a) identifying bacterial genes that encode essential proteins (e.g. the Pseudomonas aeruginosa genes acpP, lpxC, and rpsJ), (b) synthesizing single-stranded RNA that is complementary to the mRNA encoding these essential proteins, and (c) delivering the single-stranded RNA to the infection site using cell-penetrating peptides or liposomes. The antisense RNA then hybridizes with the bacterial mRNA and blocks its translation into the essential protein. Antisense RNA-based treatment has been shown to be effective in in vivo models of P. aeruginosa pneumonia. In addition to silencing essential bacterial genes, antisense RNA can be used to silence bacterial genes responsible for antibiotic resistance. For example, antisense RNA has been developed that silences the S. aureus mecA gene (the gene that encodes modified penicillin-binding protein 2a and renders S. aureus strains methicillin-resistant). Antisense RNA targeting mecA mRNA has been shown to restore the susceptibility of methicillin-resistant staphylococci to oxacillin in both in vitro and in vivo studies. 
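The design step described in the antisense RNA section above, synthesizing single-stranded RNA complementary to a target mRNA, amounts to taking the reverse complement of the target sequence. The following is a minimal, purely illustrative sketch; the 15-nucleotide fragment is hypothetical and not a real acpP, lpxC, rpsJ, or mecA sequence.

```python
# Minimal sketch (illustrative only): the antisense strand used in gene
# silencing is the reverse complement of the target mRNA, pairing A-U and G-C.

RNA_COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def antisense(mrna: str) -> str:
    """Return the antisense RNA (reverse complement) of an mRNA string."""
    return "".join(RNA_COMPLEMENT[base] for base in reversed(mrna.upper()))

# Hypothetical fragment, for illustration only (not from any real gene):
fragment = "AUGGCUAAAGACAAA"
print(antisense(fragment))  # -> UUUGUCUUUAGCCAU
```

Real designs also have to account for delivery (the cell-penetrating peptides or liposomes mentioned above), target-site accessibility, and off-target matches.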
CRISPR-Cas9-based treatments In the early 2000s, a system was discovered that enables bacteria to defend themselves against invading viruses. The system, known as CRISPR-Cas9, consists of (a) an enzyme that destroys DNA (the nuclease Cas9) and (b) the DNA sequences of previously encountered viral invaders (CRISPR). These viral DNA sequences enable the nuclease to target foreign (viral) rather than self (bacterial) DNA. Although the function of CRISPR-Cas9 in nature is to protect bacteria, the DNA sequences in the CRISPR component of the system can be modified so that the Cas9 nuclease targets bacterial resistance genes or bacterial virulence genes instead of viral genes. The modified CRISPR-Cas9 system can then be administered to bacterial pathogens using plasmids or bacteriophages. This approach has successfully been used to silence antibiotic resistance and reduce the virulence of enterohemorrhagic E. coli in an in vivo model of infection. Reducing the selection pressure for antibiotic resistance In addition to developing new antibacterial treatments, it is important to reduce the selection pressure for the emergence and spread of antimicrobial resistance (AMR), such as antibiotic resistance. Strategies to accomplish this include well-established infection control measures such as infrastructure improvement (e.g. less crowded housing), better sanitation (e.g. safe drinking water and food), better use of vaccines and vaccine development, other approaches such as antibiotic stewardship, and experimental approaches such as the use of prebiotics and probiotics to prevent infection. Antibiotic cycling, where antibiotics are alternated by clinicians to treat microbial diseases, is proposed, but recent studies revealed such strategies are ineffective against antibiotic resistance. Vaccines Vaccines are an essential part of the response to reduce AMR as they prevent infections, reduce the use and overuse of antimicrobials, and slow the emergence and spread of drug-resistant pathogens. Vaccination either excites or reinforces the immune competence of a host to ward off infection, leading to the activation of macrophages, the production of antibodies, inflammation, and other classic immune reactions. Antibacterial vaccines have been responsible for a drastic reduction in global bacterial diseases.
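As a purely illustrative companion to the CRISPR-Cas9 description above: retargeting the system amounts to choosing a new spacer that matches a roughly 20-nucleotide protospacer in the gene of interest lying immediately upstream of a PAM (NGG for the commonly used SpCas9). The PAM rule and the example sequence below are illustrative assumptions, not details taken from this article.

```python
# Minimal sketch (illustrative only): scan a DNA string for candidate
# SpCas9 protospacers, i.e. 20-nt windows immediately followed by an NGG PAM.

def find_spacers(dna: str, spacer_len: int = 20):
    """Yield (position, spacer) pairs where the window is followed by an NGG PAM."""
    dna = dna.upper()
    for i in range(len(dna) - spacer_len - 2):
        pam = dna[i + spacer_len : i + spacer_len + 3]
        if pam[1:] == "GG":  # N-G-G: first PAM base is unconstrained
            yield i, dna[i : i + spacer_len]

# Hypothetical sequence, not from any real resistance or virulence gene:
example = "ATGCCTAGGACGTTACGGATCCATTGCAAGGTACCGGATTACAGG"
for pos, spacer in find_spacers(example):
    print(pos, spacer)
```

In practice, spacer selection for a resistance or virulence gene would also consider strand, off-target sites elsewhere in the genome, and the delivery vehicle (plasmid or bacteriophage, as noted above).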
Biology and health sciences
Drugs and medication
null
1839
https://en.wikipedia.org/wiki/Allotropy
Allotropy
Allotropy or allotropism is the property of some chemical elements to exist in two or more different forms, in the same physical state, known as allotropes of the elements. Allotropes are different structural modifications of an element: the atoms of the element are bonded together in different manners. For example, the allotropes of carbon include diamond (the carbon atoms are bonded together to form a cubic lattice of tetrahedra), graphite (the carbon atoms are bonded together in sheets of a hexagonal lattice), graphene (single sheets of graphite), and fullerenes (the carbon atoms are bonded together in spherical, tubular, or ellipsoidal formations). The term allotropy is used for elements only, not for compounds. The more general term, used for any compound, is polymorphism, although its use is usually restricted to solid materials such as crystals. Allotropy refers only to different forms of an element within the same physical phase (the state of matter, such as a solid, liquid or gas). The differences between these states of matter would not alone constitute examples of allotropy. Allotropes of chemical elements are frequently referred to as polymorphs or as phases of the element. For some elements, allotropes have different molecular formulae or different crystalline structures, as well as a difference in physical phase; for example, two allotropes of oxygen (dioxygen, O2, and ozone, O3) can both exist in the solid, liquid and gaseous states. Other elements do not maintain distinct allotropes in different physical phases; for example, phosphorus has numerous solid allotropes, which all revert to the same P4 form when melted to the liquid state. History The concept of allotropy was originally proposed in 1840 by the Swedish scientist Baron Jöns Jakob Berzelius (1779–1848). The term is derived from the Greek ἄλλος (allos), meaning "other", and τρόπος (tropos), meaning "manner" or "form". After the acceptance of Avogadro's hypothesis in 1860, it was understood that elements could exist as polyatomic molecules, and two allotropes of oxygen were recognized as O2 and O3. In the early 20th century, it was recognized that other cases such as carbon were due to differences in crystal structure. By 1912, Ostwald noted that the allotropy of elements is just a special case of the phenomenon of polymorphism known for compounds, and proposed that the terms allotrope and allotropy be abandoned and replaced by polymorph and polymorphism. Although many other chemists have repeated this advice, IUPAC and most chemistry texts still favour the usage of allotrope and allotropy for elements only. Differences in properties of an element's allotropes Allotropes are different structural forms of the same element and can exhibit quite different physical properties and chemical behaviours. The change between allotropic forms is triggered by the same forces that affect other structures, i.e., pressure, light, and temperature. Therefore, the stability of the particular allotropes depends on particular conditions. For instance, iron changes from a body-centered cubic structure (ferrite) to a face-centered cubic structure (austenite) above 906 °C, and tin undergoes a modification known as tin pest from a metallic form to a semimetallic form below 13.2 °C (55.8 °F). As an example of allotropes having different chemical behaviour, ozone (O3) is a much stronger oxidizing agent than dioxygen (O2). List of allotropes Typically, elements capable of variable coordination number and/or oxidation states tend to exhibit greater numbers of allotropic forms.
Another contributing factor is the ability of an element to catenate. Examples of allotropes are found among the non-metals, metalloids, and metals. Among the metallic elements that occur in nature in significant quantities (56 up to U, without Tc and Pm), almost half (27) are allotropic at ambient pressure: Li, Be, Na, Ca, Ti, Mn, Fe, Co, Sr, Y, Zr, Sn, La, Ce, Pr, Nd, Sm, Gd, Tb, Dy, Yb, Hf, Tl, Th, Pa and U. Some phase transitions between allotropic forms of technologically relevant metals are those of Ti at 882 °C, Fe at 912 °C and 1,394 °C, Co at 422 °C, Zr at 863 °C, Sn at 13 °C and U at 668 °C and 776 °C. Lanthanides and actinides Cerium, samarium, dysprosium and ytterbium have three allotropes. Praseodymium, neodymium, gadolinium and terbium have two allotropes. Plutonium has six distinct solid allotropes under "normal" pressures. Their densities vary within a ratio of some 4:3, which vastly complicates all kinds of work with the metal (particularly casting, machining, and storage). A seventh plutonium allotrope exists at very high pressures. The transuranium metals Np, Am, and Cm are also allotropic. Promethium, americium, berkelium and californium have three allotropes each. Nanoallotropes In 2017, the concept of nanoallotropy was proposed. Nanoallotropes, or allotropes of nanomaterials, are nanoporous materials that have the same chemical composition (e.g., Au), but differ in their architecture at the nanoscale (that is, on a scale 10 to 100 times the dimensions of individual atoms). Such nanoallotropes may help create ultra-small electronic devices and find other industrial applications. The different nanoscale architectures translate into different properties, as was demonstrated for surface-enhanced Raman scattering performed on several different nanoallotropes of gold. A two-step method for generating nanoallotropes was also created.
Physical sciences
Basics_4
null
1845
https://en.wikipedia.org/wiki/Alternative%20medicine
Alternative medicine
Alternative medicine is any practice that aims to achieve the healing effects of medicine despite lacking biological plausibility, testability, repeatability or evidence of effectiveness. Unlike modern medicine, which employs the scientific method to test plausible therapies by way of responsible and ethical clinical trials, producing repeatable evidence of either effect or of no effect, alternative therapies reside outside of mainstream medicine and do not originate from using the scientific method, but instead rely on testimonials, anecdotes, religion, tradition, superstition, belief in supernatural "energies", pseudoscience, errors in reasoning, propaganda, fraud, or other unscientific sources. Frequently used terms for relevant practices are New Age medicine, pseudo-medicine, unorthodox medicine, holistic medicine, fringe medicine, and unconventional medicine, with little distinction from quackery. Some alternative practices are based on theories that contradict the established science of how the human body works; others appeal to the supernatural or superstitious to explain their effect or lack thereof. In others, the practice has plausibility but lacks a positive risk–benefit outcome probability. Research into alternative therapies often fails to follow proper research protocols (such as placebo-controlled trials, blind experiments and calculation of prior probability), providing invalid results. History has shown that if a method is proven to work, it eventually ceases to be alternative and becomes mainstream medicine. Much of the perceived effect of an alternative practice arises from a belief that it will be effective, the placebo effect, or from the treated condition resolving on its own (the natural course of disease). This is further exacerbated by the tendency to turn to alternative therapies upon the failure of medicine, at which point the condition will be at its worst and most likely to spontaneously improve. In the absence of this bias, especially for diseases that are not expected to get better by themselves such as cancer or HIV infection, multiple studies have shown significantly worse outcomes if patients turn to alternative therapies. While this may be because these patients avoid effective treatment, some alternative therapies are actively harmful (e.g. cyanide poisoning from amygdalin, or the intentional ingestion of hydrogen peroxide) or actively interfere with effective treatments. The alternative medicine sector is a highly profitable industry with a strong lobby, and faces far less regulation over the use and marketing of unproven treatments. Complementary medicine (CM), complementary and alternative medicine (CAM), integrated medicine or integrative medicine (IM), and holistic medicine attempt to combine alternative practices with those of mainstream medicine. Traditional medicine practices become "alternative" when used outside their original settings and without proper scientific explanation and evidence. Alternative methods are often marketed as more "natural" or "holistic" than methods offered by medical science, that is sometimes derogatorily called "Big Pharma" by supporters of alternative medicine. Billions of dollars have been spent studying alternative medicine, with few or no positive results and many methods thoroughly disproven. 
Definitions and terminology The terms alternative medicine, complementary medicine, integrative medicine, holistic medicine, natural medicine, unorthodox medicine, fringe medicine, unconventional medicine, and new age medicine are used interchangeably as having the same meaning and are almost synonymous in most contexts. Terminology has shifted over time, reflecting the preferred branding of practitioners. For example, the United States National Institutes of Health department studying alternative medicine, currently named the National Center for Complementary and Integrative Health (NCCIH), was established as the Office of Alternative Medicine (OAM) and was renamed the National Center for Complementary and Alternative Medicine (NCCAM) before obtaining its current name. Therapies are often framed as "natural" or "holistic", implicitly and intentionally suggesting that conventional medicine is "artificial" and "narrow in scope". The meaning of the term "alternative" in the expression "alternative medicine", is not that it is an effective alternative to medical science (though some alternative medicine promoters may use the loose terminology to give the appearance of effectiveness). Loose terminology may also be used to suggest meaning that a dichotomy exists when it does not (e.g., the use of the expressions "Western medicine" and "Eastern medicine" to suggest that the difference is a cultural difference between the Asian east and the European west, rather than that the difference is between evidence-based medicine and treatments that do not work). Alternative medicine Alternative medicine is defined loosely as a set of products, practices, and theories that are believed or perceived by their users to have the healing effects of medicine, but whose effectiveness has not been established using scientific methods, or whose theory and practice is not part of biomedicine, or whose theories or practices are directly contradicted by scientific evidence or scientific principles used in biomedicine. "Biomedicine" or "medicine" is that part of medical science that applies principles of biology, physiology, molecular biology, biophysics, and other natural sciences to clinical practice, using scientific methods to establish the effectiveness of that practice. Unlike medicine, an alternative product or practice does not originate from using scientific methods, but may instead be based on hearsay, religion, tradition, superstition, belief in supernatural energies, pseudoscience, errors in reasoning, propaganda, fraud, or other unscientific sources. Some other definitions seek to specify alternative medicine in terms of its social and political marginality to mainstream healthcare. This can refer to the lack of support that alternative therapies receive from medical scientists regarding access to research funding, sympathetic coverage in the medical press, or inclusion in the standard medical curriculum. For example, a widely used definition devised by the US NCCIH calls it "a group of diverse medical and health care systems, practices, and products that are not generally considered part of conventional medicine". 
However, these descriptive definitions are inadequate in the present-day when some conventional doctors offer alternative medical treatments and introductory courses or modules can be offered as part of standard undergraduate medical training; alternative medicine is taught in more than half of US medical schools and US health insurers are increasingly willing to provide reimbursement for alternative therapies. Complementary or integrative medicine Complementary medicine (CM) or integrative medicine (IM) is when alternative medicine is used together with mainstream functional medical treatment in a belief that it improves the effect of treatments. For example, acupuncture (piercing the body with needles to influence the flow of a supernatural energy) might be believed to increase the effectiveness or "complement" science-based medicine when used at the same time. Significant drug interactions caused by alternative therapies may make treatments less effective, notably in cancer therapy. Several medical organizations differentiate between complementary and alternative medicine including the UK National Health Service (NHS), Cancer Research UK, and the US Center for Disease Control and Prevention (CDC), the latter of which states that "Complementary medicine is used in addition to standard treatments" whereas "Alternative medicine is used instead of standard treatments." Complementary and integrative interventions are used to improve fatigue in adult cancer patients. David Gorski has described integrative medicine as an attempt to bring pseudoscience into academic science-based medicine with skeptics such as Gorski and David Colquhoun referring to this with the pejorative term "quackademia". Robert Todd Carroll described Integrative medicine as "a synonym for 'alternative' medicine that, at its worst, integrates sense with nonsense. At its best, integrative medicine supports both consensus treatments of science-based medicine and treatments that the science, while promising perhaps, does not justify" Rose Shapiro has criticized the field of alternative medicine for rebranding the same practices as integrative medicine. CAM is an abbreviation of the phrase complementary and alternative medicine. The 2019 World Health Organization (WHO) Global Report on Traditional and Complementary Medicine states that the terms complementary and alternative medicine "refer to a broad set of health care practices that are not part of that country's own traditional or conventional medicine and are not fully integrated into the dominant health care system. They are used interchangeably with traditional medicine in some countries." The Integrative Medicine Exam by the American Board of Physician Specialties includes the following subjects: Manual Therapies, Biofield Therapies, Acupuncture, Movement Therapies, Expressive Arts, Traditional Chinese Medicine, Ayurveda, Indigenous Medical Systems, Homeopathic Medicine, Naturopathic Medicine, Osteopathic Medicine, Chiropractic, and Functional Medicine. Other terms Traditional medicine (TM) refers to certain practices within a culture which have existed since before the advent of medical science, Many TM practices are based on "holistic" approaches to disease and health, versus the scientific evidence-based methods in conventional medicine. 
The 2019 WHO report defines traditional medicine as "the sum total of the knowledge, skill and practices based on the theories, beliefs and experiences indigenous to different cultures, whether explicable or not, used in the maintenance of health as well as in the prevention, diagnosis, improvement or treatment of physical and mental illness." When used outside the original setting and in the absence of scientific evidence, TM practices are typically referred to as "alternative medicine". Holistic medicine is another rebranding of alternative medicine. In this case, the words balance and holism are often used alongside complementary or integrative, claiming to take into fuller account the "whole" person, in contrast to the supposed reductionism of medicine. Challenges in defining alternative medicine Prominent members of the scientific and biomedical communities say that it is not meaningful to define an alternative medicine that is separate from a conventional medicine because the expressions "conventional medicine", "alternative medicine", "complementary medicine", "integrative medicine", and "holistic medicine" do not refer to any medicine at all. Others say that alternative medicine cannot be precisely defined because of the diversity of theories and practices it includes, and because the boundaries between alternative and conventional medicine overlap, are porous, and change. Healthcare practices categorized as alternative may differ in their historical origin, theoretical basis, diagnostic technique, therapeutic practice and in their relationship to the medical mainstream. Under a definition of alternative medicine as "non-mainstream", treatments considered alternative in one location may be considered conventional in another. Critics say the expression is deceptive because it implies there is an effective alternative to science-based medicine, and that complementary is deceptive because it implies that the treatment increases the effectiveness of (complements) science-based medicine, while alternative medicines that have been tested nearly always have no measurable positive effect compared to a placebo. Journalist John Diamond wrote that "there is really no such thing as alternative medicine, just medicine that works and medicine that doesn't", a notion later echoed by Paul Offit: "The truth is there's no such thing as conventional or alternative or complementary or integrative or holistic medicine. There's only medicine that works and medicine that doesn't. And the best way to sort it out is by carefully evaluating scientific studies—not by visiting Internet chat rooms, reading magazine articles, or talking to friends." Types Alternative medicine consists of a wide range of health care practices, products, and therapies. The shared feature is a claim to heal that is not based on the scientific method. Alternative medicine practices are diverse in their foundations and methodologies. Alternative medicine practices may be classified by their cultural origins or by the types of beliefs upon which they are based. Methods may incorporate or be based on traditional medicinal practices of a particular culture, folk knowledge, superstition, spiritual beliefs, belief in supernatural energies (antiscience), pseudoscience, errors in reasoning, propaganda, fraud, new or different concepts of health and disease, and any bases other than being proven by scientific methods. 
Different cultures may have their own unique traditional or belief based practices developed recently or over thousands of years, and specific practices or entire systems of practices. Unscientific belief systems Alternative medicine, such as using naturopathy or homeopathy in place of conventional medicine, is based on belief systems not grounded in science. Traditional ethnic systems Alternative medical systems may be based on traditional medicine practices, such as traditional Chinese medicine (TCM), Ayurveda in India, or practices of other cultures around the world. Some useful applications of traditional medicines have been researched and accepted within ordinary medicine, however the underlying belief systems are seldom scientific and are not accepted. Traditional medicine is considered alternative when it is used outside its home region; or when it is used together with or instead of known functional treatment; or when it can be reasonably expected that the patient or practitioner knows or should know that it will not work – such as knowing that the practice is based on superstition. Supernatural energies Bases of belief may include belief in existence of supernatural energies undetected by the science of physics, as in biofields, or in belief in properties of the energies of physics that are inconsistent with the laws of physics, as in energy medicine. Herbal remedies and other substances Substance based practices use substances found in nature such as herbs, foods, non-vitamin supplements and megavitamins, animal and fungal products, and minerals, including use of these products in traditional medical practices that may also incorporate other methods. Examples include healing claims for non-vitamin supplements, fish oil, Omega-3 fatty acid, glucosamine, echinacea, flaxseed oil, and ginseng. Herbal medicine, or phytotherapy, includes not just the use of plant products, but may also include the use of animal and mineral products. It is among the most commercially successful branches of alternative medicine, and includes the tablets, powders and elixirs that are sold as "nutritional supplements". Only a very small percentage of these have been shown to have any efficacy, and there is little regulation as to standards and safety of their contents. Religion, faith healing, and prayer NCCIH classification The United States agency National Center for Complementary and Integrative Health (NCCIH) has created a classification system for branches of complementary and alternative medicine that divides them into five major groups. These groups have some overlap, and distinguish two types of energy medicine: veritable which involves scientifically observable energy (including magnet therapy, colorpuncture and light therapy) and putative, which invokes physically undetectable or unverifiable energy. None of these energies have any evidence to support that they affect the body in any positive or health promoting way. Whole medical systems: Cut across more than one of the other groups; examples include traditional Chinese medicine, naturopathy, homeopathy, and ayurveda. Mind-body interventions: Explore the interconnection between the mind, body, and spirit, under the premise that they affect "bodily functions and symptoms". A connection between mind and body is conventional medical fact, and this classification does not include therapies with proven function such as cognitive behavioral therapy. 
"Biology"-based practices: Use substances found in nature such as herbs, foods, vitamins, and other natural substances. (As used here, "biology" does not refer to the science of biology, but is a usage newly coined by NCCIH in the primary source used for this article. "Biology-based" as coined by NCCIH may refer to chemicals from a nonbiological source, such as use of the poison lead in traditional Chinese medicine, and to other nonbiological substances.) Manipulative and body-based practices: feature manipulation or movement of body parts, such as is done in bodywork, chiropractic, and osteopathic manipulation. Energy medicine: is a domain that deals with putative and verifiable energy fields: Biofield therapies are intended to influence energy fields that are purported to surround and penetrate the body. The existence of such energy fields have been disproven. Bioelectromagnetic-based therapies use verifiable electromagnetic fields, such as pulsed fields, alternating-current, or direct-current fields in a non-scientific manner. History The history of alternative medicine may refer to the history of a group of diverse medical practices that were collectively promoted as "alternative medicine" beginning in the 1970s, to the collection of individual histories of members of that group, or to the history of western medical practices that were labeled "irregular practices" by the western medical establishment. It includes the histories of complementary medicine and of integrative medicine. Before the 1970s, western practitioners that were not part of the increasingly science-based medical establishment were referred to "irregular practitioners", and were dismissed by the medical establishment as unscientific and as practicing quackery. Until the 1970s, irregular practice became increasingly marginalized as quackery and fraud, as western medicine increasingly incorporated scientific methods and discoveries, and had a corresponding increase in success of its treatments. In the 1970s, irregular practices were grouped with traditional practices of nonwestern cultures and with other unproven or disproven practices that were not part of biomedicine, with the entire group collectively marketed and promoted under the single expression "alternative medicine". Use of alternative medicine in the west began to rise following the counterculture movement of the 1960s, as part of the rising new age movement of the 1970s. This was due to misleading mass marketing of "alternative medicine" being an effective "alternative" to biomedicine, changing social attitudes about not using chemicals and challenging the establishment and authority of any kind, sensitivity to giving equal measure to beliefs and practices of other cultures (cultural relativism), and growing frustration and desperation by patients about limitations and side effects of science-based medicine. At the same time, in 1975, the American Medical Association, which played the central role in fighting quackery in the United States, abolished its quackery committee and closed down its Department of Investigation. By the early to mid 1970s the expression "alternative medicine" came into widespread use, and the expression became mass marketed as a collection of "natural" and effective treatment "alternatives" to science-based biomedicine. 
By 1983, mass marketing of "alternative medicine" was so pervasive that the British Medical Journal (BMJ) pointed to "an apparently endless stream of books, articles, and radio and television programmes urge on the public the virtues of (alternative medicine) treatments ranging from meditation to drilling a hole in the skull to let in more oxygen". An analysis of trends in the criticism of complementary and alternative medicine (CAM) in five prestigious American medical journals during the period of reorganization within medicine (1965–1999) was reported as showing that the medical profession had responded to the growth of CAM in three phases, and that in each phase, changes in the medical marketplace had influenced the type of response in the journals. Changes included relaxed medical licensing, the development of managed care, rising consumerism, and the establishment of the USA Office of Alternative Medicine (later National Center for Complementary and Alternative Medicine, currently National Center for Complementary and Integrative Health). Medical education Mainly as a result of reforms following the Flexner Report of 1910 medical education in established medical schools in the US has generally not included alternative medicine as a teaching topic. Typically, their teaching is based on current practice and scientific knowledge about: anatomy, physiology, histology, embryology, neuroanatomy, pathology, pharmacology, microbiology and immunology. Medical schools' teaching includes such topics as doctor-patient communication, ethics, the art of medicine, and engaging in complex clinical reasoning (medical decision-making). Writing in 2002, Snyderman and Weil remarked that by the early twentieth century the Flexner model had helped to create the 20th-century academic health center, in which education, research, and practice were inseparable. While this had much improved medical practice by defining with increasing certainty the pathophysiological basis of disease, a single-minded focus on the pathophysiological had diverted much of mainstream American medicine from clinical conditions that were not well understood in mechanistic terms, and were not effectively treated by conventional therapies. By 2001 some form of CAM training was being offered by at least 75 out of 125 medical schools in the US. Exceptionally, the School of Medicine of the University of Maryland, Baltimore, includes a research institute for integrative medicine (a member entity of the Cochrane Collaboration). Medical schools are responsible for conferring medical degrees, but a physician typically may not legally practice medicine until licensed by the local government authority. Licensed physicians in the US who have attended one of the established medical schools there have usually graduated Doctor of Medicine (MD). All states require that applicants for MD licensure be graduates of an approved medical school and complete the United States Medical Licensing Examination (USMLE). Efficacy There is a general scientific consensus that alternative therapies lack the requisite scientific validation, and their effectiveness is either unproved or disproved. Many of the claims regarding the efficacy of alternative medicines are controversial, since research on them is frequently of low quality and methodologically flawed. 
Selective publication bias, marked differences in product quality and standardisation, and some companies making unsubstantiated claims call into question the claims of efficacy of isolated examples where there is evidence for alternative therapies. The Scientific Review of Alternative Medicine points to confusions in the general population – a person may attribute symptomatic relief to an otherwise-ineffective therapy just because they are taking something (the placebo effect); the natural recovery from or the cyclical nature of an illness (the regression fallacy) gets misattributed to an alternative medicine being taken; a person not diagnosed with science-based medicine may never originally have had a true illness diagnosed as an alternative disease category. Edzard Ernst, the first university professor of Complementary and Alternative Medicine, characterized the evidence for many alternative techniques as weak, nonexistent, or negative and in 2011 published his estimate that about 7.4% were based on "sound evidence", although he believes that may be an overestimate. Ernst has concluded that 95% of the alternative therapies he and his team studied, including acupuncture, herbal medicine, homeopathy, and reflexology, are "statistically indistinguishable from placebo treatments", but he also believes there is something that conventional doctors can usefully learn from chiropractors and homeopaths: the therapeutic value of the placebo effect, one of the strangest phenomena in medicine. In 2003, a project funded by the CDC identified 208 condition-treatment pairs, of which 58% had been studied by at least one randomized controlled trial (RCT), and 23% had been assessed with a meta-analysis. According to a 2005 book by a US Institute of Medicine panel, the number of RCTs focused on CAM has risen dramatically. At the time, the Cochrane Library had 145 CAM-related Cochrane systematic reviews and 340 non-Cochrane systematic reviews. An analysis of the conclusions of only the 145 Cochrane reviews was done by two readers. In 83% of the cases, the readers agreed. In the 17% in which they disagreed, a third reader agreed with one of the initial readers to set a rating. These studies found that, for CAM, 38.4% concluded positive effect or possibly positive (12.4%), 4.8% concluded no effect, 0.7% concluded harmful effect, and 56.6% concluded insufficient evidence. An assessment of conventional treatments found that 41.3% concluded positive or possibly positive effect, 20% concluded no effect, 8.1% concluded net harmful effects, and 21.3% concluded insufficient evidence. However, the CAM review used the more developed 2004 Cochrane database, while the conventional review used the initial 1998 Cochrane database. Alternative therapies do not "complement" (improve the effect of, or mitigate the side effects of) functional medical treatment. Significant drug interactions caused by alternative therapies may instead negatively impact functional treatment by making prescription drugs less effective, such as interference by herbal preparations with warfarin. In the same way as for conventional therapies, drugs, and interventions, it can be difficult to test the efficacy of alternative medicine in clinical trials. In instances where an established, effective treatment for a condition is already available, the Helsinki Declaration states that withholding such treatment is unethical in most circumstances. 
Use of standard-of-care treatment in addition to an alternative technique being tested may produce confounded or difficult-to-interpret results. Cancer researcher Andrew J. Vickers has stated: Perceived mechanism of effect Anything classified as alternative medicine by definition does not have a proven healing or medical effect. However, there are different mechanisms through which it can be perceived to "work". The common denominator of these mechanisms is that effects are mis-attributed to the alternative treatment. Placebo effect A placebo is a treatment with no intended therapeutic value. An example of a placebo is an inert pill, but it can include more dramatic interventions like sham surgery. The placebo effect is the concept that patients will perceive an improvement after being treated with an inert treatment. The opposite of the placebo effect is the nocebo effect, when patients who expect a treatment to be harmful will perceive harmful effects after taking it. Placebos do not have a physical effect on diseases or improve overall outcomes, but patients may report improvements in subjective outcomes such as pain and nausea. A 1955 study suggested that a substantial part of a medicine's impact was due to the placebo effect. However, reassessments found the study to have flawed methodology. This and other modern reviews suggest that other factors like natural recovery and reporting bias should also be considered. All of these are reasons why alternative therapies may be credited for improving a patient's condition even though the objective effect is non-existent, or even harmful. David Gorski argues that alternative treatments should be treated as a placebo, rather than as medicine. Almost none have performed significantly better than a placebo in clinical trials. Furthermore, distrust of conventional medicine may lead to patients experiencing the nocebo effect when taking effective medication. Regression to the mean A patient who receives an inert treatment may report improvements afterwards that it did not cause. Assuming it was the cause without evidence is an example of the regression fallacy. This may be due to a natural recovery from the illness, or a fluctuation in the symptoms of a long-term condition. The concept of regression toward the mean implies that an extreme result is more likely to be followed by a less extreme result. Other factors There are also reasons why a placebo treatment group may outperform a "no-treatment" group in a test which are not related to a patient's experience. These include patients reporting more favourable results than they really felt due to politeness or "experimental subordination", observer bias, and misleading wording of questions. In their 2010 systematic review of studies into placebos, Asbjørn Hróbjartsson and Peter C. Gøtzsche write that "even if there were no true effect of placebo, one would expect to record differences between placebo and no-treatment groups due to bias associated with lack of blinding." Alternative therapies may also be credited for perceived improvement through decreased use or effect of medical treatment, and therefore either decreased side effects or nocebo effects towards standard treatment. Use and regulation Appeal Practitioners of complementary medicine usually discuss and advise patients as to available alternative therapies. Patients often express interest in mind-body complementary therapies because they offer a non-drug approach to treating some health conditions. 
In addition to the social-cultural underpinnings of the popularity of alternative medicine, there are several psychological issues that are critical to its growth, notably psychological effects, such as the will to believe, cognitive biases that help maintain self-esteem and promote harmonious social functioning, and the post hoc, ergo propter hoc fallacy. In a 2018 interview with The BMJ, Edzard Ernst stated: "The present popularity of complementary and alternative medicine is also inviting criticism of what we are doing in mainstream medicine. It shows that we aren't fulfilling a certain need-we are not giving patients enough time, compassion, or empathy. These are things that complementary practitioners are very good at. Mainstream medicine could learn something from complementary medicine." Marketing Alternative medicine is a profitable industry with large media advertising expenditures. Accordingly, alternative practices are often portrayed positively and compared favorably to "big pharma". The popularity of complementary & alternative medicine (CAM) may be related to other factors that Ernst mentioned in a 2008 interview in The Independent: Paul Offit proposed that "alternative medicine becomes quackery" in four ways: by recommending against conventional therapies that are helpful, promoting potentially harmful therapies without adequate warning, draining patients' bank accounts, or by promoting "magical thinking". Promoting alternative medicine has been called dangerous and unethical. Social factors Authors have speculated on the socio-cultural and psychological reasons for the appeal of alternative medicines among the minority using them in lieu of conventional medicine. There are several socio-cultural reasons for the interest in these treatments centered on the low level of scientific literacy among the public at large and a concomitant increase in antiscientific attitudes and new age mysticism. Related to this are vigorous marketing of extravagant claims by the alternative medical community combined with inadequate media scrutiny and attacks on critics. Alternative medicine is criticized for taking advantage of the least fortunate members of society. There is also an increase in conspiracy theories toward conventional medicine and pharmaceutical companies, mistrust of traditional authority figures, such as the physician, and a dislike of the current delivery methods of scientific biomedicine, all of which have led patients to seek out alternative medicine to treat a variety of ailments. Many patients lack access to contemporary medicine, due to a lack of private or public health insurance, which leads them to seek out lower-cost alternative medicine. Medical doctors are also aggressively marketing alternative medicine to profit from this market. Patients can be averse to the painful, unpleasant, and sometimes-dangerous side effects of biomedical treatments. Treatments for severe diseases such as cancer and HIV infection have well-known, significant side-effects. Even low-risk medications such as antibiotics can have potential to cause life-threatening anaphylactic reactions in a very few individuals. Many medications may cause minor but bothersome symptoms such as cough or upset stomach. In all of these cases, patients may be seeking out alternative therapies to avoid the adverse effects of conventional treatments. 
Prevalence of use According to research published in 2015, the increasing popularity of CAM needs to be explained by moral convictions or lifestyle choices rather than by economic reasoning. In developing nations, access to essential medicines is severely restricted by lack of resources and poverty. Traditional remedies, often closely resembling or forming the basis for alternative remedies, may comprise primary healthcare or be integrated into the healthcare system. In Africa, traditional medicine is used for 80% of primary healthcare, and in developing nations as a whole over one-third of the population lack access to essential medicines. In Latin America, inequities against BIPOC communities keep them tied to their traditional practices and therefore, it is often these communities that constitute the majority of users of alternative medicine. Racist attitudes towards certain communities disable them from accessing more urbanized modes of care. In a study that assessed access to care in rural communities of Latin America, it was found that discrimination is a huge barrier to the ability of citizens to access care; more specifically, women of Indigenous and African descent, and lower-income families were especially hurt. Such exclusion exacerbates the inequities that minorities in Latin America already face. Consistently excluded from many systems of westernized care for socioeconomic and other reasons, low-income communities of color often turn to traditional medicine for care as it has proved reliable to them across generations. Commentators including David Horrobin have proposed adopting a prize system to reward medical research. This stands in opposition to the current mechanism for funding research proposals in most countries around the world. In the US, the NCCIH provides public research funding for alternative medicine. The NCCIH has spent more than US$2.5 billion on such research since 1992 and this research has not demonstrated the efficacy of alternative therapies. As of 2011, the NCCIH's sister organization in the NIC Office of Cancer Complementary and Alternative Medicine had given out grants of around $105 million each year for several years. Testing alternative medicine that has no scientific basis (as in the aforementioned grants) has been called a waste of scarce research resources. That alternative medicine has been on the rise "in countries where Western science and scientific method generally are accepted as the major foundations for healthcare, and 'evidence-based' practice is the dominant paradigm" was described as an "enigma" in the Medical Journal of Australia. A 15-year systematic review published in 2022 on the global acceptance and use of CAM among medical specialists found the overall acceptance of CAM at 52% and the overall use at 45%. In the United States In the United States, the 1974 Child Abuse Prevention and Treatment Act (CAPTA) required that for states to receive federal money, they had to grant religious exemptions to child neglect and abuse laws regarding religion-based healing practices. Thirty-one states have child-abuse religious exemptions. The use of alternative medicine in the US has increased, with a 50 percent increase in expenditures and a 25 percent increase in the use of alternative therapies between 1990 and 1997 in America. According to a national survey conducted in 2002, "36 percent of U.S. adults aged 18 years and over use some form of complementary and alternative medicine." Americans spend many billions on the therapies annually. 
Most Americans used CAM to treat and/or prevent musculoskeletal conditions or other conditions associated with chronic or recurring pain. In America, women were more likely than men to use CAM, with the biggest difference in use of mind-body therapies including prayer specifically for health reasons". In 2008, more than 37% of American hospitals offered alternative therapies, up from 27 percent in 2005, and 25% in 2004. More than 70% of the hospitals offering CAM were in urban areas. A survey of Americans found that 88 percent thought that "there are some good ways of treating sickness that medical science does not recognize". Use of magnets was the most common tool in energy medicine in America, and among users of it, 58 percent described it as at least "sort of scientific", when it is not at all scientific. In 2002, at least 60 percent of US medical schools have at least some class time spent teaching alternative therapies. "Therapeutic touch" was taught at more than 100 colleges and universities in 75 countries before the practice was debunked by a nine-year-old child for a school science project. Prevalence of use of specific therapies The most common CAM therapies used in the US in 2002 were prayer (45%), herbalism (19%), breathing meditation (12%), meditation (8%), chiropractic medicine (8%), yoga (5–6%), body work (5%), diet-based therapy (4%), progressive relaxation (3%), mega-vitamin therapy (3%) and visualization (2%). In Britain, the most often used alternative therapies were Alexander technique, aromatherapy, Bach and other flower remedies, body work therapies including massage, Counseling stress therapies, hypnotherapy, meditation, reflexology, Shiatsu, Ayurvedic medicine, nutritional medicine, and yoga. Ayurvedic medicine remedies are mainly plant based with some use of animal materials. Safety concerns include the use of herbs containing toxic compounds and the lack of quality control in Ayurvedic facilities. According to the National Health Service (England), the most commonly used complementary and alternative medicines (CAM) supported by the NHS in the UK are: acupuncture, aromatherapy, chiropractic, homeopathy, massage, osteopathy and clinical hypnotherapy. In palliative care Complementary therapies are often used in palliative care or by practitioners attempting to manage chronic pain in patients. Integrative medicine is considered more acceptable in the interdisciplinary approach used in palliative care than in other areas of medicine. "From its early experiences of care for the dying, palliative care took for granted the necessity of placing patient values and lifestyle habits at the core of any design and delivery of quality care at the end of life. If the patient desired complementary therapies, and as long as such treatments provided additional support and did not endanger the patient, they were considered acceptable." The non-pharmacologic interventions of complementary medicine can employ mind-body interventions designed to "reduce pain and concomitant mood disturbance and increase quality of life." Regulation The alternative medicine lobby has successfully pushed for alternative therapies to be subject to far less regulation than conventional medicine. Some professions of complementary/traditional/alternative medicine, such as chiropractic, have achieved full regulation in North America and other parts of the world and are regulated in a manner similar to that governing science-based medicine. 
In contrast, other approaches may be partially recognized and others have no regulation at all. In some cases, promotion of alternative therapies is allowed when there is demonstrably no effect, only a tradition of use. Despite laws making it illegal to market or promote alternative therapies for use in cancer treatment, many practitioners promote them. Regulation and licensing of alternative medicine ranges widely from country to country, and state to state. In Austria and Germany complementary and alternative medicine is mainly in the hands of doctors with MDs, and half or more of the American alternative practitioners are licensed MDs. In Germany herbs are tightly regulated: half are prescribed by doctors and covered by health insurance. Government bodies in the US and elsewhere have published information or guidance about alternative medicine. The U.S. Food and Drug Administration (FDA), has issued online warnings for consumers about medication health fraud. This includes a section on Alternative Medicine Fraud, such as a warning that Ayurvedic products generally have not been approved by the FDA before marketing. Risks and problems The National Science Foundation has studied the problematic side of the public's attitudes and understandings of science fiction, pseudoscience, and belief in alternative medicine. They use a quote from Robert L. Park to describe some issues with alternative medicine: Negative outcomes According to the Institute of Medicine, use of alternative medical techniques may result in several types of harm: "Direct harm, which results in adverse patient outcome." "Economic harm, which results in monetary loss but presents no health hazard;" "Indirect harm, which results in a delay of appropriate treatment, or in unreasonable expectations that discourage patients and their families from accepting and dealing effectively with their medical conditions;" Interactions with conventional pharmaceuticals Forms of alternative medicine that are biologically active can be dangerous even when used in conjunction with conventional medicine. Examples include immuno-augmentation therapy, shark cartilage, bioresonance therapy, oxygen and ozone therapies, and insulin potentiation therapy. Some herbal remedies can cause dangerous interactions with chemotherapy drugs, radiation therapy, or anesthetics during surgery, among other problems. An example of these dangers was reported by Associate Professor Alastair MacLennan of Adelaide University, Australia regarding a patient who almost bled to death on the operating table after neglecting to mention that she had been taking "natural" potions to "build up her strength" before the operation, including a powerful anticoagulant that nearly caused her death. To ABC Online, MacLennan also gives another possible mechanism: Side-effects Conventional treatments are subjected to testing for undesired side-effects, whereas alternative therapies, in general, are not subjected to such testing at all. Any treatment – whether conventional or alternative – that has a biological or psychological effect on a patient may also have potential to possess dangerous biological or psychological side-effects. Attempts to refute this fact with regard to alternative therapies sometimes use the appeal to nature fallacy, i.e., "That which is natural cannot be harmful." Specific groups of patients such as patients with impaired hepatic or renal function are more susceptible to side effects of alternative remedies. 
An exception to the normal thinking regarding side-effects is homeopathy. Since 1938, the FDA has regulated homeopathic products in "several significantly different ways from other drugs." Homeopathic preparations, termed "remedies", are extremely dilute, often far beyond the point where a single molecule of the original active (and possibly toxic) ingredient is likely to remain. They are, thus, considered safe on that count, but "their products are exempt from good manufacturing practice requirements related to expiration dating and from finished product testing for identity and strength", and their alcohol concentration may be much higher than allowed in conventional drugs. Treatment delay Alternative medicine may discourage people from getting the best possible treatment. Those having experienced or perceived success with one alternative therapy for a minor ailment may be convinced of its efficacy and persuaded to extrapolate that success to some other alternative therapy for a more serious, possibly life-threatening illness. For this reason, critics argue that therapies that rely on the placebo effect to define success are very dangerous. According to psychologist Scott Lilienfeld in 2002, "unvalidated or scientifically unsupported mental health practices can lead individuals to forgo effective treatments"; he refers to this as opportunity cost. Individuals who spend large amounts of time and money on ineffective treatments may be left with precious little of either, and may forfeit the opportunity to obtain treatments that could be more helpful. In short, even innocuous treatments can indirectly produce negative outcomes. Between 2001 and 2003, four children died in Australia because their parents chose ineffective naturopathic, homeopathic, or other alternative medicines and diets rather than conventional therapies. Unconventional cancer "cures" There have always been "many therapies offered outside of conventional cancer treatment centers and based on theories not found in biomedicine. These alternative cancer cures have often been described as 'unproven,' suggesting that appropriate clinical trials have not been conducted and that the therapeutic value of the treatment is unknown." However, "many alternative cancer treatments have been investigated in good-quality clinical trials, and they have been shown to be ineffective.... The label 'unproven' is inappropriate for such therapies; it is time to assert that many alternative cancer therapies have been 'disproven'." Edzard Ernst has stated: Rejection of science Complementary and alternative medicine (CAM) is not as well researched as conventional medicine, which undergoes intense research before release to the public. Practitioners of science-based medicine also discard practices and treatments when they are shown ineffective, while alternative practitioners do not. Funding for research is also sparse, making it difficult to do further research on the effectiveness of CAM. Most funding for CAM comes from government agencies. Proposed CAM research is rejected by most private funding agencies because the results of such research are not reliable. CAM research has to meet certain standards set by research ethics committees, which most CAM researchers find almost impossible to meet. Even with the little research done on it, CAM has not been proven to be effective. Studies that have been done will be cited by CAM practitioners in an attempt to claim a basis in science. 
These studies tend to have a variety of problems, such as small samples, various biases, poor research design, lack of controls, negative results, etc. Even those with positive results can be better explained as resulting in false positives due to bias and noisy data. Alternative medicine may lead to a false understanding of the body and of the process of science. Steven Novella, a neurologist at Yale School of Medicine, wrote that government-funded studies of integrating alternative medicine techniques into the mainstream are "used to lend an appearance of legitimacy to treatments that are not legitimate." Marcia Angell considered that critics felt that healthcare practices should be classified based solely on scientific evidence, and if a treatment had been rigorously tested and found safe and effective, science-based medicine will adopt it regardless of whether it was considered "alternative" to begin with. It is possible for a method to change categories (proven vs. unproven), based on increased knowledge of its effectiveness or lack thereof. Prominent supporters of this position are George D. Lundberg, former editor of the Journal of the American Medical Association (JAMA) and the journal's interim editor-in-chief Phil Fontanarosa. Writing in 1999 in CA: A Cancer Journal for Clinicians Barrie R. Cassileth mentioned a 1997 letter to the United States Senate's Subcommittee on Public Health and Safety, which had deplored the lack of critical thinking and scientific rigor in OAM-supported research, had been signed by four Nobel Laureates and other prominent scientists. (This was supported by the National Institutes of Health (NIH).) In March 2009, a staff writer for The Washington Post reported that the impending national discussion about broadening access to health care, improving medical practice and saving money was giving a group of scientists an opening to propose shutting down the National Center for Complementary and Alternative Medicine. They quoted one of these scientists, Steven Salzberg, a genome researcher and computational biologist at the University of Maryland, as saying "One of our concerns is that NIH is funding pseudoscience." They noted that the vast majority of studies were based on fundamental misunderstandings of physiology and disease, and had shown little or no effect. Writers such as Carl Sagan, a noted astrophysicist, advocate of scientific skepticism and the author of The Demon-Haunted World: Science as a Candle in the Dark (1996), have lambasted the lack of empirical evidence to support the existence of the putative energy fields on which these therapies are predicated. Sampson has also pointed out that CAM tolerated contradiction without thorough reason and experiment. Barrett has pointed out that there is a policy at the NIH of never saying something does not work, only that a different version or dose might give different results. Barrett also expressed concern that, just because some "alternatives" have merit, there is the impression that the rest deserve equal consideration and respect even though most are worthless, since they are all classified under the one heading of alternative medicine. Some critics of alternative medicine are focused upon health fraud, misinformation, and quackery as public health problems, notably Wallace Sampson and Paul Kurtz founders of Scientific Review of Alternative Medicine and Stephen Barrett, co-founder of The National Council Against Health Fraud and webmaster of Quackwatch. 
Grounds for opposing alternative medicine include that: Alternative therapies typically lack any scientific validation, and their effectiveness either is unproven or has been disproved. It is usually based on religion, tradition, superstition, belief in supernatural energies, pseudoscience, errors in reasoning, propaganda, or fraud. Methods may incorporate or base themselves on traditional medicine, folk knowledge, spiritual beliefs, ignorance or misunderstanding of scientific principles, errors in reasoning, or newly conceived approaches claiming to heal. Research on alternative medicine is frequently of low quality and methodologically flawed. Treatments are not part of the conventional, science-based healthcare system. Where alternative therapies have replaced conventional science-based medicine, even with the safest alternative medicines, failure to use or delay in using conventional science-based medicine has caused deaths. Many alternative medical treatments are not patentable, which may lead to less research funding from the private sector. In addition, in most countries, alternative therapies (in contrast to pharmaceuticals) can be marketed without any proof of efficacy – also a disincentive for manufacturers to fund scientific research. English evolutionary biologist Richard Dawkins, in his 2003 book A Devil's Chaplain, defined alternative medicine as a "set of practices that cannot be tested, refuse to be tested, or consistently fail tests." Dawkins argued that if a technique is demonstrated effective in properly performed trials then it ceases to be alternative and simply becomes medicine. CAM is also often less regulated than conventional medicine. There are ethical concerns about whether people who perform CAM have the proper knowledge to treat patients. CAM is often done by non-physicians who do not operate with the same medical licensing laws which govern conventional medicine, and it is often described as an issue of non-maleficence. According to two writers, Wallace Sampson and K. Butler, marketing is part of the training required in alternative medicine, and propaganda methods in alternative medicine have been traced back to those used by Hitler and Goebbels in their promotion of pseudoscience in medicine. In November 2011 Edzard Ernst stated that the "level of misinformation about alternative medicine has now reached the point where it has become dangerous and unethical. So far, alternative medicine has remained an ethics-free zone. It is time to change this." Harriet Hall criticized the low standard of evidence accepted by the alternative medicine community: Conflicts of interest Some commentators have said that special consideration must be given to the issue of conflicts of interest in alternative medicine. Edzard Ernst has said that most researchers into alternative medicine are at risk of "unidirectional bias" because of a generally uncritical belief in their chosen subject. Ernst cites as evidence the phenomenon whereby 100% of a sample of acupuncture trials originating in China had positive conclusions. David Gorski contrasts evidence-based medicine, in which researchers try to disprove hypotheses, with what he says is the frequent practice in pseudoscience-based research of striving to confirm pre-existing notions. 
Harriet Hall writes that there is a contrast between the circumstances of alternative medicine practitioners and disinterested scientists: in the case of acupuncture, for example, an acupuncturist would have "a great deal to lose" if acupuncture were rejected by research; but the disinterested skeptic would not lose anything if its effects were confirmed; rather, their change of mind would enhance their skeptical credentials. Use of health and research resources Research into alternative therapies has been criticized for "diverting research time, money, and other resources from more fruitful lines of investigation in order to pursue a theory that has no basis in biology." Research methods expert and author of Snake Oil Science, R. Barker Bausell, has stated that "it's become politically correct to investigate nonsense." A commonly cited statistic is that the US National Institutes of Health had spent $2.5 billion on investigating alternative therapies prior to 2009, with none being found to be effective.
Biology and health sciences
Alternative and traditional medicine
null
1911
https://en.wikipedia.org/wiki/Allele
Allele
An allele (or allelomorph) is a variant of the sequence of nucleotides at a particular location, or locus, on a DNA molecule. Alleles can differ at a single position through single nucleotide polymorphisms (SNP), but they can also have insertions and deletions of up to several thousand base pairs. Most alleles observed result in little or no change in the function of the gene product it codes for. However, sometimes different alleles can result in different observable phenotypic traits, such as different pigmentation. A notable example of this is Gregor Mendel's discovery that the white and purple flower colors in pea plants were the result of a single gene with two alleles. Nearly all multicellular organisms have two sets of chromosomes at some point in their biological life cycle; that is, they are diploid. For a given locus, if the two chromosomes contain the same allele, they, and the organism, are homozygous with respect to that allele. If the alleles are different, they, and the organism, are heterozygous with respect to those alleles. Popular definitions of 'allele' typically refer only to different alleles within genes. For example, the ABO blood grouping is controlled by the ABO gene, which has six common alleles (variants). In population genetics, nearly every living human's phenotype for the ABO gene is some combination of just these six alleles. Etymology The word "allele" is a short form of "allelomorph" ("other form", a word coined by British geneticists William Bateson and Edith Rebecca Saunders) in the 1900s, which was used in the early days of genetics to describe variant forms of a gene detected in different phenotypes and identified to cause the differences between them. It derives from the Greek prefix ἀλληλο-, allelo-, meaning "mutual", "reciprocal", or "each other", which itself is related to the Greek adjective ἄλλος, allos (cognate with Latin alius), meaning "other". Alleles that lead to dominant or recessive phenotypes In many cases, genotypic interactions between the two alleles at a locus can be described as dominant or recessive, according to which of the two homozygous phenotypes the heterozygote most resembles. Where the heterozygote is indistinguishable from one of the homozygotes, the allele expressed is the one that leads to the "dominant" phenotype, and the other allele is said to be "recessive". The degree and pattern of dominance varies among loci. This type of interaction was first formally-described by Gregor Mendel. However, many traits defy this simple categorization and the phenotypes are modelled by co-dominance and polygenic inheritance. The term "wild type" allele is sometimes used to describe an allele that is thought to contribute to the typical phenotypic character as seen in "wild" populations of organisms, such as fruit flies (Drosophila melanogaster). Such a "wild type" allele was historically regarded as leading to a dominant (overpowering – always expressed), common, and normal phenotype, in contrast to "mutant" alleles that lead to recessive, rare, and frequently deleterious phenotypes. It was formerly thought that most individuals were homozygous for the "wild type" allele at most gene loci, and that any alternative "mutant" allele was found in homozygous form in a small minority of "affected" individuals, often as genetic diseases, and more frequently in heterozygous form in "carriers" for the mutant allele. 
It is now appreciated that most or all gene loci are highly polymorphic, with multiple alleles, whose frequencies vary from population to population, and that a great deal of genetic variation is hidden in the form of alleles that do not produce obvious phenotypic differences. Wild type alleles are often denoted by a superscript plus sign (e.g., p⁺ for an allele p). Multiple alleles A population or species of organisms typically includes multiple alleles at each locus among various individuals. Allelic variation at a locus is measurable as the number of alleles (polymorphism) present, or the proportion of heterozygotes in the population. A null allele is a gene variant that lacks the gene's normal function because it either is not expressed, or the expressed protein is inactive. For example, at the gene locus for the ABO blood type carbohydrate antigens in humans, classical genetics recognizes three alleles, Iᴬ, Iᴮ, and i, which determine compatibility of blood transfusions. Any individual has one of six possible genotypes (IᴬIᴬ, Iᴬi, IᴮIᴮ, Iᴮi, IᴬIᴮ, and ii) which produce one of four possible phenotypes: "Type A" (produced by IᴬIᴬ homozygous and Iᴬi heterozygous genotypes), "Type B" (produced by IᴮIᴮ homozygous and Iᴮi heterozygous genotypes), "Type AB" produced by the IᴬIᴮ heterozygous genotype, and "Type O" produced by the ii homozygous genotype. (It is now known that each of the A, B, and O alleles is actually a class of multiple alleles with different DNA sequences that produce proteins with identical properties: more than 70 alleles are known at the ABO locus. Hence an individual with "Type A" blood may be an AO heterozygote, an AA homozygote, or an AA heterozygote with two different "A" alleles.) Genotype frequencies The frequency of alleles in a diploid population can be used to predict the frequencies of the corresponding genotypes (see Hardy–Weinberg principle). For a simple model with two alleles, p + q = 1, where p is the frequency of one allele and q is the frequency of the alternative allele; the two frequencies necessarily sum to unity. Then, p² is the fraction of the population homozygous for the first allele, 2pq is the fraction of heterozygotes, and q² is the fraction homozygous for the alternative allele. If the first allele is dominant to the second then the fraction of the population that will show the dominant phenotype is p² + 2pq, and the fraction with the recessive phenotype is q². With three alleles p, q, and r: p + q + r = 1 and p² + q² + r² + 2pq + 2qr + 2pr = 1. In the case of multiple alleles at a diploid locus, the number of possible genotypes (G) with a number of alleles (a) is given by the expression G = a(a + 1)/2 (a brief computational sketch of these formulas follows at the end of this entry). Allelic dominance in genetic disorders A number of genetic disorders are caused when an individual inherits two recessive alleles for a single-gene trait. Recessive genetic disorders include albinism, cystic fibrosis, galactosemia, phenylketonuria (PKU), and Tay–Sachs disease. Other disorders are also due to recessive alleles, but because the gene locus is located on the X chromosome, so that males have only one copy (that is, they are hemizygous), they are more frequent in males than in females. Examples include red–green color blindness and fragile X syndrome. Other disorders, such as Huntington's disease, occur when an individual inherits only one dominant allele. Epialleles While heritable traits are typically studied in terms of genetic alleles, epigenetic marks such as DNA methylation can be inherited at specific genomic regions in certain species, a process termed transgenerational epigenetic inheritance.
The term epiallele is used to distinguish these heritable marks from traditional alleles, which are defined by nucleotide sequence. A specific class of epiallele, the metastable epialleles, has been discovered in mice and in humans; it is characterized by the stochastic (probabilistic) establishment of an epigenetic state that can be mitotically inherited. Idiomorph The term "idiomorph", from Greek 'morphos' (form) and 'idio' (singular, unique), was introduced in 1990 in place of "allele" to denote sequences at the same locus in different strains that have no sequence similarity and probably do not share a common phylogenetic relationship. It is used mainly in genetic research on fungi (mycology).
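To make the genotype-frequency formulas in the entry above concrete, the following short Python sketch evaluates the two-allele Hardy–Weinberg proportions p², 2pq and q², and the genotype count G = a(a + 1)/2. The allele frequency p = 0.7 and the function names are illustrative assumptions chosen for the example, not values or code taken from the article.

```python
from itertools import combinations_with_replacement

def hardy_weinberg_two_alleles(p: float) -> dict:
    """Genotype frequencies for a biallelic locus at Hardy-Weinberg equilibrium.
    p is the frequency of one allele; q = 1 - p is the frequency of the other."""
    q = 1.0 - p
    return {
        "homozygous_first": p ** 2,   # p^2
        "heterozygous": 2 * p * q,    # 2pq
        "homozygous_second": q ** 2,  # q^2
    }

def genotype_count(num_alleles: int) -> int:
    """Number of possible diploid genotypes with a alleles: G = a(a + 1)/2."""
    return num_alleles * (num_alleles + 1) // 2

# Illustrative allele frequencies (not data from the article): p = 0.7, q = 0.3.
freqs = hardy_weinberg_two_alleles(0.7)
print(freqs)                # roughly 0.49, 0.42 and 0.09
print(sum(freqs.values()))  # sums to 1, up to floating-point rounding

# Three alleles (for example the classical ABO alleles) give a(a + 1)/2 = 6 genotypes,
# matching the six ABO genotypes listed in the article.
print(genotype_count(3))    # 6
print(list(combinations_with_replacement(["IA", "IB", "i"], 2)))
```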
Biology and health sciences
Genetics
Biology
1912
https://en.wikipedia.org/wiki/Ampicillin
Ampicillin
Ampicillin is an antibiotic belonging to the aminopenicillin class of the penicillin family. The drug is used to prevent and treat several bacterial infections, such as respiratory tract infections, urinary tract infections, meningitis, salmonellosis, and endocarditis. It may also be used to prevent group B streptococcal infection in newborns. It is used by mouth, by injection into a muscle, or intravenously. Common side effects include rash, nausea, and diarrhea. It should not be used in people who are allergic to penicillin. Serious side effects may include Clostridioides difficile colitis or anaphylaxis. While usable in those with kidney problems, the dose may need to be decreased. Its use during pregnancy and breastfeeding appears to be generally safe. Ampicillin was discovered in 1958 and came into commercial use in 1961. It is on the World Health Organization's List of Essential Medicines. The World Health Organization classifies ampicillin as critically important for human medicine. It is available as a generic medication. Medical uses Diseases Bacterial meningitis; an aminoglycoside can be added to increase efficacy against gram-negative meningitis bacteria Endocarditis by enterococcal strains (off-label use); often given with an aminoglycoside Gastrointestinal infections caused by contaminated water or food (for example, by Salmonella) Genito-urinary tract infections Healthcare-associated infections that are related to infections from using urinary catheters and that are unresponsive to other medications Otitis media (middle ear infection) Prophylaxis (i.e. to prevent infection) in those who previously had rheumatic heart disease or are undergoing dental procedures, vaginal hysterectomies, or C-sections. It is also used in pregnant women who are carriers of group B streptococci to prevent early-onset neonatal infections. Respiratory infections, including bronchitis, pharyngitis Sinusitis Sepsis Whooping cough, to prevent and treat secondary infections Ampicillin was formerly also used to treat gonorrhea, but there are now too many strains resistant to penicillins. Bacteria Ampicillin is used to treat infections by many gram-positive and gram-negative bacteria. It was the first "broad spectrum" penicillin with activity against gram-positive bacteria, including Streptococcus pneumoniae, Streptococcus pyogenes, some isolates of Staphylococcus aureus (but not penicillin-resistant or methicillin-resistant strains), Trueperella, and some Enterococcus. It is one of the few antibiotics that works against multidrug-resistant Enterococcus faecalis and E. faecium. Activity against gram-negative bacteria includes Neisseria meningitidis, some Haemophilus influenzae, and some of the Enterobacteriaceae (though most Enterobacteriaceae and Pseudomonas are resistant). Its spectrum of activity is enhanced by co-administration of sulbactam, a drug that inhibits beta-lactamase, an enzyme produced by bacteria to inactivate ampicillin and related antibiotics. It is sometimes used in combination with other antibiotics that have different mechanisms of action, like vancomycin, linezolid, daptomycin, and tigecycline. Available forms Ampicillin can be administered by mouth, an intramuscular injection (shot) or by intravenous infusion. The oral form, available as capsules or oral suspensions, is not given as an initial treatment for severe infections, but rather as a follow-up to an IM or IV injection. For IV and IM injections, ampicillin is kept as a powder that must be reconstituted.
IV injections must be given slowly, as rapid IV injections can lead to convulsive seizures. Specific populations Ampicillin is one of the most used drugs in pregnancy, and has been found to be generally harmless both by the Food and Drug Administration in the U.S. (which classified it as category B) and the Therapeutic Goods Administration in Australia (which classified it as category A). It is the drug of choice for treating Listeria monocytogenes in pregnant women, either alone or combined with an aminoglycoside. Pregnancy increases the clearance of ampicillin by up to 50%, and a higher dose is thus needed to reach therapeutic levels. Ampicillin crosses the placenta and remains in the amniotic fluid at 50–100% of the concentration in maternal plasma; this can lead to high concentrations of ampicillin in the newborn. While lactating mothers secrete some ampicillin into their breast milk, the amount is minimal. In newborns, ampicillin has a longer half-life and lower plasma protein binding. The clearance by the kidneys is lower, as kidney function has not fully developed. Contraindications Ampicillin is contraindicated in those with a hypersensitivity to penicillins, as they can cause fatal anaphylactic reactions. Hypersensitivity reactions can include frequent skin rashes and hives, exfoliative dermatitis, erythema multiforme, and a temporary decrease in both red and white blood cells. Ampicillin is not recommended in people with concurrent mononucleosis, as over 40% of patients develop a skin rash. Side effects Ampicillin is comparatively less toxic than other antibiotics, and side effects are more likely in those who are sensitive to penicillins and those with a history of asthma or allergies. In very rare cases, it causes severe side effects such as angioedema, anaphylaxis, and C. difficile infection (that can range from mild diarrhea to serious pseudomembranous colitis). Some develop black "furry" tongue. Serious adverse effects also include seizures and serum sickness. The most common side effects, experienced by about 10% of users, are diarrhea and rash. Less common side effects can be nausea, vomiting, itching, and blood dyscrasias. The gastrointestinal effects, such as hairy tongue, nausea, vomiting, diarrhea, and colitis, are more common with the oral form of penicillin. Other conditions may develop up to several weeks after treatment. Overdose Ampicillin overdose can cause behavioral changes, confusion, blackouts, and convulsions, as well as neuromuscular hypersensitivity, electrolyte imbalance, and kidney failure. Interactions Ampicillin reacts with probenecid and methotrexate to decrease renal excretion. Large doses of ampicillin can increase the risk of bleeding with concurrent use of warfarin and other oral anticoagulants, possibly by inhibiting platelet aggregation. Ampicillin has been said to make oral contraceptives less effective, but this has been disputed. It can be made less effective by other antibiotics, such as chloramphenicol, erythromycin, cephalosporins, and tetracyclines. For example, tetracyclines inhibit protein synthesis in bacteria, reducing the target against which ampicillin acts. If given at the same time as aminoglycosides, ampicillin can bind to them and inactivate them. When administered separately, aminoglycosides and ampicillin can potentiate each other instead. Ampicillin causes skin rashes more often when given with allopurinol. Both the live cholera vaccine and live typhoid vaccine can be made ineffective if given with ampicillin.
Ampicillin is normally used to treat cholera and typhoid fever, lowering the immunological response that the body has to mount. Pharmacology Mechanism of action Ampicillin is in the penicillin group of beta-lactam antibiotics and is part of the aminopenicillin family. It is roughly equivalent to amoxicillin in terms of activity. Ampicillin is able to penetrate gram-positive and some gram-negative bacteria. It differs from penicillin G, or benzylpenicillin, only by the presence of an amino group. This amino group, present on both ampicillin and amoxicillin, helps these antibiotics pass through the pores of the outer membrane of gram-negative bacteria, such as Escherichia coli, Proteus mirabilis, Salmonella enterica, and Shigella. Ampicillin acts as an irreversible inhibitor of the enzyme transpeptidase, which is needed by bacteria to make the cell wall. It inhibits the third and final stage of bacterial cell wall synthesis in binary fission, which ultimately leads to cell lysis; therefore, ampicillin is usually bacteriolytic. Pharmacokinetics Ampicillin is well-absorbed from the GI tract (though food reduces its absorption), and reaches peak concentrations in one to two hours. The bioavailability is around 62% for parenteral routes. Unlike other penicillins, which usually bind 60–90% to plasma proteins, ampicillin binds to only 15–20%. Ampicillin is distributed through most tissues, though it is concentrated in the liver and kidneys. It can also be found in the cerebrospinal fluid when the meninges become inflamed (in meningitis, for example). Some ampicillin is metabolized by hydrolyzing the beta-lactam ring to penicilloic acid, though most of it is excreted unchanged. In the kidneys, it is filtered out mostly by tubular secretion; some also undergoes glomerular filtration, and the rest is excreted in the feces and bile. Hetacillin and pivampicillin are ampicillin esters that have been developed to increase bioavailability. History Ampicillin has been used extensively to treat bacterial infections since 1961. Until the introduction of ampicillin by the British company Beecham, penicillin therapies had only been effective against gram-positive organisms such as staphylococci and streptococci. Ampicillin (originally branded as "Penbritin") also demonstrated activity against gram-negative organisms such as H. influenzae, coliforms, and Proteus spp. Society and culture Economics Ampicillin is relatively inexpensive. In the United States, it is available as a generic medication. Veterinary use In veterinary medicine, ampicillin is used in cats, dogs, and farm animals to treat: Anal gland infections Cutaneous infections, such as abscesses, cellulitis, and pustular dermatitis E. coli and Salmonella infections in cattle, sheep, and goats (oral form). Ampicillin use for this purpose has declined as bacterial resistance has increased. Mastitis in sows Mixed aerobic–anaerobic infections, such as from cat bites Multidrug-resistant Enterococcus faecalis and E. faecium Prophylactic use in poultry against Salmonella and sepsis from E. coli or Staphylococcus aureus Respiratory tract infections, including tonsillitis, bovine respiratory disease, shipping fever, bronchopneumonia, and calf and bovine pneumonia Urinary tract infections in dogs Horses are generally not treated with oral ampicillin, as they have low bioavailability of beta-lactams. The half-life in animals is about the same as that in humans (just over an hour; a brief illustrative calculation follows below).
Oral absorption is less than 50% in cats and dogs, and less than 4% in horses.
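As a rough illustration of the pharmacokinetic figures above, the sketch below applies the standard first-order elimination relationship C(t) = C₀ · (1/2)^(t/t½) with the roughly one-hour half-life mentioned in the text. The peak concentration and time points are hypothetical values chosen only to show the arithmetic; this is not dosing guidance.

```python
def remaining_fraction(hours_elapsed: float, half_life_hours: float = 1.0) -> float:
    """Fraction of drug remaining under simple first-order elimination:
    C(t) / C0 = (1/2) ** (t / t_half)."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# Hypothetical peak plasma concentration (40 mg/L) chosen only for illustration,
# combined with the roughly one-hour half-life cited in the text.
peak_mg_per_l = 40.0
for hours in (0, 1, 2, 4, 6):
    concentration = peak_mg_per_l * remaining_fraction(hours, half_life_hours=1.0)
    print(f"{hours} h after the peak: about {concentration:.1f} mg/L remaining")

# After about four half-lives more than 90% of the dose has been eliminated,
# which is why drugs with short half-lives are typically dosed several times a day.
```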
Biology and health sciences
Antibiotics
Health
1914
https://en.wikipedia.org/wiki/Antimicrobial%20resistance
Antimicrobial resistance
Antimicrobial resistance (AMR or AR) occurs when microbes evolve mechanisms that protect them from antimicrobials, which are drugs used to treat infections. This resistance affects all classes of microbes, including bacteria (antibiotic resistance), viruses (antiviral resistance), protozoa (antiprotozoal resistance), and fungi (antifungal resistance). Together, these adaptations fall under the AMR umbrella, posing significant challenges to healthcare worldwide. Misuse and improper management of antimicrobials are primary drivers of this resistance, though it can also occur naturally through genetic mutations and the spread of resistant genes. Microbes resistant to multiple drugs are termed multidrug-resistant (MDR) and are sometimes called superbugs. Antibiotic resistance, a significant AMR subset, enables bacteria to survive antibiotic treatment, complicating infection management and treatment options. Resistance arises through spontaneous mutation, horizontal gene transfer, and increased selective pressure from antibiotic overuse, both in medicine and agriculture, which accelerates resistance development. The burden of AMR is immense, with nearly 5 million annual deaths associated with resistant infections. Infections from AMR microbes are more challenging to treat and often require costly alternative therapies that may have more severe side effects. Preventive measures, such as using narrow-spectrum antibiotics and improving hygiene practices, aim to reduce the spread of resistance. The WHO states that AMR is one of the top global public health and development threats, estimating that bacterial AMR was directly responsible for 1.27 million global deaths in 2019 and contributed to 4.95 million deaths. Moreover, the WHO and other international bodies warn that AMR could lead to up to 10 million deaths annually by 2050 unless actions are taken. Global initiatives, such as calls for international AMR treaties, emphasize coordinated efforts to limit misuse, fund research, and provide access to necessary antimicrobials in developing nations. However, the COVID-19 pandemic redirected resources and scientific attention away from AMR, intensifying the challenge. Definition The WHO defines antimicrobial resistance as a microorganism's resistance to an antimicrobial drug that was once able to treat an infection by that microorganism. A person cannot become resistant to antibiotics. Resistance is a property of the microbe, not a person or other organism infected by a microbe. All types of microbes can develop drug resistance. Thus, there is antibiotic, antifungal, antiviral and antiparasitic resistance. Antibiotic resistance is a subset of antimicrobial resistance. This more specific resistance is linked to bacteria and thus broken down into two further subsets, microbiological and clinical. Microbiological resistance is the most common and arises from genes, mutated or inherited, that allow the bacteria to resist the mechanism by which certain antibiotics kill the microbe. Clinical resistance is shown through the failure of many therapeutic techniques where the bacteria that are normally susceptible to a treatment become resistant after surviving the treatment. In both cases of acquired resistance, the bacteria can pass the genetic catalyst for resistance through horizontal gene transfer: conjugation, transduction, or transformation. This allows the resistance to spread across the same species of pathogen or even similar bacterial pathogens.
Overview A WHO report released in April 2014 stated, "this serious threat is no longer a prediction for the future, it is happening right now in every region of the world and has the potential to affect anyone, of any age, in any country. Antibiotic resistance—when bacteria change so antibiotics no longer work in people who need them to treat infections—is now a major threat to public health." Each year, nearly 5 million deaths are associated with AMR globally. In 2019, global deaths attributable to AMR numbered 1.27 million. That same year, AMR may have contributed to 5 million deaths, and one in five people who died due to AMR were children under five years old. In 2018, WHO considered antibiotic resistance to be one of the biggest threats to global health, food security and development. Deaths attributable to AMR vary by area: The European Centre for Disease Prevention and Control calculated that in 2015 there were 671,689 infections in the EU and European Economic Area caused by antibiotic-resistant bacteria, resulting in 33,110 deaths. Most were acquired in healthcare settings. In 2019 there were 133,000 deaths caused by AMR. Causes AMR is driven largely by the misuse and overuse of antimicrobials. Yet, at the same time, many people around the world do not have access to essential antimicrobials. This leads to microbes either evolving a defense against drugs used to treat them, or certain strains of microbes that have a natural resistance to antimicrobials becoming much more prevalent than the ones that are easily defeated with medication. While antimicrobial resistance does occur naturally over time, the use of antimicrobial agents in a variety of settings both within the healthcare industry and outside of it has led to antimicrobial resistance becoming increasingly prevalent. Although many microbes develop resistance to antibiotics over time through natural mutation, overprescribing and inappropriate prescription of antibiotics have accelerated the problem. It is possible that as many as 1 in 3 prescriptions written for antibiotics are unnecessary. Every year, approximately 154 million prescriptions for antibiotics are written. Of these, up to 46 million are unnecessary or inappropriate for the condition that the patient has. Microbes may naturally develop resistance through genetic mutations that occur during cell division, and although random mutations are rare, many microbes reproduce frequently and rapidly, increasing the chances of members of the population acquiring a mutation that increases resistance. Many individuals stop taking antibiotics when they begin to feel better. When this occurs, it is possible that the microbes that are less susceptible to treatment still remain in the body. If these microbes are able to continue to reproduce, this can lead to an infection by bacteria that are less susceptible or even resistant to an antibiotic. Natural occurrence AMR is a naturally occurring process. Antimicrobial resistance can evolve naturally due to continued exposure to antimicrobials. Natural selection means that organisms that are able to adapt to their environment survive and continue to produce offspring. As a result, the types of microorganisms that are able to survive over time with continued attack by certain antimicrobial agents will naturally become more prevalent in the environment, and those without this resistance will become obsolete.
Some contemporary antimicrobial resistances have also evolved naturally before the use of antimicrobials in human clinical practice. For instance, methicillin resistance evolved in a bacterial pathogen of hedgehogs, possibly as a co-evolutionary adaptation of the pathogen to hedgehogs that are infected by a dermatophyte that naturally produces antibiotics. Also, many soil fungi and bacteria are natural competitors, and the original antibiotic penicillin discovered by Alexander Fleming rapidly lost clinical effectiveness in treating humans; furthermore, none of the other natural penicillins (F, K, N, X, O, U1 or U6) are currently in clinical use. Antimicrobial resistance can be acquired from other microbes through swapping genes in a process termed horizontal gene transfer. This means that once a gene for resistance to an antibiotic appears in a microbial community, it can then spread to other microbes in the community, potentially moving from a non-disease-causing microbe to a disease-causing microbe. This process is heavily driven by the natural selection processes that happen during antibiotic use or misuse. Over time, most of the strains of bacteria and infections present will be the type resistant to the antimicrobial agent being used to treat them, making this agent ineffective against most of the microbes present. Increased use of antimicrobial agents speeds up this natural process. Self-medication In the vast majority of countries, antibiotics can only be prescribed by a doctor and supplied by a pharmacy. Self-medication by consumers is defined as "the taking of medicines on one's own initiative or on another person's suggestion, who is not a certified medical professional", and it has been identified as one of the primary reasons for the evolution of antimicrobial resistance. Self-medication with antibiotics is an unsuitable way of using them but a common practice in resource-constrained countries. The practice exposes individuals to the risk of bacteria that have developed antimicrobial resistance. Many people resort to this out of necessity, when access to a physician is unavailable, or when patients have a limited amount of time or money to see a doctor. This increased access makes it extremely easy to obtain antimicrobials. An example is India, where in the state of Punjab 73% of the population resorted to treating their minor health issues and chronic illnesses through self-medication. Self-medication is higher outside the hospital environment, and this is linked to higher use of antibiotics, with the majority of antibiotics being used in the community rather than hospitals. The prevalence of self-medication in low- and middle-income countries (LMICs) ranges from 8.1% to 93%. Accessibility, affordability, and conditions of health facilities, as well as health-seeking behavior, are factors that influence self-medication in low- and middle-income countries. Two significant issues with self-medication are the lack of knowledge of the public on, firstly, the dangerous effects of certain antimicrobials (for example ciprofloxacin, which can cause tendonitis, tendon rupture and aortic dissection) and, secondly, broad microbial resistance and when to seek medical care if the infection is not clearing. In order to determine the public's knowledge and preconceived notions on antibiotic resistance, a screening of 3,537 articles published in Europe, Asia, and North America was done.
Of the 55,225 total people surveyed in the articles, 70% had heard of antibiotic resistance previously, but 88% of those people thought it referred to some type of physical change in the human body. Clinical misuse Clinical misuse by healthcare professionals is another contributor to increased antimicrobial resistance. Studies done in the US show that the indication for treatment of antibiotics, choice of the agent used, and the duration of therapy was incorrect in up to 50% of the cases studied. In 2010 and 2011 about a third of antibiotic prescriptions in outpatient settings in the United States were not necessary. Another study in an intensive care unit in a major hospital in France has shown that 30% to 60% of prescribed antibiotics were unnecessary. These inappropriate uses of antimicrobial agents promote the evolution of antimicrobial resistance by supporting the bacteria in developing genetic alterations that lead to resistance. According to research conducted in the US that aimed to evaluate physicians' attitudes and knowledge on antimicrobial resistance in ambulatory settings, only 63% of those surveyed reported antibiotic resistance as a problem in their local practices, while 23% reported the aggressive prescription of antibiotics as necessary to avoid failing to provide adequate care. This demonstrates how a majority of doctors underestimate the impact that their own prescribing habits have on antimicrobial resistance as a whole. It also confirms that some physicians may be overly cautious and prescribe antibiotics for both medical or legal reasons, even when clinical indications for use of these medications are not always confirmed. This can lead to unnecessary antimicrobial use, a pattern which may have worsened during the COVID-19 pandemic. Studies have shown that common misconceptions about the effectiveness and necessity of antibiotics to treat common mild illnesses contribute to their overuse. Important to the conversation of antibiotic use is the veterinary medical system. Veterinary oversight is required by law for all medically important antibiotics. Veterinarians use the Pharmacokinetic/pharmacodynamic model (PK/PD) approach to ensuring that the correct dose of the drug is delivered to the correct place at the correct timing. Pandemics, disinfectants and healthcare systems Increased antibiotic use during the early waves of the COVID-19 pandemic may exacerbate this global health challenge. Moreover, pandemic burdens on some healthcare systems may contribute to antibiotic-resistant infections. On the other hand, "increased hand hygiene, decreased international travel, and decreased elective hospital procedures may have reduced AMR pathogen selection and spread in the short term" during the COVID-19 pandemic. The use of disinfectants such as alcohol-based hand sanitizers, and antiseptic hand wash may also have the potential to increase antimicrobial resistance. Extensive use of disinfectants can lead to mutations that induce antimicrobial resistance. A 2024 United Nations High-Level Meeting on AMR has pledged to reduce deaths associated with bacterial AMR by 10% over the next six years. In their first major declaration on the issue since 2016, global leaders also committed to raising $100 million to update and implement AMR action plans. However, the final draft of the declaration omitted an earlier target to reduce antibiotic use in animals by 30% by 2030, due to opposition from meat-producing countries and the farming industry. 
Critics argue this omission is a major weakness, as livestock accounts for around 73% of global sales of antimicrobial agents, including antibiotics, antivirals, and antiparasitics. Environmental pollution Considering the complex interactions between humans, animals and the environment, it is also important to consider the environmental aspects and contributors to antimicrobial resistance. Although there are still some knowledge gaps in understanding the mechanisms and transmission pathways, environmental pollution is considered a significant contributor to antimicrobial resistance. Important contributing factors include antibiotic residues, industrial effluents, agricultural runoff, heavy metals, biocides and pesticides, and sewage and wastewater, which create reservoirs of resistant genes and bacteria and facilitate transfer to human pathogens. Unused or expired antibiotics, if not disposed of properly, can enter water systems and soil. Discharge from pharmaceutical manufacturing and other industrial companies can also introduce antibiotics and other chemicals into the environment. These factors create selective pressure for resistant bacteria. Antibiotics used in livestock and aquaculture can contaminate soil and water, which promotes resistance in environmental microbes. Heavy metals such as zinc, copper and mercury, and also biocides and pesticides, can co-select for antibiotic resistance, accelerating its spread. Inadequate treatment of sewage and wastewater allows resistant bacteria and genes to spread through water systems. Food production Livestock The antimicrobial resistance crisis also extends to the food industry, specifically with food-producing animals. With an ever-increasing human population, there is constant pressure to intensify productivity in many agricultural sectors, including the production of meat as a source of protein. Antibiotics are fed to livestock to act as growth supplements and as a preventive measure to decrease the likelihood of infections. Farmers typically use antibiotics in animal feed to improve growth rates and prevent infections. However, this is illogical, as antibiotics are intended to treat infections rather than to prevent them. About 80% of antibiotic use in the U.S. is for agricultural purposes, and about 70% of these antibiotics are medically important. Overusing antibiotics gives the bacteria time to adapt, leaving higher doses or even stronger antibiotics needed to combat the infection. Though antibiotics for growth promotion were banned throughout the EU in 2006, 40 countries worldwide still use antibiotics to promote growth. This can result in the transfer of resistant bacterial strains into the food that humans eat, causing potentially fatal transfer of disease. While the practice of using antibiotics as growth promoters does result in better yields and meat products, it is a major issue and needs to be decreased in order to prevent antimicrobial resistance. Though the evidence linking antimicrobial usage in livestock to antimicrobial resistance is limited, the World Health Organization Advisory Group on Integrated Surveillance of Antimicrobial Resistance strongly recommended the reduction of use of medically important antimicrobials in livestock. Additionally, the Advisory Group stated that such antimicrobials should be expressly prohibited for both growth promotion and disease prevention in food-producing animals.
By mapping antimicrobial consumption in livestock globally, it was predicted that in 228 countries there would be a total 67% increase in consumption of antibiotics by livestock by 2030. In some countries such as Brazil, Russia, India, China, and South Africa it is predicted that a 99% increase will occur. Several countries have restricted the use of antibiotics in livestock, including Canada, China, Japan, and the US. These restrictions are sometimes associated with a reduction of the prevalence of antimicrobial resistance in humans. In the United States the Veterinary Feed Directive went into practice in 2017 dictating that All medically important antibiotics to be used in feed or water for food animal species require a veterinary feed directive (VFD) or a prescription. Pesticides Most pesticides protect crops against insects and plants, but in some cases antimicrobial pesticides are used to protect against various microorganisms such as bacteria, viruses, fungi, algae, and protozoa. The overuse of many pesticides in an effort to have a higher yield of crops has resulted in many of these microbes evolving a tolerance against these antimicrobial agents. Currently there are over 4000 antimicrobial pesticides registered with the US Environmental Protection Agency (EPA) and sold to market, showing the widespread use of these agents. It is estimated that for every single meal a person consumes, 0.3 g of pesticides is used, as 90% of all pesticide use is in agriculture. A majority of these products are used to help defend against the spread of infectious diseases, and hopefully protect public health. But out of the large amount of pesticides used, it is also estimated that less than 0.1% of those antimicrobial agents, actually reach their targets. That leaves over 99% of all pesticides used available to contaminate other resources. In soil, air, and water these antimicrobial agents are able to spread, coming in contact with more microorganisms and leading to these microbes evolving mechanisms to tolerate and further resist pesticides. The use of antifungal azole pesticides that drive environmental azole resistance have been linked to azole resistance cases in the clinical setting. The same issues confront the novel antifungal classes (e.g. orotomides) which are again being used in both the clinic and agriculture. Wild birds Wildlife, including wild and migratory birds, serve as a reservoir for zoonotic disease and antimicrobial-resistant organisms.  Birds are a key link between the transmission of zoonotic diseases to human populations.  By the same token, increased contact between wild birds and human populations (including domesticated animals), has increased the amount of anti-microbial resistance (AMR) to the bird population.  The introduction of AMR to wild birds positively correlates with human pollution and increased human contact.  Additionally, wild birds can participate in horizontal gene transfer with bacteria, leading to the transmission of antibiotic-resistant genes (ARG). For simplicity, wild bird populations can be divided into two major categories, wild sedentary birds and wild migrating birds.  Wild sedentary bird exposure to AMR is through increased contact with densely populated areas, human waste, domestic animals, and domestic animal/livestock waste. Wild migrating birds interact with sedentary birds in different environments along their migration route.  This increases the rate and diversity of AMR across varying ecosystems. 
Neglect of wildlife in the global discussions surrounding health security and AMR creates large barriers to true AMR surveillance. The surveillance of antimicrobial-resistant organisms in wild birds is a potential metric for the rate of AMR in the environment. This surveillance also allows for further investigation into the transmission routes between different ecosystems and human populations (including domesticated animals and livestock). Such information gathered from wild bird biomes can help identify patterns of disease transmission and better target interventions. These targeted interventions can inform the use of antimicrobial agents and reduce the persistence of multi-drug resistant organisms. Gene transfer from ancient microorganisms Permafrost is a term used to refer to any ground that remained frozen for two years or more, with the oldest known examples continuously frozen for around 700,000 years. In recent decades, permafrost has been rapidly thawing due to climate change. The cold preserves any organic matter inside the permafrost, and it is possible for microorganisms to resume their life functions once it thaws. While some common pathogens such as influenza, smallpox or the bacteria associated with pneumonia have failed to survive intentional attempts to revive them, more cold-adapted microorganisms such as anthrax, or several ancient plant and amoeba viruses, have successfully survived prolonged thaw. Some scientists have argued that the inability of known causative agents of contagious diseases to survive being frozen and thawed makes this threat unlikely. Instead, there have been suggestions that when modern pathogenic bacteria interact with the ancient ones, they may, through horizontal gene transfer, pick up genetic sequences which are associated with antimicrobial resistance, exacerbating an already difficult issue. Antibiotics to which permafrost bacteria have displayed at least some resistance include chloramphenicol, streptomycin, kanamycin, gentamicin, tetracycline, spectinomycin and neomycin. However, other studies show that resistance levels in ancient bacteria to modern antibiotics remain lower than in the contemporary bacteria from the active layer of thawed ground above them, which may mean that this risk is "no greater" than from any other soil. Prevention There have been increasing public calls for global collective action to address the threat, including a proposal for an international treaty on antimicrobial resistance. Further detail and attention are still needed in order to recognize and measure trends in resistance on the international level; the idea of a global tracking system has been suggested, but implementation has yet to occur. A system of this nature would provide insight into areas of high resistance as well as information necessary for evaluating programs, introducing interventions and other changes made to fight or reverse antibiotic resistance. Duration of antimicrobials Delaying or minimizing the use of antibiotics for certain conditions may help safely reduce their use. Antimicrobial treatment duration should be based on the infection and other health problems a person may have. For many infections, once a person has improved, there is little evidence that stopping treatment causes more resistance. Some, therefore, feel that stopping early may be reasonable in some cases. Other infections, however, do require long courses regardless of whether a person feels better.
Delaying antibiotics for ailments such as a sore throat and otitis media may have no difference in the rate of complications compared with immediate antibiotics, for example. When treating respiratory tract infections, clinical judgement is required as to the appropriate treatment (delayed or immediate antibiotic use). The study, "Shorter and Longer Antibiotic Durations for Respiratory Infections: To Fight Antimicrobial Resistance—A Retrospective Cross-Sectional Study in a Secondary Care Setting in the UK," highlights the urgency of reevaluating antibiotic treatment durations amidst the global challenge of antimicrobial resistance (AMR). It investigates the effectiveness of shorter versus longer antibiotic regimens for respiratory tract infections (RTIs) in a UK secondary care setting, emphasizing the need for evidence-based prescribing practices to optimize patient outcomes and combat AMR. Monitoring and mapping There are multiple national and international monitoring programs for drug-resistant threats, including methicillin-resistant Staphylococcus aureus (MRSA), vancomycin-resistant S. aureus (VRSA), extended spectrum beta-lactamase (ESBL) producing Enterobacterales, vancomycin-resistant Enterococcus (VRE), and multidrug-resistant Acinetobacter baumannii (MRAB). ResistanceOpen is an online global map of antimicrobial resistance developed by HealthMap which displays aggregated data on antimicrobial resistance from publicly available and user submitted data. The website can display data for a radius from a location. Users may submit data from antibiograms for individual hospitals or laboratories. European data is from the EARS-Net (European Antimicrobial Resistance Surveillance Network), part of the ECDC. ResistanceMap is a website by the Center for Disease Dynamics, Economics & Policy and provides data on antimicrobial resistance on a global level. The WHO's AMR global action plan also recommends antimicrobial resistance surveillance in animals. Initial steps in the EU for establishing the veterinary counterpart EARS-Vet (EARS-Net for veterinary medicine) have been made. AMR data from pets in particular is scarce, but needed to support antibiotic stewardship in veterinary medicine. By comparison there is a lack of national and international monitoring programs for antifungal resistance. Limiting antimicrobial use in humans Antimicrobial stewardship programmes appear useful in reducing rates of antimicrobial resistance. The antimicrobial stewardship program will also provide pharmacists with the knowledge to educate patients that antibiotics will not work for a virus for example. Excessive antimicrobial use has become one of the top contributors to the evolution of antimicrobial resistance. Since the beginning of the antimicrobial era, antimicrobials have been used to treat a wide range of infectious diseases. Overuse of antimicrobials has become the primary cause of rising levels of antimicrobial resistance. The main problem is that doctors are willing to prescribe antimicrobials to ill-informed individuals who believe that antimicrobials can cure nearly all illnesses, including viral infections like the common cold. In an analysis of drug prescriptions, 36% of individuals with a cold or an upper respiratory infection (both usually viral in origin) were given prescriptions for antibiotics. These prescriptions accomplished nothing other than increasing the risk of further evolution of antibiotic resistant bacteria. 
Using antimicrobials without prescription is another driving force leading to the overuse of antibiotics to self-treat diseases like the common cold, cough, fever, and dysentery resulting in an epidemic of antibiotic resistance in countries like Bangladesh, risking its spread around the globe. Introducing strict antibiotic stewardship in the outpatient setting to reduce inappropriate prescribing of antibiotics may reduce the emerging bacterial resistance. The WHO AWaRe (Access, Watch, Reserve) guidance and antibiotic book has been introduced to guide antibiotic choice for the 30 most common infections in adults and children to reduce inappropriate prescribing in primary care and hospitals. Narrow-spectrum antibiotics are preferred due to their lower resistance potential, and broad-spectrum antibiotics are only recommended for people with more severe symptoms. Some antibiotics are more likely to confer resistance, so are kept as reserve antibiotics in the AWaRe book. Various diagnostic strategies have been employed to prevent the overuse of antifungal therapy in the clinic, proving a safe alternative to empirical antifungal therapy, and thus underpinning antifungal stewardship schemes. At the hospital level Antimicrobial stewardship teams in hospitals are encouraging optimal use of antimicrobials. The goals of antimicrobial stewardship are to help practitioners pick the right drug at the right dose and duration of therapy while preventing misuse and minimizing the development of resistance. Stewardship interventions may reduce the length of stay by an average of slightly over 1 day while not increasing the risk of death. Dispensing, to discharged in-house patients, the exact number of antibiotic pharmaceutical units necessary to complete an ongoing treatment can reduce antibiotic leftovers within the community as community pharmacies can have antibiotic package inefficiencies. At the primary care level Given the volume of care provided in primary care (general practice), recent strategies have focused on reducing unnecessary antimicrobial prescribing in this setting. Simple interventions, such as written information explaining when taking antibiotics is not necessary, for example in common infections of the upper respiratory tract, have been shown to reduce antibiotic prescribing. Various tools are also available to help professionals decide if prescribing antimicrobials is necessary. Parental expectations, driven by the worry for their children's health, can influence how often children are prescribed antibiotics. Parents often rely on their clinician for advice and reassurance. However a lack of plain language information and not having adequate time for consultation negatively impacts this relationship. In effect parents often rely on past experiences in their expectations rather than reassurance from the clinician. Adequate time for consultation and plain language information can help parents make informed decisions and avoid unnecessary antibiotic use. Parents play a critical role in reducing unnecessary antibiotic use, particularly during cold and flu season when children frequently experience respiratory illnesses. Many of these illnesses are caused by viruses, such as colds or the flu, which antibiotics cannot treat. Misusing antibiotics in these situations not only fails to benefit the child but also contributes to the emergence of antibiotic-resistant bacteria, posing a broader public health threat. 
To address parental concerns and reduce inappropriate prescribing, healthcare providers can offer plain-language explanations about the difference between bacterial and viral infections, alongside clear guidance on managing viral illnesses without antibiotics. Vaccinations also play a vital role in reducing the incidence of serious bacterial infections that may require antibiotic treatment, thereby helping to preserve the effectiveness of existing antibiotics. Schools further amplify the spread of infections due to close contact and shared surfaces, underscoring the importance of hygiene practices like regular handwashing, covering coughs, and staying home when unwell. These preventive measures not only reduce the need for antibiotics but also lower the overall risk of resistant bacteria spreading within communities. The prescriber should closely adhere to the five rights of drug administration: the right patient, the right drug, the right dose, the right route, and the right time. Microbiological samples should be taken for culture and sensitivity testing before treatment when indicated and treatment potentially changed based on the susceptibility report. Health workers and pharmacists can help tackle antibiotic resistance by: enhancing infection prevention and control; only prescribing and dispensing antibiotics when they are truly needed; prescribing and dispensing the right antibiotic(s) to treat the illness. A unit dose system implemented in community pharmacies can also reduce antibiotic leftovers at households. At the individual level People can help tackle resistance by using antibiotics only when infected with a bacterial infection and prescribed by a doctor; completing the full prescription even if the user is feeling better; and never sharing antibiotics with others or using leftover prescriptions. Taking antibiotics when they are not needed does not help the user; instead, it gives bacteria the opportunity to adapt and leaves the user with the side effects of the particular antibiotic. The CDC recommends following these behaviors to avoid these negative side effects and to keep the community safe from the spread of drug-resistant bacteria. Practicing basic infection prevention measures, such as good hygiene, also helps to prevent the spread of antibiotic-resistant bacteria. Country examples The Netherlands has the lowest rate of antibiotic prescribing in the OECD, at a rate of 11.4 defined daily doses (DDD) per 1,000 people per day in 2011. The defined daily dose (DDD) is a statistical measure of drug consumption, defined by the World Health Organization (WHO). Germany and Sweden also have lower prescribing rates, with Sweden's rate having declined since 2007. Greece, France and Belgium have high prescribing rates for antibiotics of more than 28 DDD. Water, sanitation, hygiene Infectious disease control through improved water, sanitation and hygiene (WASH) infrastructure needs to be included in the antimicrobial resistance (AMR) agenda. The "Interagency Coordination Group on Antimicrobial Resistance" stated in 2018 that "the spread of pathogens through unsafe water results in a high burden of gastrointestinal disease, increasing even further the need for antibiotic treatment." This is particularly a problem in developing countries where the spread of infectious diseases caused by inadequate WASH standards is a major driver of antibiotic demand.
Growing usage of antibiotics together with persistent infectious disease levels has led to a dangerous cycle in which reliance on antimicrobials increases while the efficacy of drugs diminishes. The proper use of infrastructure for water, sanitation and hygiene (WASH) can result in a 47–72 percent decrease in diarrhea cases treated with antibiotics depending on the type of intervention and its effectiveness. A reduction of the diarrhea disease burden through improved infrastructure would result in large decreases in the number of diarrhea cases treated with antibiotics. This was estimated as ranging from 5 million in Brazil to up to 590 million in India by the year 2030. The strong link between increased consumption and resistance indicates that this will directly mitigate the accelerating spread of AMR. Sanitation and water for all by 2030 is Goal Number 6 of the Sustainable Development Goals. An increase in hand washing compliance by hospital staff results in decreased rates of resistant organisms. Water supply and sanitation infrastructure in health facilities offer significant co-benefits for combatting AMR, and investment should be increased. There is much room for improvement: WHO and UNICEF estimated in 2015 that globally 38% of health facilities did not have a source of water, nearly 19% had no toilets and 35% had no water and soap or alcohol-based hand rub for handwashing. Industrial wastewater treatment Manufacturers of antimicrobials need to improve the treatment of their wastewater (by using industrial wastewater treatment processes) to reduce the release of residues into the environment. Limiting antimicrobial use in animals and farming It is established that the use of antibiotics in animal husbandry can give rise to resistance, in bacteria found in food animals, to the antibiotics being administered (through injections or medicated feeds). For this reason only antimicrobials that are deemed "not-clinically relevant" are used in these practices. Unlike resistance to antibacterials, antifungal resistance can be driven by arable farming; currently there is no regulation on the use of similar antifungal classes in agriculture and the clinic. Recent studies have shown that the prophylactic use of "non-priority" or "non-clinically relevant" antimicrobials in feeds can potentially, under certain conditions, lead to co-selection of environmental AMR bacteria with resistance to medically important antibiotics. The possibility of co-selection of resistance in the food chain may have far-reaching implications for human health. Country examples Europe In 1997, European Union health ministers voted to ban avoparcin, and in 1999 they banned four additional antibiotics used to promote animal growth. In 2006 a ban on the use of antibiotics in European feed, with the exception of two antibiotics in poultry feeds, became effective. In Scandinavia, there is evidence that the ban has led to a lower prevalence of antibiotic resistance in (nonhazardous) animal bacterial populations. As of 2004, several European countries had established a decline in antimicrobial resistance in humans through limiting the use of antimicrobials in agriculture and food industries without jeopardizing animal health or economic cost. United States The United States Department of Agriculture (USDA) and the Food and Drug Administration (FDA) collect data on antibiotic use in humans and in a more limited fashion in animals. About 80% of antibiotic use in the U.S.
is for agricultural purposes, and about 70% of these are medically important. This gives reason for concern about the antibiotic resistance crisis in the U.S. and more reason to monitor it. The FDA first determined in 1977 that there was evidence of the emergence of antibiotic-resistant bacterial strains in livestock. The long-established practice of permitting OTC sales of antibiotics (including penicillin and other drugs) to lay animal owners for administration to their own animals nonetheless continued in all states. In 2000, the FDA announced its intention to revoke approval of fluoroquinolone use in poultry production because of substantial evidence linking it to the emergence of fluoroquinolone-resistant Campylobacter infections in humans. Legal challenges from the food animal and pharmaceutical industries delayed the final decision to do so until 2006. Fluoroquinolones have been banned from extra-label use in food animals in the USA since 2007. However, they remain widely used in companion and exotic animals. Global action plans and awareness At the 79th United Nations General Assembly High-Level Meeting on AMR on 26 September 2024, world leaders approved a political declaration committing to a clear set of targets and actions, including reducing the estimated 4.95 million human deaths associated with bacterial AMR annually by 10% by 2030. The increasing interconnectedness of the world and the fact that new classes of antibiotics have not been developed and approved for more than 25 years highlight the extent to which antimicrobial resistance is a global health challenge. A global action plan to tackle the growing problem of resistance to antibiotics and other antimicrobial medicines was endorsed at the Sixty-eighth World Health Assembly in May 2015. One of the key objectives of the plan is to improve awareness and understanding of antimicrobial resistance through effective communication, education and training. This global action plan developed by the World Health Organization was created to combat the issue of antimicrobial resistance and was guided by the advice of countries and key stakeholders. The WHO's global action plan is composed of five key objectives that can be targeted through different means, and represents countries coming together to solve a major problem that can have future health consequences. These objectives are as follows: improve awareness and understanding of antimicrobial resistance through effective communication, education and training. strengthen the knowledge and evidence base through surveillance and research. reduce the incidence of infection through effective sanitation, hygiene and infection prevention measures. optimize the use of antimicrobial medicines in human and animal health. develop the economic case for sustainable investment that takes account of the needs of all countries and to increase investment in new medicines, diagnostic tools, vaccines and other interventions. Steps towards progress React, based in Sweden, has produced informative material on AMR for the general public. Videos are being produced for the general public to generate interest and awareness. The Irish Department of Health published a National Action Plan on Antimicrobial Resistance in October 2017. The Strategy for the Control of Antimicrobial Resistance in Ireland (SARI), launched in 2001, developed Guidelines for Antimicrobial Stewardship in Hospitals in Ireland in conjunction with the Health Protection Surveillance Centre; these were published in 2009.
Following their publication a public information campaign 'Action on Antibiotics' was launched to highlight the need for a change in antibiotic prescribing. Despite this, antibiotic prescribing remains high with variance in adherence to guidelines. The United Kingdom published a 20-year vision for antimicrobial resistance that sets out the goal of containing and controlling AMR by 2040. The vision is supplemented by a 5-year action plan running from 2019 to 2024, building on the previous action plan (2013–2018). The World Health Organization has published the 2024 Bacterial Priority Pathogens List which covers 15 families of antibiotic-resistant bacterial pathogens. Notable among these are gram-negative bacteria resistant to last-resort antibiotics, drug-resistant mycobacterium tuberculosis, and other high-burden resistant pathogens such as Salmonella, Shigella, Neisseria gonorrhoeae, Pseudomonas aeruginosa, and Staphylococcus aureus. The inclusion of these pathogens in the list underscores their global impact in terms of burden, as well as issues related to transmissibility, treatability, and prevention options. It also reflects the R&D pipeline of new treatments and emerging resistance trends. Antibiotic Awareness Week The World Health Organization has promoted the first World Antibiotic Awareness Week running from 16 to 22 November 2015. The aim of the week is to increase global awareness of antibiotic resistance. It also wants to promote the correct usage of antibiotics across all fields in order to prevent further instances of antibiotic resistance. World Antibiotic Awareness Week has been held every November since 2015. For 2017, the Food and Agriculture Organization of the United Nations (FAO), the World Health Organization (WHO) and the World Organisation for Animal Health (OIE) are together calling for responsible use of antibiotics in humans and animals to reduce the emergence of antibiotic resistance. United Nations In 2016 the Secretary-General of the United Nations convened the Interagency Coordination Group (IACG) on Antimicrobial Resistance. The IACG worked with international organizations and experts in human, animal, and plant health to create a plan to fight antimicrobial resistance. Their report released in April 2019 highlights the seriousness of antimicrobial resistance and the threat it poses to world health. It suggests five recommendations for member states to follow in order to tackle this increasing threat. The IACG recommendations are as follows: Accelerate progress in countries Innovate to secure the future Collaborate for more effective action Invest for a sustainable response Strengthen accountability and global governance Mechanisms and organisms Bacteria The five main mechanisms by which bacteria exhibit resistance to antibiotics are: Drug inactivation or modification: for example, enzymatic deactivation of penicillin G in some penicillin-resistant bacteria through the production of β-lactamases. Drugs may also be chemically modified through the addition of functional groups by transferase enzymes; for example, acetylation, phosphorylation, or adenylation are common resistance mechanisms to aminoglycosides. Acetylation is the most widely used mechanism and can affect a number of drug classes. Alteration of target- or binding site: for example, alteration of PBP—the binding target site of penicillins—in MRSA and other penicillin-resistant bacteria. Another protective mechanism found among bacterial species is ribosomal protection proteins. 
These proteins protect the bacterial cell from antibiotics that target the cell's ribosomes to inhibit protein synthesis. The mechanism involves the binding of the ribosomal protection proteins to the ribosomes of the bacterial cell, which in turn changes its conformational shape. This allows the ribosomes to continue synthesizing proteins essential to the cell while preventing antibiotics from binding to the ribosome to inhibit protein synthesis. Alteration of metabolic pathway: for example, some sulfonamide-resistant bacteria do not require para-aminobenzoic acid (PABA), an important precursor for the synthesis of folic acid and nucleic acids in bacteria inhibited by sulfonamides; instead, like mammalian cells, they turn to using preformed folic acid. Reduced drug accumulation: by decreasing drug permeability or increasing active efflux (pumping out) of the drugs across the cell surface. These pumps within the cellular membrane of certain bacterial species are used to pump antibiotics out of the cell before they are able to do any damage. They are often activated by a specific substrate associated with an antibiotic, as in fluoroquinolone resistance. Ribosome splitting and recycling: for example, drug-mediated stalling of the ribosome by lincomycin and erythromycin can be relieved ("unstalled") by a heat shock protein found in Listeria monocytogenes, which is a homologue of HflX from other bacteria. Liberation of the ribosome from the drug allows further translation and consequent resistance to the drug. Several different types of pathogens have developed resistance over time. The six pathogens causing most deaths associated with resistance are Escherichia coli, Staphylococcus aureus, Klebsiella pneumoniae, Streptococcus pneumoniae, Acinetobacter baumannii, and Pseudomonas aeruginosa. They were responsible for 929,000 deaths attributable to resistance and 3.57 million deaths associated with resistance in 2019. Penicillinase-producing Neisseria gonorrhoeae developed resistance to penicillin in 1976. Another example is azithromycin-resistant Neisseria gonorrhoeae, which developed resistance to azithromycin in 2011. In gram-negative bacteria, plasmid-mediated resistance genes produce proteins that can bind to DNA gyrase, protecting it from the action of quinolones. Finally, mutations at key sites in DNA gyrase or topoisomerase IV can decrease their binding affinity to quinolones, decreasing the drug's effectiveness. Some bacteria are naturally resistant to certain antibiotics; for example, gram-negative bacteria are resistant to most β-lactam antibiotics due to the presence of β-lactamase. Antibiotic resistance can also be acquired as a result of either genetic mutation or horizontal gene transfer. Although mutations are rare, with spontaneous mutations in the pathogen genome occurring at a rate of about 1 in 10⁵ to 1 in 10⁸ per chromosomal replication, the fact that bacteria reproduce at a high rate allows for the effect to be significant. Given that lifespans and production of new generations can be on a timescale of mere hours, a new (de novo) mutation in a parent cell can quickly become an inherited mutation of widespread prevalence, resulting in the microevolution of a fully resistant colony. However, chromosomal mutations also confer a cost of fitness. For example, a ribosomal mutation may protect a bacterial cell by changing the binding site of an antibiotic but may result in a slower growth rate.
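As a rough illustration of the mutation-rate arithmetic above, the following minimal sketch (hypothetical numbers, not from the source) estimates how many cells carrying a de novo resistance mutation might be expected in a culture that has grown to about 10⁹ cells, across the cited range of mutation rates:

# Rough estimate of de novo resistant mutants expected in a grown culture.
# Assumptions (illustrative only): each cell division is one chromosomal
# replication, and early-arising mutants are not preferentially amplified
# (i.e. Luria-Delbrueck "jackpot" effects are ignored).

def expected_resistant_mutants(final_population: float, mutation_rate: float) -> float:
    # Growing from one cell to N cells takes roughly N divisions in total,
    # so the expected number of resistance mutation events is about N * rate.
    return final_population * mutation_rate

if __name__ == "__main__":
    population = 1e9  # cells in a typical dense culture (assumed value)
    for rate in (1e-8, 1e-6, 1e-5):  # cited range: ~1 in 10^8 to 1 in 10^5
        print(f"rate {rate:.0e}: ~{expected_resistant_mutants(population, rate):,.0f} resistant cells expected")

Even at the lowest cited rate, on the order of ten resistant cells are expected in a single dense culture, which is why antibiotic exposure can so quickly enrich a resistant subpopulation.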
Moreover, some adaptive mutations can propagate not only through inheritance but also through horizontal gene transfer. The most common mechanism of horizontal gene transfer is the transfer of plasmids carrying antibiotic resistance genes between bacteria of the same or different species via conjugation. However, bacteria can also acquire resistance through transformation, as in the uptake by Streptococcus pneumoniae of naked fragments of extracellular DNA containing genes for resistance to streptomycin; through transduction, as in the bacteriophage-mediated transfer of tetracycline resistance genes between strains of S. pyogenes; or through gene transfer agents, which are particles produced by the host cell that resemble bacteriophage structures and are capable of transferring DNA. Antibiotic resistance can be introduced artificially into a microorganism through laboratory protocols, sometimes used as a selectable marker to examine the mechanisms of gene transfer or to identify individuals that absorbed a piece of DNA that included the resistance gene and another gene of interest. Recent findings show that large bacterial populations are not necessary for antibiotic resistance to appear. Small populations of Escherichia coli in an antibiotic gradient can become resistant. Any environment that is heterogeneous with respect to nutrient and antibiotic gradients may facilitate antibiotic resistance in small bacterial populations. Researchers hypothesize that the mechanism of resistance evolution is based on four SNP mutations in the genome of E. coli produced by the antibiotic gradient. In one study, which has implications for space microbiology, a non-pathogenic strain of E. coli, MG1655, was exposed to trace levels of the broad-spectrum antibiotic chloramphenicol under simulated microgravity (LSMMG, or Low Shear Modeled Microgravity) over 1000 generations. The adapted strain acquired resistance not only to chloramphenicol but also cross-resistance to other antibiotics; in contrast, the same strain adapted for over 1000 generations under LSMMG without any antibiotic exposure did not acquire any such resistance. Thus, irrespective of the setting in which it is used, an antibiotic would likely produce persistent resistance to that antibiotic, as well as cross-resistance to other antimicrobials. In recent years, the emergence and spread of β-lactamases called carbapenemases has become a major health crisis. One such carbapenemase is New Delhi metallo-beta-lactamase 1 (NDM-1), an enzyme that makes bacteria resistant to a broad range of beta-lactam antibiotics. The most common bacteria that make this enzyme are gram-negative, such as E. coli and Klebsiella pneumoniae, but the gene for NDM-1 can spread from one strain of bacteria to another by horizontal gene transfer. Viruses Specific antiviral drugs are used to treat some viral infections. These drugs prevent viruses from reproducing by inhibiting essential stages of the virus's replication cycle in infected cells. Antivirals are used to treat HIV, hepatitis B, hepatitis C, influenza, and herpes viruses including varicella zoster virus, cytomegalovirus and Epstein–Barr virus. With each virus, some strains have become resistant to the administered drugs. Antiviral drugs typically target key components of viral reproduction; for example, oseltamivir targets influenza neuraminidase, while guanosine analogs inhibit viral DNA polymerase.
Resistance to antivirals is thus acquired through mutations in the genes that encode the protein targets of the drugs. Resistance to HIV antivirals is problematic, and even multi-drug resistant strains have evolved. One source of resistance is that many current HIV drugs, including NRTIs and NNRTIs, target reverse transcriptase; however, HIV-1 reverse transcriptase is highly error-prone and thus mutations conferring resistance arise rapidly. Resistant strains of HIV emerge rapidly if only one antiviral drug is used. Using three or more drugs together, termed combination therapy, has helped to control this problem, but new drugs are needed because of the continuing emergence of drug-resistant HIV strains. Fungi Infections by fungi are a cause of high morbidity and mortality in immunocompromised persons, such as those with HIV/AIDS or tuberculosis, or those receiving chemotherapy. The fungi Candida, Cryptococcus neoformans and Aspergillus fumigatus cause most of these infections, and antifungal resistance occurs in all of them. Multidrug resistance in fungi is increasing because of the widespread use of antifungal drugs to treat infections in immunocompromised individuals and the use of some agricultural antifungals. Antifungal-resistant disease is associated with increased mortality. Some fungi exhibit intrinsic resistance to certain antifungal drugs or classes (e.g. Candida krusei to fluconazole), whereas some species develop antifungal resistance in response to external pressures. Antifungal resistance is a One Health concern, driven by multiple extrinsic factors, including extensive fungicidal use, overuse of clinical antifungals, environmental change and host factors. In the USA, fluconazole-resistant Candida species and azole resistance in Aspergillus fumigatus have been highlighted as a growing threat. More than 20 species of Candida can cause candidiasis infection, the most common of which is Candida albicans. Candida yeasts normally inhabit the skin and mucous membranes without causing infection. However, overgrowth of Candida can lead to candidiasis. Some Candida species (e.g. Candida glabrata) are becoming resistant to first-line and second-line antifungal agents such as echinocandins and azoles. The emergence of Candida auris as a potential human pathogen that sometimes exhibits multi-class antifungal drug resistance is concerning and has been associated with several outbreaks globally. The WHO has released a priority fungal pathogen list, including pathogens with antifungal resistance. The identification of antifungal resistance is undermined by limited classical diagnosis of infection, where a culture is lacking, preventing susceptibility testing. National and international surveillance schemes for fungal disease and antifungal resistance are limited, hampering the understanding of the disease burden and associated resistance. The application of molecular testing to identify genetic markers associated with resistance may improve the identification of antifungal resistance, but the diversity of mutations associated with resistance is increasing across the fungal species causing infection. In addition, a number of resistance mechanisms depend on up-regulation of selected genes (for instance efflux pumps) rather than defined mutations that are amenable to molecular detection.
Due to the limited number of antifungals in clinical use and the increasing global incidence of antifungal resistance, using the existing antifungals in combination might be beneficial in some cases, but further research is needed. Similarly, other approaches that might help to combat the emergence of antifungal resistance could rely on the development of host-directed therapies such as immunotherapy or vaccines. Parasites The protozoan parasites that cause the diseases malaria, trypanosomiasis, toxoplasmosis, cryptosporidiosis and leishmaniasis are important human pathogens. Malaria parasites that are resistant to currently available drugs are common, and this has led to increased efforts to develop new drugs. Resistance to recently developed drugs such as artemisinin has also been reported. The problem of drug resistance in malaria has driven efforts to develop vaccines. Trypanosomes are parasitic protozoa that cause African trypanosomiasis and Chagas disease (American trypanosomiasis). There are no vaccines to prevent these infections, so drugs such as pentamidine and suramin, benznidazole and nifurtimox are used to treat infections. These drugs are effective, but infections caused by resistant parasites have been reported. Leishmaniasis is caused by protozoa and is an important public health problem worldwide, especially in sub-tropical and tropical countries. Drug resistance has "become a major concern". Global and genomic data In 2022, genomic epidemiologists reported results from a global survey of antimicrobial resistance via genomic wastewater-based epidemiology, finding large regional variations, providing maps, and suggesting resistance genes are also passed on between microbial species that are not closely related. The WHO provides the Global Antimicrobial Resistance and Use Surveillance System (GLASS) reports, which summarize annual (e.g. 2020's) data on international AMR, also including an interactive dashboard. Epidemiology United Kingdom Public Health England reported that the total number of antibiotic-resistant infections in England rose by 9% from 55,812 in 2017 to 60,788 in 2018, but antibiotic consumption had fallen by 9% from 20.0 to 18.2 defined daily doses per 1,000 inhabitants per day between 2014 and 2018. United States The Centers for Disease Control and Prevention has reported more than 2.8 million antibiotic-resistant infections. However, in 2019 overall deaths from antibiotic-resistant infections decreased by 18% and deaths in hospitals decreased by 30%. The COVID-19 pandemic caused a reversal of much of the progress made on attenuating the effects of antibiotic resistance, resulting in more antibiotic use, more resistant infections, and less data on preventive action. Hospital-onset infections and deaths both increased by 15% in 2020, and significantly higher rates of infections were reported for 4 out of 6 types of healthcare-associated infections. History The 1950s to 1970s represented the golden age of antibiotic discovery, during which numerous new classes of antibiotics were discovered to treat previously incurable diseases such as tuberculosis and syphilis. However, since that time the discovery of new classes of antibiotics has been almost nonexistent, a situation that is especially problematic considering the resiliency of bacteria shown over time and the continued misuse and overuse of antibiotics in treatment.
The phenomenon of antimicrobial resistance caused by overuse of antibiotics was predicted as early as 1945 by Alexander Fleming, who said "The time may come when penicillin can be bought by anyone in the shops. Then there is the danger that the ignorant man may easily under-dose himself and by exposing his microbes to nonlethal quantities of the drug make them resistant." Without the creation of new and stronger antibiotics, an era in which common infections and minor injuries can kill, and in which complex procedures such as surgery and chemotherapy become too risky, is a very real possibility. Antimicrobial resistance can lead to epidemics of enormous proportions if preventive actions are not taken. Antimicrobial resistance already leads to longer hospital stays, higher medical costs, and increased mortality. Society and culture Innovation policy Since the mid-1980s pharmaceutical companies have invested in medications for cancer or chronic disease that have greater potential to make money and have "de-emphasized or dropped development of antibiotics". On 20 January 2016 at the World Economic Forum in Davos, Switzerland, more than "80 pharmaceutical and diagnostic companies" from around the world called for "transformational commercial models" at a global level to spur research and development on antibiotics and on the "enhanced use of diagnostic tests that can rapidly identify the infecting organism". A number of countries are considering or implementing delinked payment models for new antimicrobials whereby payment is based on value rather than volume of drug sales. This offers the opportunity to pay for valuable new drugs even if they are reserved for use in relatively rare drug-resistant infections. Legal frameworks Some global health scholars have argued that a global legal framework is needed to prevent and control antimicrobial resistance. For instance, binding global policies could be used to create antimicrobial use standards, regulate antibiotic marketing, and strengthen global surveillance systems. Ensuring compliance of involved parties is a challenge. Global antimicrobial resistance policies could take lessons from the environmental sector by adopting strategies that have made international environmental agreements successful in the past, such as sanctions for non-compliance, assistance for implementation, majority-vote decision-making rules, an independent scientific panel, and specific commitments. United States For the United States 2016 budget, U.S. president Barack Obama proposed to nearly double the amount of federal funding to "combat and prevent" antibiotic resistance to more than $1.2 billion. Many international funding agencies, such as USAID, DFID, SIDA and the Bill & Melinda Gates Foundation, have pledged money for developing strategies to counter antimicrobial resistance. On 27 March 2015, the White House released a comprehensive plan to address the increasing need for agencies to combat the rise of antibiotic-resistant bacteria. The Task Force for Combating Antibiotic-Resistant Bacteria developed The National Action Plan for Combating Antibiotic-Resistant Bacteria with the intent of providing a roadmap to guide the US in the antibiotic resistance challenge and with hopes of saving many lives.
This plan outlines steps to be taken by the Federal government over the next five years to prevent and contain outbreaks of antibiotic-resistant infections; maintain the efficacy of antibiotics already on the market; and help to develop future diagnostics, antibiotics, and vaccines. The Action Plan was developed around five goals, with a focus on strengthening health care, public health, veterinary medicine, agriculture, food safety and research, and manufacturing. These goals, as listed by the White House, are as follows:
Slow the Emergence of Resistant Bacteria and Prevent the Spread of Resistant Infections
Strengthen National One-Health Surveillance Efforts to Combat Resistance
Advance Development and Use of Rapid and Innovative Diagnostic Tests for Identification and Characterization of Resistant Bacteria
Accelerate Basic and Applied Research and Development for New Antibiotics, Other Therapeutics, and Vaccines
Improve International Collaboration and Capacities for Antibiotic Resistance Prevention, Surveillance, Control and Antibiotic Research and Development
The following goals were set to be met by 2020:
Establishment of antimicrobial programs within acute care hospital settings
Reduction of inappropriate antibiotic prescription and use by at least 50% in outpatient settings and 20% in inpatient settings
Establishment of State Antibiotic Resistance (AR) Prevention Programs in all 50 states
Elimination of the use of medically important antibiotics for growth promotion in food-producing animals.
Current Status of AMR in the U.S. As of 2023, antimicrobial resistance (AMR) remains a significant public health threat in the United States. According to the Centers for Disease Control and Prevention's 2023 Report on Antibiotic Resistance Threats, over 2.8 million antibiotic-resistant infections occur in the U.S. each year, leading to at least 35,000 deaths annually. Among the most concerning resistant pathogens are carbapenem-resistant Enterobacteriaceae (CRE), methicillin-resistant Staphylococcus aureus (MRSA), and Clostridioides difficile (C. diff), all of which continue to be responsible for severe healthcare-associated infections (HAIs). The COVID-19 pandemic led to a significant disruption in healthcare, with an increase in the use of antibiotics during the treatment of viral infections. This rise in antibiotic prescribing, coupled with overwhelmed healthcare systems, contributed to a resurgence in AMR during the pandemic years. A 2021 CDC report identified a sharp increase in HAIs caused by resistant pathogens in COVID-19 patients, a trend that has persisted into 2023. Recent data suggest that although antibiotic use has decreased since the pandemic, some resistant pathogens remain prevalent in healthcare settings. The CDC has also expanded its Get Ahead of Sepsis campaign in 2023, focusing on raising awareness of AMR's role in sepsis and promoting the judicious use of antibiotics in both healthcare and community settings. This initiative has reached millions through social media, healthcare facilities, and public health outreach, aiming to educate the public on the importance of preventing infections and reducing antibiotic misuse. Policies According to the World Health Organization, policymakers can help tackle resistance by strengthening resistance-tracking and laboratory capacity and by regulating and promoting the appropriate use of medicines.
Policymakers and industry can help tackle resistance by fostering innovation and the research and development of new tools, and by promoting cooperation and information sharing among all stakeholders. The U.S. government continues to prioritize AMR mitigation through policy and legislation. In 2023, the National Action Plan for Combating Antibiotic-Resistant Bacteria (CARB) 2023–2028 was released, outlining strategic objectives for reducing antibiotic-resistant infections, advancing infection prevention, and accelerating research on new antibiotics. The plan also emphasizes the importance of improving antibiotic stewardship across healthcare, agriculture, and veterinary settings. Furthermore, the PASTEUR Act (Pioneering Antimicrobial Subscriptions to End Upsurging Resistance) has gained momentum in Congress. If passed, the bill would create a subscription-based payment model to incentivize the development of new antimicrobial drugs, while supporting antimicrobial stewardship programs to reduce the misuse of existing antibiotics. This legislation is considered a critical step toward addressing the economic barriers to developing new antimicrobials. Policy evaluation Measuring the costs and benefits of strategies to combat AMR is difficult, and policies may only have effects in the distant future. In other infectious diseases this problem has been addressed by using mathematical models. More research is needed to understand how AMR develops and spreads so that mathematical modelling can be used to anticipate the likely effects of different policies. Further research Rapid testing and diagnostics Distinguishing infections requiring antibiotics from self-limiting ones is clinically challenging. In order to guide appropriate use of antibiotics and prevent the evolution and spread of antimicrobial resistance, diagnostic tests that provide clinicians with timely, actionable results are needed. Acute febrile illness is a common reason for seeking medical care worldwide and a major cause of morbidity and mortality. In areas with decreasing malaria incidence, many febrile patients are inappropriately treated for malaria, and in the absence of a simple diagnostic test to identify alternative causes of fever, clinicians presume that a non-malarial febrile illness is most likely a bacterial infection, leading to inappropriate use of antibiotics. Multiple studies have shown that the use of malaria rapid diagnostic tests without reliable tools to distinguish other fever causes has resulted in increased antibiotic use. Antimicrobial susceptibility testing (AST) can facilitate a precision medicine approach to treatment by helping clinicians to prescribe more effective and targeted antimicrobial therapy. With traditional phenotypic AST, however, it can take 12 to 48 hours to obtain a result because of the time needed for organisms to grow on or in culture media. Rapid testing, made possible by innovations in molecular diagnostics, is defined as "being feasible within an 8-h working shift". There are several commercial Food and Drug Administration-approved assays available which can detect AMR genes from a variety of specimen types. Progress has been slow for a range of reasons, including cost and regulation. Genotypic AMR characterisation methods are, however, being increasingly used in combination with machine learning algorithms in research to help better predict phenotypic AMR from organism genotype.
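The genotype-to-phenotype prediction mentioned above can be illustrated with a minimal sketch; the gene panel, isolate data and choice of classifier below are hypothetical stand-ins, not any published model:

# Minimal sketch: predict a resistant/susceptible phenotype from a gene
# presence/absence profile with a simple classifier. Real pipelines derive
# features from curated databases and whole genomes; all data here are made up.
from sklearn.linear_model import LogisticRegression

genes = ["blaCTX-M", "blaNDM-1", "mecA", "vanA", "gyrA_mutation"]  # assumed feature set

X_train = [  # one row per isolate: 1 = gene detected, 0 = absent
    [1, 0, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 1],
]
y_train = [1, 1, 1, 0, 1, 0]  # phenotype from culture-based AST: 1 = resistant

model = LogisticRegression().fit(X_train, y_train)

new_isolate = [[1, 0, 0, 0, 1]]  # genotype of an unseen isolate (made up)
label = "resistant" if model.predict(new_isolate)[0] == 1 else "susceptible"
print("predicted phenotype:", label)

In practice such models are trained on thousands of sequenced isolates with paired susceptibility results, and their predictions are validated against phenotypic AST before any clinical use.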
Optical techniques such as phase-contrast microscopy in combination with single-cell analysis are another powerful method to monitor bacterial growth. In 2017, scientists from Uppsala University in Sweden published a method that applies principles of microfluidics and cell tracking to monitor bacterial responses to antibiotics in less than 30 minutes of overall manipulation time. This invention was awarded the £8 million Longitude Prize on AMR in 2024. Recently, this platform has been advanced by coupling a microfluidic chip with optical tweezers in order to isolate bacteria with an altered phenotype directly from the analytical matrix. Rapid diagnostic methods have also been trialled as antimicrobial stewardship interventions to influence the healthcare drivers of AMR. Serum procalcitonin measurement has been shown to reduce mortality rate, antimicrobial consumption and antimicrobial-related side-effects in patients with respiratory infections, but its impact on AMR has not yet been demonstrated. Similarly, point-of-care serum testing of the inflammatory biomarker C-reactive protein has been shown to influence antimicrobial prescribing rates in this patient cohort, but further research is required to demonstrate an effect on rates of AMR. Clinical investigations to rule out bacterial infections are often done for pediatric patients with acute respiratory infections. Currently it is unclear whether rapid viral testing affects antibiotic use in children. Vaccines Vaccines are an essential part of the response to reduce AMR, as they prevent infections, reduce the use and overuse of antimicrobials, and slow the emergence and spread of drug-resistant pathogens. Microorganisms usually do not develop resistance to vaccines because vaccines reduce the spread of the infection and target the pathogen in multiple ways in the same host and possibly in different ways between different hosts. Furthermore, if the use of vaccines increases, there is evidence that antibiotic-resistant strains of pathogens will decrease; the need for antibiotics will naturally decrease as vaccines prevent infection before it occurs. A 2024 report by the WHO finds that vaccines against 24 pathogens could reduce the number of antibiotics needed by 22%, or 2.5 billion defined daily doses globally every year. If vaccines could be rolled out against all the evaluated pathogens, they could save a third of the hospital costs associated with AMR. Vaccinated people have fewer infections and are protected against potential complications from secondary infections that may need antimicrobial medicines or require admission to hospital. However, there are well-documented cases of vaccine resistance, although these are usually much less of a problem than antimicrobial resistance. While theoretically promising, antistaphylococcal vaccines have shown limited efficacy, because of immunological variation between Staphylococcus species and the limited duration of effectiveness of the antibodies produced. Development and testing of more effective vaccines is underway. Two registrational trials have evaluated vaccine candidates in active immunization strategies against S. aureus infection. In a phase II trial, a bivalent vaccine of capsular proteins 5 & 8 was tested in 1804 hemodialysis patients with a primary fistula or synthetic graft vascular access. At 40 weeks after vaccination a protective effect was seen against S. aureus bacteremia, but not at 54 weeks after vaccination. Based on these results, a second trial was conducted, which failed to show efficacy.
Merck tested V710, a vaccine targeting IsdB, in a blinded randomized trial in patients undergoing median sternotomy. The trial was terminated after a higher rate of multiorgan system failure–related deaths was found in the V710 recipients. Vaccine recipients who developed S. aureus infection were five times more likely to die than control recipients who developed S. aureus infection. Numerous investigators have suggested that a multiple-antigen vaccine would be more effective, but a lack of biomarkers defining human protective immunity keeps these proposals in the logical, but strictly hypothetical, arena. Antibody therapy Antibody therapy is a promising approach against antimicrobial resistance. Monoclonal antibodies (mAbs) target bacterial virulence factors, aiding in bacterial destruction through various mechanisms. Three FDA-approved antibodies target B. anthracis and C. difficile toxins. Innovative strategies include DSTA4637S, an antibody-antibiotic conjugate, and MEDI13902, a bispecific antibody targeting Pseudomonas aeruginosa components. Alternating therapy Alternating therapy is a proposed method in which two or three antibiotics are taken in rotation, rather than taking just one antibiotic, so that bacteria resistant to one antibiotic are killed when the next antibiotic is taken. Studies have found that this method reduces the rate at which antibiotic-resistant bacteria emerge in vitro relative to a single drug for the entire duration. Studies have also found that bacteria that evolve antibiotic resistance towards one group of antibiotics may become more sensitive to others. This phenomenon can be used to select against resistant bacteria using an approach termed collateral sensitivity cycling, which has recently been found to be relevant in developing treatment strategies for chronic infections caused by Pseudomonas aeruginosa. Despite its promise, large-scale clinical and experimental studies have revealed limited evidence of susceptibility to antibiotic cycling across various pathogens. Development of new drugs Since the discovery of antibiotics, research and development (R&D) efforts have provided new drugs in time to treat bacteria that became resistant to older antibiotics, but in the 2000s there has been concern that development has slowed enough that seriously ill people may run out of treatment options. Another concern is that practitioners may become reluctant to perform routine surgeries because of the increased risk of harmful infection. Backup treatments can have serious side-effects; for example, antibiotics like aminoglycosides (such as amikacin, gentamicin, kanamycin, and streptomycin) used for the treatment of drug-resistant tuberculosis and cystic fibrosis can cause respiratory disorders, deafness and kidney failure. The potential crisis at hand is the result of a marked decrease in industry research and development. Poor financial investment in antibiotic research has exacerbated the situation. The pharmaceutical industry has little incentive to invest in antibiotics because of the high risk and because the potential financial returns are less likely to cover the cost of development than for other pharmaceuticals. In 2011, Pfizer, one of the last major pharmaceutical companies developing new antibiotics, shut down its primary research effort, citing poor shareholder returns relative to drugs for chronic illnesses. However, small and medium-sized pharmaceutical companies are still active in antibiotic drug research.
In particular, apart from classical synthetic chemistry methodologies, researchers have developed a combinatorial synthetic biology platform at the single-cell level in a high-throughput screening manner to diversify novel lanthipeptides. In the 5–10 years since 2010, there has been a significant change in the ways new antimicrobial agents are discovered and developed – principally via the formation of public-private funding initiatives. These include CARB-X, which focuses on nonclinical and early-phase development of novel antibiotics, vaccines and rapid diagnostics; Novel Gram Negative Antibiotic (GNA-NOW), which is part of the EU's Innovative Medicines Initiative; and the Replenishing and Enabling the Pipeline for Anti-infective Resistance Impact Fund (REPAIR). Later-stage clinical development is supported by the AMR Action Fund, which in turn is supported by multiple investors with the aim of developing 2–4 new antimicrobial agents by 2030. The delivery of these trials is facilitated by national and international networks supported by the Clinical Research Network of the National Institute for Health and Care Research (NIHR), the European Clinical Research Alliance in Infectious Diseases (ECRAID) and the recently formed ADVANCE-ID, a clinical research network based in Asia. The Global Antibiotic Research and Development Partnership (GARDP) is generating new evidence for global AMR threats such as neonatal sepsis, treatment of serious bacterial infections and sexually transmitted infections, as well as addressing global access to new and strategically important antibacterial drugs. The discovery and development of new antimicrobial agents has been facilitated by regulatory advances, which have been principally led by the European Medicines Agency (EMA) and the Food and Drug Administration (FDA). These processes are increasingly aligned, although important differences remain and drug developers must prepare separate documents. New development pathways have been created to help with the approval of new antimicrobial agents that address unmet needs, such as the Limited Population Pathway for Antibacterial and Antifungal Drugs (LPAD). These new pathways are required because of difficulties in conducting large definitive phase III clinical trials in a timely way. Some of the economic impediments to the development of new antimicrobial agents have been addressed by innovative reimbursement schemes that delink payment for antimicrobials from volume-based sales. In the UK, a market entry reward scheme has been pioneered by the National Institute for Clinical Excellence (NICE) whereby an annual subscription fee is paid for use of strategically valuable antimicrobial agents – cefiderocol and ceftazidime-avibactam are the first agents to be used in this manner, and the scheme is a potential blueprint for comparable programs in other countries. The available classes of antifungal drugs are still limited, but as of 2021 novel classes of antifungals are being developed and are undergoing various stages of clinical trials to assess performance. Scientists have started using advanced computational approaches with supercomputers for the development of new antibiotic derivatives to deal with antimicrobial resistance. Biomaterials Using antibiotic-free alternatives in bone infection treatment may help decrease the use of antibiotics and thus antimicrobial resistance.
The bone regeneration material bioactive glass S53P4 has been shown to effectively inhibit the growth of up to 50 clinically relevant bacteria, including MRSA and MRSE. Nanomaterials Over recent decades, copper and silver nanomaterials have demonstrated appealing features for the development of a new family of antimicrobial agents. Nanoparticles (1–100 nm) show unique properties and promise as antimicrobial agents against resistant bacteria. Silver (AgNPs) and gold nanoparticles (AuNPs) are extensively studied, disrupting bacterial cell membranes and interfering with protein synthesis. Zinc oxide (ZnO NPs), copper (CuNPs), and silica (SiNPs) nanoparticles also exhibit antimicrobial properties. However, high synthesis costs, potential toxicity, and instability pose challenges. To overcome these, biological synthesis methods and combination therapies with other antimicrobials are being explored. Enhanced biocompatibility and targeting are also under investigation to improve efficacy. Rediscovery of ancient treatments Similar to the situation in malaria therapy, where successful treatments based on ancient recipes have been found, there has already been some success in finding and testing ancient drugs and other treatments that are effective against AMR bacteria. Computational community surveillance One of the key tools identified by the WHO and others for the fight against rising antimicrobial resistance is improved surveillance of the spread and movement of AMR genes through different communities and regions. Recent advances in high-throughput DNA sequencing as a result of the Human Genome Project have resulted in the ability to determine the individual microbial genes in a sample. Along with the availability of databases of known antimicrobial resistance genes, such as the Comprehensive Antibiotic Resistance Database (CARD) and ResFinder, this allows the identification of all the antimicrobial resistance genes within the sample – the so-called "resistome". In doing so, a profile of these genes within a community or environment can be determined, providing insights into how antimicrobial resistance is spreading through a population and allowing for the identification of resistance that is of concern. Phage therapy Phage therapy is the therapeutic use of bacteriophages to treat pathogenic bacterial infections. Phage therapy has many potential applications in human medicine as well as dentistry, veterinary science, and agriculture. Phage therapy relies on the use of naturally occurring bacteriophages to infect and lyse bacteria at the site of infection in a host. Due to current advances in genetics and biotechnology, these bacteriophages can possibly be manufactured to treat specific infections. Phages can be bioengineered to target multidrug-resistant bacterial infections, and their use has the added benefit of sparing beneficial bacteria in the human body. Phages destroy bacterial cell walls and membranes through the use of lytic proteins, which kill bacteria by making many holes from the inside out. Bacteriophages can even digest the biofilms that many bacteria develop to protect themselves from antibiotics, allowing the phages to infect and kill the bacteria effectively. Bioengineering can play a role in creating successful bacteriophages. Understanding the mutual interactions and evolution of bacterial and phage populations in the environment of a human or animal body is essential for rational phage therapy.
Bacteriophages are used against antibiotic-resistant bacteria in Georgia (George Eliava Institute) and in one institute in Wrocław, Poland. Bacteriophage cocktails are common drugs sold over the counter in pharmacies in eastern countries. In Belgium, four patients with severe musculoskeletal infections received bacteriophage therapy with concomitant antibiotics. After a single course of phage therapy, no recurrence of infection occurred and no severe side-effects related to the therapy were detected.
Biology and health sciences
Anti-infectives
Health
1915
https://en.wikipedia.org/wiki/Antigen
Antigen
In immunology, an antigen (Ag) is a molecule, moiety, foreign particulate matter, or an allergen, such as pollen, that can bind to a specific antibody or T-cell receptor. The presence of antigens in the body may trigger an immune response. Antigens can be proteins, peptides (amino acid chains), polysaccharides (chains of simple sugars), lipids, or nucleic acids. Antigens exist on normal cells, cancer cells, parasites, viruses, fungi, and bacteria. Antigens are recognized by antigen receptors, including antibodies and T-cell receptors. Diverse antigen receptors are made by cells of the immune system so that each cell has a specificity for a single antigen. Upon exposure to an antigen, only the lymphocytes that recognize that antigen are activated and expanded, a process known as clonal selection. In most cases, antibodies are antigen-specific, meaning that an antibody can only react to and bind one specific antigen; in some instances, however, antibodies may cross-react to bind more than one antigen. The reaction between an antigen and an antibody is called the antigen-antibody reaction. Antigens can originate either from within the body ("self-protein" or "self antigens") or from the external environment ("non-self"). The immune system identifies and attacks "non-self" external antigens. Antibodies usually do not react with self-antigens due to negative selection of T cells in the thymus and B cells in the bone marrow. The diseases in which antibodies react with self antigens and damage the body's own cells are called autoimmune diseases. Vaccines are examples of antigens in an immunogenic form, which are intentionally administered to a recipient to induce the memory function of the adaptive immune system towards antigens of the pathogen invading that recipient. The vaccine for seasonal influenza is a common example. Etymology Paul Ehrlich coined the term antibody in his side-chain theory at the end of the 19th century. In 1899, Ladislas Deutsch (László Detre) named the hypothetical substances halfway between bacterial constituents and antibodies "antigenic or immunogenic substances". He originally believed those substances to be precursors of antibodies, just as a zymogen is a precursor of an enzyme. But, by 1903, he understood that an antigen induces the production of immune bodies (antibodies) and wrote that the word antigen is a contraction of antisomatogen. The Oxford English Dictionary indicates that the logical construction should be "anti(body)-gen". The term originally referred to a substance that acts as an antibody generator. Terminology Epitope – the distinct surface features of an antigen, its antigenic determinant. Antigenic molecules, normally "large" biological polymers, usually present surface features that can act as points of interaction for specific antibodies. Any such feature constitutes an epitope. Most antigens have the potential to be bound by multiple antibodies, each of which is specific to one of the antigen's epitopes. Using the "lock and key" metaphor, the antigen can be seen as a string of keys (epitopes), each of which matches a different lock (antibody). Different antibody idiotypes each have distinctly formed complementarity-determining regions. Allergen – A substance capable of causing an allergic reaction. The (detrimental) reaction may result after exposure via ingestion, inhalation, injection, or contact with skin.
Superantigen – A class of antigens that cause non-specific activation of T-cells, resulting in polyclonal T-cell activation and massive cytokine release. Tolerogen – A substance that invokes a specific immune non-responsiveness due to its molecular form. If its molecular form is changed, a tolerogen can become an immunogen. Immunoglobulin-binding protein – Proteins such as protein A, protein G, and protein L that are capable of binding to antibodies at positions outside of the antigen-binding site. While antigens are the "target" of antibodies, immunoglobulin-binding proteins "attack" antibodies. T-dependent antigen – Antigens that require the assistance of T cells to induce the formation of specific antibodies. T-independent antigen – Antigens that stimulate B cells directly. Immunodominant antigens – Antigens that dominate (over all others from a pathogen) in their ability to produce an immune response. T cell responses are typically directed against relatively few immunodominant epitopes, although in some cases (e.g., infection with the malaria pathogen Plasmodium spp.) the response is dispersed over a relatively large number of parasite antigens. Antigen-presenting cells present antigens in the form of peptides on histocompatibility molecules. The T cells selectively recognize the antigens; depending on the antigen and the type of the histocompatibility molecule, different types of T cells will be activated. For T-cell receptor (TCR) recognition, the peptide must be processed into small fragments inside the cell and presented by a major histocompatibility complex (MHC). The antigen cannot elicit the immune response without the help of an immunologic adjuvant. Similarly, the adjuvant component of vaccines plays an essential role in the activation of the innate immune system. An immunogen is an antigen substance (or adduct) that is able to trigger a humoral (antibody-mediated) or cell-mediated immune response. It first initiates an innate immune response, which then causes the activation of the adaptive immune response. An antigen binds the highly variable immunoreceptor products (B-cell receptor or T-cell receptor) once these have been generated. Immunogens are those antigens, termed immunogenic, capable of inducing an immune response. At the molecular level, an antigen can be characterized by its ability to bind to an antibody's paratopes. Different antibodies have the potential to discriminate among specific epitopes present on the antigen surface. A hapten is a small molecule that can only induce an immune response when attached to a larger carrier molecule, such as a protein. Antigens can be proteins, polysaccharides, lipids, nucleic acids or other biomolecules. This includes parts (coats, capsules, cell walls, flagella, fimbriae, and toxins) of bacteria, viruses, and other microorganisms. Non-microbial non-self antigens can include pollen, egg white, and proteins from transplanted tissues and organs or on the surface of transfused blood cells. Sources Antigens can be classified according to their source. Exogenous antigens Exogenous antigens are antigens that have entered the body from the outside, for example, by inhalation, ingestion or injection. The immune system's response to exogenous antigens is often subclinical. By endocytosis or phagocytosis, exogenous antigens are taken into the antigen-presenting cells (APCs) and processed into fragments. APCs then present the fragments to T helper cells (CD4+) by the use of class II histocompatibility molecules on their surface.
Some T cells are specific for the peptide:MHC complex. They become activated and start to secrete cytokines, substances that activate cytotoxic T lymphocytes (CTL), antibody-secreting B cells, macrophages and other cells. Some antigens start out as exogenous and later become endogenous (for example, intracellular viruses). Intracellular antigens can be returned to circulation upon the destruction of the infected cell. Endogenous antigens Endogenous antigens are generated within normal cells as a result of normal cell metabolism, or because of viral or intracellular bacterial infection. The fragments are then presented on the cell surface in complex with MHC class I molecules. If activated cytotoxic CD8+ T cells recognize them, the T cells secrete various toxins that cause the lysis or apoptosis of the infected cell. In order to keep the cytotoxic cells from killing cells just for presenting self-proteins, the cytotoxic cells (self-reactive T cells) are deleted as a result of tolerance (negative selection). Endogenous antigens include xenogenic (heterologous), autologous and idiotypic or allogenic (homologous) antigens. Sometimes antigens are part of the host itself in an autoimmune disease. Autoantigens An autoantigen is usually a self-protein or protein complex (and sometimes DNA or RNA) that is recognized by the immune system of patients with a specific autoimmune disease. Under normal conditions, these self-proteins should not be the target of the immune system, but in autoimmune diseases, their associated T cells are not deleted and instead attack. Neoantigens Neoantigens are those that are entirely absent from the normal human genome. As compared with nonmutated self-proteins, neoantigens are of relevance to tumor control, as the quality of the T cell pool that is available for these antigens is not affected by central T cell tolerance. Technology to systematically analyze T cell reactivity against neoantigens became available only recently. Neoantigens can be directly detected and quantified. Viral antigens For virus-associated tumors, such as cervical cancer and a subset of head and neck cancers, epitopes derived from viral open reading frames contribute to the pool of neoantigens. Tumor antigens Tumor antigens are those antigens that are presented by MHC class I or MHC class II molecules on the surface of tumor cells. Antigens found only on such cells are called tumor-specific antigens (TSAs) and generally result from a tumor-specific mutation. More common are antigens that are presented by tumor cells and normal cells, called tumor-associated antigens (TAAs). Cytotoxic T lymphocytes that recognize these antigens may be able to destroy tumor cells. Tumor antigens can appear on the surface of the tumor in the form of, for example, a mutated receptor, in which case they are recognized by B cells. For human tumors without a viral etiology, novel peptides (neo-epitopes) are created by tumor-specific DNA alterations. Process A large fraction of human tumor mutations are effectively patient-specific. Therefore, neoantigens may also be based on individual tumor genomes. Deep-sequencing technologies can identify mutations within the protein-coding part of the genome (the exome) and predict potential neoantigens. In mouse models, potential MHC-binding peptides were predicted for all novel protein sequences. The resulting set of potential neoantigens was used to assess T cell reactivity.
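To make the prediction step above concrete, the sketch below enumerates the 9-mer peptides that span a made-up mutated residue and keeps those whose predicted MHC binding passes a conventional threshold; the scoring function is a toy stand-in for trained predictors such as NetMHC-style tools, and the sequence, mutation position and threshold are all assumptions:

# Minimal sketch of neoantigen candidate filtering (illustrative only).

def nine_mers_spanning(protein: str, mut_pos: int):
    # Every 9-residue window of the mutated protein that contains mut_pos.
    for start in range(max(0, mut_pos - 8), min(mut_pos, len(protein) - 9) + 1):
        yield protein[start:start + 9]

def predicted_ic50_nm(peptide: str) -> float:
    # Placeholder affinity "predictor" (lower IC50 = stronger binding);
    # a real pipeline would call a model trained on measured binding data.
    return 2000.0 / (1 + sum(peptide.count(aa) for aa in "LMFYV"))

mutated_protein = "MKTLLILAVVAAALAQSDEER"  # invented sequence; mutation at index 10
candidates = list(nine_mers_spanning(mutated_protein, mut_pos=10))
binders = [p for p in candidates if predicted_ic50_nm(p) < 500.0]  # 500 nM cut-off (assumed)
print(f"{len(binders)} of {len(candidates)} mutation-spanning peptides predicted to bind MHC")

Real pipelines additionally weigh proteasomal processing, peptide transport and gene expression, as described below, before candidates are tested for T cell reactivity.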
Exome-based analyses were exploited in a clinical setting to assess reactivity in patients treated by either tumor-infiltrating lymphocyte (TIL) cell therapy or checkpoint blockade. Neoantigen identification was successful for multiple experimental model systems and human malignancies. The false-negative rate of cancer exome sequencing is low; that is, the majority of neoantigens occur within exonic sequence with sufficient coverage. However, the vast majority of mutations within expressed genes do not produce neoantigens that are recognized by autologous T cells. As of 2015 mass spectrometry resolution is insufficient to exclude many false positives from the pool of peptides that may be presented by MHC molecules. Instead, algorithms are used to identify the most likely candidates. These algorithms consider factors such as the likelihood of proteasomal processing, transport into the endoplasmic reticulum, affinity for the relevant MHC class I alleles, and gene expression or protein translation levels. The majority of human neoantigens identified in unbiased screens display a high predicted MHC binding affinity. Minor histocompatibility antigens, a conceptually similar antigen class, are also correctly identified by MHC binding algorithms. Another potential filter examines whether the mutation is expected to improve MHC binding. The nature of the central TCR-exposed residues of MHC-bound peptides is associated with peptide immunogenicity. Nativity A native antigen is an antigen that is not yet processed by an APC to smaller parts. T cells cannot bind native antigens, but require that they be processed by APCs, whereas B cells can be activated by native ones. Antigenic specificity Antigenic specificity is the ability of the host cells to recognize an antigen specifically as a unique molecular entity and distinguish it from another with exquisite precision. Antigen specificity is due primarily to the side-chain conformations of the antigen. It is measurable and need not be linear or of a rate-limited step or equation. Both T cells and B cells are cellular components of adaptive immunity.
Biology and health sciences
Immune system
Biology
1924
https://en.wikipedia.org/wiki/Argo%20Navis
Argo Navis
Argo Navis (the Ship Argo), or simply Argo, is one of Ptolemy's 48 constellations, now a grouping of three IAU constellations. It was formerly a single large constellation in the southern sky. The genitive is "Argus Navis", abbreviated "Arg". John Flamsteed and other early modern astronomers called it Navis (the Ship), genitive "Navis", abbreviated "Nav". The constellation proved to be of unwieldy size, as it was 28% larger than the next largest constellation and had more than 160 easily visible stars. The 1755 catalogue of Nicolas Louis de Lacaille divided it into the three modern constellations that occupy much of the same area: Carina (the keel), Puppis (the poop deck or stern), and Vela (the sails). Argo derived from the ship Argo in Greek mythology, sailed by Jason and the Argonauts to Colchis in search of the Golden Fleece. Some stars of Puppis and Vela can be seen from Mediterranean latitudes in winter and spring, the ship appearing to skim along the "river of the Milky Way." The precession of the equinoxes has caused the position of the stars from Earth's viewpoint to shift southward. Though most of the constellation was visible in Classical times, it is now not easily visible from most of the northern hemisphere. All the stars of Argo Navis are easily visible from the tropics southward and pass near zenith from southern temperate latitudes. The brightest of these is Canopus (α Carinae), the second-brightest night-time star, now assigned to Carina. History Development of the Greek constellation Argo Navis is known from Greek texts, which derived it from Egypt around 1000 BC. Plutarch attributed it to the Egyptian "Boat of Osiris." Some academics theorized a Sumerian origin related to the Epic of Gilgamesh, a hypothesis rejected for lack of evidence that Mesopotamian cultures considered these stars, or any portion of them, to form a boat. Over time, Argo became identified exclusively with the ancient Greek myth of Jason and the Argonauts. In Ptolemy's Almagest, Argo Navis occupies the portion of the Milky Way between Canis Major and Centaurus, with stars marking such details as the "little shield", the "steering-oar", the "mast-holder", and the "stern-ornament", which continued to be reflected in cartographic representations in celestial atlases into the nineteenth century. The ship appeared to rotate about the pole sternwards, so nautically in reverse. Aratus, the Greek poet / historian living in the third century BCE, noted this backward progression, writing, "Argo by the Great Dog's [Canis Major's] tail is drawn; for hers is not a usual course, but backward turned she comes ...". The constituent modern constellations In modern times, Argo Navis was considered unwieldy due to its enormous size (28% larger than Hydra, the largest modern constellation). In his 1763 star catalogue, Nicolas Louis de Lacaille explained that there were more than a hundred and sixty stars clearly visible to the naked eye in Navis, and so he used the set of lowercase and uppercase Latin letters three times on portions of the constellation referred to as "Argûs in carina" (Carina, the keel), "Argûs in puppi" (Puppis, the poop deck or stern), and "Argûs in velis" (Vela, the sails). Lacaille replaced Bayer's designations with new ones that followed stellar magnitudes more closely, but used only a single Greek-letter sequence and described the constellation for those stars as "Argûs". Similarly, faint unlettered stars were listed only as in "Argûs".
The final breakup and abolition of Argo Navis was proposed by Sir John Herschel in 1841 and again in 1844. Despite this, the constellation remained in use in parallel with its constituent parts into the 20th century. In 1922, along with the other constellations, it received a three-letter abbreviation: Arg. The breakup and relegation to a former constellation occurred in 1930 when the IAU defined the 88 modern constellations, formally instituting Carina, Puppis, and Vela, and declaring Argo obsolete. Lacaille's designations were kept in the offspring, so Carina has α, β, and ε; Vela has γ and δ; Puppis has ζ; and so on. As a result of this breakup, Argo Navis is the only one of Ptolemy's 48 constellations that is no longer officially recognized as a single constellation. In addition, the constellation Pyxis (the mariner's compass) occupies an area near what in antiquity was considered part of Argo's mast. Some recent authors state that the compass was part of the ship, but magnetic compasses were unknown in ancient Greek times. Lacaille considered it a separate constellation representing a modern scientific instrument (like Microscopium and Telescopium), that he created for maps of the stars of the southern hemisphere. Pyxis was listed among his 14 new constellations. In 1844, John Herschel suggested formalizing the mast as a new constellation, Malus, to replace Lacaille's Pyxis, but the idea did not catch on. Similarly, an effort by Edmond Halley to detach the "cloud of mist" at the prow of Argo Navis to form a new constellation named Robur Carolinum (Charles' Oak) in honor of King Charles II, his patron, was unsuccessful. Representations in other cultures In Vedic period astronomy, which drew its zodiac signs and many constellations from the period of the Indo-Greek Kingdom, Indian observers saw the asterism as a boat. The Māori had several names for the constellation, including Te Waka-o-Tamarereti (the canoe of Tamarereti), Te Kohi-a-Autahi (an expression meaning "cold of autumn settling down on land and water"), and Te Kohi.
Physical sciences
Asterism
Astronomy
1926
https://en.wikipedia.org/wiki/Antlia
Antlia
Antlia (from Ancient Greek ἀντλία) is a constellation in the Southern Celestial Hemisphere. Its name means "pump" in Latin and Greek; it represents an air pump. Originally Antlia Pneumatica, the constellation was established by Nicolas-Louis de Lacaille in the 18th century. Its non-specific (single-word) name, already in limited use, was preferred by John Herschel and then welcomed by the astronomical community, which officially accepted it. North of stars forming some of the sails of the ship Argo Navis (the constellation Vela), Antlia is completely visible from latitudes south of 49 degrees north. Antlia is a faint constellation; its brightest star is Alpha Antliae, an orange giant that is a suspected variable star, ranging between apparent magnitudes 4.22 and 4.29. S Antliae is an eclipsing binary star system, changing in brightness as one star passes in front of the other. Sharing a common envelope, the stars are so close they will one day merge to form a single star. Two star systems with known exoplanets, HD 93083 and WASP-66, lie within Antlia, as do NGC 2997, a spiral galaxy, and the Antlia Dwarf Galaxy. History The French astronomer Nicolas-Louis de Lacaille first described the constellation in French as la Machine Pneumatique (the Pneumatic Machine) in 1751–52, commemorating the air pump invented by the French physicist Denis Papin. De Lacaille had observed and catalogued almost 10,000 southern stars during a two-year stay at the Cape of Good Hope, devising fourteen new constellations in uncharted regions of the Southern Celestial Hemisphere not visible from Europe. He named all but one in honour of instruments that symbolised the Age of Enlightenment. Lacaille depicted Antlia as a single-cylinder vacuum pump used in Papin's initial experiments, while German astronomer Johann Bode chose the more advanced double-cylinder version. Lacaille Latinised the name to Antlia pneumatica on his 1763 chart. English astronomer John Herschel proposed shrinking the name to one word in 1844, noting that Lacaille himself had abbreviated his constellations thus on occasion. This was universally adopted. The International Astronomical Union adopted it as one of the 88 modern constellations in 1922. Although visible to the Ancient Greeks, Antlia's stars were too faint to have been commonly recognised as a figurative object, or part of one, in ancient asterisms. The stars that now comprise Antlia are in a zone of the sky associated with the asterism/old constellation Argo Navis, the ship, the Argo, of the Argonauts, in its latter centuries. This, due to its immense size, was split into hull, poop deck and sails by Lacaille in 1763. Ridpath reports that due to their faintness, the stars of Antlia did not make up part of the classical depiction of Argo Navis. In non-Western astronomy Chinese astronomers were able to view what is modern Antlia from their latitudes, and incorporated its stars into two different constellations. Several stars in the southern part of Antlia were a portion of "Dong'ou", which represented an area in southern China. Furthermore, Epsilon, Eta, and Theta Antliae were incorporated into the celestial temple, which also contained stars from modern Pyxis. Characteristics Covering 238.9 square degrees and hence 0.579% of the sky, Antlia ranks 62nd of the 88 modern constellations by area. Its position in the Southern Celestial Hemisphere means that the whole constellation is visible to observers south of 49°N.
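The quoted sky fraction follows directly from the total area of the celestial sphere, roughly 41,253 square degrees; a quick check (illustrative only):

# Sanity check of the quoted sky fraction for Antlia.
import math

total_sky_sq_deg = 4 * math.pi * (180 / math.pi) ** 2  # ~41,252.96 square degrees
antlia_sq_deg = 238.9
print(f"{antlia_sq_deg / total_sky_sq_deg:.3%}")  # prints ~0.579%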
Hydra the sea snake runs along the length of its northern border, while Pyxis the compass, Vela the sails, and Centaurus the centaur line it to the west, south and east respectively. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union, is "Ant". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a twelve-sided polygon (an east side, a south side and ten other sides facing the two remaining cardinal compass points). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −24.54° and −40.42°. Features Stars Lacaille gave nine stars Bayer designations, labelling them Alpha through to Theta, combining two stars next to each other as Zeta. Gould later added a tenth, Iota Antliae. Beta and Gamma Antliae (now HR 4339 and HD 90156) ended up in the neighbouring constellation Hydra once the constellation boundaries were delineated in 1930. Within the constellation's borders, there are 42 stars brighter than or equal to apparent magnitude 6.5. The constellation's two brightest stars—Alpha and Epsilon Antliae—shine with a reddish tinge. Alpha is an orange giant of spectral type K4III that is a suspected variable star, ranging between apparent magnitudes 4.22 and 4.29. It is located 320 ± 10 light-years away from Earth. Estimated to be shining with around 480 to 555 times the luminosity of the Sun, it is most likely an ageing star that is brightening and on its way to becoming a Mira variable star, having converted all its core fuel into carbon. Located 590 ± 30 light-years from Earth, Epsilon Antliae is an evolved orange giant star of spectral type K3 IIIa that has swollen to have a diameter about 69 times that of the Sun, and a luminosity of around 1279 Suns. It is slightly variable. At the other end of Antlia, Iota Antliae is likewise an orange giant of spectral type K1 III. It is 202 ± 2 light-years distant. Located near Alpha is Delta Antliae, a binary star, 450 ± 10 light-years distant from Earth. The primary is a blue-white main sequence star of spectral type B9.5V and magnitude 5.6, and the secondary is a yellow-white main sequence star of spectral type F9Ve and magnitude 9.6. Zeta Antliae is a wide optical double star. The brighter star—Zeta1 Antliae—is 410 ± 40 light-years distant and has a magnitude of 5.74, though it is a true binary star system composed of two white main sequence stars of magnitudes 6.20 and 7.01 that are separated by 8.042 arcseconds. The fainter star—Zeta2 Antliae—is 386 ± 5 light-years distant and of magnitude 5.9. Eta Antliae is another double composed of a yellow-white star of spectral type F1V and magnitude 5.31, with a companion of magnitude 11.3. Theta Antliae is likewise double, most likely composed of an A-type main sequence star and a yellow giant. S Antliae is an eclipsing binary star system that varies in apparent magnitude from 6.27 to 6.83 over a period of 15.6 hours. The system is classed as a W Ursae Majoris variable—the primary is hotter than the secondary and the drop in magnitude is caused by the latter passing in front of the former. Calculating the properties of the component stars from the orbital period indicates that the primary star has a mass 1.94 times and a diameter 2.026 times that of the Sun, and the secondary has a mass 0.76 times and a diameter 1.322 times that of the Sun.
The two stars have similar luminosity and spectral type as they have a common envelope and share stellar material. The system is thought to be around 5–6 billion years old. The two stars will eventually merge to form a single fast-spinning star. T Antliae is a yellow-white supergiant of spectral type F6Iab and Classical Cepheid variable ranging between magnitude 8.88 and 9.82 over 5.9 days. U Antliae is a red C-type carbon star and is an irregular variable that ranges between magnitudes 5.27 and 6.04. At 910 ± 50 light-years distant, it is around 5819 times as luminous as the Sun. BF Antliae is a Delta Scuti variable that varies by 0.01 of a magnitude. HR 4049, also known as AG Antliae, is an unusual hot variable ageing star of spectral type B9.5Ib-II. It is undergoing intense loss of mass and is a unique variable that does not belong to any class of known variable star, ranging between magnitudes 5.29 and 5.83 with a period of 429 days. It is around 6000 light-years away from Earth. UX Antliae is an R Coronae Borealis variable with a baseline apparent magnitude of around 11.85, with irregular dimmings down to below magnitude 18.0. A luminous and remote star, it is a supergiant with a spectrum resembling that of a yellow-white F-type star but it has almost no hydrogen. HD 93083 is an orange dwarf star of spectral type K3V that is smaller and cooler than the Sun. It has a planet that was discovered by the radial velocity method with the HARPS spectrograph in 2005. About as massive as Saturn, the planet orbits its star with a period of 143 days at a mean distance of 0.477 AU. WASP-66 is a sunlike star of spectral type F4V. A planet with 2.3 times the mass of Jupiter orbits it every 4 days, discovered by the transit method in 2012. DEN 1048-3956 is a brown dwarf of spectral type M8 located around 13 light-years distant from Earth. At magnitude 17 it is much too faint to be seen with the unaided eye. It has a surface temperature of about 2500 K. Two powerful flares lasting 4–5 minutes each were detected in 2002. 2MASS 0939-2448 is a system of two cool and faint brown dwarfs, probably with effective temperatures of about 500 and 700 K and masses of about 25 and 40 times that of Jupiter, though it is also possible that both objects have temperatures of 600 K and 30 Jupiter masses. Deep-sky objects Antlia contains many faint galaxies, the brightest of which is NGC 2997 at magnitude 10.6. It is a loosely wound face-on spiral galaxy of type Sc. Though nondescript in most amateur telescopes, it presents bright clusters of young stars and many dark dust lanes in photographs. Discovered in 1997, the Antlia Dwarf is a dwarf spheroidal galaxy of apparent magnitude 14.8 that belongs to the Local Group of galaxies. In 2018 the discovery was announced of a very low surface brightness galaxy near Epsilon Antliae, Antlia 2, which is a satellite galaxy of the Milky Way. The Antlia Cluster, also known as Abell S0636, is a cluster of galaxies located in the Hydra–Centaurus Supercluster. It is the third nearest to the Local Group after the Virgo Cluster and the Fornax Cluster. Located in the southeastern corner of the constellation, the cluster boasts the giant elliptical galaxies NGC 3268 and NGC 3258 as the main members of a southern and northern subgroup respectively, and contains around 234 galaxies in total. Antlia is home to the huge Antlia Supernova Remnant, one of the largest supernova remnants in the sky.
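The remark above that the component properties of S Antliae can be calculated from its orbital period rests on Kepler's third law, which links the period and total mass of a binary to the separation of its stars. A minimal sketch using the quoted 15.6-hour period and component masses (the derived separation is an illustrative estimate, not a figure from the source):

import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
R_SUN = 6.957e8      # solar radius, m

def orbital_separation(period_s, total_mass_kg):
    # Kepler's third law: a^3 = G * M_total * P^2 / (4 * pi^2)
    return (G * total_mass_kg * period_s ** 2 / (4 * math.pi ** 2)) ** (1 / 3)

period = 15.6 * 3600                 # orbital period in seconds
total_mass = (1.94 + 0.76) * M_SUN   # quoted masses of the two components
a = orbital_separation(period, total_mass)
print(f"{a / 1e9:.1f} million km, or {a / R_SUN:.1f} solar radii")  # ~3.1 million km, ~4.4 solar radii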
Physical sciences
Other
Astronomy
1927
https://en.wikipedia.org/wiki/Ara%20%28constellation%29
Ara (constellation)
Ara (Latin for "the Altar") is a southern constellation between Scorpius, Telescopium, Triangulum Australe, and Norma. It was one of the 48 Greek constellations described by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations designated by the International Astronomical Union. The orange supergiant Beta Arae, its brightest star, has a near-constant apparent magnitude of 2.85 and is marginally brighter than blue-white Alpha Arae. Seven star systems are known to host planets. Sunlike Mu Arae hosts four known planets. Gliese 676 is a gravitationally bound binary red dwarf system with four known planets. The Milky Way crosses the northwestern part of Ara. Within the constellation is Westerlund 1, a super star cluster that contains the red supergiant Westerlund 1-26, one of the largest stars known. History In ancient Greek mythology, Ara was identified as the altar where the gods first made offerings and formed an alliance before defeating the Titans. One of the southernmost constellations depicted by Ptolemy, it had been recorded by Aratus in 270 BC as lying close to the horizon, and the Almagest portrays stars as far south as Gamma Arae. Professor Bradley Schaefer proposes that ancient observers must have been able to see as far south as Zeta Arae for the pattern to have suggested an altar. In illustrations, Ara is usually depicted as a compact classical altar with its smoke 'rising' southward. However, depictions often vary. In the early days of printing, a 1482 woodcut of Gaius Julius Hyginus's classic Poeticon Astronomicon depicts the altar as surrounded by demons. Johann Bayer in 1603 depicted Ara as an altar with burning incense. Indeed, frankincense burners were common throughout the Levant, especially in Yemen, where they are known as mabkhara; these required live coals or burning embers, called jamra, to burn the incense. Willem Blaeu, a Dutch uranographer of the 16th and 17th centuries, drew Ara as an altar for sacrifices, with a burning animal offering whose smoke, unusually, rises northward, represented by Alpha Arae. Robert Record's The Castle of Knowledge of 1556 lists the constellation, stating that "Under the Scorpions tayle, standeth the Altar."; a decade later, Barnabe Googe's 1565 translation of a fairly recent, mainly astrological work by Marcellus Palingenius states "Here mayst thou both the Altar, and the myghty Cup beholde." Equivalents In Chinese astronomy, the stars of the constellation Ara lie within the Azure Dragon of the East. Five stars of Ara formed an asterism representing a tortoise, while another three formed a pestle. The Wardaman people of the Northern Territory in Australia saw the stars of Ara and the neighbouring constellation Pavo as flying foxes. Characteristics Covering 237.1 square degrees and hence 0.575% of the sky, Ara ranks 63rd of the 88 modern constellations by area. Its position in the Southern Celestial Hemisphere means that the whole constellation is visible to observers south of 22°N. Scorpius runs along the length of its northern border, while Norma and Triangulum Australe border it to the west, Apus to the south, and Pavo and Telescopium to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union, is "Ara". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of twelve segments.
In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −45.49° and −67.69°. Features Stars Bayer gave eight stars Bayer designations, labelling them Alpha through to Theta, though he had never seen the constellation directly as it never rises above the horizon in Germany. After charting the southern constellations, French astronomer Nicolas-Louis de Lacaille recharted the stars of Ara from Alpha through to Sigma, including three pairs of stars next to each other as Epsilon, Kappa and Nu. Ara contains part of the Milky Way to the south of Scorpius and thus has rich star fields. Within the constellation's borders, there are 71 stars brighter than or equal to apparent magnitude 6.5. Beta Arae, apparent magnitude 2.85, is the brightest star in the constellation, about 0.1 mag brighter than Alpha Arae, although the difference in brightness between the two is undetectable by the unaided eye. Beta is an orange-hued star of spectral type K3Ib-IIa that has been classified as a supergiant or bright giant, and lies around 650 light-years from Earth. It is over 8 times as massive and 5,636 times as luminous as the Sun. Close to Beta Arae is Gamma Arae, a blue-hued supergiant of spectral type B1Ib. Of apparent magnitude 3.3, it is 1110 ± 60 light-years from Earth. It has been estimated to be between 12.5 and 25 times as massive as the Sun, and have around 120,000 times its luminosity. Alpha Arae is a blue-white main sequence star of magnitude 2.95 that is 270 ± 20 light-years from Earth. This star is around 9.6 times as massive as the Sun, and has an average of 4.5 times its radius. It is 5,800 times as luminous as the Sun, its energy emitted from its outer envelope at an effective temperature of 18,044 K. A Be star, Alpha Arae is surrounded by a dense equatorial disk of material in Keplerian (rather than uniform) rotation. The star is losing mass by a polar stellar wind with a terminal velocity of approximately 1,000 km/s. The third brightest star in Ara at magnitude 3.13 is Zeta Arae, an orange giant of spectral type K3III that is located 490 ± 10 light-years from Earth. Around 7–8 times as massive as the Sun, it has swollen to a diameter around 114 times that of the Sun and is 3800 times as luminous. Were it not dimmed by intervening interstellar dust, it would be significantly brighter, at magnitude 2.11. Delta Arae is a blue-white main sequence star of spectral type B8Vn and magnitude 3.6, 198 ± 4 light-years from Earth. It is around 3.56 times as massive as the Sun. Epsilon1 Arae is an orange giant of apparent magnitude 4.1, 360 ± 10 light-years distant from Earth. It is around 74% more massive than the Sun. At an age of about 1.7 billion years, the outer envelope of the star has expanded to almost 34 times the Sun's radius. Eta Arae is an orange giant of apparent magnitude 3.76, located 299 ± 5 light-years distant from Earth. Estimated to be around five billion years old, it has reached the giant star stage of its evolution. With 1.12 times the mass of the Sun, it has an outer envelope that has expanded to 40 times the Sun's radius. The star is now spinning so slowly that it takes more than eleven years to complete a single rotation. GX 339-4 (V821 Arae) is a moderately strong variable galactic low-mass X-ray binary (LMXB) source and black-hole candidate that flares from time to time. From spectroscopic measurements, the mass of the black hole was found to be at least 5.8 solar masses.
Exoplanets have been discovered in seven star systems in the constellation. Mu Arae (Cervantes) is a sunlike star that hosts four planets. HD 152079 is a sunlike star with a Jupiter-like planet with an orbital period of 2097 ± 930 days. HD 154672 is an ageing sunlike star with a hot Jupiter. HD 154857 is a sunlike star with one confirmed and one suspected planet. HD 156411 is a star hotter and larger than the Sun with a gas giant planet in orbit. Gliese 674 is a nearby red dwarf star with a planet. Gliese 676 is a binary star system composed of two red dwarfs with four planets. Deep-sky objects The northwest corner of Ara is crossed by the galactic plane of the Milky Way and contains several open clusters (notably NGC 6200) and diffuse nebulae (including the bright cluster/nebula pair NGC 6188 and NGC 6193). The brightest of the globular clusters, sixth-magnitude NGC 6397, lies relatively close by, making it one of the closest globular clusters to the Solar System. Ara also contains Westerlund 1, a super star cluster that itself contains the possible red supergiant Westerlund 1-237 and the red supergiant Westerlund 1-26. The latter is one of the largest stars known, with published size estimates varying considerably. Although Ara lies close to the heart of the Milky Way, two spiral galaxies (NGC 6215 and NGC 6221) are visible near the star Eta Arae. Open clusters NGC 6193 is an open cluster containing approximately 30 stars with an overall magnitude of 5.0 and a size of 0.25 square degrees, about half the size of the full Moon. It is approximately 4200 light-years from Earth. It has one bright member, a double star with a blue-white hued primary of magnitude 5.6 and a secondary of magnitude 6.9. NGC 6193 is surrounded by NGC 6188, a faint nebula only normally visible in long-exposure photographs. Other open clusters in Ara include NGC 6200, NGC 6204, NGC 6208, NGC 6250, NGC 6253 and IC 4651. Globular clusters NGC 6352 and NGC 6362 also lie in Ara. NGC 6397 is a globular cluster with an overall magnitude of 6.0; it is visible to the naked eye under exceptionally dark skies and is normally visible in binoculars. It is a fairly close globular cluster, at a distance of 10,500 light-years. Planetary nebulae The Stingray Nebula (Hen 3–1357), the youngest known planetary nebula as of 2010, formed in Ara; the light from its formation was first observable around 1987. NGC 6326 is a planetary nebula that might have a binary system at its center.
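The visibility limit quoted in the Characteristics section above follows from the constellation's southernmost declination: the whole constellation can be seen only from latitudes at which its most southerly point still rises above the horizon, roughly latitudes below 90 degrees plus the (negative) declination. A minimal sketch of that rule applied to Ara's quoted boundary, ignoring atmospheric refraction and extinction (the function name is an illustrative choice, not from the source):

def max_visible_latitude(southern_declination_deg):
    # A star at declination d (negative in the south) rises for observers
    # at latitudes below 90 + d degrees north.
    return 90.0 + southern_declination_deg

# Ara's boundary reaches declination -67.69 degrees, so the whole constellation
# is visible from south of roughly 22 degrees north, matching the quoted figure.
print(max_visible_latitude(-67.69))  # ~22.3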
Physical sciences
Other
Astronomy
1933
https://en.wikipedia.org/wiki/Apus
Apus
Apus is a small constellation in the southern sky. It represents a bird-of-paradise, and its name means "without feet" in Greek because the bird-of-paradise was once wrongly believed to lack feet. First depicted on a celestial globe by Petrus Plancius in 1598, it was charted on a star atlas by Johann Bayer in his 1603 Uranometria. The French explorer and astronomer Nicolas Louis de Lacaille charted and gave the brighter stars their Bayer designations in 1756. The five brightest stars are all reddish in hue. The brightest of them, at apparent magnitude 3.8, is Alpha Apodis, an orange giant that has around 48 times the diameter and 928 times the luminosity of the Sun. Marginally fainter is Gamma Apodis, another aging giant star. Delta Apodis is a double star, the two components of which are 103 arcseconds apart and visible with the naked eye. Two star systems have been found to have planets. History Apus was one of twelve constellations published by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman, who had sailed on the first Dutch trading expedition, known as the Eerste Schipvaart, to the East Indies. It first appeared on a 35-cm (14 in) diameter celestial globe published in 1598 in Amsterdam by Plancius with Jodocus Hondius. De Houtman included it in his southern star catalogue in 1603 under the Dutch name De Paradijs Voghel, "The Bird of Paradise", and Plancius called the constellation Paradysvogel Apis Indica; the first word is Dutch for "bird of paradise". Apis (Latin for "bee") is assumed to have been a typographical error for avis ("bird"). After its introduction on Plancius's globe, the constellation's first known appearance in a celestial atlas was in German cartographer Johann Bayer's Uranometria of 1603. Bayer called it Apis Indica while fellow astronomers Johannes Kepler and his son-in-law Jakob Bartsch called it Apus or Avis Indica. The name Apus is derived from the Greek apous, meaning "without feet". This referred to the Western misconception that the bird-of-paradise had no feet, which arose because the only specimens available in the West had their feet and wings removed. Such specimens began to arrive in Europe in 1522, when the survivors of Ferdinand Magellan's expedition brought them home. The constellation later lost some of its tail when Nicolas-Louis de Lacaille used those stars to establish Octans in the 1750s. Characteristics Covering 206.3 square degrees and hence 0.5002% of the sky, Apus ranks 67th of the 88 modern constellations by area. Its position in the Southern Celestial Hemisphere means that the whole constellation is visible to observers south of 7°N. It is bordered by Ara, Triangulum Australe and Circinus to the north, Musca and Chamaeleon to the west, Octans to the south, and Pavo to the east. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Aps". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of six segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −67.48° and −83.12°. Features Stars Lacaille gave twelve stars Bayer designations, labelling them Alpha through to Kappa, including two stars next to each other as Delta and another two stars near each other as Kappa.
Within the constellation's borders, there are 39 stars brighter than or equal to apparent magnitude 6.5. Beta, Gamma and Delta Apodis form a narrow triangle, with Alpha Apodis lying to the east. The five brightest stars are all red-tinged, which is unusual among constellations. Alpha Apodis is an orange giant of spectral type K3III located 430 ± 20 light-years away from Earth, with an apparent magnitude of 3.8. It spent much of its life as a blue-white (B-type) main sequence star before expanding, cooling and brightening as it used up its core hydrogen. It has swollen to 48 times the Sun's diameter, and shines with a luminosity approximately 928 times that of the Sun, with a surface temperature of 4312 K. Beta Apodis is an orange giant 149 ± 2 light-years away, with a magnitude of 4.2. It is around 1.84 times as massive as the Sun, with a surface temperature of 4677 K. Gamma Apodis is a yellow giant of spectral type G8III located 150 ± 4 light-years away, with a magnitude of 3.87. It is approximately 63 times as luminous as the Sun, with a surface temperature of 5279 K. Delta Apodis is a double star, the two components of which are 103 arcseconds apart and visible through binoculars. Delta1 is a red giant star of spectral type M4III located 630 ± 30 light-years away. It is a semiregular variable that varies from magnitude +4.66 to +4.87, with pulsations of multiple periods of 68.0, 94.9 and 101.7 days. Delta2 is an orange giant star of spectral type K3III, located 550 ± 10 light-years away, with a magnitude of 5.3. The separate components can be resolved with the naked eye. The fifth-brightest star is Zeta Apodis at magnitude 4.8, a star that has swollen and cooled to become an orange giant of spectral type K1III, with a surface temperature of 4649 K and a luminosity 133 times that of the Sun. It is 300 ± 4 light-years distant. Near Zeta is Iota Apodis, a binary star system 1,040 ± 60 light-years distant, that is composed of two blue-white main sequence stars that orbit each other every 59.32 years. Of spectral types B9V and B9.5 V, they are both over three times as massive as the Sun. Eta Apodis is a white main sequence star located 140.8 ± 0.9 light-years distant. Of apparent magnitude 4.89, it is 1.77 times as massive, 15.5 times as luminous as the Sun and has 2.13 times its radius. Aged 250 ± 200 million years, this star is emitting an excess of 24 μm infrared radiation, which may be caused by a debris disk of dust orbiting at a distance of more than 31 astronomical units from it. Theta Apodis is a cool red giant of spectral type M7 III located 350 ± 30 light-years distant. It shines with a luminosity approximately 3879 times that of the Sun and has a surface temperature of 3151 K. A semiregular variable, it varies by 0.56 magnitudes with a period of 119 days—or approximately 4 months. It is losing mass through its stellar wind. Dusty material ejected from this star is interacting with the surrounding interstellar medium, forming a bow shock as the star moves through the galaxy. NO Apodis is a red giant of spectral type M3III that varies between magnitudes 5.71 and 5.95. Located 780 ± 20 light-years distant, it shines with a luminosity estimated at 2059 times that of the Sun and has a surface temperature of 3568 K. S Apodis is a rare R Coronae Borealis variable, an extremely hydrogen-deficient supergiant thought to have arisen as the result of the merger of two white dwarfs; fewer than 100 have been discovered as of 2012.
It has a baseline magnitude of 9.7. R Apodis is a star that was given a variable star designation, yet has turned out not to be variable. Of magnitude 5.3, it is another orange giant. Two star systems have had exoplanets discovered by Doppler spectroscopy, and the substellar companion of a third star system—the sunlike star HD 131664—has since been found to be a brown dwarf with a calculated mass of 23 times that of Jupiter (a minimum of 18 and a maximum of 49 Jovian masses). HD 134606 is a yellow sunlike star of spectral type G6IV that has begun expanding and cooling off the main sequence. Three planets orbit it with periods of 12, 59.5 and 459 days, successively larger as they are further away from the star. HD 137388 is another star—of spectral type K2IV—that is cooler than the Sun and has begun cooling off the main sequence. Around 47% as luminous and 88% as massive as the Sun, with 85% of its diameter, it is thought to be around 7.4 ± 3.9 billion years old. It has a planet that is 79 times as massive as the Earth and orbits its star every 330 days at an average distance of 0.89 astronomical units (AU). Deep-sky objects The Milky Way covers much of the constellation's area. Of the deep-sky objects in Apus, there are two prominent globular clusters—NGC 6101 and IC 4499—and a large faint nebula that covers several degrees east of Beta and Gamma Apodis. NGC 6101 is a globular cluster of apparent magnitude 9.2 located around 50,000 light-years distant from Earth, which is around 160 light-years across. Around 13 billion years old, it contains a high concentration of massive bright stars known as blue stragglers, thought to be the result of two stars merging. IC 4499 is a loose globular cluster in the medium-far galactic halo; its apparent magnitude is 10.6. The galaxies in the constellation are faint. IC 4633 is a very faint spiral galaxy surrounded by a vast amount of Milky Way line-of-sight integrated flux nebulae—large faint clouds thought to be lit by large numbers of stars.
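Many of the stellar figures quoted above are tied together by the Stefan–Boltzmann law, under which luminosity scales with the square of the radius and the fourth power of the effective temperature, so any two of those quantities fix the third. As a rough consistency check using the radius and luminosity quoted for Eta Apodis (the derived temperature is an illustrative estimate, not a value from the source):

T_SUN = 5772.0  # effective temperature of the Sun, in kelvin

def effective_temperature(luminosity_lsun, radius_rsun):
    # Stefan-Boltzmann law in solar units: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4
    return T_SUN * (luminosity_lsun / radius_rsun ** 2) ** 0.25

# Eta Apodis is quoted above as 15.5 times as luminous as the Sun with 2.13 times its radius.
print(f"{effective_temperature(15.5, 2.13):.0f} K")  # ~7850 K, consistent with a white (A- or F-type) star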
Physical sciences
Other
Astronomy
1941
https://en.wikipedia.org/wiki/Aeon
Aeon
The word aeon, also spelled eon (in American and Australian English), originally meant "life", "vital force" or "being", "generation" or "a period of time", though it tended to be translated as "age" in the sense of "ages", "forever", "timeless" or "for eternity". It is a Latin transliteration of the ancient Greek word aiōn, from an archaic form meaning "century". In Greek, it literally refers to the timespan of one hundred years. A cognate Latin word, aevum, for "age" is present in words such as eternal, longevity and mediaeval. Although the term aeon may be used in reference to a period of a billion years (especially in geology, cosmology and astronomy), its more common usage is for any long, indefinite period. Aeon can also refer to the four aeons on the geologic time scale that make up the Earth's history: the Hadean, Archean, Proterozoic, and the current aeon, Phanerozoic. Astronomy and cosmology In astronomy, an aeon is defined as a billion years (10^9 years, abbreviated AE). Roger Penrose uses the word aeon to describe the period between successive and cyclic Big Bangs within the context of conformal cyclic cosmology. Philosophy and mysticism In Buddhism, an "aeon" or kalpa (Sanskrit) is often said to be 1,334,240,000 years, the life cycle of the world. Yet, these numbers are symbolic, not literal. Christianity's idea of "eternal life" comes from the word for life, zoe, and a form of aiōn, which could mean life in the next aeon, the Kingdom of God, or Heaven, just as much as immortality, as in John 3:16. According to Christian universalism, the Greek New Testament scriptures use the word aiōn to mean a long period and the word aiōnios to mean "during a long period"; thus, there was a time before the aeons, and the aeonian period is finite. After each person's mortal life ends, they are judged worthy of aeonian life or aeonian punishment. That is, after the period of the aeons, all punishment will cease and death is overcome and then God becomes the all in each one (1Cor 15:28). This contrasts with the conventional Christian belief in eternal life and eternal punishment. Occultists of the Thelema and Ordo Templi Orientis (English: "Order of the Temple of the East") traditions sometimes speak of a "magical Aeon" that may last for perhaps as little as 2,000 years. Gnosticism In many Gnostic systems, the various emanations of God, who is also known by such names as the One, the Monad, Aion teleos ("The Broadest Aeon"), Bythos ("depth or profundity"), Proarkhe ("before the beginning"), Arkhe ("the beginning"), Sophia ("wisdom"), and Christos ("the Anointed One"), are called Aeons. In the different systems these emanations are differently named, classified, and described, but the emanation theory itself is common to all forms of Gnosticism. In the Basilidian Gnosis they are called sonships; according to Marcus, they are numbers and sounds; in Valentinianism they form male/female pairs called syzygies.
Physical sciences
Time
Basics and measurement
1942
https://en.wikipedia.org/wiki/Airline
Airline
An airline is a company that provides air transport services for traveling passengers or freight (cargo). Airlines use aircraft to supply these services and may form partnerships or alliances with other airlines for codeshare agreements, in which they both offer and operate the same flight. Generally, airline companies are recognized with an air operating certificate or license issued by a governmental aviation body. Airlines may be scheduled or charter operators. The first airline was the German airship company DELAG, founded on November 16, 1909. The four oldest non-airship airlines that still exist are the Netherlands' KLM (1919), Colombia's Avianca (1919), Australia's Qantas (1920) and the Russian Aeroflot (1923). Airline ownership has seen a shift from mostly personal ownership until the 1930s to government ownership of major airlines from the 1940s to 1980s and back to large-scale privatization following the mid-1980s. Since the 1980s, there has been a trend of major airline mergers and the formation of airline alliances. The largest alliances are Star Alliance, SkyTeam and Oneworld. Airline alliances coordinate their passenger service programs (such as lounges and frequent-flyer programs), offer special interline tickets and often engage in extensive codesharing (sometimes systemwide). History The first airlines DELAG, Deutsche Luftschiffahrts-Aktiengesellschaft, was the world's first airline. It was founded on November 16, 1909, with government assistance, and operated airships manufactured by the Zeppelin Corporation. Its headquarters were in Frankfurt. The first fixed-wing scheduled airline was started on January 1, 1914. The flight was piloted by Tony Jannus and flew from St. Petersburg, Florida, to Tampa, Florida, operated by the St. Petersburg–Tampa Airboat Line. Europe Beginnings The earliest fixed-wing airline in Europe was Aircraft Transport and Travel, formed by George Holt Thomas in 1916; via a series of takeovers and mergers, this company is an ancestor of modern-day British Airways. Using a fleet of former military Airco DH.4A biplanes that had been modified to carry two passengers in the fuselage, it operated relief flights between Folkestone and Ghent, Belgium. On July 15, 1919, the company flew a proving flight across the English Channel, despite a lack of support from the British government. Flown by Lt. H Shaw in an Airco DH.9 between RAF Hendon and Paris – Le Bourget Airport, the flight took 2 hours and 30 minutes at £21 per passenger. On August 25, 1919, the company used DH.16s to pioneer a regular service from Hounslow Heath Aerodrome to Paris's Le Bourget, the first regular international service in the world. The airline soon gained a reputation for reliability, despite problems with bad weather, and began to attract European competition. In November 1919, it won the first British civil airmail contract. Six Royal Air Force Airco DH.9A aircraft were lent to the company, to operate the airmail service between Hawkinge and Cologne. In 1920, they were returned to the Royal Air Force. Other British competitors were quick to follow – Handley Page Transport was established in 1919 and used the company's converted wartime Type O/400 bombers with a capacity for 12 passengers, to run a London-Paris passenger service. The first French airline was Société des lignes Latécoère, later known as Aéropostale, which started its first service in late 1918 to Spain.
The Société Générale des Transports Aériens was created in late 1919 by the Farman brothers, and its Farman F.60 Goliath aircraft flew scheduled services from Toussus-le-Noble to Kenley, near Croydon, England. Another early French airline was the Compagnie des Messageries Aériennes, established in 1919 by Louis-Charles Breguet, offering a mail and freight service between Le Bourget Airport, Paris and Lesquin Airport, Lille. The first German airline to use heavier-than-air aircraft was Deutsche Luft-Reederei, established in 1917, which started operating in February 1919. In its first year, the D.L.R. operated regularly scheduled flights on routes with a combined length of nearly 1000 miles. By 1921 the D.L.R. network was more than 3000 km (1865 miles) long, and included destinations in the Netherlands, Scandinavia and the Baltic Republics. Another important German airline was Junkers Luftverkehr, which began operations in 1921. It was a division of the aircraft manufacturer Junkers, which became a separate company in 1924. It operated joint-venture airlines in Austria, Denmark, Estonia, Finland, Hungary, Latvia, Norway, Poland, Sweden and Switzerland. The Dutch airline KLM made its first flight in 1920, and is the oldest continuously operating airline in the world. Established by aviator Albert Plesman, it was immediately awarded a "Royal" predicate from Queen Wilhelmina. Its first flight was from Croydon Airport, London to Amsterdam, using a leased Aircraft Transport and Travel DH-16, and carrying two British journalists and a number of newspapers. In 1921, KLM started scheduled services. In Finland, the charter establishing Aero O/Y (now Finnair) was signed in the city of Helsinki on 12 September 1923. Junkers F.13 D-335 became the first aircraft of the company, when Aero took delivery of it on 14 March 1924. The first flight was between Helsinki and Tallinn, capital of Estonia, and it took place on 20 March 1924, one week later. In the Soviet Union, the Chief Administration of the Civil Air Fleet was established in 1921. One of its first acts was to help found Deutsch-Russische Luftverkehrs A.G. (Deruluft), a German-Russian joint venture to provide air transport from Russia to the West. Domestic air service began around the same time, when Dobrolyot started operations on 15 July 1923 between Moscow and Nizhni Novgorod. From 1932, all operations were carried out under the name Aeroflot. Early European airlines tended to favor comfort – the passenger cabins were often spacious with luxurious interiors – over speed and efficiency. The relatively basic navigational capabilities of pilots at the time also meant that delays due to the weather were commonplace. Rationalization By the early 1920s, small airlines were struggling to compete, and there was a movement towards increased rationalization and consolidation. In 1924, Imperial Airways was formed from the merger of Instone Air Line Company, British Marine Air Navigation, Daimler Airway and Handley Page Transport, to allow British airlines to compete with stiff competition from French and German airlines that were enjoying heavy government subsidies. The airline was a pioneer in surveying and opening up air routes across the world to serve far-flung parts of the British Empire and to enhance trade and integration. The first new airliner ordered by Imperial Airways was the Handley Page W8f City of Washington, delivered on 3 November 1924. In the first year of operation the company carried 11,395 passengers and 212,380 letters.
In April 1925, the film The Lost World became the first film to be screened for passengers on a scheduled airliner flight when it was shown on the London-Paris route. Two French airlines also merged to form Air Union on 1 January 1923. This later merged with four other French airlines to become Air France, the country's flagship carrier to this day, on 17 May 1933. Germany's Deutsche Lufthansa was created in 1926 by the merger of two airlines, one of them Junkers Luftverkehr. Lufthansa, due to the Junkers heritage and unlike most other airlines at the time, became a major investor in airlines outside of Europe, providing capital to Varig and Avianca. German airliners built by Junkers, Dornier, and Fokker were among the most advanced in the world at the time. Expansion In 1926, Alan Cobham surveyed a flight route from the UK to Cape Town, South Africa, following this up with another proving flight to Melbourne, Australia. Other routes to British India and the Far East were also charted and demonstrated at this time. Regular services to Cairo and Basra began in 1927 and were extended to Karachi in 1929. The London-Australia service was inaugurated in 1932 with the Handley Page HP 42 airliners. Further services were opened up to Calcutta, Rangoon, Singapore and Brisbane; the first passengers bound for Hong Kong departed London on 14 March 1936 following the establishment of a branch from Penang to Hong Kong. France began an air mail service to Morocco in 1919 that was bought out in 1927, renamed Aéropostale, and injected with capital to become a major international carrier. In 1933, Aéropostale went bankrupt, was nationalized and merged into Air France. Although Germany lacked colonies, it also began expanding its services globally. In 1931, the airship Graf Zeppelin began offering regular scheduled passenger service between Germany and South America, usually every two weeks, which continued until 1937. In 1936, the airship Hindenburg entered passenger service and successfully crossed the Atlantic 36 times before crashing at Lakehurst, New Jersey, on 6 May 1937. In 1938, a weekly air service from Berlin to Kabul, Afghanistan, started operating. From February 1934 until World War II began in 1939, Deutsche Lufthansa operated an airmail service from Stuttgart, Germany via Spain, the Canary Islands and West Africa to Natal in Brazil. This was the first time an airline flew across an ocean. By the end of the 1930s Aeroflot had become the world's largest airline, employing more than 4,000 pilots and 60,000 other service personnel and operating around 3,000 aircraft (of which 75% were considered obsolete by its own standards). During the Soviet era Aeroflot was synonymous with Russian civil aviation, as it was the only air carrier. It became the first airline in the world to operate sustained regular jet services on 15 September 1956 with the Tupolev Tu-104. Deregulation Deregulation of the European Union airspace in the early 1990s has had a substantial effect on the structure of the industry there. The shift towards 'budget' airlines on shorter routes has been significant. Airlines such as EasyJet and Ryanair have often grown at the expense of the traditional national airlines. There has also been a trend for these national airlines themselves to be privatized such as has occurred for Aer Lingus and British Airways. Other national airlines, including Italy's Alitalia, suffered – particularly with the rapid increase of oil prices in early 2008.
United States Early development Tony Jannus conducted the United States' first scheduled commercial airline flight on January 1, 1914, for the St. Petersburg-Tampa Airboat Line. The 23-minute flight traveled between St. Petersburg, Florida and Tampa, Florida, passing low over Tampa Bay in Jannus' Benoist XIV wood and muslin biplane flying boat. His passenger was a former mayor of St. Petersburg, who paid $400 for the privilege of sitting on a wooden bench in the open cockpit. The Airboat line operated for about four months, carrying more than 1,200 passengers who paid $5 each. Chalk's International Airlines began service between Miami and Bimini in the Bahamas in February 1919. Based in Ft. Lauderdale, Chalk's claimed to be the oldest continuously operating airline in the United States until its closure in 2008. Following World War I, the United States found itself swamped with aviators. Many decided to take their war-surplus aircraft on barnstorming campaigns, performing aerobatic maneuvers to woo crowds. In 1918, the United States Postal Service won the financial backing of Congress to begin experimenting with air mail service, initially using Curtiss Jenny aircraft that had been procured by the United States Army Air Service. Private operators were the first to fly the mail, but due to numerous accidents the US Army was tasked with mail delivery. During its involvement the Army also proved too unreliable and lost its air mail duties. By the mid-1920s, the Postal Service had developed its own air mail network, based on a transcontinental backbone between New York City and San Francisco. To supplement this service, they offered twelve contracts for spur routes to independent bidders. Some of the carriers that won these routes would, through time and mergers, evolve into Pan Am, Delta Air Lines, Braniff Airways, American Airlines, United Airlines (originally a division of Boeing), Trans World Airlines, Northwest Airlines, and Eastern Air Lines. Service during the early 1920s was sporadic: most airlines at the time were focused on carrying bags of mail. In 1925, however, the Ford Motor Company bought out the Stout Aircraft Company and began construction of the all-metal Ford Trimotor, which became the first successful American airliner. With a 12-passenger capacity, the Trimotor made passenger service potentially profitable. Air service was seen as a supplement to rail service in the American transportation network. At the same time, Juan Trippe began a crusade to create an air network that would link America to the world, and he achieved this goal through his airline, Pan Am, with a fleet of flying boats that linked Los Angeles to Shanghai and Boston to London. Pan Am and Northwest Airways (which began flights to Canada in the 1920s) were the only U.S. airlines to go international before the 1940s. With the introduction of the Boeing 247 and Douglas DC-3 in the 1930s, the U.S. airline industry was generally profitable, even during the Great Depression. This trend continued until the beginning of World War II. Since 1945 World War II, like World War I, brought new life to the airline industry. Many airlines in the Allied countries were flush from lease contracts to the military, and foresaw a future explosive demand for civil air transport, for both passengers and cargo. They were eager to invest in the newly emerging flagships of air travel such as the Boeing Stratocruiser, Lockheed Constellation, and Douglas DC-6.
Most of these new aircraft were based on American bombers such as the B-29, which had spearheaded research into new technologies such as pressurization. Most offered increased efficiency from both added speed and greater payload. In the 1950s, the De Havilland Comet, Boeing 707, Douglas DC-8, and Sud Aviation Caravelle became the first flagships of the Jet Age in the West, while the Eastern bloc had the Tupolev Tu-104 and Tupolev Tu-124 in the fleets of state-owned carriers such as Czechoslovak ČSA, Soviet Aeroflot and East German Interflug. The Vickers Viscount and Lockheed L-188 Electra inaugurated turboprop transport. On 4 October 1958, British Overseas Airways Corporation started transatlantic flights between London Heathrow and New York Idlewild with a Comet 4, and Pan Am followed on 26 October with a Boeing 707 service between New York and Paris. The next big boost for the airlines would come in the 1970s, when the Boeing 747, McDonnell Douglas DC-10, and Lockheed L-1011 inaugurated widebody ("jumbo jet") service, which is still the standard in international travel. The Tupolev Tu-144 and its Western counterpart, Concorde, made supersonic travel a reality. Concorde first flew in 1969 and operated through 2003. In 1972, Airbus began producing Europe's most commercially successful line of airliners to date. The added efficiencies for these aircraft were often not in speed, but in passenger capacity, payload, and range. Airbus also featured modern electronic cockpits that were common across its aircraft, enabling pilots to fly multiple models with minimal cross-training. Deregulation The 1978 U.S. airline industry deregulation lowered federally controlled barriers for new airlines just as a downturn in the nation's economy occurred. New start-ups entered during the downturn, during which time they found aircraft and funding, contracted hangar and maintenance services, trained new employees, and recruited laid-off staff from other airlines. Major airlines dominated their routes through aggressive pricing and additional capacity offerings, often swamping new start-ups. In the place of high barriers to entry imposed by regulation, the major airlines implemented an equally high barrier called loss leader pricing. In this strategy, an already established and dominant airline stamps out its competition by lowering airfares on specific routes below the cost of operating them, choking out any chance a start-up airline may have. The industry side effect is an overall drop in revenue and service quality. Since deregulation in 1978 the average domestic ticket price has dropped by 40%. So has airline employee pay. Having incurred massive losses, the airlines of the USA now rely upon recurring Chapter 11 bankruptcy proceedings to continue doing business. America West Airlines (which has since merged with US Airways) remained a significant survivor from this new entrant era, as dozens, even hundreds, have gone under. In many ways, the biggest winner in the deregulated environment was the air passenger. Although not exclusively attributable to deregulation, the U.S. witnessed an explosive growth in demand for air travel. Many millions who had never or rarely flown before became regular fliers, even joining frequent flyer loyalty programs and receiving free flights and other benefits from their flying. New services and higher frequencies meant that business fliers could fly to another city, do business, and return the same day, from almost any point in the country.
Air travel's advantages put long-distance intercity railroad travel and bus lines under pressure, with most of the latter having withered away, whilst the former is still protected under nationalization through the continuing existence of Amtrak. By the 1980s, almost half of the total flying in the world took place in the U.S., and today the domestic industry operates over 10,000 daily departures nationwide. Toward the end of the century, a new style of low-cost airline emerged, offering a no-frills product at a lower price. Southwest Airlines, JetBlue, AirTran Airways, Skybus Airlines and other low-cost carriers began to represent a serious challenge to the so-called "legacy airlines", as did their low-cost counterparts in many other countries. Their commercial viability represented a serious competitive threat to the legacy carriers. However, of these, ATA and Skybus have since ceased operations. Increasingly since 1978, US airlines have been reincorporated and spun off by newly created and internally led management companies, becoming little more than operating units and subsidiaries with limited financial control. Among the better known of these holding and parent companies are the UAL Corporation and the AMR Corporation, part of a long list of airline holding companies recognized worldwide. Less recognized are the private-equity firms which often seize managerial, financial, and board-of-directors control of distressed airline companies by temporarily investing large sums of capital in air carriers, either to reorganize an airline's assets into a profitable organization or to strip an air carrier of its profitable and worthwhile routes and business operations. Thus the last 50 years of the airline industry have varied from reasonably profitable to devastatingly depressed. As the first major market to deregulate the industry in 1978, U.S. airlines have experienced more turbulence than almost any other country or region. In fact, no U.S. legacy carrier survived bankruptcy-free. Among the outspoken critics of deregulation, former CEO of American Airlines Robert Crandall has publicly stated: "Chapter 11 bankruptcy protection filing shows airline industry deregulation was a mistake." Bailout Congress passed the Air Transportation Safety and System Stabilization Act (P.L. 107–42) in response to a severe liquidity crisis facing the already-troubled airline industry in the aftermath of the September 11 attacks. Through the ATSB Congress sought to provide cash infusions to carriers for both the cost of the four-day federal shutdown of the airlines and the incremental losses incurred through December 31, 2001, as a result of the terrorist attacks. This resulted in the first government bailout of the 21st century. Between 2000 and 2005 US airlines lost $30 billion with wage cuts of over $15 billion and 100,000 employees laid off. In recognition of the essential national economic role of a healthy aviation system, Congress authorized partial compensation of up to $5 billion in cash subject to review by the U.S. Department of Transportation and up to $10 billion in loan guarantees subject to review by a newly created Air Transportation Stabilization Board (ATSB). The applications to DOT for reimbursements were subjected to rigorous multi-year reviews not only by DOT program personnel but also by the Government Accountability Office and the DOT Inspector General.
Ultimately, the federal government provided $4.6 billion in one-time, subject-to-income-tax cash payments to 427 U.S. air carriers, with no provision for repayment, essentially a gift from the taxpayers. (Passenger carriers operating scheduled service received approximately $4 billion, subject to tax.) In addition, the ATSB approved loan guarantees to six airlines totaling approximately $1.6 billion. Data from the U.S. Treasury Department show that the government recouped the $1.6 billion and a profit of $339 million from the fees, interest and purchase of discounted airline stock associated with loan guarantees. The three largest major carriers and Southwest Airlines control 70% of the U.S. passenger market. Asia Although Philippine Airlines (PAL) was officially founded on February 26, 1941, its license to operate as an airline was derived from the merged Philippine Aerial Taxi Company (PATCO), established by mining magnate Emmanuel N. Bachrach on 3 December 1930, making it Asia's oldest scheduled carrier still in operation. Commercial air service commenced three weeks later from Manila to Baguio, making it Asia's first airline route. Bachrach's death in 1937 paved the way for its eventual merger with Philippine Airlines in March 1941 and made it Asia's oldest airline. It is also the oldest airline in Asia still operating under its current name. Bachrach's majority share in PATCO was bought by beer magnate Andres R. Soriano in 1939 upon the advice of General Douglas MacArthur and later merged with newly formed Philippine Airlines with PAL as the surviving entity. Soriano had a controlling interest in both airlines before the merger. PAL restarted service on 15 March 1941, with a single Beech Model 18 NPC-54 aircraft, which started its daily services between Manila (from Nielson Field) and Baguio, later to expand with larger aircraft such as the DC-3 and Vickers Viscount. In Japan, Japan Air Transport was established in 1928 as the national flag carrier. Upon the completion of Haneda Airport in 1931, it became the airline's hub. The airline initially operated domestic routes such as Tokyo–Osaka and Osaka–Fukuoka. In September 1929, it opened its first overseas route, which connected Fukuoka to Dalian in the Kwantung Leased Territory via Seoul and Pyongyang in Japanese Korea. After Japan established the puppet state of Manchukuo, the airline opened routes to major cities within this territory. The company was reorganised as Japan Airways in 1938. During the Second World War, it operated routes to various Japanese-occupied territories and Thailand. The company was dissolved immediately after the war, as civil aviation was prohibited by the Allied Occupation Forces. Civil aviation in Japan did not resume until the founding of Japan Airlines in 1951. Cathay Pacific was one of the first airlines to be launched among the other Asian countries in 1946. The license to operate as an airline was granted by the federal government body after reviewing the necessity at the national assembly. The Hanjin Group holds the largest ownership stake in Korean Air, as well as a few low-budget airlines. Korean Air is one of the four founders of SkyTeam, which was established in 2000. Asiana Airlines, launched in 1988, joined Star Alliance in 2003. Korean Air and Asiana Airlines together account for one of the largest combined totals of airline miles flown and passengers served in the regional market of the Asian airline industry. India was also one of the first countries to embrace civil aviation.
One of the first Asian airline companies was Air India, which was founded as Tata Airlines in 1932, a division of Tata Sons Ltd. (now Tata Group). The airline was founded by India's leading industrialist, JRD Tata. On 15 October 1932, J. R. D. Tata himself flew a single engined De Havilland Puss Moth carrying air mail (postal mail of Imperial Airways) from Karachi to Bombay via Ahmedabad. The aircraft continued to Madras via Bellary piloted by Royal Air Force pilot Nevill Vintcent. Tata Airlines was also one of the world's first major airlines which began its operations without any support from the Government. With the outbreak of World War II, the airline presence in Asia came to a relative halt, with many new flag carriers donating their aircraft for military aid and other uses. Following the end of the war in 1945, regular commercial service was restored in India and Tata Airlines became a public limited company on 29 July 1946, under the name Air India. After the independence of India, 49% of the airline was acquired by the Government of India. In return, the airline was granted status to operate international services from India as the designated flag carrier under the name Air India International. On 31 July 1946, a chartered Philippine Airlines (PAL) DC-4 ferried 40 American servicemen to Oakland, California, from Nielson Airport in Makati with stops in Guam, Wake Island, Johnston Atoll and Honolulu, Hawaii, making PAL the first Asian airline to cross the Pacific Ocean. A regular service between Manila and San Francisco was started in December. It was during this year that the airline was designated as the flag carrier of Philippines. During the era of decolonization, newly born Asian countries started to embrace air transport. Among the first Asian carriers during the era were Cathay Pacific of Hong Kong (founded in September 1946), Orient Airways (later Pakistan International Airlines; founded in October 1946), Air Ceylon (later SriLankan Airlines; founded in 1947), Malayan Airways Limited in 1947 (later Singapore and Malaysia Airlines), El Al in Israel in 1948, Garuda Indonesia in 1949, Thai Airways in 1960, and Korean National Airlines in 1947. Latin America and Caribbean Among the first countries to have regular airlines in Latin America and the Caribbean were Bolivia with Lloyd Aéreo Boliviano, Cuba with Cubana de Aviación, Colombia with Avianca (the first airline established in the Americas), Argentina with Aerolíneas Argentinas, Chile with LAN Chile (today LATAM Airlines), Brazil with Varig, the Dominican Republic with Dominicana de Aviación, Mexico with Mexicana de Aviación, Trinidad and Tobago with BWIA West Indies Airways (today Caribbean Airlines), Venezuela with Aeropostal, Puerto Rico with Puertorriquena; and TACA based in El Salvador and representing several airlines of Central America (Costa Rica, Guatemala, Honduras and Nicaragua). All the previous airlines started regular operations well before World War II. Puerto Rican commercial airlines such as Prinair, Oceanair, Fina Air and Vieques Air Link came much after the second world war, as did several others from other countries like Mexico's Interjet and Volaris, Venezuela's Aserca Airlines and others. The air travel market has evolved rapidly over recent years in Latin America. Some industry estimates indicated in 2011 that over 2,000 new aircraft will begin service over the next five years in this region. 
These airlines serve domestic flights within their countries, as well as connections within Latin America and also overseas flights to North America, Europe, Australia, and Asia. Only five airline groups – Avianca, Panama's Copa, Mexico's Volaris, the Irelandia group and LATAM Airlines – have international subsidiaries and cover many destinations within the Americas as well as major hubs in other continents. LATAM operates with Chile as its central operation, along with Peru, Ecuador, Colombia, Brazil and Argentina, and formerly had some operations in the Dominican Republic. The Avianca group has its main operation in Colombia based around the hub in Bogotá, Colombia, as well as subsidiaries in various Latin American countries with hubs in San Salvador, El Salvador, as well as Lima, Peru, with a smaller operation in Ecuador. Copa has subsidiaries Copa Airlines Colombia and Wingo, both in Colombia, while Volaris of Mexico has Volaris Costa Rica and Volaris El Salvador, and the Irelandia group formerly included Viva Aerobus of Mexico, Viva Colombia and Viva Air Peru. Regulation National Many countries have national airlines that the government owns and operates. Fully private airlines are subject to much government regulation for economic, political, and safety concerns. For instance, governments often intervene to halt airline labor actions to protect the free flow of people, communications, and goods between different regions without compromising safety. The United States, Australia, and to a lesser extent Brazil, Mexico, India, the United Kingdom, and Japan have "deregulated" their airlines. In the past, these governments dictated airfares, route networks, and other operational requirements for each airline. Since deregulation, airlines have been largely free to negotiate their own operating arrangements with different airports, enter and exit routes easily, and to set airfares and supply flights according to market demand. The entry barriers for new airlines are lower in a deregulated market, and so the U.S. has seen hundreds of airlines start up (sometimes for only a brief operating period). This has produced far greater competition than before deregulation in most markets. The added competition, together with pricing freedom, means that new entrants often take market share with highly reduced rates that, to a limited degree, full service airlines must match. This is a major constraint on profitability for established carriers, which tend to have a higher cost base. As a result, profitability in a deregulated market is uneven for most airlines. These forces have caused some major airlines to go out of business, in addition to most of the poorly established new entrants. In the United States, the airline industry is dominated by four large firms. Because of industry consolidation, after fuel prices dropped considerably in 2015, very little of the savings was passed on to consumers. International Groups such as the International Civil Aviation Organization establish worldwide standards for safety and other vital concerns. Most international air traffic is regulated by bilateral agreements between countries, which designate specific carriers to operate on specific routes. The model of such an agreement was the Bermuda Agreement between the US and UK following World War II, which designated airports to be used for transatlantic flights and gave each government the authority to nominate carriers to operate routes.
Bilateral agreements are based on the "freedoms of the air", a group of generalized traffic rights ranging from the freedom to overfly a country to the freedom to provide domestic flights within a country (a very rarely granted right known as cabotage). Most agreements permit airlines to fly from their home country to designated airports in the other country: some also extend the freedom to provide continuing service to a third country, or to another destination in the other country while carrying passengers from overseas. In the 1990s, "open skies" agreements became more common. These agreements take many of these regulatory powers from state governments and open up international routes to further competition. Open skies agreements have met some criticism, particularly within the European Union, whose airlines would be at a comparative disadvantage with the United States' because of cabotage restrictions. Economy In 2017, 4.1 billion passengers were carried by airlines in 41.9 million commercial scheduled flights (an average payload of about 98 passengers), for 7.75 trillion passenger kilometres (an average trip of about 1,890 km) over 45,091 airline routes served globally. In 2016, air transport generated $704.4 billion of revenue, employed 10.2 million workers, supported 65.5 million jobs and $2.7 trillion of economic activity: 3.6% of the global GDP. In July 2016, the total weekly airline capacity was 181.1 billion Available Seat Kilometers (+6.9% compared to July 2015): 57.6bn in Asia-Pacific, 47.7bn in Europe, 46.2bn in North America, 12.2bn in the Middle East, 12.0bn in Latin America and 5.4bn in Africa. Costs Airlines have substantial fixed and operating costs to establish and maintain air services: labor, fuel, airplanes, engines, spares and parts, IT services and networks, airport equipment, airport handling services, booking commissions, advertising, catering, training, aviation insurance and other costs. Thus all but a small percentage of the income from ticket sales is paid out to a wide variety of external providers or internal cost centers. Moreover, the industry is structured so that airlines often act as tax collectors. Airline fuel is untaxed because of a series of treaties existing between countries. Ticket prices include a number of fees, taxes and surcharges beyond the control of airlines. Airlines are also responsible for enforcing government regulations. If airlines carry passengers without proper documentation on an international flight, they are responsible for returning them to the original country. Analysis of the 1992–1996 period shows that every player in the air transport chain is far more profitable than the airlines, which collect fees and revenues from ticket sales and pass them through to those other players. While airlines as a whole earned a 6% return on capital employed (2–3.5% less than the cost of capital), airports earned 10%, catering companies 10–13%, handling companies 11–14%, aircraft lessors 15%, aircraft manufacturers 16%, and global distribution companies more than 30%. There has been continuing cost competition from low-cost airlines. Many companies emulate Southwest Airlines in various respects. The lines between full-service and low-cost airlines have become blurred – e.g., with most "full service" airlines introducing baggage check fees despite Southwest not doing so. Many airlines in the U.S. and elsewhere have experienced business difficulty. U.S. 
airlines that have declared Chapter 11 bankruptcy since 1990 have included American Airlines, Continental Airlines (twice), Delta Air Lines, Northwest Airlines, Pan Am, United Airlines and US Airways (twice). Where an airline has established an engineering base at an airport, there may be considerable economic advantages in using that same airport as a preferred focus (or "hub") for its scheduled flights. Fuel hedging is a contractual tool used by transportation companies like airlines to reduce their exposure to volatile and potentially rising fuel costs. Several low-cost carriers such as Southwest Airlines adopt this practice. Southwest is credited with maintaining strong business profits between 1999 and the early 2000s due to its fuel hedging policy. Many other airlines are replicating Southwest's hedging policy to control their fuel costs. Operating costs for US major airlines are primarily aircraft operating expense (including jet fuel, aircraft maintenance, depreciation and aircrew) at 44%, servicing expense at 29% (traffic 11%, passenger 11% and aircraft 7%), reservations and sales at 14%, and overheads at 13% (administration 6% and advertising 2%). An average US major Boeing 757-200 flies 11.3 block hours per day and costs $2,550 per block hour: $923 of ownership, $590 of maintenance, $548 of fuel and $489 of crew; or $13.34 per 186 seats per block hour. For a Boeing 737-500, a low-cost carrier like Southwest has lower operating costs ($1,526 per block hour) than a full-service carrier like United ($2,974), and higher productivity, with 399,746 ASM per day against 264,284, resulting in a substantially lower unit cost per available seat mile (ASM). McKinsey observes that "newer technology, larger aircraft, and increasingly efficient operations continually drive down the cost of running an airline", from nearly 40 US cents per ASK at the beginning of the jet age, to just above 10 cents since 2000. Those improvements were passed on to the customer due to high competition: fares have been falling throughout the history of airlines. Revenue Airlines assign prices to their services in an attempt to maximize profitability. The pricing of airline tickets has become increasingly complicated over the years and is now largely determined by computerized yield management systems. Because of the complications in scheduling flights and maintaining profitability, airlines have many loopholes that can be used by the knowledgeable traveler. Many of these airfare secrets are becoming more and more known to the general public, so airlines are forced to make constant adjustments. Most airlines use differentiated pricing, a form of price discrimination, to sell air services at varying prices simultaneously to different segments. Factors influencing the price include the days remaining until departure, the booked load factor, the forecast of total demand by price point, competitive pricing in force, and variations by day of week of departure and by time of day. Carriers often accomplish this by dividing each cabin of the aircraft (first, business and economy) into a number of travel classes for pricing purposes. A complicating factor is that of origin-destination control ("O&D control"). Someone purchasing a ticket from Melbourne to Sydney (as an example) for A$200 is competing with someone else who wants to fly Melbourne to Los Angeles through Sydney on the same flight, and who is willing to pay A$1400. 
Should the airline prefer the $1400 passenger, or the $200 passenger plus a possible Sydney-Los Angeles passenger willing to pay $1300? Airlines have to make hundreds of thousands of similar pricing decisions daily. The advent of advanced computerized reservations systems in the late 1970s, most notably Sabre, allowed airlines to easily perform cost-benefit analyses on different pricing structures, leading to almost perfect price discrimination in some cases (that is, filling each seat on an aircraft at the highest price that can be charged without driving the consumer elsewhere). The intense nature of airfare pricing has led to the term "fare war" to describe efforts by airlines to undercut other airlines on competitive routes. Through computers, new airfares can be published quickly and efficiently to the airlines' sales channels. For this purpose the airlines use the Airline Tariff Publishing Company (ATPCO), which distributes the latest fares for more than 500 airlines to Computer Reservation Systems across the world. The extent of these pricing phenomena is strongest in "legacy" carriers. In contrast, low-fare carriers usually offer a pre-announced and simplified price structure, and sometimes quote prices for each leg of a trip separately. Computers also allow airlines to predict, with some accuracy, how many passengers will actually fly after making a reservation to fly. This allows airlines to overbook their flights enough to fill the aircraft while accounting for "no-shows", but not enough (in most cases) to force paying passengers off the aircraft for lack of seats. Stimulative pricing for low-demand flights, coupled with overbooking on high-demand flights, can help reduce this figure. This is especially crucial during tough economic times as airlines undertake massive cuts to ticket prices to retain demand. Over January/February 2018, the cheapest airline surveyed by price comparator rome2rio was now-defunct Tigerair Australia with $0.06/km followed by AirAsia X with $0.07/km, while the most expensive was Charterlines, Inc. with $1.26/km followed by Buddha Air with $1.18/km. According to IATA, global airline industry revenue was $754 billion in 2017 for a $38.4 billion collective profit, and was forecast to rise by 10.7% to $834 billion in 2018 with a $33.8 billion profit, down by 12% due to rising jet fuel and labor costs. The demand for air transport is less elastic for longer flights than for shorter flights, and more elastic for leisure travel than for business travel. Airlines often have a strong seasonality, with traffic low in winter and peaking in summer. In Europe the most extreme market is the Greek islands, where July/August traffic is more than ten times the winter level; Jet2 is the most seasonal among low-cost carriers, with July having seven times the January traffic, whereas legacy carriers vary much less, at only about 85–115% of their average. Assets and financing Airline financing is quite complex, since airlines are highly leveraged operations. Not only must they purchase (or lease) new airliner bodies and engines regularly, they must make major long-term fleet decisions with the goal of meeting the demands of their markets while producing a fleet that is relatively economical to operate and maintain; compare Southwest Airlines, with its reliance on a single airplane type (the Boeing 737 and derivatives), with the now defunct Eastern Air Lines, which operated 17 different aircraft types, each with varying pilot, engine, maintenance, and support needs. 
A second financial issue is that of hedging oil and fuel purchases, which are usually second only to labor in their relative cost to the company. However, in periods of high fuel prices, fuel has become the largest cost to an airline. Legacy airlines, compared with new entrants, have been hit harder by rising fuel prices partly due to the running of older, less fuel-efficient aircraft. While hedging instruments can be expensive, they can easily pay for themselves many times over in periods of increasing fuel costs, such as in the 2000–2005 period. In view of the congestion apparent at many international airports, the ownership of slots at certain airports (the right to take-off or land an aircraft at a particular time of day or night) has become a significant tradable asset for many airlines. Clearly, take-off slots at popular times of the day can be critical in attracting the more profitable business traveler to a given airline's flight and in establishing a competitive advantage against a competing airline. If a particular city has two or more airports, market forces will tend to attract the less profitable routes, or those on which competition is weakest, to the less congested airport, where slots are likely to be more available and therefore cheaper. For example, Reagan National Airport attracts profitable routes due partly to its congestion, leaving less-profitable routes to Baltimore-Washington International Airport and Dulles International Airport. Other factors, such as surface transport facilities and onward connections, will also affect the relative appeal of different airports, and some long-distance flights may need to operate from the one with the longest runway. For example, LaGuardia Airport is the preferred airport for most of Manhattan due to its proximity, while long-distance routes must use John F. Kennedy International Airport's longer runways. Airline alliances The first airline alliance was formed in the 1930s when Pan Am and its subsidiary, Panair do Brasil, agreed to codeshare routes in Latin America when they overlapped with each other. Codesharing involves one airline selling tickets for another airline's flights under its own airline code. An early example of this was Japan Airlines' (JAL) codesharing partnership with Aeroflot in the 1960s on Tokyo–Moscow flights; Aeroflot operated the flights using Aeroflot aircraft, but JAL sold tickets for the flights as if they were JAL flights. Another example was the Austrian–Sabena partnership on the Vienna–Brussels–New York/JFK route during the late '60s, using a Sabena Boeing 707 with Austrian livery. Since airline reservation requests are often made by city-pair (such as "show me flights from Chicago to Düsseldorf"), an airline that can codeshare with another airline for a variety of routes might be able to be listed as indeed offering a Chicago–Düsseldorf flight. The passenger is advised, however, that airline no. 1 operates the flight from, say, Chicago to Amsterdam, and airline no. 2 operates the continuing flight (on a different airplane, sometimes from another terminal) to Düsseldorf. Thus the primary rationale for code sharing is to expand one's service offerings in city-pair terms to increase sales. A more recent development is the airline alliance, which became prevalent in the late 1990s. These alliances can act as virtual mergers to get around government restrictions. The largest are Star Alliance, SkyTeam and Oneworld, and these accounted for over 60% of global commercial air traffic. 
Alliances of airlines coordinate their passenger service programs (such as lounges and frequent-flyer programs), offer special interline tickets and often engage in extensive codesharing (sometimes systemwide). These are increasingly integrated business combinations (sometimes including cross-equity arrangements) in which products, service standards, schedules, and airport facilities are standardized and combined for higher efficiency. One of the first airlines to start an alliance with another airline was KLM, which partnered with Northwest Airlines. Both airlines later entered the SkyTeam alliance after the merger of KLM and Air France in 2004. Often the companies combine IT operations, or purchase fuel and aircraft as a bloc to achieve higher bargaining power. However, the alliances have been most successful at purchasing invisible supplies and services, such as fuel. Airlines usually prefer to purchase items visible to their passengers to differentiate themselves from local competitors. If an airline's main domestic competitor flies Boeing airliners, then the airline may prefer to use Airbus aircraft regardless of what the rest of the alliance chooses. Largest airlines The world's largest airlines can be defined in several ways. American Airlines Group was the largest by fleet size, passengers carried and revenue passenger miles. Delta Air Lines was the largest by revenue, assets value and market capitalization. Lufthansa Group was the largest by number of employees, FedEx Express by freight tonne-kilometres, Turkish Airlines by number of countries served and UPS Airlines by number of destinations served (though United Airlines was the largest passenger airline by number of destinations served). State support Historically, air travel has survived largely through state support, whether in the form of equity or subsidies. The airline industry as a whole has made a cumulative loss during its 100-year history. One argument is that positive externalities, such as higher growth due to global mobility, outweigh the microeconomic losses and justify continuing government intervention. A historically high level of government intervention in the airline industry can be seen as part of a wider political consensus on strategic forms of transport, such as highways and railways, both of which receive public funding in most parts of the world. Although many countries continue to operate state-owned or parastatal airlines, many large airlines today are privately owned and are therefore governed by microeconomic principles to maximize shareholder profit. In December 1991, the collapse of Pan Am, an airline often credited for shaping the international airline industry, highlighted the financial complexities faced by major airline companies. Following the 1978 deregulation, U.S. carriers failed to make an aggregate profit in 12 of 31 years, including four years in which combined losses amounted to $10 billion, but they rebounded with eight consecutive years of profits from 2010, including four years with profits of over $10 billion. To increase profitability, they drop loss-making routes, avoid fare wars and market-share battles, limit capacity growth, and add hub feed with regional jets. They change schedules to create more connections, buy used aircraft, reduce international frequencies and leverage partnerships to optimize capacities and benefit from overseas connectivity. Environment Aircraft engines emit noise pollution, gases and particulate emissions, and contribute to global dimming. 
Growth of the industry in recent years has raised a number of ecological questions. Domestic air transport grew in China at 15.5 percent annually from 2001 to 2006. The rate of air travel globally increased at 3.7 percent per year over the same time. In the EU, greenhouse gas emissions from aviation increased by 87% between 1990 and 2006. However, this must be set against the growth in flights: in the UK alone, terminal passengers increased from about 100 million to about 250 million between 1990 and 2006. According to AEA reports, 750 million passengers travel on European airlines every year, and these airlines also carry 40% of the value of merchandise moving in and out of Europe. Even without pressure from "green activists", airlines, aiming for lower ticket prices, generally do what they can to cut fuel consumption (and the gas emissions connected with it). Further, some reports conclude that the last piston-powered aircraft were as fuel-efficient as the average jet in 2005. Despite continuing efficiency improvements from the major aircraft manufacturers, the expanding demand for global air travel has resulted in growing greenhouse gas (GHG) emissions. Currently, the aviation sector, including US domestic and global international travel, makes approximately 1.6 percent of global anthropogenic GHG emissions per annum. North America accounts for nearly 40 percent of the world's GHG emissions from aviation fuel use. Carbon dioxide (CO2) emissions from the jet fuel burned per passenger on an average airline flight are about 353 kilograms (776 pounds). Loss of natural habitat potential associated with the jet fuel burned per passenger on an airline flight is estimated to be 250 square meters (2700 square feet). In the context of climate change and peak oil, there is a debate about possible taxation of air travel and the inclusion of aviation in an emissions trading scheme, with a view to ensuring that the total external costs of aviation are taken into account. The airline industry is responsible for about 11 percent of greenhouse gases emitted by the U.S. transportation sector. Boeing estimates that biofuels could reduce flight-related greenhouse-gas emissions by 60 to 80 percent. One proposed solution would be blending algae fuels with existing jet fuel: Boeing and Air New Zealand are collaborating with leading Brazilian biofuel maker Tecbio, New Zealand's Aquaflow Bionomic and other jet biofuel developers around the world. Virgin Atlantic and Virgin Green Fund are looking into the technology as part of a biofuel initiative. KLM made the first commercial flight with biofuel in 2009. There are projects on electric aircraft, some of which were fully operational as of 2013. Call signs Each operator of a scheduled or charter flight uses an airline call sign when communicating with airports or air traffic control. Most of these call-signs are derived from the airline's trade name, but for reasons of history, marketing, or the need to reduce ambiguity in spoken English (so that pilots do not mistakenly make navigational decisions based on instructions issued to a different aircraft), some airlines and air forces use call-signs less obviously connected with their trading name. For example, British Airways uses a Speedbird call-sign, named after the logo of one of its predecessors, BOAC, while SkyEurope used Relax. Personnel The various types of airline personnel include flight crew, responsible for the operation of the aircraft. 
Flight crew members include: pilots (captain and first officer; some older aircraft also required a flight engineer and/or a navigator); flight attendants (led by a purser on larger aircraft); and in-flight security personnel on some airlines (most notably El Al). Ground crew, responsible for operations at airports, include: aerospace and avionics engineers, responsible for certifying the aircraft for flight and managing aircraft maintenance; aerospace engineers, responsible for airframe, powerplant and electrical systems maintenance; avionics engineers, responsible for avionics and instruments maintenance; airframe and powerplant technicians; electrical system technicians, responsible for maintenance of electrical systems; flight dispatchers; baggage handlers; ramp agents; remote centralized weight-and-balance staff; gate agents; ticket agents; passenger service agents (such as airline lounge employees); reservation agents, usually (but not always) at facilities outside the airport; and crew schedulers. Airlines follow a corporate structure where each broad area of operations (such as maintenance, flight operations (including flight safety), and passenger service) is supervised by a vice president. Larger airlines often appoint vice presidents to oversee each of the airline's hubs as well. Airlines employ lawyers to deal with regulatory procedures and other administrative tasks. Trends The pattern of ownership has shifted toward privatization since the mid-1980s; that is, ownership has gradually moved from governments to private investors and organizations. This occurs as regulators permit greater freedom and non-government ownership, in steps that are usually decades apart. This pattern is not seen for all airlines in all regions. Many major airlines operating between the 1940s and 1980s were government-owned or government-established. However, most airlines from the earliest days of air travel in the 1920s and 1930s were personal businesses. Growth rates are not consistent in all regions, but countries with a deregulated airline industry have more competition and greater pricing freedom. This results in lower fares and sometimes dramatic spurts in traffic growth. The U.S., Australia, Canada, Japan, Brazil, India and other markets exhibit this trend. The industry has been observed to be cyclical in its financial performance. Four or five years of poor earnings precede five or six years of improvement. But profitability even in the good years is generally low, in the range of 2–3% net profit after interest and tax. In times of profit, airlines lease new generations of airplanes and upgrade services in response to higher demand. Since 1980, the industry has not earned back the cost of capital during the best of times. Conversely, in bad times losses can be dramatically worse. Warren Buffett in 1999 said "the money that had been made since the dawn of aviation by all of this country's airline companies was zero. Absolutely zero." As in many mature industries, consolidation is a trend. Airline groupings may consist of limited bilateral partnerships, long-term, multi-faceted alliances between carriers, equity arrangements, mergers, or takeovers. Since governments often restrict ownership and merger between companies in different countries, most consolidation takes place within a country. In the U.S., over 200 airlines have merged, been taken over, or gone out of business since the Airline Deregulation Act in 1978. 
Many international airline managers are lobbying their governments to permit greater consolidation to achieve higher economy and efficiency. Types There are several types of passenger airlines, mainly: Mainline airlines operate flights by the airline's main operating unit, rather than by regional affiliates or subsidiaries. Regional airlines, non-"mainline" airlines that operate regional aircraft; regionals typically operate over shorter non-intercontinental distances, often as feeder services for legacy mainline networks. Low-cost carriers, giving a "basic", "no-frills" and perceived inexpensive service. Business class airline, an airline aimed at the business traveler, featuring all business class seating and amenities. Charter airlines, operating outside regular schedule intervals. Flag carriers, the historically nationally owned airlines that were considered representative of the country overseas. Legacy carriers, US carriers that predate the Airline Deregulation Act of 1978. Major airlines of the United States, airlines with at least $1 billion in revenues. In addition, there are several cargo-only airlines.
Technology
Aviation
null
1962
https://en.wikipedia.org/wiki/Apparent%20magnitude
Apparent magnitude
Apparent magnitude (m) is a measure of the brightness of a star, astronomical object or other celestial objects like artificial satellites. Its value depends on its intrinsic luminosity, its distance, and any extinction of the object's light caused by interstellar dust along the line of sight to the observer. Unless stated otherwise, the word magnitude in astronomy usually refers to a celestial object's apparent magnitude. The magnitude scale likely dates to before the ancient Roman astronomer Claudius Ptolemy, whose star catalog popularized the system by listing stars from 1st magnitude (brightest) to 6th magnitude (dimmest). The modern scale was mathematically defined to closely match this historical system by Norman Pogson in 1856. The scale is reverse logarithmic: the brighter an object is, the lower its magnitude number. A difference of 1.0 in magnitude corresponds to a brightness ratio of the fifth root of 100 (100^(1/5)), or about 2.512. For example, a magnitude 2.0 star is 2.512 times as bright as a magnitude 3.0 star, 6.31 times as bright as a magnitude 4.0 star, and 100 times as bright as a magnitude 7.0 star. The brightest astronomical objects have negative apparent magnitudes: for example, Venus at −4.2 or Sirius at −1.46. The faintest stars visible with the naked eye on the darkest night have apparent magnitudes of about +6.5, though this varies depending on a person's eyesight and with altitude and atmospheric conditions. The apparent magnitudes of known objects range from the Sun at −26.832 to objects in deep Hubble Space Telescope images of magnitude +31.5. The measurement of apparent magnitude is called photometry. Photometric measurements are made in the ultraviolet, visible, or infrared wavelength bands using standard passband filters belonging to photometric systems such as the UBV system or the Strömgren uvbyβ system. Measurement in the V-band may be referred to as the apparent visual magnitude. Absolute magnitude is a related quantity which measures the luminosity that a celestial object emits, rather than its apparent brightness when observed, and is expressed on the same reverse logarithmic scale. Absolute magnitude is defined as the apparent magnitude that a star or object would have if it were observed from a distance of 10 parsecs (about 32.6 light-years). Therefore, it is of greater use in stellar astrophysics since it refers to a property of a star regardless of how close it is to Earth. But in observational astronomy and popular stargazing, references to "magnitude" are understood to mean apparent magnitude. Amateur astronomers commonly express the darkness of the sky in terms of limiting magnitude, i.e. the apparent magnitude of the faintest star they can see with the naked eye. This can be useful as a way of monitoring the spread of light pollution. Apparent magnitude is technically a measure of illuminance, which can also be measured in photometric units such as lux. History The scale used to indicate magnitude originates in the Hellenistic practice of dividing stars visible to the naked eye into six magnitudes. The brightest stars in the night sky were said to be of first magnitude (m = 1), whereas the faintest were of sixth magnitude (m = 6), which is the limit of human visual perception (without the aid of a telescope). Each grade of magnitude was considered twice the brightness of the following grade (a logarithmic scale), although that ratio was subjective as no photodetectors existed. This rather crude scale for the brightness of stars was popularized by Ptolemy in his Almagest and is generally believed to have originated with Hipparchus. 
This cannot be proved or disproved because Hipparchus's original star catalogue is lost. The only preserved text by Hipparchus himself (a commentary to Aratus) clearly documents that he did not have a system to describe brightness with numbers: he always uses terms like "big" or "small", "bright" or "faint" or even descriptions such as "visible at full moon". In 1856, Norman Robert Pogson formalized the system by defining a first magnitude star as a star that is 100 times as bright as a sixth-magnitude star, thereby establishing the logarithmic scale still in use today. This implies that a star of magnitude m is about 2.512 times as bright as a star of magnitude m + 1. This figure, the fifth root of 100, became known as Pogson's Ratio. The 1884 Harvard Photometry and 1886 Potsdamer Durchmusterung star catalogs popularized Pogson's ratio, and eventually it became a de facto standard in modern astronomy to describe differences in brightness. Defining and calibrating what magnitude 0.0 means is difficult, and different types of measurements which detect different kinds of light (possibly by using filters) have different zero points. Pogson's original 1856 paper defined magnitude 6.0 to be the faintest star the unaided eye can see, but the true limit for the faintest possible visible star varies depending on the atmosphere and how high a star is in the sky. The Harvard Photometry used an average of 100 stars close to Polaris to define magnitude 5.0. Later, the Johnson UBV photometric system defined multiple types of photometric measurements with different filters, where magnitude 0.0 for each filter is defined to be the average of six stars with the same spectral type as Vega. This was done so the color index of these stars would be 0. Although this system is often called "Vega normalized", Vega is slightly dimmer than the six-star average used to define magnitude 0.0, meaning Vega's magnitude is normalized to 0.03 by definition. With the modern magnitude systems, brightness is described using Pogson's ratio. In practice, magnitude numbers rarely go above 30 before stars become too faint to detect. While Vega is close to magnitude 0, there are four brighter stars in the night sky at visible wavelengths (and more at infrared wavelengths) as well as the bright planets Venus, Mars, and Jupiter, and since brighter means smaller magnitude, these must be described by negative magnitudes. For example, Sirius, the brightest star of the celestial sphere, has a magnitude of −1.4 in the visible. Negative magnitudes for other very bright astronomical objects can be found in the table below. Astronomers have developed other photometric zero point systems as alternatives to Vega normalized systems. The most widely used is the AB magnitude system, in which photometric zero points are based on a hypothetical reference spectrum having constant flux per unit frequency interval, rather than using a stellar spectrum or blackbody curve as the reference. The AB magnitude zero point is defined such that an object's AB and Vega-based magnitudes will be approximately equal in the V filter band. However, the AB magnitude system is defined assuming an idealized detector measuring only one wavelength of light, while real detectors accept energy from a range of wavelengths. Measurement Precision measurement of magnitude (photometry) requires calibration of the photographic or (usually) electronic detection apparatus. 
This generally involves contemporaneous observation, under identical conditions, of standard stars whose magnitude using that spectral filter is accurately known. Moreover, as the amount of light actually received by a telescope is reduced due to transmission through the Earth's atmosphere, the airmasses of the target and calibration stars must be taken into account. Typically one would observe a few different stars of known magnitude which are sufficiently similar. Calibrator stars close in the sky to the target are favoured (to avoid large differences in the atmospheric paths). If those stars have somewhat different zenith angles (altitudes) then a correction factor as a function of airmass can be derived and applied to the airmass at the target's position. Such calibration obtains the brightness as would be observed from above the atmosphere, where apparent magnitude is defined. The apparent magnitude scale in astronomy reflects the received power of stars and not their amplitude. Newcomers should consider using the relative brightness measure in astrophotography to adjust exposure times between stars. Apparent magnitude also integrates over the entire object, regardless of its focus, and this needs to be taken into account when scaling exposure times for objects with significant apparent size, like the Sun, Moon and planets. For example, directly scaling the exposure time from the Moon to the Sun works because they are approximately the same size in the sky. However, scaling the exposure from the Moon to Saturn would result in an overexposure if the image of Saturn takes up a smaller area on the sensor than the Moon did (at the same magnification, or more generally, f/#). Calculations The dimmer an object appears, the higher the numerical value given to its magnitude, with a difference of 5 magnitudes corresponding to a brightness factor of exactly 100. Therefore, the magnitude m, in the spectral band x, would be given by m_x = −5 log_100(F_x / F_x,0), which is more commonly expressed in terms of common (base-10) logarithms as m_x = −2.5 log10(F_x / F_x,0), where F_x is the observed irradiance using spectral filter x, and F_x,0 is the reference flux (zero-point) for that photometric filter. Since an increase of 5 magnitudes corresponds to a decrease in brightness by a factor of exactly 100, each magnitude increase implies a decrease in brightness by the factor 100^(1/5) ≈ 2.512 (Pogson's ratio). Inverting the above formula, a magnitude difference m1 − m2 = Δm implies a brightness factor of F2/F1 = 100^(Δm/5) = 10^(0.4 Δm). Example: Sun and Moon What is the ratio in brightness between the Sun and the full Moon? The apparent magnitude of the Sun is −26.832 (brighter), and the mean magnitude of the full moon is −12.74 (dimmer). Difference in magnitude: Δm = −12.74 − (−26.832) = 14.09. Brightness factor: 10^(0.4 × 14.09) ≈ 4.3 × 10^5. The Sun appears to be approximately 430,000 times as bright as the full Moon. Magnitude addition Sometimes one might wish to add brightness. For example, photometry on closely separated double stars may only be able to produce a measurement of their combined light output. To find the combined magnitude of that double star knowing only the magnitudes of the individual components, this can be done by adding the brightnesses (in linear units) corresponding to each magnitude: 10^(−0.4 m_f) = 10^(−0.4 m_1) + 10^(−0.4 m_2). Solving for m_f yields m_f = −2.5 log10(10^(−0.4 m_1) + 10^(−0.4 m_2)), where m_f is the resulting magnitude after adding the brightnesses referred to by m_1 and m_2. 
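A short numerical sketch may make the relations above concrete. This is an illustrative Python snippet (the function names are ours, not part of any standard library), using the Sun and full-Moon magnitudes quoted in the example:

```python
import math

def brightness_ratio(m1, m2):
    """How many times brighter an object of magnitude m1 is than one of magnitude m2
    (smaller magnitude means brighter)."""
    return 10 ** (0.4 * (m2 - m1))

def combined_magnitude(m1, m2):
    """Apparent magnitude of two unresolved sources, obtained by summing linear fluxes."""
    return -2.5 * math.log10(10 ** (-0.4 * m1) + 10 ** (-0.4 * m2))

# Sun vs. full Moon, using the magnitudes quoted above
sun, moon = -26.832, -12.74
print(round(brightness_ratio(sun, moon)))       # ~430,000

# Two close stars of magnitude 5.0 each appear as one source of magnitude ~4.25
print(round(combined_magnitude(5.0, 5.0), 2))
```

The second print illustrates the magnitude-addition formula: doubling the flux brightens the combined source by about 0.75 magnitudes, consistent with Pogson's ratio.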
Apparent bolometric magnitude While magnitude generally refers to a measurement in a particular filter band corresponding to some range of wavelengths, the apparent or absolute bolometric magnitude (mbol) is a measure of an object's apparent or absolute brightness integrated over all wavelengths of the electromagnetic spectrum (also known as the object's irradiance or power, respectively). The zero point of the apparent bolometric magnitude scale is based on the definition that an apparent bolometric magnitude of 0 mag is equivalent to a received irradiance of 2.518×10−8 watts per square metre (W·m−2). Absolute magnitude While apparent magnitude is a measure of the brightness of an object as seen by a particular observer, absolute magnitude is a measure of the intrinsic brightness of an object. Flux decreases with distance according to an inverse-square law, so the apparent magnitude of a star depends on both its absolute brightness and its distance (and any extinction). For example, a star at one distance will have the same apparent magnitude as a star four times as bright at twice that distance. In contrast, the intrinsic brightness of an astronomical object does not depend on the distance of the observer or any extinction. The absolute magnitude M of a star or astronomical object is defined as the apparent magnitude it would have as seen from a distance of 10 parsecs. The absolute magnitude of the Sun is 4.83 in the V band (visual), 4.68 in the Gaia satellite's G band (green) and 5.48 in the B band (blue). In the case of a planet or asteroid, the absolute magnitude rather means the apparent magnitude it would have if it were one astronomical unit (1 AU) from both the observer and the Sun, and fully illuminated at maximum opposition (a configuration that is only theoretically achievable, with the observer situated on the surface of the Sun). Standard reference values The magnitude scale is a reverse logarithmic scale. A common misconception is that the logarithmic nature of the scale is because the human eye itself has a logarithmic response. In Pogson's time this was thought to be true (see Weber–Fechner law), but it is now believed that the response is a power law. Magnitude is complicated by the fact that light is not monochromatic. The sensitivity of a light detector varies according to the wavelength of the light, and the way it varies depends on the type of light detector. For this reason, it is necessary to specify how the magnitude is measured for the value to be meaningful. For this purpose the UBV system is widely used, in which the magnitude is measured in three different wavelength bands: U (centred at about 350 nm, in the near ultraviolet), B (about 435 nm, in the blue region) and V (about 555 nm, in the middle of the human visual range in daylight). The V band was chosen for spectral purposes and gives magnitudes closely corresponding to those seen by the human eye. When an apparent magnitude is discussed without further qualification, the V magnitude is generally understood. Because cooler stars, such as red giants and red dwarfs, emit little energy in the blue and UV regions of the spectrum, their power is often under-represented by the UBV scale. Indeed, some L and T class stars have an estimated magnitude of well over 100, because they emit extremely little visible light, but are strongest in infrared. Measures of magnitude need cautious treatment and it is extremely important to measure like with like. 
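Building on the apparent bolometric zero point quoted above (0 mag corresponding to 2.518×10−8 W·m−2), the conversion between apparent bolometric magnitude and received irradiance is a one-line calculation. A minimal, illustrative Python sketch (function names are ours):

```python
import math

# Zero point of the apparent bolometric magnitude scale, as quoted in the text:
# m_bol = 0 corresponds to 2.518e-8 W/m^2.
E0 = 2.518e-8  # W/m^2

def irradiance_from_mbol(m_bol):
    """Received irradiance (W/m^2) for a given apparent bolometric magnitude."""
    return E0 * 10 ** (-0.4 * m_bol)

def mbol_from_irradiance(irradiance):
    """Apparent bolometric magnitude for a given received irradiance (W/m^2)."""
    return -2.5 * math.log10(irradiance / E0)

# The Sun's apparent bolometric magnitude of -26.832 (quoted earlier) should give
# roughly the solar constant (~1361 W/m^2).
print(round(irradiance_from_mbol(-26.832)))
```

The printed value, about 1361 W/m², matches the total solar irradiance at 1 AU, which is a useful consistency check on the zero point.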
On early-20th-century and older orthochromatic (blue-sensitive) photographic film, the relative brightnesses of the blue supergiant Rigel and the red supergiant Betelgeuse (an irregular variable star, at maximum) are reversed compared to what human eyes perceive, because this archaic film is more sensitive to blue light than it is to red light. Magnitudes obtained from this method are known as photographic magnitudes, and are now considered obsolete. For objects within the Milky Way with a given absolute magnitude, 5 is added to the apparent magnitude for every tenfold increase in the distance to the object. For objects at very great distances (far beyond the Milky Way), this relationship must be adjusted for redshifts and for non-Euclidean distance measures due to general relativity. For planets and other Solar System bodies, the apparent magnitude is derived from the body's phase curve and the distances to the Sun and observer. List of apparent magnitudes Some of the listed magnitudes are approximate. Telescope sensitivity depends on observing time, optical bandpass, and interfering light from scattering and airglow.
Physical sciences
Basics
Astronomy
1963
https://en.wikipedia.org/wiki/Absolute%20magnitude
Absolute magnitude
In astronomy, absolute magnitude (M) is a measure of the luminosity of a celestial object on an inverse logarithmic astronomical magnitude scale; the more luminous (intrinsically bright) an object, the lower its magnitude number. An object's absolute magnitude is defined to be equal to the apparent magnitude that the object would have if it were viewed from a distance of exactly 10 parsecs (about 32.6 light-years), without extinction (or dimming) of its light due to absorption by interstellar matter and cosmic dust. By hypothetically placing all objects at a standard reference distance from the observer, their luminosities can be directly compared among each other on a magnitude scale. For Solar System bodies that shine in reflected light, a different definition of absolute magnitude (H) is used, based on a standard reference distance of one astronomical unit. Absolute magnitudes of stars generally range from approximately −10 to +20. The absolute magnitudes of galaxies can be much lower (brighter). The more luminous an object, the smaller the numerical value of its absolute magnitude. A difference of 5 magnitudes between the absolute magnitudes of two objects corresponds to a ratio of 100 in their luminosities, and a difference of n magnitudes in absolute magnitude corresponds to a luminosity ratio of 100^(n/5). For example, a star of absolute magnitude MV = 3.0 would be 100 times as luminous as a star of absolute magnitude MV = 8.0 as measured in the V filter band. The Sun has absolute magnitude MV = +4.83. Highly luminous objects can have negative absolute magnitudes: for example, the Milky Way galaxy has an absolute B magnitude of about −20.8. As with all astronomical magnitudes, the absolute magnitude can be specified for different wavelength ranges corresponding to specified filter bands or passbands; for stars a commonly quoted absolute magnitude is the absolute visual magnitude, which uses the visual (V) band of the spectrum (in the UBV photometric system). Absolute magnitudes are denoted by a capital M, with a subscript representing the filter band used for measurement, such as MV for absolute magnitude in the V band. An object's absolute bolometric magnitude (Mbol) represents its total luminosity over all wavelengths, rather than in a single filter band, as expressed on a logarithmic magnitude scale. To convert from an absolute magnitude in a specific filter band to absolute bolometric magnitude, a bolometric correction (BC) is applied. Stars and galaxies In stellar and galactic astronomy, the standard distance is 10 parsecs (about 32.616 light-years, 308.57 petameters or 308.57 trillion kilometres). A star at 10 parsecs has a parallax of 0.1″ (100 milliarcseconds). Galaxies (and other extended objects) are much larger than 10 parsecs; their light is radiated over an extended patch of sky, and their overall brightness cannot be directly observed from relatively short distances, but the same convention is used. A galaxy's magnitude is defined by measuring all the light radiated over the entire object, treating that integrated brightness as the brightness of a single point-like or star-like source, and computing the magnitude of that point-like source as it would appear if observed at the standard 10 parsecs distance. Consequently, the absolute magnitude of any object equals the apparent magnitude it would have if it were 10 parsecs away. 
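The definition above translates directly into the standard distance-modulus relation, M = m − 5 log10(d / 10 pc), ignoring extinction. Here is a minimal, illustrative Python sketch of that relation; the function names and the worked numbers are hypothetical examples, not values taken from the article:

```python
import math

def absolute_from_apparent(m, distance_pc):
    """Absolute magnitude M from apparent magnitude m and distance in parsecs,
    ignoring extinction: M = m - 5*log10(d / 10 pc)."""
    return m - 5 * math.log10(distance_pc / 10.0)

def apparent_from_absolute(M, distance_pc):
    """Apparent magnitude of an object of absolute magnitude M seen from distance_pc parsecs."""
    return M + 5 * math.log10(distance_pc / 10.0)

# Hypothetical example: a star of apparent magnitude 7.0 at 100 pc has M = 2.0,
# i.e. it would appear as a magnitude 2.0 star if moved to the standard 10 pc distance.
print(absolute_from_apparent(7.0, 100.0))
```

Each factor of 10 in distance changes the apparent magnitude by 5, which is the same "+5 per tenfold increase in distance" rule stated for Milky Way objects in the previous article.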
Some stars visible to the naked eye have such a low absolute magnitude that they would appear bright enough to outshine the planets and cast shadows if they were at 10 parsecs from the Earth. Examples include Rigel (−7.8), Deneb (−8.4), Naos (−6.2), and Betelgeuse (−5.8). For comparison, Sirius has an absolute magnitude of only 1.4, which is still brighter than the Sun, whose absolute visual magnitude is 4.83. The Sun's absolute bolometric magnitude is set arbitrarily, usually at 4.75. Absolute magnitudes of stars generally range from approximately −10 to +20. The absolute magnitudes of galaxies can be much lower (brighter). For example, the giant elliptical galaxy M87 has an absolute magnitude of −22 (i.e. as bright as about 60,000 stars of magnitude −10). Some active galactic nuclei (quasars like CTA-102) can reach absolute magnitudes in excess of −32, making them the most luminous persistent objects in the observable universe, although these objects can vary in brightness over astronomically short timescales. At the extreme end, the optical afterglow of the gamma ray burst GRB 080319B reached, according to one paper, an absolute r magnitude brighter than −38 for a few tens of seconds. Apparent magnitude The Greek astronomer Hipparchus established a numerical scale to describe the brightness of each star appearing in the sky. The brightest stars in the sky were assigned an apparent magnitude m = 1, and the dimmest stars visible to the naked eye are assigned m = 6. The difference between them corresponds to a factor of 100 in brightness. For objects within the immediate neighborhood of the Sun, the absolute magnitude M and apparent magnitude m from any distance d (in parsecs, with 1 pc = 3.2616 light-years) are related by 100^((m−M)/5) = F_10 / F = (d / 10 pc)^2, where F is the radiant flux measured at distance d (in parsecs) and F_10 the radiant flux measured at distance 10 pc. Using the common logarithm, the equation can be written as M = m − 5 log10(d) + 5 = m − 5(log10(d) − 1), where it is assumed that extinction from gas and dust is negligible. Typical extinction rates within the Milky Way galaxy are 1 to 2 magnitudes per kiloparsec, when dark clouds are taken into account. For objects at very large distances (outside the Milky Way) the luminosity distance (distance defined using luminosity measurements) must be used instead of d, because the Euclidean approximation is invalid for distant objects. Instead, general relativity must be taken into account. Moreover, the cosmological redshift complicates the relationship between absolute and apparent magnitude, because the radiation observed was shifted into the red range of the spectrum. To compare the magnitudes of very distant objects with those of local objects, a K correction might have to be applied to the magnitudes of the distant objects. The absolute magnitude can also be written in terms of the apparent magnitude m and stellar parallax p (in arcseconds): M = m + 5(log10(p) + 1), or using apparent magnitude m and distance modulus μ: M = m − μ. Examples Rigel has a visual magnitude of 0.12 and a distance of about 860 light-years (about 264 pc): M = 0.12 − 5(log10(264) − 1) ≈ −7.0. Vega has a parallax of 0.129″, and an apparent magnitude of 0.03: M = 0.03 + 5(log10(0.129) + 1) ≈ +0.6. The Black Eye Galaxy has a visual magnitude of 9.36 and a distance modulus of 31.06: M = 9.36 − 31.06 = −21.7. Bolometric magnitude The absolute bolometric magnitude (Mbol) takes into account electromagnetic radiation at all wavelengths. It includes those unobserved due to instrumental passband, the Earth's atmospheric absorption, and extinction by interstellar dust. It is defined based on the luminosity of the stars. In the case of stars with few observations, it must be computed assuming an effective temperature. 
Classically, the difference in bolometric magnitude is related to the luminosity ratio according to: Mbol,star − Mbol,Sun = −2.5 log10(Lstar / LSun), which makes by inversion: Lstar / LSun = 10^(0.4(Mbol,Sun − Mbol,star)), where LSun is the Sun's luminosity (bolometric luminosity), Lstar is the star's luminosity (bolometric luminosity), Mbol,Sun is the bolometric magnitude of the Sun, and Mbol,star is the bolometric magnitude of the star. In August 2015, the International Astronomical Union passed Resolution B2 defining the zero points of the absolute and apparent bolometric magnitude scales in SI units for power (watts) and irradiance (W/m2), respectively. Although bolometric magnitudes had been used by astronomers for many decades, there had been systematic differences in the absolute magnitude-luminosity scales presented in various astronomical references, and no international standardization. This led to systematic differences in bolometric correction scales. Combined with incorrect assumed absolute bolometric magnitudes for the Sun, this could lead to systematic errors in estimated stellar luminosities (and other stellar properties, such as radii or ages, which rely on stellar luminosity to be calculated). Resolution B2 defines an absolute bolometric magnitude scale where Mbol = 0 corresponds to luminosity L0 = 3.0128×10^28 W, with the zero point luminosity L0 set such that the Sun (with nominal luminosity 3.828×10^26 W) corresponds to absolute bolometric magnitude Mbol,Sun = 4.74. Placing a radiation source (e.g. star) at the standard distance of 10 parsecs, it follows that the zero point of the apparent bolometric magnitude scale corresponds to irradiance E0 = 2.518×10^−8 W·m^−2. Using the IAU 2015 scale, the nominal total solar irradiance ("solar constant") measured at 1 astronomical unit (about 1361 W·m^−2) corresponds to an apparent bolometric magnitude of the Sun of −26.832. Following Resolution B2, the relation between a star's absolute bolometric magnitude and its luminosity is no longer directly tied to the Sun's (variable) luminosity: Mbol = −2.5 log10(Lstar / L0), where Lstar is the star's luminosity (bolometric luminosity) in watts, L0 is the zero point luminosity 3.0128×10^28 W, and Mbol is the bolometric magnitude of the star. The new IAU absolute magnitude scale permanently disconnects the scale from the variable Sun. However, on this SI power scale, the nominal solar luminosity corresponds closely to Mbol = 4.74, a value that was commonly adopted by astronomers before the 2015 IAU resolution. The luminosity of the star in watts can be calculated as a function of its absolute bolometric magnitude as: Lstar = L0 × 10^(−0.4 Mbol), using the variables as defined previously. Solar System bodies (H) For planets and asteroids, a definition of absolute magnitude that is more meaningful for non-stellar objects is used. The absolute magnitude, commonly called H, is defined as the apparent magnitude that the object would have if it were one astronomical unit (AU) from both the Sun and the observer, and in conditions of ideal solar opposition (an arrangement that is impossible in practice). Because Solar System bodies are illuminated by the Sun, their brightness varies as a function of illumination conditions, described by the phase angle. This relationship is referred to as the phase curve. The absolute magnitude is the brightness at phase angle zero, an arrangement known as opposition, from a distance of one AU. Apparent magnitude The absolute magnitude H can be used to calculate the apparent magnitude m of a body. For an object reflecting sunlight, H and m are connected by the relation m = H + 5 log10(dBS dBO / d0^2) − 2.5 log10(q(α)), where α is the phase angle, the angle between the body–Sun and body–observer lines, and q(α) is the phase integral (the integration of reflected light; a number in the 0 to 1 range). 
By the law of cosines, we have: cos α = (dBO^2 + dBS^2 − dOS^2) / (2 dBO dBS). Distances: dBO is the distance between the body and the observer, dBS is the distance between the body and the Sun, dOS is the distance between the observer and the Sun, and d0, a unit conversion factor, is the constant 1 AU, the average distance between the Earth and the Sun. Approximations for phase integral The value of q(α) depends on the properties of the reflecting surface, in particular on its roughness. In practice, different approximations are used based on the known or assumed properties of the surface. The surfaces of terrestrial planets are generally more difficult to model than those of gaseous planets, the latter of which have smoother visible surfaces. Planets as diffuse spheres Planetary bodies can be approximated reasonably well as ideal diffuse reflecting spheres. Let α be the phase angle in degrees; then q(α) = (2/3)((1 − α/180°) cos α + (1/π) sin α). A full-phase diffuse sphere reflects two-thirds as much light as a diffuse flat disk of the same diameter. A quarter phase (α = 90°) has 1/π as much light as full phase (α = 0°). By contrast, a diffuse disk reflector model is simpler but isn't realistic; it does, however, represent the opposition surge for rough surfaces that reflect more uniform light back at low phase angles. The definition of the geometric albedo p, a measure for the reflectivity of planetary surfaces, is based on the diffuse disk reflector model. The absolute magnitude H, diameter D (in kilometers) and geometric albedo p of a body are related by D = (1329 / √p) × 10^(−0.2 H) km, or equivalently, H = 5 log10(1329 / (D √p)). Example: The Moon's absolute magnitude H can be calculated from its diameter and geometric albedo. At quarter phase, the diffuse reflector model yields an apparent magnitude somewhat brighter than the value actually observed. This is not a good approximation, because the phase curve of the Moon is too complicated for the diffuse reflector model. A more accurate formula is given in the following section. More advanced models Because Solar System bodies are never perfect diffuse reflectors, astronomers use different models to predict apparent magnitudes based on known or assumed properties of the body. For planets, approximations for the correction term −2.5 log10(q(α)) in the formula for m have been derived empirically, to match observations at different phase angles. The approximations recommended by the Astronomical Almanac express this correction as a function of the phase angle α (in degrees), with a separate expression for each planet. One of the quantities involved is the effective inclination of Saturn's rings (their tilt relative to the observer), which as seen from Earth varies between 0° and 27° over the course of one Saturn orbit; another is a small correction term depending on Uranus' sub-Earth and sub-solar latitudes; another is the Common Era year. Neptune's absolute magnitude is changing slowly due to seasonal effects as the planet moves along its 165-year orbit around the Sun, and the approximation above is only valid after the year 2000. For some circumstances, such as certain phase angles of Venus, no observations are available, and the phase curve is unknown in those cases. The formula for the Moon is only applicable to the near side of the Moon, the portion that is visible from the Earth. Example 1: On 1 January 2019, Venus was observed at a phase angle near quarter phase. Under full-phase conditions it would have appeared considerably brighter; accounting for the high phase angle, the correction term above yields an apparent magnitude close to the value predicted by the Jet Propulsion Laboratory. Example 2: At first quarter phase, the approximation for the Moon gives a correction term which, combined with the Moon's distances from the Sun and the observer, yields an apparent magnitude close to the expected value. 
At last quarter, the Moon is about 0.06 mag fainter than at first quarter, because that part of its surface has a lower albedo. Earth's albedo varies by a factor of 6, from 0.12 in the cloud-free case to 0.76 in the case of altostratus cloud. The absolute magnitude in the table corresponds to an albedo of 0.434. Due to the variability of the weather, Earth's apparent magnitude cannot be predicted as accurately as that of most other planets. Asteroids If an object has an atmosphere, it reflects light more or less isotropically in all directions, and its brightness can be modelled as a diffuse reflector. Bodies with no atmosphere, like asteroids or moons, tend to reflect light more strongly in the direction of the incident light, and their brightness increases rapidly as the phase angle approaches 0°. This rapid brightening near opposition is called the opposition effect. Its strength depends on the physical properties of the body's surface, and hence it differs from asteroid to asteroid. In 1985, the IAU adopted the semi-empirical H–G system, based on two parameters, H and G, called absolute magnitude and slope, to model the opposition effect for the ephemerides published by the Minor Planet Center. In this system the phase integral is q(α) = (1 − G)Φ1(α) + G Φ2(α), where Φ1 and Φ2 are fixed empirical phase functions (commonly written as Φi(α) = exp(−Ai tan^(Bi)(α/2)) with tabulated constants). This relation is valid only over a limited range of phase angles and works best at small phase angles. The slope parameter G relates to the surge in brightness when the object is near opposition. It is known accurately only for a small number of asteroids, hence for most asteroids a value of G = 0.15 is assumed. In rare cases, G can be negative. An example is 101955 Bennu. In 2012, the H–G system was officially replaced by an improved system with three parameters, H, G1 and G2, which produces more satisfactory results if the opposition effect is very small or restricted to very small phase angles. However, as of 2022, this H–G1–G2 system has not been adopted by either the Minor Planet Center or the Jet Propulsion Laboratory. The apparent magnitude of asteroids varies as they rotate, on time scales of seconds to weeks depending on their rotation period, sometimes by substantial amounts. In addition, their absolute magnitude can vary with the viewing direction, depending on their axial tilt. In many cases, neither the rotation period nor the axial tilt is known, limiting the predictability. The models presented here do not capture those effects. Cometary magnitudes The brightness of comets is given separately as total magnitude (m1, the brightness integrated over the entire visible extent of the coma) and nuclear magnitude (m2, the brightness of the core region alone). Both are different scales than the magnitude scale used for planets and asteroids, and cannot be used for a size comparison with an asteroid's absolute magnitude H. The activity of comets varies with their distance from the Sun. Their brightness can be approximated by a relation in which m1 and m2 are the total and nuclear apparent magnitudes of the comet, respectively, M1 and M2 are its "absolute" total and nuclear magnitudes, dBS and dBO are the body–Sun and body–observer distances, d0 is the astronomical unit, and K1 and K2 are the slope parameters characterising the comet's activity. For a particular value of the slope parameter, this reduces to the formula for a purely reflecting body (showing no cometary activity). For example, the lightcurve of comet C/2011 L4 (PANSTARRS) can be approximated in this way. On the day of its perihelion passage, 10 March 2013, comet PANSTARRS was at its minimum distance from the Sun, and the total apparent magnitude predicted for that time is close to the value given by the Minor Planet Center. 
The absolute magnitude of any given comet can vary dramatically. It can change as the comet becomes more or less active over time or if it undergoes an outburst. This makes it difficult to use the absolute magnitude for a size estimate. When comet 289P/Blanpain was discovered in 1819, its absolute magnitude was estimated as . It was subsequently lost and was only rediscovered in 2003. At that time, its absolute magnitude had decreased to , and it was realised that the 1819 apparition coincided with an outburst. 289P/Blanpain reached naked-eye brightness (5–8 mag) in 1819, even though it is the comet with the smallest nucleus that has ever been physically characterised, and it usually does not become brighter than 18 mag. For some comets that have been observed at heliocentric distances large enough to distinguish between light reflected from the coma and light from the nucleus itself, an absolute magnitude analogous to that used for asteroids has been calculated, making it possible to estimate the sizes of their nuclei. Meteors For a meteor, the standard distance for measurement of magnitudes is at an altitude of at the observer's zenith.
Physical sciences
Basics
Astronomy
1979
https://en.wikipedia.org/wiki/Alpha%20Centauri
Alpha Centauri
Alpha Centauri (, α Cen, or Alpha Cen) is a triple star system in the southern constellation of Centaurus. It consists of three stars: Rigil Kentaurus (), Toliman (), and Proxima Centauri (). Proxima Centauri is the closest star to the Sun at 4.2465 light-years (1.3020 pc). Alpha Centauri A and B are Sun-like stars (class G and K, respectively) that together form the binary star system Alpha Centauri AB. To the naked eye, these two main components appear to be a single star with an apparent magnitude of −0.27. It is the brightest star in the constellation and the third-brightest in the night sky, outshone by only Sirius and Canopus. Alpha Centauri A (Rigil Kentaurus) has 1.1 times the mass and 1.5 times the luminosity of the Sun, while Alpha Centauri B (Toliman) is smaller and cooler, at 0.9 solar masses and less than 0.5 solar luminosities. The pair orbit around a common centre with an orbital period of 79 years. Their elliptical orbit is eccentric, so that the distance between A and B varies from 35.6 astronomical units (AU), or about the distance between Pluto and the Sun, to or about the distance between Saturn and the Sun. One astronomical unit is the distance from Earth to the Sun, 150 million kilometers. Proxima Centauri, or Alpha Centauri C, is a small, faint red dwarf (class M). Though not visible to the naked eye, Proxima Centauri is the closest star to the Sun at a distance of , slightly closer than . Currently, the distance between Proxima Centauri and Alpha Centauri AB is about , equivalent to about 430 times the radius of Neptune's orbit. Proxima Centauri has one confirmed planet: Proxima b, an Earth-sized planet in the habitable zone (though it is unlikely to be habitable); one candidate planet, Proxima d, a sub-Earth which orbits very close to the star; and the controversial Proxima c, a mini-Neptune astronomical units away. Alpha Centauri A may have a Neptune-sized planet in the habitable zone, though it is not yet known with certainty to be planetary in nature and could be an artifact of the discovery mechanism. Alpha Centauri B has no known planets: a planet purportedly discovered in 2012 was later disproven, and no other planet has yet been confirmed. Etymology and nomenclature α Centauri (Latinised to Alpha Centauri) is the system's designation given by J. Bayer in 1603. It belongs to the constellation Centaurus, named after the half-human, half-horse creature in Greek mythology. The centaur was accidentally wounded by Heracles and was placed in the sky after his death. Alpha Centauri marks the right front hoof of the Centaur. The common name Rigil Kentaurus is a Latinisation of the Arabic translation Rijl al-Qinṭūrus, meaning "the Foot of the Centaur". Qinṭūrus is the Arabic transliteration of the Greek (Kentaurus). The name is frequently abbreviated to Rigil Kent () or even Rigil, though the latter name is better known for Rigel ( Orionis). An alternative name found in European sources, Toliman, is an approximation of the Arabic aẓ-Ẓalīmān (in older transcription, aṭ-Ṭhalīmān), meaning 'the (two male) Ostriches', an appellation Zakariya al-Qazwini had applied to the pair of stars Lambda and Mu Sagittarii; it was often not clear on old star maps which name was intended to go with which star (or stars), and the referents changed over time. The name Toliman originates with Jacobus Golius' 1669 edition of Al-Farghani's Compendium. Tolimân is Golius' Latinisation of the Arabic name "the ostriches", the name of an asterism of which Alpha Centauri formed the main star. Proxima Centauri was discovered in 1915 by Robert T. A. Innes, who suggested that it be named Proxima Centaurus. 
The name Proxima Centauri later became more widely used and is now listed by the International Astronomical Union (IAU) as the approved proper name; it is frequently abbreviated to Proxima. In 2016, the Working Group on Star Names of the IAU, having decided to attribute proper names to individual component stars rather than to multiple systems, approved the name Rigil Kentaurus () as being restricted to Alpha Centauri A and the name Proxima Centauri () for Alpha Centauri C. On 10 August 2018, the IAU approved the name Toliman () for Alpha Centauri B. Other names During the 19th century, the northern amateur popularist E.H. Burritt used the now-obscure name Bungula (). Its origin is not known, but it may have been coined from the Greek letter beta () and Latin 'hoof', originally for Beta Centauri (the other hoof). In Chinese astronomy, Nán Mén, meaning Southern Gate, refers to an asterism consisting of Alpha Centauri and Epsilon Centauri. Consequently, the Chinese name for Alpha Centauri itself is Nán Mén Èr, the Second Star of the Southern Gate. To the Indigenous Boorong people of northwestern Victoria in Australia, Alpha Centauri and Beta Centauri are Bermbermgle, two brothers noted for their courage and destructiveness, who speared and killed Tchingal "The Emu" (the Coalsack Nebula). The form in Wotjobaluk is Bram-bram-bult. Observation To the naked eye, Alpha Centauri AB appears to be a single star, the brightest in the southern constellation of Centaurus. The pair's apparent angular separation varies over about 80 years between 2 and 22 arcseconds (the naked eye has a resolution of 60 arcsec), but through much of the orbit, both are easily resolved in binoculars or small telescopes. At an apparent magnitude of −0.27 (the combined magnitude of A and B), Alpha Centauri is a first-magnitude star and is fainter only than Sirius and Canopus. It is the outer star of The Pointers or The Southern Pointers, so called because the line through it and Beta Centauri (Hadar/Agena), some 4.5° west, points to the constellation Crux, the Southern Cross. The Pointers easily distinguish the true Southern Cross from the fainter asterism known as the False Cross. South of about 29° south latitude, Alpha Centauri is circumpolar and never sets below the horizon. North of about 29° N latitude, Alpha Centauri never rises. Alpha Centauri lies close to the southern horizon when viewed from latitudes between 29° north and the equator (close to Hermosillo and Chihuahua City in Mexico; Galveston, Texas; Ocala, Florida; and Lanzarote, the Canary Islands of Spain), but only for a short time around its culmination. The star culminates each year at local midnight on 24 April and at local 9 p.m. on 8 June. As seen from Earth, Proxima Centauri lies 2.2° southwest of Alpha Centauri AB, a separation of about four times the angular diameter of the Moon. Proxima Centauri appears as a deep-red star of a typical apparent magnitude of 11.1 in a sparsely populated star field, requiring moderately sized telescopes to be seen. Listed as V645 Cen in the General Catalogue of Variable Stars, version 4.2, this UV Ceti star or "flare star" can unexpectedly brighten rapidly by as much as 0.6 magnitude at visual wavelengths, then fade after only a few minutes. Some amateur and professional astronomers regularly monitor the star for outbursts using either optical or radio telescopes. In August 2015, the largest recorded flares of the star occurred, with the star becoming 8.3 times brighter than normal on 13 August, in the B band (blue light region). 
Alpha Centauri may be inside the G-cloud of the Local Bubble, and its nearest known system is the binary brown dwarf system Luhman 16, at from it. Observational history Alpha Centauri is listed in the 2nd century the star catalog appended to Ptolemy's Almagest. He gave its ecliptic coordinates, but texts differ as to whether the ecliptic latitude reads or . (Presently the ecliptic latitude is , but it has decreased by a fraction of a degree since Ptolemy's time due to proper motion.) In Ptolemy's time, Alpha Centauri was visible from Alexandria, Egypt, at but, due to precession, its declination is now , and it can no longer be seen at that latitude. English explorer Robert Hues brought Alpha Centauri to the attention of European observers in his 1592 work Tractatus de Globis, along with Canopus and Achernar, noting: The binary nature of Alpha Centauri AB was recognized in December 1689 by Jean Richaud, while observing a passing comet from his station in Puducherry. Alpha Centauri was only the third binary star to be discovered, preceded by Mizar AB and Acrux. The large proper motion of Alpha Centauri AB was discovered by Manuel John Johnson, observing from Saint Helena, who informed Thomas Henderson at the Royal Observatory, Cape of Good Hope of it. The parallax of Alpha Centauri was subsequently determined by Henderson from many exacting positional observations of the AB system between April 1832 and May 1833. He withheld his results, however, because he suspected they were too large to be true, but eventually published them in 1839 after Bessel released his own accurately determined parallax for in 1838. For this reason, Alpha Centauri is sometimes considered as the second star to have its distance measured because Henderson's work was not fully acknowledged at first. (The distance of Alpha Centauri from the Earth is now reckoned at 4.396 light-years or .) Later, John Herschel made the first micrometrical observations in 1834. Since the early 20th century, measures have been made with photographic plates. By 1926, William Stephen Finsen calculated the approximate orbit elements close to those now accepted for this system. All future positions are now sufficiently accurate for visual observers to determine the relative places of the stars from a binary star ephemeris. Others, like D. Pourbaix (2002), have regularly refined the precision of new published orbital elements. Robert T. A. Innes discovered Proxima Centauri in 1915 by blinking photographic plates taken at different times during a proper motion survey. These showed large proper motion and parallax similar in both size and direction to those of which suggested that Proxima Centauri is part of the system and slightly closer to Earth than . As such, Innes concluded that Proxima Centauri was the closest star to Earth yet discovered. Kinematics All components of display significant proper motion against the background sky. Over centuries, this causes their apparent positions to slowly change. Proper motion was unknown to ancient astronomers. Most assumed that the stars were permanently fixed on the celestial sphere, as stated in the works of the philosopher Aristotle. In 1718, Edmond Halley found that some stars had significantly moved from their ancient astrometric positions. In the 1830s, Thomas Henderson discovered the true distance to by analysing his many astrometric mural circle observations. He then realised this system also likely had a high proper motion. 
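Henderson's measurement illustrates how a parallax converts directly to a distance: in the small-angle approximation, the distance in parsecs is simply the reciprocal of the parallax in arcseconds. A short sketch (the 768 mas parallax used here is approximately the modern value for Proxima Centauri and reproduces the distance quoted in the lead):

```python
LY_PER_PARSEC = 3.26156  # light-years per parsec

def parallax_to_distance(parallax_mas):
    """Distance from annual parallax: d[pc] = 1 / p[arcsec]."""
    parsecs = 1000.0 / parallax_mas
    return parsecs, parsecs * LY_PER_PARSEC

pc, ly = parallax_to_distance(768.07)    # approximate modern parallax of Proxima Centauri
print(f"{pc:.4f} pc = {ly:.3f} ly")       # about 1.3020 pc, or roughly 4.246 ly
```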
In this case, the apparent stellar motion was found using Nicolas Louis de Lacaille's astrometric observations of 1751–1752, by the observed differences between the two measured positions in different epochs. Calculated proper motion of the centre of mass for is about 3620 mas/y (milliarcseconds per year) toward the west and 694 mas/y toward the north, giving an overall motion of 3686 mas/y in a direction 11° north of west. The motion of the centre of mass is about 6.1 arcmin each century, or 1.02° each millennium. The speed in the western direction is and in the northerly direction . Using spectroscopy the mean radial velocity has been determined to be around towards the Solar System. This gives a speed with respect to the Sun of , very close to the peak in the distribution of speeds of nearby stars. Since is almost exactly in the plane of the Milky Way as viewed from Earth, many stars appear behind it. In early May 2028, will pass between the Earth and a distant red star, when there is a 45% probability that an Einstein ring will be observed. Other conjunctions will also occur in the coming decades, allowing accurate measurement of proper motions and possibly giving information on planets. Predicted future changes Based on the system's common proper motion and radial velocities, will continue to change its position in the sky significantly and will gradually brighten. For example, in about 6,200 CE, α Centauri's true motion will cause an extremely rare first-magnitude stellar conjunction with Beta Centauri, forming a brilliant optical double star in the southern sky. It will then pass just north of the Southern Cross or Crux, before moving northwest and up towards the present celestial equator and away from the galactic plane. By about 26,700 CE, in the present-day constellation of Hydra, will reach perihelion at away, though later calculations suggest that this will occur in 27,000 AD. At its nearest approach, α Centauri will attain a maximum apparent magnitude of −0.86, comparable to present-day magnitude of Canopus, but it will still not surpass that of Sirius, which will brighten incrementally over the next 60,000 years, and will continue to be the brightest star as seen from Earth (other than the Sun) for the next 210,000 years. Stellar system Alpha Centauri is a triple star system, with its two main stars, A and B, together comprising a binary component. The AB designation, or older A×B, denotes the mass centre of a main binary system relative to companion star(s) in a multiple star system. AB-C refers to the component of Proxima Centauri in relation to the central binary, being the distance between the centre of mass and the outlying companion. Because the distance between Proxima (C) and either of Alpha Centauri A or B is similar, the AB binary system is sometimes treated as a single gravitational object. Orbital properties The A and B components of Alpha Centauri have an orbital period of 79.762 years. Their orbit is moderately eccentric, as it has an eccentricity of almost 0.52; their closest approach or periastron is , or about the distance between the Sun and Saturn; and their furthest separation or apastron is , about the distance between the Sun and Pluto. The most recent periastron was in August 1955 and the next will occur in May 2035; the most recent apastron was in May 1995 and will next occur in 2075. Viewed from Earth, the apparent orbit of A and B means that their separation and position angle (PA) are in continuous change throughout their projected orbit. 
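The total proper motion and its direction quoted earlier in this section follow from the westward and northward components by simple vector addition; a short Python check of those figures:

```python
import math

mu_west, mu_north = 3620.0, 694.0                       # mas per year, values quoted above
total = math.hypot(mu_west, mu_north)                   # about 3686 mas/yr
angle = math.degrees(math.atan2(mu_north, mu_west))     # about 11 degrees north of west

# Accumulated drift on the sky over a century and a millennium.
per_century_arcmin = total * 100 / 1000 / 60            # about 6.1 arcmin
per_millennium_deg = total * 1000 / 1000 / 3600         # about 1.02 degrees
print(round(total), round(angle, 1), round(per_century_arcmin, 1), round(per_millennium_deg, 2))
```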
Observed stellar positions in 2019 are separated by 4.92 arcsec through the PA of 337.1°, increasing to 5.49 arcsec through 345.3° in 2020. The closest recent approach was in February 2016, at 4.0 arcsec through the PA of 300°. The observed maximum separation of these stars is about 22 arcsec, while the minimum distance is 1.7 arcsec. The widest separation occurred during February 1976, and the next will be in January 2056. Alpha Centauri C is about from Alpha Centauri AB, equivalent to about 5% of the distance between Alpha Centauri AB and the Sun. Until 2017, measurements of its small speed and its trajectory were of too little accuracy and duration in years to determine whether it is bound to Alpha Centauri AB or unrelated. Radial velocity measurements made in 2017 were precise enough to show that Proxima Centauri and Alpha Centauri AB are gravitationally bound. The orbital period of Proxima Centauri is approximately years, with an eccentricity of 0.5, much more eccentric than Mercury's. Proxima Centauri comes within of AB at periastron, and its apastron occurs at . Physical properties Asteroseismic studies, chromospheric activity, and stellar rotation (gyrochronology) are all consistent with the Alpha Centauri system being similar in age to, or slightly older than, the Sun. Asteroseismic analyses that incorporate tight observational constraints on the stellar parameters for the Alpha Centauri stars have yielded age estimates of Gyr, Gyr, 6.4 Gyr, and Gyr. Age estimates for the stars based on chromospheric activity (Calcium H & K emission) yield whereas gyrochronology yields Gyr. Stellar evolution theory implies both stars are slightly older than the Sun at 5 to 6 billion years, as derived by their mass and spectral characteristics. From the orbital elements, the total mass of Alpha Centauri AB is about – or twice that of the Sun. The average individual stellar masses are about and , respectively, though slightly different masses have also been quoted in recent years, such as and , totaling . Alpha Centauri A and B have absolute magnitudes of +4.38 and +5.71, respectively. Alpha Centauri AB System Alpha Centauri A Alpha Centauri A, also known as Rigil Kentaurus, is the principal member, or primary, of the binary system. It is a solar-like main-sequence star with a similar yellowish colour, whose stellar classification is spectral type G2-V; it is about 10% more massive than the Sun, with a radius about 22% larger. When considered among the individual brightest stars in the night sky, it is the fourth-brightest at an apparent magnitude of +0.01, being slightly fainter than Arcturus at an apparent magnitude of −0.05. The type of magnetic activity on Alpha Centauri A is comparable to that of the Sun, showing coronal variability due to star spots, as modulated by the rotation of the star. However, since 2005 the activity level has fallen into a deep minimum that might be similar to the Sun's historical Maunder Minimum. Alternatively, it may have a very long stellar activity cycle and is slowly recovering from a minimum phase. Alpha Centauri B Alpha Centauri B, also known as Toliman, is the secondary star of the binary system. It is a main-sequence star of spectral type K1-V, making it more an orange colour than Alpha Centauri A; it has around 90% of the mass of the Sun and a 14% smaller diameter. Although it has a lower luminosity than A, Alpha Centauri B emits more energy in the X-ray band. Its light curve varies on a short time scale, and there has been at least one observed flare. 
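Given the orbital period and total mass quoted above, the size of the AB orbit follows from Kepler's third law in solar units; a short sketch, using the approximate period, mass and eccentricity given in this section:

```python
def semi_major_axis_au(period_years, total_mass_msun):
    """Kepler's third law in solar units: a^3 = M_total * P^2 (AU, years, solar masses)."""
    return (total_mass_msun * period_years ** 2) ** (1 / 3)

# Values quoted above: P ~ 79.762 yr, M_A + M_B ~ 2 solar masses, e ~ 0.52.
a = semi_major_axis_au(79.762, 2.0)
e = 0.52
print(f"a ~ {a:.1f} AU, periastron ~ {a * (1 - e):.1f} AU, apastron ~ {a * (1 + e):.1f} AU")
# roughly 23 AU, 11 AU (about Saturn's distance) and 36 AU (about Pluto's distance)
```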
It is more magnetically active than Alpha Centauri A, showing a cycle of compared to 11 years for the Sun, and has about half the minimum-to-peak variation in coronal luminosity of the Sun. Alpha Centauri B has an apparent magnitude of +1.35, slightly dimmer than Mimosa. Alpha Centauri C (Proxima Centauri) Alpha Centauri C, better known as Proxima Centauri, is a small main-sequence red dwarf of spectral class M6-Ve. It has an absolute magnitude of +15.60, over 20,000 times fainter than the Sun. Its mass is calculated to be . It is the closest star to the Sun but is too faint to be visible to the naked eye. Planetary system The Alpha Centauri system as a whole has two confirmed planets, both of them around Proxima Centauri. While other planets have been claimed to exist around all of the stars, none of the discoveries have been confirmed. Planets of Proxima Centauri Proxima Centauri b is a terrestrial planet discovered in 2016 by astronomers at the European Southern Observatory (ESO). It has an estimated minimum mass of 1.17 (Earth masses) and orbits approximately 0.049 AU from Proxima Centauri, placing it in the star's habitable zone. The discovery of Proxima Centauri c was formally published in 2020 and could be a super-Earth or mini-Neptune. It has a mass of roughly 7 and orbits about from Proxima Centauri with a period of . In June 2020, a possible direct imaging detection of the planet hinted at the presence of a large ring system. However, a 2022 study disputed the existence of this planet. A 2020 paper refining Proxima b's mass excludes the presence of extra companions with masses above at periods shorter than 50 days, but the authors detected a radial-velocity curve with a periodicity of 5.15 days, suggesting the presence of a planet with a mass of about . This planet, Proxima Centauri d, was detected in 2022. Planets of Alpha Centauri A In 2021, a candidate planet named Candidate 1 (or C1) was detected around Alpha Centauri A, thought to orbit at approximately with a period of about one year, and to have a mass between that of Neptune and one-half that of Saturn, though it may be a dust disk or an artifact. The possibility of C1 being a background star has been ruled out. If this candidate is confirmed, the temporary name C1 will most likely be replaced with the scientific designation Alpha Centauri Ab in accordance with current naming conventions. GO Cycle 1 observations are planned for the James Webb Space Telescope (JWST) to search for planets around Alpha Centauri A, as well as observations of Epsilon Muscae. The coronographic observations, which occurred on July 26 and 27, 2023, were failures, though there are follow-up observations in March 2024. Pre-launch estimates predicted that JWST will be able to find planets with a radius of 5 at . Multiple observations every 3–6 months could push the limit down to 3 . Post-launch estimates based on observations of HIP 65426 b find that JWST will be able to find planets even closer to Alpha Centauri A and could find a 5 planet at . Candidate 1 has an estimated radius between and orbits at . It is therefore likely within the reach of JWST observations. Planets of Alpha Centauri B The first claim of a planet around Alpha Centauri B was that of Alpha Centauri Bb in 2012, which was proposed to be an Earth-mass planet in a 3.2-day orbit. This was refuted in 2015 when the apparent planet was shown to be an artifact of the way the radial velocity data was processed. 
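The luminosity comparisons in this and the preceding paragraphs follow from the absolute magnitudes via the standard 2.5 log10 relation; a short sketch, assuming a solar absolute visual magnitude of about +4.83:

```python
M_V_SUN = 4.83   # assumed absolute visual magnitude of the Sun

def luminosity_ratio(M_star, M_ref=M_V_SUN):
    """Brightness ratio implied by a difference in absolute magnitude."""
    return 10 ** (0.4 * (M_ref - M_star))

for name, M in [("Alpha Cen A", 4.38), ("Alpha Cen B", 5.71), ("Proxima", 15.60)]:
    print(f"{name}: {luminosity_ratio(M):.3g} of the Sun (visual)")
# A ~ 1.5x, B ~ 0.45x, Proxima ~ 1/20,000 of the Sun in visual light
```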
A search for transits of planet Bb was conducted with the Hubble Space Telescope from 2013 to 2014. This search detected one potential transit-like event, which could be associated with a different planet with a radius around . This planet would most likely orbit Alpha Centauri B with an orbital period of 20.4 days or less, with only a 5% chance of it having a longer orbit. The median of the likely orbits is 12.4 days. Its orbit would likely have an eccentricity of 0.24 or less. It could have lakes of molten lava and would be far too close to Alpha Centauri B to harbour life. If confirmed, this planet might be called . However, the name has not been used in the literature, as it is not a claimed discovery. Hypothetical planets Additional planets may exist in the Alpha Centauri system, either orbiting Alpha Centauri A or Alpha Centauri B individually, or in large orbits around Alpha Centauri AB. Because both stars are fairly similar to the Sun (for example, in age and metallicity), astronomers have been especially interested in making detailed searches for planets in the Alpha Centauri system. Several established planet-hunting teams have used various radial velocity or star transit methods in their searches around these two bright stars. All the observational studies have so far failed to find evidence for brown dwarfs or gas giants. In 2009, computer simulations showed that a planet might have been able to form near the inner edge of Alpha Centauri B's habitable zone, which extends from from the star. Certain special assumptions, such as considering that the Alpha Centauri pair may have initially formed with a wider separation and later moved closer to each other (as might be possible if they formed in a dense star cluster), would permit an accretion-friendly environment farther from the star. Bodies around Alpha Centauri A would be able to orbit at slightly farther distances due to its stronger gravity. In addition, the lack of any brown dwarfs or gas giants in close orbits around Alpha Centauri make the likelihood of terrestrial planets greater than otherwise. A theoretical study indicates that a radial velocity analysis might detect a hypothetical planet of in Alpha Centauri B's habitable zone. Radial velocity measurements of Alpha Centauri B made with the High Accuracy Radial Velocity Planet Searcher spectrograph were sufficiently sensitive to detect a planet within the habitable zone of the star (i.e. with an orbital period P = 200 days), but no planets were detected. Current estimates place the probability of finding an Earth-like planet around Alpha Centauri at roughly 75%. The observational thresholds for planet detection in the habitable zones by the radial velocity method are currently (2017) estimated to be about for Alpha Centauri A, for Alpha Centauri B, and for Proxima Centauri. Early computer-generated models of planetary formation predicted the existence of terrestrial planets around both Alpha Centauri A and B, but most recent numerical investigations have shown that the gravitational pull of the companion star renders the accretion of planets difficult. Despite these difficulties, given the similarities to the Sun in spectral types, star type, age and probable stability of the orbits, it has been suggested that this stellar system could hold one of the best possibilities for harbouring extraterrestrial life on a potential planet. 
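The radial-velocity thresholds discussed above can be put in context with the standard semi-amplitude approximation for a circular orbit; in the sketch below the 28.43 m/s normalisation is the usual constant for Jupiter-mass units, and the planet mass, orbital period and stellar mass are illustrative assumptions rather than values from any particular survey.

```python
def rv_semi_amplitude(period_years, planet_mjup, star_msun, sin_i=1.0, ecc=0.0):
    """Approximate stellar reflex velocity K in m/s for a low-mass companion."""
    return (28.4329 * period_years ** (-1 / 3)
            * planet_mjup * sin_i
            * star_msun ** (-2 / 3)
            / (1 - ecc ** 2) ** 0.5)

M_EARTH_IN_MJUP = 1 / 317.8
# Hypothetical Earth-mass planet in a 200-day orbit around a 0.9 solar-mass star like Alpha Centauri B.
print(f"{rv_semi_amplitude(200 / 365.25, M_EARTH_IN_MJUP, 0.9):.2f} m/s")   # roughly 0.1 m/s
```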
In the Solar System, it was once thought that Jupiter and Saturn were probably crucial in perturbing comets into the inner Solar System, providing the inner planets with a source of water and various other ices. However, since isotope measurements of the deuterium to hydrogen (D/H) ratio in comets Halley, Hyakutake, Hale–Bopp, 2002T7, and Tuttle yield values approximately twice that of Earth's oceanic water, more recent models and research predict that less than 10% of Earth's water was supplied by comets. In the system, Proxima Centauri may have influenced the planetary disk as the system was forming, enriching the area around Alpha Centauri with volatile materials. This would be discounted if, for example, happened to have gas giants orbiting (or vice versa), or if and B themselves were able to perturb comets into each other's inner systems, as Jupiter and Saturn presumably have done in the Solar System. Such icy bodies probably also reside in Oort clouds of other planetary systems. When they are influenced gravitationally by either the gas giants or disruptions by passing nearby stars, many of these icy bodies then travel star-wards. Such ideas also apply to the close approach of Alpha Centauri or other stars to the Solar System, when, in the distant future, the Oort Cloud might be disrupted enough to increase the number of active comets. To be in the habitable zone, a planet around Alpha Centauri A would have an orbital radius of between about 1.2 and so as to have similar planetary temperatures and conditions for liquid water to exist. For the slightly less luminous and cooler , the habitable zone is between about 0.7 and . With the goal of finding evidence of such planets, both Proxima Centauri and were among the listed "Tier-1" target stars for NASA's Space Interferometry Mission (S.I.M.). Detecting planets as small as three Earth-masses or smaller within two AU of a "Tier-1" target would have been possible with this new instrument. The S.I.M. mission, however, was cancelled due to financial issues in 2010. Circumstellar discs Based on observations between 2007 and 2012, a study found a slight excess of emissions in the 24 μm (mid/far-infrared) band surrounding , which may be interpreted as evidence for a sparse circumstellar disc or dense interplanetary dust. The total mass was estimated to be between to the mass of the Moon, or 10–100 times the mass of the Solar System's zodiacal cloud. If such a disc existed around both stars, A's disc would likely be stable to and B's disc would likely be stable to . This would put A's disc entirely within the frost line, and a small part of B's outer disc just outside. View from this system The sky from would appear much as it does from the Earth, except that Centaurus's brightest star, being itself, would be absent from the constellation. The Sun would appear as a white star of apparent magnitude +0.5, roughly the same as the average brightness of Betelgeuse from Earth. It would be at the antipodal point of current right ascension and declination, at (2000), in eastern Cassiopeia, easily outshining all the rest of the stars in the constellation. With the placement of the Sun east of the magnitude 3.4 star Epsilon Cassiopeiae, nearly in front of the Heart Nebula, the "W" line of stars of Cassiopeia would have a "/W" shape. The apparent placements of some other nearby stars would also change, in a few cases drastically. 
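The apparent magnitude of the Sun (and of other stars) as seen from the system follows from the distance modulus; a short sketch, assuming absolute visual magnitudes of about +4.83 for the Sun and +1.43 for Sirius:

```python
import math

LY_PER_PARSEC = 3.26156

def apparent_from_absolute(abs_mag, distance_ly):
    """Distance modulus: m = M + 5*log10(d_pc / 10)."""
    d_pc = distance_ly / LY_PER_PARSEC
    return abs_mag + 5 * math.log10(d_pc / 10)

print(f"Sun from Alpha Centauri:    {apparent_from_absolute(4.83, 4.37):+.1f}")  # about +0.5
print(f"Sirius from Alpha Centauri: {apparent_from_absolute(1.43, 9.2):+.1f}")   # about -1.3, close to the -1.2 quoted in the next paragraph
```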
Sirius, at 9.2 light-years away from the system, would still be the brightest star in the night sky, with a magnitude of −1.2, but would be located in Orion less than a degree away from Betelgeuse. Procyon, which would also lie at a slightly greater distance than it does from the Sun, would move to outshine Pollux in the middle of Gemini. A planet around either A or B would see the other star as a very bright secondary. For example, an Earth-like planet at from (with a revolution period of 1.34 years) would get Sun-like illumination from its primary, and would appear 5.7–8.6 magnitudes dimmer (−21.0 to −18.2), 190–2,700 times dimmer than but still 150–2,100 times brighter than the full Moon. Conversely, an Earth-like planet at from (with a revolution period of 0.63 years) would get nearly Sun-like illumination from its primary, and would appear 4.6–7.3 magnitudes dimmer (−22.1 to −19.4), 70 to 840 times dimmer than but still 470–5,700 times brighter than the full Moon. Proxima Centauri would appear dim as one of many stars, being magnitude 4.5 at its current distance, and magnitude 2.6 at periastron. Future exploration Alpha Centauri is a first target for crewed or robotic interstellar exploration. Using current spacecraft technologies, crossing the distance between the Sun and Alpha Centauri would take several millennia, though the possibility of nuclear pulse propulsion or laser light sail technology, as considered in the Breakthrough Starshot program, could make the journey to Alpha Centauri in about 20 years. An objective of such a mission would be to make a fly-by of, and possibly photograph, planets that might exist in the system. Proxima Centauri b, whose discovery was announced by the European Southern Observatory (ESO) in August 2016, would be a target for the Starshot program. NASA released a mission concept in 2017 that would send a spacecraft to Alpha Centauri in 2069, scheduled to coincide with the 100th anniversary of the first crewed lunar landing in 1969. Even at 10% of the speed of light (about 108 million km/h), which NASA experts say may be possible, it would take the spacecraft 44 years to reach the system, arriving around the year 2113, and another four years for a signal to reach Earth, around the year 2117. The concept received no further funding or development. 
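Both the brightness comparisons and the travel-time figure above are simple arithmetic: magnitude differences convert to flux ratios through the Pogson relation, and the cruise time is the distance divided by the speed. A short sketch (the 4.37-light-year distance to Alpha Centauri AB and a full-Moon magnitude of about -12.7 are assumed reference values):

```python
def flux_ratio(delta_mag):
    """Pogson relation: brightness ratio corresponding to a magnitude difference."""
    return 10 ** (0.4 * delta_mag)

# 5.7 to 8.6 magnitudes dimmer than the Sun -> roughly 190x to 2,700x fainter.
print(round(flux_ratio(5.7)), round(flux_ratio(8.6)))

# Compared with the full Moon (assumed apparent magnitude about -12.7),
# a companion star shining at -21.0 to -18.2 is roughly 2,100x to 150x brighter.
print(round(flux_ratio(21.0 - 12.7)), round(flux_ratio(18.2 - 12.7)))

# Cruise time to Alpha Centauri AB (assumed ~4.37 light-years) at 10% of light
# speed, plus the one-way light-travel time for a returning signal.
distance_ly, speed_c = 4.37, 0.10
print(f"{distance_ly / speed_c:.0f} years cruise + {distance_ly:.1f} years for the signal")
```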
Physical sciences
Notable stars
null
1980
https://en.wikipedia.org/wiki/Amiga
Amiga
Amiga is a family of personal computers produced by Commodore from 1985 until the company's bankruptcy in 1994, with production by others afterward. The original model is one of a number of mid-1980s computers with 16-bit or 16/32-bit processors, 256 KB or more of RAM, mouse-based GUIs, and significantly improved graphics and audio compared to previous 8-bit systems. These include the Atari ST (released earlier the same year), as well as the Macintosh and Acorn Archimedes. The Amiga differs from its contemporaries through custom hardware to accelerate graphics and sound, including sprites, a blitter, and four channels of sample-based audio. It runs a pre-emptive multitasking operating system called AmigaOS. The Amiga 1000, based on the Motorola 68000 microprocessor, was released in July 1985. Production problems kept it from becoming widely available until early 1986. While early advertisements cast the computer as an all-purpose business machine, especially with the Sidecar IBM PC compatibility add-on, the Amiga was most commercially successful as a home computer with a range of video games and creative software. The best-selling model, the Amiga 500, was introduced in 1987 along with the more expandable Amiga 2000. The 1990 Amiga 3000 includes a minor update to the graphics hardware via the Enhanced Chip Set, also used in subsequent models. The Amiga established a niche in audio and multimedia. The first music tracker was written for the Amiga, and it became a popular platform for music creation. The 3D rendering packages LightWave 3D, Imagine, and Traces (a predecessor to Blender) originated on the system. The 1990 third-party Video Toaster made the Amiga a comparatively low-cost option for video production. In later years, the Amiga started losing market share to IBM PC compatibles and video game consoles, eventually leading to Commodore's bankruptcy in 1994 and then the end of the Amiga. Commodore is estimated to have sold 4.85 million Amigas. Various groups have since released spiritual successors. Overview The Amiga 1000, based on the Motorola 68000 microprocessor, was released in July 1985. Production problems kept it from becoming widely available until early 1986. While early advertisements cast the computer as an all-purpose business machine, especially when outfitted with the Sidecar IBM PC compatibility add-on, the Amiga was most commercially successful as a home computer with a wide range of video games and creative software. The best-selling model, the Amiga 500, was introduced in 1987 along with the more expandable Amiga 2000. The 1990 Amiga 3000 included a minor update to the graphics hardware via the Amiga Enhanced Chip Set. The ECS was included in the Amiga 500 Plus (1991) and Amiga 600 (March 1992), followed by the Amiga 1200 and Amiga 4000. Poor marketing and the failure of later models to repeat the technological advances of the first systems resulted in Commodore quickly losing market share to the rapidly dropping prices of IBM PC compatibles (which gained 256-color graphics in 1987), as well as the fourth generation of video game consoles. Commodore went bankrupt in April 1994 after a version of the Amiga packaged as a game console, the CD32, failed in the marketplace. Escom of Germany, which acquired the Commodore properties, continued developing the Amiga line for just under two more years until it also went bankrupt. 
Since the demise of Commodore and Escom, various groups have marketed successors to the original Amiga line, including Eyetech, ACube Systems Srl and A-EON Technology who have produced AmigaOne computers since the 2000s. AmigaOS has influenced replacements, clones, and compatible software such as MorphOS and AROS. Currently Belgian company Hyperion Entertainment maintains and develops AmigaOS 4, which is an official and direct descendant of AmigaOS 3.1 – the last system made by Commodore for the original Amiga computers. History Concept and early development Jay Miner joined Atari, Inc. in the 1970s and led development of the Atari Video Computer System's graphics and sound chip, the Television Interface Adaptor. When complete, the team began developing a much more sophisticated set of chips, CTIA, ANTIC and POKEY, that formed the basis of the Atari 8-bit computers. With the 8-bit line's launch in 1979, the team once again started looking at a next generation chipset. Nolan Bushnell had sold the company to Warner Communications in 1978, and the new management was much more interested in the existing lines than development of new products that might cut into their sales. Miner wanted to start work with the new Motorola 68000, but management was only interested in another 6502 based system. Miner left the company, and, for a time, the industry. In 1979, Larry Kaplan left Atari and founded Activision. In 1982, Kaplan was approached by a number of investors who wanted to develop a new game platform. Kaplan hired Miner to run the hardware side of the newly formed company, "Hi-Toro". The system was code-named "Lorraine" in keeping with Miner's policy of giving systems female names, in this case the company president's wife, Lorraine Morse. When Kaplan left the company late in 1982, Miner was promoted to head engineer and the company relaunched as Amiga Corporation. The Amiga hardware was designed by Miner, RJ Mical, and Dale Luck. A breadboard prototype for testing and development was largely completed by late 1983, and shown at the January 1984 Consumer Electronics Show (CES). At the time, the operating system was not ready, so the machine was demonstrated with the "Boing Ball" demo, a real-time animation showing a red-and-white spinning ball bouncing and casting a shadow; this bouncing ball later became the official logo of Escom subsidiary Amiga Technologies. CES attendees had trouble believing the computer being demonstrated had the power to display such a demo and searched in vain for the "real" computer behind it. A further developed version of the system was demonstrated at the June 1984 CES and shown to many companies in hopes of garnering further funding, but found little interest in a market that was in the final stages of the video game crash of 1983. In March, Atari expressed a tepid interest in Lorraine for its potential use in a games console or home computer tentatively known as the . The talks were progressing slowly, and Amiga was running out of money. A temporary arrangement in June led to a $500,000 loan from Atari to Amiga to keep the company going. The terms required the loan to be repaid at the end of the month, otherwise Amiga would forfeit the Lorraine design to Atari. Commodore During 1983, Atari lost over a week due to the combined effects of the crash and the ongoing price war in the home computer market. By the end of the year, Warner was desperate to sell the company. 
In January 1984, Jack Tramiel resigned from Commodore due to internal battles over the future direction of the company. A number of Commodore employees followed him to his new company, Tramel Technology. This included a number of the senior technical staff, where they began development of a 68000-based machine of their own. In June, Tramiel arranged a no-cash deal to take over Atari, reforming Tramel Technology as Atari Corporation. As many Commodore technical staff had moved to Atari, Commodore was left with no workable path to design their own next-generation computer. The company approached Amiga offering to fund development as a home computer system. They quickly arranged to repay the Atari loan, ending that threat. The two companies were initially arranging a license agreement before Commodore offered to purchase Amiga outright. By late 1984, the prototype breadboard chipset had successfully been turned into integrated circuits, and the system hardware was being readied for production. At this time the operating system (OS) was not as ready, and led to a deal to port an OS known as TRIPOS to the platform. TRIPOS was a multitasking system that had been written in BCPL during the 1970s for the PDP-11 minicomputer, but later experimentally ported to the 68000. This early version was known as AmigaDOS and the GUI as Workbench. The BCPL parts were later rewritten in the C language, and the entire system became AmigaOS. The system was enclosed in a pizza box form factor case; a late change was the introduction of vertical supports on either side of the case to provide a "garage" under the main section of the system where the keyboard could be stored. Launch The first model was announced in 1985 as simply "The Amiga from Commodore", later to be retroactively dubbed the Amiga 1000. They were first offered for sale in August, but by October only 50 had been built, all of which were used by Commodore. Machines only began to arrive in quantity in mid-November, meaning they missed the Christmas buying rush. By the end of the year, they had sold 35,000 machines, and severe cashflow problems made the company pull out of the January 1986 CES. Bad or entirely missing marketing, forcing the development team to move to the east coast, notorious stability problems and other blunders limited sales in early 1986 to between 10,000 and 15,000 units a month. 120,000 units were reported as having been sold from the machine's launch up to the end of 1986. Later models In late 1985, Thomas Rattigan was promoted to COO of Commodore, and then to CEO in February 1986. He immediately implemented an ambitious plan that covered almost all of the company's operations. Among these was the long-overdue cancellation of the now outdated PET and VIC-20 lines, as well as a variety of poorly selling Commodore 64 offshoots and the Commodore 900 workstation effort. Another one of the changes was to split the Amiga into two products, a new high-end version of the Amiga aimed at the creative market, and a cost-reduced version that would take over for the Commodore 64 in the low-end market. These new designs were released in 1987 as the Amiga 2000 and Amiga 500, the latter of which went on to widespread success and became their best selling model. Similar high-end/low-end models would make up the Amiga line for the rest of its history; follow-on designs included the Amiga 3000/Amiga 500 Plus/Amiga 600, and the Amiga 4000/Amiga 1200. 
These models incorporated a series of technical upgrades known as the ECS and AGA, which added higher resolution displays among many other improvements and simplifications. The Amiga line sold an estimated 4,910,000 machines over its lifetime. The machines were most popular in the UK and Germany, with about 1.5 million sold in each country, and sales in the high hundreds of thousands in other European nations. The machine was less popular in North America, where an estimated 700,000 were sold. In the United States, the Amiga found a niche with enthusiasts and in vertical markets for video processing and editing. In Europe, it was more broadly popular as a home computer and often used for video games. Beginning in 1988 it overlapped with the 16-bit Mega Drive, then the Super Nintendo Entertainment System in the early 1990s. Commodore UK's Kelly Sumner did not see Sega or Nintendo as competitors, but instead credited their marketing campaigns which spent over or for promoting video games as a whole and thus helping to boost Amiga sales. Bankruptcy and aftermath In spite of his successes in making the company profitable and bringing the Amiga line to market, Rattigan was soon forced out in a power struggle with majority shareholder, Irving Gould. This is widely regarded as the turning point, as further improvements to the Amiga were eroded by rapid improvements in other platforms. Commodore shut down the Amiga division on April 26, 1994, and filed for bankruptcy three days later. Commodore's assets were purchased by Escom, a German PC manufacturer, who created the subsidiary company Amiga Technologies. They re-released the A1200 and A4000T, and introduced a new 68060 version of the A4000T. Amiga Technologies researched and developed the Amiga Walker prototype. They presented the machine publicly at CeBit, but Escom went bankrupt in 1996. Some Amigas were still made afterwards for the North American market by QuikPak, a small Pennsylvania-based firm who was the manufacturer of Amigas for Escom. After a reported sale to VisCorp fell through, a U.S. Wintel PC manufacturer, Gateway 2000, eventually purchased the Amiga branch and technology in 1997. QuickPak attempted but failed to license Amiga from Gateway and build new models. Gateway was then working on a brand new Amiga platform, likely encouraged by a desire to be independent of Microsoft and Intel. However this did not materialize and in 2000, Gateway sold the Amiga brand to Amiga, Inc., without having released any products. Amiga, Inc. licensed the rights to sell hardware using the AmigaOne brand to Eyetech Group and Hyperion Entertainment. In 2019, Amiga, Inc. sold its intellectual property to Amiga Corporation. Hardware The Amiga has a custom chipset consisting of several coprocessors which handle audio, video, and direct memory access independently of the central processing unit (CPU). This architecture gave the Amiga a performance edge over its competitors, particularly for graphics-intensive applications and games. The architecture uses two distinct bus subsystems: the chipset bus and the CPU bus. The chipset bus allows the coprocessors and CPU to address "Chip RAM". The CPU bus provides addressing to conventional RAM, ROM and the Zorro II or Zorro III expansion subsystems. This enables independent operation of the subsystems. The CPU bus can be much faster than the chipset bus. CPU expansion boards may provide additional custom buses. Additionally, "busboards" or "bridgeboards" may provide ISA or PCI buses. 
Central processing unit The most popular models from Commodore, including the Amiga 1000, Amiga 500, and Amiga 2000, use the Motorola 68000 as the CPU. From a developer's point of view, the 68000 provides a full suite of 32-bit operations, but the chip can address only 16 MB of physical memory and is implemented using a 16-bit arithmetic logic unit and has a 16-bit external data bus, so 32-bit computations are transparently handled as multiple 16-bit values at a performance cost. The later Amiga 2500 and the Amiga 3000 models use fully 32-bit, 68000-compatible processors from Motorola with improved performance and larger addressing capability. CPU upgrades were offered by both Commodore and third-party manufacturers. Most Amiga models can be upgraded either by direct CPU replacement or through expansion boards. Such boards often included faster and higher capacity memory interfaces and hard disk controllers. Towards the end of Commodore's time in charge of Amiga development, there were suggestions that Commodore intended to move away from the 68000 series to higher performance RISC processors, such as the PA-RISC. Those ideas were never developed before Commodore filed for bankruptcy. Despite this, third-party manufacturers designed upgrades featuring a combination of 68000 series and PowerPC processors along with a PowerPC native microkernel and software. Later Amiga clones featured PowerPC processors only. Custom chipset The custom chipset at the core of the Amiga design appeared in three distinct generations, with a large degree of backward-compatibility. The Original Chip Set (OCS) appeared with the launch of the A1000 in 1985. OCS was eventually followed by the modestly improved Enhanced Chip Set (ECS) in 1990 and finally by the partly 32-bit Advanced Graphics Architecture (AGA) in 1992. Each chipset consists of several coprocessors that handle graphics acceleration, digital audio, direct memory access and communication between various peripherals (e.g., CPU, memory and floppy disks). In addition, some models featured auxiliary custom chips that performed tasks such as SCSI control and display de-interlacing. Graphics All Amiga systems can display full-screen animated planar graphics with 2, 4, 8, 16, 32, 64 (EHB Mode), or 4096 colors (HAM Mode). Models with the AGA chipset (A1200 and A4000) also have non-EHB 64, 128, 256, and 262144 (HAM8 Mode) color modes and a palette expanded from 4096 to 16.8 million colors. The Amiga chipset can genlock, which is the ability to adjust its own screen refresh timing to match an incoming NTSC or PAL video signal. When combined with setting transparency, this allows an Amiga to overlay an external video source with graphics. This ability made the Amiga popular for many applications, and provides the ability to do character generation and CGI effects far more cheaply than earlier systems. This ability has been frequently utilized by wedding videographers, TV stations and their weather forecasting divisions (for weather graphics and radar), advertising channels, music video production, and desktop videographers. The NewTek Video Toaster was made possible by the genlock ability of the Amiga. In 1988, the release of the Amiga A2024 fixed-frequency monochrome monitor with built-in framebuffer and flicker fixer hardware provided the Amiga with a choice of high-resolution graphic modes (1024×800 for NTSC and 1024×1024 for PAL). 
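What "planar" means in practice is that each bitplane stores one bit of every pixel's colour index, so n planes give 2**n palette entries (five planes for 32 colours, six for EHB, eight under AGA). The helper below is a hypothetical illustration of assembling per-pixel indices from one row of bitplane bytes, not Amiga system code:

```python
def planar_to_chunky(bitplanes, width):
    """Combine Amiga-style bitplanes (one row of bytes per plane) into per-pixel
    palette indices: bitplane k contributes bit k of each pixel's colour index."""
    pixels = []
    for x in range(width):
        byte, bit = divmod(x, 8)
        index = 0
        for k, plane in enumerate(bitplanes):
            index |= ((plane[byte] >> (7 - bit)) & 1) << k
        pixels.append(index)
    return pixels

# Hypothetical 8-pixel row using 5 bitplanes (2**5 = 32 colours).
planes = [bytes([0b10110000]), bytes([0b01010000]),
          bytes([0b00110000]), bytes([0b00000000]), bytes([0b10000000])]
print(planar_to_chunky(planes, 8))   # [17, 2, 5, 7, 0, 0, 0, 0]
```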
ReTargetable Graphics ReTargetable Graphics is an API for device drivers mainly used by 3rd party graphics hardware to interface with AmigaOS via a set of libraries. The software libraries may include software tools to adjust resolution, screen colors, pointers and screenmodes. The standard Intuition interface is limited to display depths of 8 bits, while RTG makes it possible to handle higher depths like 24-bits. Sound The sound chip, named Paula, supports four PCM sound channels (two for the left speaker and two for the right) with 8-bit resolution for each channel and a 6-bit volume control per channel. The analog output is connected to a low-pass filter, which filters out high-frequency aliasing when the Amiga is using a lower sampling rate (see Nyquist frequency). The brightness of the Amiga's power LED is used to indicate the status of the Amiga's low-pass filter. The filter is active when the LED is at normal brightness, and deactivated when dimmed (or off on older A500 Amigas). On Amiga 1000 (and first Amiga 500 and Amiga 2000 model), the power LED had no relation to the filter's status, and a wire needed to be manually soldered between pins on the sound chip to disable the filter. Paula can read arbitrary waveforms at arbitrary rates and amplitudes directly from the system's RAM, using direct memory access (DMA), making sound playback without CPU intervention possible. Although the hardware is limited to four separate sound channels, software such as OctaMED uses software mixing to allow eight or more virtual channels, and it was possible for software to mix two hardware channels to achieve a single 14-bit resolution channel by playing with the volumes of the channels in such a way that one of the source channels contributes the most significant bits and the other the least. The quality of the Amiga's sound output, and the fact that sound hardware is part of the standard chipset and easily addressed by software, were standout features of Amiga hardware unavailable on IBM PC compatibles for years. Third-party sound cards exist that provide DSP functions, multi-track direct-to-disk recording, multiple hardware sound channels and 16-bit and beyond resolutions. A retargetable sound API called AHI was developed allowing these cards to be used transparently by the OS and software. Kickstart firmware Kickstart is the firmware upon which AmigaOS is bootstrapped. Its purpose is to initialize the Amiga hardware and core components of AmigaOS and then attempt to boot from a bootable volume, such as a floppy disk or hard disk drive. Most models (excluding the Amiga 1000) come equipped with Kickstart on an embedded ROM-chip. There are various editions of Kickstart ROMs starting with Kickstart v1.1 for the Amiga 1000, v1.2 and v1.3 for the A500, Kickstart v2.1 on A500+, Kickstart v2.2 for A600 and dual ROMs for Kickstart v3.0 and 3.1 for A1200 and A4000. After Commodore's demise there have been new Kickstart v3.1 ROMs made available for both the A500 and A600 Computers. Amiga Software is mostly backward compatible, but v2.1 ROMs and newer differ slightly, which can cause software glitches with earlier programs. To help address this and to get earlier programs to work with later Kickstart ROMs, some tools have been produced such as RELOKIK 1.4 and MAKE IT WORK! for the A600 and A1200. They revert the system to temporarily boot in Kickstart v1.3. 
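The 14-bit playback trick mentioned in the Sound section above relies on the per-channel volume acting as a scale factor in steps of 1/64: a second channel played at volume 1 fills in the bits below the least significant bit of the first channel, which plays the top eight bits at volume 64. A minimal sketch of the arithmetic (idealised, not driver code):

```python
def split_14bit(sample):
    """Split a signed 14-bit sample (-8192..8191) into the two 8-bit values the
    trick plays on paired channels: top 8 bits at volume 64, low 6 bits at volume 1."""
    high = sample >> 6            # arithmetic shift keeps the sign, range -128..127
    low = sample & 0x3F           # remaining 6 bits, range 0..63
    return high, low

def combined_output(high, low):
    """Idealised mix: sample value times volume, summed over the two channels."""
    return high * 64 + low * 1

for s in (-8192, -1, 0, 1, 5000, 8191):
    h, l = split_14bit(s)
    assert combined_output(h, l) == s   # the pair reproduces the original 14-bit value
print("14-bit reconstruction OK")
```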
Keyboard and mouse The keyboard on Amiga computers is similar to that found on a mid-80s IBM PC: Ten function keys, a numeric keypad, and four separate directional arrow keys. Caps Lock and Control share space to the left of A. Absent are Home, End, Page Up, and Page Down keys: These functions are accomplished on Amigas by pressing shift and the appropriate arrow key. The Amiga keyboard adds a Help key, which a function key usually acts as on PCs (usually F1). In addition to the Control and Alt modifier keys, the Amiga has 2 "Amiga" keys, rendered as "Open Amiga" and "Closed Amiga" similar to the Open/Closed Apple logo keys on Apple II keyboards. The left is used to manipulate the operating system (moving screens and the like) and the right delivers commands to the application. The absence of Num lock frees space for more mathematical symbols around the numeric pad. Like IBM-compatible computers, the mouse has two buttons, but in AmigaOS, pressing and holding the right button replaces the system status line at the top of the screen with a Maclike menu bar. As with Apple's Mac OS prior to Mac OS 8, menu options are selected by releasing the button over that option, not by left clicking. Menu items that have a Boolean toggle state can be left clicked whilst the menu is kept open with the right button, which allows the user – for example – to set some selected text to bold, underline and italics in one visit to the menus. The mouse plugs into one of two Atari joystick ports used for joysticks, game paddles, and graphics tablets. Although compatible with analog joysticks, Atari-style digital joysticks became standard. Unusually, two independent mice can be connected to the joystick ports; some games, such as Lemmings, were designed to take advantage of this. Other peripherals and expansions The Amiga was one of the first computers for which inexpensive sound sampling and video digitization accessories were available. As a result of this and the Amiga's audio and video capabilities, the Amiga became a popular system for editing and producing both music and video. Many expansion boards were produced for Amiga computers to improve the performance and capability of the hardware, such as memory expansions, SCSI controllers, CPU boards, and graphics boards. Other upgrades include genlocks, network cards for Ethernet, modems, sound cards and samplers, video digitizers, extra serial ports, and IDE controllers. Additions after the demise of Commodore company are USB cards. The most popular upgrades were memory, SCSI controllers and CPU accelerator cards. These were sometimes combined into one device. Early CPU accelerator cards used the full 32-bit CPUs of the 68000 family such as the Motorola 68020 and Motorola 68030, almost always with 32-bit memory and usually with FPUs and MMUs or the facility to add them. Later designs feature the Motorola 68040 or Motorola 68060. Both CPUs feature integrated FPUs and MMUs. Many CPU accelerator cards also had integrated SCSI controllers. Phase5 designed the PowerUP boards (Blizzard PPC and CyberStorm PPC) featuring both a 68k (a 68040 or 68060) and a PowerPC (603 or 604) CPU, which are able to run the two CPUs at the same time and share the system memory. The PowerPC CPU on PowerUP boards is usually used as a coprocessor for heavy computations; a powerful CPU is needed to run MAME for example, but even decoding JPEG pictures and MP3 audio was considered heavy computation at the time. 
It is also possible to ignore the 68k CPU and run Linux on the PPC via project Linux APUS, but a PowerPC-native AmigaOS promised by Amiga Technologies GmbH was not available when the PowerUP boards first appeared. 24-bit graphics cards and video cards were also available. Graphics cards were designed primarily for 2D artwork production, workstation use, and later, gaming. Video cards are designed for inputting and outputting video signals, and processing and manipulating video. In the North American market, the NewTek Video Toaster was a video effects board that turned the Amiga into an affordable video processing computer that found its way into many professional video environments. One well-known use was to create the special effects in early series of Babylon 5. Due to its NTSC-only design, it did not find a market in countries that used the PAL standard, such as in Europe. In those countries, the OpalVision card was popular, although less featured and supported than the Video Toaster. Low-cost time base correctors (TBC) specifically designed to work with the Toaster quickly came to market, most of which were designed as standard Amiga bus cards. Various manufacturers started producing PCI busboards for the A1200, A3000 and A4000, allowing standard Amiga computers to use PCI cards such as graphics cards, Sound Blaster sound cards, 10/100 Ethernet cards, USB cards, and television tuner cards. Other manufacturers produced hybrid boards that contained an Intel x86 series chip, allowing the Amiga to emulate a PC. PowerPC upgrades with Wide SCSI controllers, PCI busboards with Ethernet, sound and 3D graphics cards, and tower cases allowed the A1200 and A4000 to survive well into the late nineties. Expansion boards were made by Richmond Sound Design that allow their show control and sound design software to communicate with their custom hardware frames either by ribbon cable or fiber optic cable for long distances, allowing the Amiga to control up to eight million digitally controlled external audio, lighting, automation, relay and voltage control channels spread around a large theme park, for example. See Amiga software for more information on these applications. Other devices included the following: Amiga 501 with 512 KB RAM and real-time clock Trumpcard 500 Zorro-II SCSI interface GVP A530 Turbo, accelerator, RAM expansion, PC emulator A2091 / A590 SCSI hard disk controller + 2 MB RAM expansion A3070 SCSI tape backup unit with a capacity of , OEM Archive Viper 1/4-inch A2065 Ethernet Zorro-II interface – the first Ethernet interface for Amiga; uses the AMD Am7990 chip The same interface chip is used in DECstation as well. Ariadne Zorro-II Ethernet interface using the AMD Am7990 A4066 Zorro II Ethernet interface using the SMC 91C90QF X-Surf from Individual Computers using the Realtek 8019AS A2060 Arcnet A1010 floppy disk drive consisting of a 3.5-inch double density (DD), , drive unit connected via DB-23 connector; track-to-track delay is on the order of . The default capacity is . Many clone drives were available, and products such as the Catweasel and KryoFlux make it possible to read and write Amiga and other special disc formats on standard x86 PCs. NE2000-compatible PCMCIA Ethernet cards for Amiga 600 and Amiga 1200 Serial ports The Commodore A2232 board provides seven RS-232C serial ports in addition to the Amiga's built-in serial port. Each port can be driven independently at speeds of 50 to . 
There is, however, a driver available on Aminet that allows two of the serial ports to be driven at . The serial card used the 65CE02 CPU clocked at . This CPU was also part of the CSG 4510 CPU core that was used in the Commodore 65 computer. Networking Amiga has three networking interface APIs: AS225: the official Commodore TCP/IP stack API with hard-coded drivers in revision 1 (AS225r1) for the A2065 Ethernet and the A2060 Arcnet interfaces. In revision 2, (AS225r2) the SANA-II interface was used. SANA-II: a standardized API for hardware of network interfaces. It uses an inefficient buffer handling scheme, and lacks proper support for promiscuous and multicast modes. Miami Network Interface (MNI): an API that doesn't have the problems that SANA-II suffers from. It requires AmigaOS v2.04 or higher. Different network media were used: Models and variants The original Amiga models were produced from 1985 to 1996. They are, in order of production: 1000, 2000, 500, 1500, 2500, 3000, 3000UX, 3000T, CDTV, 500+, 600, 4000, 1200, CD32, and 4000T. The PowerPC-based AmigaOne computers were later marketed beginning in 2002. Several companies and private persons have also released Amiga clones and still do so today. Commodore Amiga The first Amiga model, the Amiga 1000, was launched in 1985. In 2006, PC World rated the Amiga 1000 as the seventh greatest PC of all time, stating "Years ahead of its time, the Amiga was the world's first multimedia, multitasking personal computer". Commodore updated the desktop line of Amiga computers with the Amiga 2000 in 1987, the Amiga 3000 in 1990, and the Amiga 4000 in 1992, each offering improved capabilities and expansion options. The best-selling models were the budget models, however, particularly the highly successful Amiga 500 (1987) and the Amiga 1200 (1992). The Amiga 500+ (1991) was the shortest-lived model, replacing the Amiga 500 and lasting only six months until it was phased out and replaced with the Amiga 600 (1992). The A600 was only intended as a temporary gap filler until the A1200 was available for sale. The A600 was actually designed as a portable system, hence the lack of numeric Keypad, and it was originally to be named Amiga 300. Some early A600 models have retained the original A300 logo printed on the mainboard. The Amiga 600 was quickly replaced by the Amiga 1200. The CDTV, launched in 1991, was a CD-ROM-based game console, Computer and multimedia appliance based on the Amiga A500 with the same v1.3 Kickstart ROM, several years before CD-ROM drives were common. The cost of CDTV media production and the CD-ROM drives at the time discouraged potential buyers and the system never achieved any real success. The CDTV was however one of the first ever CD-ROM-based machines that were mass produced. A CDTV legacy is the external A570 CD-ROM drive expansion for the A500 computer. Commodore's last Amiga offering before filing for bankruptcy was the Amiga CD32 (1993), a 32-bit CD-ROM games console produced until mid 1994. Although discontinued after Commodore's demise it met with moderate commercial success in Europe. The CD32 was a next-generation CDTV, and it was designed and released by Commodore before the Playstation. It was Commodore's last attempt to enter the ever growing video-game console market. 
Following the purchase of Commodore's assets by Escom in 1995, the A1200 and A4000T continued to be sold in small quantities until 1996, though the ground lost since the initial launch and the prohibitive expense of these units meant that the Amiga line never regained any real popularity. Several Amiga models contained references to songs by the rock band The B-52's. Early A500 units had the words "B52/ROCK LOBSTER" silk-screen printed onto their printed circuit board, a reference to the song "Rock Lobster". The Amiga 600 referenced "JUNE BUG" (after the song "Junebug"), the Amiga 1200 had "CHANNEL Z" (after "Channel Z"), and the CD32 had "Spellbound". AmigaOS 4 systems AmigaOS 4 is designed for PowerPC Amiga systems. It is mainly based on AmigaOS 3.1 source code, with some parts of version 3.9. It currently runs on Amigas equipped with CyberstormPPC or BlizzardPPC accelerator boards, on the Teron-series based AmigaOne computers built by Eyetech under license from Amiga, Inc., on the Pegasos II from Genesi/bPlan GmbH, on the ACube Systems Srl Sam440ep / Sam460ex / AmigaOne 500 systems, and on the A-EON AmigaOne X1000. AmigaOS 4.0 had been available only in developer pre-releases for several years until it was officially released in December 2006. Due to the nature of some provisions of the contract between Amiga Inc. and Hyperion Entertainment (the Belgian company that is developing the OS), the commercial AmigaOS 4 had been available only to licensed buyers of AmigaOne motherboards. AmigaOS 4.0 for Amigas equipped with PowerUP accelerator boards was released in November 2007. Version 4.1 was released in August 2008 for AmigaOne systems, and in May 2011 for Amigas equipped with PowerUP accelerator boards. The most recent release of AmigaOS for all supported platforms is 4.1 update 5. Starting with release 4.1 update 4, there is an Emulation drawer containing official AmigaOS 3.x ROMs (for all classic Amiga models, including the CD32) and the related Workbench files. ACube Systems entered an agreement with Hyperion under which it has ported AmigaOS 4 to its Sam440ep and Sam460ex line of PowerPC-based motherboards. In 2009 a version for the Pegasos II was released in co-operation with ACube Systems. In 2012, A-EON Technology Ltd manufactured and released the AmigaOne X1000 to consumers through their partner, Amiga Kit, which provided end-user support, assembly and worldwide distribution of the new system. Amiga hardware clones Long-time Amiga developer MacroSystem entered the Amiga-clone market with their DraCo non-linear video editing system. It appeared in two versions, initially a tower model and later a cube. DraCo expanded upon and combined a number of earlier expansion cards developed for the Amiga (VLabMotion, Toccata, WarpEngine, RetinaIII) into a true Amiga clone powered by the Motorola 68060 processor. The DraCo can run AmigaOS 3.1 up through AmigaOS 3.9. It is the only Amiga-based system to support FireWire for video I/O. DraCo also offers an Amiga-compatible Zorro-II expansion bus and introduced a faster custom DraCoBus, capable of transfer rates (faster than Commodore's Zorro-III). The technology was later used in the Casablanca system, a set-top box also designed for non-linear video editing. In 1998, Index Information released the Access, an Amiga clone similar to the Amiga 1200, but on a motherboard that could fit into a standard -inch drive bay. It features either a 68020 or 68030 CPU, with an AGA chipset, and runs AmigaOS 3.1. 
In 1998, former Amiga employees (John Smith, Peter Kittel, Dave Haynie and Andy Finkel to mention few) formed a new company called PIOS. Their hardware platform, PIOS One, was aimed at Amiga, Atari and Macintosh users. The company was renamed to Met@box in 1999 until it folded. The NatAmi (short for Native Amiga) hardware project began in 2005 with the aim of designing and building an Amiga clone motherboard that is enhanced with modern features. The NatAmi motherboard is a standard Mini-ITX-compatible form factor computer motherboard, powered by a Motorola/Freescale 68060 and its chipset. It is compatible with the original Amiga chipset, which has been inscribed on a programmable FPGA Altera chip on the board. The NatAmi is the second Amiga clone project after the Minimig motherboard, and its history is very similar to that of the C-One mainboard developed by Jeri Ellsworth and Jens Schönfeld. From a commercial point of view, Natami's circuitry and design are currently closed source. One goal of the NatAmi project is to design an Amiga-compatible motherboard that includes up-to-date features but that does not rely on emulation (as in WinUAE), modern PC Intel components, or a modern PowerPC mainboard. As such, NatAmi is not intended to become another evolutionary heir to classic Amigas, such as with AmigaOne or Pegasos computers. This "purist" philosophy essentially limits the resulting processor speed but puts the focus on bandwidth and low latencies. The developers also recreated the entire Amiga chipset, freeing it from legacy Amiga limitations such as two megabytes of audio and video graphics RAM as in the AGA chipset, and rebuilt this new chipset by programming a modern FPGA Altera Cyclone IV chip. Later, the developers decided to create from scratch a new software-form processor chip, codenamed "N68050" that resides in the physical Altera FPGA programmable chip. In 2006, two new Amiga clones were announced, both using FPGA based hardware synthesis to replace the Amiga OCS custom chipset. The first, the Minimig, is a personal project of Dutch engineer Dennis van Weeren. Referred to as "new Amiga hardware", the original model was built on a Xilinx Spartan-3 development board, but soon a dedicated board was developed. The minimig uses the FPGA to reproduce the custom Denise, Agnus, Paula and Gary chips as well as both 8520 CIAs and implements a simple version of Amber. The rest of the chips are an actual 68000 CPU, ram chips, and a PIC microcontroller for BIOS control. The design for Minimig was released as open-source on July 25, 2007. In February 2008, an Italian company Acube Systems began selling Minimig boards. A third party upgrade replaces the PIC microcontroller with a more powerful ARM processor, providing more functionality such as write access and support for hard disk images. The Minimig core has been ported to the FPGArcade "Replay" board. The Replay uses an FPGA with about three times more capacity and that does support the AGA chipset and a 68020 soft core with 68030 capabilities. The Replay board is designed to implement many older computers and classic arcade machines. The second is the Clone-A system announced by Individual Computers. As of mid-2007 it has been shown in its development form, with FPGA-based boards replacing the Amiga chipset and mounted on an Amiga 500 motherboard. Operating systems AmigaOS AmigaOS is a single-user multitasking operating system. 
It was one of the first commercially available consumer operating systems for personal computers to implement preemptive multitasking. It was developed first by Commodore International and initially introduced in 1985 with the Amiga 1000. John C. Dvorak wrote in PC Magazine in 1996: AmigaOS combines a command-line interface and graphical user interface. AmigaDOS is the disk operating system and command line portion of the OS and Workbench the native graphical windowing, graphical environment for file management and launching applications. AmigaDOS allows long filenames (up to 107 characters) with whitespace and does not require filename extensions. The windowing system and user interface engine that handles all input events is called Intuition. The multi-tasking kernel is called Exec. It acts as a scheduler for tasks running on the system, providing pre-emptive multitasking with prioritised round-robin scheduling. It enabled true pre-emptive multitasking in as little as 256 KB of free memory. AmigaOS does not implement memory protection; the 68000 CPU does not include a memory management unit. Although this speeds and eases inter-process communication because programs can communicate by simply passing a pointer back and forth, the lack of memory protection made the AmigaOS more vulnerable to crashes from badly behaving programs than other multitasking systems that did implement memory protection, and Amiga OS is fundamentally incapable of enforcing any form of security model since any program had full access to the system. A co-operational memory protection feature was implemented in AmigaOS 4 and could be retrofitted to old AmigaOS systems using Enforcer or CyberGuard tools. The problem was somewhat exacerbated by Commodore's initial decision to release documentation relating not only to the OS's underlying software routines, but also to the hardware itself, enabling intrepid programmers who had developed their skills on the Commodore 64 to POKE the hardware directly, as was done on the older platform. While the decision to release the documentation was a popular one and allowed the creation of fast, sophisticated sound and graphics routines in games and demos, it also contributed to system instabilityas some programmers lacked the expertise to program at this level. For this reason, when the new AGA chipset was released, Commodore declined to release low-level documentation in an attempt to force developers into using the approved software routines. The latest version for the PPC Amigas is the AmigaOS 4.1 and for the 68k Amigas is the AmigaOS 3.2.2 Influence on other operating systems AmigaOS directly or indirectly inspired the development of various operating systems. MorphOS and AROS clearly inherit heavily from the structure of AmigaOS as explained directly in articles regarding these two operating systems. AmigaOS also influenced BeOS, which featured a centralized system of Datatypes, similar to that present in AmigaOS. Likewise, DragonFly BSD was also inspired by AmigaOS as stated by Dragonfly developer Matthew Dillon who is a former Amiga developer. WindowLab and amiwm are among several window managers for the X Window System seek to mimic the Workbench interface. IBM licensed the Amiga GUI from Commodore in exchange for the REXX language license. This allowed OS/2 to have the WPS (Workplace Shell) GUI shell for OS/2 2.0, a 32-bit operating system. Unix and Unix-like systems Commodore-Amiga produced Amiga Unix, informally known as Amix, based on AT&T SVR4. 
It supports the Amiga 2500 and Amiga 3000 and is included with the Amiga 3000UX. Among other unusual features of Amix is a hardware-accelerated windowing system that can scroll windows without copying data. Amix is not supported on the later Amiga systems based on 68040 or 68060 processors. Other, still maintained, operating systems are available for the classic Amiga platform, including Linux and NetBSD. Both require a CPU with MMU such as the 68020 with 68851 or full versions of the 68030, 68040 or 68060. There is also a version of Linux for Amigas with PowerPC accelerator cards. Debian and Yellow Dog Linux can run on the AmigaOne. There is an official, older version of OpenBSD. The last Amiga release is 3.2. MINIX 1.5.10 also runs on Amiga. Emulating other systems The Amiga Sidecar is a complete IBM PC XT compatible computer contained in an expansion card. It was released by Commodore in 1986 and promoted as a way to run business software on the Amiga 1000. Amiga software In the late 1980s and early 1990s the platform became particularly popular for gaming, demoscene activities and creative software uses. During this time commercial developers marketed a wide range of games and creative software, often developing titles simultaneously for the Atari ST due to the similar hardware architecture. Popular creative software included 3D rendering (ray-tracing) packages, bitmap graphics editors, desktop video software, software development packages and "tracker" music editors. Until the late 1990s the Amiga remained a popular platform for non-commercial software, often developed by enthusiasts, and much of which was freely redistributable. An on-line archive, Aminet, was created in 1991 and until the late-1990s was the largest public archive of software, art and documents for any platform. Marketing The name Amiga was chosen by the developers from the Spanish word for a female friend, because they knew Spanish, and because it occurred before Apple and Atari alphabetically. It also conveyed the message that the Amiga computer line was "user friendly" as a pun or play on words. The first official Amiga logo was a rainbow-colored double check mark. In later marketing material Commodore largely dropped the checkmark and used logos styled with various typefaces. Although it was never adopted as a trademark by Commodore, the "Boing Ball" has been synonymous with Amiga since its launch. It became an unofficial and enduring theme after a visually impressive animated demonstration at the 1984 Winter Consumer Electronics Show in January 1984 showing a checkered ball bouncing and rotating. Following Escom's purchase of Commodore in 1996, the Boing Ball theme was incorporated into a new logo. Early Commodore advertisements attempted to cast the computer as an all-purpose business machine, though the Amiga was most commercially successful as a home computer. Throughout the 1980s and early 1990s Commodore primarily placed advertising in computer magazines and occasionally in national newspapers and on television. 
Legacy Since the demise of Commodore, various groups have marketed successors to the original Amiga line: Genesi sold PowerPC based hardware under the Pegasos brand running AmigaOS and MorphOS; Eyetech sold PowerPC based hardware under the AmigaOne brand from 2002 to 2005 running AmigaOS 4; Amiga Kit distributes and sells PowerPC based hardware under the AmigaOne brand from 2010 to present day running AmigaOS 4; ACube Systems sells the AmigaOS 3 compatible Minimig system with a Freescale MC68SEC000 CPU (Motorola 68000 compatible) and AmigaOS 4 compatible Sam440 / Sam460 / AmigaOne 500 systems with PowerPC processors; A-EON Technology Ltd sells the AmigaOS 4 compatible AmigaOne X1000 system with P.A. Semi PWRficient PA6T-1682M processor, X5000 and A1222+ computers. AmigaKit Ltd produce the A600GS and A1200NG computers systems. They also manufacture and sell a wide range of aftermarket components to refurbished classic systems. ASB Computer Spain sell numerous items from aftermarket components to refurbished classic systems. AmigaOS and MorphOS are commercial proprietary operating systems. AmigaOS 4, based on AmigaOS 3.1 source code with some parts of version 3.9, is developed by Hyperion Entertainment and runs on PowerPC based hardware. MorphOS, based on some parts of AROS source code, is developed by MorphOS Team and is continued on Apple and other PowerPC based hardware. There is also AROS, a free and open source operating system (re-implementation of the AmigaOS 3.1 APIs), for Amiga 68k, x86 and ARM hardware (one version runs Linux-hosted on the Raspberry Pi). In particular, AROS for Amiga 68k hardware aims to create an open source Kickstart ROM replacement for emulation purpose and/or for use on real "classic" hardware. Magazines Amiga Format continued publication until 2000. Amiga Active was launched in 1999 and was published until 2001. Several magazines are in publication today: Print magazine Amiga Addict started publication in 2020.Amiga Future, which is available in both English and German; Bitplane.it, a bimonthly magazine in Italian; and AmigaPower, a long-running French magazine. Trade shows The Amiga continues to be popular enough that fans to support conferences such as Amiga37 which had over 50 vendors. Uses The Amiga series of computers found a place in early computer graphic design and television presentation. Season 1 and part of season 2 of the television series Babylon 5 were rendered in LightWave 3D on Amigas. Other television series using Amigas for special effects included SeaQuest DSV and Max Headroom. In addition, many celebrities and notable individuals have made use of the Amiga: Andy Warhol was an early user of the Amiga and appeared at the launch, where he made a computer artwork of Debbie Harry. Warhol used the Amiga to create a new style of art made with computers, and was the author of a multimedia opera called You Are the One, which consists of an animated sequence featuring images of actress Marilyn Monroe assembled in a short movie with a soundtrack. The video was discovered on two old Amiga floppies in a drawer in Warhol's studio and repaired in 2006 by the Detroit Museum of New Art. The pop artist has been quoted as saying: "The thing I like most about doing this kind of work on the Amiga is that it looks like my work in other media". Artist Jean "Moebius" Giraud credits the Amiga he bought for his son as a bridge to learning about "using paint box programs". He uploaded some of his early experiments to the file sharing forums on CompuServe. 
Futurist and science fiction author Arthur C. Clarke used an Amiga computer to calculate and explore Mandelbrot sets in the 1988 documentary film God, the Universe and Everything Else. The "Weird Al" Yankovic film UHF contains a computer-animated music video parody of the Dire Straits song "Money for Nothing", titled "Money for Nothing/Beverly Hillbillies*". According to the DVD commentary track, this spoof was created on an Amiga home computer. Rolf Harris used an Amiga to digitize his hand-drawn art work for animation on his television series Rolf's Cartoon Club. Debbie Harry appeared together with Andy Warhol (see above) at launch. Todd Rundgren's video "Change Myself" was produced with Toaster and Lightwave. Scottish pop artist Calvin Harris composed his 2007 debut album I Created Disco with an Amiga 1200. Susumu Hirasawa, a Japanese progressive-electronic artist, is known for using Amigas to compose and perform music, aid his live shows and make his promotional videos. He has also been inspired by the Amiga, and has referenced it in his lyrics. His December 13, 1994 "Adios Jay" Interactive Live Show was dedicated to (then recently deceased) Jay Miner. He also used the Amiga to create the virtual drummer TAINACO, who was a CG rendered figure whose performance was made with Elan Performer and was projected with DCTV. He also composed and performed "Eastern-boot", the AmigaOS 4 boot jingle. Electronic musician Max Tundra created his three albums with an Amiga 500. Bob Casale, keyboardist and guitarist of the new wave band Devo, used Amiga computer graphics on the album cover to Devo's album Total Devo. Most of Pokémon Gold and Silver's music was created on an Amiga computer, converted to MIDI, and then reconverted to the game's music format. American professional skateboarder Tony Hawk used an Amiga 2000 during the late 1980s to early 1990s. NewTek sent him a Video Toaster for his Amiga in exchange for appearing in a promotional video alongside Wil Wheaton and Penn Jillette, which he later used for editing a promotional video for the TurboDuo game Lords of Thunder in 1993. Veteran actor Dick Van Dyke also owned an Amiga equipped with a Video Toaster, where he is credited with the creation of 3D-rendered effects used on Diagnosis: Murder and The Dick Van Dyke Show Revisited. Van Dyke has displayed his computer-generated imagery work at SIGGRAPH, and continues to work with LightWave 3D. A number of notable producers used OctaMED for composition and live performance of Drum and Bass, Jungle, and various other sub-genres of electronic dance music on Amiga systems, occasionally in conjunction with additional synthesizers. These include: Aphrodite, DJ Zinc, Omni Trio, and Paradox, among others. Special purpose applications Amigas were used in various NASA laboratories to keep track of low orbiting satellites until 2004. Amigas were used at Kennedy Space Center to run strip-chart recorders, to format and display data, and control stations of platforms for Delta rocket launches. Palomar Observatory used Amigas to calibrate and control the charge-coupled devices in their telescopes, as well as to display and store the digitized images they collected. London Transport Museum developed their own interactive multi-media software for the CD32 including a virtual tour of the museum. Amiga 500 motherboards were used, in conjunction with a LaserDisc player and genlock device, in arcade games manufactured by American Laser Games. 
A custom Amiga 4000T motherboard was used in the HDI 1000 medical ultrasound system built by Advanced Technology Labs. The Grand Rapids Public School district has used a Commodore Amiga 2000 with a 1200 baud modem to automate the air conditioning and heating systems for the 19 schools covered by the GRPS district; the system has been operating day and night for decades. The Weather Network used Amigas to display the weather on TV.
Technology
Specific hardware
null
1997
https://en.wikipedia.org/wiki/Algebraic%20geometry
Algebraic geometry
Algebraic geometry is a branch of mathematics which uses abstract algebraic techniques, mainly from commutative algebra, to solve geometrical problems. Classically, it studies zeros of multivariate polynomials; the modern approach generalizes this in a few different aspects. The fundamental objects of study in algebraic geometry are algebraic varieties, which are geometric manifestations of solutions of systems of polynomial equations. Examples of the most studied classes of algebraic varieties are lines, circles, parabolas, ellipses, hyperbolas, cubic curves like elliptic curves, and quartic curves like lemniscates and Cassini ovals. These are plane algebraic curves. A point of the plane lies on an algebraic curve if its coordinates satisfy a given polynomial equation. Basic questions involve the study of points of special interest like singular points, inflection points and points at infinity. More advanced questions involve the topology of the curve and the relationship between curves defined by different equations. Algebraic geometry occupies a central place in modern mathematics and has multiple conceptual connections with such diverse fields as complex analysis, topology and number theory. As a study of systems of polynomial equations in several variables, the subject of algebraic geometry begins with finding specific solutions via equation solving, and then proceeds to understand the intrinsic properties of the totality of solutions of a system of equations. This understanding requires both conceptual theory and computational technique. In the 20th century, algebraic geometry split into several subareas. The mainstream of algebraic geometry is devoted to the study of the complex points of the algebraic varieties and more generally to the points with coordinates in an algebraically closed field. Real algebraic geometry is the study of the real algebraic varieties. Diophantine geometry and, more generally, arithmetic geometry is the study of algebraic varieties over fields that are not algebraically closed and, specifically, over fields of interest in algebraic number theory, such as the field of rational numbers, number fields, finite fields, function fields, and p-adic fields. A large part of singularity theory is devoted to the singularities of algebraic varieties. Computational algebraic geometry is an area that has emerged at the intersection of algebraic geometry and computer algebra, with the rise of computers. It consists mainly of algorithm design and software development for the study of properties of explicitly given algebraic varieties. Much of the development of the mainstream of algebraic geometry in the 20th century occurred within an abstract algebraic framework, with increasing emphasis being placed on "intrinsic" properties of algebraic varieties not dependent on any particular way of embedding the variety in an ambient coordinate space; this parallels developments in topology, differential and complex geometry. One key achievement of this abstract algebraic geometry is Grothendieck's scheme theory which allows one to use sheaf theory to study algebraic varieties in a way which is very similar to its use in the study of differential and analytic manifolds. This is obtained by extending the notion of point: In classical algebraic geometry, a point of an affine variety may be identified, through Hilbert's Nullstellensatz, with a maximal ideal of the coordinate ring, while the points of the corresponding affine scheme are all prime ideals of this ring. 
This means that a point of such a scheme may be either a usual point or a subvariety. This approach also enables a unification of the language and the tools of classical algebraic geometry, mainly concerned with complex points, and of algebraic number theory. Wiles' proof of the longstanding conjecture called Fermat's Last Theorem is an example of the power of this approach. Basic notions Zeros of simultaneous polynomials In classical algebraic geometry, the main objects of interest are the vanishing sets of collections of polynomials, meaning the set of all points that simultaneously satisfy one or more polynomial equations. For instance, the two-dimensional sphere of radius 1 in three-dimensional Euclidean space R3 could be defined as the set of all points with A "slanted" circle in R3 can be defined as the set of all points which satisfy the two polynomial equations Affine varieties First we start with a field k. In classical algebraic geometry, this field was always the complex numbers C, but many of the same results are true if we assume only that k is algebraically closed. We consider the affine space of dimension n over k, denoted An(k) (or more simply An, when k is clear from the context). When one fixes a coordinate system, one may identify An(k) with kn. The purpose of not working with kn is to emphasize that one "forgets" the vector space structure that kn carries. A function f : An → A1 is said to be polynomial (or regular) if it can be written as a polynomial, that is, if there is a polynomial p in k[x1,...,xn] such that f(M) = p(t1,...,tn) for every point M with coordinates (t1,...,tn) in An. The property of a function to be polynomial (or regular) does not depend on the choice of a coordinate system in An. When a coordinate system is chosen, the regular functions on the affine n-space may be identified with the ring of polynomial functions in n variables over k. Therefore, the set of the regular functions on An is a ring, which is denoted k[An]. We say that a polynomial vanishes at a point if evaluating it at that point gives zero. Let S be a set of polynomials in k[An]. The vanishing set of S (or vanishing locus or zero set) is the set V(S) of all points in An where every polynomial in S vanishes. Symbolically, A subset of An which is V(S), for some S, is called an algebraic set. The V stands for variety (a specific type of algebraic set to be defined below). Given a subset U of An, can one recover the set of polynomials which generate it? If U is any subset of An, define I(U) to be the set of all polynomials whose vanishing set contains U. The I stands for ideal: if two polynomials f and g both vanish on U, then f+g vanishes on U, and if h is any polynomial, then hf vanishes on U, so I(U) is always an ideal of the polynomial ring k[An]. Two natural questions to ask are: Given a subset U of An, when is U = V(I(U))? Given a set S of polynomials, when is S = I(V(S))? The answer to the first question is provided by introducing the Zariski topology, a topology on An whose closed sets are the algebraic sets, and which directly reflects the algebraic structure of k[An]. Then U = V(I(U)) if and only if U is an algebraic set or equivalently a Zariski-closed set. The answer to the second question is given by Hilbert's Nullstellensatz. In one of its forms, it says that I(V(S)) is the radical of the ideal generated by S. 
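The interplay between a set of polynomials S, its vanishing set V(S), and the ideal I(V(S)) can be explored with a computer algebra system. The following sketch uses SymPy; the system S = {x² + y² − 1, y − x} and the test polynomial 2x² − 1 are chosen only for this illustration and are not taken from the article.

```python
# Illustrative sketch (example system chosen for this note, not from the article):
# compute V(S) for S = {x^2 + y^2 - 1, y - x} and test ideal membership
# of a polynomial that vanishes on V(S).
from sympy import symbols, solve, groebner

x, y = symbols('x y')
S = [x**2 + y**2 - 1, y - x]

# V(S): the points of A^2 where every polynomial in S vanishes.
print(solve(S, [x, y], dict=True))
# [{x: -sqrt(2)/2, y: -sqrt(2)/2}, {x: sqrt(2)/2, y: sqrt(2)/2}]

# The polynomial 2*x**2 - 1 vanishes at both points, so it lies in I(V(S)).
# Here it even lies in the ideal generated by S, which a Groebner basis detects:
G = groebner(S, x, y, order='lex')
print(G.contains(2*x**2 - 1))   # True
```

In general the Nullstellensatz only guarantees membership in the radical of the ideal generated by S; in this small example the polynomial happens to lie in the ideal itself.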
In more abstract language, there is a Galois connection, giving rise to two closure operators; they can be identified, and naturally play a basic role in the theory; the example is elaborated at Galois connection. For various reasons we may not always want to work with the entire ideal corresponding to an algebraic set U. Hilbert's basis theorem implies that ideals in k[An] are always finitely generated. An algebraic set is called irreducible if it cannot be written as the union of two smaller algebraic sets. Any algebraic set is a finite union of irreducible algebraic sets and this decomposition is unique. Thus its elements are called the irreducible components of the algebraic set. An irreducible algebraic set is also called a variety. It turns out that an algebraic set is a variety if and only if it may be defined as the vanishing set of a prime ideal of the polynomial ring. Some authors do not make a clear distinction between algebraic sets and varieties and use irreducible variety to make the distinction when needed. Regular functions Just as continuous functions are the natural maps on topological spaces and smooth functions are the natural maps on differentiable manifolds, there is a natural class of functions on an algebraic set, called regular functions or polynomial functions. A regular function on an algebraic set V contained in An is the restriction to V of a regular function on An. For an algebraic set defined on the field of the complex numbers, the regular functions are smooth and even analytic. It may seem unnaturally restrictive to require that a regular function always extend to the ambient space, but it is very similar to the situation in a normal topological space, where the Tietze extension theorem guarantees that a continuous function on a closed subset always extends to the ambient topological space. Just as with the regular functions on affine space, the regular functions on V form a ring, which we denote by k[V]. This ring is called the coordinate ring of V. Since regular functions on V come from regular functions on An, there is a relationship between the coordinate rings. Specifically, if a regular function on V is the restriction of two functions f and g in k[An], then f − g is a polynomial function which is null on V and thus belongs to I(V). Thus k[V] may be identified with k[An]/I(V). Morphism of affine varieties Using regular functions from an affine variety to A1, we can define regular maps from one affine variety to another. First we will define a regular map from a variety into affine space: Let V be a variety contained in An. Choose m regular functions on V, and call them f1, ..., fm. We define a regular map f from V to Am by letting . In other words, each fi determines one coordinate of the range of f. If V′ is a variety contained in Am, we say that f is a regular map from V to V′ if the range of f is contained in V′. The definition of the regular maps apply also to algebraic sets. The regular maps are also called morphisms, as they make the collection of all affine algebraic sets into a category, where the objects are the affine algebraic sets and the morphisms are the regular maps. The affine varieties is a subcategory of the category of the algebraic sets. Given a regular map g from V to V′ and a regular function f of k[V′], then . The map is a ring homomorphism from k[V′] to k[V]. Conversely, every ring homomorphism from k[V′] to k[V] defines a regular map from V to V′. 
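As a concrete instance of this correspondence (an example chosen for this sketch, not taken from the article), take the regular map g from A¹ to V′ = V(y² − x³) given by t ↦ (t², t³); the associated ring homomorphism from k[V′] to k[A¹] is simply substitution of the coordinate functions of g.

```python
# Sketch of a regular map and its pullback on coordinate rings (example chosen
# for this note, not from the article), using SymPy for the substitutions.
from sympy import symbols, simplify

t, x, y = symbols('t x y')

# g: A^1 -> A^2, t |-> (t^2, t^3); its image lies in V' = V(y^2 - x^3),
# because the defining polynomial pulls back to zero:
g = {x: t**2, y: t**3}
print(simplify((y**2 - x**3).subs(g)))   # 0

# The induced ring homomorphism g*: k[V'] -> k[A^1] sends a regular function
# f on V' to its composition with g, i.e. substitutes the coordinates of g.
f = x*y + 1
print(f.subs(g))                          # t**5 + 1
```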
This defines an equivalence of categories between the category of algebraic sets and the opposite category of the finitely generated reduced k-algebras. This equivalence is one of the starting points of scheme theory. Rational function and birational equivalence In contrast to the preceding sections, this section concerns only varieties and not algebraic sets. On the other hand, the definitions extend naturally to projective varieties (next section), as an affine variety and its projective completion have the same field of functions. If V is an affine variety, its coordinate ring is an integral domain and has thus a field of fractions which is denoted k(V) and called the field of the rational functions on V or, shortly, the function field of V. Its elements are the restrictions to V of the rational functions over the affine space containing V. The domain of a rational function f is not V but the complement of the subvariety (a hypersurface) where the denominator of f vanishes. As with regular maps, one may define a rational map from a variety V to a variety V'. As with the regular maps, the rational maps from V to V' may be identified to the field homomorphisms from k(V') to k(V). Two affine varieties are birationally equivalent if there are two rational functions between them which are inverse one to the other in the regions where both are defined. Equivalently, they are birationally equivalent if their function fields are isomorphic. An affine variety is a rational variety if it is birationally equivalent to an affine space. This means that the variety admits a rational parameterization, that is a parametrization with rational functions. For example, the circle of equation is a rational curve, as it has the parametric equation which may also be viewed as a rational map from the line to the circle. The problem of resolution of singularities is to know if every algebraic variety is birationally equivalent to a variety whose projective completion is nonsingular (see also smooth completion). It was solved in the affirmative in characteristic 0 by Heisuke Hironaka in 1964 and is yet unsolved in finite characteristic. Projective variety Just as the formulas for the roots of second, third, and fourth degree polynomials suggest extending real numbers to the more algebraically complete setting of the complex numbers, many properties of algebraic varieties suggest extending affine space to a more geometrically complete projective space. Whereas the complex numbers are obtained by adding the number i, a root of the polynomial , projective space is obtained by adding in appropriate points "at infinity", points where parallel lines may meet. To see how this might come about, consider the variety . If we draw it, we get a parabola. As x goes to positive infinity, the slope of the line from the origin to the point (x, x2) also goes to positive infinity. As x goes to negative infinity, the slope of the same line goes to negative infinity. Compare this to the variety V(y − x3). This is a cubic curve. As x goes to positive infinity, the slope of the line from the origin to the point (x, x3) goes to positive infinity just as before. But unlike before, as x goes to negative infinity, the slope of the same line goes to positive infinity as well; the exact opposite of the parabola. So the behavior "at infinity" of V(y − x3) is different from the behavior "at infinity" of V(y − x2). 
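One crude way to make the comparison "at infinity" concrete is to homogenise the two equations with an extra variable z and intersect with the line z = 0. The computation below is an illustrative SymPy sketch of this standard step, not something taken from the article.

```python
# Homogenise y - x^2 and y - x^3 and intersect with the line at infinity z = 0.
from sympy import symbols, roots

x, y, z = symbols('x y z')

parabola = y*z - x**2      # homogenisation of y - x^2
cubic = y*z**2 - x**3      # homogenisation of y - x^3

# On z = 0 each curve meets the line at infinity only at the point [0 : 1 : 0],
# but with different intersection multiplicities (2 for the parabola, 3 for the cubic).
print(roots(parabola.subs(z, 0), x))   # {0: 2}
print(roots(cubic.subs(z, 0), x))      # {0: 3}
```

The higher multiplicity for the cubic is consistent with the singular point at infinity described next.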
The consideration of the projective completion of the two curves, which is their prolongation "at infinity" in the projective plane, allows us to quantify this difference: the point at infinity of the parabola is a regular point, whose tangent is the line at infinity, while the point at infinity of the cubic curve is a cusp. Also, both curves are rational, as they are parameterized by x, and the Riemann-Roch theorem implies that the cubic curve must have a singularity, which must be at infinity, as all its points in the affine space are regular. Thus many of the properties of algebraic varieties, including birational equivalence and all the topological properties, depend on the behavior "at infinity" and so it is natural to study the varieties in projective space. Furthermore, the introduction of projective techniques made many theorems in algebraic geometry simpler and sharper: For example, Bézout's theorem on the number of intersection points between two varieties can be stated in its sharpest form only in projective space. For these reasons, projective space plays a fundamental role in algebraic geometry. Nowadays, the projective space Pn of dimension n is usually defined as the set of the lines passing through a point, considered as the origin, in the affine space of dimension , or equivalently to the set of the vector lines in a vector space of dimension . When a coordinate system has been chosen in the space of dimension , all the points of a line have the same set of coordinates, up to the multiplication by an element of k. This defines the homogeneous coordinates of a point of Pn as a sequence of elements of the base field k, defined up to the multiplication by a nonzero element of k (the same for the whole sequence). A polynomial in variables vanishes at all points of a line passing through the origin if and only if it is homogeneous. In this case, one says that the polynomial vanishes at the corresponding point of Pn. This allows us to define a projective algebraic set in Pn as the set , where a finite set of homogeneous polynomials vanishes. Like for affine algebraic sets, there is a bijection between the projective algebraic sets and the reduced homogeneous ideals which define them. The projective varieties are the projective algebraic sets whose defining ideal is prime. In other words, a projective variety is a projective algebraic set, whose homogeneous coordinate ring is an integral domain, the projective coordinates ring being defined as the quotient of the graded ring or the polynomials in variables by the homogeneous (reduced) ideal defining the variety. Every projective algebraic set may be uniquely decomposed into a finite union of projective varieties. The only regular functions which may be defined properly on a projective variety are the constant functions. Thus this notion is not used in projective situations. On the other hand, the field of the rational functions or function field is a useful notion, which, similarly to the affine case, is defined as the set of the quotients of two homogeneous elements of the same degree in the homogeneous coordinate ring. Real algebraic geometry Real algebraic geometry is the study of real algebraic varieties. The fact that the field of the real numbers is an ordered field cannot be ignored in such a study. For example, the curve of equation is a circle if , but has no real points if . Real algebraic geometry also investigates, more broadly, semi-algebraic sets, which are the solutions of systems of polynomial inequalities. 
For example, neither branch of the hyperbola of equation is a real algebraic variety. However, the branch in the first quadrant is a semi-algebraic set defined by and . One open problem in real algebraic geometry is the following part of Hilbert's sixteenth problem: Decide which respective positions are possible for the ovals of a nonsingular plane curve of degree 8. Computational algebraic geometry One may date the origin of computational algebraic geometry to meeting EUROSAM'79 (International Symposium on Symbolic and Algebraic Manipulation) held at Marseille, France, in June 1979. At this meeting, Dennis S. Arnon showed that George E. Collins's Cylindrical algebraic decomposition (CAD) allows the computation of the topology of semi-algebraic sets, Bruno Buchberger presented Gröbner bases and his algorithm to compute them, Daniel Lazard presented a new algorithm for solving systems of homogeneous polynomial equations with a computational complexity which is essentially polynomial in the expected number of solutions and thus simply exponential in the number of the unknowns. This algorithm is strongly related with Macaulay's multivariate resultant. Since then, most results in this area are related to one or several of these items either by using or improving one of these algorithms, or by finding algorithms whose complexity is simply exponential in the number of the variables. A body of mathematical theory complementary to symbolic methods called numerical algebraic geometry has been developed over the last several decades. The main computational method is homotopy continuation. This supports, for example, a model of floating point computation for solving problems of algebraic geometry. Gröbner basis A Gröbner basis is a system of generators of a polynomial ideal whose computation allows the deduction of many properties of the affine algebraic variety defined by the ideal. Given an ideal I defining an algebraic set V: V is empty (over an algebraically closed extension of the basis field), if and only if the Gröbner basis for any monomial ordering is reduced to {1}. By means of the Hilbert series one may compute the dimension and the degree of V from any Gröbner basis of I for a monomial ordering refining the total degree. If the dimension of V is 0, one may compute the points (finite in number) of V from any Gröbner basis of I (see Systems of polynomial equations). A Gröbner basis computation allows one to remove from V all irreducible components which are contained in a given hypersurface. A Gröbner basis computation allows one to compute the Zariski closure of the image of V by the projection on the k first coordinates, and the subset of the image where the projection is not proper. More generally Gröbner basis computations allow one to compute the Zariski closure of the image and the critical points of a rational function of V into another affine variety. Gröbner basis computations do not allow one to compute directly the primary decomposition of I nor the prime ideals defining the irreducible components of V, but most algorithms for this involve Gröbner basis computation. The algorithms which are not based on Gröbner bases use regular chains but may need Gröbner bases in some exceptional situations. Gröbner bases are deemed to be difficult to compute. In fact they may contain, in the worst case, polynomials whose degree is doubly exponential in the number of variables and a number of polynomials which is also doubly exponential. 
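A minimal illustration of the emptiness test and of solving a zero-dimensional system, using SymPy (the two systems are chosen for this sketch and are not from the article):

```python
# Sketch with SymPy (systems chosen for this illustration, not from the article).
from sympy import symbols, groebner, solve

x, y = symbols('x y')

# Emptiness test: an inconsistent system has Groebner basis {1},
# so its vanishing set is empty over any algebraically closed extension of Q.
print(groebner([x**2 + y**2, x**2 + y**2 - 1], x, y, order='lex'))
# GroebnerBasis([1], ...)

# A zero-dimensional system: a lexicographic Groebner basis is triangular,
# containing a univariate polynomial in y, from which the finitely many
# solutions can be read off.
G = groebner([x**2 + y**2 - 5, x*y - 2], x, y, order='lex')
print(list(G))
print(solve(list(G), [x, y]))   # (1, 2), (2, 1), (-1, -2), (-2, -1)
```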
However, this doubly exponential behaviour is only a worst-case complexity, and the complexity bound of Lazard's algorithm of 1979 may frequently apply. Faugère's F5 algorithm realizes this complexity, as it may be viewed as an improvement of Lazard's 1979 algorithm. It follows that the best implementations allow one to compute almost routinely with algebraic sets of degree more than 100. This means that, presently, the difficulty of computing a Gröbner basis is strongly related to the intrinsic difficulty of the problem. Cylindrical algebraic decomposition (CAD) CAD is an algorithm which was introduced in 1973 by G. Collins to implement, with an acceptable complexity, the Tarski–Seidenberg theorem on quantifier elimination over the real numbers. This theorem concerns the formulas of first-order logic whose atomic formulas are polynomial equalities or inequalities between polynomials with real coefficients. These formulas are thus the formulas which may be constructed from the atomic formulas by the logical operators and (∧), or (∨), not (¬), for all (∀) and exists (∃). Tarski's theorem asserts that, from such a formula, one may compute an equivalent formula without quantifiers (∀, ∃). The complexity of CAD is doubly exponential in the number of variables. This means that CAD allows one, in theory, to solve every problem of real algebraic geometry which may be expressed by such a formula, that is, almost every problem concerning explicitly given varieties and semi-algebraic sets. While Gröbner basis computation has doubly exponential complexity only in rare cases, CAD almost always has this high complexity. This implies that, unless most of the polynomials appearing in the input are linear, it may not solve problems with more than four variables. Since 1973, most of the research on this subject has been devoted either to improving CAD or to finding alternative algorithms in special cases of general interest. As an example of the state of the art, there are efficient algorithms to find at least a point in every connected component of a semi-algebraic set, and thus to test whether a semi-algebraic set is empty. On the other hand, CAD is still, in practice, the best algorithm to count the number of connected components. Asymptotic complexity vs. practical efficiency The basic general algorithms of computational algebraic geometry have a doubly exponential worst-case complexity. More precisely, if d is the maximal degree of the input polynomials and n the number of variables, their complexity is at most $d^{2^{cn}}$ for some constant c, and, for some inputs, the complexity is at least $d^{2^{c'n}}$ for another constant c′. During the last 20 years of the 20th century, various algorithms have been introduced to solve specific subproblems with a better complexity. Most of these algorithms have a complexity of $d^{O(n^2)}$. Among the algorithms which solve a subproblem of the problems solved by Gröbner bases, one may cite testing whether an affine variety is empty and solving nonhomogeneous polynomial systems which have a finite number of solutions. Such algorithms are rarely implemented because, on most inputs, Faugère's F4 and F5 algorithms have better practical efficiency and probably a similar or better complexity (probably because the evaluation of the complexity of Gröbner basis algorithms on a particular class of inputs is a difficult task which has been done only in a few special cases). The main algorithms of real algebraic geometry which solve a problem solved by CAD are related to the topology of semi-algebraic sets; some examples are listed after the illustration below. 
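Before those topological algorithms are listed, the flavour of the quantifier elimination that CAD performs can be seen in a standard textbook example (not taken from this article): over the real numbers,

```latex
\exists x \in \mathbb{R}\;\bigl( x^2 + b x + c = 0 \bigr)
\quad\Longleftrightarrow\quad
b^2 - 4c \ \ge\ 0 .
```

The left-hand side is a first-order formula containing a quantifier; the right-hand side is an equivalent quantifier-free formula in the free variables b and c, which is exactly the kind of output Tarski's theorem guarantees and CAD computes.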
One may cite counting the number of connected components, testing if two points are in the same components or computing a Whitney stratification of a real algebraic set. They have a complexity of , but the constant involved by O notation is so high that using them to solve any nontrivial problem effectively solved by CAD, is impossible even if one could use all the existing computing power in the world. Therefore, these algorithms have never been implemented and this is an active research area to search for algorithms with have together a good asymptotic complexity and a good practical efficiency. Abstract modern viewpoint The modern approaches to algebraic geometry redefine and effectively extend the range of basic objects in various levels of generality to schemes, formal schemes, ind-schemes, algebraic spaces, algebraic stacks and so on. The need for this arises already from the useful ideas within theory of varieties, e.g. the formal functions of Zariski can be accommodated by introducing nilpotent elements in structure rings; considering spaces of loops and arcs, constructing quotients by group actions and developing formal grounds for natural intersection theory and deformation theory lead to some of the further extensions. Most remarkably, in the early 1960s, algebraic varieties were subsumed into Alexander Grothendieck's concept of a scheme. Their local objects are affine schemes or prime spectra which are locally ringed spaces which form a category which is antiequivalent to the category of commutative unital rings, extending the duality between the category of affine algebraic varieties over a field k, and the category of finitely generated reduced k-algebras. The gluing is along Zariski topology; one can glue within the category of locally ringed spaces, but also, using the Yoneda embedding, within the more abstract category of presheaves of sets over the category of affine schemes. The Zariski topology in the set theoretic sense is then replaced by a Grothendieck topology. Grothendieck introduced Grothendieck topologies having in mind more exotic but geometrically finer and more sensitive examples than the crude Zariski topology, namely the étale topology, and the two flat Grothendieck topologies: fppf and fpqc; nowadays some other examples became prominent including Nisnevich topology. Sheaves can be furthermore generalized to stacks in the sense of Grothendieck, usually with some additional representability conditions leading to Artin stacks and, even finer, Deligne–Mumford stacks, both often called algebraic stacks. Sometimes other algebraic sites replace the category of affine schemes. For example, Nikolai Durov has introduced commutative algebraic monads as a generalization of local objects in a generalized algebraic geometry. Versions of a tropical geometry, of an absolute geometry over a field of one element and an algebraic analogue of Arakelov's geometry were realized in this setup. Another formal generalization is possible to universal algebraic geometry in which every variety of algebras has its own algebraic geometry. The term variety of algebras should not be confused with algebraic variety. The language of schemes, stacks and generalizations has proved to be a valuable way of dealing with geometric concepts and became cornerstones of modern algebraic geometry. Algebraic stacks can be further generalized and for many practical questions like deformation theory and intersection theory, this is often the most natural approach. 
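A minimal example of such a prime spectrum, assuming k is an algebraically closed field (a standard illustration, not drawn from this article):

```latex
\operatorname{Spec} k[x] \;=\; \{(0)\} \;\cup\; \{\,(x - a) : a \in k\,\}.
```

The maximal ideals (x − a) recover the classical points a of the affine line, while the prime ideal (0) is a generic point whose closure is the whole line; this is the sense, mentioned earlier, in which the points of a scheme comprise both ordinary points and subvarieties.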
One can extend the Grothendieck site of affine schemes to a higher categorical site of derived affine schemes, by replacing the commutative rings with an infinity category of differential graded commutative algebras, or of simplicial commutative rings or a similar category with an appropriate variant of a Grothendieck topology. One can also replace presheaves of sets by presheaves of simplicial sets (or of infinity groupoids). Then, in presence of an appropriate homotopic machinery one can develop a notion of derived stack as such a presheaf on the infinity category of derived affine schemes, which is satisfying certain infinite categorical version of a sheaf axiom (and to be algebraic, inductively a sequence of representability conditions). Quillen model categories, Segal categories and quasicategories are some of the most often used tools to formalize this yielding the derived algebraic geometry, introduced by the school of Carlos Simpson, including Andre Hirschowitz, Bertrand Toën, Gabrielle Vezzosi, Michel Vaquié and others; and developed further by Jacob Lurie, Bertrand Toën, and Gabriele Vezzosi. Another (noncommutative) version of derived algebraic geometry, using A-infinity categories has been developed from the early 1990s by Maxim Kontsevich and followers. History Before the 16th century Some of the roots of algebraic geometry date back to the work of the Hellenistic Greeks from the 5th century BC. The Delian problem, for instance, was to construct a length x so that the cube of side x contained the same volume as the rectangular box a2b for given sides a and b. Menaechmus () considered the problem geometrically by intersecting the pair of plane conics ay = x2 and xy = ab. In the 3rd century BC, Archimedes and Apollonius systematically studied additional problems on conic sections using coordinates. Apollonius in the Conics further developed a method that is so similar to analytic geometry that his work is sometimes thought to have anticipated the work of Descartes by some 1800 years. His application of reference lines, a diameter and a tangent is essentially no different from our modern use of a coordinate frame, where the distances measured along the diameter from the point of tangency are the abscissas, and the segments parallel to the tangent and intercepted between the axis and the curve are the ordinates. He further developed relations between the abscissas and the corresponding coordinates using geometric methods like using parabolas and curves. Medieval mathematicians, including Omar Khayyam, Leonardo of Pisa, Gersonides and Nicole Oresme in the Medieval Period, solved certain cubic and quadratic equations by purely algebraic means and then interpreted the results geometrically. The Persian mathematician Omar Khayyám (born 1048 AD) believed that there was a relationship between arithmetic, algebra and geometry. This was criticized by Jeffrey Oaks, who claims that the study of curves by means of equations originated with Descartes in the seventeenth century. Renaissance Such techniques of applying geometrical constructions to algebraic problems were also adopted by a number of Renaissance mathematicians such as Gerolamo Cardano and Niccolò Fontana "Tartaglia" on their studies of the cubic equation. The geometrical approach to construction problems, rather than the algebraic one, was favored by most 16th and 17th century mathematicians, notably Blaise Pascal who argued against the use of algebraic and analytical methods in geometry. 
The French mathematicians Franciscus Vieta and later René Descartes and Pierre de Fermat revolutionized the conventional way of thinking about construction problems through the introduction of coordinate geometry. They were interested primarily in the properties of algebraic curves, such as those defined by Diophantine equations (in the case of Fermat), and the algebraic reformulation of the classical Greek works on conics and cubics (in the case of Descartes). During the same period, Blaise Pascal and Gérard Desargues approached geometry from a different perspective, developing the synthetic notions of projective geometry. Pascal and Desargues also studied curves, but from the purely geometrical point of view: the analog of the Greek ruler and compass construction. Ultimately, the analytic geometry of Descartes and Fermat won out, for it supplied the 18th century mathematicians with concrete quantitative tools needed to study physical problems using the new calculus of Newton and Leibniz. However, by the end of the 18th century, most of the algebraic character of coordinate geometry was subsumed by the calculus of infinitesimals of Lagrange and Euler. 19th and early 20th century It took the simultaneous 19th century developments of non-Euclidean geometry and Abelian integrals in order to bring the old algebraic ideas back into the geometrical fold. The first of these new developments was seized up by Edmond Laguerre and Arthur Cayley, who attempted to ascertain the generalized metric properties of projective space. Cayley introduced the idea of homogeneous polynomial forms, and more specifically quadratic forms, on projective space. Subsequently, Felix Klein studied projective geometry (along with other types of geometry) from the viewpoint that the geometry on a space is encoded in a certain class of transformations on the space. By the end of the 19th century, projective geometers were studying more general kinds of transformations on figures in projective space. Rather than the projective linear transformations which were normally regarded as giving the fundamental Kleinian geometry on projective space, they concerned themselves also with the higher degree birational transformations. This weaker notion of congruence would later lead members of the 20th century Italian school of algebraic geometry to classify algebraic surfaces up to birational isomorphism. The second early 19th century development, that of Abelian integrals, would lead Bernhard Riemann to the development of Riemann surfaces. In the same period began the algebraization of the algebraic geometry through commutative algebra. The prominent results in this direction are Hilbert's basis theorem and Hilbert's Nullstellensatz, which are the basis of the connection between algebraic geometry and commutative algebra, and Macaulay's multivariate resultant, which is the basis of elimination theory. Probably because of the size of the computation which is implied by multivariate resultants, elimination theory was forgotten during the middle of the 20th century until it was renewed by singularity theory and computational algebraic geometry. 20th century B. L. van der Waerden, Oscar Zariski and André Weil developed a foundation for algebraic geometry based on contemporary commutative algebra, including valuation theory and the theory of ideals. One of the goals was to give a rigorous framework for proving the results of the Italian school of algebraic geometry. 
In particular, this school used systematically the notion of generic point without any precise definition, which was first given by these authors during the 1930s. In the 1950s and 1960s, Jean-Pierre Serre and Alexander Grothendieck recast the foundations making use of sheaf theory. Later, from about 1960, and largely led by Grothendieck, the idea of schemes was worked out, in conjunction with a very refined apparatus of homological techniques. After a decade of rapid development the field stabilized in the 1970s, and new applications were made, both to number theory and to more classical geometric questions on algebraic varieties, singularities, moduli, and formal moduli. An important class of varieties, not easily understood directly from their defining equations, are the abelian varieties, which are the projective varieties whose points form an abelian group. The prototypical examples are the elliptic curves, which have a rich theory. They were instrumental in the proof of Fermat's Last Theorem and are also used in elliptic-curve cryptography. In parallel with the abstract trend of the algebraic geometry, which is concerned with general statements about varieties, methods for effective computation with concretely-given varieties have also been developed, which lead to the new area of computational algebraic geometry. One of the founding methods of this area is the theory of Gröbner bases, introduced by Bruno Buchberger in 1965. Another founding method, more specially devoted to real algebraic geometry, is the cylindrical algebraic decomposition, introduced by George E. Collins in 1973.
Mathematics
Algebra
null
2039
https://en.wikipedia.org/wiki/Avionics
Avionics
Avionics (a portmanteau of aviation and electronics) are the electronic systems used on aircraft. Avionic systems include communications, navigation, the display and management of multiple systems, and the hundreds of systems that are fitted to aircraft to perform individual functions. These can be as simple as a searchlight for a police helicopter or as complicated as the tactical system for an airborne early warning platform. History The term "avionics" was coined in 1949 by Philip J. Klass, senior editor at Aviation Week & Space Technology magazine as a portmanteau of "aviation electronics". Radio communication was first used in aircraft just prior to World War I. The first airborne radios were in zeppelins, but the military sparked development of light radio sets that could be carried by heavier-than-air craft, so that aerial reconnaissance biplanes could report their observations immediately in case they were shot down. The first experimental radio transmission from an airplane was conducted by the U.S. Navy in August 1910. The first aircraft radios transmitted by radiotelegraphy. They required a two-seat aircraft with a second crewman who operated a telegraph key to spell out messages in Morse code. During World War I, AM voice two way radio sets were made possible in 1917 (see TM (triode)) by the development of the triode vacuum tube, which were simple enough that the pilot in a single seat aircraft could use it while flying. Radar, the central technology used today in aircraft navigation and air traffic control, was developed by several nations, mainly in secret, as an air defense system in the 1930s during the runup to World War II. Many modern avionics have their origins in World War II wartime developments. For example, autopilot systems that are commonplace today began as specialized systems to help bomber planes fly steadily enough to hit precision targets from high altitudes. Britain's 1940 decision to share its radar technology with its U.S. ally, particularly the magnetron vacuum tube, in the famous Tizard Mission, significantly shortened the war. Modern avionics is a substantial portion of military aircraft spending. Aircraft like the F-15E and the now retired F-14 have roughly 20 percent of their budget spent on avionics. Most modern helicopters now have budget splits of 60/40 in favour of avionics. The civilian market has also seen a growth in cost of avionics. Flight control systems (fly-by-wire) and new navigation needs brought on by tighter airspaces, have pushed up development costs. The major change has been the recent boom in consumer flying. As more people begin to use planes as their primary method of transportation, more elaborate methods of controlling aircraft safely in these high restrictive airspaces have been invented. Modern avionics Avionics plays a heavy role in modernization initiatives like the Federal Aviation Administration's (FAA) Next Generation Air Transportation System project in the United States and the Single European Sky ATM Research (SESAR) initiative in Europe. 
The Joint Planning and Development Office put forth a roadmap for avionics in six areas: Published Routes and Procedures – Improved navigation and routing; Negotiated Trajectories – Adding data communications to create preferred routes dynamically; Delegated Separation – Enhanced situational awareness in the air and on the ground; Low Visibility/Ceiling Approach/Departure – Allowing operations with weather constraints with less ground infrastructure; Surface Operations – To increase safety in approach and departure; ATM Efficiencies – Improving the air traffic management (ATM) process. Market The Aircraft Electronics Association reported $1.73 billion in avionics sales for the first three quarters of 2017 in business and general aviation, a 4.1% yearly improvement: 73.5% came from North America; forward-fit represented 42.3% while 57.7% were retrofits, as the U.S. deadline of January 1, 2020 for mandatory ADS-B Out approached. Aircraft avionics The cockpit of an aircraft or, in larger aircraft, an equipment bay under the cockpit or in a movable nosecone, is a typical location for avionics equipment, including control, monitoring, communication, navigation, weather, and anti-collision systems. The majority of aircraft power their avionics using 14- or 28-volt DC electrical systems; however, larger, more sophisticated aircraft (such as airliners or military combat aircraft) have AC systems operating at 115 volts, 400 Hz. There are several major vendors of flight avionics, including The Boeing Company, Panasonic Avionics Corporation, Honeywell (which now owns Bendix/King), Universal Avionics Systems Corporation, Rockwell Collins (now Collins Aerospace), Thales Group, GE Aviation Systems, Garmin, Raytheon, Parker Hannifin, UTC Aerospace Systems (now Collins Aerospace), Selex ES (now Leonardo), Shadin Avionics, and Avidyne Corporation. International standards for avionics equipment are prepared by the Airlines Electronic Engineering Committee (AEEC) and published by ARINC. Avionics Installation Avionics installation is a critical aspect of modern aviation, ensuring that aircraft are equipped with the necessary electronic systems for safe and efficient operation. These systems encompass a wide range of functions, including communication, navigation, monitoring, flight control, and weather detection. Avionics installations are performed on all types of aircraft, from small general aviation planes to large commercial jets and military aircraft. Installation Process The installation of avionics requires a combination of technical expertise, precision, and adherence to stringent regulatory standards. The process typically involves: Planning and Design: Before installation, the avionics shop works closely with the aircraft owner to determine the required systems based on the aircraft type, intended use, and regulatory requirements. Custom instrument panels are often designed to accommodate the new systems. Wiring and Integration: Avionics systems are integrated into the aircraft's electrical and control systems, with wiring often requiring laser marking for durability and identification. Shops use detailed schematics to ensure correct installation. Testing and Calibration: After installation, each system must be thoroughly tested and calibrated to ensure proper function. This includes ground testing, flight testing, and system alignment with regulatory standards such as those set by the FAA. Certification: Once the systems are installed and tested, the avionics shop completes the necessary certifications. 
In the U.S., this often involves compliance with FAA Part 91.411 and 91.413 for IFR (Instrument Flight Rules) operations, as well as RVSM (Reduced Vertical Separation Minimum) certification. Regulatory Standards Avionics installation is governed by strict regulatory frameworks to ensure the safety and reliability of aircraft systems. In the United States, the Federal Aviation Administration (FAA) sets the standards for avionics installations. These include guidelines for: System Performance: Avionics systems must meet performance benchmarks as defined by the FAA, ensuring they function correctly in all phases of flight. Certification: Shops performing installations must be FAA-certified, and their technicians often hold certifications such as the General Radiotelephone Operator License (GROL). Inspections: Aircraft equipped with newly installed avionics systems must undergo rigorous inspections before being cleared for flight, including both ground and flight tests. Advancements in Avionics Technology The field of avionics has seen rapid technological advancements in recent years, leading to more integrated and automated systems. Key trends include: Glass Cockpits: Traditional analog gauges are being replaced by fully integrated glass cockpit displays, providing pilots with a centralized view of all flight parameters. NextGen Technologies: ADS-B and satellite-based navigation are part of the FAA’s NextGen initiative, aimed at modernizing air traffic control and improving the efficiency of the national airspace. Autonomous Systems: Advances in artificial intelligence and machine learning are paving the way for more autonomous aircraft systems, enhancing safety and reducing pilot workload. Communications Communications connect the flight deck to the ground and the flight deck to the passengers. On‑board communications are provided by public-address systems and aircraft intercoms. The VHF aviation communication system works on the airband of 118.000 MHz to 136.975 MHz. Each channel is spaced from the adjacent ones by 8.33 kHz in Europe, 25 kHz elsewhere. VHF is also used for line of sight communication such as aircraft-to-aircraft and aircraft-to-ATC. Amplitude modulation (AM) is used, and the conversation is performed in simplex mode. Aircraft communication can also take place using HF (especially for trans-oceanic flights) or satellite communication. Navigation Air navigation is the determination of position and direction on or above the surface of the Earth. Avionics can use satellite navigation systems (such as GPS and WAAS), inertial navigation system (INS), ground-based radio navigation systems (such as VOR or LORAN), or any combination thereof. Some navigation systems such as GPS calculate the position automatically and display it to the flight crew on moving map displays. Older ground-based Navigation systems such as VOR or LORAN requires a pilot or navigator to plot the intersection of signals on a paper map to determine an aircraft's location; modern systems calculate the position automatically and display it to the flight crew on moving map displays. Monitoring The first hints of glass cockpits emerged in the 1970s when flight-worthy cathode-ray tube (CRT) screens began to replace electromechanical displays, gauges and instruments. A "glass" cockpit refers to the use of computer monitors instead of gauges and other analog displays. Aircraft were getting progressively more displays, dials and information dashboards that eventually competed for space and pilot attention. 
In the 1970s, the average aircraft had more than 100 cockpit instruments and controls. Glass cockpits started to come into being with the Gulfstream G-IV private jet in 1985. One of the key challenges in glass cockpits is to balance how much control is automated and how much the pilot should do manually. Generally they try to automate flight operations while keeping the pilot constantly informed. Aircraft flight-control system Aircraft have means of automatically controlling flight. The autopilot was first invented by Lawrence Sperry during World War I to fly bomber planes steady enough to hit precision targets from 25,000 feet. When it was first adopted by the U.S. military, a Honeywell engineer sat in the back seat with bolt cutters to disconnect the autopilot in case of emergency. Nowadays most commercial planes are equipped with aircraft flight control systems in order to reduce pilot error and workload at landing or takeoff. The first simple commercial autopilots were used to control heading and altitude and had limited authority on things like thrust and flight control surfaces. In helicopters, auto-stabilization was used in a similar way. The first systems were electromechanical. The advent of fly-by-wire and electro-actuated flight surfaces (rather than the traditional hydraulic ones) has increased safety. As with displays and instruments, critical devices that were electro-mechanical had a finite life. With safety-critical systems, the software is very strictly tested. Fuel Systems The fuel quantity indication system (FQIS) monitors the amount of fuel aboard. Using various sensors, such as capacitance tubes, temperature sensors, densitometers and level sensors, the FQIS computer calculates the mass of fuel remaining on board. The fuel control and monitoring system (FCMS) reports fuel remaining on board in a similar manner but, by controlling pumps and valves, also manages fuel transfers around the various tanks. These transfers include: refuelling control, to upload a chosen total mass of fuel and distribute it automatically; transfers during flight to the tanks that feed the engines, e.g. from fuselage to wing tanks; centre-of-gravity control, transferring fuel from the tail (trim) tanks forward to the wings as fuel is expended; maintaining fuel in the wing tips (to alleviate wing bending due to lift in flight) and transferring it to the main tanks after landing; and controlling fuel jettison during an emergency to reduce the aircraft weight. Collision-avoidance systems To supplement air traffic control, most large transport aircraft and many smaller ones use a traffic alert and collision avoidance system (TCAS), which can detect the location of nearby aircraft and provide instructions for avoiding a midair collision. Smaller aircraft may use simpler traffic alerting systems such as TPAS, which are passive (they do not actively interrogate the transponders of other aircraft) and do not provide advisories for conflict resolution. To help avoid controlled flight into terrain (CFIT), aircraft use systems such as ground-proximity warning systems (GPWS), which use radar altimeters as a key element. One of the major weaknesses of GPWS is the lack of "look-ahead" information, because it only provides altitude above terrain, "look-down" information. In order to overcome this weakness, modern aircraft use a terrain awareness warning system (TAWS). Flight recorders Commercial aircraft cockpit data recorders, commonly known as "black boxes", store flight information and audio from the cockpit. 
They are often recovered from an aircraft after a crash to determine control settings and other parameters during the incident. Weather systems Weather systems such as weather radar (typically Arinc 708 on commercial aircraft) and lightning detectors are important for aircraft flying at night or in instrument meteorological conditions, where it is not possible for pilots to see the weather ahead. Heavy precipitation (as sensed by radar) or severe turbulence (as sensed by lightning activity) are both indications of strong convective activity and severe turbulence, and weather systems allow pilots to deviate around these areas. Lightning detectors like the Stormscope or Strikefinder have become inexpensive enough that they are practical for light aircraft. In addition to radar and lightning detection, observations and extended radar pictures (such as NEXRAD) are now available through satellite data connections, allowing pilots to see weather conditions far beyond the range of their own in-flight systems. Modern displays allow weather information to be integrated with moving maps, terrain, and traffic onto a single screen, greatly simplifying navigation. Modern weather systems also include wind shear and turbulence detection and terrain and traffic warning systems. In‑plane weather avionics are especially popular in Africa, India, and other countries where air-travel is a growing market, but ground support is not as well developed. Aircraft management systems There has been a progression towards centralized control of the multiple complex systems fitted to aircraft, including engine monitoring and management. Health and usage monitoring systems (HUMS) are integrated with aircraft management computers to give maintainers early warnings of parts that will need replacement. The integrated modular avionics concept proposes an integrated architecture with application software portable across an assembly of common hardware modules. It has been used in fourth generation jet fighters and the latest generation of airliners. Mission or tactical avionics Military aircraft have been designed either to deliver a weapon or to be the eyes and ears of other weapon systems. The vast array of sensors available to the military is used for whatever tactical means required. As with aircraft management, the bigger sensor platforms (like the E‑3D, JSTARS, ASTOR, Nimrod MRA4, Merlin HM Mk 1) have mission-management computers. Police and EMS aircraft also carry sophisticated tactical sensors. Military communications While aircraft communications provide the backbone for safe flight, the tactical systems are designed to withstand the rigors of the battle field. UHF, VHF Tactical (30–88 MHz) and SatCom systems combined with ECCM methods, and cryptography secure the communications. Data links such as Link 11, 16, 22 and BOWMAN, JTRS and even TETRA provide the means of transmitting data (such as images, targeting information etc.). Radar Airborne radar was one of the first tactical sensors. The benefit of altitude providing range has meant a significant focus on airborne radar technologies. Radars include airborne early warning (AEW), anti-submarine warfare (ASW), and even weather radar (Arinc 708) and ground tracking/proximity radar. The military uses radar in fast jets to help pilots fly at low levels. While the civil market has had weather radar for a while, there are strict rules about using it to navigate the aircraft. 
Sonar Dipping sonar fitted to a range of military helicopters allows the helicopter to protect shipping assets from submarines or surface threats. Maritime support aircraft can drop active and passive sonar devices (sonobuoys) and these are also used to determine the location of enemy submarines. Electro-optics Electro-optic systems include devices such as the head-up display (HUD), forward looking infrared (FLIR), infrared search and track and other passive infrared devices (Passive infrared sensor). These are all used to provide imagery and information to the flight crew. This imagery is used for everything from search and rescue to navigational aids and target acquisition. ESM/DAS Electronic support measures and defensive aids systems are used extensively to gather information about threats or possible threats. They can be used to launch devices (in some cases automatically) to counter direct threats against the aircraft. They are also used to determine the state of a threat and identify it. Aircraft networks The avionics systems in military, commercial and advanced models of civilian aircraft are interconnected using an avionics databus. Common avionics databus protocols, with their primary application, include: Aircraft Data Network (ADN): Ethernet derivative for Commercial Aircraft Avionics Full-Duplex Switched Ethernet (AFDX): Specific implementation of ARINC 664 (ADN) for Commercial Aircraft ARINC 429: Generic Medium-Speed Data Sharing for Private and Commercial Aircraft ARINC 664: See ADN above ARINC 629: Commercial Aircraft (Boeing 777) ARINC 708: Weather Radar for Commercial Aircraft ARINC 717: Flight Data Recorder for Commercial Aircraft ARINC 825: CAN bus for commercial aircraft (for example Boeing 787 and Airbus A350) Commercial Standard Digital Bus IEEE 1394b: Military Aircraft MIL-STD-1553: Military Aircraft MIL-STD-1760: Military Aircraft TTP – Time-Triggered Protocol: Boeing 787, Airbus A380, Fly-By-Wire Actuation Platforms from Parker Aerospace
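As a concrete illustration of the kind of data word such buses carry, the following minimal C sketch packs a 32-bit word in the commonly described ARINC 429 layout (8-bit label, 2-bit source/destination identifier, 19-bit data field, 2-bit sign/status matrix, and an odd-parity bit). The function name and the example field values are illustrative assumptions only; the actual standard should be consulted for real equipment.

#include <stdint.h>
#include <stdio.h>

/* Pack an ARINC 429-style 32-bit word.  Bit numbering here: bit 1 is the
   least significant.  Assumed layout: label bits 1-8, SDI bits 9-10,
   data bits 11-29, SSM bits 30-31, odd parity in bit 32. */
static uint32_t a429_pack(uint8_t label, uint8_t sdi, uint32_t data19, uint8_t ssm) {
    uint32_t w = 0;
    w |= (uint32_t)label;                       /* bits  1-8  */
    w |= ((uint32_t)(sdi & 0x3u)) << 8;         /* bits  9-10 */
    w |= (data19 & 0x7FFFFu) << 10;             /* bits 11-29 */
    w |= ((uint32_t)(ssm & 0x3u)) << 29;        /* bits 30-31 */

    /* Odd parity: set bit 32 so the whole word has an odd number of one bits. */
    unsigned ones = 0;
    for (uint32_t t = w; t; t >>= 1)
        ones += t & 1u;
    if ((ones & 1u) == 0)
        w |= 1u << 31;
    return w;
}

int main(void) {
    uint32_t word = a429_pack(0203 /* octal label */, 0, 0x12345, 0x3);
    printf("0x%08lX\n", (unsigned long)word);
    return 0;
}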
Technology
Aircraft components
null
2052
https://en.wikipedia.org/wiki/Array%20%28data%20structure%29
Array (data structure)
In computer science, an array is a data structure consisting of a collection of elements (values or variables), of same memory size, each identified by at least one array index or key. An array is stored such that the position of each element can be computed from its index tuple by a mathematical formula. The simplest type of data structure is a linear array, also called a one-dimensional array. For example, an array of ten 32-bit (4-byte) integer variables, with indices 0 through 9, may be stored as ten words at memory addresses 2000, 2004, 2008, ..., 2036, (in hexadecimal: 0x7D0, 0x7D4, 0x7D8, ..., 0x7F4) so that the element with index i has the address 2000 + (i × 4). The memory address of the first element of an array is called first address, foundation address, or base address. Because the mathematical concept of a matrix can be represented as a two-dimensional grid, two-dimensional arrays are also sometimes called "matrices". In some cases the term "vector" is used in computing to refer to an array, although tuples rather than vectors are the more mathematically correct equivalent. Tables are often implemented in the form of arrays, especially lookup tables; the word "table" is sometimes used as a synonym of array. Arrays are among the oldest and most important data structures, and are used by almost every program. They are also used to implement many other data structures, such as lists and strings. They effectively exploit the addressing logic of computers. In most modern computers and many external storage devices, the memory is a one-dimensional array of words, whose indices are their addresses. Processors, especially vector processors, are often optimized for array operations. Arrays are useful mostly because the element indices can be computed at run time. Among other things, this feature allows a single iterative statement to process arbitrarily many elements of an array. For that reason, the elements of an array data structure are required to have the same size and should use the same data representation. The set of valid index tuples and the addresses of the elements (and hence the element addressing formula) are usually, but not always, fixed while the array is in use. The term "array" may also refer to an array data type, a kind of data type provided by most high-level programming languages that consists of a collection of values or variables that can be selected by one or more indices computed at run-time. Array types are often implemented by array structures; however, in some languages they may be implemented by hash tables, linked lists, search trees, or other data structures. The term is also used, especially in the description of algorithms, to mean associative array or "abstract array", a theoretical computer science model (an abstract data type or ADT) intended to capture the essential properties of arrays. History The first digital computers used machine-language programming to set up and access array structures for data tables, vector and matrix computations, and for many other purposes. John von Neumann wrote the first array-sorting program (merge sort) in 1945, during the building of the first stored-program computer. Array indexing was originally done by self-modifying code, and later using index registers and indirect addressing. Some mainframes designed in the 1960s, such as the Burroughs B5000 and its successors, used memory segmentation to perform index-bounds checking in hardware. 
Assembly languages generally have no special support for arrays, other than what the machine itself provides. The earliest high-level programming languages, including FORTRAN (1957), Lisp (1958), COBOL (1960), and ALGOL 60 (1960), had support for multi-dimensional arrays, and so has C (1972). In C++ (1983), class templates exist for multi-dimensional arrays whose dimension is fixed at runtime as well as for runtime-flexible arrays. Applications Arrays are used to implement mathematical vectors and matrices, as well as other kinds of rectangular tables. Many databases, small and large, consist of (or include) one-dimensional arrays whose elements are records. Arrays are used to implement other data structures, such as lists, heaps, hash tables, deques, queues, stacks, strings, and VLists. Array-based implementations of other data structures are frequently simple and space-efficient (implicit data structures), requiring little space overhead, but may have poor space complexity, particularly when modified, compared to tree-based data structures (compare a sorted array to a search tree). One or more large arrays are sometimes used to emulate in-program dynamic memory allocation, particularly memory pool allocation. Historically, this has sometimes been the only way to allocate "dynamic memory" portably. Arrays can be used to determine partial or complete control flow in programs, as a compact alternative to (otherwise repetitive) multiple IF statements. They are known in this context as control tables and are used in conjunction with a purpose-built interpreter whose control flow is altered according to values contained in the array. The array may contain subroutine pointers (or relative subroutine numbers that can be acted upon by SWITCH statements) that direct the path of the execution. Element identifier and addressing formulas When data objects are stored in an array, individual objects are selected by an index that is usually a non-negative scalar integer. Indexes are also called subscripts. An index maps the array value to a stored object. There are three ways in which the elements of an array can be indexed: 0 (zero-based indexing) The first element of the array is indexed by subscript of 0. 1 (one-based indexing) The first element of the array is indexed by subscript of 1. n (n-based indexing) The base index of an array can be freely chosen. Usually programming languages allowing n-based indexing also allow negative index values and other scalar data types like enumerations, or characters may be used as an array index. Using zero based indexing is the design choice of many influential programming languages, including C, Java and Lisp. This leads to simpler implementation where the subscript refers to an offset from the starting position of an array, so the first element has an offset of zero. Arrays can have multiple dimensions, thus it is not uncommon to access an array using multiple indices. For example, a two-dimensional array A with three rows and four columns might provide access to the element at the 2nd row and 4th column by the expression A[1][3] in the case of a zero-based indexing system. Thus two indices are used for a two-dimensional array, three for a three-dimensional array, and n for an n-dimensional array. The number of indices needed to specify an element is called the dimension, dimensionality, or rank of the array. 
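As an illustration of zero-based indexing with multiple indices, the following minimal C sketch (the array name and element values are chosen here purely for illustration) reads the element in the 2nd row and 4th column of a three-row, four-column array through the expression A[1][3]:

#include <stdio.h>

int main(void) {
    /* Two-dimensional array with three rows and four columns.
       With zero-based indexing, the element in the 2nd row and
       4th column is A[1][3]. */
    int A[3][4] = {
        { 1,  2,  3,  4},
        { 5,  6,  7,  8},
        { 9, 10, 11, 12}
    };
    printf("%d\n", A[1][3]);   /* prints 8, the 2nd-row, 4th-column element */
    return 0;
}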
In standard arrays, each index is restricted to a certain range of consecutive integers (or consecutive values of some enumerated type), and the address of an element is computed by a "linear" formula on the indices. One-dimensional arrays A one-dimensional array (or single dimension array) is a type of linear array. Accessing its elements involves a single subscript which can either represent a row or column index. As an example consider the C declaration int anArrayName[10]; which declares a one-dimensional array of ten integers. Here, the array can store ten elements of type int . This array has indices starting from zero through nine. For example, the expressions anArrayName[0] and anArrayName[9] are the first and last elements respectively. For a vector with linear addressing, the element with index i is located at the address , where B is a fixed base address and c a fixed constant, sometimes called the address increment or stride. If the valid element indices begin at 0, the constant B is simply the address of the first element of the array. For this reason, the C programming language specifies that array indices always begin at 0; and many programmers will call that element "zeroth" rather than "first". However, one can choose the index of the first element by an appropriate choice of the base address B. For example, if the array has five elements, indexed 1 through 5, and the base address B is replaced by , then the indices of those same elements will be 31 to 35. If the numbering does not start at 0, the constant B may not be the address of any element. Multidimensional arrays For a multidimensional array, the element with indices i,j would have address B + c · i + d · j, where the coefficients c and d are the row and column address increments, respectively. More generally, in a k-dimensional array, the address of an element with indices i1, i2, ..., ik is B + c1 · i1 + c2 · i2 + … + ck · ik. For example: int a[2][3]; This means that array a has 2 rows and 3 columns, and the array is of integer type. Here we can store 6 elements they will be stored linearly but starting from first row linear then continuing with second row. The above array will be stored as a11, a12, a13, a21, a22, a23. This formula requires only k multiplications and k additions, for any array that can fit in memory. Moreover, if any coefficient is a fixed power of 2, the multiplication can be replaced by bit shifting. The coefficients ck must be chosen so that every valid index tuple maps to the address of a distinct element. If the minimum legal value for every index is 0, then B is the address of the element whose indices are all zero. As in the one-dimensional case, the element indices may be changed by changing the base address B. Thus, if a two-dimensional array has rows and columns indexed from 1 to 10 and 1 to 20, respectively, then replacing B by will cause them to be renumbered from 0 through 9 and 4 through 23, respectively. Taking advantage of this feature, some languages (like FORTRAN 77) specify that array indices begin at 1, as in mathematical tradition while other languages (like Fortran 90, Pascal and Algol) let the user choose the minimum value for each index. Dope vectors The addressing formula is completely defined by the dimension d, the base address B, and the increments c1, c2, ..., ck. It is often useful to pack these parameters into a record called the array's descriptor, stride vector, or dope vector. 
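A minimal C sketch of such a record follows; the field names and the fixed maximum rank are illustrative assumptions rather than any standard layout. It stores the base address B, the rank k, and one byte increment per dimension, and evaluates the addressing formula B + c1·i1 + … + ck·ik:

#include <stddef.h>
#include <stdio.h>

#define MAX_RANK 4                     /* illustrative fixed upper bound on the rank */

struct dope_vector {
    char   *base;                      /* B: address of the element with all indices zero */
    int     rank;                      /* k: number of dimensions */
    size_t  stride[MAX_RANK];          /* c1..ck: address increments, in bytes */
};

/* Evaluate B + c1*i1 + ... + ck*ik for the given index tuple. */
static void *element_addr(const struct dope_vector *dv, const size_t *idx) {
    char *p = dv->base;
    for (int d = 0; d < dv->rank; d++)
        p += dv->stride[d] * idx[d];
    return p;
}

int main(void) {
    int a[2][3] = {{1, 2, 3}, {4, 5, 6}};            /* stored row-major, as in C */
    struct dope_vector dv = {
        .base   = (char *)a,
        .rank   = 2,
        .stride = {3 * sizeof(int), sizeof(int)}     /* row and column increments */
    };
    size_t idx[2] = {1, 2};
    printf("%d\n", *(int *)element_addr(&dv, idx));  /* prints 6, i.e. a[1][2] */
    return 0;
}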
The size of each element, and the minimum and maximum values allowed for each index, may also be included in the dope vector. The dope vector is a complete handle for the array, and is a convenient way to pass arrays as arguments to procedures. Many useful array slicing operations (such as selecting a sub-array, swapping indices, or reversing the direction of the indices) can be performed very efficiently by manipulating the dope vector. Compact layouts Often the coefficients are chosen so that the elements occupy a contiguous area of memory. However, that is not necessary. Even if arrays are always created with contiguous elements, some array slicing operations may create non-contiguous sub-arrays from them. There are two systematic compact layouts for a two-dimensional array. For example, consider the 3 × 3 matrix whose rows are (1, 2, 3), (4, 5, 6) and (7, 8, 9). In the row-major order layout (adopted by C for statically declared arrays), the elements in each row are stored in consecutive positions and all of the elements of a row have a lower address than any of the elements of a consecutive row, giving the memory order 1, 2, 3, 4, 5, 6, 7, 8, 9. In column-major order (traditionally used by Fortran), the elements in each column are consecutive in memory and all of the elements of a column have a lower address than any of the elements of a consecutive column, giving the memory order 1, 4, 7, 2, 5, 8, 3, 6, 9. For arrays with three or more indices, "row major order" puts in consecutive positions any two elements whose index tuples differ only by one in the last index. "Column major order" is analogous with respect to the first index. In systems which use processor cache or virtual memory, scanning an array is much faster if successive elements are stored in consecutive positions in memory, rather than sparsely scattered. This is known as spatial locality, which is a type of locality of reference. Many algorithms that use multidimensional arrays will scan them in a predictable order. A programmer (or a sophisticated compiler) may use this information to choose between row- or column-major layout for each array. For example, when computing the product A·B of two matrices, it would be best to have A stored in row-major order, and B in column-major order. Resizing Static arrays have a size that is fixed when they are created and consequently do not allow elements to be inserted or removed. However, by allocating a new array and copying the contents of the old array to it, it is possible to effectively implement a dynamic version of an array; see dynamic array. If this operation is done infrequently, insertions at the end of the array require only amortized constant time. Some array data structures do not reallocate storage, but do store a count of the number of elements of the array in use, called the count or size. This effectively makes the array a dynamic array with a fixed maximum size or capacity; Pascal strings are examples of this. Non-linear formulas More complicated (non-linear) formulas are occasionally used. For a compact two-dimensional triangular array, for instance, the addressing formula is a polynomial of degree 2. Efficiency Both store and select take (deterministic worst case) constant time. Arrays take linear (O(n)) space in the number of elements n that they hold. 
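The difference between the two compact layouts just described comes down to which index varies fastest in memory, which in turn determines whether a given traversal order enjoys the spatial locality discussed next. A small C sketch (the function names are illustrative) computes the flat offset of element (i, j) under each convention and reads the value 6 out of both orderings of the example matrix:

#include <stdio.h>

/* Flat offset of element (i, j) in a two-dimensional array. */
static int offset_row_major(int i, int j, int n_cols) {
    return i * n_cols + j;             /* consecutive elements of a row are adjacent */
}

static int offset_col_major(int i, int j, int n_rows) {
    return j * n_rows + i;             /* consecutive elements of a column are adjacent */
}

int main(void) {
    /* The example 3 x 3 matrix stored both ways. */
    int row_major[9] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
    int col_major[9] = {1, 4, 7, 2, 5, 8, 3, 6, 9};
    int i = 1, j = 2;                  /* zero-based indices of the element 6 */
    printf("%d %d\n",
           row_major[offset_row_major(i, j, 3)],     /* prints 6 */
           col_major[offset_col_major(i, j, 3)]);    /* prints 6 */
    return 0;
}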
In an array with element size k and on a machine with a cache line size of B bytes, iterating through an array of n elements requires the minimum of ceiling(nk/B) cache misses, because its elements occupy contiguous memory locations. This is roughly a factor of B/k better than the number of cache misses needed to access n elements at random memory locations. As a consequence, sequential iteration over an array is noticeably faster in practice than iteration over many other data structures, a property called locality of reference (this does not mean however, that using a perfect hash or trivial hash within the same (local) array, will not be even faster - and achievable in constant time). Libraries provide low-level optimized facilities for copying ranges of memory (such as memcpy) which can be used to move contiguous blocks of array elements significantly faster than can be achieved through individual element access. The speedup of such optimized routines varies by array element size, architecture, and implementation. Memory-wise, arrays are compact data structures with no per-element overhead. There may be a per-array overhead (e.g., to store index bounds) but this is language-dependent. It can also happen that elements stored in an array require less memory than the same elements stored in individual variables, because several array elements can be stored in a single word; such arrays are often called packed arrays. An extreme (but commonly used) case is the bit array, where every bit represents a single element. A single octet can thus hold up to 256 different combinations of up to 8 different conditions, in the most compact form. Array accesses with statically predictable access patterns are a major source of data parallelism. Comparison with other data structures Dynamic arrays or growable arrays are similar to arrays but add the ability to insert and delete elements; adding and deleting at the end is particularly efficient. However, they reserve linear (Θ(n)) additional storage, whereas arrays do not reserve additional storage. Associative arrays provide a mechanism for array-like functionality without huge storage overheads when the index values are sparse. For example, an array that contains values only at indexes 1 and 2 billion may benefit from using such a structure. Specialized associative arrays with integer keys include Patricia tries, Judy arrays, and van Emde Boas trees. Balanced trees require O(log n) time for indexed access, but also permit inserting or deleting elements in O(log n) time, whereas growable arrays require linear (Θ(n)) time to insert or delete elements at an arbitrary position. Linked lists allow constant time removal and insertion in the middle but take linear time for indexed access. Their memory use is typically worse than arrays, but is still linear. An Iliffe vector is an alternative to a multidimensional array structure. It uses a one-dimensional array of references to arrays of one dimension less. For two dimensions, in particular, this alternative structure would be a vector of pointers to vectors, one for each row(pointer on c or c++). Thus an element in row i and column j of an array A would be accessed by double indexing (A[i][j] in typical notation). This alternative structure allows jagged arrays, where each row may have a different size—or, in general, where the valid range of each index depends on the values of all preceding indices. 
It also saves one multiplication (by the column address increment) replacing it by a bit shift (to index the vector of row pointers) and one extra memory access (fetching the row address), which may be worthwhile in some architectures. Dimension The dimension of an array is the number of indices needed to select an element. Thus, if the array is seen as a function on a set of possible index combinations, it is the dimension of the space of which its domain is a discrete subset. Thus a one-dimensional array is a list of data, a two-dimensional array is a rectangle of data, a three-dimensional array a block of data, etc. This should not be confused with the dimension of the set of all matrices with a given domain, that is, the number of elements in the array. For example, an array with 5 rows and 4 columns is two-dimensional, but such matrices form a 20-dimensional space. Similarly, a three-dimensional vector can be represented by a one-dimensional array of size three.
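A short C sketch of an Iliffe-vector-style structure follows. It is a plain illustration using standard library allocation; the row lengths are chosen arbitrarily to show that the rows of a jagged array may differ in size while elements are still selected by double indexing A[i][j]:

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int n_rows = 3;
    int row_len[] = {4, 2, 3};            /* each row has its own length (jagged array) */

    /* One-dimensional array of row pointers, one per row. */
    int **A = malloc(n_rows * sizeof *A);
    for (int i = 0; i < n_rows; i++) {
        A[i] = malloc(row_len[i] * sizeof **A);
        for (int j = 0; j < row_len[i]; j++)
            A[i][j] = 10 * i + j;
    }

    printf("%d\n", A[2][1]);               /* double indexing; prints 21 */

    for (int i = 0; i < n_rows; i++)
        free(A[i]);
    free(A);
    return 0;
}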
Mathematics
Data structures and types
null
2082
https://en.wikipedia.org/wiki/Aeronautics
Aeronautics
Aeronautics is the science or art involved with the study, design, and manufacturing of air flight-capable machines, and the techniques of operating aircraft and rockets within the atmosphere. While the term originally referred solely to operating the aircraft, it has since been expanded to include technology, business, and other aspects related to aircraft. The term "aviation" is sometimes used interchangeably with aeronautics, although "aeronautics" includes lighter-than-air craft such as airships, and includes ballistic vehicles while "aviation" technically does not. A significant part of aeronautical science is a branch of dynamics called aerodynamics, which deals with the motion of air and the way that it interacts with objects in motion, such as an aircraft. History Early ideas Attempts to fly without any real aeronautical understanding have been made from the earliest times, typically by constructing wings and jumping from a tower with crippling or lethal results. Wiser investigators sought to gain some rational understanding through the study of bird flight. Medieval Islamic Golden Age scientists such as Abbas ibn Firnas also made such studies. The founders of modern aeronautics, Leonardo da Vinci in the Renaissance and Cayley in 1799, both began their investigations with studies of bird flight. Man-carrying kites are believed to have been used extensively in ancient China. In 1282 the Italian explorer Marco Polo described the Chinese techniques then current. The Chinese also constructed small hot air balloons, or lanterns, and rotary-wing toys. An early European to provide any scientific discussion of flight was Roger Bacon, who described principles of operation for the lighter-than-air balloon and the flapping-wing ornithopter, which he envisaged would be constructed in the future. The lifting medium for his balloon would be an "aether" whose composition he did not know. In the late fifteenth century, Leonardo da Vinci followed up his study of birds with designs for some of the earliest flying machines, including the flapping-wing ornithopter and the rotating-wing helicopter. Although his designs were rational, they were not based on particularly good science. Many of his designs, such as a four-person screw-type helicopter, have severe flaws. He did at least understand that "An object offers as much resistance to the air as the air does to the object." (Newton would not publish the Third law of motion until 1687.) His analysis led to the realisation that manpower alone was not sufficient for sustained flight, and his later designs included a mechanical power source such as a spring. Da Vinci's work was lost after his death and did not reappear until it had been overtaken by the work of George Cayley. Balloon flight The modern era of lighter-than-air flight began early in the 17th century with Galileo's experiments in which he showed that air has weight. Around 1650 Cyrano de Bergerac wrote some fantasy novels in which he described the principle of ascent using a substance (dew) he supposed to be lighter than air, and descending by releasing a controlled amount of the substance. Francesco Lana de Terzi measured the pressure of air at sea level and in 1670 proposed the first scientifically credible lifting medium in the form of hollow metal spheres from which all the air had been pumped out. These would be lighter than the displaced air and able to lift an airship. 
His proposed methods of controlling height are still in use today; by carrying ballast which may be dropped overboard to gain height, and by venting the lifting containers to lose height. In practice de Terzi's spheres would have collapsed under air pressure, and further developments had to wait for more practicable lifting gases. From the mid-18th century the Montgolfier brothers in France began experimenting with balloons. Their balloons were made of paper, and early experiments using steam as the lifting gas were short-lived due to its effect on the paper as it condensed. Mistaking smoke for a kind of steam, they began filling their balloons with hot smoky air which they called "electric smoke" and, despite not fully understanding the principles at work, made some successful launches and in 1783 were invited to give a demonstration to the French Académie des Sciences. Meanwhile, the discovery of hydrogen led Joseph Black in to propose its use as a lifting gas, though practical demonstration awaited a gas-tight balloon material. On hearing of the Montgolfier Brothers' invitation, the French Academy member Jacques Charles offered a similar demonstration of a hydrogen balloon. Charles and two craftsmen, the Robert brothers, developed a gas-tight material of rubberised silk for the envelope. The hydrogen gas was to be generated by chemical reaction during the filling process. The Montgolfier designs had several shortcomings, not least the need for dry weather and a tendency for sparks from the fire to set light to the paper balloon. The manned design had a gallery around the base of the balloon rather than the hanging basket of the first, unmanned design, which brought the paper closer to the fire. On their free flight, De Rozier and d'Arlandes took buckets of water and sponges to douse these fires as they arose. On the other hand, the manned design of Charles was essentially modern. As a result of these exploits, the hot air balloon became known as the Montgolfière type and the gas balloon the Charlière. Charles and the Robert brothers' next balloon, La Caroline, was a Charlière that followed Jean Baptiste Meusnier's proposals for an elongated dirigible balloon, and was notable for having an outer envelope with the gas contained in a second, inner ballonet. On 19 September 1784, it completed the first flight of over 100 km, between Paris and Beuvry, despite the man-powered propulsive devices proving useless. In an attempt the next year to provide both endurance and controllability, de Rozier developed a balloon having both hot air and hydrogen gas bags, a design which was soon named after him as the Rozière. The principle was to use the hydrogen section for constant lift and to navigate vertically by heating and allowing to cool the hot air section, in order to catch the most favourable wind at whatever altitude it was blowing. The balloon envelope was made of goldbeater's skin. The first flight ended in disaster and the approach has seldom been used since. Cayley and the foundation of modern aeronautics Sir George Cayley (1773–1857) is widely acknowledged as the founder of modern aeronautics. He was first called the "father of the aeroplane" in 1846 and Henson called him the "father of aerial navigation." He was the first true scientific aerial investigator to publish his work, which included for the first time the underlying principles and forces of flight. In 1809 he began the publication of a landmark three-part treatise titled "On Aerial Navigation" (1809–1810). 
In it he wrote the first scientific statement of the problem, "The whole problem is confined within these limits, viz. to make a surface support a given weight by the application of power to the resistance of air." He identified the four vector forces that influence an aircraft: thrust, lift, drag and weight and distinguished stability and control in his designs. He developed the modern conventional form of the fixed-wing aeroplane having a stabilising tail with both horizontal and vertical surfaces, flying gliders both unmanned and manned. He introduced the use of the whirling arm test rig to investigate the aerodynamics of flight, using it to discover the benefits of the curved or cambered aerofoil over the flat wing he had used for his first glider. He also identified and described the importance of dihedral, diagonal bracing and drag reduction, and contributed to the understanding and design of ornithopters and parachutes. Another significant invention was the tension-spoked wheel, which he devised in order to create a light, strong wheel for aircraft undercarriage. The 19th century: Otto Lilienthal and the first human flights During the 19th century Cayley's ideas were refined, proved and expanded on, culminating in the works of Otto Lilienthal. Lilienthal was a German engineer and businessman who became known as the "flying man". He was the first person to make well-documented, repeated, successful flights with gliders, therefore making the idea of "heavier than air" a reality. Newspapers and magazines published photographs of Lilienthal gliding, favourably influencing public and scientific opinion about the possibility of flying machines becoming practical. His work lead to him developing the concept of the modern wing. His flight attempts in Berlin in the year 1891 are seen as the beginning of human flight and the "Lilienthal Normalsegelapparat" is considered to be the first air plane in series production, making the Maschinenfabrik Otto Lilienthal in Berlin the first air plane production company in the world. Otto Lilienthal is often referred to as either the "father of aviation" or "father of flight". Other important investigators included Horatio Phillips. Branches Aeronautics may be divided into three main branches, Aviation, Aeronautical science and Aeronautical engineering. Aviation Aviation is the art or practice of aeronautics. Historically aviation meant only heavier-than-air flight, but nowadays it includes flying in balloons and airships. Aeronautical engineering Aeronautical engineering covers the design and construction of aircraft, including how they are powered, how they are used and how they are controlled for safe operation. A major part of aeronautical engineering is aerodynamics, the science of passing through the air. With the increasing activity in space flight, nowadays aeronautics and astronautics are often combined as aerospace engineering. Aerodynamics The science of aerodynamics deals with the motion of air and the way that it interacts with objects in motion, such as an aircraft. The study of aerodynamics falls broadly into three areas: Incompressible flow occurs where the air simply moves to avoid objects, typically at subsonic speeds below that of sound (Mach 1). Compressible flow occurs where shock waves appear at points where the air becomes compressed, typically at speeds above Mach 1. 
Transonic flow occurs in the intermediate speed range around Mach 1, where the airflow over an object may be locally subsonic at one point and locally supersonic at another. Rocketry A rocket or rocket vehicle is a missile, spacecraft, aircraft or other vehicle which obtains thrust from a rocket engine. In all rockets, the exhaust is formed entirely from propellants carried within the rocket before use. Rocket engines work by action and reaction. Rocket engines push rockets forwards simply by throwing their exhaust backwards extremely fast. Rockets for military and recreational uses date back to at least 13th-century China. Significant scientific, interplanetary and industrial use did not occur until the 20th century, when rocketry was the enabling technology of the Space Age, including setting foot on the Moon. Rockets are used for fireworks, weaponry, ejection seats, launch vehicles for artificial satellites, human spaceflight and exploration of other planets. While comparatively inefficient for low speed use, they are very lightweight and powerful, capable of generating large accelerations and of attaining extremely high speeds with reasonable efficiency. Chemical rockets are the most common type of rocket and they typically create their exhaust by the combustion of rocket propellant. Chemical rockets store a large amount of energy in an easily released form, and can be very dangerous. However, careful design, testing, construction and use minimizes risks.
Technology
Concepts of aviation
null
2088
https://en.wikipedia.org/wiki/Aphasia
Aphasia
Aphasia, also known as dysphasia, is an impairment in a person’s ability to comprehend or formulate language because of damage to specific brain regions. The major causes are stroke and head trauma; prevalence is hard to determine, but aphasia due to stroke is estimated to be 0.1–0.4% in developed countries. Aphasia can also be the result of brain tumors, epilepsy, autoimmune neurological diseases, brain infections, or neurodegenerative diseases (such as dementias). To be diagnosed with aphasia, a person's language must be significantly impaired in one or more of the four aspects of communication. In the case of progressive aphasia, a noticeable decline in language abilities over a short period of time is required. The four aspects of communication include spoken language production and comprehension, written language production and comprehension. Impairments in any of these aspects can impact functional communication. The difficulties of people with aphasia can range from occasional trouble finding words, to losing the ability to speak, read, or write; intelligence, however, is unaffected. Expressive language and receptive language can both be affected as well. Aphasia also affects visual language such as sign language. In contrast, the use of formulaic expressions in everyday communication is often preserved. For example, while a person with aphasia, particularly expressive aphasia (Broca's aphasia), may not be able to ask a loved one when their birthday is, they may still be able to sing "Happy Birthday". One prevalent deficit in all aphasias is anomia, which is a difficulty in finding the correct word. With aphasia, one or more modes of communication in the brain have been damaged and are therefore functioning incorrectly. Aphasia is not caused by damage to the brain resulting in motor or sensory deficits, thus producing abnormal speech — that is, aphasia is not related to the mechanics of speech, but rather the individual's language cognition. However, it is possible for a person to have both problems, e.g. in the case of a hemorrhage damaging a large area of the brain. An individual's language abilities incorporate the socially shared set of rules, as well as the thought processes that go behind communication (as it affects both verbal and nonverbal language). Aphasia is not a result of other peripheral motor or sensory difficulty, such as paralysis affecting the speech muscles, or a general hearing impairment. Neurodevelopmental forms of auditory processing disorder (APD) are differentiable from aphasia in that aphasia is by definition caused by acquired brain injury, but acquired epileptic aphasia has been viewed as a form of APD. Signs and symptoms People with aphasia may experience any of the following behaviors due to an acquired brain injury, although some of these symptoms may be due to related or concomitant problems, such as dysarthria or apraxia, and not primarily due to aphasia. Aphasia symptoms can vary based on the location of damage in the brain. Signs and symptoms may or may not be present in individuals with aphasia and may vary in severity and level of disruption to communication. Often those with aphasia may have a difficulty with naming objects, so they might use words such as thing or point at the objects. When asked to name a pencil they may say it is a "thing used to write". 
Signs may include: inability to comprehend language; inability to pronounce words, not due to muscle paralysis or weakness; inability to form words; inability to recall words (anomia); poor enunciation; excessive creation and use of neologisms; inability to repeat a phrase; persistent repetition of one syllable, word, or phrase (stereotypies, recurrent or recurring utterances, speech automatism), also known as perseveration; paraphasia (substituting letters, syllables or words); agrammatism (inability to speak in a grammatically correct fashion), such as speaking in incomplete sentences; inability to read; inability to write; limited verbal output; difficulty in naming; speech disorder; speaking gibberish; and inability to follow or understand simple requests. Related behaviors Given the previously stated signs and symptoms, the following behaviors are often seen in people with aphasia as a result of attempted compensation for incurred speech and language deficits: Self-repairs: further disruptions in fluent speech as a result of failed attempts to repair erroneous speech production. Struggle in non-fluent aphasias: a severe increase in the effort required to speak, after a lifetime in which talking and communicating came easily, which can cause visible frustration. Preserved and automatic language: a behavior in which some language or language sequences that were used frequently prior to onset are still produced with more ease than other language post onset. Subcortical Subcortical aphasia's characteristics and symptoms depend upon the site and size of the subcortical lesion. Possible sites of lesions include the thalamus, internal capsule, and basal ganglia. Cognitive deficits While aphasia has traditionally been described in terms of language deficits, there is increasing evidence that many people with aphasia commonly experience co-occurring non-linguistic cognitive deficits in areas such as attention, memory, executive functions and learning. By some accounts, cognitive deficits, such as those in attention and working memory, constitute the underlying cause of language impairment in people with aphasia. Others suggest that cognitive deficits often co-occur, but are comparable to cognitive deficits in stroke patients without aphasia and reflect general brain dysfunction following injury. Whilst it has been shown that cognitive neural networks support language reorganisation after stroke, the degree to which deficits in attention and other cognitive domains underlie language deficits in aphasia is still unclear. In particular, people with aphasia often demonstrate short-term and working memory deficits. These deficits can occur in both the verbal domain and the visuospatial domain. Furthermore, these deficits are often associated with performance on language-specific tasks such as naming, lexical processing, sentence comprehension, and discourse production. Other studies have found that most, but not all, people with aphasia demonstrate performance deficits on tasks of attention, and that their performance on these tasks correlates with language performance and cognitive ability in other domains. Even patients with mild aphasia, who score near the ceiling on tests of language, often demonstrate slower response times and interference effects in non-verbal attention abilities. In addition to deficits in short-term memory, working memory, and attention, people with aphasia can also demonstrate deficits in executive function. For instance, people with aphasia may demonstrate deficits in initiation, planning, self-monitoring, and cognitive flexibility. 
Other studies have found that people with aphasia demonstrate reduced speed and efficiency during completion of executive function assessments. Regardless of their role in the underlying nature of aphasia, cognitive deficits have a clear role in the study and rehabilitation of aphasia. For instance, the severity of cognitive deficits in people with aphasia has been associated with lower quality of life, even more so than the severity of language deficits. Furthermore, cognitive deficits may influence the learning process of rehabilitation and language treatment outcomes in aphasia. Non-linguistic cognitive deficits have also been the target of interventions directed at improving language ability, though outcomes are not definitive. While some studies have demonstrated language improvement secondary to cognitively focused treatment, others have found little evidence that the treatment of cognitive deficits in people with aphasia influences language outcomes. One important caveat in the measurement and treatment of cognitive deficits in people with aphasia is the degree to which assessments of cognition rely on language abilities for successful performance. Most studies have attempted to circumvent this challenge by utilizing non-verbal cognitive assessments to evaluate cognitive ability in people with aphasia. However, the degree to which these tasks are truly "non-verbal" and not mediated by language is unclear. For instance, Wall et al. found that language and non-linguistic performance were related, except when non-linguistic performance was measured by "real life" cognitive tasks.
Causes
Aphasia is most often caused by stroke; about a quarter of patients who experience an acute stroke develop aphasia. However, any disease or damage to the parts of the brain that control language can cause aphasia. Causes include brain tumors, traumatic brain injury, epilepsy, and progressive neurological disorders. In rare cases, aphasia may also result from herpesviral encephalitis; the herpes simplex virus affects the frontal and temporal lobes, subcortical structures, and the hippocampal tissue, which can trigger aphasia. In acute disorders, such as head injury or stroke, aphasia usually develops quickly. When caused by a brain tumor, infection, or dementia, it develops more slowly. Substantial damage to tissue anywhere within the brain's language-processing regions can potentially result in aphasia. Aphasia can also sometimes be caused by damage to subcortical structures deep within the left hemisphere, including the thalamus, the internal and external capsules, and the caudate nucleus of the basal ganglia. The area and extent of brain damage or atrophy will determine the type of aphasia and its symptoms. A very small number of people can experience aphasia after damage to the right hemisphere only. It has been suggested that these individuals may have had an unusual brain organization prior to their illness or injury, with perhaps greater overall reliance on the right hemisphere for language skills than in the general population. Primary progressive aphasia (PPA), while its name can be misleading, is actually a form of dementia that has some symptoms closely related to several forms of aphasia. It is characterized by a gradual loss of language functioning while other cognitive domains, such as memory and personality, are mostly preserved.
PPA usually begins with word-finding difficulties and progresses to a reduced ability to formulate grammatically correct sentences (syntax) and impaired comprehension. The etiology of PPA is not a stroke, traumatic brain injury (TBI), or infectious disease; it is still uncertain what initiates the onset of PPA in those affected by it. Epilepsy can also include transient aphasia as a prodromal or episodic symptom, and repeated seizure activity within language regions may also lead to chronic and progressive aphasia. Aphasia is also listed as a rare side effect of the fentanyl patch, an opioid used to control chronic pain.
Diagnosis
Neuroimaging methods
Magnetic resonance imaging (MRI) and functional magnetic resonance imaging (fMRI) are the most common neuroimaging tools used in identifying aphasia and studying the extent of damage in the loss of language abilities. MRI scans are used to locate the extent of lesions or damage within brain tissue, particularly within areas of the left frontal and temporal regions, where many language-related areas lie. In fMRI studies, a language-related task is often completed and the BOLD image is then analyzed. Lower-than-normal BOLD responses indicate a reduction of blood flow to the affected area and can show quantitatively that the cognitive task is not being completed. There are limitations to the use of fMRI in aphasic patients in particular. Because a high percentage of people with aphasia develop it as a result of stroke, an infarct (a region with total loss of blood flow) may be present, caused by the narrowing or complete blockage of a blood vessel. This matters in fMRI because the technique relies on the BOLD response (the oxygen levels in the blood vessels), so an infarct can create a false hyporesponse in an fMRI study. Owing to limitations of fMRI such as its relatively low spatial resolution, it can also indicate that some areas of the brain are not active during a task when in reality they are. Additionally, because stroke causes many cases of aphasia, the extent of damage to brain tissue can be difficult to quantify, and the effects of stroke-related brain damage on a patient's functioning can therefore vary.
Neural substrates of aphasia subtypes
MRI is often used to predict or confirm the subtype of aphasia present. Researchers compared three subtypes of aphasia — nonfluent-variant primary progressive aphasia (nfPPA), logopenic-variant primary progressive aphasia (lvPPA), and semantic-variant primary progressive aphasia (svPPA) — with primary progressive aphasia (PPA) and Alzheimer's disease. This was done by analyzing the MRIs of patients with each of the subsets of PPA. Images which compare subtypes of aphasia, as well as images for finding the extent of lesions, are generated by overlapping images of different participants' brains (if applicable) and isolating areas of lesions or damage using third-party software such as MRIcron. MRI has also been used to study the relationship between the type of aphasia developed and the age of the person with aphasia. It was found that patients with fluent aphasia are on average older than people with non-fluent aphasia. It was also found that, among patients with lesions confined to the anterior portion of the brain, an unexpected proportion presented with fluent aphasia and were markedly older than those with non-fluent aphasia. This effect was not found when the posterior portion of the brain was studied.
Associated conditions
In a study on the features associated with different disease trajectories in Alzheimer's disease (AD)-related primary progressive aphasia (PPA), it was found that metabolic patterns identified via PET SPM analysis can help predict progression of total loss of speech and functional autonomy in AD and PPA patients. This was done by comparing an MRI or CT image of the brain, together with the presence of a radioactive biomarker, with normal levels in patients without Alzheimer's disease. Apraxia is another disorder often correlated with aphasia, owing to a subset of apraxia which affects speech. Specifically, this subset affects the movement of the muscles associated with speech production; apraxia and aphasia are often correlated because of the proximity of the neural substrates associated with each of the disorders. Researchers concluded that there were two areas of lesion overlap between patients with apraxia and aphasia: the anterior temporal lobe and the left inferior parietal lobe.
Treatment and neuroimaging
Evidence for positive treatment outcomes can also be quantified using neuroimaging tools. The use of fMRI and an automatic classifier can help predict language recovery outcomes in stroke patients with 86% accuracy when coupled with age and language test scores. The stimuli tested were both correct and incorrect sentences, and the subject had to press a button whenever a sentence was incorrect. The fMRI data collected focused on responses in regions of interest identified in healthy subjects. Recovery from aphasia can also be quantified using diffusion tensor imaging (DTI). The arcuate fasciculus (AF) connects the right and left superior temporal lobe, premotor regions/posterior inferior frontal gyrus, and the primary motor cortex. In a study which enrolled patients in a speech therapy program, an increase in AF fibers and volume was found after six weeks in the program, which correlated with long-term improvement in those patients. This implies that DTI can be used to quantify the improvement in patients after speech and language treatment programs are applied.
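The automatic-classifier approach to outcome prediction described above can be illustrated with a brief sketch. The feature layout, the synthetic data, and the choice of a logistic-regression model below are illustrative assumptions for demonstration only (using NumPy and scikit-learn), not the published pipeline.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_patients = 80

# Hypothetical features: mean BOLD response in three language regions of interest,
# patient age, and a baseline language test score (all values are synthetic).
bold_roi = rng.normal(0.0, 1.0, size=(n_patients, 3))
age = rng.uniform(35, 85, size=(n_patients, 1))
language_score = rng.uniform(0, 100, size=(n_patients, 1))
X = np.hstack([bold_roi, age, language_score])

# Synthetic outcome label: 1 = good language recovery, 0 = poor recovery.
y = (bold_roi[:, 0] + 0.02 * language_score[:, 0] - 0.01 * age[:, 0]
     + rng.normal(0, 0.5, n_patients) > 0).astype(int)

# Standardize the features, fit a simple classifier, and estimate accuracy
# by five-fold cross-validation.
model = make_pipeline(StandardScaler(), LogisticRegression())
accuracy = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
print(f"Cross-validated accuracy on synthetic data: {accuracy:.2f}")

On real data, the reported figure of roughly 86% accuracy was obtained only when imaging features were combined with age and language test scores, which is why both kinds of predictors appear in the feature matrix above.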
Classification
Aphasia is best thought of as a collection of different disorders, rather than a single problem. Each individual with aphasia will present with their own particular combination of language strengths and weaknesses. Consequently, it is a major challenge just to document the various difficulties that can occur in different people, let alone decide how they might best be treated. Most classifications of the aphasias tend to divide the various symptoms into broad classes. A common approach is to distinguish between the fluent aphasias (where speech remains fluent, but content may be lacking, and the person may have difficulties understanding others) and the nonfluent aphasias (where speech is very halting and effortful, and may consist of just one or two words at a time). However, no such broad-based grouping has proven fully adequate or reliable. There is wide variation among people even within the same broad grouping, and aphasias can be highly selective. For instance, people with naming deficits (anomic aphasia) might show an inability only for naming buildings, or people, or colors. Unfortunately, assessments that characterize aphasia in these groupings have persisted; this is not helpful to people living with aphasia, and provides inaccurate descriptions of an individual's pattern of difficulties.
There are typical difficulties with speech and language that come with normal aging as well. As we age, language can become more difficult to process, resulting in a slowing of verbal comprehension and reading abilities and a greater likelihood of word-finding difficulties. With each of these, though, unlike some aphasias, functionality within daily life remains intact.
Boston classification
Individuals with receptive aphasia (Wernicke's aphasia), also referred to as fluent aphasia, may speak in long sentences that have no meaning, add unnecessary words, and even create new "words" (neologisms). For example, someone with receptive aphasia may say, "delicious taco", meaning "The dog needs to go out so I will take him for a walk". They have poor auditory and reading comprehension, and fluent, but nonsensical, oral and written expression. Individuals with receptive aphasia usually have great difficulty understanding the speech of both themselves and others and are, therefore, often unaware of their mistakes. Receptive language deficits usually arise from lesions in the posterior portion of the left hemisphere at or near Wernicke's area. Receptive aphasia is often the result of trauma to the temporal region of the brain, specifically damage to Wernicke's area; such trauma can result from an array of problems, but it is most commonly seen after a stroke.
Individuals with expressive aphasia (Broca's aphasia) frequently speak in short, meaningful phrases that are produced with great effort. It is thus characterized as a nonfluent aphasia. Affected people often omit small words such as "is", "and", and "the". For example, a person with expressive aphasia may say, "walk dog", which could mean "I will take the dog for a walk", "you take the dog for a walk", or even "the dog walked out of the yard." Individuals with expressive aphasia are able to understand the speech of others to varying degrees. Because of this, they are often aware of their difficulties and can become easily frustrated by their speaking problems. While Broca's aphasia may appear to be solely an issue with language production, evidence suggests that it may be rooted in an inability to process syntactical information. Individuals with expressive aphasia may have a speech automatism (also called a recurring or recurrent utterance). These speech automatisms can be repeated lexical speech automatisms, e.g., modalisations ('I can't ..., I can't ...'), expletives/swearwords, numbers ('one two, one two'), or non-lexical utterances made up of repeated, legal, but meaningless, consonant-vowel syllables (e.g., /tan tan/, /bi bi/). In severe cases, the individual may be able to utter only the same speech automatism each time they attempt speech.
Individuals with anomic aphasia have difficulty with naming. People with this aphasia may have difficulties naming certain words, linked by their grammatical type (e.g., difficulty naming verbs and not nouns) or by their semantic category (e.g., difficulty naming words relating to photography, but nothing else), or they may have a more general naming difficulty. People tend to produce grammatical, yet empty, speech. Auditory comprehension tends to be preserved. Anomic aphasia is the aphasic presentation of tumors in the language zone; it is also the aphasic presentation of Alzheimer's disease. Anomic aphasia is the mildest form of aphasia, indicating a likely possibility for better recovery.
Individuals with transcortical sensory aphasia, in principle the most general and potentially among the most complex forms of aphasia, may have deficits similar to those of receptive aphasia, but their repetition ability may remain intact.
Global aphasia is considered a severe impairment in many language aspects, since it impacts expressive and receptive language, reading, and writing. Despite these many deficits, there is evidence that individuals have benefited from speech-language therapy. Even though individuals with global aphasia will not become competent speakers, listeners, writers, or readers, goals can be created to improve the individual's quality of life. Individuals with global aphasia usually respond well to treatment that includes personally relevant information, which is also important to consider for therapy.
Individuals with conduction aphasia have deficits in the connections between the speech-comprehension and speech-production areas. This might be caused by damage to the arcuate fasciculus, the structure that transmits information between Wernicke's area and Broca's area. Similar symptoms, however, can be present after damage to the insula or to the auditory cortex. Auditory comprehension is near normal, and oral expression is fluent with occasional paraphasic errors. Paraphasic errors may be phonemic/literal or semantic/verbal. Repetition ability is poor. Conduction and transcortical aphasias are caused by damage to the white matter tracts. These aphasias spare the cortex of the language centers, but instead create a disconnection between them. Conduction aphasia is caused by damage to the arcuate fasciculus, a white matter tract that connects Broca's and Wernicke's areas. People with conduction aphasia typically have good language comprehension, but poor speech repetition and mild difficulty with word retrieval and speech production. People with conduction aphasia are typically aware of their errors. Two forms of conduction aphasia have been described: reproduction conduction aphasia (repetition of a single, relatively unfamiliar multisyllabic word) and repetition conduction aphasia (repetition of unconnected short familiar words).
Transcortical aphasias include transcortical motor aphasia, transcortical sensory aphasia, and mixed transcortical aphasia. People with transcortical motor aphasia typically have intact comprehension and awareness of their errors, but poor word finding and speech production. People with transcortical sensory and mixed transcortical aphasia have poor comprehension and are unaware of their errors. Despite poor comprehension and more severe deficits in some transcortical aphasias, small studies have indicated that full recovery is possible for all types of transcortical aphasia.
Classical-localizationist approaches
Localizationist approaches aim to classify the aphasias according to their major presenting characteristics and the regions of the brain that most probably gave rise to them. Inspired by the early work of nineteenth-century neurologists Paul Broca and Carl Wernicke, these approaches identify two major subtypes of aphasia and several more minor subtypes:
Expressive aphasia (also known as "motor aphasia" or "Broca's aphasia"), which is characterized by halted, fragmented, effortful speech, but well-preserved comprehension relative to expression. Damage is typically in the anterior portion of the left hemisphere, most notably Broca's area.
Individuals with Broca's aphasia often have right-sided weakness or paralysis of the arm and leg, because the left frontal lobe is also important for body movement, particularly on the right side. Receptive aphasia (also known as "sensory aphasia" or "Wernicke's aphasia"), which is characterized by fluent speech, but marked difficulties understanding words and sentences. Although fluent, the speech may lack in key substantive words (nouns, verbs, adjectives), and may contain incorrect words or even nonsense words. This subtype has been associated with damage to the posterior left temporal cortex, most notably Wernicke's area. These individuals usually have no body weakness, because their brain injury is not near the parts of the brain that control movement. Conduction aphasia, where speech remains fluent, and comprehension is preserved, but the person may have disproportionate difficulty repeating words or sentences. Damage typically involves the arcuate fasciculus and the left parietal region. Transcortical motor aphasia and transcortical sensory aphasia, which are similar to Broca's and Wernicke's aphasia respectively, but the ability to repeat words and sentences is disproportionately preserved. Recent classification schemes adopting this approach, such as the Boston-Neoclassical Model, also group these classical aphasia subtypes into two larger classes: the nonfluent aphasias (which encompasses Broca's aphasia and transcortical motor aphasia) and the fluent aphasias (which encompasses Wernicke's aphasia, conduction aphasia and transcortical sensory aphasia). These schemes also identify several further aphasia subtypes, including: anomic aphasia, which is characterized by a selective difficulty finding the names for things; and global aphasia, where both expression and comprehension of speech are severely compromised. Many localizationist approaches also recognize the existence of additional, more "pure" forms of language disorder that may affect only a single language skill. For example, in pure alexia, a person may be able to write, but not read, and in pure word deafness, they may be able to produce speech and to read, but not understand speech when it is spoken to them. Cognitive neuropsychological approaches Although localizationist approaches provide a useful way of classifying the different patterns of language difficulty into broad groups, one problem is that most individuals do not fit neatly into one category or another. Another problem is that the categories, particularly the major ones such as Broca's and Wernicke's aphasia, still remain quite broad and do not meaningfully reflect a person's difficulties. Consequently, even amongst those who meet the criteria for classification into a subtype, there can be enormous variability in the types of difficulties they experience. Instead of categorizing every individual into a specific subtype, cognitive neuropsychological approaches aim to identify the key language skills or "modules" that are not functioning properly in each individual. A person could potentially have difficulty with just one module, or with a number of modules. This type of approach requires a framework or theory as to what skills/modules are needed to perform different kinds of language tasks. For example, the model of Max Coltheart identifies a module that recognizes phonemes as they are spoken, which is essential for any task involving recognition of words. 
Similarly, there is a module that stores phonemes that the person is planning to produce in speech, and this module is critical for any task involving the production of long words or long strings of speech. Once a theoretical framework has been established, the functioning of each module can then be assessed using a specific test or set of tests. In the clinical setting, use of this model usually involves conducting a battery of assessments, each of which tests one or a number of these modules. Once a diagnosis is reached as to the skills/modules where the most significant impairment lies, therapy can proceed to treat these skills.
Progressive aphasias
Primary progressive aphasia (PPA) is a neurodegenerative focal dementia that can be associated with progressive illnesses or dementias, such as frontotemporal dementia/Pick complex, motor neuron disease, progressive supranuclear palsy, and Alzheimer's disease, in which the ability to think is progressively lost. Gradual loss of language function occurs in the context of relatively well-preserved memory, visual processing, and personality until the advanced stages. Symptoms usually begin with word-finding problems (naming) and progress to impaired grammar (syntax) and comprehension (sentence processing and semantics). The loss of language before the loss of memory differentiates PPA from typical dementias. People with PPA may have difficulties comprehending what others are saying. They can also have difficulty trying to find the right words to make a sentence. There are three classifications of primary progressive aphasia: progressive nonfluent aphasia (PNFA), semantic dementia (SD), and logopenic progressive aphasia (LPA).
Progressive jargon aphasia is a fluent or receptive aphasia in which the person's speech is incomprehensible, but appears to make sense to them. Speech is fluent and effortless with intact syntax and grammar, but the person has problems with the selection of nouns. They will either replace the desired word with another that sounds or looks like the original one, or that has some other connection to it, or they will replace it with sounds. As such, people with jargon aphasia often use neologisms, and may perseverate if they try to replace the words they cannot find with sounds. Substitutions commonly involve picking another (actual) word starting with the same sound (e.g., clocktower – colander), picking another word semantically related to the first (e.g., letter – scroll), or picking one phonetically similar to the intended one (e.g., lane – late).
Deaf aphasia
There have been many instances showing that there is a form of aphasia among deaf individuals. Sign languages are, after all, forms of language that have been shown to use the same areas of the brain as verbal forms of language. Mirror neurons become activated when an animal is acting in a particular way or watching another individual act in the same manner. These mirror neurons are important in giving an individual the ability to mimic movements of the hands. Broca's area of speech production has been shown to contain several of these mirror neurons, resulting in significant similarities of brain activity between sign language and vocal speech communication. People use facial movements to create what other people perceive to be expressions of emotion. By combining these facial movements with speech, a fuller form of language is created, which enables the species to interact with a much more complex and detailed form of communication.
Sign language also uses these facial movements and emotions along with the primary hand-movement way of communicating. These facial-movement forms of communication come from the same areas of the brain. When certain areas of the brain are damaged, vocal forms of communication are in jeopardy of severe forms of aphasia. Since these same areas of the brain are used for sign language, the same, or at least very similar, forms of aphasia can appear in the Deaf community. Individuals can show a form of Wernicke's aphasia with sign language, with deficits in their ability to produce any form of signed expression. Broca's aphasia shows up in some people as well; these individuals have tremendous difficulty signing the linguistic concepts they are trying to express.
Severity
The severity of aphasia varies with the size of the stroke, and how often each level of severity occurs differs between types of aphasia; any type of aphasia can range from mild to profound. Regardless of the severity of aphasia, people can make improvements due to spontaneous recovery and treatment in the acute stages of recovery. Additionally, while most studies propose that the greatest outcomes occur in people with severe aphasia when treatment is provided in the acute stages of recovery, Robey (1998) also found that those with severe aphasia are capable of making strong language gains in the chronic stage of recovery as well. This finding implies that people with aphasia have the potential for functional outcomes regardless of how severe their aphasia may be. While there is no distinct pattern of aphasia outcomes based on severity alone, people with global aphasia typically make functional language gains, though progress may be gradual since global aphasia affects many language areas.
Prevention
Aphasia is largely caused by unavoidable events. However, some precautions can be taken to decrease the risk of experiencing one of the two major causes of aphasia: stroke and traumatic brain injury (TBI). To decrease the probability of having an ischemic or hemorrhagic stroke, one should take the following precautions:
Exercising regularly
Eating a healthy diet, avoiding cholesterol in particular
Keeping alcohol consumption low and avoiding tobacco use
Controlling blood pressure
Going to the emergency room immediately upon experiencing unilateral extremity (especially leg) swelling, warmth, redness, and/or tenderness, as these are symptoms of a deep vein thrombosis, which can lead to a stroke
To prevent aphasia due to traumatic injury, one should take precautionary measures when engaging in dangerous activities, such as:
Wearing a helmet when operating a bicycle, motorcycle, ATV, or any other moving vehicle that could potentially be involved in an accident
Wearing a seatbelt when driving or riding in a car
Wearing proper protective gear when playing contact sports, especially American football, rugby, and hockey, or refraining from such activities
Minimizing anticoagulant use (including aspirin) if at all possible, as anticoagulants increase the risk of hemorrhage after a head injury
Additionally, one should always seek medical attention after sustaining head trauma from a fall or accident. The sooner one receives medical attention for a traumatic brain injury, the less likely one is to experience long-term or severe effects.
Management Most acute cases of aphasia recover some or most skills by participating in speech and language therapy. Recovery and improvement can continue for years after the stroke. After the onset of aphasia, there is approximately a six-month period of spontaneous recovery; during this time, the brain is attempting to recover and repair the damaged neurons. Improvement varies widely, depending on the aphasia's cause, type, and severity. Recovery also depends on the person's age, health, motivation, handedness, and educational level. Speech and language therapy that is higher intensity, higher dose or provided over a long duration of time leads to significantly better functional communication, but people might be more likely to drop out of high intensity treatment (up to 15 hours per week). A total of 20–50 hours of speech and language therapy is necessary for the best recovery. The most improvement happens when 2–5 hours of therapy is provided each week over 4–5 days. Recovery is further improved when besides the therapy people practice tasks at home. Speech and language therapy is also effective if it is delivered online through video or by a family member who has been trained by a professional therapist. Recovery with therapy is also dependent on the recency of stroke and the age of the person. Receiving therapy within a month after the stroke leads to the greatest improvements. Three or six months after the stroke more therapy will be needed, but symptoms can still be improved. People with aphasia who are younger than 55 years are the most likely to improve, but people older than 75 years can still get better with therapy. There is no one treatment proven to be effective for all types of aphasias. The reason that there is no universal treatment for aphasia is because of the nature of the disorder and the various ways it is presented. Aphasia is rarely exhibited identically, implying that treatment needs to be catered specifically to the individual. Studies have shown that, although there is no consistency on treatment methodology in literature, there is a strong indication that treatment, in general, has positive outcomes. Therapy for aphasia ranges from increasing functional communication to improving speech accuracy, depending on the person's severity, needs and support of family and friends. Group therapy allows individuals to work on their pragmatic and communication skills with other individuals with aphasia, which are skills that may not often be addressed in individual one-on-one therapy sessions. It can also help increase confidence and social skills in a comfortable setting. Evidence does not support the use of transcranial direct current stimulation (tDCS) for improving aphasia after stroke. 
Moderate-quality evidence does indicate naming performance improvements for nouns, but not verbs, using tDCS.
Specific treatment techniques include the following:
Copy and recall therapy (CART) – repetition and recall of targeted words within therapy may strengthen orthographic representations and improve single-word reading, writing, and naming
Visual communication therapy (VIC) – the use of index cards with symbols to represent various components of speech
Visual action therapy (VAT) – typically treats individuals with global aphasia to train the use of hand gestures for specific items
Functional communication treatment (FCT) – focuses on improving activities specific to functional tasks, social interaction, and self-expression
Promoting aphasics' communicative effectiveness (PACE) – a means of encouraging normal interaction between people with aphasia and clinicians. In this kind of therapy the focus is on pragmatic communication rather than treatment itself; people are asked to communicate a given message to their therapists by means of drawing, making hand gestures, or even pointing to an object
Melodic intonation therapy (MIT) – aims to use the intact melodic/prosodic processing skills of the right hemisphere to help cue retrieval of words and expressive language
Centeredness Theory Interview (CTI) – uses client-centered goal formation about the nature of current patient interactions, as well as future/desired interactions, to improve subjective well-being, cognition, and communication
Other – e.g., drawing as a way of communicating, trained conversation partners
Semantic feature analysis (SFA), a type of aphasia treatment that targets word-finding deficits, is based on the theory that neural connections can be strengthened by using related words and phrases that are similar to the target word, to eventually activate the target word in the brain. SFA can be implemented in multiple forms, such as verbally, in writing, or using picture cards. The SLP provides prompting questions to the individual with aphasia in order for the person to name the picture provided. Studies show that SFA is an effective intervention for improving confrontational naming.
Melodic intonation therapy is used to treat non-fluent aphasia and has proved to be effective in some cases. However, there is still no evidence from randomized controlled trials confirming the efficacy of MIT in chronic aphasia. MIT is used to help people with aphasia vocalize through song, which is then transferred to spoken words. Good candidates for this therapy include people who have had left-hemisphere strokes, non-fluent aphasias such as Broca's, good auditory comprehension, poor repetition and articulation, and good emotional stability and memory. An alternative explanation is that the efficacy of MIT depends on neural circuits involved in the processing of rhythmicity and formulaic expressions (examples taken from the MIT manual: "I am fine," "how are you?" or "thank you"); while rhythmic features associated with melodic intonation may engage primarily left-hemisphere subcortical areas of the brain, the use of formulaic expressions is known to be supported by right-hemisphere cortical and bilateral subcortical neural networks. Systematic reviews support the effectiveness and importance of partner training.
According to the National Institute on Deafness and Other Communication Disorders (NIDCD), involving family in the treatment of an aphasic loved one is ideal for all involved, because while it will no doubt assist in their recovery, it will also make it easier for members of the family to learn how best to communicate with them. When a person's speech is insufficient, different kinds of augmentative and alternative communication can be considered, such as alphabet boards, pictorial communication books, specialized software for computers, or apps for tablets or smartphones. When addressing Wernicke's aphasia, according to Bakheit et al. (2007), the lack of awareness of the language impairments, a common characteristic of Wernicke's aphasia, may affect the rate and extent of therapy outcomes. Robey (1998) determined that at least two hours of treatment per week is recommended for making significant language gains. Spontaneous recovery may produce some language gains, but without speech-language therapy the outcomes can be half as strong as those achieved with therapy. When addressing Broca's aphasia, better outcomes occur when the person participates in therapy, and treatment is more effective than no treatment for people in the acute period. Two or more hours of therapy per week in the acute and post-acute stages produced the greatest results. High-intensity therapy was most effective, and low-intensity therapy was almost equivalent to no therapy. People with global aphasia are sometimes referred to as having irreversible aphasic syndrome, often making limited gains in auditory comprehension and recovering no functional language modality with therapy. With this said, people with global aphasia may retain gestural communication skills that may enable success when communicating with conversational partners within familiar conditions. Process-oriented treatment options are limited, and people may not become competent language users as readers, listeners, writers, or speakers no matter how extensive therapy is. However, people's daily routines and quality of life can be enhanced with reasonable and modest goals. After the first month, there is limited to no further recovery of language abilities in most people; the prognosis is grim, with 83% of those who were globally aphasic after the first month remaining globally aphasic at one year. Some people are so severely impaired that existing process-oriented treatment approaches offer no signs of progress and therefore cannot justify the cost of therapy. Perhaps due to the relative rarity of conduction aphasia, few studies have specifically examined the effectiveness of therapy for people with this type of aphasia. The studies performed showed that therapy can help to improve specific language outcomes. One intervention that has had positive results is auditory repetition training. Kohn et al. (1990) reported that drilled auditory repetition training was related to improvements in spontaneous speech, Francis et al. (2003) reported improvements in sentence comprehension, and Kalinyak-Fliszar et al. (2011) reported improvements in auditory-visual short-term memory.
Individualized service delivery
Intensity of treatment should be individualized based on the recency of stroke, therapy goals, and other specific characteristics such as age, size of lesion, overall health status, and motivation. Each individual reacts differently to treatment intensity and is able to tolerate treatment at different times post-stroke.
Intensity of treatment after a stroke should be dependent on the person's motivation, stamina, and tolerance for therapy. Outcomes If the symptoms of aphasia last longer than two or three months after a stroke, a complete recovery is unlikely. However, it is important to note that some people continue to improve over a period of years and even decades. Improvement is a slow process that usually involves both helping the individual and family understand the nature of aphasia and learning compensatory strategies for communicating. After a traumatic brain injury (TBI) or cerebrovascular accident (CVA), the brain undergoes several healing and re-organization processes, which may result in improved language function. This is referred to as spontaneous recovery. Spontaneous recovery is the natural recovery the brain makes without treatment, and the brain begins to reorganize and change in order to recover. There are several factors that contribute to a person's chance of recovery caused by stroke, including stroke size and location. Age, sex, and education have not been found to be very predictive. There is also research pointing to damage in the left hemisphere healing more effectively than the right. Specific to aphasia, spontaneous recovery varies among affected people and may not look the same in everyone, making it difficult to predict recovery. Though some cases of Wernicke's aphasia have shown greater improvements than more mild forms of aphasia, people with Wernicke's aphasia may not reach as high a level of speech abilities as those with mild forms of aphasia. Prevalence Aphasia affects about two million people in the U.S. and 250,000 people in Great Britain. Nearly 180,000 people acquire the disorder every year in the U.S., 170,000 due to stroke. Any person of any age can develop aphasia, given that it is often caused by a traumatic injury. However, people who are middle aged and older are the most likely to acquire aphasia, as the other etiologies are more likely at older ages. For example, approximately 75% of all strokes occur in individuals over the age of 65. Strokes account for most documented cases of aphasia: 25% to 40% of people who survive a stroke develop aphasia as a result of damage to the language-processing regions of the brain. History The first recorded case of aphasia is from an Egyptian papyrus, the Edwin Smith Papyrus, which details speech problems in a person with a traumatic brain injury to the temporal lobe. During the second half of the 19th century, aphasia was a major focus for scientists and philosophers who were working in the beginning stages of the field of psychology. In medical research, speechlessness was described as an incorrect prognosis, and there was no assumption that underlying language complications existed. Broca and his colleagues were some of the first to write about aphasia, but Wernicke was the first credited to have written extensively about aphasia being a disorder that contained comprehension difficulties. Despite claims of who reported on aphasia first, it was F.J. Gall that gave the first full description of aphasia after studying wounds to the brain, as well as his observation of speech difficulties resulting from vascular lesions. A recent book on the entire history of aphasia is available (Reference: Tesak, J. & Code, C. (2008) Milestones in the History of Aphasia: Theories and Protagonists. Hove, East Sussex: Psychology Press). Etymology Aphasia is from Greek a- ("without", negative prefix) + phásis (φάσις, "speech"). 
The word aphasia comes from the word ἀφασία aphasia, in Ancient Greek, which means "speechlessness", derived from ἄφατος aphatos, "speechless" from ἀ- a-, "not, un" and φημί phemi, "I speak". Further research Research is currently being done using functional magnetic resonance imaging (fMRI) to witness the difference in how language is processed in normal brains vs aphasic brains. This will help researchers to understand exactly what the brain must go through in order to recover from Traumatic Brain Injury (TBI) and how different areas of the brain respond after such an injury. Another intriguing approach being tested is that of drug therapy. Research is in progress that will hopefully uncover whether or not certain drugs might be used in addition to speech-language therapy in order to facilitate recovery of proper language function. It's possible that the best treatment for Aphasia might involve combining drug treatment with therapy, instead of relying on one over the other. One other method being researched as a potential therapeutic combination with speech-language therapy is brain stimulation. One particular method, Transcranial Magnetic Stimulation (TMS), alters brain activity in whatever area it happens to stimulate, which has recently led scientists to wonder if this shift in brain function caused by TMS might help people re-learn language. Another type of external brain stimulation is transcranial Direct Current Stimulation (tDCS), but existing research has not shown it to be useful for improving aphasia after a stroke.
Biology and health sciences
Disabilities
Health
2089
https://en.wikipedia.org/wiki/Aorta
Aorta
The aorta (plural: aortas or aortae) is the main and largest artery in the human body, originating from the left ventricle of the heart, branching upwards immediately after, and extending down to the abdomen, where it splits at the aortic bifurcation into two smaller arteries (the common iliac arteries). The aorta distributes oxygenated blood to all parts of the body through the systemic circulation.
Structure
Sections
In anatomical sources, the aorta is usually divided into sections. One way of classifying a part of the aorta is by anatomical compartment, where the thoracic aorta (or thoracic portion of the aorta) runs from the heart to the diaphragm. The aorta then continues downward as the abdominal aorta (or abdominal portion of the aorta) from the diaphragm to the aortic bifurcation. Another system divides the aorta with respect to its course and the direction of blood flow. In this system, the aorta starts as the ascending aorta, travels superiorly from the heart, and then makes a hairpin turn known as the aortic arch. Following the aortic arch, the aorta then travels inferiorly as the descending aorta. The descending aorta has two parts. The aorta begins to descend in the thoracic cavity and is consequently known as the thoracic aorta. After the aorta passes through the diaphragm, it is known as the abdominal aorta. The aorta ends by dividing into two major blood vessels, the common iliac arteries, and a smaller midline vessel, the median sacral artery.
Ascending aorta
The ascending aorta begins at the opening of the aortic valve in the left ventricle of the heart. It runs through a common pericardial sheath with the pulmonary trunk. These two blood vessels twist around each other, causing the aorta to start out posterior to the pulmonary trunk, but end by twisting to its right and anterior side. The transition from ascending aorta to aortic arch is at the pericardial reflection on the aorta. At the root of the ascending aorta, the lumen has small pockets between the cusps of the aortic valve and the wall of the aorta, which are called the aortic sinuses or the sinuses of Valsalva. The left aortic sinus contains the origin of the left coronary artery, and the right aortic sinus likewise gives rise to the right coronary artery. Together, these two arteries supply the heart. The posterior aortic sinus does not give rise to a coronary artery. For this reason the left, right, and posterior aortic sinuses are also called the left-coronary, right-coronary, and non-coronary sinuses.
Aortic arch
The aortic arch loops over the left pulmonary artery and the bifurcation of the pulmonary trunk, to which it remains connected by the ligamentum arteriosum, a remnant of the fetal circulation that is obliterated a few days after birth. In addition to these blood vessels, the aortic arch crosses the left main bronchus. Between the aortic arch and the pulmonary trunk is a network of autonomic nerve fibers, the cardiac plexus or aortic plexus. The left vagus nerve, which passes anterior to the aortic arch, gives off a major branch, the recurrent laryngeal nerve, which loops under the aortic arch just lateral to the ligamentum arteriosum. It then runs back to the neck. The aortic arch has three major branches: from proximal to distal, they are the brachiocephalic trunk, the left common carotid artery, and the left subclavian artery. The brachiocephalic trunk supplies the right side of the head and neck as well as the right arm and chest wall, while the latter two together supply the left side of the same regions.
The aortic arch ends, and the descending aorta begins at the level of the intervertebral disc between the fourth and fifth thoracic vertebrae. Thoracic aorta The thoracic aorta gives rise to the intercostal and subcostal arteries, as well as to the superior and inferior left bronchial arteries and variable branches to the esophagus, mediastinum, and pericardium. Its lowest pair of branches are the superior phrenic arteries, which supply the diaphragm, and the subcostal arteries for the twelfth rib. Abdominal aorta The abdominal aorta begins at the aortic hiatus of the diaphragm at the level of the twelfth thoracic vertebra. It gives rise to lumbar and musculophrenic arteries, renal and middle suprarenal arteries, and visceral arteries (the celiac trunk, the superior mesenteric artery and the inferior mesenteric artery). It ends in a bifurcation into the left and right common iliac arteries. At the point of the bifurcation, there also springs a smaller branch, the median sacral artery. Development The ascending aorta develops from the outflow tract, which initially starts as a single tube connecting the heart with the aortic arches (which will form the great arteries) in early development but is then separated into the aorta and the pulmonary trunk. The aortic arches start as five pairs of symmetrical arteries connecting the heart with the dorsal aorta, and then undergo a significant remodelling to form the final asymmetrical structure of the great arteries, with the 3rd pair of arteries contributing to the common carotids, the right 4th forming the base and middle part of the right subclavian artery and the left 4th being the central part of the aortic arch. The smooth muscle of the great arteries and the population of cells that form the aorticopulmonary septum that separates the aorta and pulmonary artery is derived from cardiac neural crest. This contribution of the neural crest to the great artery smooth muscle is unusual as most smooth muscle is derived from mesoderm. In fact the smooth muscle within the abdominal aorta is derived from mesoderm, and the coronary arteries, which arise just above the semilunar valves, possess smooth muscle of mesodermal origin. A failure of the aorticopulmonary septum to divide the great vessels results in persistent truncus arteriosus. Microanatomy The aorta is an elastic artery, and as such is quite distensible. The aorta consists of a heterogeneous mixture of smooth muscle, nerves, intimal cells, endothelial cells, immune cells, fibroblast-like cells, and a complex extracellular matrix. The vascular wall is subdivided into three layers known as the tunica externa, tunica media, and tunica intima. The aorta is covered by an extensive network of tiny blood vessels called vasa vasorum, which feed the tunica externa and tunica media, the outer layers of the aorta. The aortic arch contains baroreceptors and chemoreceptors that relay information concerning blood pressure and blood pH and carbon dioxide levels to the medulla oblongata of the brain. This information along with information from baroreceptors and chemoreceptors located elsewhere is processed by the brain and the autonomic nervous system mediates appropriate homeostatic responses. Within the tunica media, smooth muscle and the extracellular matrix are quantitatively the largest components, these are arranged concentrically as musculoelastic layers (the elastic lamella) in mammals. 
The elastic lamellae, which comprise smooth muscle and elastic matrix, can be considered the fundamental structural units of the aorta and consist of elastic fibers, collagens (predominantly type III), proteoglycans, and glycosaminoglycans. The elastic matrix dominates the biomechanical properties of the aorta. The smooth muscle component, while contractile, does not substantially alter the diameter of the aorta, but rather serves to increase the stiffness and viscoelasticity of the aortic wall when activated.
Variation
Variations may occur in the location of the aorta and in the way in which arteries branch off the aorta. The aorta, normally on the left side of the body, may be found on the right in dextrocardia, in which the heart is found on the right, or in situs inversus, in which the locations of all organs are flipped. Variations in the branching of individual arteries may also occur. For example, the left vertebral artery may arise directly from the aorta, instead of from the left subclavian artery. In patent ductus arteriosus, a congenital disorder, the fetal ductus arteriosus fails to close, leaving an open vessel connecting the pulmonary artery to the proximal descending aorta.
Function
The aorta supplies all of the systemic circulation, which means that the entire body, except for the respiratory zone of the lung, receives its blood from the aorta. Broadly speaking, branches from the ascending aorta supply the heart; branches from the aortic arch supply the head, neck, and arms; branches from the thoracic descending aorta supply the chest (excluding the heart and the respiratory zone of the lung); and branches from the abdominal aorta supply the abdomen. The pelvis and legs get their blood from the common iliac arteries.
Blood flow and velocity
The contraction of the heart during systole is responsible for ejection and creates a (pulse) wave that is propagated down the aorta into the arterial tree. The wave is reflected at sites of impedance mismatching, such as bifurcations, where reflected waves rebound and return to the semilunar valves and the origin of the aorta. These return waves create the dicrotic notch displayed in the aortic pressure curve during the cardiac cycle as they push on the aortic semilunar valve. With age, the aorta stiffens such that the pulse wave is propagated faster, and reflected waves return to the heart faster, before the semilunar valve closes, which raises the blood pressure. The stiffness of the aorta is associated with a number of diseases and pathologies, and noninvasive measures of the pulse wave velocity are an independent indicator of hypertension. Measuring the pulse wave velocity (invasively and non-invasively) is a means of determining arterial stiffness. Maximum aortic velocity may be noted as Vmax or, less commonly, as AoVmax. Mean arterial pressure (MAP) is highest in the aorta, and the MAP decreases across the circulation from the aorta to arteries to arterioles to capillaries to veins and back to the atrium. The difference between aortic and right atrial pressure accounts for blood flow in the circulation. When the left ventricle contracts to force blood into the aorta, the aorta expands. This stretching stores potential energy that helps maintain blood pressure during diastole, as during this time the aorta contracts passively. This Windkessel effect of the great elastic arteries has important biomechanical implications. The elastic recoil helps conserve the energy from the pumping heart and smooth out the pulsatile nature of the flow created by the heart.
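As a worked illustration of the pulse wave velocity (PWV) measurement mentioned above, the quantity is simply the path length travelled by the pressure wave divided by its transit time; the figures used below are illustrative values chosen for the example, not reference standards.

\[ \mathrm{PWV} = \frac{\Delta x}{\Delta t} \]

For example, if the foot of the pulse wave is detected at a femoral site a path length of \( \Delta x = 0.6\ \mathrm{m} \) beyond a carotid site, with a transit delay of \( \Delta t = 0.06\ \mathrm{s} \), then \( \mathrm{PWV} = 0.6 / 0.06 = 10\ \mathrm{m/s} \). Because a stiffer aorta conducts the wave faster, a higher PWV indicates greater arterial stiffness, which is why the measurement is used as a marker of stiffness and an independent indicator of hypertension.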
Blood pressure is highest in the aorta and becomes less pulsatile and lower as the vessels divide into arteries, arterioles, and capillaries, such that flow is slow and smooth for gas and nutrient exchange.
Clinical significance
Central aortic blood pressure has frequently been shown to have greater prognostic value, and to show a more accurate response to antihypertensive drugs, than peripheral blood pressure. Conditions affecting the aorta include:
Aortic aneurysm – mycotic, bacterial (e.g. syphilis), senile, genetic, associated with valvular heart disease
Aortic coarctation – pre-ductal, post-ductal
Aortic dissection
Aortic stenosis
Abdominal aortic aneurysm
Aortitis – inflammation of the aorta that can be seen in trauma, infections, and autoimmune disease
Atherosclerosis
Ehlers–Danlos syndrome
Marfan syndrome
Trauma, such as traumatic aortic rupture, most often thoracic and distal to the left subclavian artery and often quickly fatal
Transposition of the great vessels; see also dextro-Transposition of the great arteries and levo-Transposition of the great arteries
Other animals
All amniotes have a broadly similar arrangement to that of humans, albeit with a number of individual variations. In fish, however, there are two separate vessels referred to as aortas. The ventral aorta carries de-oxygenated blood from the heart to the gills; part of this vessel forms the ascending aorta in tetrapods (the remainder forms the pulmonary artery). A second, dorsal aorta carries oxygenated blood from the gills to the rest of the body and is homologous with the descending aorta of tetrapods. The two aortas are connected by a number of vessels, one passing through each of the gills. Amphibians also retain the fifth connecting vessel, so that the aorta has two parallel arches.
History
The word aorta stems from the Late Latin aorta, from Classical Greek aortē (ἀορτή), from aeirō (ἀείρω), "I lift, raise". This term was first applied by Aristotle when describing the aorta and describes accurately how it seems to be "suspended" above the heart. The function of the aorta is documented in the Talmud, where it is noted as one of three major vessels entering or leaving the heart, and where perforation is linked to death.
Biology and health sciences
Circulatory system
Biology
2113
https://en.wikipedia.org/wiki/Axiom%20of%20regularity
Axiom of regularity
In mathematics, the axiom of regularity (also known as the axiom of foundation) is an axiom of Zermelo–Fraenkel set theory that states that every non-empty set A contains an element that is disjoint from A. In first-order logic, the axiom reads:
∀x (x ≠ ∅ → ∃y (y ∈ x ∧ y ∩ x = ∅)).
The axiom of regularity together with the axiom of pairing implies that no set is an element of itself, and that there is no infinite sequence (a_n) such that a_{i+1} is an element of a_i for all i. With the axiom of dependent choice (which is a weakened form of the axiom of choice), this result can be reversed: if there are no such infinite sequences, then the axiom of regularity is true. Hence, in this context the axiom of regularity is equivalent to the sentence that there are no downward infinite membership chains. The axiom was originally formulated by von Neumann; it was adopted in a formulation closer to the one found in contemporary textbooks by Zermelo. Virtually all results in the branches of mathematics based on set theory hold even in the absence of regularity. However, regularity makes some properties of ordinals easier to prove; and it not only allows induction to be done on well-ordered sets but also on proper classes that are well-founded relational structures, such as the lexicographical ordering on { (n, α) | n ∈ ω ∧ α is an ordinal }. Given the other axioms of Zermelo–Fraenkel set theory, the axiom of regularity is equivalent to the axiom of induction. The axiom of induction tends to be used in place of the axiom of regularity in intuitionistic theories (ones that do not accept the law of the excluded middle), where the two axioms are not equivalent. In addition to omitting the axiom of regularity, non-standard set theories have indeed postulated the existence of sets that are elements of themselves.
Elementary implications of regularity
No set is an element of itself
Let A be a set, and apply the axiom of regularity to {A}, which is a set by the axiom of pairing. We see that there must be an element of {A} which is disjoint from {A}. Since the only element of {A} is A, it must be that A is disjoint from {A}. So, since A ∈ {A}, we cannot have A ∈ A (by the definition of disjoint).
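Spelled out symbolically, the argument just given is the following instance of the axiom; this is only a restatement of the proof above, not an additional result.

\[
\begin{aligned}
&\exists y\,\bigl(y \in \{A\} \;\wedge\; y \cap \{A\} = \varnothing\bigr) &&\text{(regularity applied to the non-empty set } \{A\}\text{)}\\
&y \in \{A\} \implies y = A &&\text{(}A\text{ is the only element of }\{A\}\text{)}\\
&\therefore\ A \cap \{A\} = \varnothing\\
&A \in A \implies A \in A \cap \{A\} = \varnothing &&\text{(since }A \in \{A\}\text{ as well)}\\
&\therefore\ A \notin A.
\end{aligned}
\]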
They are "fake" natural numbers which are "larger" than any actual natural number. This model will contain infinite descending sequences of elements. For example, suppose n is a non-standard natural number, then (n − 1) ∈ n and (n − 2) ∈ (n − 1), and so on. For any actual natural number k, (n − k − 1) ∈ (n − k). This is an unending descending sequence of elements. But this sequence is not definable in the model and thus not a set. So no contradiction to regularity can be proved. Simpler set-theoretic definition of the ordered pair The axiom of regularity enables defining the ordered pair (a,b) as {a,{a,b}}; see ordered pair for specifics. This definition eliminates one pair of braces from the canonical Kuratowski definition (a,b) = {{a},{a,b}}. Every set has an ordinal rank This was actually the original form of the axiom in von Neumann's axiomatization. Suppose x is any set. Let t be the transitive closure of {x}. Let u be the subset of t consisting of unranked sets. If u is empty, then x is ranked and we are done. Otherwise, apply the axiom of regularity to u to get an element w of u which is disjoint from u. Since w is in u, w is unranked. w is a subset of t by the definition of transitive closure. Since w is disjoint from u, every element of w is ranked. Applying the axioms of replacement and union to combine the ranks of the elements of w, we get an ordinal rank for w, to wit rank(w) = ⋃{rank(z) + 1 : z ∈ w}. This contradicts the conclusion that w is unranked. So the assumption that u was non-empty must be false and x must have rank. For every two sets, only one can be an element of the other Let X and Y be sets. Then apply the axiom of regularity to the set {X,Y} (which exists by the axiom of pairing). We see there must be an element of {X,Y} which is also disjoint from it. It must be either X or Y. By the definition of disjoint, then, we must have either that Y is not an element of X or vice versa. The axiom of dependent choice and no infinite descending sequence of sets implies regularity Let the non-empty set S be a counter-example to the axiom of regularity; that is, every element of S has a non-empty intersection with S. We define a binary relation R on S by aRb :⇔ b ∈ S ∩ a, which is entire by assumption. Thus, by the axiom of dependent choice, there is some sequence (aₙ) in S satisfying aₙRaₙ₊₁ for all n in ℕ. As this is an infinite descending chain, we arrive at a contradiction and so no such S exists. Regularity and the rest of ZF(C) axioms Regularity was shown to be relatively consistent with the rest of ZF by Skolem and von Neumann, meaning that if ZF without regularity is consistent, then ZF (with regularity) is also consistent. The axiom of regularity was also shown to be independent from the other axioms of ZFC, assuming they are consistent. The result was announced by Paul Bernays in 1941, although he did not publish a proof until 1954. The proof involves (and led to the study of) Rieger–Bernays permutation models (or method), which were used for other proofs of independence for non-well-founded systems. Regularity and Russell's paradox Naive set theory (the axiom schema of unrestricted comprehension and the axiom of extensionality) is inconsistent due to Russell's paradox. In early formalizations of sets, mathematicians and logicians avoided that contradiction by replacing the axiom schema of comprehension with the much weaker axiom schema of separation. However, this step alone takes one to theories of sets which are considered too weak.
So some of the power of comprehension was added back via the other existence axioms of ZF set theory (pairing, union, powerset, replacement, and infinity), which may be regarded as special cases of comprehension. So far, these axioms do not seem to lead to any contradiction. Subsequently, the axiom of choice and the axiom of regularity were added to exclude models with some undesirable properties. These two axioms are known to be relatively consistent. In the presence of the axiom schema of separation, Russell's paradox becomes a proof that there is no set of all sets. The axiom of regularity together with the axiom of pairing also prohibits such a universal set. However, Russell's paradox yields a proof that there is no "set of all sets" using the axiom schema of separation alone, without any additional axioms. In particular, ZF without the axiom of regularity already prohibits such a universal set. If a theory is extended by adding an axiom or axioms, then any (possibly undesirable) consequences of the original theory remain consequences of the extended theory. In particular, if ZF without regularity is extended by adding regularity to get ZF, then any contradiction (such as Russell's paradox) which followed from the original theory would still follow in the extended theory. The existence of Quine atoms (sets that satisfy the formula x = {x}, i.e. have themselves as their only elements) is consistent with the theory obtained by removing the axiom of regularity from ZFC. Various non-wellfounded set theories allow "safe" circular sets, such as Quine atoms, without becoming inconsistent by means of Russell's paradox. Regularity, the cumulative hierarchy, and types In ZF it can be proven that the class ⋃α Vα (the union of the stages Vα of the cumulative hierarchy over all ordinals α), called the von Neumann universe, is equal to the class of all sets. This statement is even equivalent to the axiom of regularity (if we work in ZF with this axiom omitted). From any model which does not satisfy the axiom of regularity, a model which satisfies it can be constructed by taking only the sets in ⋃α Vα. Herbert Enderton wrote that "The idea of rank is a descendant of Russell's concept of type". Comparing ZF with type theory, Alasdair Urquhart wrote that "Zermelo's system has the notational advantage of not containing any explicitly typed variables, although in fact it can be seen as having an implicit type structure built into it, at least if the axiom of regularity is included." Dana Scott went further and made an even stronger claim along these lines. In the same paper, Scott shows that an axiomatic system based on the inherent properties of the cumulative hierarchy turns out to be equivalent to ZF, including regularity. History The concepts of well-foundedness and the rank of a set were both introduced by Dmitry Mirimanoff. Mirimanoff called a set x "regular" (ordinaire) if every descending chain x ∋ x₁ ∋ x₂ ∋ ... is finite. Mirimanoff, however, did not consider his notion of regularity (and well-foundedness) as an axiom to be observed by all sets; in later papers Mirimanoff also explored what are now called non-well-founded sets (extraordinaire in Mirimanoff's terminology). Skolem and von Neumann pointed out that non-well-founded sets are superfluous, and in the same publication von Neumann gave an axiom which excludes some, but not all, non-well-founded sets. In a subsequent publication, von Neumann gave an equivalent but more complex version of the axiom of class foundation. The contemporary and final form of the axiom is due to Zermelo.
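The cumulative hierarchy discussed in the section on regularity, the cumulative hierarchy, and types can be written out explicitly. The following LaTeX fragment is a minimal sketch of the standard transfinite recursion for the stages Vα and of the equivalence alluded to in that section; the notation is the usual textbook one rather than anything quoted from this article.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Stages of the von Neumann cumulative hierarchy, defined by transfinite recursion.
\begin{align*}
V_{0} &= \varnothing, \\
V_{\alpha+1} &= \mathcal{P}(V_{\alpha}) && \text{(power set of the previous stage),} \\
V_{\lambda} &= \bigcup_{\alpha<\lambda} V_{\alpha} && \text{for limit ordinals } \lambda.
\end{align*}
% Over the remaining ZF axioms, regularity is equivalent to the statement that
% every set appears at some stage of this hierarchy:
\[
\forall x \,\exists \alpha \; (x \in V_{\alpha}),
\qquad\text{i.e.}\qquad
V = \bigcup_{\alpha \in \mathrm{Ord}} V_{\alpha}.
\]
\end{document}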
Regularity in the presence of urelements Urelements are objects that are not sets, but which can be elements of sets. In ZF set theory, there are no urelements, but in some other set theories such as ZFA, there are. In these theories, the axiom of regularity must be modified. The statement "x ≠ ∅" needs to be replaced with a statement that x is not empty and is not an urelement. One suitable replacement is (∃y)(y ∈ x), which states that x is inhabited.
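For comparison, the following LaTeX fragment writes out one common first-order rendering of the axiom together with the urelement-friendly variant described above; the exact symbolisation varies between textbooks, so this is a sketch rather than a canonical form.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Axiom of regularity in ZF (no urelements): every non-empty set x has an
% element y that is disjoint from x.
\[
\forall x \,\bigl( x \neq \varnothing \;\rightarrow\; \exists y \,( y \in x \wedge y \cap x = \varnothing ) \bigr)
\]
% Variant for theories with urelements (e.g. ZFA): "x is inhabited" replaces
% "x is non-empty", since an urelement has no elements yet is not the empty set.
\[
\forall x \,\bigl( \exists y \,( y \in x ) \;\rightarrow\; \exists y \,( y \in x \wedge \forall z \,( z \in y \rightarrow z \notin x ) ) \bigr)
\]
\end{document}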
Mathematics
Axiomatic systems
null
2116
https://en.wikipedia.org/wiki/Apple%20II
Apple II
Apple II ("apple two") is a series of microcomputers manufactured by Apple Computer, Inc. from 1977 to 1993. The first Apple II model, which gave the series its name, was designed by Steve Wozniak and was first sold on June 10, 1977. Its success led to it being followed by the Apple II Plus, Apple IIe, Apple IIc, and Apple IIc Plus, with the 1983 IIe being the most popular. The name is trademarked with square brackets as Apple ][, then, beginning with the IIe, as Apple //. The Apple II was a major advancement over its predecessor, the Apple I, in terms of ease of use, features, and expandability. It became one of several recognizable and successful computers during the 1980s and early 1990s, although this was mainly limited to the US. It was aggressively marketed through volume discounts and manufacturing arrangements to educational institutions, which made it the first computer in widespread use in American secondary schools, displacing the early leader Commodore PET. The effort to develop educational and business software for the Apple II, including the 1979 release of the popular VisiCalc spreadsheet, made the computer especially popular with business users and families. The Apple II computers are based on the 6502 8-bit processor and can display text and two resolutions of color graphics. A software-controlled speaker provides one channel of low-fidelity audio. A model with more advanced graphics and sound and a 16-bit processor, the Apple IIGS, was added in 1986. It remained compatible with earlier Apple II models, but the IIGS has more in common with mid-1980s systems like the Atari ST, Amiga, and Acorn Archimedes. Despite the introduction of the Motorola 68000-based Macintosh in 1984, the Apple II series still reportedly accounted for 85% of the company's hardware sales in the first quarter of fiscal 1985. Apple continued to sell Apple II systems alongside the Macintosh until terminating the IIGS in December 1992 and the IIe in November 1993. The last II-series Apple in production, the IIe Card for Macintoshes, was discontinued on October 15, 1993. One of the longest-running mass-produced home computer series, the Apple II sold about 6 million units across all models during its 16-year production run (including about 1.25 million Apple IIGS models), with the peak occurring in 1983, when 1 million were sold. Hardware Unlike preceding home microcomputers, the Apple II was sold as a finished consumer appliance rather than as a kit (unassembled or preassembled). Apple marketed the Apple II as a durable product, including a 1981 ad in which an Apple II survived a fire started when a cat belonging to one early user knocked over a lamp. All the machines in the series, except the IIc, share similar overall design elements. The plastic case was designed to look more like a home appliance than a piece of electronic equipment, and the case can be opened without the use of tools. All models in the Apple II series have a built-in keyboard, with the exception of the IIGS, which has a separate keyboard. Apple IIs have color and high-resolution graphics modes, sound capabilities, and a built-in BASIC programming language. The motherboard holds eight expansion slots and an array of random access memory (RAM) sockets that can hold up to 48 kilobytes. Over the course of the Apple II series' life, an enormous amount of first- and third-party hardware was made available to extend the capabilities of the machine.
The IIc was designed as a compact, portable unit, not intended to be disassembled, and cannot use most of the expansion hardware sold for the other machines in the series. Software The original Apple II has the operating system in ROM along with a BASIC variant called Integer BASIC. Apple eventually released Applesoft BASIC, a more advanced variant of the language which users can run instead of Integer BASIC. The Apple II series eventually supported over 1,500 software programs. When the Disk II floppy disk drive was released in 1978, a new operating system, Apple DOS, was commissioned from Shepardson Microsystems and developed by Paul Laughton, adding support for the disk drive. The final and most popular version of this software was Apple DOS 3.3. Apple DOS was superseded by ProDOS, which supported a hierarchical file system and larger storage devices. With an optional third-party Z80-based expansion card, the Apple II could boot into the CP/M operating system and run WordStar, dBase II, and other CP/M software. With the release of MousePaint in 1984 and the Apple IIGS in 1986, the platform took on the look of the Macintosh user interface, including a mouse. Much commercial Apple II software shipped on self-booting disks and does not use standard DOS disk formats. This discouraged the copying or modifying of the software on the disks, and improved loading speed. Models Apple II The first Apple II computers went on sale on June 10, 1977, with a MOS Technology 6502 (later Synertek) microprocessor running at 1.023 MHz, 4 KB of RAM, an audio cassette interface for loading programs and storing data, and the Integer BASIC programming language built into the ROMs. The video controller displayed 40 columns by 24 lines of monochrome, upper-case-only (the original character set matches ASCII characters 0x20 to 0x5F) text on the screen, with NTSC composite video output suitable for display on a TV monitor, or on a regular TV set by way of a separate RF modulator. The original retail price of the computer was US$1,298 (with 4 KB of RAM) and US$2,638 (with the maximum 48 KB of RAM). To reflect the computer's color graphics capability, the Apple logo on the casing was represented using rainbow stripes, which remained a part of Apple's corporate logo until early 1998. The earliest Apple IIs were assembled in Silicon Valley, and later in Texas; printed circuit boards were manufactured in Ireland and Singapore. An external 5.25-inch floppy disk drive, the Disk II, attached via a controller card that plugged into one of the computer's expansion slots (usually slot 6), was used for data storage and retrieval to replace cassettes. The Disk II interface, created by Steve Wozniak, was regarded as an engineering masterpiece for its economy of electronic components. Rather than having a dedicated sound-synthesis chip, the Apple II had a toggle circuit that could only emit a click through a built-in speaker; all other sounds (including two-, three- and, eventually, four-voice music, playback of audio samples, and speech synthesis) were generated entirely by software that clicked the speaker at just the right times. The Apple II's multiple expansion slots permitted a wide variety of third-party devices, including Apple II peripheral cards such as serial controllers, display controllers, memory boards, hard disks, networking components, and real-time clocks.
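The one-bit speaker technique described above comes down to toggling an output at precisely timed intervals. As a rough illustration of the timing arithmetic only (this is not Apple's original routine; the only figure taken from the text is the 1.023 MHz clock), the following Python sketch shows how many CPU cycles a software delay loop would leave between toggles to approximate a given tone.

# Sketch: timing for a software-toggled one-bit speaker, as on the Apple II.
# Only the clock rate comes from the article; the helper functions are illustrative.

CPU_HZ = 1_023_000  # Apple II 6502 clock, about 1.023 MHz


def cycles_between_toggles(freq_hz: float) -> float:
    """A square wave at freq_hz requires two speaker toggles per period."""
    return CPU_HZ / (2 * freq_hz)


def tone_table(frequencies):
    """Return (frequency, rounded cycles per half-period) pairs for a delay loop."""
    return [(f, round(cycles_between_toggles(f))) for f in frequencies]


if __name__ == "__main__":
    # Middle C, concert A, and a high E as examples.
    for freq, cycles in tone_table([261.63, 440.0, 1318.5]):
        print(f"{freq:8.2f} Hz -> toggle roughly every {cycles} CPU cycles")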
There were plug-in expansion cards – such as the Z-80 SoftCard – that permitted the Apple to use the Z80 processor and run a multitude of programs developed under the CP/M operating system, including the dBase II database and the WordStar word processor. There was also a third-party 6809 card that would allow OS-9 Level One to be run. Third-party sound cards greatly improved audio capabilities, allowing simple music synthesis and text-to-speech functions. Eventually, Apple II accelerator cards were created to double or quadruple the computer's speed. Rod Holt designed the Apple II's power supply. He employed a switched-mode power supply design, which was far smaller and generated less unwanted heat than the linear power supply some other home computers used. The original Apple II was discontinued at the start of 1981, superseded by the Apple II+. Apple II Plus The Apple II Plus, introduced in June 1979, included the Applesoft BASIC programming language in ROM. This Microsoft-authored dialect of BASIC, which was previously available as an upgrade, supported floating-point arithmetic and became the standard BASIC dialect on the Apple II series (though it ran at a noticeably slower speed than Steve Wozniak's Integer BASIC). Except for improved graphics and disk-booting support in the ROM, and the removal of the 2 KB 6502 assembler to make room for the floating-point BASIC, the II+ was otherwise identical to the original II in terms of electronic functionality. There were small differences in the physical appearance and keyboard. RAM prices fell during 1980–81, and all II+ machines came from the factory with a full 48 KB of memory already installed. Apple II Europlus and J-Plus After the success of the first Apple II in the United States, Apple expanded its market to include Europe, Australia and the Far East in 1979, with the Apple II Europlus (Europe, Australia) and the Apple II J-Plus (Japan). In these models, Apple made the necessary hardware, software and firmware changes in order to comply with standards outside of the US. Apple IIe The Apple II Plus was followed in 1983 by the Apple IIe, a cost-reduced yet more powerful machine that used newer chips to reduce the component count and add new features, such as the display of upper and lowercase letters and a standard 64 KB of RAM. The IIe RAM was configured as if it were a 48 KB Apple II Plus with a language card. The machine had no slot 0, but instead had an auxiliary slot that could accept a 1 KB memory card to enable the 80-column display. This card contained only RAM; the hardware and firmware for the 80-column display was built into the Apple IIe. An "extended 80-column card" with more memory increased the machine's RAM to 128 KB. The Apple IIe was the most popular machine in the Apple II series. It has the distinction of being the longest-lived Apple computer of all time: it was manufactured and sold with only minor changes for nearly 11 years. The IIe was the last Apple II model to be sold, and was discontinued in November 1993. During its lifespan two variations were introduced: the Apple IIe Enhanced (four replacement chips to give it some of the features of the later-model Apple IIc) and the Apple IIe Platinum (a modernized case color to match other Apple products of the era, along with the addition of a numeric keypad). Some of the features of the IIe were carried over from the less successful Apple III, among them the ProDOS operating system.
Apple IIc The Apple IIc was released in April 1984, billed as a portable Apple II because it could be easily carried due to its size and carrying handle, which could be flipped down to prop the machine up into a typing position. Unlike modern portables, it lacked a built-in display and battery. It was the first of three Apple II models to be made in the Snow White design language, and the only one that used its unique creamy off-white color. The Apple IIc was the first Apple II to use the 65C02 low-power variant of the 6502 processor, and featured a built-in 5.25-inch floppy drive and 128 KB RAM, with a built-in disk controller that could control external drives, composite video (NTSC or PAL), serial interfaces for modem and printer, and a port usable by either a joystick or mouse. Unlike previous Apple II models, the IIc had no internal expansion slots at all. Two different monochrome LCD displays were sold for use with the IIc's video expansion port, although both were short-lived due to high cost and poor legibility. The IIc had an external power supply that converted AC power to 15 V DC, though the IIc itself will accept between 12 V and 17 V DC, allowing third parties to offer battery packs and automobile power adapters that connected in place of the supplied AC adapter. Apple IIGS The Apple IIGS, released on September 15, 1986, is the penultimate and most advanced model in the Apple II series, and a radical departure from prior models. It uses a 16-bit microprocessor, the 65C816, operating at 2.8 MHz with 24-bit addressing, allowing expansion up to 8 MB of RAM. The graphics are significantly improved, with a palette of 4096 colors and new modes with resolutions of 320×200 and 640×200. The audio capabilities are vastly improved, with a built-in music synthesizer that far exceeded any other home computer. The Apple IIGS evolved the platform while still maintaining near-complete backward compatibility. Its Mega II chip contains the functional equivalent of an entire Apple IIe computer (sans processor). This, combined with the 65816's ability to execute 65C02 code directly, provides full support for legacy software, while also supporting 16-bit software running under a new OS. The OS eventually included a Macintosh-like graphical Finder for managing disks and files and opening documents and applications, along with desk accessories. Later, the IIGS gained the ability to read and write Macintosh disks and, through third-party software, a multitasking Unix-like shell and TrueType font support. The GS includes a 32-voice Ensoniq 5503 DOC sample-based sound synthesizer chip with 64 KB of dedicated RAM, 256 KB (or later 1.125 MB) of standard RAM, built-in peripheral ports (switchable between IIe-style card slots and IIc-style onboard controllers for disk drives, mouse, RGB video, and serial devices), and built-in AppleTalk networking. Apple IIc Plus The final Apple II model was the Apple IIc Plus, introduced in 1988. It was the same size and shape as the IIc that came before it, but the 5.25-inch floppy drive had been replaced with a 3.5-inch drive, the power supply was moved inside the case, and the processor was a fast 4 MHz 65C02 that actually ran 8-bit Apple II software faster than the IIGS. The IIc Plus also featured a new keyboard layout that matched the Platinum IIe and IIGS. Unlike the IIe, IIc, and IIGS, the IIc Plus came only in one version (American) and was not officially sold anywhere outside the US.
The Apple IIc Plus ceased production in 1990, with its two-year production run being the shortest of all the Apple II computers. Apple IIe Card Although not an extension of the Apple II line, the Apple IIe Card, an expansion card for the Macintosh LC, was released in 1990. Essentially a miniaturized Apple IIe computer on a card (using the Mega II chip from the Apple IIGS), it allowed the Macintosh to run 8-bit Apple IIe software through hardware emulation, with an option to run at roughly double the speed of the original IIe (about 1.8 MHz). However, the video output was emulated in software, and, depending on how much of the screen the currently running program was trying to update in a single frame, performance could be much slower compared to a real IIe. This is because writes from the 65C02 on the IIe Card to video memory were caught by the additional hardware on the card so that the video emulation software running on the Macintosh side could process each write and update the video display. But while the Macintosh was processing video updates, execution of Apple II code would be temporarily halted. With a breakout cable which connected to the back of the card, the user could attach up to two UniDisk or Apple 5.25 Drives, up to one UniDisk 3.5 drive, and a DE-9 Apple II joystick. Many of the LC's built-in Macintosh peripherals could also be "borrowed" by the card when in Apple II mode, including extra RAM, the Mac's internal 3.5-inch floppy drives, AppleTalk networking, any ProDOS-formatted hard disk partitions, the serial ports, mouse, and real-time clock. The IIe Card could not, however, run software intended for the 16-bit Apple IIGS. Advertising, marketing, and packaging Mike Markkula, a retired Intel marketing manager, provided the early critical funding for Apple Computer. From 1977 to 1981, Apple used the Regis McKenna agency for its advertisements and marketing. In 1981, Chiat-Day acquired Regis McKenna's advertising operations, and Apple used Chiat-Day. At Regis McKenna Advertising, the team assigned to launch the Apple II consisted of Rob Janoff, art director; Chip Schafer, copywriter; and Bill Kelley, account executive. Janoff came up with the Apple logo with a bite out of it. The design was originally an olive green with a matching company logotype all in lowercase. Steve Jobs insisted on promoting the color capability of the Apple II by putting rainbow stripes on the Apple logo. In its letterhead and business card implementation, the rounded "a" of the logotype echoed the "bite" in the logo. This logo was developed simultaneously with an advertisement and a brochure, the latter produced for distribution initially at the first West Coast Computer Faire. Since the original Apple II, Apple has paid close attention to the quality of its packaging, partly because of Steve Jobs' personal preferences and opinions on packaging and final product appearance. All of Apple's packaging for the Apple II series looked similar, featuring much clean white space and showing the Apple rainbow logo prominently. For several years up until the late 1980s, Apple used the Motter Tektura font for packaging, until changing to the Apple Garamond font. Apple ran the first advertisement for the Apple II, a two-page spread ad titled "Introducing Apple II", in BYTE in July 1977. The first brochure was entitled "Simplicity", and the copy in both the ad and the brochure pioneered "demystifying" language intended to make the new idea of a home computer more "personal."
The Apple II introduction ad was later run in the September 1977 issue of Scientific American. Apple later aired eight television commercials for the Apple IIGS, emphasizing its benefits to education and students, along with some print ads. Clones The Apple II was frequently cloned, both in the United States and abroad, in a similar way to the IBM PC. According to some sources, more than 190 different models of Apple II clones were manufactured. Most could not be legally imported into the United States. Apple sued and sought criminal charges against clone makers in more than a dozen countries. Data storage Cassette Originally the Apple II used Compact Cassette tapes for program and data storage. A dedicated tape recorder along the lines of the Commodore Datasette was never produced; Apple recommended using the Panasonic RQ309 in some of its early printed documentation. The use of common consumer cassette recorders and a standard video monitor or television set (with a third-party RF modulator) made the total cost of owning an Apple II less expensive and helped contribute to the Apple II's success. Cassette storage may have been inexpensive, but it was also slow and unreliable. The Apple II's lack of a disk drive was "a glaring weakness" in what was otherwise intended to be a polished, professional product. Recognizing that the II needed a disk drive to be taken seriously, Apple set out to develop a disk drive and a DOS to run it. Wozniak spent the 1977 Christmas holidays designing a disk controller that reduced the number of chips used by a factor of 10 compared to existing controllers. Still lacking a DOS, and with Wozniak inexperienced in operating system design, Jobs approached Shepardson Microsystems with the project. On April 10, 1978, Apple signed a contract for $13,000 with Shepardson to develop the DOS. Even after disk drives made the cassette tape interfaces obsolete, they were still used by enthusiasts as simple one-bit audio input-output ports. Ham radio operators used the cassette input to receive slow-scan TV (single-frame images). A commercial speech-recognition Blackjack program was available; after some user-specific voice training, it would recognize simple commands ("hit", "stand"). Bob Bishop's "Music Kaleidoscope" was a simple program that monitored the cassette input port and, based on zero-crossings, created color patterns on the screen, a predecessor to current audio visualization plug-ins for media players. Music Kaleidoscope was especially popular on projection TV sets in dance halls. Disk Apple and many third-party developers made software available on tape at first, but after the Disk II became available in 1978, tape-based Apple II software essentially disappeared from the market. The initial price of the Disk II drive and controller was US$595, although a $100-off coupon was available through the Apple newsletter "Contact". The controller could handle two drives, and a second drive (without controller) retailed for $495. The Disk II single-sided floppy drive used 5.25-inch floppy disks; double-sided disks could be used, one side at a time, by turning them over and notching a hole for the write-protect sensor. The first disk operating systems for the Apple II were DOS 3.1 and DOS 3.2, which stored 113.75 KB on each disk, organized into 35 tracks of thirteen 256-byte sectors each. After about two years, DOS 3.3 was introduced, storing 140 KB thanks to a minor firmware change on the disk controller that allowed it to store 16 sectors per track.
(This upgrade was user-installable as two PROMs on older controllers.) After the release of DOS 3.3, the user community discontinued use of DOS 3.2 except for running legacy software. Programs that required DOS 3.2 were fairly rare; however, as DOS 3.3 was not a major architectural change aside from the number of sectors per track, a program called MUFFIN was provided with DOS 3.3 to allow users to copy files from DOS 3.2 disks to DOS 3.3 disks. It was possible for software developers to create a DOS 3.2 disk which would also boot on a system with DOS 3.3 firmware. Later, double-sided drives, with heads to read both sides of the disk, became available from third-party companies. (Apple only produced double-sided 5.25-inch disks for the Lisa 1 computer.) On a DOS 3.x disk, tracks 0, 1, and most of track 2 were reserved to store the operating system. (It was possible, with a special utility, to reclaim most of this space for data if a disk did not need to be bootable.) A short ROM program on the disk controller had the ability to seek to track zero – which it did without regard for the read/write head's current position, resulting in the characteristic "chattering" sound of a Disk II boot, which was the read/write head hitting the rubber stop block at the end of the rail – and to read and execute code from sector 0. The code contained there would then pull in the rest of the operating system. DOS stored the disk's directory on track 17, in the middle of the 35-track disks, in order to reduce the average seek time to the frequently used directory track. The directory was fixed in size and could hold a maximum of 105 files. Subdirectories were not supported. Most game publishers did not include DOS on their floppy disks, since they needed the memory it occupied more than its capabilities; instead, they often wrote their own boot loaders and read-only file systems. This also served to discourage "crackers" from snooping around in the game's copy-protection code, since the data on the disk was not in files that could be accessed easily. Some third-party manufacturers produced floppy drives that could write 40 tracks to most 5.25-inch disks, yielding 160 KB of storage per disk, but the format did not catch on widely, and no known commercial software was published on 40-track media. Most drives, even Disk IIs, could write 36 tracks; a two-byte modification to DOS to format the extra track was common. The Apple Disk II stored 140 KB on single-sided, "single-density" floppy disks, but it was very common for Apple II users to extend the capacity of a single-sided floppy disk to 280 KB by cutting out a second write-protect notch on the side of the disk using a "disk notcher" or hole puncher and inserting the disk flipped over. Double-sided disks, with notches on both sides, were available at a higher price, but in practice the magnetic coating on the reverse of nominally single-sided disks was usually of good enough quality to be used (both sides were coated in the same way to prevent warping, although only one side was certified for use). Early on, diskette manufacturers routinely warned that this technique would damage the read/write head of the drives or wear out the disk faster, and these warnings were frequently repeated in magazines of the day. In practice, however, this method was an inexpensive way to store twice as much data, and it was widely used for commercially released floppies as well. Later, Apple IIs were able to use 3.5-inch disks with a total capacity of 800 KB, as well as hard disks.
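The capacity figures quoted above follow directly from the disk geometry: tracks per disk, times sectors per track, times 256 bytes per sector. The short Python sketch below reproduces that arithmetic for the DOS 3.2, DOS 3.3 and 40-track layouts discussed in this section; the geometries come from the text, while the helper function itself is purely illustrative.

# Sketch: Disk II capacities implied by the geometries described above.

BYTES_PER_SECTOR = 256


def capacity_kb(tracks: int, sectors_per_track: int) -> float:
    """Raw capacity in kilobytes (1 KB = 1024 bytes) for one disk side."""
    return tracks * sectors_per_track * BYTES_PER_SECTOR / 1024


layouts = {
    "DOS 3.2 (35 tracks x 13 sectors)": (35, 13),  # 113.75 KB
    "DOS 3.3 (35 tracks x 16 sectors)": (35, 16),  # 140 KB
    "40-track third-party format":      (40, 16),  # 160 KB
}

for name, (tracks, sectors) in layouts.items():
    print(f"{name}: {capacity_kb(tracks, sectors):.2f} KB per side")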
DOS 3.3 did not support these drives natively; third-party software was required, and disks larger than about 400 KB had to be split up into multiple "virtual disk volumes." DOS 3.3 was succeeded by ProDOS, a 1983 descendant of the Apple ///'s SOS. It added support for subdirectories and volumes up to 32 MB in size. ProDOS became the DOS of choice; AppleWorks and other newer programs required it. Legacy The Apple II series of computers had an enormous impact on the technology industry and expanded the role of microcomputers in society. The Apple II was the first personal computer many people ever saw. Its price was within the reach of many middle-class families, and a partnership with MECC helped make the Apple II popular in schools. By the end of 1980 Apple had already sold over 100,000 Apple IIs, and by the introduction of the IIGS, millions of models in the range had been sold. However, in other markets the range saw rather more limited adoption, with only 120,000 units selling in the UK over this nine-year period. The Apple II's popularity bootstrapped the computer game and educational software markets and began the boom in the word processor and computer printer markets. The first spreadsheet application, VisiCalc, was initially released for the Apple II, and many businesses bought Apple IIs just to run it. Its success drove IBM in part to create the IBM PC, which many businesses purchased to run spreadsheet and word processing software, at first ported from Apple II versions. The Apple II's slots, allowing any peripheral card to take control of the bus and directly access memory, enabled an independent industry of card manufacturers who together created a flood of hardware products that let users build systems that were far more powerful and useful (at a lower cost) than any competing system, most of which were not nearly as expandable and were universally proprietary. The first peripheral card was a blank prototyping card intended for electronics enthusiasts who wanted to design their own peripherals for the Apple II. Specialty peripherals kept the Apple II in use in industry and education environments for many years after Apple Computer stopped supporting the Apple II. Well into the 1990s, every clean room (the super-clean facility where spacecraft are prepared for flight) at the Kennedy Space Center used an Apple II to monitor the environment and air quality. Most planetariums used Apple IIs to control their projectors and other equipment. Even the game port was unusually powerful and could be used for digital and analog input and output. The early manuals included instructions for how to build a circuit with only four commonly available components (one transistor and three resistors) and a software routine to drive a common Teletype Model 33 machine. Don Lancaster used the game port I/O to drive a LaserWriter printer. Modern use Today, emulators for various Apple II models are available to run Apple II software on macOS, Linux, Microsoft Windows, homebrew-enabled Nintendo DS, and other operating systems. Numerous disk images of Apple II software are available free over the Internet for use with these emulators. AppleWin and MESS are among the best emulators compatible with most Apple II images. The MESS emulator supports recording and playing back of Apple II emulation sessions, as does Home Action Replay Page (a.k.a. HARP). There is still a small annual convention, KansasFest, dedicated to the platform.
In 2017, the band 8 Bit Weapon released the world's first 100% Apple II-based music album, entitled "Class Apples". The album featured dance-oriented cover versions of classical music by Bach, Beethoven, and Mozart, recorded directly off the Apple II motherboard.
Technology
Early computers
null
2120
https://en.wikipedia.org/wiki/Aliphatic%20compound
Aliphatic compound
In organic chemistry, hydrocarbons (compounds composed solely of carbon and hydrogen) are divided into two classes: aromatic compounds and aliphatic compounds (from Greek aleiphar, "fat, oil"). Aliphatic compounds can be saturated (in which all the C–C bonds are single, requiring the structure to be completed, or 'saturated', by hydrogen), like hexane, or unsaturated, like hexene and hexyne. Open-chain compounds, whether straight or branched, which contain no rings of any type, are always aliphatic. Cyclic compounds can be aliphatic if they are not aromatic. Structure Aliphatic compounds can be saturated, joined by single bonds (alkanes), or unsaturated, with double bonds (alkenes) or triple bonds (alkynes). If other elements (heteroatoms) are bound to the carbon chain, the most common being oxygen, nitrogen, sulfur, and chlorine, the compound is no longer a hydrocarbon, and therefore no longer an aliphatic compound. However, such compounds may still be referred to as aliphatic if the hydrocarbon portion of the molecule is aliphatic, e.g. aliphatic amines, to differentiate them from aromatic amines. The least complex aliphatic compound is methane (CH4). Properties Most aliphatic compounds are flammable, allowing the use of hydrocarbons as fuel, such as methane in natural gas for stoves or heating; butane in torches and lighters; various aliphatic (as well as aromatic) hydrocarbons in liquid transportation fuels like petrol/gasoline, diesel, and jet fuel; and other uses such as ethyne (acetylene) in welding. Examples of aliphatic compounds The most important aliphatic compounds are: n-, iso- and cyclo-alkanes (saturated hydrocarbons); and n-, iso- and cyclo-alkenes and -alkynes (unsaturated hydrocarbons). Important examples of low-molecular aliphatic compounds are commonly listed by the number of carbon atoms.
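The saturated and unsaturated classes described above can be summarised by general molecular formulas. The following LaTeX fragment lists the standard textbook formulas for the acyclic (open-chain) members of each class; these are general chemistry facts rather than material drawn from this article.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% General formulas for acyclic aliphatic hydrocarbons with n carbon atoms:
\begin{align*}
\text{alkanes (saturated, single bonds only):} &\quad \mathrm{C}_n\mathrm{H}_{2n+2} && \text{e.g. hexane, } \mathrm{C_6H_{14}} \\
\text{alkenes (one double bond):}              &\quad \mathrm{C}_n\mathrm{H}_{2n}   && \text{e.g. hexene, } \mathrm{C_6H_{12}} \\
\text{alkynes (one triple bond):}              &\quad \mathrm{C}_n\mathrm{H}_{2n-2} && \text{e.g. hexyne, } \mathrm{C_6H_{10}}
\end{align*}
\end{document}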
Physical sciences
Aliphatic hydrocarbons
Chemistry
2147
https://en.wikipedia.org/wiki/Armour
Armour
Armour (Commonwealth English) or armor (American English; see spelling differences) is a covering used to protect an object, individual, or vehicle from physical injury or damage, especially from direct-contact weapons or projectiles during combat, or from a potentially dangerous environment or activity (e.g. cycling, construction sites, etc.). Personal armour is used to protect soldiers and war animals. Vehicle armour is used on warships, armoured fighting vehicles, and some combat aircraft, mostly ground-attack aircraft. A second use of the term armour describes armoured forces, armoured weapons, and their role in combat. After the development of armoured warfare, tanks and mechanised infantry and their combat formations came to be referred to collectively as "armour". Etymology The word "armour" began to appear in the Middle Ages as a derivative of Old French. It is dated from 1297 as a "mail, defensive covering worn in combat". The word originates from the Old French armure, itself derived from the Latin armatura meaning "arms and/or equipment", with the root arma meaning "arms or gear". Personal Armour has been used throughout recorded history. It has been made from a variety of materials, beginning with the use of leathers or fabrics as protection and evolving through chain mail and metal plate into today's modern composites. For much of military history the manufacture of metal personal armour has dominated the technology and employment of armour. Armour drove the development of many important technologies of the Ancient World, including wood lamination, mining, metal refining, vehicle manufacture, leather processing, and later decorative metal working. Its production was influential in the Industrial Revolution, and furthered commercial development of metallurgy and engineering. Armour was also an important factor in the development of firearms, which in turn revolutionised warfare. History Significant factors in the development of armour include the economic and technological necessities of its production. For instance, plate armour first appeared in Medieval Europe when water-powered trip hammers made the formation of plates faster and cheaper. At times the development of armour has paralleled the development of increasingly effective weaponry on the battlefield, with armourers seeking to create better protection without sacrificing mobility. Well-known armour types in European history include the lorica hamata, lorica squamata, and the lorica segmentata of the Roman legions, the mail hauberk of the early medieval age, the full steel plate harness worn by later medieval and renaissance knights, and the breast and back plates worn by heavy cavalry in several European countries until the first year of World War I (1914–1915). The samurai warriors of feudal Japan utilised many types of armour for hundreds of years, up to the 19th century. Early The first record of body armour in history was found on the Stele of Vultures in ancient Sumer, in today's southern Iraq, and various forms of scale mail can be seen in surviving records from the New Kingdom of Egypt, Zhou dynasty China, and dynastic India. Cuirasses and helmets were manufactured in Japan as early as the 4th century. Tankō, worn by foot soldiers, and keikō, worn by horsemen, were both pre-samurai types of early Japanese armour constructed from iron plates connected together by leather thongs. Japanese lamellar armour (keikō) passed through Korea and reached Japan around the 5th century.
These early Japanese lamellar armours took the form of a sleeveless jacket, leggings, and a helmet. Armour did not always cover all of the body; sometimes no more than a helmet and leg plates were worn. The rest of the body was generally protected by means of a large shield. Examples of armies equipping their troops in this fashion were the Aztecs (13th to 15th century CE). In East Asia, many types of armour were commonly used at different times by various cultures, including scale armour, lamellar armour, laminar armour, plated mail, mail, plate armour, and brigandine. Around the Tang, Song, and early Ming periods, cuirasses and plates (mingguangjia) were also used, with more elaborate versions for officers in war. The Chinese of that time used partial plates for "important" body parts instead of covering the whole body, since too much plate armour hindered movement in their martial arts. The other body parts were covered in cloth, leather, lamellar, or mountain pattern armour. In pre-Qin dynasty times, leather armour was made from the hides of various animals, including more exotic ones such as the rhinoceros. Mail, sometimes called "chainmail", made of interlocking iron rings, is believed to have first appeared some time after 300 BC. Its invention is credited to the Celts; the Romans are thought to have adopted their design. Gradually, small additional plates or discs of iron were added to the mail to protect vulnerable areas. Hardened leather and splinted construction were used for arm and leg pieces. The coat of plates was developed, an armour made of large plates sewn inside a textile or leather coat. 13th to 18th century Europe Early plate armour in Italy, and elsewhere in the 13th–15th centuries, was made of iron. Iron armour could be carburised or case-hardened to give a surface of harder steel. Plate armour became cheaper than mail by the 15th century, as it required much less labour, and labour had become much more expensive after the Black Death, though it did require larger furnaces to produce larger blooms. Mail continued to be used to protect those joints which could not be adequately protected by plate, such as the armpit, the crook of the elbow, and the groin. Another advantage of plate was that a lance rest could be fitted to the breastplate. The small skull cap evolved into a bigger true helmet, the bascinet, as it was lengthened downward to protect the back of the neck and the sides of the head. Additionally, several new forms of fully enclosed helmets were introduced in the late 14th century. Probably the most recognised style of armour in the world became the plate armour associated with the knights of the European Late Middle Ages, but continuing into the early 17th-century Age of Enlightenment in all European countries. By 1400, the full harness of plate armour had been developed in armouries of Lombardy. Heavy cavalry dominated the battlefield for centuries in part because of their armour. In the early 15th century, advances in weaponry allowed infantry to defeat armoured knights on the battlefield. The quality of the metal used in armour deteriorated as armies became bigger and armour was made thicker, necessitating the breeding of larger cavalry horses. Whereas armour seldom weighed more than 15 kg during the 14th and 15th centuries, by the late 16th century it weighed 25 kg. The increasing weight and thickness of late-16th-century armour therefore gave it substantial resistance.
In the early years of low-velocity firearms, full suits of armour, or breastplates, actually stopped bullets fired from a modest distance. Crossbow bolts, if still in use, would seldom penetrate good plate, nor would any bullet unless fired from close range. In effect, rather than making plate armour obsolete, the use of firearms stimulated the development of plate armour into its later stages. For most of that period, it allowed horsemen to fight while being the targets of defending arquebusiers without being easily killed. Full suits of armour were actually worn by generals and princely commanders right up to the second decade of the 18th century. It was the only way they could be mounted and survey the overall battlefield with safety from distant musket fire. The horse was afforded protection from lances and infantry weapons by steel plate barding. This gave the horse protection and enhanced the visual impression of a mounted knight. Late in the era, elaborate barding was used in parade armour. Later Gradually, starting in the mid-16th century, one plate element after another was discarded to save weight for foot soldiers. Back and breast plates continued to be used throughout the entire period of the 18th century and through Napoleonic times, in many European heavy cavalry units, until the early 20th century. From their introduction, muskets could pierce plate armour, so cavalry had to be far more mindful of the fire. In Japan, armour continued to be used until the late 19th century; the last major fighting in which armour was used occurred in 1868. Samurai armour had one last short-lived use in 1877, during the Satsuma Rebellion. Though the age of the knight was over, armour continued to be used in many capacities. Soldiers in the American Civil War bought iron and steel vests from peddlers (both sides had considered, but rejected, body armour for standard issue). The effectiveness of the vests varied widely; some successfully deflected bullets and saved lives, but others were poorly made and resulted in tragedy for the soldiers. In any case, many soldiers abandoned the vests because of their added weight on long marches, as well as the stigma attached to wearing them, since fellow troops regarded the wearers as cowards. At the start of World War I, thousands of the French Cuirassiers rode out to engage the German Cavalry. By that period, the shiny metallic cuirass was covered in dark paint and a canvas wrap covered their elaborate Napoleonic-style helmets, to reduce reflected sunlight that would otherwise alert the enemy to their location. Their armour was only meant for protection against edged weapons such as bayonets, sabres, and lances. Cavalry had to be wary of repeating rifles, machine guns, and artillery, unlike the foot soldiers, who at least had a trench to give them some protection. Present Today, ballistic vests, also known as flak jackets, made of ballistic cloth (e.g. Kevlar, Dyneema, Twaron, Spectra, etc.) and ceramic or metal plates are common among police officers, security guards, corrections officers, and some branches of the military. The US Army has adopted Interceptor body armour, which uses Enhanced Small Arms Protective Inserts (ESAPIs) in the chest, sides, and back of the armour. Each plate is rated to stop a range of ammunition, including three hits from a 7.62×51 NATO AP round. Dragon Skin is another ballistic vest, which was tested with mixed results.
As of 2019, it had been deemed too heavy, expensive, and unreliable in comparison to more traditional plates; it is also outdated in protection compared to modern US IOTV armour, and even in testing it was deemed a downgrade from the IBA. The British Armed Forces also have their own armour, known as Osprey. It is rated to the same general equivalent standard as its US counterpart, the Improved Outer Tactical Vest, and now the Soldier Plate Carrier System and Modular Tactical Vest. The Russian Armed Forces also have their own armour, known as the 6B43 through 6B45, depending on the variant. Their armour is rated on the GOST system, which, due to regional conditions, has resulted in a technically higher protective level overall. Vehicle The first modern production technology for armour plating was used by navies in the construction of the ironclad warship, reaching its pinnacle of development with the battleship. The first tanks were produced during World War I. Aerial armour has been used to protect pilots and aircraft systems since the First World War. In modern ground forces' usage, the meaning of armour has expanded to include the role of troops in combat. After the evolution of armoured warfare, mechanised infantry were mounted in armoured fighting vehicles and replaced light infantry in many situations. In modern armoured warfare, armoured units equipped with tanks and infantry fighting vehicles serve the historic role of heavy cavalry, light cavalry, and dragoons, and belong to the armoured branch of warfare. History Ships The first ironclad battleship, with iron armour over a wooden hull, Gloire, was launched by the French Navy in 1859, prompting the British Royal Navy to build a counter. The following year they launched HMS Warrior, which was twice the size and had iron armour over an iron hull. After the first battle between two ironclads took place in 1862 during the American Civil War, it became clear that the ironclad had replaced the unarmoured line-of-battle ship as the most powerful warship afloat. Ironclads were designed for several roles, including as high seas battleships, coastal defence ships, and long-range cruisers. The rapid evolution of warship design in the late 19th century transformed the ironclad from a wooden-hulled vessel which carried sails to supplement its steam engines into the steel-built, turreted battleships and cruisers familiar in the 20th century. This change was pushed forward by the development of heavier naval guns (the ironclads of the 1880s carried some of the heaviest guns ever mounted at sea), more sophisticated steam engines, and advances in metallurgy which made steel shipbuilding possible. The rapid pace of change in the ironclad period meant that many ships were obsolete as soon as they were complete, and that naval tactics were in a state of flux. Many ironclads were built to make use of the ram or the torpedo, which a number of naval designers considered the crucial weapons of naval combat. There is no clear end to the ironclad period, but towards the end of the 1890s the term ironclad dropped out of use. New ships were increasingly constructed to a standard pattern and designated battleships or armoured cruisers. Trains Armoured trains saw use from the mid-19th to the mid-20th century, including during the American Civil War (1861–1865), the Franco-Prussian War (1870–1871), the First and Second Boer Wars (1880–81 and 1899–1902), the Polish–Soviet War (1919–1921), the First (1914–1918) and Second World Wars (1939–1945), and the First Indochina War (1946–1954).
The most intensive use of armoured trains was during the Russian Civil War (1918–1920). Armoured fighting vehicles Ancient siege engines were usually protected by wooden armour, often covered with wet hides or thin metal to prevent them being easily burned. Medieval war wagons were horse-drawn wagons that were similarly armoured. These contained guns or crossbowmen that could fire through gun-slits. The first modern armoured fighting vehicles were armoured cars, developed around 1900. These started as ordinary wheeled motor-cars protected by iron shields, typically mounting a machine gun. During the First World War, the stalemate of trench warfare on the Western Front spurred the development of the tank. It was envisioned as an armoured machine that could advance under fire from enemy rifles and machine guns, and respond with its own heavy guns. It used caterpillar tracks to cross ground broken up by shellfire and trenches. Aircraft With the development of effective anti-aircraft artillery in the period before the Second World War, military pilots, once the "knights of the air" during the First World War, became far more vulnerable to ground fire. As a response, armour plating was added to aircraft to protect aircrew and vulnerable areas such as engines and fuel tanks. Self-sealing fuel tanks functioned like armour in that they added protection but also increased weight and cost. Present Tank armour has progressed from the Second World War armour forms, now incorporating not only harder composites but also reactive armour designed to defeat shaped charges. As a result of this, the main battle tank (MBT) conceived in the Cold War era can survive multiple rocket-propelled grenade strikes with minimal effect on the crew or the operation of the vehicle. The light tanks that were the last descendants of the light cavalry during the Second World War have almost completely disappeared from the world's militaries due to the increased lethality of the weapons available to vehicle-mounted infantry. The armoured personnel carrier (APC) was devised during the First World War. It allows the safe and rapid movement of infantry in a combat zone, minimising casualties and maximising mobility. APCs are fundamentally different from the previously used armoured half-tracks in that they offer a higher level of protection from artillery burst fragments, and greater mobility in more terrain types. The basic APC design was substantially expanded to an infantry fighting vehicle (IFV) when properties of an APC and a light tank were combined in one vehicle. Naval armour has fundamentally changed from the Second World War doctrine of thicker plating to defend against shells, bombs and torpedoes. Passive defence naval armour is limited to kevlar or steel (either single layer or as spaced armour) protecting particularly vital areas from the effects of nearby impacts. Since ships cannot carry enough armour to completely protect against anti-ship missiles, they depend more on defensive weapons destroying incoming missiles, or causing them to miss by confusing their guidance systems with electronic warfare. Although the role of the ground-attack aircraft significantly diminished after the Korean War, it re-emerged during the Vietnam War, and in recognition of this, the US Air Force authorised the design and production of what became the A-10 dedicated anti-armour and ground-attack aircraft, which first saw action in the Gulf War.
High-voltage transformer fire barriers are often required to defeat ballistics from small arms as well as projectiles from transformer bushings and lightning arresters, which form part of large electrical transformers, per NFPA 850. Such fire barriers may be designed to function inherently as armour, or may be passive fire protection materials augmented by armour. Care must be taken to ensure that the armour's reaction to fire does not compromise the fire barrier, since the barrier must defeat explosions and projectiles in addition to fire; because both functions must be provided simultaneously, the combination must be fire-tested together to provide realistic evidence of fitness for purpose. Combat drones use little to no vehicular armour, as they are not crewed vessels; this allows them to be lightweight and small in size. Animal armour Horse armour Body armour for war horses has been used since at least 2000 BC. Cloth, leather, and metal protection covered cavalry horses in ancient civilisations, including ancient Egypt, Assyria, Persia, and Rome. Some formed heavy cavalry units of armoured horses and riders, used to attack infantry and mounted archers. Armour for horses is called barding (also spelled bard or barb), especially when used by European knights. During the late Middle Ages, as armour protection for knights became more effective, their mounts became targets. This vulnerability was exploited by the Scots at the Battle of Bannockburn in the 14th century, when horses were killed by the infantry, and by the English at the Battle of Crécy in the same century, where longbowmen shot horses and the then-dismounted French knights were killed by heavy infantry. Barding developed as a response to such events. Examples of armour for horses could be found as far back as classical antiquity. Cataphracts, with scale armour for both rider and horse, are believed by many historians to have influenced the later European knights, via contact with the Byzantine Empire. Surviving period examples of barding are rare; however, complete sets are on display at the Philadelphia Museum of Art, the Wallace Collection in London, the Royal Armouries in Leeds, and the Metropolitan Museum of Art in New York City. Horse armour could be made in whole or in part of cuir bouilli (hardened leather), but surviving examples of this are especially rare. Elephant armour War elephants were first used in ancient times without armour, but armour was introduced because elephants injured by enemy weapons would often flee the battlefield. Elephant armour was often made from hardened leather, which was fitted onto an individual elephant while moist, then dried to create a hardened shell. Alternatively, metal armour pieces were sometimes sewn into heavy cloth. Later, lamellar armour (small overlapping metal plates) was introduced. Full plate armour was not typically used due to its expense and the danger of the animal overheating.
Armadillo
Armadillos () are New World placental mammals in the order Cingulata. They form part of the superorder Xenarthra, along with the anteaters and sloths. 21 extant species of armadillo have been described, some of which are distinguished by the number of bands on their armor. All species are native to the Americas, where they inhabit a variety of different environments. Living armadillos are characterized by a leathery armor shell and long, sharp claws for digging. They have short legs, but can move quite quickly. The average length of an armadillo is about , including its tail. The giant armadillo grows up to and weighs up to , while the pink fairy armadillo has a length of only . When threatened by a predator, Tolypeutes species frequently roll up into a ball; they are the only species of armadillo capable of this. Recent genetic research has shown that the megafaunal glyptodonts (up to tall with maximum body masses of around 2 tonnes), which became extinct around 12,000 years ago are true armadillos more closely related to all other living armadillos than to Dasypus (the long-nosed or naked-tailed armadillos). Armadillos are currently classified into two families, Dasypodidae, with Dasypus as the only living genus, and Chlamyphoridae, which contains all other living armadillos as well as the glyptodonts. Etymology The word means in Spanish; it is derived from , with the diminutive suffix attached. While the phrase little armored one would translate to normally, the suffix can be used in place of when the diminutive is used in an approximative tense. The Aztecs called them , Nahuatl for : and . The Portuguese word for is which is derived from the Tupi language and ; and used in Argentina, Bolivia, Brazil, Paraguay and Uruguay; similar names are also found in other, especially European, languages. Other various vernacular names given are: (from ) in Argentina, Bolivia, Chile, Colombia and Peru; (from Nahuatl) in Costa Rica, El Salvador, Honduras and Nicaragua; in Argentina and Uruguay; in Argentina, Chile, Colombia and Uruguay; in Argentina, Brazil, Chile, Colombia and Paraguay; in Colombia and Venezuela in Tolima, Caldas and Antioquia, Colombia; in Caribbean Colombia; in southeast Mexico; in the state of Veracruz, Mexico; in Perú. 
Classification Family Dasypodidae Subfamily Dasypodinae Genus Dasypus Nine-banded armadillo or long-nosed armadillo, Dasypus novemcinctus Seven-banded armadillo, Dasypus septemcinctus Southern long-nosed armadillo, Dasypus hybridus Llanos long-nosed armadillo, Dasypus sabanicola Greater long-nosed armadillo, Dasypus kappleri Hairy long-nosed armadillo, Dasypus pilosus Yepes's mulita, Dasypus yepesi †Beautiful armadillo, Dasypus bellus †Dasypus neogaeus Genus †Stegotherium Family Chlamyphoridae Subfamily Chlamyphorinae Genus Calyptophractus Greater fairy armadillo, Calyptophractus retusus Genus Chlamyphorus Pink fairy armadillo, Chlamyphorus truncatus Subfamily Euphractinae Genus Chaetophractus Screaming hairy armadillo, Chaetophractus vellerosus Big hairy armadillo, Chaetophractus villosus Andean hairy armadillo, Chaetophractus nationi Genus †Macroeuphractus Genus †Paleuphractus Genus †Proeuphractus Genus †Doellotatus Genus †Peltephilus †Horned armadillo, Peltephilus ferox Genus Euphractus Six-banded armadillo, Euphractus sexcinctus Genus Zaedyus Pichi, Zaedyus pichiy Subfamily Tolypeutinae Genus †Kuntinaru Genus Cabassous Northern naked-tailed armadillo, Cabassous centralis Chacoan naked-tailed armadillo, Cabassous chacoensis Southern naked-tailed armadillo, Cabassous unicinctus Greater naked-tailed armadillo, Cabassous tatouay Genus Priodontes Giant armadillo, Priodontes maximus Genus Tolypeutes Southern three-banded armadillo, Tolypeutes matacus Brazilian three-banded armadillo, Tolypeutes tricinctus † indicates extinct taxon Phylogeny Below is a recent simplified phylogeny of the xenarthran families, which includes armadillos. The dagger symbol, "†", denotes extinct groups. Evolution Recent genetic research suggests that an extinct group of giant armored mammals, the glyptodonts, should be included within the lineage of armadillos, having diverged some 35 million years ago, more recently than previously assumed. Distribution Like all of the Xenarthra lineages, armadillos originated in South America. Due to the continent's former isolation, they were confined there for most of the Cenozoic. The recent formation of the Isthmus of Panama allowed a few members of the family to migrate northward into southern North America by the early Pleistocene, as part of the Great American Interchange. (Some of their much larger cingulate relatives, the pampatheres and chlamyphorid glyptodonts, made the same journey.) Today, all extant armadillo species are still present in South America. They are particularly diverse in Paraguay (where 11 species exist) and surrounding areas. Many species are endangered. Some, including four species of Dasypus, are widely distributed over the Americas, whereas others, such as Yepes's mulita, are restricted to small ranges. Two species, the northern naked-tailed armadillo and nine-banded armadillo, are found in Central America; the latter has also reached the United States, primarily in the south-central states (notably Texas), but with a range that extends as far east as North Carolina and Florida, and as far north as southern Nebraska and southern Indiana. Their range has consistently expanded in North America over the last century due to a lack of natural predators. Armadillos are increasingly documented in southern Illinois and are tracking northwards due to climate change. Characteristics Size The smallest species of armadillo, the pink fairy armadillo, weighs around and is in total length. 
The largest species, the giant armadillo, can weigh up to , and can be long. Diet and predation The diets of different armadillo species vary, but consist mainly of insects, grubs, and other invertebrates. Some species, however, feed almost entirely on ants and termites. They are prolific diggers. Many species use their sharp claws to dig for food, such as grubs, and to dig dens. The nine-banded armadillo prefers to build burrows in moist soil near the creeks, streams, and arroyos around which it lives and feeds. Armadillos have very poor eyesight, and use their keen sense of smell to hunt for food. They use their claws not only for digging and finding food but also for digging burrows for their dwellings, each of which is a single corridor the width of the animal's body. They have five clawed toes on their hind feet, and three to five toes with heavy digging claws on their fore feet. Armadillos have numerous cheek teeth which are not divided into premolars and molars, but usually have no incisors or canines. The dentition of the nine-banded armadillo is P 7/7, M 1/1 = 32. Body temperature In common with other xenarthrans, armadillos, in general, have low body temperatures of and low basal metabolic rates (40–60% of that expected in placental mammals of their mass). This is particularly true of types that specialize in using termites as their primary food source (for example, Priodontes and Tolypeutes). Skin The armor is formed by plates of dermal bone covered in relatively small overlapping epidermal scales called "scutes" which are composed of keratin. The skin of an armadillo can glow under ultraviolet light. Most species have rigid shields over the shoulders and hips, with a number of bands separated by flexible skin covering the back and flanks. Additional armor covers the top of the head, the upper parts of the limbs, and the tail. The underside of the animal is never armored and is simply covered with soft skin and fur. This armor-like skin appears to be an important defense for many armadillos, although most escape predators by fleeing (often into thorny patches, from which their armor protects them) or digging to safety. Only the South American three-banded armadillos (Tolypeutes) rely heavily on their armor for protection. Defensive behavior When threatened by a predator, Tolypeutes species frequently roll up into a ball. Other armadillo species cannot roll up because they have too many plates. When surprised, the North American nine-banded armadillo tends to jump straight in the air, which can lead to a fatal collision with the undercarriage or fenders of passing vehicles. Movement Armadillos have short legs, but can move quite quickly. The nine-banded armadillo is noted for its movement through water, which is accomplished via two different methods: it can walk underwater for short distances, holding its breath for as long as six minutes; or, to cross larger bodies of water, it can increase its buoyancy by swallowing air to inflate its stomach and intestines. Reproduction Gestation lasts from 60 to 120 days, depending on species, although the nine-banded armadillo also exhibits delayed implantation, so the young are not typically born for eight months after mating. Most members of the genus Dasypus give birth to four monozygotic young (that is, identical quadruplets), but other species may have typical litter sizes that range from one to eight. The young are born with soft, leathery skin which hardens within a few weeks. 
They reach sexual maturity in three to twelve months, depending on the species. Armadillos are solitary animals that do not share their burrows with other adults. Armadillos and humans Science and education Armadillos are often used in the study of leprosy, since they, along with mangabey monkeys, rabbits, and mice (on their footpads), are among the few known species that can contract the disease systemically. They are particularly susceptible due to their unusually low body temperature, which is hospitable to the leprosy bacterium, Mycobacterium leprae. (The leprosy bacterium is difficult to culture and armadillos have a body temperature of , similar to human skin.) Humans can acquire a leprosy infection from armadillos by handling them or consuming armadillo meat. Armadillos are a presumed vector and natural reservoir for the disease in Texas, Louisiana and Florida. Prior to the arrival of Europeans in the late 15th century, leprosy was unknown in the New World. Given that armadillos are native to the New World, at some point they must have acquired the disease from old-world humans. The armadillo is also a natural reservoir for Chagas disease. The nine-banded armadillo also serves science through its unusual reproductive system, in which four genetically identical offspring are born, the result of one original egg. Because they are always genetically identical, the group of four young provides a good subject for scientific, behavioral, or medical tests that need consistent biological and genetic makeup in the test subjects. This is the only reliable manifestation of polyembryony in the class Mammalia, and exists only within the genus Dasypus and not in all armadillos, as is commonly believed. Other species that display this trait include parasitoid wasps, certain flatworms, and various aquatic invertebrates. Even though they have a leathery, tough shell, armadillos, (mainly Dasypus) are common roadkill due to their habit of jumping 3–4 ft vertically when startled, which puts them into collision with the underside of vehicles. Wildlife enthusiasts are using the northward march of the armadillo as an opportunity to educate others about the animals, which can be a burrowing nuisance to property owners and managers. Culture Armadillo shells have traditionally been used to make the back of the charango, an Andean lute instrument. In certain parts of Central and South America, armadillo meat is eaten; it is a popular ingredient in Oaxaca, Mexico. During the Great Depression, Americans were known to eat armadillo, known begrudgingly as "Hoover hogs", a nod to the belief that President Herbert Hoover was responsible for the economic despair facing the nation at that time. A whimsical account of The Beginning of the Armadillos is one of the chapters of Rudyard Kipling's Just So Stories 1902 children's book. The vocal and piano duo Flanders and Swann recorded a humorous song called "The Armadillo". Shel Silverstein wrote a two-line poem called "Instructions" on how to bathe an armadillo in his collection A Light in the Attic. The reference was "use one bar of soap, a whole lot of hope, and 72 pads of Brillo."
Analytic geometry
In mathematics, analytic geometry, also known as coordinate geometry or Cartesian geometry, is the study of geometry using a coordinate system. This contrasts with synthetic geometry. Analytic geometry is used in physics and engineering, and also in aviation, rocketry, space science, and spaceflight. It is the foundation of most modern fields of geometry, including algebraic, differential, discrete and computational geometry. Usually the Cartesian coordinate system is applied to manipulate equations for planes, straight lines, and circles, often in two and sometimes three dimensions. Geometrically, one studies the Euclidean plane (two dimensions) and Euclidean space. As taught in school books, analytic geometry can be explained more simply: it is concerned with defining and representing geometric shapes in a numerical way and extracting numerical information from shapes' numerical definitions and representations. That the algebra of the real numbers can be employed to yield results about the linear continuum of geometry relies on the Cantor–Dedekind axiom. History Ancient Greece The Greek mathematician Menaechmus solved problems and proved theorems by using a method that had a strong resemblance to the use of coordinates and it has sometimes been maintained that he had introduced analytic geometry. Apollonius of Perga, in On Determinate Section, dealt with problems in a manner that may be called an analytic geometry of one dimension; with the question of finding points on a line that were in a ratio to the others. Apollonius in the Conics further developed a method that is so similar to analytic geometry that his work is sometimes thought to have anticipated the work of Descartes by some 1800 years. His application of reference lines, a diameter and a tangent is essentially no different from our modern use of a coordinate frame, where the distances measured along the diameter from the point of tangency are the abscissas, and the segments parallel to the tangent and intercepted between the axis and the curve are the ordinates. He further developed relations between the abscissas and the corresponding ordinates that are equivalent to rhetorical equations (expressed in words) of curves. However, although Apollonius came close to developing analytic geometry, he did not manage to do so since he did not take into account negative magnitudes and in every case the coordinate system was superimposed upon a given curve a posteriori instead of a priori. That is, equations were determined by curves, but curves were not determined by equations. Coordinates, variables, and equations were subsidiary notions applied to a specific geometric situation. Persia The 11th-century Persian mathematician Omar Khayyam saw a strong relationship between geometry and algebra and was moving in the right direction when he helped close the gap between numerical and geometric algebra with his geometric solution of the general cubic equations, but the decisive step came later with Descartes. Omar Khayyam is credited with identifying the foundations of algebraic geometry, and his book Treatise on Demonstrations of Problems of Algebra (1070), which laid down the principles of analytic geometry, is part of the body of Persian mathematics that was eventually transmitted to Europe. Because of his thoroughgoing geometrical approach to algebraic equations, Khayyam can be considered a precursor to Descartes in the invention of analytic geometry. 
Western Europe Analytic geometry was independently invented by René Descartes and Pierre de Fermat, although Descartes is sometimes given sole credit. Cartesian geometry, the alternative term used for analytic geometry, is named after Descartes. Descartes made significant progress with the methods in an essay titled La Géométrie (Geometry), one of the three accompanying essays (appendices) published in 1637 together with his Discourse on the Method for Rightly Directing One's Reason and Searching for Truth in the Sciences, commonly referred to as Discourse on Method. La Geometrie, written in his native French tongue, and its philosophical principles, provided a foundation for calculus in Europe. Initially the work was not well received, due, in part, to the many gaps in arguments and complicated equations. Only after the translation into Latin and the addition of commentary by van Schooten in 1649 (and further work thereafter) did Descartes's masterpiece receive due recognition. Pierre de Fermat also pioneered the development of analytic geometry. Although not published in his lifetime, a manuscript form of Ad locos planos et solidos isagoge (Introduction to Plane and Solid Loci) was circulating in Paris in 1637, just prior to the publication of Descartes' Discourse. Clearly written and well received, the Introduction also laid the groundwork for analytical geometry. The key difference between Fermat's and Descartes' treatments is a matter of viewpoint: Fermat always started with an algebraic equation and then described the geometric curve that satisfied it, whereas Descartes started with geometric curves and produced their equations as one of several properties of the curves. As a consequence of this approach, Descartes had to deal with more complicated equations and he had to develop the methods to work with polynomial equations of higher degree. It was Leonhard Euler who first applied the coordinate method in a systematic study of space curves and surfaces. Coordinates In analytic geometry, the plane is given a coordinate system, by which every point has a pair of real number coordinates. Similarly, Euclidean space is given coordinates where every point has three coordinates. The value of the coordinates depends on the choice of the initial point of origin. There are a variety of coordinate systems used, but the most common are the following: Cartesian coordinates (in a plane or space) The most common coordinate system to use is the Cartesian coordinate system, where each point has an x-coordinate representing its horizontal position, and a y-coordinate representing its vertical position. These are typically written as an ordered pair (x, y). This system can also be used for three-dimensional geometry, where every point in Euclidean space is represented by an ordered triple of coordinates (x, y, z). Polar coordinates (in a plane) In polar coordinates, every point of the plane is represented by its distance r from the origin and its angle θ, with θ normally measured counterclockwise from the positive x-axis. Using this notation, points are typically written as an ordered pair (r, θ). One may transform back and forth between two-dimensional Cartesian and polar coordinates by using these formulae: This system may be generalized to three-dimensional space through the use of cylindrical or spherical coordinates. 
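The back-and-forth conversion between Cartesian and polar coordinates referred to above can be illustrated with a short sketch. The underlying formulae are the standard ones, x = r cos θ, y = r sin θ and r = √(x² + y²), θ = atan2(y, x); the function names below are chosen for this example only.

```python
import math

def polar_to_cartesian(r, theta):
    """Convert (r, θ) to (x, y); θ is measured counterclockwise from the positive x-axis."""
    return r * math.cos(theta), r * math.sin(theta)

def cartesian_to_polar(x, y):
    """Convert (x, y) to (r, θ); atan2 selects the correct quadrant for θ."""
    return math.hypot(x, y), math.atan2(y, x)

# Example: the point (1, 1) lies at distance √2 from the origin, at an angle of π/4.
r, theta = cartesian_to_polar(1.0, 1.0)
x, y = polar_to_cartesian(r, theta)   # recovers (1.0, 1.0) up to rounding error
```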
Cylindrical coordinates (in a space) In cylindrical coordinates, every point of space is represented by its height z, its radius r from the z-axis and the angle θ its projection on the xy-plane makes with respect to the horizontal axis. Spherical coordinates (in a space) In spherical coordinates, every point in space is represented by its distance ρ from the origin, the angle θ its projection on the xy-plane makes with respect to the horizontal axis, and the angle φ that it makes with respect to the z-axis. The names of the angles are often reversed in physics. Equations and curves In analytic geometry, any equation involving the coordinates specifies a subset of the plane, namely the solution set for the equation, or locus. For example, the equation y = x corresponds to the set of all the points on the plane whose x-coordinate and y-coordinate are equal. These points form a line, and y = x is said to be the equation for this line. In general, linear equations involving x and y specify lines, quadratic equations specify conic sections, and more complicated equations describe more complicated figures. Usually, a single equation corresponds to a curve on the plane. This is not always the case: the trivial equation x = x specifies the entire plane, and the equation x2 + y2 = 0 specifies only the single point (0, 0). In three dimensions, a single equation usually gives a surface, and a curve must be specified as the intersection of two surfaces (see below), or as a system of parametric equations. The equation x2 + y2 = r2 is the equation for any circle centered at the origin (0, 0) with a radius of r. Lines and planes Lines in a Cartesian plane, or more generally, in affine coordinates, can be described algebraically by linear equations. In two dimensions, the equation for non-vertical lines is often given in the slope-intercept form: where: m is the slope or gradient of the line. b is the y-intercept of the line. x is the independent variable of the function y = f(x). In a manner analogous to the way lines in a two-dimensional space are described using a point-slope form for their equations, planes in a three dimensional space have a natural description using a point in the plane and a vector orthogonal to it (the normal vector) to indicate its "inclination". Specifically, let be the position vector of some point , and let be a nonzero vector. The plane determined by this point and vector consists of those points , with position vector , such that the vector drawn from to is perpendicular to . Recalling that two vectors are perpendicular if and only if their dot product is zero, it follows that the desired plane can be described as the set of all points such that (The dot here means a dot product, not scalar multiplication.) Expanded this becomes This is just a linear equation: Conversely, it is easily shown that if a, b, c and d are constants and a, b, and c are not all zero, then the graph of the equation This familiar equation for a plane is called the general form of the equation of the plane. In three dimensions, lines can not be described by a single linear equation, so they are frequently described by parametric equations: where: x, y, and z are all functions of the independent variable t which ranges over the real numbers. (x0, y0, z0) is any point on the line. a, b, and c are related to the slope of the line, such that the vector (a, b, c) is parallel to the line. 
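As a minimal sketch of the two descriptions just given (a plane through a point with a given normal vector, and a line given by parametric equations), the following code tests whether a point lies on a plane and generates points on a line. The helper names and sample numbers are illustrative, not taken from the article.

```python
def on_plane(point, plane_point, normal, tol=1e-9):
    """A point P lies on the plane through P0 with normal n exactly when n · (P - P0) = 0."""
    px, py, pz = point
    x0, y0, z0 = plane_point
    a, b, c = normal
    return abs(a * (px - x0) + b * (py - y0) + c * (pz - z0)) <= tol

def line_point(p0, direction, t):
    """Parametric line: (x, y, z) = p0 + t * direction, with t ranging over the real numbers."""
    x0, y0, z0 = p0
    a, b, c = direction
    return (x0 + a * t, y0 + b * t, z0 + c * t)

# The plane through (0, 0, 1) with normal (0, 0, 1) is the horizontal plane z = 1.
print(on_plane((3, -2, 1), (0, 0, 1), (0, 0, 1)))   # True
print(line_point((1, 2, 3), (1, 0, -1), 2.0))       # (3.0, 2.0, 1.0)
```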
Conic sections In the Cartesian coordinate system, the graph of a quadratic equation in two variables is always a conic section – though it may be degenerate, and all conic sections arise in this way. The equation will be of the form As scaling all six constants yields the same locus of zeros, one can consider conics as points in the five-dimensional projective space The conic sections described by this equation can be classified using the discriminant If the conic is non-degenerate, then: if , the equation represents an ellipse; if and , the equation represents a circle, which is a special case of an ellipse; if , the equation represents a parabola; if , the equation represents a hyperbola; if we also have , the equation represents a rectangular hyperbola. Quadric surfaces A quadric, or quadric surface, is a 2-dimensional surface in 3-dimensional space defined as the locus of zeros of a quadratic polynomial. In coordinates , the general quadric is defined by the algebraic equation Quadric surfaces include ellipsoids (including the sphere), paraboloids, hyperboloids, cylinders, cones, and planes. Distance and angle In analytic geometry, geometric notions such as distance and angle measure are defined using formulas. These definitions are designed to be consistent with the underlying Euclidean geometry. For example, using Cartesian coordinates on the plane, the distance between two points (x1, y1) and (x2, y2) is defined by the formula which can be viewed as a version of the Pythagorean theorem. Similarly, the angle that a line makes with the horizontal can be defined by the formula where m is the slope of the line. In three dimensions, distance is given by the generalization of the Pythagorean theorem: while the angle between two vectors is given by the dot product. The dot product of two Euclidean vectors A and B is defined by where θ is the angle between A and B. Transformations Transformations are applied to a parent function to turn it into a new function with similar characteristics. The graph of is changed by standard transformations as follows: Changing to moves the graph to the right units. Changing to moves the graph up units. Changing to stretches the graph horizontally by a factor of . (think of the as being dilated) Changing to stretches the graph vertically. Changing to and changing to rotates the graph by an angle . There are other standard transformation not typically studied in elementary analytic geometry because the transformations change the shape of objects in ways not usually considered. Skewing is an example of a transformation not usually considered. For more information, consult the Wikipedia article on affine transformations. For example, the parent function has a horizontal and a vertical asymptote, and occupies the first and third quadrant, and all of its transformed forms have one horizontal and vertical asymptote, and occupies either the 1st and 3rd or 2nd and 4th quadrant. In general, if , then it can be transformed into . In the new transformed function, is the factor that vertically stretches the function if it is greater than 1 or vertically compresses the function if it is less than 1, and for negative values, the function is reflected in the -axis. The value compresses the graph of the function horizontally if greater than 1 and stretches the function horizontally if less than 1, and like , reflects the function in the -axis when it is negative. The and values introduce translations, , vertical, and horizontal. 
Positive and values mean the function is translated to the positive end of its axis and negative meaning translation towards the negative end. Transformations can be applied to any geometric equation whether or not the equation represents a function. Transformations can be considered as individual transactions or in combinations. Suppose that is a relation in the plane. For example, is the relation that describes the unit circle. Finding intersections of geometric objects For two geometric objects P and Q represented by the relations and the intersection is the collection of all points which are in both relations. For example, might be the circle with radius 1 and center : and might be the circle with radius 1 and center . The intersection of these two circles is the collection of points which make both equations true. Does the point make both equations true? Using for , the equation for becomes or which is true, so is in the relation . On the other hand, still using for the equation for becomes or which is false. is not in so it is not in the intersection. The intersection of and can be found by solving the simultaneous equations: Traditional methods for finding intersections include substitution and elimination. Substitution: Solve the first equation for in terms of and then substitute the expression for into the second equation: We then substitute this value for into the other equation and proceed to solve for : Next, we place this value of in either of the original equations and solve for : So our intersection has two points: Elimination: Add (or subtract) a multiple of one equation to the other equation so that one of the variables is eliminated. For our current example, if we subtract the first equation from the second we get . The in the first equation is subtracted from the in the second equation leaving no term. The variable has been eliminated. We then solve the remaining equation for , in the same way as in the substitution method: We then place this value of in either of the original equations and solve for : So our intersection has two points: For conic sections, as many as 4 points might be in the intersection. Finding intercepts One type of intersection which is widely studied is the intersection of a geometric object with the and coordinate axes. The intersection of a geometric object and the -axis is called the -intercept of the object. The intersection of a geometric object and the -axis is called the -intercept of the object. For the line , the parameter specifies the point where the line crosses the axis. Depending on the context, either or the point is called the -intercept. Geometric axis Axis in geometry is the perpendicular line to any line, object or a surface. Also for this may be used the common language use as a: normal (perpendicular) line, otherwise in engineering as axial line. In geometry, a normal is an object such as a line or vector that is perpendicular to a given object. For example, in the two-dimensional case, the normal line to a curve at a given point is the line perpendicular to the tangent line to the curve at the point. In the three-dimensional case a surface normal, or simply normal, to a surface at a point P is a vector that is perpendicular to the tangent plane to that surface at P. The word "normal" is also used as an adjective: a line normal to a plane, the normal component of a force, the normal vector, etc. The concept of normality generalizes to orthogonality. 
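The conic-section classification and the circle intersection worked through above can both be sketched in code. The text does not spell out the specific inequalities, so the classification below uses the standard discriminant convention B² - 4AC for the conic Ax² + Bxy + Cy² + Dx + Ey + F = 0, and the intersection routine reproduces the elimination idea for the two unit circles centred at (0, 0) and (1, 0). All function names are invented for this illustration.

```python
import math

def classify_conic(A, B, C):
    """Classify a non-degenerate conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 by B^2 - 4AC."""
    disc = B * B - 4 * A * C
    if disc < 0:
        return "circle" if (A == C and B == 0) else "ellipse"
    if disc == 0:
        return "parabola"
    return "rectangular hyperbola" if A + C == 0 else "hyperbola"

def circle_intersection(c1, r1, c2, r2):
    """Intersect |P - c1| = r1 with |P - c2| = r2.

    Subtracting the two expanded circle equations eliminates the squared terms
    (the elimination step described above), leaving a line that is then
    intersected with the first circle.
    """
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d == 0 or d > r1 + r2 or d < abs(r1 - r2):
        return []  # no intersection points (coincident circles are not handled)
    a = (r1 ** 2 - r2 ** 2 + d ** 2) / (2 * d)   # distance from c1 to the line of centres' crossing
    h = math.sqrt(max(r1 ** 2 - a ** 2, 0.0))    # half the distance between the two solutions
    mx = x1 + a * (x2 - x1) / d
    my = y1 + a * (y2 - y1) / d
    return [(mx + h * (y2 - y1) / d, my - h * (x2 - x1) / d),
            (mx - h * (y2 - y1) / d, my + h * (x2 - x1) / d)]

print(classify_conic(1, 0, 1))    # e.g. x^2 + y^2 - 1 = 0  -> "circle"
print(classify_conic(1, 0, -1))   # e.g. x^2 - y^2 - 1 = 0  -> "rectangular hyperbola"
# The example from the text: unit circles centred at (0, 0) and (1, 0)
# meet at (1/2, ±√3/2), approximately (0.5, ±0.866).
print(circle_intersection((0.0, 0.0), 1.0, (1.0, 0.0), 1.0))
```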
Spherical and nonlinear planes and their tangents A tangent is the linear approximation of a curve or curved surface near a point: the straight line or flat plane that best approximates it there. Tangent lines and planes In geometry, the tangent line (or simply tangent) to a plane curve at a given point is the straight line that "just touches" the curve at that point. Informally, it is a line through a pair of infinitely close points on the curve. More precisely, a straight line is said to be a tangent of a curve y = f(x) at a point on the curve if the line passes through that point and has slope equal to the derivative f′ evaluated there. A similar definition applies to space curves and curves in n-dimensional Euclidean space. As it passes through the point where the tangent line and the curve meet, called the point of tangency, the tangent line is "going in the same direction" as the curve, and is thus the best straight-line approximation to the curve at that point. Similarly, the tangent plane to a surface at a given point is the plane that "just touches" the surface at that point. The concept of a tangent is one of the most fundamental notions in differential geometry and has been extensively generalized; see Tangent space.
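A numerical sketch of the tangent-line definition above: the line through (a, f(a)) with slope f′(a). The central-difference estimate of the derivative is an implementation convenience for this example, not part of the article.

```python
def tangent_line(f, a, h=1e-6):
    """Return (slope, intercept) of the tangent to y = f(x) at x = a.

    The slope f'(a) is approximated by a central difference, and the tangent
    y = f(a) + f'(a) * (x - a) is rewritten in slope-intercept form.
    """
    slope = (f(a + h) - f(a - h)) / (2 * h)
    intercept = f(a) - slope * a
    return slope, intercept

# Tangent to the parabola y = x^2 at x = 1: slope 2 and intercept -1, i.e. y = 2x - 1.
print(tangent_line(lambda x: x * x, 1.0))
```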
Arctic fox
The Arctic fox (Vulpes lagopus), also known as the white fox, polar fox, or snow fox, is a small species of fox native to the Arctic regions of the Northern Hemisphere and common throughout the Arctic tundra biome. It is well adapted to living in cold environments, and is best known for its thick, warm fur that is also used as camouflage. It has a large and very fluffy tail. In the wild, most individuals do not live past their first year but some exceptional ones survive up to 11 years. Its body length ranges from , with a generally rounded body shape to minimize the escape of body heat. The Arctic fox preys on many small creatures such as lemmings, voles, ringed seal pups, fish, waterfowl, and seabirds. It also eats carrion, berries, seaweed, and insects and other small invertebrates. Arctic foxes form monogamous pairs during the breeding season and they stay together to raise their young in complex underground dens. Occasionally, other family members may assist in raising their young. Natural predators of the Arctic fox are golden eagles, Arctic wolves, polar bears, wolverines, red foxes, and grizzly bears. Behavior Arctic foxes must endure a temperature difference of up to between the external environment and their internal core temperature. To prevent heat loss, the Arctic fox curls up tightly tucking its legs and head under its body and behind its furry tail. This position gives the fox the smallest surface area to volume ratio and protects the least insulated areas. Arctic foxes also stay warm by getting out of the wind and residing in their dens. Although the Arctic foxes are active year-round and do not hibernate, they attempt to preserve fat by reducing their locomotor activity. They build up their fat reserves in the autumn, sometimes increasing their body weight by more than 50%. This provides greater insulation during the winter and a source of energy when food is scarce. Reproduction In the spring, the Arctic fox's attention switches to reproduction and a home for their potential offspring. They live in large dens in frost-free, slightly raised ground. These are complex systems of tunnels covering as much as and are often in eskers, long ridges of sedimentary material deposited in formerly glaciated regions. These dens may be in existence for many decades and are used by many generations of foxes. Arctic foxes tend to select dens that are easily accessible with many entrances, and that are clear from snow and ice making it easier to burrow in. The Arctic fox builds and chooses dens that face southward towards the sun, which makes the den warmer. Arctic foxes prefer large, maze-like dens for predator evasion and a quick escape especially when red foxes are in the area. Natal dens are typically found in rugged terrain, which may provide more protection for the pups. But, the parents will also relocate litters to nearby dens to avoid predators. When red foxes are not in the region, Arctic foxes will use dens that the red fox previously occupied. Shelter quality is more important to the Arctic fox than the proximity of spring prey to a den. The main prey of the Arctic fox in the tundra are lemmings, which is why the white fox is often called the "lemming fox". The white fox's reproduction rates reflect the lemming population density, which cyclically fluctuates every 3–5 years. When lemmings are abundant, the white fox can give birth to 18 pups, but they often do not reproduce when food is scarce. 
The "coastal fox" or blue fox lives in an environment where food availability is relatively consistent, and they will have up to 5 pups every year. Breeding usually takes place in April and May, and the gestation period is about 52 days. Litters may contain as many as 25 (the largest litter size in the order Carnivora). The young emerge from the den when 3 to 4 weeks old and are weaned by 9 weeks of age. Arctic foxes are primarily monogamous and both parents will care for the offspring. When predators and prey are abundant, Arctic foxes are more likely to be promiscuous (exhibited in both males and females) and display more complex social structures. Larger packs of foxes consisting of breeding or non-breeding males or females can guard a single territory more proficiently to increase pup survival. When resources are scarce, competition increases and the number of foxes in a territory decreases. On the coasts of Svalbard, the frequency of complex social structures is larger than inland foxes that remain monogamous due to food availability. In Scandinavia, there are more complex social structures compared to other populations due to the presence of the red fox. Also, conservationists are supplying the declining population with supplemental food. One unique case, however, is Iceland where monogamy is the most prevalent. The older offspring (1-year-olds) often remain within their parent's territory even though predators are absent and there are fewer resources, which may indicate kin selection in the fox. Diet Arctic foxes generally eat any small animal they can find, including lemmings, voles, other rodents, hares, birds, eggs, fish, and carrion. They scavenge on carcasses left by larger predators such as wolves and polar bears, and in times of scarcity also eat their feces. In areas where they are present, lemmings are their most common prey, and a family of foxes can eat dozens of lemmings each day. In some locations in northern Canada, a high seasonal abundance of migrating birds that breed in the area may provide an important food source. On the coast of Iceland and other islands, their diet consists predominantly of birds. During April and May, the Arctic fox also preys on ringed seal pups when the young animals are confined to a snow den and are relatively helpless. They also consume berries and seaweed, so they may be considered omnivores. This fox is a significant bird-egg predator, consuming eggs of all except the largest tundra bird species. Arctic foxes survive harsh winters and food scarcity by either hoarding food or storing body fat subcutaneously and viscerally. At the beginning of winter, one Arctic fox has approximately 14740 kJ of energy storage from fat alone. Using the lowest BMR value measured in Arctic foxes, an average sized fox of would need 471 kJ/day during the winter to survive. In Canada, Arctic foxes acquire from snow goose eggs at a rate of 2.7–7.3 eggs/h and store 80–97% of them. Scats provide evidence that they eat the eggs during the winter after caching. Isotope analysis shows that eggs can still be eaten after a year, and the metabolizable energy of a stored goose egg only decreases by 11% after 60 days; a fresh egg has about 816 kJ. Eggs stored in the summer are accessed the following spring prior to reproduction. Adaptations The Arctic fox lives in some of the most frigid extremes on the planet, but they do not start to shiver until the temperature drops to . 
Among its adaptations for survival in the cold is its dense, multilayered pelage, which provides excellent insulation. Additionally, the Arctic fox is the only canid whose foot pads are covered in fur. There are two genetically distinct coat color morphs: white and blue. The white morph has seasonal camouflage, white in winter and brown along the back with light grey around the abdomen in summer. The blue morph is often a dark blue, brown, or grey color year-round. Although the blue allele is dominant over the white allele, 99% of the Arctic fox population is the white morph. Two similar mutations to MC1R cause the blue color and the lack of seasonal color change. The fur of the Arctic fox provides the best insulation of any mammal. The Arctic fox has a low surface area to volume ratio, as evidenced by its generally compact body shape, short muzzle and legs, and short, thick ears. Since less of its surface area is exposed to the Arctic cold, less heat escapes from its body. Sensory modalities The Arctic fox has a functional hearing range between 125 Hz–16 kHz with a sensitivity that is ≤ 60 dB in air, and an average peak sensitivity of 24 dB at 4 kHz. Overall, the Arctic foxes hearing is less sensitive than the dog and the kit fox. The Arctic fox and the kit fox have a low upper-frequency limit compared to the domestic dog and other carnivores. The Arctic fox can easily hear lemmings burrowing under 4-5 inches of snow. When it has located its prey, it pounces and punches through the snow to catch its prey. The Arctic fox also has a keen sense of smell. They can smell carcasses that are often left by polar bears anywhere from . It is possible that they use their sense of smell to also track down polar bears. Additionally, Arctic foxes can smell and find frozen lemmings under of snow, and can detect a subnivean seal lair under of snow. Physiology The Arctic fox contains advantageous genes to overcome extreme cold and starvation periods. Transcriptome sequencing has identified two genes that are under positive selection: Glycolipid transfer protein domain containing 1 (GLTPD1) and V-akt murine thymoma viral oncogene homolog 2 (AKT2). GLTPD1 is involved in the fatty acid metabolism, while AKT2 pertains to the glucose metabolism and insulin signaling. The average mass specific BMR and total BMR are 37% and 27% lower in the winter than the summer. The Arctic fox decreases its BMR via metabolic depression in the winter to conserve fat storage and minimize energy requirements. According to the most recent data, the lower critical temperature of the Arctic fox is at in the winter and in the summer. It was commonly believed that the Arctic fox had a lower critical temperature below . However, some scientists have concluded that this statistic is not accurate since it was never tested using the proper equipment. About 22% of the total body surface area of the Arctic fox dissipates heat readily compared to red foxes at 33%. The regions that have the greatest heat loss are the nose, ears, legs, and feet, which is useful in the summer for thermal heat regulation. Also, the Arctic fox has a beneficial mechanism in their nose for evaporative cooling like dogs, which keeps the brain cool during the summer and exercise. The thermal conductivity of Arctic fox fur in the summer and winter is the same; however, the thermal conductance of the Arctic fox in the winter is lower than the summer since fur thickness increases by 140%. 
In the summer, the thermal conductance of the Arctic foxes body is 114% higher than the winter, but their body core temperature is constant year-round. One way that Arctic foxes regulate their body temperature is by utilizing a countercurrent heat exchange in the blood of their legs. Arctic foxes can constantly keep their feet above the tissue freezing point () when standing on cold substrates without losing mobility or feeling pain. They do this by increasing vasodilation and blood flow to a capillary rete in the pad surface, which is in direct contact with the snow rather than the entire foot. They selectively vasoconstrict blood vessels in the center of the foot pad, which conserves energy and minimizes heat loss. Arctic foxes maintain the temperature in their paws independently from the core temperature. If the core temperature drops, the pad of the foot will remain constantly above the tissue freezing point. Size The average head-and-body length of the male is , with a range of , while the female averages with a range of . In some regions, no difference in size is seen between males and females. The tail is about long in both sexes. The height at the shoulder is . On average males weigh , with a range of , while females average , with a range of . Taxonomy Vulpes lagopus is a 'true fox' belonging to the genus Vulpes of the fox tribe Vulpini, which consists of 12 extant species. It is classified under the subfamily Caninae of the canid family Canidae. Although it has previously been assigned to its own monotypic genus Alopex, recent genetic evidence now places it in the genus Vulpes along with the majority of other foxes. It was originally described by Carl Linnaeus in the 10th edition of Systema Naturae in 1758 as Canis lagopus. The type specimen was recovered from Lapland, Sweden. The generic name vulpes is Latin for "fox". The specific name lagopus is derived from Ancient Greek λαγώς (lagōs, "hare") and πούς (pous, "foot"), referring to the hair on its feet similar to those found in cold-climate species of hares. Looking at the most recent phylogeny, the Arctic fox and the red fox (Vulpes vulpes) diverged approximately 3.17MYA. Additionally, the Arctic fox diverged from its sister group, the kit fox (Vulpes macrotis), at about 0.9MYA. Origins The origins of the Arctic fox have been described by the "out of Tibet" hypothesis. On the Tibetan Plateau, fossils of the extinct ancestral Arctic fox (Vulpes qiuzhudingi) from the early Pliocene (5.08–3.6 MYA) were found along with many other precursors of modern mammals that evolved during the Pliocene (5.3–2.6 MYA). It is believed that this ancient fox is the ancestor of the modern Arctic fox. Globally, the Pliocene was about 2–3 °C warmer than today, and the Arctic during the summer in the mid-Pliocene was 8 °C warmer. By using stable carbon and oxygen isotope analysis of fossils, researchers claim that the Tibetan Plateau experienced tundra-like conditions during the Pliocene and harbored cold-adapted mammals that later spread to North America and Eurasia during the Pleistocene Epoch (2.6 million-11,700 years ago). Subspecies Besides the nominate subspecies, the common Arctic fox, V. l. lagopus, four other subspecies of this fox have been described: Bering Islands Arctic fox, V. l. beringensis Greenland Arctic fox, V. l. foragoapusis Iceland Arctic fox, V. l. fuliginosus Pribilof Islands Arctic fox, V. l. 
pribilofensis Distribution and habitat The Arctic fox has a circumpolar distribution and occurs in Arctic tundra habitats in northern Europe, northern Asia, and North America. Its range includes Greenland, Iceland, Fennoscandia, Svalbard, Jan Mayen (where it was hunted to extinction) and other islands in the Barents Sea, northern Russia, islands in the Bering Sea, Alaska, and Canada as far south as Hudson Bay. In the late 19th century, it was introduced into the Aleutian Islands southwest of Alaska. However, the population on the Aleutian Islands is currently being eradicated in conservation efforts to preserve the local bird population. It mostly inhabits tundra and pack ice, but is also present in Canadian boreal forests (northeastern Alberta, northern Saskatchewan, northern Manitoba, Northern Ontario, Northern Quebec, and Newfoundland and Labrador) and the Kenai Peninsula in Alaska. They are found at elevations up to above sea level and have been seen on sea ice close to the North Pole. The Arctic fox is the only land mammal native to Iceland. It came to the isolated North Atlantic island at the end of the last ice age, walking over the frozen sea. The Arctic Fox Center in Súðavík contains an exhibition on the Arctic fox and conducts studies on the influence of tourism on the population. Its range during the last ice age was much more extensive than it is now, and fossil remains of the Arctic fox have been found over much of northern Europe and Siberia. The color of the fox's coat also determines where they are most likely to be found. The white morph mainly lives inland and blends in with the snowy tundra, while the blue morph occupies the coasts because its dark color blends in with the cliffs and rocks. Migrations and travel During the winter, 95.5% of Arctic foxes utilize commuting trips, which remain within the fox's home range. Commuting trips in Arctic foxes last less than 3 days and occur between 0–2.9 times a month. Nomadism is found in 3.4% of the foxes, and loop migrations (where the fox travels to a new range, then returns to its home range) are the least common at 1.1%. Arctic foxes in Canada that undergo nomadism and migrations voyage from the Canadian archipelago to Greenland and northwestern Canada. The duration and distance traveled between males and females is not significantly different. Arctic foxes closer to goose colonies (located at the coasts) are less likely to migrate. Meanwhile, foxes experiencing low-density lemming populations are more likely to make sea ice trips. Residency is common in the Arctic fox population so that they can maintain their territories. Migratory foxes have a mortality rate >3 times higher than resident foxes. Nomadic behavior becomes more common as the foxes age. In July 2019, the Norwegian Polar Institute reported the story of a yearling female which was fitted with a GPS tracking device and then released by their researchers on the east coast of Spitsbergen in the Svalbard group of islands. The young fox crossed the polar ice from the islands to Greenland in 21 days, a distance of . She then moved on to Ellesmere Island in northern Canada, covering a total recorded distance of in 76 days, before her GPS tracker stopped working. She averaged just over a day, and managed as much as in a single day. Conservation status The Arctic fox has been assessed as least concern on the IUCN Red List since 2004. 
However, the Scandinavian mainland population is acutely endangered, despite being legally protected from hunting and persecution for several decades. The estimate of the adult population in all of Norway, Sweden, and Finland is fewer than 200 individuals. Of these, especially in Finland, the Arctic fox is even classified as critically endangered, because even though the animal was declared a protected species in Finland in 1940, the population has not recovered despite that. As a result, the populations of Arctic fox have been carefully studied and inventoried in places such as the Vindelfjällens Nature Reserve (Sweden), which has the Arctic fox as its symbol. The abundance of the Arctic fox tends to fluctuate in a cycle along with the population of lemmings and voles (a 3- to 4-year cycle). The populations are especially vulnerable during the years when the prey population crashes, and uncontrolled trapping has almost eradicated two subpopulations. The pelts of Arctic foxes with a slate-blue coloration were especially valuable. They were transported to various previously fox-free Aleutian Islands during the 1920s. The program was successful in terms of increasing the population of blue foxes, but their predation of Aleutian Canada geese conflicted with the goal of preserving that species. The Arctic fox is losing ground to the larger red fox. This has been attributed to climate change—the camouflage value of its lighter coat decreases with less snow cover. Red foxes dominate where their ranges begin to overlap by killing Arctic foxes and their kits. An alternative explanation of the red fox's gains involves the gray wolf. Historically, it has kept red fox numbers down, but as the wolf has been hunted to near extinction in much of its former range, the red fox population has grown larger, and it has taken over the niche of top predator. In areas of northern Europe, programs are in place that allow the hunting of red foxes in the Arctic fox's previous range. As with many other game species, the best sources of historical and large-scale population data are hunting bag records and questionnaires. Several potential sources of error occur in such data collections. In addition, numbers vary widely between years due to the large population fluctuations. However, the total population of the Arctic fox must be in the order of several hundred thousand animals. The world population of Arctic foxes is thus not endangered, but two Arctic fox subpopulations are. One is on Medny Island (Commander Islands, Russia), which was reduced by some 85–90%, to around 90 animals, as a result of mange caused by an ear tick introduced by dogs in the 1970s. The population is currently under treatment with antiparasitic drugs, but the result is still uncertain. The other threatened population is the one in Fennoscandia (Norway, Sweden, Finland, and Kola Peninsula). This population decreased drastically around the start of the 20th century as a result of extreme fur prices, which caused severe hunting also during population lows. The population has remained at a low density for more than 90 years, with additional reductions during the last decade. The total population estimate for 1997 is around 60 adults in Sweden, 11 adults in Finland, and 50 in Norway. From Kola, there are indications of a similar situation, suggesting a population of around 20 adults. The Fennoscandian population thus numbers around 140 breeding adults. 
Even after local lemming peaks, the Arctic fox population tends to collapse back to levels dangerously close to nonviability. The Arctic fox is classed as a "prohibited new organism" under New Zealand's Hazardous Substances and New Organisms Act 1996, preventing it from being imported into the country.
Aircraft carrier
An aircraft carrier is a warship that serves as a seagoing airbase, equipped with a full-length flight deck and hangar facilities for supporting, arming, deploying and recovering shipborne aircraft. Typically it is the capital ship of a fleet (known as a carrier battle group), as it allows a naval force to project seaborne air power far from homeland without depending on local airfields for staging aircraft operations. Since their inception in the early 20th century, aircraft carriers have evolved from wooden vessels used to deploy individual tethered reconnaissance balloons, to nuclear-powered supercarriers that carry dozens of fighters, strike aircraft, military helicopters, AEW&Cs and other types of aircraft such as UCAVs. While heavier fixed-wing aircraft such as airlifters, gunships and bombers have been launched from aircraft carriers, these aircraft have not landed on a carrier due to flight deck limitations. The aircraft carrier, along with its onboard aircraft and defensive ancillary weapons, is the largest weapon system ever created. By their tactical prowess, mobility, autonomy and the variety of operational means, aircraft carriers are often the centerpiece of modern naval warfare, and have significant diplomatic influence in deterrence, command of the sea and air supremacy. Since the Second World War, the aircraft carrier has replaced the battleship in the role of flagship of a fleet, and largely transformed naval battles from gunfire to beyond-visual-range air strikes. In addition to tactical aptitudes, it has great strategic advantages in that, by sailing in international waters, it does not need to interfere with any territorial sovereignty and thus does not risk diplomatic complications or conflict escalation due to trespassing, and obviates the need for land use authorizations from third-party countries, reduces the times and transit logistics of aircraft and therefore significantly increases the time of availability on the combat zone. There is no single definition of an "aircraft carrier", and modern navies use several variants of the type. These variants are sometimes categorized as sub-types of aircraft carriers, and sometimes as distinct types of aviation-capable ships. Aircraft carriers may be classified according to the type of aircraft they carry and their operational assignments. Admiral Sir Mark Stanhope, RN, former First Sea Lord (head) of the Royal Navy, has said, "To put it simply, countries that aspire to strategic international influence have aircraft carriers." Henry Kissinger, while United States Secretary of State, also said: "An aircraft carrier is 100,000 tons of diplomacy." As of , there are 50 active aircraft carriers in the world operated by fifteen navies. The United States has 11 large nuclear-powered CATOBAR fleet carriers — each carrying around 80 fighters — the largest in the world, with the total combined deck space over twice that of all other nations combined. In addition, the US Navy has nine amphibious assault ships used primarily as helicopter carriers, although these also each carry up to 20 vertical/short takeoff and landing (V/STOL) jetfighters and are similar in size to medium-sized fleet carriers. China, the United Kingdom and India each currently operate two STOBAR/STOVL aircraft carriers with ski-jump flight decks, with China in the process to commission a third carrier with catapult capabilities, and France and Russia each operate a single aircraft carrier with a capacity of 30 to 60 fighters. 
Italy operates two light V/STOL carriers, while Spain and Turkey operate one V/STOL aircraft-carrying assault ship. Helicopter carriers are also operated by Japan (4, two of which are being converted to operate V/STOL fighters), France (3), Australia (2, previously also owned 3 light carriers), Egypt (2), South Korea (2), China (3), Thailand (1) and Brazil (1). Future aircraft carriers are under construction or in planning by China, France, India, Italy, Russia, South Korea, Turkey and the United States. Types of carriers General features Speed is a crucial attribute for aircraft carriers, as they need to be able to be deployed quickly anywhere in the world and have to be fast enough to evade detection and targeting from enemy forces. A high speed also increases the "wind over the deck", boosting the lift available for fixed-wing aircraft to carry fuel and ammunition. To evade nuclear submarines, the carriers should have a speed of more than . Aircraft carriers are among the largest types of warships due to their need for ample deck space. An aircraft carrier must be able to perform increasingly diverse mission sets. Diplomacy, power projection, quick crisis response force, land attack from the sea, sea base for helicopter and amphibious assault forces, anti-surface warfare (ASUW), defensive counter air (DCA), and humanitarian aid & disaster relief (HADR) are some of the missions the aircraft carrier is expected to accomplish. Traditionally an aircraft carrier is supposed to be one ship that can perform at least power projection and sea control missions. An aircraft carrier must be able to efficiently operate an air combat group. This means it should handle fixed-wing jets as well as helicopters. This includes ships designed to support operations of short-takeoff/vertical-landing (STOVL) jets. Basic types Aircraft cruiser Amphibious assault ship and sub-types Anti-submarine warfare carrier Balloon carrier and balloon tenders Escort carrier Fleet carrier Flight deck cruiser Helicopter carrier Light aircraft carrier Seaplane tender and seaplane carriers Utility carrier: This type was mainly used in the US Navy, in the decade after World War 2 to ferry aircraft. Some of the types listed here are not strictly defined as aircraft carriers by some sources. By role A fleet carrier is intended to operate with the main fleet and usually provides an offensive capability. These are the largest carriers capable of fast speeds. By comparison, escort carriers were developed to provide defense for convoys of ships. They were smaller and slower with lower numbers of aircraft carried. Most were built from mercantile hulls or, in the case of merchant aircraft carriers, were bulk cargo ships with a flight deck added on top. Light aircraft carriers were fast enough to operate with the main fleet but of smaller size with reduced aircraft capacity. The Soviet aircraft carrier Admiral Kusnetsov was termed a "heavy aircraft-carrying cruiser". This was primarily a legal construct to avoid the limitations of the Montreux Convention preventing 'aircraft carriers' transiting the Turkish Straits between the Soviet Black Sea bases and the Mediterranean Sea. These ships, while sized in the range of large fleet carriers, were designed to deploy alone or with escorts. In addition to supporting fighter aircraft and helicopters, they provide both strong defensive weaponry and heavy offensive missiles equivalent to a guided-missile cruiser. 
By configuration Aircraft carriers today are usually divided into the following four categories based on the way that aircraft take off and land: Catapult-assisted take-off barrier-arrested recovery (CATOBAR): these carriers generally carry the largest, heaviest, and most heavily armed aircraft, although smaller CATOBAR carriers may have other limitations (weight capacity of aircraft elevator, etc.). All CATOBAR carriers in service today are nuclear-powered, as the last conventionally powered CATOBAR carrier, USS Kitty Hawk, was decommissioned in 2009. Twelve are in service: ten Nimitz-class and one Gerald R. Ford-class fleet carriers in the United States, and the Charles de Gaulle in France. Short take-off barrier-arrested recovery (STOBAR): these carriers are generally limited to carrying lighter fixed-wing aircraft with more limited payloads. STOBAR carrier air wings, such as those built around the Sukhoi Su-33 and the Mikoyan MiG-29K, are often geared primarily towards air superiority and fleet defense roles rather than strike/power projection tasks, which require heavier payloads (bombs and air-to-ground missiles). Five are in service: two in China, two in India, and one in Russia. Short take-off vertical-landing (STOVL): limited to carrying STOVL aircraft. STOVL aircraft, such as the Harrier family and the Yakovlev Yak-38, generally have limited payloads, lower performance, and high fuel consumption when compared with conventional fixed-wing aircraft; however, a new generation of STOVL aircraft, currently consisting of the Lockheed Martin F-35B Lightning II, has much improved performance. Fourteen are in service: nine STOVL amphibious assault ships in the US; two carriers each in Italy and the UK; and one STOVL amphibious assault ship in Spain. Helicopter carrier: Helicopter carriers have a similar appearance to other aircraft carriers but operate only helicopters – those that mainly operate helicopters but can also operate fixed-wing aircraft are known as STOVL carriers (see above). Seventeen are in service: four in Japan; three in France; two each in Australia, China, Egypt and South Korea; and one each in Brazil and Thailand. In the past, some conventional carriers were converted, and these were called "commando carriers" by the Royal Navy. Some helicopter carriers, but not all, are classified as amphibious assault ships, tasked with landing and supporting ground forces on enemy territory. By size Fleet carrier Light aircraft carrier Escort carrier Supercarrier The appellation "supercarrier" is not an official designation with any national navy, but a term used predominantly by the media, typically when reporting on larger and more advanced carrier types. It is also used when comparing carriers of various sizes and capabilities, both current and past. It was first used by The New York Times in 1938, in an article about the Royal Navy's , which had a length of , a displacement of 22,000 tons and was designed to carry 72 aircraft. Since then, aircraft carriers have consistently grown in size, both in length and displacement, as well as in capabilities: in defense, sensors, electronic warfare, propulsion, range, launch and recovery systems, the number and types of aircraft carried and the number of sorties flown per day. Both China (Type 003) and the United Kingdom (Queen Elizabeth class) have carriers undergoing trials or in service with full-load displacements of between 80,000 and 85,000 tonnes and lengths from , which are described as "supercarriers". 
France is also developing a new aircraft carrier (PANG), which is to have a full-load displacement of c. 75,000 tonnes and would also be considered a supercarrier. The largest supercarriers in service as of 2024, however, are those of the US Navy, with full-load displacements in excess of 100,000 tons, lengths of over , and capabilities that exceed those of any other class. Hull type identification symbols Several systems of identification symbols for aircraft carriers and related types of ship have been used. These include the pennant numbers used by the Royal Navy, Commonwealth countries, and Europe, along with the hull classification symbols used by the US and Canada. History Origins The 1903 advent of the heavier-than-air fixed-wing airplane with the Wright brothers' first flight at Kitty Hawk, North Carolina, was closely followed on 14 November 1910 by Eugene Burton Ely's first experimental take-off of a Curtiss Pusher airplane from the deck of a United States Navy ship, the cruiser anchored off Norfolk Navy Base in Virginia. Two months later, on 18 January 1911, Ely landed his Curtiss Pusher airplane on a platform on the armored cruiser anchored in San Francisco Bay. On 9 May 1912, the first take-off of an airplane from a ship while underway was made by Commander Charles Samson, flying a Short Improved S.27 biplane "S.38" of the Royal Naval Air Service (RNAS) from the deck of the Royal Navy's pre-dreadnought battleship , thus providing the first practical demonstration of the aircraft carrier for naval operations at sea. Seaplane tender support ships came next, with the French of 1911. Early in World War I, the Imperial Japanese Navy ship conducted the world's first successful ship-launched air raid: on 6 September 1914, a Farman aircraft launched by Wakamiya attacked the Austro-Hungarian cruiser and the Imperial German gunboat Jaguar in Jiaozhou Bay off Qingdao; neither was hit. The first attack using an air-launched torpedo occurred on 2 August, when a torpedo was fired by Flight Commander Charles Edmonds from a Short Type 184 seaplane, launched from the seaplane carrier . The first carrier-launched airstrike was the Tondern raid in July 1918. Seven Sopwith Camels were launched from the battlecruiser which had been completed as a carrier by replacing her planned forward turret with a flight deck and hangar prior to commissioning. The Camels attacked and damaged the German airbase at Tondern, Germany (modern-day Tønder, Denmark), and destroyed two zeppelin airships. The first landing of an airplane on a moving ship was made by Squadron Commander Edwin Harris Dunning, when he landed his Sopwith Pup on HMS Furious in Scapa Flow, Orkney, on 2 August 1917. Landing on the forward flight deck required the pilot to approach around the ship's superstructure, a difficult and dangerous manoeuvre, and Dunning was later killed when his airplane was thrown overboard while attempting another landing on Furious. HMS Furious was modified again when her rear turret was removed and another flight deck was added over a second hangar for landing aircraft over the stern. Her funnel and superstructure remained intact, however, and turbulence from them was severe enough that only three landing attempts were successful before further attempts were forbidden. This experience prompted the development of vessels with a flush deck and produced the first large fleet ships. In 1918, HMS Argus became the world's first carrier capable of launching and recovering naval aircraft. 
As a result of the Washington Naval Treaty of 1922, which limited the construction of new heavy surface combat ships, most early aircraft carriers were conversions of ships that were laid down (or had served) as different ship types: cargo ships, cruisers, battlecruisers, or battleships. These conversions gave rise to the in 1922, the US s (1927), Japanese and , and British (of which Furious was one). Specialist carrier evolution was well underway by the mid-1920s, with several navies ordering and building warships that were purposefully designed to function as aircraft carriers. This resulted in the commissioning of ships such as the Japanese Hōshō (1922), (1924, although laid down in 1918 before Hōshō), and (1927). During World War II, these ships would become known as fleet carriers. World War II The aircraft carrier dramatically changed naval warfare in World War II, because air power was becoming a significant factor in warfare. The advent of aircraft as focal weapons was driven by the superior range, flexibility, and effectiveness of carrier-launched aircraft. They had greater range and precision than naval guns, making them highly effective. The versatility of the carrier was demonstrated in November 1940, when HMS Illustrious launched a long-range strike on the Italian fleet at their base in Taranto, signalling the beginning of effective and highly mobile aircraft strikes. This operation in the shallow-water harbor incapacitated three of the six anchored battleships at a cost of two torpedo bombers. World War II in the Pacific Ocean involved clashes between aircraft carrier fleets. The Japanese surprise attack on the American Pacific fleet at the Pearl Harbor naval and air bases on Sunday, 7 December 1941, was a clear illustration of the power projection capability afforded by a large force of modern carriers. Concentrating six carriers in a single unit marked a turning point in naval history, as no other nation had fielded anything comparable. In the "Doolittle Raid", on 18 April 1942, the US Navy carrier USS Hornet sailed to within of Japan and launched 16 B-25 Mitchell medium bombers from her deck in a demonstrative retaliatory strike on the mainland, including the capital, Tokyo. However, the vulnerability of carriers compared to traditional capital ships was illustrated by the sinking of HMS Glorious by German battleships during the Norwegian campaign in 1940. This new-found importance of naval aviation forced nations to create a number of carriers, in an effort to provide air superiority cover for every major fleet and ward off enemy aircraft. This extensive usage led to the development and construction of 'light' carriers. Escort aircraft carriers, such as , were sometimes purpose-built, but most were converted from merchant ships as a stop-gap measure to provide anti-submarine air support for convoys and amphibious invasions. Following this concept, light aircraft carriers built by the US, such as (commissioned in 1943), represented a larger, more "militarized" version of the escort carrier. Although their complement was similar to that of escort carriers, they had the advantage of speed from their converted cruiser hulls. The UK 1942 Design Light Fleet Carrier was designed to be built quickly by civilian shipyards, with an expected service life of about three years. Ships of this design served the Royal Navy during the war, and the hull design was chosen by nearly all navies that operated aircraft carriers after the war, until the 1980s. Emergencies also spurred the creation or conversion of highly unconventional aircraft carriers. 
CAM ships were cargo-carrying merchant ships that could launch (but not retrieve) a single fighter aircraft from a catapult to defend the convoy from long-range land-based German aircraft. Postwar era Before World War II, international naval treaties of 1922, 1930, and 1936 limited the size of capital ships, including carriers. Since World War II, aircraft carrier designs have increased in size to accommodate a steady increase in aircraft size. The large, modern of US Navy carriers has a displacement nearly four times that of the World War II–era , yet its complement of aircraft is roughly the same—a consequence of the steadily increasing size and weight of individual military aircraft over the years. Today's aircraft carriers are so expensive that some nations which operate them risk significant economic and military impact if a carrier is lost. Several changes were made to carriers after 1945: The angled flight deck was invented by Royal Navy Captain (later Rear Admiral) Dennis Cambell, as the higher speeds of naval jets required carriers to be modified to fit their needs. Additionally, the angled flight deck allows for simultaneous launch and recovery. Jet blast deflectors became necessary to protect aircraft and handlers from jet blast. The first US Navy carriers to be fitted with them were the wooden-decked s, which were adapted to operate jets in the late 1940s. Later versions had to be water-cooled because of increasing engine power. Optical landing systems were developed to facilitate the very precise landing angles required by jet aircraft, which have faster landing speeds, giving the pilot little time to correct misalignments or mistakes. The first system was fitted to in 1952. Aircraft carrier designs have increased in size to accommodate the continuous increase in aircraft size. The 1950s saw the US Navy commission "supercarriers", designed to operate naval jets, which offered better performance at the expense of greater size and required more stores to be carried on board (fuel, spare parts, electronics, etc.). The combination of increased carrier size, speed requirements above , and the requirement to operate at sea for long periods means that modern large aircraft carriers often use nuclear reactors to provide power for propulsion, electricity generation, aircraft catapults, and other minor uses. Modern navies that operate such aircraft carriers treat them as the capital ships of their fleets, a role previously held by galleons, ships of the line and battleships. This change took place during World War II in response to air power becoming a significant factor in warfare, driven by the superior range, flexibility and effectiveness of carrier-launched aircraft. Following the war, carrier operations continued to increase in size and importance, and along with them carrier designs also increased in size and capability. Some of these larger carriers, dubbed "supercarriers" by the media and displacing 75,000 tons or more, have become the pinnacle of carrier development. Some are powered by nuclear reactors and form the core of a fleet designed to operate far from home. Amphibious assault ships, such as the and classes, serve the purpose of carrying and landing Marines, and operate a large contingent of helicopters for that purpose. Also known as "commando carriers" or "helicopter carriers", many have the capability to operate V/STOL aircraft. The threatening role of aircraft carriers has a place in modern asymmetric warfare, like the gunboat diplomacy of the past. 
Carriers also facilitate quick and precise projection of overwhelming military power into local and regional conflicts. Lacking the firepower of other warships, carriers by themselves are considered vulnerable to attack by other ships, aircraft, submarines, or missiles. Therefore, an aircraft carrier is generally accompanied by a number of other ships to provide protection for the relatively unwieldy carrier, to carry supplies and perform re-supply (many carriers are self-sufficient and can supply their escorts) and other support services, and to provide additional offensive capabilities. The resulting group of ships is often termed a carrier strike group, battle group, carrier group, or carrier battle group. There is a view among some military pundits that modern anti-ship weapons systems, such as torpedoes and missiles, or even ballistic missiles with nuclear warheads, have made aircraft carriers and carrier groups too vulnerable for modern combat. Such concerns have been reinforced by exercises in which submarines slipped through a carrier's screen, like the German U24 of the conventional 206 class, which in 2001 "fired" at the Enterprise during the exercise JTFEX 01-2 in the Caribbean Sea by firing flares and taking a photograph through its periscope, or the Swedish Gotland, which managed the same feat in 2006 during JTFEX 06-2 by penetrating the defensive measures of Carrier Strike Group 7, which was protecting . Description Structure Carriers are large and long ships, although there is a high degree of variation depending on their intended role and aircraft complement. The size of the carrier has varied over history and among navies, to cater to the various roles demanded of naval aviation. Regardless of size, the ship itself must house its complement of aircraft, with space for launching, storing, and maintaining them. Space is also required for the large crew, supplies (food, munitions, fuel, engineering parts), and propulsion. US aircraft carriers are notable for having nuclear reactors powering their systems and propulsion. The top of the carrier is the flight deck, where aircraft are launched and recovered. On the starboard side of this is the island, where the funnel, air-traffic control and the bridge are located. The constraints of constructing a flight deck affect the role of a given carrier strongly, as they influence the weight, type, and configuration of the aircraft that may be launched. For example, assisted launch mechanisms are used primarily for heavy aircraft, especially those loaded with air-to-ground weapons. CATOBAR is most commonly used on US Navy fleet carriers, as it allows the deployment of heavy jets with full load-outs, especially on ground-attack missions. STOVL is used by other navies because it is cheaper to operate and still provides good deployment capability for fighter aircraft. Due to the busy nature of the flight deck, only 20 or so aircraft may be on it at any one time. A hangar several decks below the flight deck is where most aircraft are kept, and aircraft are taken from the lower storage decks to the flight deck through the use of an elevator. The hangar is usually quite large and can take up several decks of vertical space. Munitions are commonly stored on the lower decks because they are highly explosive. Usually this is below the waterline so that the area can be flooded in case of emergency. Flight deck As "runways at sea", aircraft carriers have a flat-top flight deck, which launches and recovers aircraft. Aircraft launch forward, into the wind, and are recovered from astern. 
The flight deck is where the most notable differences between a carrier and a land runway are found. Creating such a surface at sea poses constraints on the carrier. For example, the size of the vessel is a fundamental limitation on runway length. This affects take-off procedure, as the shorter runway length of the deck requires that aircraft accelerate more quickly to gain lift. This either requires a thrust boost, a vertical component to its velocity, or a reduced take-off load (to lower mass). The differing types of deck configuration, as above, influence the structure of the flight deck. The form of launch assistance a carrier provides is strongly related to the types of aircraft embarked and the design of the carrier itself. There are two main philosophies for keeping the deck short: adding thrust to the aircraft, as with a Catapult Assisted Take-Off (CATO-), and changing the direction of the aircraft's thrust, as in Vertical and/or Short Take-Off (V/STO-). Each method has advantages and disadvantages of its own: Catapult Assisted Take-Off Barrier Arrested Recovery (CATOBAR): A steam- or electric-powered catapult is connected to the aircraft and is used to accelerate conventional aircraft to a safe flying speed. By the end of the catapult stroke, the aircraft is airborne and further propulsion is provided by its own engines. This is the most expensive method, as it requires complex machinery to be installed under the flight deck, but it allows even heavily loaded aircraft to take off. Short Take-Off Barrier Arrested Recovery (STOBAR) depends on increasing the net lift on the aircraft. Aircraft do not require catapult assistance for take-off; instead, on nearly all ships of this type an upwards vector is provided by a ski-jump at the forward end of the flight deck, often combined with thrust vectoring by the aircraft. Alternatively, by reducing the fuel and weapon load, an aircraft is able to reach faster speeds, generate more upwards lift, and launch without a ski-jump or catapult. Short Take-Off Vertical-Landing (STOVL): On aircraft carriers, non-catapult-assisted, fixed-wing short takeoffs are accomplished with the use of thrust vectoring, which may also be used in conjunction with a runway "ski-jump". Use of STOVL tends to allow aircraft to carry a larger payload than during VTOL use, while still requiring only a short runway. The most famous examples are the Hawker Siddeley Harrier and the BAe Sea Harrier. Although technically VTOL aircraft, they are operationally STOVL aircraft due to the extra weight carried at take-off for fuel and armaments. The same is true of the Lockheed Martin F-35B Lightning II, which has demonstrated VTOL capability in test flights but is operationally STOVL or, in the case of the UK, uses "shipborne rolling vertical landing". Vertical Take-Off and Landing (VTOL): Certain aircraft are specifically designed for the purpose of using very high degrees of thrust vectoring (e.g. if the thrust-to-weight ratio is greater than 1, the aircraft can take off vertically), but they are usually slower than conventionally propelled aircraft due to the additional weight of the associated systems. On the recovery side of the flight deck, the adaptation to the aircraft load-out is mirrored. Non-VTOL or conventional aircraft cannot decelerate on their own, and almost all carriers using them must have arrested-recovery systems (-BAR, e.g. CATOBAR or STOBAR) to recover their aircraft. 
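The launch trade-offs described above can be made concrete with a back-of-the-envelope sketch. The figures below are hypothetical and are not taken from the article; they only illustrate how much deck run a conventional aircraft would need to reach a given take-off airspeed under constant acceleration, how wind over the deck shortens that run, and the thrust-to-weight check that determines whether a purely vertical take-off is possible at all.

```python
# Back-of-the-envelope launch arithmetic; every number here is hypothetical.
KNOT = 0.514444  # metres per second in one knot
G = 9.81         # gravitational acceleration, m/s^2

def deck_run_m(takeoff_airspeed_kt: float, wind_over_deck_kt: float,
               accel_ms2: float) -> float:
    """Deck run needed to reach take-off airspeed at constant acceleration.

    The aircraft only has to make up the difference between the required
    airspeed and the wind already flowing over the deck: v^2 = 2*a*s.
    """
    v = max(takeoff_airspeed_kt - wind_over_deck_kt, 0.0) * KNOT
    return v ** 2 / (2 * accel_ms2)

# Unassisted roll at ~0.35 g versus a catapult stroke at ~3 g (both illustrative).
for wod_kt in (0, 30):
    unassisted = deck_run_m(140, wod_kt, 0.35 * G)
    catapult = deck_run_m(140, wod_kt, 3.0 * G)
    print(f"wind over deck {wod_kt:2d} kt: unassisted run ≈ {unassisted:4.0f} m, "
          f"catapult stroke ≈ {catapult:3.0f} m")

# A purely vertical take-off requires thrust greater than weight (T/W > 1).
thrust_n = 180_000.0  # hypothetical STOVL jet engine thrust, newtons
for mass_kg in (22_000, 16_000):  # heavy (full fuel and ordnance) vs light
    ratio = thrust_n / (mass_kg * G)
    verdict = "vertical take-off possible" if ratio > 1 else "needs a rolling take-off"
    print(f"mass {mass_kg} kg: T/W = {ratio:.2f} -> {verdict}")
```

Under these assumptions an unassisted take-off roll would need several times the length of any flight deck, which is why some form of assistance (a catapult, a ski-jump with thrust vectoring, or a reduced load) is always involved; the same thrust-to-weight check shows why Harrier-type aircraft launch with a short roll when heavily loaded yet can recover vertically once fuel and ordnance have been expended.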
Aircraft that are landing extend a tailhook that catches on arrestor wires stretched across the deck to bring them to a stop in a short distance. Post-World War II Royal Navy research on safer CATOBAR recovery eventually led to universal adoption of a landing area angled off axis, to allow aircraft that missed the arresting wires to "bolt" and safely return to flight for another landing attempt rather than crashing into aircraft on the forward deck. If the aircraft are VTOL-capable or helicopters, they do not need to decelerate and hence there is no such need. The arrested-recovery system has used an angled deck since the 1950s because, if an aircraft fails to catch the arresting wire, the angled landing area leaves it a clear run to take off again, with nothing parked between the aircraft and the end of the runway. It also has the advantage of separating the recovery operation area from the launch area. Helicopters and aircraft capable of vertical or short take-off and landing (V/STOL) usually recover by coming abreast of the carrier on the port side and then using their hover capability to move over the flight deck and land vertically without the need for arresting gear. Staff and deck operations Carriers steam at speed, up to , into the wind during flight deck operations to increase the wind speed over the deck to a safe minimum. This increase in effective wind speed provides a higher launch airspeed for aircraft at the end of the catapult stroke or ski-jump, and also makes recovery safer by reducing the difference between the relative speeds of the aircraft and the ship. Since the early 1950s it has been the practice on conventional carriers to recover aircraft at an angle to port of the axial line of the ship. The primary function of this angled deck is to allow aircraft that miss the arresting wires, referred to as bolters, to become airborne again without the risk of hitting aircraft parked forward. The angled deck allows the installation of one or two "waist" catapults in addition to the two bow cats. An angled deck also improves launch and recovery cycle flexibility, with the option of simultaneous launching and recovery of aircraft. Conventional ("tailhook") aircraft rely upon a landing signal officer (LSO, radio call sign 'paddles') to monitor the aircraft's approach, visually gauge glideslope, attitude, and airspeed, and transmit that data to the pilot. Before the angled deck emerged in the 1950s, LSOs used colored paddles to signal corrections to the pilot (hence the nickname). From the late 1950s onward, visual landing aids such as the optical landing system have provided information on proper glide slope, but LSOs still transmit voice calls to approaching pilots by radio. Key personnel involved on the flight deck include the shooters, the handler, and the air boss. Shooters are naval aviators or naval flight officers and are responsible for launching aircraft. The handler works just inside the island from the flight deck and is responsible for the movement of aircraft before launching and after recovery. The "air boss" (usually a commander) occupies the top bridge (Primary Flight Control, also called primary or the tower) and has overall responsibility for controlling launch, recovery and "those aircraft in the air near the ship, and the movement of planes on the flight deck, which itself resembles a well-choreographed ballet". The captain of the ship spends most of his time one level below Primary on the Navigation Bridge. 
Below this is the Flag Bridge, designated for the embarked admiral and his staff. To facilitate working on the flight deck of a US aircraft carrier, the sailors wear colored shirts that designate their responsibilities. There are at least seven different colors worn by flight deck personnel for modern United States Navy carrier air operations. Carrier operations of other nations use similar color schemes. Deck structures The superstructure of a carrier (such as the bridge and flight control tower) is concentrated in a relatively small area called an island, a feature pioneered on in 1923. While the island is usually built on the starboard side of the flight deck, the Japanese aircraft carriers and had their islands built on the port side. Very few carriers have been designed or built without an island. The flush-deck configuration proved to have significant drawbacks, the primary one being management of the exhaust from the power plant. Fumes coming across the deck were a major issue in . In addition, the lack of an island meant difficulties in managing the flight deck and performing air traffic control, a lack of housing placements for radar, and problems with navigating and controlling the ship itself. Another deck structure that can be seen is a ski-jump ramp at the forward end of the flight deck. This was first developed to help short take-off vertical landing (STOVL) aircraft take off at far higher weights than is possible with a vertical or rolling takeoff on a flat deck. Originally developed by the Royal Navy, it has since been adopted by many navies for smaller carriers. A ski-jump ramp works by converting some of the forward rolling movement of the aircraft into vertical velocity and is sometimes combined with the aiming of jet thrust partly downward. This allows heavily loaded and fueled aircraft a few more precious seconds to attain sufficient air velocity and lift to sustain normal flight. Without a ski-jump, launching fully loaded and fueled aircraft such as the Harrier would not be possible on a smaller flat-deck ship without stalling out or crashing directly into the sea. Although STOVL aircraft are capable of taking off vertically from a spot on the deck, using the ramp and a running start is far more fuel-efficient and permits a heavier launch weight. As catapults are unnecessary, carriers with this arrangement reduce the weight, complexity, and space needed for complex steam or electromagnetic launching equipment. Vertical-landing aircraft also remove the need for arresting cables and related hardware. Russian, Chinese, and Indian carriers include a ski-jump ramp for launching lightly loaded conventional fighter aircraft but recover them using traditional carrier arresting cables and a tailhook on their aircraft. The disadvantage of the ski-jump is the penalty it exacts on aircraft size, payload, and fuel load (and thus range); heavily laden aircraft cannot launch using a ski-jump because their high loaded weight requires either a longer take-off roll than is possible on a carrier deck, or assistance from a catapult or JATO rocket. For example, the Russian Sukhoi Su-33 is only able to launch from the carrier with a minimal armament and fuel load. Another disadvantage arises in mixed flight deck operations where helicopters are also present, such as on a US landing helicopter dock or landing helicopter assault ship. 
A ski jump is not included, as this would eliminate one or more helicopter landing areas; this flat deck limits the loading of Harriers, but that is somewhat mitigated by the longer rolling start provided by a long flight deck compared to many STOVL carriers. National fleets The US Navy has the largest fleet of carriers in the world, with eleven supercarriers in service as of 2024. China and India each have two STOBAR carriers in service. The UK has two STOVL carriers in service. The navies of France and Russia each operate a single medium-sized carrier. The US also has nine similarly sized amphibious warfare ships. There are five small light carriers in use capable of operating both fixed-wing aircraft and helicopters; Japan and Italy each operate two, and Spain one. Additionally, there are eighteen small carriers which operate only helicopters, serving the navies of Australia (2), Brazil (1), China (2), Egypt (2), France (3), Japan (4), South Korea (2), Thailand (1) and Turkey (1). Algeria Current Kalaat Béni Abbès (L-474) is an amphibious transport dock of the Algerian National Navy with two deck-landing spots for helicopters. Australia Current The Royal Australian Navy operates two s. The two-ship class, based on the Spanish vessel and built by Navantia and BAE Systems Australia, comprises the largest ships ever built for the Royal Australian Navy. underwent sea trials in late 2013 and was commissioned in 2014. Her sister ship, , was commissioned in December 2015. The Australian ships retain the ski-ramp from the Juan Carlos I design, although the RAN has not acquired carrier-based fixed-wing aircraft. Brazil Current In December 2017, the Brazilian Navy confirmed the purchase of for £84.6 million (equivalent to R$359.5M and US$113.2M) and renamed her . The ship was decommissioned from Royal Navy service in March 2018. The Brazilian Navy commissioned the carrier on 29 June 2018 in the United Kingdom. After undertaking a period of maintenance in the UK, the ship travelled to its new home port, Arsenal de Marinha do Rio de Janeiro (AMRJ), to be fully operational by 2020. The ship displaces 21,578 tonnes, is long and has a range of . Before leaving HMNB Devonport for her new homeport in Rio's AMRJ, Atlântico underwent operational sea training under the Royal Navy's Flag Officer Sea Training (FOST) program. On 12 November 2020, Atlântico was redesignated "NAM", for "multipurpose aircraft carrier" (), from "PHM", for "multipurpose helicopter carrier" (), to reflect the ship's capability to operate with fixed-wing medium-altitude long-endurance unmanned aerial vehicles as well as crewed tiltrotor VTOL aircraft. China Current 2 STOBAR carriers: (60,900 tons) was originally built as the uncompleted Soviet carrier Varyag and was later purchased as a hulk from Ukraine in 1998 on the pretext of commercial use as a floating casino, then towed to China for rebuilding and completion. Liaoning was commissioned on 25 September 2012 and began service for testing and training. In November 2012, Liaoning launched and recovered Shenyang J-15 naval fighter aircraft for the first time. After a refit in January 2019, she was assigned to the North Sea Fleet, a change from her previous role as a training ship. (60,000–70,000 tons) was launched on 26 April 2017, the first to be built domestically, based on an improved Kuznetsov-class design. Shandong started sea trials on 23 April 2018 and entered service in December 2019. 
1 CATOBAR carrier: (80,000 tons) is a conventionally powered CATOBAR carrier that had been under construction since 2015–2016 and was launched in June 2022. She is being fitted out as of 2022 and will commence service in 2023–2024. 3 LHD amphibious assault ships: the first Type 075 LHD was commissioned on 23 April 2021 at the South Sea Fleet naval base in Sanya. A second ship, Guangxi, was commissioned on 26 December 2021, and a third ship, Anhui, was commissioned in October 2022. Future China has had a long-term plan to operate six large aircraft carriers, with two carriers per fleet. China is planning a class of eight LHD vessels, the Type 075 (NATO reporting name Yushen-class landing helicopter assault). This is a class of amphibious assault ship under construction by the Hudong–Zhonghua Shipbuilding company. The first ship was commissioned in April 2021. China is also planning a modified class of the same concept, the Type 076 landing helicopter dock, which will be equipped with an electromagnetic catapult system and will likely support launching unmanned combat aerial vehicles. Egypt Current Egypt signed a contract with French shipbuilder DCNS to buy two helicopter carriers for approximately 950 million euros. The two ships were originally to be sold to Russia, but the deal was cancelled by France due to the Russian invasion of Ukraine in 2014. On 2 June 2016, Egypt received the first of the two helicopter carriers acquired in October 2015, the landing helicopter dock . The flag transfer ceremony took place in the presence of the Egyptian and French navies' chiefs of staff, the chairmen and chief executive officers of both DCNS and STX France, and senior Egyptian and French officials. On 16 September 2016, DCNS delivered the second of the two helicopter carriers, the landing helicopter dock , which also participated in a joint military exercise with the French Navy before arriving at her home port of Alexandria. France Current The French Navy operates the 42,000-tonne nuclear-powered aircraft carrier, . Commissioned in 2001, she is the flagship of the French Navy. The ship carries a complement of Dassault Rafale M and E-2C Hawkeye aircraft, EC725 Caracal and AS532 Cougar helicopters for combat search and rescue, as well as modern electronics and Aster missiles. She is a CATOBAR-type carrier that uses two 75 m C13-3 steam catapults, a shorter version of the catapult system installed on US carriers, with one catapult at the bow and one across the front of the landing area. In addition, the French Navy operates three s. Future In October 2018, the French Ministry of Defence began an 18-month, €40 million study into the eventual replacement of the beyond 2030. In December 2020, President Macron announced that construction of the next-generation carrier would begin in around 2025, with sea trials to start in about 2036. The carrier is planned to have a displacement of around 75,000 tons and to carry about 32 next-generation fighters, two to three E-2D Advanced Hawkeyes and a yet-to-be-determined number of unmanned carrier air vehicles. India Current 2 STOBAR carriers: , 45,400 tonnes, modified Kiev class. The carrier was purchased by India on 20 January 2004, after years of negotiations, at a final price of $2.35 billion. The ship successfully completed her sea trials in July 2013 and aviation trials in September 2013. She was formally commissioned on 16 November 2013 at a ceremony held at Severodvinsk, Russia. 
, also known as Indigenous Aircraft Carrier 1 (IAC-1), is a 45,000-tonne aircraft carrier whose keel was laid in 2009. The new carrier will operate MiG-29K and naval HAL Tejas aircraft. The ship is powered by gas turbines, has a range of , and deploys 10 helicopters and 30 aircraft. The ship was launched in 2013, began sea trials in August 2021, and was commissioned on 2 September 2022. Future India has plans for a third carrier, , also known as Indigenous Aircraft Carrier 2 (IAC-2), with a displacement of over 65,000 tonnes; it is planned with a CATOBAR system to launch and recover heavier aircraft. India has also issued a request for information (RFI) to procure four landing helicopter docks displacing 30,000–40,000 tons, each with a capacity to operate 12 medium-lift special operations helicopters and two heavy-lift helicopters, along with troops for amphibious operations. Italy Current 1 STOVL carrier: : 30,000-tonne Italian STOVL carrier designed and built with secondary amphibious assault facilities, commissioned in 2008. Future Italy plans to replace the now decommissioned aircraft carrier Giuseppe Garibaldi, as well as one of the landing helicopter docks, with a new amphibious assault ship, to be named Trieste. The ship will be significantly larger than her predecessors, with a displacement of 38,000 tonnes at full load. Trieste is to carry the F-35B Joint Strike Fighter. Meanwhile, Giuseppe Garibaldi will be transferred to the Italian Space Operation Command for use as a satellite launch platform. Japan Current 2 Izumo-class ships – the 19,500-tonne (27,000 tonnes full load) STOVL carrier Izumo was launched in August 2013 and commissioned in March 2015. Izumo's sister ship, Kaga, was commissioned in 2017. In December 2018, the Japanese Cabinet gave approval to convert both Izumo-class destroyers into aircraft carriers for F-35B STOVL operations. The conversion of Izumo was underway as of mid-2020. The modification of the maritime escort vessels is intended to "increase operational flexibility" and enhance Pacific air defense; the Japanese defense ministry's position is "We are not creating carrier air wings or carrier air squadrons" similar to those of the US Navy. The Japanese STOVL F-35s, when delivered, will be operated by the Japan Air Self Defense Force from land bases; according to the 2020 Japanese Defense Ministry white paper, the STOVL model was chosen for the JASDF due to the lack of appropriately long runways to support air superiority capability across all of Japanese airspace. Japan has requested that the USMC deploy STOVL F-35s and crews aboard the Izumo-class ships "for cooperation and advice on how to operate the fighter on the deck of the modified ships". On 3 October 2021, two USMC F-35Bs performed the first vertical landings and horizontal take-offs from JS Izumo, marking 75 years since fixed-wing aircraft had last operated from a Japanese carrier. 2 s – 19,000-tonne (full load) anti-submarine warfare carriers with enhanced command-and-control capabilities allowing them to serve as fleet flagships. Qatar Current Qatari amphibious transport dock Al Fulk Russia Current 1 STOBAR carrier: Admiral Flota Sovetskogo Soyuza Kuznetsov: 55,000-tonne STOBAR aircraft carrier. Launched in 1985 as Tbilisi, renamed and operational from 1995. Without catapults, she can launch and recover lightly fueled naval fighters for air defense or anti-ship missions, but not heavy conventional bombing strikes. Officially designated an aircraft-carrying cruiser, she is unique in carrying a heavy cruiser's complement of defensive weapons and large P-700 Granit offensive missiles. 
The P-700 systems will be removed in the coming refit to enlarge her below-decks aviation facilities, as well as to upgrade her defensive systems. The ship has been out of service and under repair since 2018. The current projection is that repairs will be completed and the ship will be transferred back to the Russian Navy sometime in 2024; however, this may be pushed back to 2025 if issues arise during overhaul and testing. Future The Russian Government has been considering the potential replacement of Admiral Kuznetsov for some time and has considered the Shtorm-class aircraft carrier as a possible option. This carrier would be a hybrid of CATOBAR and STOBAR, as she would use both systems for launching aircraft. The carrier is expected to cost . As of 2020, the project had not yet been approved and, given the financial costs, it was unclear whether it would be made a priority over other elements of Russian naval modernization. A class of two LHDs, Project 23900, is planned, and an official keel-laying ceremony for the project took place on 20 July 2020. South Korea Current Two 18,860-tonne full-deck amphibious assault ships with a hospital, a well deck, and facilities to serve as fleet flagships. Future South Korea has set tentative plans for procuring two light aircraft carriers by 2033, which would help make the ROKN a blue-water navy. In December 2020, details of South Korea's planned carrier program (CVX) were finalized. A vessel of about 40,000 tons is envisaged, carrying about 20 F-35B fighters as well as future maritime attack helicopters. Service entry had been anticipated in the early 2030s. The program has encountered opposition in the National Assembly. In November 2021, the National Defense Committee of the National Assembly reduced the program's requested budget of 7.2 billion KRW to just 500 million KRW (about US$400,000), effectively putting the project on hold, at least temporarily. However, on 3 December 2021 the full budget of 7.2 billion won was passed by the National Assembly. Basic design work is to begin in earnest starting in 2022. Spain Current : a 27,000-tonne, specially designed multipurpose strategic projection ship which can operate as an amphibious assault ship and as an aircraft carrier. Juan Carlos I has full facilities for both functions, including a ski jump for STOVL operations, and is equipped with AV-8B Harrier II attack aircraft. She also features a well deck and a vehicle storage area which can be used as additional hangar space. The vessel was launched in 2008 and commissioned on 30 September 2010. Thailand Current 1 offshore helicopter support ship: an 11,400-tonne STOVL carrier based on a Spanish design, commissioned in 1997. The AV-8S Matador/Harrier STOVL fighter wing, mostly inoperable by 1999, was retired from service without replacement in 2006. As of 2010, the ship is used for helicopter operations and for disaster relief. Turkey Current TCG Anadolu is a 27,079-tonne amphibious assault ship (LHD), outfitted as a drone carrier, of the Turkish Navy that can be configured as a 24,660-tonne V/STOL aircraft carrier. Construction began on 30 April 2016 by Sedef Shipbuilding Inc. at their Istanbul shipyard. TCG Anadolu was commissioned with a ceremony on 10 April 2023. The construction of a sister ship, to be named TCG Trakya, is currently being planned by the Turkish Navy. 
The Sikorsky S-70B Seahawk and the Bell AH-1 SuperCobra are the two main types of helicopters used on TCG Anadolu, with the occasional use of CH-47F Chinook helicopters of the Turkish Army during military exercises and operations. The AH-1W Super Cobras will eventually be complemented and replaced by the TAI T929 ATAK 2. The jet-powered, low-observable drone Bayraktar MIUS Kızılelma and the MALE UAV Bayraktar TB3 are two UCAVs that are specifically designed and manufactured by Baykar Technologies to be used on TCG Anadolu. The maiden flight of the TAI Anka-3 (also part of Project MIUS), a jet-powered, flying-wing-type UCAV with stealth technology, was successfully completed on 28 December 2023. On 19 November 2024, a Baykar Bayraktar TB3 UCAV successfully took off from the flight deck of TCG Anadolu and landed back on the ship. It was the first time a fixed-wing unmanned aircraft of this size and class had successfully landed on a short-runway landing helicopter dock without the use of arresting gear. Future On 3 January 2024, the Turkish government approved the plan for the design and construction of a larger aircraft carrier, named the MUGEM-class. On 15 February 2024, the Design and Projects Office of the Turkish Navy announced that it will be a STOBAR aircraft carrier with an overall length of , beam of , draught of , and a displacement of 60,000 tons. It is to have a COGAG propulsion system and a maximum speed of more than . The construction of the first MUGEM-class aircraft carrier began on 2 January 2025 at the Istanbul Naval Shipyard. United Kingdom Current Two 80,600-tonne (estimated full load) Queen Elizabeth-class STOVL carriers which operate the F-35 Lightning II. The lead ship was commissioned in December 2017 and her sister ship in December 2019. Queen Elizabeth undertook her first operational deployment in 2021. Each Queen Elizabeth-class ship is able to operate around 40 aircraft during peacetime operations and is thought to be able to carry up to 72 at maximum capacity. As of the end of April 2020, 18 F-35B aircraft had been delivered to the Royal Navy and the Royal Air Force. "Full operating capability" for the UK's carrier strike capability had been planned for 2023 (two squadrons, or 24 jets, operating from one carrier). The longer-term aim remains the ability to conduct a wide range of air operations and support amphibious operations worldwide from both carriers by 2026. They form the central part of the UK Carrier Strike Group. The Queen Elizabeth-class ships are expected to have service lives of 50 years. United States Current 11 CATOBAR carriers, all nuclear-powered: : ten 101,000-tonne fleet carriers, the first of which was commissioned in 1975. A Nimitz-class carrier is powered by two nuclear reactors providing steam to four steam turbines. , one 100,000-tonne fleet carrier. The lead ship of the class came into service in 2017, with another nine planned to replace the aging Nimitz-class ships. Nine amphibious assault ships carrying vehicles, Marine fighters, attack and transport helicopters, and landing craft, with STOVL fighters for Close Air Support (CAS) and Combat Air Patrol (CAP): : a class of 45,000-tonne amphibious assault ships; although the first two ships in this class (Flight 0) do not have well decks, all subsequent ships (Flight I) are to have them. Two ships are currently in service out of a planned 11. 
Ships of this class can have a secondary mission as light aircraft carriers with 20 AV-8B Harrier IIs, and in the future F-35B Lightning II aircraft, after unloading their Marine expeditionary unit. : a class of 41,000-tonne amphibious assault ships; members of this class have been used in wartime in their secondary mission as light carriers with 20 to 25 AV-8Bs after unloading their Marine expeditionary unit. Seven ships are currently in service of an original eight, with one lost to fire. Future The current US fleet of Nimitz-class carriers will be followed into service (and in some cases replaced) by the . It is expected that the ships will be more automated in an effort to reduce the amount of funding required to maintain and operate the vessels. The main new features are the implementation of the Electromagnetic Aircraft Launch System (EMALS), which replaces the old steam catapults, and unmanned aerial vehicles. In terms of future carrier developments, Congress has discussed the possibility of accelerating the phasing-out of one or more Nimitz-class carriers, postponing or canceling the procurement of CVN-81 and CVN-82, or modifying the purchase contract. Following the deactivation of in December 2012, the US fleet comprised 10 fleet carriers, but that number increased back to 11 with the commissioning of Gerald R. Ford in July 2017. On 24 July 2007, the House Armed Services Seapower subcommittee recommended seven or eight new carriers (one every four years). However, the debate has deepened over the $12–14.5 billion budget (plus $12 billion for development and research) for the 100,000-tonne Gerald R. Ford-class carrier (estimated to enter service in 2017) compared to the smaller $2 billion 45,000-tonne s, which are able to deploy squadrons of F-35Bs. The first of this class, , is now in active service with another, , and nine more are planned. In a report to Congress in February 2018, the Navy stated that it intends to maintain a "12 CVN force" as part of its 30-year acquisition plan. Aircraft carriers in preservation Current museum carriers A few aircraft carriers have been preserved as museum ships. They are: in Mount Pleasant, South Carolina; in New York City; in Alameda, California; in Corpus Christi, Texas; in San Diego, California; in Tianjin, China; and in Nantong, China. Former museum carriers was moored as a museum in Mumbai from 2001 to 2012, but was never able to find an industrial partner and was closed that year. She was scrapped in 2014. was acquired for preservation by the Cabot Museum Foundation and moored in New Orleans from 1989 to 1997, but due to the Cabot Museum Foundation's failure to repay the U.S. Coast Guard over $1 million for the removal of hazardous materials and fees associated with its docking, it was seized by the U.S. Marshals in 1999 and auctioned off to Sabe Marine Salvage. Scrapping of the ship began in November 2000. Planned but cancelled museum carriers had a preservation campaign to bring her to the West Coast of the United States as the world's first amphibious assault ship museum. However, at RIMPAC 2024, on 9 July 2024, the Tarawa was sunk alongside as SINKEXs.
Technology
Naval warfare
null
2221
https://en.wikipedia.org/wiki/Apicomplexa
Apicomplexa
The Apicomplexa (also called Apicomplexia; singular: apicomplexan) are organisms of a large phylum of mainly parasitic alveolates. Most possess a unique form of organelle structure that comprises a type of non-photosynthetic plastid called an apicoplast, with an apical complex membrane. The organelle's apical shape (e.g., see Ceratium furca) is an adaptation that the apicomplexan applies in penetrating a host cell. The Apicomplexa are unicellular and spore-forming. Most are obligate endoparasites of animals, except Nephromyces, a symbiont in marine animals, originally classified as a chytrid fungus, and the Chromerida, some of which are photosynthetic partners of corals. Motile structures such as flagella or pseudopods are present only in certain gamete stages. The Apicomplexa are a diverse group that includes organisms such as the coccidia, gregarines, piroplasms, haemogregarines, and plasmodia. Diseases caused by Apicomplexa include: Babesiosis (Babesia) Malaria (Plasmodium) Cryptosporidiosis (Cryptosporidium parvum) Cyclosporiasis (Cyclospora cayetanensis) Cystoisosporiasis (Cystoisospora belli) Toxoplasmosis (Toxoplasma gondii) The name Apicomplexa derives from two Latin words—apex (top) and complexus (infolds)—referring to the set of organelles in the sporozoite. The Apicomplexa comprise the bulk of what used to be called the Sporozoa, a group of parasitic protozoans, in general without flagella, cilia, or pseudopods. Most of the Apicomplexa are motile, however, with a gliding mechanism that uses adhesions and small static myosin motors. The other main lines of this obsolete grouping were the Ascetosporea (a group of Rhizaria), the Myxozoa (highly derived cnidarian animals), and the Microsporidia (derived from fungi). Sometimes, the name Sporozoa is taken as a synonym for the Apicomplexa, or occasionally as a subset. Description The phylum Apicomplexa contains all eukaryotes with a group of structures and organelles collectively termed the apical complex. This complex consists of structural components and secretory organelles required for invasion of host cells during the parasitic stages of the apicomplexan life cycle. Apicomplexa have complex life cycles, involving several stages and typically undergoing both asexual and sexual replication. All Apicomplexa are obligate parasites for some portion of their life cycle, with some parasitizing two separate hosts for their asexual and sexual stages. Besides the conserved apical complex, Apicomplexa are morphologically diverse. Different organisms within Apicomplexa, as well as different life stages for a given apicomplexan, can vary substantially in size, shape, and subcellular structure. Like other eukaryotes, Apicomplexa have a nucleus, endoplasmic reticulum and Golgi complex. Apicomplexa generally have a single mitochondrion, as well as another endosymbiont-derived organelle called the apicoplast, which maintains a separate 35-kilobase circular genome (with the exception of Cryptosporidium species and Gregarina niphandrodes, which lack an apicoplast). All members of this phylum have an infectious stage—the sporozoite—which possesses three distinct structures in an apical complex. The apical complex consists of a set of spirally arranged microtubules (the conoid), a secretory body (the rhoptry) and one or more polar rings. Additional slender electron-dense secretory bodies (micronemes) surrounded by one or two polar rings may also be present. This structure gives the phylum its name. 
A further group of spherical organelles is distributed throughout the cell rather than being localized at the apical complex and are known as the dense granules. These typically have a mean diameter around 0.7 μm. Secretion of the dense-granule content takes place after parasite invasion and localization within the parasitophorous vacuole and persists for several minutes. Flagella are found only in the motile gamete. These are posteriorly directed and vary in number (usually one to three). Basal bodies are present. Although hemosporidians and piroplasmids have normal triplets of microtubules in their basal bodies, coccidians and gregarines have nine singlets. The mitochondria have tubular cristae. Centrioles, chloroplasts, ejectile organelles, and inclusions are absent. The cell is surrounded by a pellicle of three membrane layers (the alveolar structure) penetrated by micropores. Replication: Mitosis is usually closed, with an intranuclear spindle; in some species, it is open at the poles. Cell division is usually by schizogony. Meiosis occurs in the zygote. Mobility: Apicomplexans have a unique gliding capability which enables them to cross through tissues and enter and leave their host cells. This gliding ability is made possible by the use of adhesions and small static myosin motors. Other features common to this phylum are a lack of cilia, sexual reproduction, use of micropores for feeding, and the production of oocysts containing sporozoites as the infective form. Transposons appear to be rare in this phylum, but have been identified in the genera Ascogregarina and Eimeria. Life cycle Most members have a complex lifecycle, involving both asexual and sexual reproduction. Typically, a host is infected via an active invasion by the parasites (similar to entosis), which divide to produce sporozoites that enter its cells. Eventually, the cells burst, releasing merozoites, which infect new cells. This may occur several times, until gamonts are produced, forming gametes that fuse to create new cysts. Many variations occur on this basic pattern, however, and many Apicomplexa have more than one host. The apical complex includes vesicles called rhoptries and micronemes, which open at the anterior of the cell. These secrete enzymes that allow the parasite to enter other cells. The tip is surrounded by a band of microtubules, called the polar ring, and among the Conoidasida is also a funnel of tubulin proteins called the conoid. Over the rest of the cell, except for a diminished mouth called the micropore, the membrane is supported by vesicles called alveoli, forming a semirigid pellicle. The presence of alveoli and other traits place the Apicomplexa among a group called the alveolates. Several related flagellates, such as Perkinsus and Colpodella, have structures similar to the polar ring and were formerly included here, but most appear to be closer relatives of the dinoflagellates. They are probably similar to the common ancestor of the two groups. Another similarity is that many apicomplexan cells contain a single plastid, called the apicoplast, surrounded by either three or four membranes. Its functions are thought to include tasks such as lipid and heme biosynthesis, and it appears to be necessary for survival. In general, plastids are considered to have a common origin with the chloroplasts of dinoflagellates, and evidence points to an origin from red algae rather than green. 
Subgroups Within this phylum are four groups — coccidians, gregarines, haemosporidians (or haematozoans, which also include the piroplasms), and marosporidians. The coccidians and haematozoans appear to be relatively closely related. Perkinsus, while once considered a member of the Apicomplexa, has been moved to a new phylum — Perkinsozoa. Gregarines The gregarines are generally parasites of annelids, arthropods, and molluscs. They are often found in the guts of their hosts, but may invade other tissues. In the typical gregarine lifecycle, a trophozoite develops within a host cell into a schizont. This then divides into a number of merozoites by schizogony. The merozoites are released by lysing the host cell and in turn invade other cells. At some point in the apicomplexan lifecycle, gametocytes are formed. These are released by lysis of the host cells and group together. Each gametocyte forms multiple gametes. The gametes fuse with one another to form oocysts. The oocysts leave the host to be taken up by a new host. Coccidians In general, coccidians are parasites of vertebrates. Like gregarines, they are commonly parasites of the epithelial cells of the gut, but may infect other tissues. The coccidian lifecycle involves merogony, gametogony, and sporogony. While similar to that of the gregarines, it differs in zygote formation. Some trophozoites enlarge and become macrogametes, whereas others divide repeatedly to form microgametes (anisogamy). The microgametes are motile and must reach the macrogamete to fertilize it. The fertilized macrogamete forms a zygote that in turn forms an oocyst, which is normally released from the body. Syzygy, when it occurs, involves markedly anisogamous gametes. The lifecycle is typically haploid, with the only diploid stage occurring in the zygote, which is normally short-lived. The main difference between the coccidians and the gregarines lies in the gamonts. In the coccidia, these are small, intracellular, and without epimerites or mucrons. In the gregarines, these are large, extracellular, and possess epimerites or mucrons. A second difference between the coccidia and the gregarines also lies in the gamonts. In the coccidians, a single gamont becomes a macrogametocyte, whereas in the gregarines, the gamonts give rise to multiple gametocytes. Haemosporidia The Haemosporidia have more complex lifecycles that alternate between an arthropod and a vertebrate host. The trophozoite parasitises erythrocytes or other tissues in the vertebrate host. Microgametes and macrogametes are always found in the blood. The gametes are taken up by the insect vector during a blood meal. The microgametes migrate within the gut of the insect vector and fuse with the macrogametes. The fertilized macrogamete now becomes an ookinete, which penetrates the body of the vector. The ookinete then transforms into an oocyst and divides initially by meiosis and then by mitosis (haplontic lifecycle) to give rise to the sporozoites. The sporozoites escape from the oocyst and migrate within the body of the vector to the salivary glands, where they are injected into the new vertebrate host when the insect vector feeds again. Marosporida The class Marosporida Mathur, Kristmundsson, Gestal, Freeman, and Keeling 2020 is a newly recognized lineage of apicomplexans that is sister to the Coccidia and Hematozoa. It is defined as a phylogenetic clade containing Aggregata octopiana Frenzel 1885, Merocystis kathae Dakin, 1911 (both Aggregatidae, originally coccidians), Rhytidocystis sp. 
1 and Rhytidocystis sp. 2 Janouškovec et al. 2019 (Rhytidocystidae Levine, 1979, originally coccidians, Agamococcidiorida), and Margolisiella islandica Kristmundsson et al. 2011 (closely related to Rhytidocystidae). Marosporida infect marine invertebrates. Members of this clade retain plastid genomes and the canonical apicomplexan plastid metabolism. However, marosporidians have the most reduced apicoplast genomes sequenced to date, lack canonical plastidial RNA polymerase and so provide new insights into reductive organelle evolution. Ecology and distribution Many of the apicomplexan parasites are important pathogens of humans and domestic animals. In contrast to bacterial pathogens, these apicomplexan parasites are eukaryotic and share many metabolic pathways with their animal hosts. This makes therapeutic target development extremely difficult – a drug that harms an apicomplexan parasite is also likely to harm its human host. At present, no effective vaccines are available for most diseases caused by these parasites. Biomedical research on these parasites is challenging because it is often difficult, if not impossible, to maintain live parasite cultures in the laboratory and to genetically manipulate these organisms. In recent years, several of the apicomplexan species have been selected for genome sequencing. The availability of genome sequences provides a new opportunity for scientists to learn more about the evolution and biochemical capacity of these parasites. The predominant source of this genomic information is the EuPathDB family of websites, which currently provides specialised services for Plasmodium species (PlasmoDB), coccidians (ToxoDB), piroplasms (PiroplasmaDB), and Cryptosporidium species (CryptoDB). One possible target for drugs is the plastid, and in fact existing drugs such as tetracyclines, which are effective against apicomplexans, seem to operate against the plastid. Many Coccidiomorpha have an intermediate host, as well as a primary host, and the evolution of hosts proceeded in different ways and at different times in these groups. For some coccidiomorphs, the original host has become the intermediate host, whereas in others it has become the definitive host. In the genera Aggregata, Atoxoplasma, Cystoisospora, Schellackia, and Toxoplasma, the original is now definitive, whereas in Akiba, Babesiosoma, Babesia, Haemogregarina, Haemoproteus, Hepatozoon, Karyolysus, Leucocytozoon, Plasmodium, Sarcocystis, and Theileria, the original hosts are now intermediate. Similar strategies to increase the likelihood of transmission have evolved in multiple genera. Polyenergid oocysts and tissue cysts are found in representatives of the orders Protococcidiorida and Eimeriida. Hypnozoites are found in Karyolysus lacerate and most species of Plasmodium; transovarial transmission of parasites occurs in lifecycles of Karyolysus and Babesia. Horizontal gene transfer appears to have occurred early on in this phylum's evolution with the transfer of a histone H4 lysine 20 (H4K20) modifier, KMT5A (Set8), from an animal host to the ancestor of apicomplexans. A second gene—H3K36 methyltransferase (Ashr3 in plants)—may have also been horizontally transferred. 
Blood-borne genera Within the Apicomplexa are three suborders of parasites: suborder Adeleorina—eight genera suborder Laveraniina (formerly Haemosporina)—all genera in this suborder suborder Eimeriorina—two genera (Lankesterella and Schellackia) Within the Adeleorina are species that infect invertebrates and others that infect vertebrates. In the Eimeriorina—the largest suborder in this phylum—the lifecycle involves both sexual and asexual stages. The asexual stages reproduce by schizogony. The male gametocyte produces a large number of gametes and the zygote gives rise to an oocyst, which is the infective stage. The majority are monoxenous (infect one host only), but a few are heteroxenous (lifecycle involves two or more hosts). The number of families in this latter suborder is debated, with the number of families being between one and 20 depending on the authority and the number of genera being between 19 and 25. Taxonomy History The first Apicomplexa protozoan was seen by Antonie van Leeuwenhoek, who in 1674 probably saw oocysts of Eimeria stiedae in the gall bladder of a rabbit. The first species of the phylum to be described, Gregarina ovata, in earwigs' intestines, was named by Dufour in 1828. He thought that they were a peculiar group related to the trematodes, at that time included in Vermes. Since then, many more have been identified and named. During 1826–1850, 41 species and six genera of Apicomplexa were named. In 1951–1975, 1873 new species and 83 new genera were added. The older taxon Sporozoa, included in Protozoa, was created by Leuckart in 1879 and adopted by Bütschli in 1880. Through history, it grouped many unrelated groups with the current Apicomplexa. For example, Kudo (1954) included in the Sporozoa species of the Ascetosporea (Rhizaria), Microsporidia (Fungi), Myxozoa (Animalia), and Helicosporidium (Chlorophyta), while Zierdt (1978) included the genus Blastocystis (Stramenopiles). Dermocystidium was also thought to be a sporozoan. Not all of these groups had spores, but all were parasitic. However, other parasitic or symbiotic unicellular organisms were also included in protozoan groups outside Sporozoa (Flagellata, Ciliophora and Sarcodina) if they had flagella (e.g., many Kinetoplastida, Retortamonadida, Diplomonadida, Trichomonadida, Hypermastigida), cilia (e.g., Balantidium) or pseudopods (e.g., Entamoeba, Acanthamoeba, Naegleria). If they had cell walls, they could also be placed in the plant kingdom, alongside bacteria or yeasts. Sporozoa is no longer regarded as biologically valid and its use is discouraged, although some authors still use it as a synonym for the Apicomplexa. More recently, other groups were excluded from Apicomplexa, e.g., Perkinsus and Colpodella (now in Protalveolata). The field of classifying Apicomplexa is in flux and classification has changed throughout the years since it was formally named in 1970. By 1987, a comprehensive survey of the phylum was completed: in all, 4516 species and 339 genera had been named. They consisted of: Class Conoidasida Subclass Gregarinasina p.p.
Order Eugregarinorida, with 1624 named species and 231 named genera Subclass Coccidiasina p.p. Order Eucoccidiorida p.p. Suborder Adeleorina p.p. Group Hemogregarines, with 399 species and four genera Suborder Eimeriorina, with 1771 species and 43 genera Class Aconoidasida Order Haemospororida, with 444 species and nine genera Order Piroplasmorida, with 173 species and 20 genera Other minor groups omitted above, with 105 species and 32 genera Although considerable revision of this phylum has been done (the order Haemosporidia now has 17 genera rather than 9), these numbers are probably still approximately correct. Jacques Euzéby (1988) Jacques Euzéby in 1988 created a new class Haemosporidiasina by merging subclass Piroplasmasina and suborder Haemospororina. Subclass Gregarinasina (the gregarines) Subclass Coccidiasina Suborder Adeleorina (the adeleorins) Suborder Eimeriorina (the eimeriorins) Subclass Haemosporidiasina Order Achromatorida Order Chromatorida The division into Achromatorida and Chromatorida, although proposed on morphological grounds, may have a biological basis, as the ability to store haemozoin appears to have evolved only once. Roberts and Janovy (1996) Roberts and Janovy in 1996 divided the phylum into the following subclasses and suborders (omitting classes and orders): Subclass Gregarinasina (the gregarines) Subclass Coccidiasina Suborder Adeleorina (the adeleorins) Suborder Eimeriorina (the eimeriorins) Suborder Haemospororina (the haemospororins) Subclass Piroplasmasina (the piroplasms) These form the following five taxonomic groups: The gregarines are, in general, one-host parasites of invertebrates. The adeleorins are one-host parasites of invertebrates or vertebrates, or two-host parasites that alternately infect haematophagous (blood-feeding) invertebrates and the blood of vertebrates. The eimeriorins are a diverse group that includes one-host species of invertebrates, two-host species of invertebrates, one-host species of vertebrates and two-host species of vertebrates. The eimeriorins are frequently called the coccidia. This term is often used to include the adeleorins. Haemospororins, often known as the malaria parasites, are two-host Apicomplexa that parasitize blood-feeding dipteran flies and the blood of various tetrapod vertebrates. The piroplasms are all two-host parasites infecting ticks and vertebrates. Perkins (2000) Perkins et al. proposed the following scheme. It is outdated, as the Perkinsidae have since been recognised as a sister group to the dinoflagellates rather than the Apicomplexa: Class Aconoidasida Conoid present only in the ookinete of some species Order Haemospororida Macrogamete and microgamete develop separately. Syzygy does not occur. Ookinete has a conoid. Sporozoites have three walls. Heteroxenous: alternates between vertebrate host (in which merogony occurs) and invertebrate host (in which sporogony occurs). Usually blood parasites, transmitted by blood-sucking insects. Order Piroplasmorida Class Conoidasida Subclass Gregarinasina Order Archigregarinorida Order Eugregarinorida Suborder Adeleorina Suborder Eimeriorina Order Neogregarinorida Subclass Coccidiasina Order Agamococcidiorida Order Eucoccidiorida Order Ixorheorida Order Protococcidiorida Class Perkinsasida Order Perkinsorida Family Perkinsidae The name Protospiromonadida has been proposed for the common ancestor of the Gregarinomorpha and Coccidiomorpha. Another group of organisms that belongs in this taxon is the corallicolids.
These are found in coral reef gastric cavities. Their relationship to the others in this phylum has yet to be established. Another genus has been identified - Nephromyces - which appears to be a sister taxon to the Hematozoa. This genus is found in the renal sac of molgulid ascidian tunicates. Evolution Members of this phylum, except for the photosynthetic chromerids, are parasitic and evolved from a free-living ancestor. This lifestyle is presumed to have evolved at the time of the divergence of dinoflagellates and apicomplexans. Further evolution of this phylum has been estimated to have occurred about . The oldest extant clade is thought to be the archigregarines. These phylogenetic relations have rarely been studied at the subclass level. The Haemosporidia are related to the gregarines, and the piroplasms and coccidians are sister groups. The Haemosporidia and the Piroplasma appear to be sister clades, and are more closely related to the coccidians than to the gregarines. Marosporida is a sister group to Coccidiomorphea. Janouškovec et al. 2015 presents a somewhat different phylogeny, supporting the work of others showing multiple events of plastids losing photosynthesis. More importantly this work provides the first phylogenetic evidence that there have also been multiple events of plastids becoming genome-free.
Biology and health sciences
SAR supergroup
Plants
2230
https://en.wikipedia.org/wiki/Analysis%20of%20algorithms
Analysis of algorithms
In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm. The term "analysis of algorithms" was coined by Donald Knuth. Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms. In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta notation are used to this end. For instance, binary search is said to run in a number of steps proportional to the logarithm of the size of the sorted list being searched, or in O(log n), colloquially "in logarithmic time". Usually asymptotic estimates are used because different implementations of the same algorithm may differ in efficiency. However, the efficiencies of any two "reasonable" implementations of a given algorithm are related by a constant multiplicative factor called a hidden constant. Exact (not asymptotic) measures of efficiency can sometimes be computed but they usually require certain assumptions concerning the particular implementation of the algorithm, called a model of computation. A model of computation may be defined in terms of an abstract computer, e.g. Turing machine, and/or by postulating that certain operations are executed in unit time. For example, if the sorted list to which we apply binary search has n elements, and we can guarantee that each lookup of an element in the list can be done in unit time, then at most log2(n) + 1 time units are needed to return an answer.
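To make the unit-cost binary-search bound concrete, here is a small illustrative sketch (not part of the original article; the function and variable names are arbitrary) that counts the list lookups performed by a textbook binary search. Under the unit-time lookup assumption above, the lookup count is the number of time units used, and it never exceeds log2(n) + 1.

import math

def binary_search_with_count(sorted_list, target):
    # Textbook binary search that also reports how many list lookups it made.
    lo, hi = 0, len(sorted_list) - 1
    lookups = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        lookups += 1  # one unit-cost lookup of sorted_list[mid]
        if sorted_list[mid] == target:
            return mid, lookups
        if sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, lookups

for n in (1_000, 32_768, 1_000_000):
    data = list(range(n))
    _, lookups = binary_search_with_count(data, n)  # n is absent, a worst-case probe
    print(n, lookups, math.floor(math.log2(n)) + 1)  # lookups is at most floor(log2(n)) + 1

Even for a million-element list the count stays around twenty, which is the practical content of the logarithmic bound.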
Two cost models are generally used: the uniform cost model, also called unit-cost model (and similar variations), assigns a constant cost to every machine operation, regardless of the size of the numbers involved the logarithmic cost model, also called logarithmic-cost measurement (and similar variations), assigns a cost to every machine operation proportional to the number of bits involved The latter is more cumbersome to use, so it is only employed when necessary, for example in the analysis of arbitrary-precision arithmetic algorithms, like those used in cryptography. A key point which is often overlooked is that published lower bounds for problems are often given for a model of computation that is more restricted than the set of operations that you could use in practice and therefore there are algorithms that are faster than what would naively be thought possible. Run-time analysis Run-time analysis is a theoretical classification that estimates and anticipates the increase in running time (or run-time or execution time) of an algorithm as its input size (usually denoted as ) increases. Run-time efficiency is a topic of great interest in computer science: A program can take seconds, hours, or even years to finish executing, depending on which algorithm it implements. While software profiling techniques can be used to measure an algorithm's run-time in practice, they cannot provide timing data for all infinitely many possible inputs; the latter can only be achieved by the theoretical methods of run-time analysis. Shortcomings of empirical metrics Since algorithms are platform-independent (i.e. a given algorithm can be implemented in an arbitrary programming language on an arbitrary computer running an arbitrary operating system), there are additional significant drawbacks to using an empirical approach to gauge the comparative performance of a given set of algorithms. Take as an example a program that looks up a specific entry in a sorted list of size n. Suppose this program were implemented on Computer A, a state-of-the-art machine, using a linear search algorithm, and on Computer B, a much slower machine, using a binary search algorithm. Benchmark testing on the two computers running their respective programs might look something like the following: Based on these metrics, it would be easy to jump to the conclusion that Computer A is running an algorithm that is far superior in efficiency to that of Computer B. However, if the size of the input-list is increased to a sufficient number, that conclusion is dramatically demonstrated to be in error: Computer A, running the linear search program, exhibits a linear growth rate. The program's run-time is directly proportional to its input size. Doubling the input size doubles the run-time, quadrupling the input size quadruples the run-time, and so forth. On the other hand, Computer B, running the binary search program, exhibits a logarithmic growth rate. Quadrupling the input size only increases the run-time by a constant amount (in this example, 50,000 ns). Even though Computer A is ostensibly a faster machine, Computer B will inevitably surpass Computer A in run-time because it is running an algorithm with a much slower growth rate. Orders of growth Informally, an algorithm can be said to exhibit a growth rate on the order of a mathematical function if beyond a certain input size , the function times a positive constant provides an upper bound or limit for the run-time of that algorithm. 
In other words, for a given input size n greater than some n0 and a constant c, the run-time of that algorithm will never be larger than c·f(n), where f is the function in question. This concept is frequently expressed using Big O notation. For example, since the run-time of insertion sort grows quadratically as its input size increases, insertion sort can be said to be of order O(n^2). Big O notation is a convenient way to express the worst-case scenario for a given algorithm, although it can also be used to express the average-case — for example, the worst-case scenario for quicksort is O(n^2), but the average-case run-time is O(n log n). Empirical orders of growth Assuming the run-time follows the power rule, t ≈ k·n^b, the coefficient b can be found by taking empirical measurements of run-time {t1, t2} at some problem-size points {n1, n2}, and calculating t2/t1 = (n2/n1)^b, so that b = log(t2/t1) / log(n2/n1). In other words, this measures the slope of the empirical line on the log–log plot of run-time vs. input size, at some size point. If the order of growth indeed follows the power rule (and so the line on the log–log plot is indeed a straight line), the empirical value of b will stay constant at different ranges, and if not, it will change (and the line is a curved line)—but still could serve for comparison of any two given algorithms as to their empirical local orders of growth behaviour. Applied to the above table: It is clearly seen that the first algorithm exhibits a linear order of growth indeed following the power rule. The empirical values for the second one are diminishing rapidly, suggesting it follows another rule of growth and in any case has much lower local orders of growth (and improving further still), empirically, than the first one. Evaluating run-time complexity The run-time complexity for the worst-case scenario of a given algorithm can sometimes be evaluated by examining the structure of the algorithm and making some simplifying assumptions. Consider the following pseudocode:

1 get a positive integer n from input
2 if n > 10
3    print "This might take a while..."
4 for i = 1 to n
5    for j = 1 to i
6       print i * j
7 print "Done!"

A given computer will take a discrete amount of time to execute each of the instructions involved with carrying out this algorithm. Say that the actions carried out in step 1 are considered to consume time at most T1, step 2 uses time at most T2, and so forth. In the algorithm above, steps 1, 2 and 7 will only be run once. For a worst-case evaluation, it should be assumed that step 3 will be run as well. Thus the total amount of time to run steps 1-3 and step 7 is: T1 + T2 + T3 + T7. The loops in steps 4, 5 and 6 are trickier to evaluate. The outer loop test in step 4 will execute (n + 1) times, which will consume T4(n + 1) time. The inner loop, on the other hand, is governed by the value of j, which iterates from 1 to i. On the first pass through the outer loop, j iterates from 1 to 1: The inner loop makes one pass, so running the inner loop body (step 6) consumes T6 time, and the inner loop test (step 5) consumes 2T5 time. During the next pass through the outer loop, j iterates from 1 to 2: the inner loop makes two passes, so running the inner loop body (step 6) consumes 2T6 time, and the inner loop test (step 5) consumes 3T5 time.
Altogether, the total time required to run the inner loop body can be expressed as an arithmetic progression: T6 + 2T6 + 3T6 + ... + nT6, which can be factored as T6[1 + 2 + 3 + ... + n] = T6[n(n + 1)/2] = (1/2)(n^2 + n)T6. The total time required to run the inner loop test can be evaluated similarly: 2T5 + 3T5 + 4T5 + ... + (n + 1)T5, which can be factored as T5[2 + 3 + 4 + ... + (n + 1)] = T5[n(n + 3)/2] = (1/2)(n^2 + 3n)T5. Therefore, the total run-time for this algorithm is: f(n) = T1 + T2 + T3 + T7 + (n + 1)T4 + (1/2)(n^2 + n)T6 + (1/2)(n^2 + 3n)T5, which reduces to f(n) = (1/2)(n^2 + n)T6 + (1/2)(n^2 + 3n)T5 + (n + 1)T4 + T1 + T2 + T3 + T7. As a rule-of-thumb, one can assume that the highest-order term in any given function dominates its rate of growth and thus defines its run-time order. In this example, n^2 is the highest-order term, so one can conclude that f(n) = O(n^2). Formally this can be proven as follows: for all n ≥ 1, (1/2)(n^2 + n)T6 + (1/2)(n^2 + 3n)T5 + (n + 1)T4 + T1 + T2 + T3 + T7 ≤ (T6 + 2T5 + 2T4 + T1 + T2 + T3 + T7)n^2, so f(n) = O(n^2). A more elegant approach to analyzing this algorithm would be to declare that [T1..T7] are all equal to one unit of time, in a system of units chosen so that one unit is greater than or equal to the actual times for these steps. This would mean that the algorithm's run-time breaks down as follows: 4 + (n + 1) + (1/2)(n^2 + n) + (1/2)(n^2 + 3n) = n^2 + 3n + 5, which is O(n^2). Growth rate analysis of other resources The methodology of run-time analysis can also be utilized for predicting other growth rates, such as consumption of memory space. As an example, consider the following pseudocode which manages and reallocates memory usage by a program based on the size of a file which that program manages:

while file is still open:
    let n = size of file
    for every 100,000 kilobytes of increase in file size
        double the amount of memory reserved

In this instance, as the file size n increases, memory will be consumed at an exponential growth rate, which is order O(2^n). This is an extremely rapid and most likely unmanageable growth rate for consumption of memory resources. Relevance Algorithm analysis is important in practice because the accidental or unintentional use of an inefficient algorithm can significantly impact system performance. In time-sensitive applications, an algorithm taking too long to run can render its results outdated or useless. An inefficient algorithm can also end up requiring an uneconomical amount of computing power or storage in order to run, again rendering it practically useless. Constant factors Analysis of algorithms typically focuses on the asymptotic performance, particularly at the elementary level, but in practical applications constant factors are important, and real-world data is in practice always limited in size. The limit is typically the size of addressable memory, so on 32-bit machines 2^32 = 4 GiB (greater if segmented memory is used) and on 64-bit machines 2^64 = 16 EiB. Thus given a limited size, an order of growth (time or space) can be replaced by a constant factor, and in this sense all practical algorithms are O(1) for a large enough constant, or for small enough data. This interpretation is primarily useful for functions that grow extremely slowly: (binary) iterated logarithm (log*) is less than 5 for all practical data (2^65536 bits); (binary) log-log (log log n) is less than 6 for virtually all practical data (2^64 bits); and binary log (log n) is less than 64 for virtually all practical data (2^64 bits). An algorithm with non-constant complexity may nonetheless be more efficient than an algorithm with constant complexity on practical data if the overhead of the constant time algorithm results in a larger constant factor, e.g., one may have so long as and . For large data linear or quadratic factors cannot be ignored, but for small data an asymptotically inefficient algorithm may be more efficient.
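As a rough, machine-dependent illustration of that last point (this sketch is not from the article; the input sizes and repeat counts are arbitrary choices), the snippet below times a low-overhead quadratic insertion sort against a plain recursive merge sort in Python. On many machines the quadratic routine wins below some modest input size and loses badly beyond it.

import random
import timeit

def insertion_sort(a):
    # In-place textbook insertion sort: O(n^2) comparisons, but almost no overhead.
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    # Plain top-down merge sort: O(n log n) comparisons, but recursion and allocation overhead.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]

for n in (8, 64, 512, 2048):
    data = [random.random() for _ in range(n)]
    repeats = max(1, 20_000 // n)  # scale repetitions down as n grows
    t_ins = timeit.timeit(lambda: insertion_sort(list(data)), number=repeats)
    t_mrg = timeit.timeit(lambda: merge_sort(list(data)), number=repeats)
    print(f"n={n:5d}  insertion={t_ins:.4f}s  merge={t_mrg:.4f}s")

The exact crossover point depends on the interpreter and hardware, which is exactly why it is a constant-factor effect rather than an asymptotic one.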
This is particularly used in hybrid algorithms, like Timsort, which use an asymptotically efficient algorithm (here merge sort, with time complexity O(n log n)), but switch to an asymptotically inefficient algorithm (here insertion sort, with time complexity O(n^2)) for small data, as the simpler algorithm is faster on small inputs.
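A minimal sketch of that switching strategy, assuming a fixed cutoff of 32 elements and a plain top-down merge sort standing in for Timsort's more elaborate machinery (both assumptions are illustrative, not how library implementations actually work):

CUTOFF = 32  # illustrative threshold; real implementations tune this empirically

def insertion_sort_range(a, lo, hi):
    # Sort a[lo:hi] in place; despite an O(n^2) worst case, it is fast for short slices.
    for i in range(lo + 1, hi):
        key, j = a[i], i - 1
        while j >= lo and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

def hybrid_merge_sort(a, lo=0, hi=None):
    # Merge sort that hands small subarrays to insertion sort, in the spirit of Timsort.
    if hi is None:
        hi = len(a)
    if hi - lo <= CUTOFF:
        insertion_sort_range(a, lo, hi)
        return
    mid = (lo + hi) // 2
    hybrid_merge_sort(a, lo, mid)
    hybrid_merge_sort(a, mid, hi)
    merged, i, j = [], lo, mid
    while i < mid and j < hi:
        if a[i] <= a[j]:
            merged.append(a[i])
            i += 1
        else:
            merged.append(a[j])
            j += 1
    merged.extend(a[i:mid])
    merged.extend(a[j:hi])
    a[lo:hi] = merged

import random
data = [random.randint(0, 999) for _ in range(1_000)]
hybrid_merge_sort(data)
assert data == sorted(data)

In library code the cutoff is chosen empirically, since the break-even point depends on the constant factors of the particular implementation and machine.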
Mathematics
Algorithms
null
2246
https://en.wikipedia.org/wiki/Analgesic
Analgesic
An analgesic drug, also called simply an analgesic, antalgic, pain reliever, or painkiller, is any member of the group of drugs used for pain management. Analgesics are conceptually distinct from anesthetics, which temporarily reduce, and in some instances eliminate, sensation, although analgesia and anesthesia are neurophysiologically overlapping and thus various drugs have both analgesic and anesthetic effects. Analgesic choice is also determined by the type of pain: For neuropathic pain, recent research has suggested that classes of drugs that are not normally considered analgesics, such as tricyclic antidepressants and anticonvulsants may be considered as an alternative. Various analgesics, such as many NSAIDs, are available over the counter in most countries, whereas various others are prescription drugs owing to the substantial risks and high chances of overdose, misuse, and addiction in the absence of medical supervision. Etymology The word analgesic derives from Greek an- (, "without"), álgos (, "pain"), and -ikos (, forming adjectives). Such drugs were usually known as "anodynes" before the 20th century. Classification Analgesics are typically classified based on their mechanism of action. Paracetamol (acetaminophen) Paracetamol, also known as acetaminophen or APAP, is a medication used to treat pain and fever. It is typically used for mild to moderate pain. In combination with opioid pain medication, paracetamol is now used for more severe pain such as cancer pain and after surgery. It is typically used either by mouth or rectally but is also available intravenously. Effects last between two and four hours. Paracetamol is classified as a mild analgesic. Paracetamol is generally safe at recommended doses. NSAIDs Nonsteroidal anti-inflammatory drugs (usually abbreviated to NSAIDs), are a drug class that groups together drugs that decrease pain and lower fever, and, in higher doses, decrease inflammation. The most prominent members of this group of drugs—aspirin, ibuprofen and naproxen, and Diclofenac—are all available over the counter in most countries. COX-2 inhibitors These drugs have been derived from NSAIDs. The cyclooxygenase enzyme inhibited by NSAIDs was discovered to have at least two different versions: COX1 and COX2. Research suggested most of the adverse effects of NSAIDs to be mediated by blocking the COX1 (constitutive) enzyme, with the analgesic effects being mediated by the COX2 (inducible) enzyme. Thus, the COX2 inhibitors were developed to inhibit only the COX2 enzyme (traditional NSAIDs block both versions in general). These drugs (such as rofecoxib, celecoxib, and etoricoxib) are equally effective analgesics when compared with NSAIDs, but cause less gastrointestinal hemorrhage in particular. After widespread adoption of the COX-2 inhibitors, it was discovered that most of the drugs in this class increase the risk of cardiovascular events by 40% on average. This led to the withdrawal of rofecoxib and valdecoxib, and warnings on others. Etoricoxib seems relatively safe, with the risk of thrombotic events similar to that of non-coxib NSAID diclofenac. Opioids Morphine, the archetypal opioid, and other opioids (e.g., codeine, oxycodone, hydrocodone, dihydromorphine, pethidine) all exert a similar influence on the cerebral opioid receptor system. Buprenorphine is a partial agonist of the μ-opioid receptor, and tramadol is a serotonin norepinephrine reuptake inhibitor (SNRI) with weak μ-opioid receptor agonist properties. 
Tramadol is structurally closer to venlafaxine than to codeine and delivers analgesia by not only delivering "opioid-like" effects (through mild agonism of the mu receptor) but also by acting as a weak but fast-acting serotonin releasing agent and norepinephrine reuptake inhibitor. Tapentadol, with some structural similarities to tramadol, presents what is believed to be a novel drug working through two (and possibly three) different modes of action in the fashion of both a traditional opioid and as an SNRI. The effects of serotonin and norepinephrine on pain, while not completely understood, have had causal links established and drugs in the SNRI class are commonly used in conjunction with opioids (especially tapentadol and tramadol) with greater success in pain relief. Dosing of all opioids may be limited by opioid toxicity (confusion, respiratory depression, myoclonic jerks and pinpoint pupils), seizures (tramadol), but opioid-tolerant individuals usually have higher dose ceilings than patients without tolerance. Opioids, while very effective analgesics, may have some unpleasant side-effects. Patients starting morphine may experience nausea and vomiting (generally relieved by a short course of antiemetics such as phenergan). Pruritus (itching) may require switching to a different opioid. Constipation occurs in almost all patients on opioids, and laxatives (lactulose, macrogol-containing or co-danthramer) are typically co-prescribed. When used appropriately, opioids and other central analgesics are safe and effective; however, risks such as addiction and the body's becoming used to the drug (tolerance) can occur. The effect of tolerance means that frequent use of the drug may result in its diminished effect. When safe to do so, the dosage may need to be increased to maintain effectiveness against tolerance, which may be of particular concern regarding patients with chronic pain and requiring an analgesic over long periods. Opioid tolerance is often addressed with opioid rotation therapy in which a patient is routinely switched between two or more non-cross-tolerant opioid medications in order to prevent exceeding safe dosages in the attempt to achieve an adequate analgesic effect. Opioid tolerance should not be confused with opioid-induced hyperalgesia. The symptoms of these two conditions can appear very similar but the mechanism of action is different. Opioid-induced hyperalgesia is when exposure to opioids increases the sensation of pain (hyperalgesia) and can even make non-painful stimuli painful (allodynia). Alcohol Alcohol has biological, mental, and social effects which influence the consequences of using alcohol for pain. Moderate use of alcohol can lessen certain types of pain in certain circumstances. The majority of its analgesic effects come from antagonizing NMDA receptors, similarly to ketamine, thus decreasing the activity of the primary excitatory (signal boosting) neurotransmitter, glutamate. It also functions as an analgesic to a lesser degree by increasing the activity of the primary inhibitory (signal reducing) neurotransmitter, GABA. Attempting to use alcohol to treat pain has also been observed to lead to negative outcomes including excessive drinking and alcohol use disorder. Cannabis Medical cannabis, or medical marijuana, refers to cannabis or its cannabinoids used to treat disease or improve symptoms. 
There is evidence suggesting that cannabis can be used to treat chronic pain and muscle spasms, with some trials indicating improved relief of neuropathic pain over opioids. Combinations Analgesics are frequently used in combination, such as the paracetamol and codeine preparations found in many non-prescription pain relievers. They can also be found in combination with vasoconstrictor drugs such as pseudoephedrine for sinus-related preparations, or with antihistamine drugs for people with allergies. While the use of paracetamol, aspirin, ibuprofen, naproxen, and other NSAIDS concurrently with weak to mid-range opiates (up to about the hydrocodone level) has been said to show beneficial synergistic effects by combating pain at multiple sites of action, several combination analgesic products have been shown to have few efficacy benefits when compared to similar doses of their individual components. Moreover, these combination analgesics can often result in significant adverse events, including accidental overdoses, most often due to confusion that arises from the multiple (and often non-acting) components of these combinations. Alternative medicine There is some evidence that some treatments using alternative medicine can relieve some types of pain more effectively than placebo. The available research concludes that more research would be necessary to better understand the use of alternative medicine. Other drugs Nefopam—a monoamine reuptake inhibitor, and calcium and sodium channel modulator—is also approved for the treatment of moderate to severe pain in some countries. Flupirtine is a centrally acting K+ channel opener with weak NMDA antagonist properties. It was used in Europe for moderate to strong pain, as well as its migraine-treating and muscle-relaxant properties. It has no significant anticholinergic properties, and is believed to be devoid of any activity on dopamine, serotonin, or histamine receptors. It is not addictive, and tolerance usually does not develop. However, tolerance may develop in some cases. Ziconotide, a blocker of potent N-type voltage-gated calcium channels, is administered intrathecally for the relief of severe, usually cancer-related pain. Adjuvants Certain drugs that have been introduced for uses other than analgesics are also used in pain management. Both first-generation (such as amitriptyline) and newer antidepressants (such as duloxetine) are used alongside NSAIDs and opioids for pain involving nerve damage and similar problems. Other agents directly potentiate the effects of analgesics, such as using hydroxyzine, promethazine, carisoprodol, or tripelennamine to increase the pain-killing ability of a given dose of opioid analgesic. Adjuvant analgesics, also called atypical analgesics, include orphenadrine, mexiletine, pregabalin, gabapentin, cyclobenzaprine, hyoscine (scopolamine), and other drugs possessing anticonvulsant, anticholinergic, and/or antispasmodic properties, as well as many other drugs with CNS actions. These drugs are used along with analgesics to modulate and/or modify the action of opioids when used against pain, especially of neuropathic origin. Dextromethorphan has been noted to slow the development of and reverse tolerance to opioids, as well as to exert additional analgesia by acting upon NMDA receptors, as does ketamine. Some analgesics such as methadone and ketobemidone and perhaps piritramide have intrinsic NMDA action. The anticonvulsant carbamazepine is used to treat neuropathic pain. 
Similarly, the gabapentinoids gabapentin and pregabalin are prescribed for neuropathic pain, and phenibut is available without prescription. Gabapentinoids work as α2δ-subunit blockers of voltage-gated calcium channels, and tend to have other mechanisms of action as well. Gabapentinoids are all anticonvulsants, which are most commonly used for neuropathic pain, as their mechanism of action tends to inhibit pain sensation originating from the nervous system. Other uses Topical analgesia is generally recommended to avoid systemic side-effects. Painful joints, for example, may be treated with an ibuprofen- or diclofenac-containing gel (The labeling for topical diclofenac has been updated to warn about drug-induced hepatotoxicity.); capsaicin also is used topically. Lidocaine, an anesthetic, and steroids may be injected into joints for longer-term pain relief. Lidocaine is also used for painful mouth sores and to numb areas for dental work and minor medical procedures. In February 2007 the FDA notified consumers and healthcare professionals of the potential hazards of topical anesthetics entering the bloodstream when applied in large doses to the skin without medical supervision. These topical anesthetics contain anesthetic drugs such as lidocaine, tetracaine, benzocaine, and prilocaine in a cream, ointment, or gel. Uses Topical nonsteroidal anti-inflammatory drugs provide pain relief in common conditions such as muscle sprains and overuse injuries. Since the side effects are also lesser, topical preparations could be preferred over oral medications in these conditions. List of drugs with comparison Research Some novel and investigational analgesics include subtype-selective voltage-gated sodium channel blockers such as funapide and raxatrigine, as well as multimodal agents such as ralfinamide.
Biology and health sciences
Drugs and pharmacology
null
2296
https://en.wikipedia.org/wiki/Adrenal%20gland
Adrenal gland
The adrenal glands (also known as suprarenal glands) are endocrine glands that produce a variety of hormones including adrenaline and the steroids aldosterone and cortisol. They are found above the kidneys. Each gland has an outer cortex which produces steroid hormones and an inner medulla. The adrenal cortex itself is divided into three main zones: the zona glomerulosa, the zona fasciculata and the zona reticularis. The adrenal cortex produces three main types of steroid hormones: mineralocorticoids, glucocorticoids, and androgens. Mineralocorticoids (such as aldosterone) produced in the zona glomerulosa help in the regulation of blood pressure and electrolyte balance. The glucocorticoids cortisol and cortisone are synthesized in the zona fasciculata; their functions include the regulation of metabolism and immune system suppression. The innermost layer of the cortex, the zona reticularis, produces androgens that are converted to fully functional sex hormones in the gonads and other target organs. The production of steroid hormones is called steroidogenesis, and involves a number of reactions and processes that take place in cortical cells. The medulla produces the catecholamines, which function to produce a rapid response throughout the body in stress situations. A number of endocrine diseases involve dysfunctions of the adrenal gland. Overproduction of cortisol leads to Cushing's syndrome, whereas insufficient production is associated with Addison's disease. Congenital adrenal hyperplasia is a genetic disease produced by dysregulation of endocrine control mechanisms. A variety of tumors can arise from adrenal tissue and are commonly found in medical imaging when searching for other diseases. Structure The adrenal glands are located on both sides of the body in the retroperitoneum, above and slightly medial to the kidneys. In humans, the right adrenal gland is pyramidal in shape, whereas the left is semilunar or crescent shaped and somewhat larger. The adrenal glands measure approximately 5 cm in length, 3 cm in width, and up to 1 cm in thickness. Their combined weight in an adult human ranges from 7 to 10 grams. The glands are yellowish in colour. The adrenal glands are surrounded by a fatty capsule and lie within the renal fascia, which also surrounds the kidneys. A weak septum (wall) of connective tissue separates the glands from the kidneys. The adrenal glands are directly below the diaphragm, and are attached to the crura of the diaphragm by the renal fascia. Each adrenal gland has two distinct parts, each with a unique function, the outer adrenal cortex and the inner medulla, both of which produce hormones. Adrenal cortex The adrenal cortex is the outer region and also the largest part of an adrenal gland. It is divided into three separate zones: zona glomerulosa, zona fasciculata and zona reticularis. Each zone is responsible for producing specific hormones. The adrenal cortex is the outermost layer of the adrenal gland. Within the cortex are three layers, called "zones". When viewed under a microscope each layer has a distinct appearance, and each has a different function. The adrenal cortex is devoted to production of hormones, namely aldosterone, cortisol, and androgens. Zona glomerulosa The outermost zone of the adrenal cortex is the zona glomerulosa. It lies immediately under the fibrous capsule of the gland. Cells in this layer form oval groups, separated by thin strands of connective tissue from the fibrous capsule of the gland and carry wide capillaries. 
This layer is the main site for production of aldosterone, a mineralocorticoid, by the action of the enzyme aldosterone synthase. Aldosterone plays an important role in the long-term regulation of blood pressure. Zona fasciculata The zona fasciculata is situated between the zona glomerulosa and zona reticularis. Cells in this layer are responsible for producing glucocorticoids such as cortisol. It is the largest of the three layers, accounting for nearly 80% of the volume of the cortex. In the zona fasciculata, cells are arranged in columns radially oriented towards the medulla. Cells contain numerous lipid droplets, abundant mitochondria and a complex smooth endoplasmic reticulum. Zona reticularis The innermost cortical layer, the zona reticularis, lies directly adjacent to the medulla. It produces androgens, mainly dehydroepiandrosterone (DHEA), DHEA sulfate (DHEA-S), and androstenedione (the precursor to testosterone) in humans. Its small cells form irregular cords and clusters, separated by capillaries and connective tissue. The cells contain relatively small quantities of cytoplasm and lipid droplets, and sometimes display brown lipofuscin pigment. Medulla The adrenal medulla is at the centre of each adrenal gland, and is surrounded by the adrenal cortex. The chromaffin cells of the medulla are the body's main source of the catecholamines, such as adrenaline and noradrenaline, released by the medulla. Approximately 20% noradrenaline (norepinephrine) and 80% adrenaline (epinephrine) are secreted here. The adrenal medulla is driven by the sympathetic nervous system via preganglionic fibers originating in the thoracic spinal cord, from vertebrae T5–T11. Because it is innervated by preganglionic nerve fibers, the adrenal medulla can be considered as a specialized sympathetic ganglion. Unlike other sympathetic ganglia, however, the adrenal medulla lacks distinct synapses and releases its secretions directly into the blood. Blood supply The adrenal glands have one of the greatest blood supply rates per gram of tissue of any organ: up to 60 small arteries may enter each gland. Three arteries usually supply each adrenal gland: The superior suprarenal artery, a branch of the inferior phrenic artery The middle suprarenal artery, a direct branch of the abdominal aorta The inferior suprarenal artery, a branch of the renal artery These blood vessels supply a network of small arteries within the capsule of the adrenal glands. Thin strands of the capsule enter the glands, carrying blood to them. Venous blood is drained from the glands by the suprarenal veins, usually one for each gland: The right suprarenal vein drains into the inferior vena cava. The left suprarenal vein drains into the left renal vein or the left inferior phrenic vein. The central adrenomedullary vein, in the adrenal medulla, is an unusual type of blood vessel. Its structure is different from the other veins in that the smooth muscle in its tunica media (the middle layer of the vessel) is arranged in conspicuous, longitudinally oriented bundles. Variability The adrenal glands may not develop at all, or may be fused in the midline behind the aorta. These are associated with other congenital abnormalities, such as failure of the kidneys to develop, or fused kidneys. The gland may develop with a partial or complete absence of the cortex, or may develop in an unusual location. Function The adrenal gland secretes a number of different hormones which are metabolised by enzymes either within the gland or in other parts of the body. 
These hormones are involved in a number of essential biological functions. Corticosteroids Corticosteroids are a group of steroid hormones produced from the cortex of the adrenal gland, from which they are named. Mineralocorticoids such as aldosterone regulate salt ("mineral") balance and blood pressure Glucocorticoids such as cortisol influence metabolism rates of proteins, fats and sugars ("glucose"). Androgens such as dehydroepiandrosterone. Mineralocorticoids The adrenal gland produces aldosterone, a mineralocorticoid, which is important in the regulation of salt ("mineral") balance and blood volume. In the kidneys, aldosterone acts on the distal convoluted tubules and the collecting ducts by increasing the reabsorption of sodium and the excretion of both potassium and hydrogen ions. Aldosterone is responsible for the reabsorption of about 2% of filtered glomerular filtrate. Sodium retention is also a response of the distal colon and sweat glands to aldosterone receptor stimulation. Angiotensin II and extracellular potassium are the two main regulators of aldosterone production. The amount of sodium present in the body affects the extracellular volume, which in turn influences blood pressure. Therefore, the effects of aldosterone in sodium retention are important for the regulation of blood pressure. Glucocorticoids Cortisol is the main glucocorticoid in humans. In species that do not create cortisol, this role is played by corticosterone instead. Glucocorticoids have many effects on metabolism. As their name suggests, they increase the circulating level of glucose. This is the result of an increase in the mobilization of amino acids from protein and the stimulation of synthesis of glucose from these amino acids in the liver. In addition, they increase the levels of free fatty acids, which cells can use as an alternative to glucose to obtain energy. Glucocorticoids also have effects unrelated to the regulation of blood sugar levels, including the suppression of the immune system and a potent anti-inflammatory effect. Cortisol reduces the capacity of osteoblasts to produce new bone tissue and decreases the absorption of calcium in the gastrointestinal tract. The adrenal gland secretes a basal level of cortisol but can also produce bursts of the hormone in response to adrenocorticotropic hormone (ACTH) from the anterior pituitary. Cortisol is not evenly released during the day – its concentrations in the blood are highest in the early morning and lowest in the evening as a result of the circadian rhythm of ACTH secretion. Cortisone is an inactive product of the action of the enzyme 11β-HSD on cortisol. The reaction catalyzed by 11β-HSD is reversible, which means that it can turn administered cortisone into cortisol, the biologically active hormone. Formation All corticosteroid hormones share cholesterol as a common precursor. Therefore, the first step in steroidogenesis is cholesterol uptake or synthesis. Cells that produce steroid hormones can acquire cholesterol through two paths. The main source is through dietary cholesterol transported via the blood as cholesterol esters within low density lipoproteins (LDL). LDL enters the cells through receptor-mediated endocytosis. The other source of cholesterol is synthesis in the cell's endoplasmic reticulum. Synthesis can compensate when LDL levels are abnormally low. In the lysosome, cholesterol esters are converted to free cholesterol, which is then used for steroidogenesis or stored in the cell. 
The initial part of conversion of cholesterol into steroid hormones involves a number of enzymes of the cytochrome P450 family that are located in the inner membrane of mitochondria. Transport of cholesterol from the outer to the inner membrane is facilitated by steroidogenic acute regulatory protein and is the rate-limiting step of steroid synthesis. The layers of the adrenal gland differ by function, with each layer having distinct enzymes that produce different hormones from a common precursor. The first enzymatic step in the production of all steroid hormones is cleavage of the cholesterol side chain, a reaction that forms pregnenolone as a product and is catalyzed by the enzyme P450scc, also known as cholesterol desmolase. After the production of pregnenolone, specific enzymes of each cortical layer further modify it. Enzymes involved in this process include both mitochondrial and microsomal P450s and hydroxysteroid dehydrogenases. Usually a number of intermediate steps in which pregnenolone is modified several times are required to form the functional hormones. Enzymes that catalyze reactions in these metabolic pathways are involved in a number of endocrine diseases. For example, the most common form of congenital adrenal hyperplasia develops as a result of deficiency of 21-hydroxylase, an enzyme involved in an intermediate step of cortisol production. Regulation Glucocorticoids are under the regulatory influence of the hypothalamic–pituitary–adrenal axis (HPA) axis. Glucocorticoid synthesis is stimulated by adrenocorticotropic hormone (ACTH), a hormone released into the bloodstream by the anterior pituitary. In turn, production of ACTH is stimulated by the presence of corticotropin-releasing hormone (CRH), which is released by neurons of the hypothalamus. ACTH acts on the adrenal cells first by increasing the levels of StAR within the cells, and then of all steroidogenic P450 enzymes. The HPA axis is an example of a negative feedback system, in which cortisol itself acts as a direct inhibitor of both CRH and ACTH synthesis. The HPA axis also interacts with the immune system through increased secretion of ACTH at the presence of certain molecules of the inflammatory response. Mineralocorticoid secretion is regulated mainly by the renin–angiotensin–aldosterone system (RAAS), the concentration of potassium, and to a lesser extent the concentration of ACTH. Sensors of blood pressure in the juxtaglomerular apparatus of the kidneys release the enzyme renin into the blood, which starts a cascade of reactions that lead to formation of angiotensin II. Angiotensin receptors in cells of the zona glomerulosa recognize the substance, and upon binding they stimulate the release of aldosterone. Androgens Cells in zona reticularis of the adrenal glands produce male sex hormones, or androgens, the most important of which is DHEA. In general, these hormones do not have an overall effect in the male body, and are converted to more potent androgens such as testosterone and DHT or to estrogens (female sex hormones) in the gonads, acting in this way as a metabolic intermediate. Catecholamines Primarily referred to in the United States as epinephrine and norepinephrine, adrenaline and noradrenaline are catecholamines, water-soluble compounds that have a structure made of a catechol group and an amine group. The adrenal glands are responsible for most of the adrenaline that circulates in the body, but only for a small amount of circulating noradrenaline. 
These hormones are released by the adrenal medulla, which contains a dense network of blood vessels. Adrenaline and noradrenaline act by interacting with adrenoreceptors throughout the body, with effects that include an increase in blood pressure and heart rate. Actions of adrenaline and noradrenaline are responsible for the fight or flight response, characterised by a quickening of breathing and heart rate, an increase in blood pressure, and constriction of blood vessels in many parts of the body. Formation Catecholamines are produced in chromaffin cells in the medulla of the adrenal gland, from tyrosine, a non-essential amino acid derived from food or produced from phenylalanine in the liver. The enzyme tyrosine hydroxylase converts tyrosine to L-DOPA in the first step of catecholamine synthesis. L-DOPA is then converted to dopamine before it can be turned into noradrenaline. In the cytosol, noradrenaline is converted to epinephrine by the enzyme phenylethanolamine N-methyltransferase (PNMT) and stored in granules. Glucocorticoids produced in the adrenal cortex stimulate the synthesis of catecholamines by increasing the levels of tyrosine hydroxylase and PNMT. Catecholamine release is stimulated by the activation of the sympathetic nervous system. Splanchnic nerves of the sympathetic nervous system innervate the medulla of the adrenal gland. When activated, it evokes the release of catecholamines from the storage granules by stimulating the opening of calcium channels in the cell membrane. Gene and protein expression The human genome includes approximately 20,000 protein coding genes and 70% of these genes are expressed in the normal adult adrenal glands. Only some 250 genes are more specifically expressed in the adrenal glands compared to other organs and tissues. The adrenal-gland-specific genes with the highest level of expression include members of the cytochrome P450 superfamily of enzymes. Corresponding proteins are expressed in the different compartments of the adrenal gland, such as CYP11A1, HSD3B2 and FDX1 involved in steroid hormone synthesis and expressed in cortical cell layers, and PNMT and DBH involved in noradrenaline and adrenaline synthesis and expressed in the medulla. Development The adrenal glands are composed of two heterogenous types of tissue. In the center is the adrenal medulla, which produces adrenaline and noradrenaline and releases them into the bloodstream, as part of the sympathetic nervous system. Surrounding the medulla is the cortex, which produces a variety of steroid hormones. These tissues come from different embryological precursors and have distinct prenatal development paths. The cortex of the adrenal gland is derived from mesoderm, whereas the medulla is derived from the neural crest, which is of ectodermal origin. The adrenal glands in a newborn baby are much larger as a proportion of the body size than in an adult. For example, at age three months the glands are four times the size of the kidneys. The size of the glands decreases relatively after birth, mainly because of shrinkage of the cortex. The cortex, which almost completely disappears by age 1, develops again from age 4–5. The glands weigh about at birth and develop to an adult weight of about each. In a fetus the glands are first detectable after the sixth week of development. Cortex Adrenal cortex tissue is derived from the intermediate mesoderm. 
It first appears 33 days after fertilisation, shows steroid hormone production capabilities by the eighth week and undergoes rapid growth during the first trimester of pregnancy. The fetal adrenal cortex is different from its adult counterpart, as it is composed of two distinct zones: the inner "fetal" zone, which carries most of the hormone-producing activity, and the outer "definitive" zone, which is in a proliferative phase. The fetal zone produces large amounts of adrenal androgens (male sex hormones) that are used by the placenta for estrogen biosynthesis. Cortical development of the adrenal gland is regulated mostly by ACTH, a hormone produced by the pituitary gland that stimulates cortisol synthesis. During midgestation, the fetal zone occupies most of the cortical volume and produces 100–200 mg/day of DHEA-S, an androgen and precursor of both androgens and estrogens (female sex hormones). Adrenal hormones, especially glucocorticoids such as cortisol, are essential for prenatal development of organs, particularly for the maturation of the lungs. The adrenal gland decreases in size after birth because of the rapid disappearance of the fetal zone, with a corresponding decrease in androgen secretion. Adrenarche During early childhood, androgen synthesis and secretion remain low, but several years before puberty (from 6–8 years of age) changes occur in both anatomical and functional aspects of cortical androgen production that lead to increased secretion of the steroids DHEA and DHEA-S. These changes are part of a process called adrenarche, which has only been described in humans and some other primates. Adrenarche is independent of ACTH or gonadotropins and correlates with a progressive thickening of the zona reticularis layer of the cortex. Functionally, adrenarche provides a source of androgens for the development of axillary and pubic hair before the beginning of puberty. Medulla The adrenal medulla is derived from neural crest cells, which come from the ectoderm layer of the embryo. These cells migrate from their initial position and aggregate in the vicinity of the dorsal aorta, a primitive blood vessel, which activates the differentiation of these cells through the release of proteins known as BMPs. These cells then undergo a second migration from the dorsal aorta to form the adrenal medulla and other organs of the sympathetic nervous system. Cells of the adrenal medulla are called chromaffin cells because they contain granules that stain with chromium salts, a characteristic not present in all sympathetic organs. Glucocorticoids produced in the adrenal cortex were once thought to be responsible for the differentiation of chromaffin cells. More recent research suggests that BMP-4 secreted in adrenal tissue is mainly responsible for this, and that glucocorticoids only play a role in the subsequent development of the cells. Clinical significance The normal function of the adrenal gland may be impaired by conditions such as infections, tumors, genetic disorders and autoimmune diseases, or as a side effect of medical therapy. These disorders affect the gland either directly (as with infections or autoimmune diseases) or as a result of the dysregulation of hormone production (as in some types of Cushing's syndrome) leading to an excess or insufficiency of adrenal hormones and the related symptoms. Corticosteroid overproduction Cushing's syndrome Cushing's syndrome is the manifestation of glucocorticoid excess.
It can be the result of a prolonged treatment with glucocorticoids or be caused by an underlying disease which produces alterations in the HPA axis or the production of cortisol. Causes can be further classified into ACTH-dependent or ACTH-independent. The most common cause of endogenous Cushing's syndrome is a pituitary adenoma which causes an excessive production of ACTH. The disease produces a wide variety of signs and symptoms which include obesity, diabetes, increased blood pressure, excessive body hair (hirsutism), osteoporosis, depression, and most distinctively, stretch marks in the skin, caused by its progressive thinning. Primary aldosteronism When the zona glomerulosa produces excess aldosterone, the result is primary aldosteronism. Causes for this condition are bilateral hyperplasia (excessive tissue growth) of the glands, or aldosterone-producing adenomas (a condition called Conn's syndrome). Primary aldosteronism produces hypertension and electrolyte imbalance, increasing potassium depletion sodium retention. Adrenal insufficiency Adrenal insufficiency (the deficiency of glucocorticoids) occurs in about 5 in 10,000 in the general population. Diseases classified as primary adrenal insufficiency (including Addison's disease and genetic causes) directly affect the adrenal cortex. If a problem that affects the hypothalamic–pituitary–adrenal axis arises outside the gland, it is a secondary adrenal insufficiency. Addison's disease Addison's disease refers to primary hypoadrenalism, which is a deficiency in glucocorticoid and mineralocorticoid production by the adrenal gland. In the Western world, Addison's disease is most commonly an autoimmune condition, in which the body produces antibodies against cells of the adrenal cortex. Worldwide, the disease is more frequently caused by infection, especially from tuberculosis. A distinctive feature of Addison's disease is hyperpigmentation of the skin, which presents with other nonspecific symptoms such as fatigue. A complication seen in untreated Addison's disease and other types of primary adrenal insufficiency is the adrenal crisis, a medical emergency in which low glucocorticoid and mineralocorticoid levels result in hypovolemic shock and symptoms such as vomiting and fever. An adrenal crisis can progressively lead to stupor and coma. The management of adrenal crises includes the application of hydrocortisone injections. Secondary adrenal insufficiency In secondary adrenal insufficiency, a dysfunction of the hypothalamic–pituitary–adrenal axis leads to decreased stimulation of the adrenal cortex. Apart from suppression of the axis by glucocorticoid therapy, the most common cause of secondary adrenal insufficiency are tumors that affect the production of adrenocorticotropic hormone (ACTH) by the pituitary gland. This type of adrenal insufficiency usually does not affect the production of mineralocorticoids, which are under regulation of the renin–angiotensin system instead. Congenital adrenal hyperplasia Congenital adrenal hyperplasia is a family of congenital diseases in which mutations of enzymes that produce steroid hormones result in a glucocorticoid deficiency and malfunction of the negative feedback loop of the HPA axis. In the HPA axis, cortisol (a glucocorticoid) inhibits the release of CRH and ACTH, hormones that in turn stimulate corticosteroid synthesis. As cortisol cannot be synthesized, these hormones are released in high quantities and stimulate production of other adrenal steroids instead. 
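The feedback failure just described (low cortisol, unchecked CRH/ACTH, overproduction of other adrenal steroids) can be summarised as a toy qualitative model. This is only an illustrative sketch of the logic of the loop, not a physiological simulation; all labels and branches are assumptions made for illustration.

```python
# Toy qualitative sketch of the HPA negative-feedback logic described above.
# All labels and branches are illustrative assumptions, not clinical values.
def hpa_response(cortisol_is_low: bool, synthesis_blocked: bool) -> dict:
    # Low cortisol removes the inhibition of CRH/ACTH release.
    acth = "high" if cortisol_is_low else "normal"
    if synthesis_blocked:
        # ACTH drive cannot raise cortisol, so precursors are shunted into
        # other adrenal steroids (e.g. androgens in 21-hydroxylase deficiency).
        return {"ACTH": acth, "cortisol": "low", "other adrenal steroids": "high"}
    return {"ACTH": acth, "cortisol": "restored", "other adrenal steroids": "normal"}

print(hpa_response(cortisol_is_low=True, synthesis_blocked=True))
```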
The most common form of congenital adrenal hyperplasia is due to 21-hydroxylase deficiency. 21-hydroxylase is necessary for production of both mineralocorticoids and glucocorticoids, but not androgens. Therefore, ACTH stimulation of the adrenal cortex induces the release of excessive amounts of adrenal androgens, which can lead to the development of ambiguous genitalia and secondary sex characteristics. Adrenal tumors Adrenal tumors are commonly found as incidentalomas, unexpected asymptomatic tumors found during medical imaging. They are seen in around 3.4% of CT scans, and in most cases they are benign adenomas. Adrenal carcinomas are very rare, with an incidence of 1 case per million per year. Pheochromocytomas are tumors of the adrenal medulla that arise from chromaffin cells. They can produce a variety of nonspecific symptoms, which include headaches, sweating, anxiety and palpitations. Common signs include hypertension and tachycardia. Surgery, especially adrenal laparoscopy, is the most common treatment for small pheochromocytomas. History Bartolomeo Eustachi, an Italian anatomist, is credited with the first description of the adrenal glands in 1563–4. However, these publications were part of the papal library and did not receive public attention, which was first received with Caspar Bartholin the Elder's illustrations in 1611. The adrenal glands are named for their location relative to the kidneys. The term "adrenal" comes from Latin ad, "near", and ren, "kidney". Similarly, "suprarenal", as termed by Jean Riolan the Younger in 1629, is derived from the Latin supra, "above", and ren, "kidney", as well. The suprarenal nature of the glands was not truly accepted until the 19th century, as anatomists clarified the ductless nature of the glands and their likely secretory role – prior to this, there was some debate as to whether the glands were indeed suprarenal or part of the kidney. One of the most recognized works on the adrenal glands came in 1855 with the publication of On the Constitutional and Local Effects of Disease of the Suprarenal Capsule, by the English physician Thomas Addison. In his monography, Addison described what the French physician George Trousseau would later name Addison's disease, an eponym still used today for a condition of adrenal insufficiency and its related clinical manifestations. In 1894, English physiologists George Oliver and Edward Schafer studied the action of adrenal extracts and observed their pressor effects. In the following decades several physicians experimented with extracts from the adrenal cortex to treat Addison's disease. Edward Calvin Kendall, Philip Hench and Tadeusz Reichstein were then awarded the 1950 Nobel Prize in Physiology or Medicine for their discoveries on the structure and effects of the adrenal hormones.
Biology and health sciences
Animal: General
null
2308
https://en.wikipedia.org/wiki/Actinide
Actinide
The actinide () or actinoid () series encompasses at least the 14 metallic chemical elements in the 5f series, with atomic numbers from 89 to 102, actinium through nobelium. Number 103, lawrencium, is also generally included despite being part of the 6d transition series. The actinide series derives its name from the first element in the series, actinium. The informal chemical symbol An is used in general discussions of actinide chemistry to refer to any actinide. The 1985 IUPAC Red Book recommends that actinoid be used rather than actinide, since the suffix -ide normally indicates a negative ion. However, owing to widespread current use, actinide is still allowed. Since actinoid literally means actinium-like (cf. humanoid or android), it has been argued for semantic reasons that actinium cannot logically be an actinoid, but IUPAC acknowledges its inclusion based on common usage. Actinium through nobelium are f-block elements, while lawrencium is a d-block element and a transition metal. The series mostly corresponds to the filling of the 5f electron shell, although as isolated atoms in the ground state many have anomalous configurations involving the filling of the 6d shell due to interelectronic repulsion. In comparison with the lanthanides, also mostly f-block elements, the actinides show much more variable valence. They all have very large atomic and ionic radii and exhibit an unusually large range of physical properties. While actinium and the late actinides (from curium onwards) behave similarly to the lanthanides, the elements thorium, protactinium, and uranium are much more similar to transition metals in their chemistry, with neptunium, plutonium, and americium occupying an intermediate position. All actinides are radioactive and release energy upon radioactive decay; naturally occurring uranium and thorium, and synthetically produced plutonium are the most abundant actinides on Earth. These have been used in nuclear reactors, and uranium and plutonium are critical elements of nuclear weapons. Uranium and thorium also have diverse current or historical uses, and americium is used in the ionization chambers of most modern smoke detectors. Of the actinides, primordial thorium and uranium occur naturally in substantial quantities. The radioactive decay of uranium produces transient amounts of actinium and protactinium, and atoms of neptunium and plutonium are occasionally produced from transmutation reactions in uranium ores. The other actinides are purely synthetic elements. Nuclear weapons tests have released at least six actinides heavier than plutonium into the environment; analysis of debris from a 1952 hydrogen bomb explosion showed the presence of americium, curium, berkelium, californium, einsteinium and fermium. In presentations of the periodic table, the f-block elements are customarily shown as two additional rows below the main body of the table. This convention is entirely a matter of aesthetics and formatting practicality; a rarely used wide-formatted periodic table inserts the 4f and 5f series in their proper places, as parts of the table's sixth and seventh rows (periods). Actinides Discovery, isolation and synthesis Like the lanthanides, the actinides form a family of elements with similar properties. Within the actinides, there are two overlapping groups: transuranium elements, which follow uranium in the periodic table; and transplutonium elements, which follow plutonium. 
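The series boundaries and the two overlapping subgroups described above can be captured in a small data structure. The sketch below is purely illustrative: the element names and atomic numbers are standard facts, while the helper functions are hypothetical conveniences, not something defined in the article.

```python
# Illustrative sketch of the actinide series (Z = 89-103) and the two
# overlapping subgroups described above.
ACTINIDES = {
    89: "actinium", 90: "thorium", 91: "protactinium", 92: "uranium",
    93: "neptunium", 94: "plutonium", 95: "americium", 96: "curium",
    97: "berkelium", 98: "californium", 99: "einsteinium", 100: "fermium",
    101: "mendelevium", 102: "nobelium", 103: "lawrencium",
}

def is_transuranium(z: int) -> bool:
    """Transuranium elements follow uranium (Z = 92)."""
    return z > 92

def is_transplutonium(z: int) -> bool:
    """Transplutonium elements follow plutonium (Z = 94)."""
    return z > 94

if __name__ == "__main__":
    for z, name in ACTINIDES.items():
        tags = [t for t, f in (("transuranium", is_transuranium),
                               ("transplutonium", is_transplutonium)) if f(z)]
        print(f"{z:3d} {name:13s} {' '.join(tags)}")
```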
Compared to the lanthanides, which (except for promethium) are found in nature in appreciable quantities, most actinides are rare. Most do not occur in nature, and of those that do, only thorium and uranium do so in more than trace quantities. The most abundant or easily synthesized actinides are uranium and thorium, followed by plutonium, americium, actinium, protactinium, neptunium, and curium. The existence of transuranium elements was suggested in 1934 by Enrico Fermi, based on his experiments. However, even though four actinides were known by that time, it was not yet understood that they formed a family similar to lanthanides. The prevailing view that dominated early research into transuranics was that they were regular elements in the 7th period, with thorium, protactinium and uranium corresponding to 6th-period hafnium, tantalum and tungsten, respectively. Synthesis of transuranics gradually undermined this point of view. By 1944, an observation that curium failed to exhibit oxidation states above 4 (whereas its supposed 6th period homolog, platinum, can reach oxidation state of 6) prompted Glenn Seaborg to formulate an "actinide hypothesis". Studies of known actinides and discoveries of further transuranic elements provided more data in support of this position, but the phrase "actinide hypothesis" (the implication being that a "hypothesis" is something that has not been decisively proven) remained in active use by scientists through the late 1950s. At present, there are two major methods of producing isotopes of transplutonium elements: (1) irradiation of the lighter elements with neutrons; (2) irradiation with accelerated charged particles. The first method is more important for applications, as only neutron irradiation using nuclear reactors allows the production of sizeable amounts of synthetic actinides; however, it is limited to relatively light elements. The advantage of the second method is that elements heavier than plutonium, as well as neutron-deficient isotopes, can be obtained, which are not formed during neutron irradiation. In 1962–1966, there were attempts in the United States to produce transplutonium isotopes using a series of six underground nuclear explosions. Small samples of rock were extracted from the blast area immediately after the test to study the explosion products, but no isotopes with mass number greater than 257 could be detected, despite predictions that such isotopes would have relatively long half-lives of α-decay. This non-observation was attributed to spontaneous fission owing to the large speed of the products and to other decay channels, such as neutron emission and nuclear fission. From actinium to uranium Uranium and thorium were the first actinides discovered. Uranium was identified in 1789 by the German chemist Martin Heinrich Klaproth in pitchblende ore. He named it after the planet Uranus, which had been discovered eight years earlier. Klaproth was able to precipitate a yellow compound (likely sodium diuranate) by dissolving pitchblende in nitric acid and neutralizing the solution with sodium hydroxide. He then reduced the obtained yellow powder with charcoal, and extracted a black substance that he mistook for metal. Sixty years later, the French scientist Eugène-Melchior Péligot identified it as uranium oxide. He also isolated the first sample of uranium metal by heating uranium tetrachloride with metallic potassium. 
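Péligot's reduction of uranium tetrachloride with metallic potassium, mentioned just above, corresponds to the standard metallothermic equation below; the equation is reconstructed from basic chemistry rather than quoted from the source.

```latex
\mathrm{UCl_4} + 4\,\mathrm{K} \longrightarrow \mathrm{U} + 4\,\mathrm{KCl}
```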
The atomic mass of uranium was then calculated as 120, but Dmitri Mendeleev in 1872 corrected it to 240 using his periodicity laws. This value was confirmed experimentally in 1882 by K. Zimmerman. Thorium oxide was discovered by Friedrich Wöhler in the mineral thorianite, which was found in Norway (1827). Jöns Jacob Berzelius characterized this material in more detail in 1828. By reduction of thorium tetrachloride with potassium, he isolated the metal and named it thorium after the Norse god of thunder and lightning Thor. The same isolation method was later used by Péligot for uranium. Actinium was discovered in 1899 by André-Louis Debierne, an assistant of Marie Curie, in the pitchblende waste left after removal of radium and polonium. He described the substance (in 1899) as similar to titanium and (in 1900) as similar to thorium. The discovery of actinium by Debierne was however questioned in 1971 and 2000, arguing that Debierne's publications in 1904 contradicted his earlier work of 1899–1900. This view instead credits the 1902 work of Friedrich Oskar Giesel, who discovered a radioactive element named emanium that behaved similarly to lanthanum. The name actinium comes from the , meaning beam or ray. This metal was discovered not by its own radiation but by the radiation of the daughter products. Owing to the close similarity of actinium and lanthanum and low abundance, pure actinium could only be produced in 1950. The term actinide was probably introduced by Victor Goldschmidt in 1937. Protactinium was possibly isolated in 1900 by William Crookes. It was first identified in 1913, when Kasimir Fajans and Oswald Helmuth Göhring encountered the short-lived isotope 234mPa (half-life 1.17 minutes) during their studies of the 238U decay chain. They named the new element brevium (from Latin brevis meaning brief); the name was changed to protoactinium (from Greek πρῶτος + ἀκτίς meaning "first beam element") in 1918 when two groups of scientists, led by the Austrian Lise Meitner and Otto Hahn of Germany and Frederick Soddy and John Arnold Cranston of Great Britain, independently discovered the much longer-lived 231Pa. The name was shortened to protactinium in 1949. This element was little characterized until 1960, when Alfred Maddock and his co-workers in the U.K. isolated 130 grams of protactinium from 60 tonnes of waste left after extraction of uranium from its ore. Neptunium and above Neptunium (named for the planet Neptune, the next planet out from Uranus, after which uranium was named) was discovered by Edwin McMillan and Philip H. Abelson in 1940 in Berkeley, California. They produced the 239Np isotope (half-life 2.4 days) by bombarding uranium with slow neutrons. It was the first transuranium element produced synthetically. Transuranium elements do not occur in sizeable quantities in nature and are commonly synthesized via nuclear reactions conducted with nuclear reactors. For example, under irradiation with reactor neutrons, uranium-238 partially converts to plutonium-239: This synthesis reaction was used by Fermi and his collaborators in their design of the reactors located at the Hanford Site, which produced significant amounts of plutonium-239 for the nuclear weapons of the Manhattan Project and the United States' post-war nuclear arsenal. Actinides with the highest mass numbers are synthesized by bombarding uranium, plutonium, curium and californium with ions of nitrogen, oxygen, carbon, neon or boron in a particle accelerator. 
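The reactor conversion of uranium-238 into plutonium-239 mentioned above proceeds, in the standard textbook description, by a neutron capture followed by two β−-decays. The chain and the half-lives shown are standard nuclear data supplied here for illustration, not quoted from this article.

```latex
{}^{238}_{92}\mathrm{U} + {}^{1}_{0}n \longrightarrow {}^{239}_{92}\mathrm{U}
  \xrightarrow{\ \beta^-,\ \approx 23.5\ \text{min}\ } {}^{239}_{93}\mathrm{Np}
  \xrightarrow{\ \beta^-,\ \approx 2.36\ \text{d}\ } {}^{239}_{94}\mathrm{Pu}
```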
In the accelerator route, for example, nobelium was produced by bombarding uranium-238 with neon-22: 238U + 22Ne → 256No + 4 n. The first isotopes of transplutonium elements, americium-241 and curium-242, were synthesized in 1944 by Glenn T. Seaborg, Ralph A. James and Albert Ghiorso. Curium-242 was obtained by bombarding plutonium-239 with 32-MeV α-particles: 239Pu + 4He → 242Cm + n. The americium-241 and curium-242 isotopes were also produced by irradiating plutonium in a nuclear reactor. The latter element was named after Marie Curie and her husband Pierre, who are noted for discovering radium and for their work in radioactivity. Bombarding curium-242 with α-particles resulted in the californium isotope 245Cf in 1950, and a similar procedure yielded berkelium-243 from americium-241 in 1949. The new elements were named after Berkeley, California, by analogy with its lanthanide homologue terbium, which was named after the village of Ytterby in Sweden. In 1945, B. B. Cunningham obtained the first bulk chemical compound of a transplutonium element, namely americium hydroxide. Over the next few years, milligram quantities of americium and microgram amounts of curium were accumulated, which allowed production of isotopes of berkelium and californium. Sizeable amounts of these elements were produced in 1958, and the first californium compound (0.3 μg of CfOCl) was obtained in 1960 by B. B. Cunningham and J. C. Wallmann. Einsteinium and fermium were identified in 1952–1953 in the fallout from the "Ivy Mike" nuclear test (1 November 1952), the first successful test of a hydrogen bomb. Instantaneous exposure of uranium-238 to the large neutron flux from the explosion produced heavy isotopes of uranium, which underwent a series of beta decays to nuclides such as einsteinium-253 and fermium-255. The discovery of the new elements and the new data on neutron capture were initially kept secret on the orders of the US military until 1955 due to Cold War tensions. Nevertheless, the Berkeley team were able to prepare einsteinium and fermium by civilian means, through the neutron bombardment of plutonium-239, and published this work in 1954 with the disclaimer that these were not the first studies carried out on those elements. The "Ivy Mike" studies were declassified and published in 1955. The first significant (submicrogram) amounts of einsteinium were produced in 1961 by Cunningham and colleagues, but this has still not been achieved for fermium. The first isotope of mendelevium, 256Md (half-life 87 min), was synthesized by Albert Ghiorso, Glenn T. Seaborg, Gregory Robert Choppin, Bernard G. Harvey and Stanley Gerald Thompson when they bombarded a 253Es target with alpha particles in the 60-inch cyclotron of the Berkeley Radiation Laboratory; this was the first isotope of any element to be synthesized one atom at a time. There were several attempts to obtain isotopes of nobelium by Swedish (1957) and American (1958) groups, but the first reliable result was the synthesis of 256No by the Russian group of Georgy Flyorov in 1965, as acknowledged by IUPAC in 1992. In their experiments, Flyorov et al. bombarded uranium-238 with neon-22. In 1961, Ghiorso et al. obtained the first isotope of lawrencium by irradiating californium (mostly californium-252) with boron-10 and boron-11 ions. The mass number of this isotope was not clearly established (possibly 258 or 259) at the time. In 1965, 256Lr was synthesized by Flyorov et al. from 243Am and 18O. 
Thus IUPAC recognized the nuclear physics teams at Dubna and Berkeley as the co-discoverers of lawrencium. Isotopes Thirty-four isotopes of actinium and eight excited isomeric states of some of its nuclides are known, ranging in mass number from 203 to 236. Three isotopes, 225Ac, 227Ac and 228Ac, were found in nature and the others were produced in the laboratory; only the three natural isotopes are used in applications. Actinium-225 is a member of the radioactive neptunium series; it was first discovered in 1947 as a decay product of uranium-233 and is an α-emitter with a half-life of 10 days. Actinium-225 is less available than actinium-228, but is more promising in radiotracer applications. Actinium-227 (half-life 21.77 years) occurs in all uranium ores, but in small quantities: one gram of uranium (in radioactive equilibrium) contains only about 2×10⁻¹⁰ gram of 227Ac. Actinium-228 is a member of the radioactive thorium series formed by the decay of 228Ra; it is a β− emitter with a half-life of 6.15 hours. One tonne of thorium contains about 5×10⁻⁸ gram of 228Ac. It was discovered by Otto Hahn in 1906. There are 32 known isotopes of thorium, ranging in mass number from 207 to 238. Of these, the longest-lived is 232Th, whose half-life of about 1.4×10¹⁰ years means that it still exists in nature as a primordial nuclide. The next longest-lived is 230Th, an intermediate decay product of 238U with a half-life of 75,400 years. Several other thorium isotopes have half-lives over a day; all of these are also transient in the decay chains of 232Th, 235U, and 238U. Twenty-nine isotopes of protactinium are known with mass numbers 211–239, as well as three excited isomeric states. Only 231Pa and 234Pa have been found in nature. All the isotopes have short lifetimes, except for protactinium-231 (half-life 32,760 years). The most important isotopes are 231Pa and 233Pa; the latter is an intermediate product in obtaining uranium-233 and is the most affordable among artificial isotopes of protactinium. 233Pa has a convenient half-life and energy of γ-radiation, and thus was used in most studies of protactinium chemistry. Protactinium-233 is a β-emitter with a half-life of 26.97 days. There are 27 known isotopes of uranium, having mass numbers 215–242 (except 220). Three of them, 234U, 235U and 238U, are present in appreciable quantities in nature. Among the others, the most important is 233U, which is a final product of the transformation of 232Th irradiated by slow neutrons. 233U has a much higher fission efficiency by low-energy (thermal) neutrons than, for example, 235U. Most uranium chemistry studies were carried out on uranium-238 owing to its long half-life of 4.4×10⁹ years. There are 25 isotopes of neptunium with mass numbers 219–244 (except 221); they are all highly radioactive. The most popular among scientists are the long-lived 237Np (t1/2 = 2.20×10⁶ years) and the short-lived 239Np and 238Np (t1/2 ~ 2 days). There are 21 known isotopes of plutonium, having mass numbers 227–247. The most stable isotope of plutonium is 244Pu, with a half-life of 8.13×10⁷ years. Eighteen isotopes of americium are known with mass numbers from 229 to 247 (with the exception of 231). The most important are 241Am and 243Am, which are alpha-emitters and also emit soft but intense γ-rays; both of them can be obtained in an isotopically pure form. Chemical properties of americium were first studied with 241Am, but later shifted to 243Am, which is almost 20 times less radioactive. 
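The roughly twenty-fold difference in radioactivity between 241Am and 243Am follows directly from their half-lives. The short check below uses half-life values (432.2 years for 241Am, 7,370 years for 243Am) taken from standard nuclide data, not from this article; it is a rough sketch, not a reference calculation.

```python
# Rough check of the "almost 20 times less radioactive" statement for 243Am.
# Specific activity (decays per second per gram) = ln(2) * N_A / (T_half * M),
# where M is the molar mass in g/mol and T_half is in seconds.
import math

N_A = 6.022e23             # Avogadro's number, 1/mol
YEAR = 365.25 * 24 * 3600  # seconds per year

def specific_activity(half_life_years: float, molar_mass: float) -> float:
    """Decays per second per gram of a pure isotope."""
    return math.log(2) * N_A / (half_life_years * YEAR * molar_mass)

# Half-lives below are standard nuclide-data values, not quoted in the article.
a_241 = specific_activity(432.2, 241)   # 241Am
a_243 = specific_activity(7370.0, 243)  # 243Am

print(f"241Am: {a_241:.3e} Bq/g")
print(f"243Am: {a_243:.3e} Bq/g")
print(f"ratio: {a_241 / a_243:.1f}  (the text says 'almost 20 times')")
```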
The disadvantage of 243Am is production of the short-lived daughter isotope 239Np, which has to be considered in the data analysis. Among 19 isotopes of curium, ranging in mass number from 233 to 251, the most accessible are 242Cm and 244Cm; they are α-emitters, but with much shorter lifetime than the americium isotopes. These isotopes emit almost no γ-radiation, but undergo spontaneous fission with the associated emission of neutrons. More long-lived isotopes of curium (245–248Cm, all α-emitters) are formed as a mixture during neutron irradiation of plutonium or americium. Upon short irradiation, this mixture is dominated by 246Cm, and then 248Cm begins to accumulate. Both of these isotopes, especially 248Cm, have a longer half-life (3.48 years) and are much more convenient for carrying out chemical research than 242Cm and 244Cm, but they also have a rather high rate of spontaneous fission. 247Cm has the longest lifetime among isotopes of curium (1.56 years), but is not formed in large quantities because of the strong fission induced by thermal neutrons. Seventeen isotopes of berkelium have been identified with mass numbers 233, 234, 236, 238, and 240–252. Only 249Bk is available in large quantities; it has a relatively short half-life of 330 days and emits mostly soft β-particles, which are inconvenient for detection. Its alpha radiation is rather weak (1.45% with respect to β-radiation), but is sometimes used to detect this isotope. 247Bk is an alpha-emitter with a long half-life of 1,380 years, but it is hard to obtain in appreciable quantities; it is not formed upon neutron irradiation of plutonium because β-decay of curium isotopes with mass number below 248 is not known. (247Cm would actually release energy by β-decaying to 247Bk, but this has never been seen.) The 20 isotopes of californium with mass numbers 237–256 are formed in nuclear reactors; californium-253 is a β-emitter and the rest are α-emitters. The isotopes with even mass numbers (250Cf, 252Cf and 254Cf) have a high rate of spontaneous fission, especially 254Cf of which 99.7% decays by spontaneous fission. Californium-249 has a relatively long half-life (352 years), weak spontaneous fission and strong γ-emission that facilitates its identification. 249Cf is not formed in large quantities in a nuclear reactor because of the slow β-decay of the parent isotope 249Bk and a large cross section of interaction with neutrons, but it can be accumulated in the isotopically pure form as the β-decay product of (pre-selected) 249Bk. Californium produced by reactor-irradiation of plutonium mostly consists of 250Cf and 252Cf, the latter being predominant for large neutron fluences, and its study is hindered by the strong neutron radiation. Among the 18 known isotopes of einsteinium with mass numbers from 240 to 257, the most affordable is 253Es. It is an α-emitter with a half-life of 20.47 days, a relatively weak γ-emission and small spontaneous fission rate as compared with the isotopes of californium. Prolonged neutron irradiation also produces a long-lived isotope 254Es (t1/2 = 275.5 days). Twenty isotopes of fermium are known with mass numbers of 241–260. 254Fm, 255Fm and 256Fm are α-emitters with a short half-life (hours), which can be isolated in significant amounts. 257Fm (t1/2 = 100 days) can accumulate upon prolonged and strong irradiation. All these isotopes are characterized by high rates of spontaneous fission. 
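For the short-lived einsteinium isotopes mentioned above, the fraction of a sample surviving after a given time follows the usual exponential decay law. A minimal sketch, using the half-life values quoted in the text (20.47 days for 253Es, 275.5 days for 254Es):

```python
# Fraction of an isotope remaining after time t: N(t)/N0 = 0.5 ** (t / T_half).
def remaining_fraction(t_days: float, half_life_days: float) -> float:
    return 0.5 ** (t_days / half_life_days)

# Half-lives as quoted in the text above.
for label, t_half in (("253Es", 20.47), ("254Es", 275.5)):
    print(label, f"{remaining_fraction(60, t_half):.1%} left after 60 days")
```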
Among the 17 known isotopes of mendelevium (mass numbers from 244 to 260), the most studied is 256Md, which mainly decays through electron capture (α-radiation is ≈10%) with a half-life of 77 minutes. Another alpha emitter, 258Md, has a half-life of 53 days. Both these isotopes are produced from rare einsteinium (253Es and 255Es respectively), that therefore limits their availability. Long-lived isotopes of nobelium and isotopes of lawrencium (and of heavier elements) have relatively short half-lives. For nobelium, 13 isotopes are known, with mass numbers 249–260 and 262. The chemical properties of nobelium and lawrencium were studied with 255No (t1/2 = 3 min) and 256Lr (t1/2 = 35 s). The longest-lived nobelium isotope, 259No, has a half-life of approximately 1 hour. Lawrencium has 14 known isotopes with mass numbers 251–262, 264, and 266. The most stable of them is 266Lr with a half life of 11 hours. Among all of these, the only isotopes that occur in sufficient quantities in nature to be detected in anything more than traces and have a measurable contribution to the atomic weights of the actinides are the primordial 232Th, 235U, and 238U, and three long-lived decay products of natural uranium, 230Th, 231Pa, and 234U. Natural thorium consists of 0.02(2)% 230Th and 99.98(2)% 232Th; natural protactinium consists of 100% 231Pa; and natural uranium consists of 0.0054(5)% 234U, 0.7204(6)% 235U, and 99.2742(10)% 238U. Formation in nuclear reactors The figure buildup of actinides is a table of nuclides with the number of neutrons on the horizontal axis (isotopes) and the number of protons on the vertical axis (elements). The red dot divides the nuclides in two groups, so the figure is more compact. Each nuclide is represented by a square with the mass number of the element and its half-life. Naturally existing actinide isotopes (Th, U) are marked with a bold border, alpha emitters have a yellow colour, and beta emitters have a blue colour. Pink indicates electron capture (236Np), whereas white stands for a long-lasting metastable state (242Am). The formation of actinide nuclides is primarily characterised by: Neutron capture reactions (n,γ), which are represented in the figure by a short right arrow. The (n,2n) reactions and the less frequently occurring (γ,n) reactions are also taken into account, both of which are marked by a short left arrow. Even more rarely and only triggered by fast neutrons, the (n,3n) reaction occurs, which is represented in the figure with one example, marked by a long left arrow. In addition to these neutron- or gamma-induced nuclear reactions, the radioactive conversion of actinide nuclides also affects the nuclide inventory in a reactor. These decay types are marked in the figure by diagonal arrows. The beta-minus decay, marked with an arrow pointing up-left, plays a major role for the balance of the particle densities of the nuclides. Nuclides decaying by positron emission (beta-plus decay) or electron capture (ϵ) do not occur in a nuclear reactor except as products of knockout reactions; their decays are marked with arrows pointing down-right. Due to the long half-lives of the given nuclides, alpha decay plays almost no role in the formation and decay of the actinides in a power reactor, as the residence time of the nuclear fuel in the reactor core is rather short (a few years). Exceptions are the two relatively short-lived nuclides 242Cm (T1/2 = 163 d) and 236Pu (T1/2 = 2.9 y). 
Only in these two cases is the α decay marked on the nuclide map by a long arrow pointing down-left. A few long-lived actinide isotopes, such as 244Pu and 250Cm, cannot be produced in reactors because neutron capture does not happen quickly enough to bypass the short-lived beta-decaying nuclides 243Pu and 249Cm; they can however be generated in nuclear explosions, which have much higher neutron fluxes. Distribution in nature Thorium and uranium are the most abundant actinides in nature, with respective mass concentrations of 16 ppm and 4 ppm. Uranium mostly occurs in the Earth's crust as a mixture of its oxides in the mineral uraninite, which is also called pitchblende because of its black color. There are several dozen other uranium minerals, such as carnotite (KUO2VO4·3H2O) and autunite (Ca(UO2)2(PO4)2·nH2O). The isotopic composition of natural uranium is 238U (relative abundance 99.2742%), 235U (0.7204%) and 234U (0.0054%); of these, 238U has the longest half-life, 4.51×10⁹ years. The worldwide production of uranium in 2009 amounted to 50,572 tonnes, of which 27.3% was mined in Kazakhstan. Other important uranium mining countries are Canada (20.1%), Australia (15.7%), Namibia (9.1%), Russia (7.0%), and Niger (6.4%). The most abundant thorium minerals are thorianite (ThO2), thorite (ThSiO4) and monazite ((Ce,La,Nd,Th)PO4). Most thorium minerals contain uranium and vice versa, and they all contain a significant fraction of lanthanides. Rich deposits of thorium minerals are located in the United States (440,000 tonnes), Australia and India (~300,000 tonnes each) and Canada (~100,000 tonnes). The abundance of actinium in the Earth's crust is only about 5×10⁻¹⁵%. Actinium is mostly present in uranium-containing minerals, but also in other minerals, though in much smaller quantities. The content of actinium in most natural objects corresponds to the isotopic equilibrium with its parent isotope 235U, and it is not affected by the weak Ac migration. Protactinium is more abundant (10⁻¹²%) in the Earth's crust than actinium. It was discovered in uranium ore in 1913 by Fajans and Göhring. As with actinium, the distribution of protactinium follows that of 235U. The half-life of the longest-lived isotope of neptunium, 237Np, is negligible compared to the age of the Earth. Thus neptunium is present in nature only in negligible amounts produced as intermediate decay products of other isotopes. Traces of plutonium in uranium minerals were first found in 1942, and the more systematic results on 239Pu are summarized in the table (no other plutonium isotopes could be detected in those samples). The upper limit of abundance of the longest-lived isotope of plutonium, 244Pu, is about 3×10⁻²⁰%. Plutonium could not be detected in samples of lunar soil. Owing to its scarcity in nature, most plutonium is produced synthetically. Extraction Owing to the low abundance of actinides, their extraction is a complex, multistep process. Fluorides of actinides are usually used because they are insoluble in water and can be easily separated with redox reactions; the fluorides are reduced with calcium, magnesium or barium metal. Among the actinides, thorium and uranium are the easiest to isolate. Thorium is extracted mostly from monazite: thorium pyrophosphate (ThP2O7) is reacted with nitric acid, and the produced thorium nitrate is treated with tributyl phosphate. Rare-earth impurities are separated by increasing the pH in sulfate solution. In another extraction method, monazite is decomposed with a 45% aqueous solution of sodium hydroxide at 140 °C. 
Mixed metal hydroxides are extracted first, filtered at 80 °C, washed with water and dissolved with concentrated hydrochloric acid. Next, the acidic solution is neutralized with hydroxides to pH = 5.8 that results in precipitation of thorium hydroxide (Th(OH)4) contaminated with ~3% of rare-earth hydroxides; the rest of rare-earth hydroxides remains in solution. Thorium hydroxide is dissolved in an inorganic acid and then purified from the rare earth elements. An efficient method is the dissolution of thorium hydroxide in nitric acid, because the resulting solution can be purified by extraction with organic solvents: Th(OH)4 + 4 HNO3 → Th(NO3)4 + 4 H2O Metallic thorium is separated from the anhydrous oxide, chloride or fluoride by reacting it with calcium in an inert atmosphere: ThO2 + 2 Ca → 2 CaO + Th Sometimes thorium is extracted by electrolysis of a fluoride in a mixture of sodium and potassium chloride at 700–800 °C in a graphite crucible. Highly pure thorium can be extracted from its iodide with the crystal bar process. Uranium is extracted from its ores in various ways. In one method, the ore is burned and then reacted with nitric acid to convert uranium into a dissolved state. Treating the solution with a solution of tributyl phosphate (TBP) in kerosene transforms uranium into an organic form UO2(NO3)2(TBP)2. The insoluble impurities are filtered and the uranium is extracted by reaction with hydroxides as (NH4)2U2O7 or with hydrogen peroxide as UO4·2H2O. When the uranium ore is rich in such minerals as dolomite, magnesite, etc., those minerals consume much acid. In this case, the carbonate method is used for uranium extraction. Its main component is an aqueous solution of sodium carbonate, which converts uranium into a complex [UO2(CO3)3]4−, which is stable in aqueous solutions at low concentrations of hydroxide ions. The advantages of the sodium carbonate method are that the chemicals have low corrosivity (compared to nitrates) and that most non-uranium metals precipitate from the solution. The disadvantage is that tetravalent uranium compounds precipitate as well. Therefore, the uranium ore is treated with sodium carbonate at elevated temperature and under oxygen pressure: 2 UO2 + O2 + 6 → 2 [UO2(CO3)3]4− This equation suggests that the best solvent for the uranyl carbonate processing is a mixture of carbonate with bicarbonate. At high pH, this results in precipitation of diuranate, which is treated with hydrogen in the presence of nickel yielding an insoluble uranium tetracarbonate. Another separation method uses polymeric resins as a polyelectrolyte. Ion exchange processes in the resins result in separation of uranium. Uranium from resins is washed with a solution of ammonium nitrate or nitric acid that yields uranyl nitrate, UO2(NO3)2·6H2O. When heated, it turns into UO3, which is converted to UO2 with hydrogen: UO3 + H2 → UO2 + H2O Reacting uranium dioxide with hydrofluoric acid changes it to uranium tetrafluoride, which yields uranium metal upon reaction with magnesium metal: 4 HF + UO2 → UF4 + 2 H2O To extract plutonium, neutron-irradiated uranium is dissolved in nitric acid, and a reducing agent (FeSO4, or H2O2) is added to the resulting solution. This addition changes the oxidation state of plutonium from +6 to +4, while uranium remains in the form of uranyl nitrate (UO2(NO3)2). The solution is treated with a reducing agent and neutralized with ammonium carbonate to pH = 8 that results in precipitation of Pu4+ compounds. 
In another method, Pu4+ and are first extracted with tributyl phosphate, then reacted with hydrazine washing out the recovered plutonium. The major difficulty in separation of actinium is the similarity of its properties with those of lanthanum. Thus actinium is either synthesized in nuclear reactions from isotopes of radium or separated using ion-exchange procedures. Properties Actinides have similar properties to lanthanides. Just as the 4f electron shells are filled in the lanthanides, the 5f electron shells are filled in the actinides. Because the 5f, 6d, 7s, and 7p shells are close in energy, many irregular configurations arise; thus, in gas-phase atoms, just as the first 4f electron only appears in cerium, so the first 5f electron appears even later, in protactinium. However, just as lanthanum is the first element to use the 4f shell in compounds, so actinium is the first element to use the 5f shell in compounds. The f-shells complete their filling together, at ytterbium and nobelium. The first experimental evidence for the filling of the 5f shell in actinides was obtained by McMillan and Abelson in 1940. As in lanthanides (see lanthanide contraction), the ionic radius of actinides monotonically decreases with atomic number (see also actinoid contraction). The shift of electron configurations in the gas phase does not always match the chemical behaviour. For example, the early-transition-metal-like prominence of the highest oxidation state, corresponding to removal of all valence electrons, extends up to uranium even though the 5f shells begin filling before that. On the other hand, electron configurations resembling the lanthanide congeners already begin at plutonium, even though lanthanide-like behaviour does not become dominant until the second half of the series begins at curium. The elements between uranium and curium form a transition between these two kinds of behaviour, where higher oxidation states continue to exist, but lose stability with respect to the +3 state. The +2 state becomes more important near the end of the series, and is the most stable oxidation state for nobelium, the last 5f element. Oxidation states rise again only after nobelium, showing that a new series of 6d transition metals has begun: lawrencium shows only the +3 oxidation state, and rutherfordium only the +4 state, making them respectively congeners of lutetium and hafnium in the 5d row. Physical properties Actinides are typical metals. All of them are soft and have a silvery color (but tarnish in air), relatively high density and plasticity. Some of them can be cut with a knife. Their electrical resistivity varies between 15 and 150 μΩ·cm. The hardness of thorium is similar to that of soft steel, so heated pure thorium can be rolled in sheets and pulled into wire. Thorium is nearly half as dense as uranium and plutonium, but is harder than either of them. All actinides are radioactive, paramagnetic, and, with the exception of actinium, have several crystalline phases: plutonium has seven, and uranium, neptunium and californium three. The crystal structures of protactinium, uranium, neptunium and plutonium do not have clear analogs among the lanthanides and are more similar to those of the 3d-transition metals. All actinides are pyrophoric, especially when finely divided, that is, they spontaneously ignite upon reaction with air at room temperature. The melting point of actinides does not have a clear dependence on the number of f-electrons. 
The unusually low melting point of neptunium and plutonium (~640 °C) is explained by hybridization of 5f and 6d orbitals and the formation of directional bonds in these metals. Chemical properties Like the lanthanides, all actinides are highly reactive with halogens and chalcogens; however, the actinides react more easily. Actinides, especially those with a small number of 5f-electrons, are prone to hybridization. This is explained by the similarity of the electron energies at the 5f, 7s and 6d shells. Most actinides exhibit a larger variety of valence states, and the most stable are +6 for uranium, +5 for protactinium and neptunium, +4 for thorium and plutonium and +3 for actinium and other actinides. Actinium is chemically similar to lanthanum, which is explained by their similar ionic radii and electronic structures. Like lanthanum, actinium almost always has an oxidation state of +3 in compounds, but it is less reactive and has more pronounced basic properties. Among other trivalent actinides Ac3+ is least acidic, i.e. has the weakest tendency to hydrolyze in aqueous solutions. Thorium is rather active chemically. Owing to lack of electrons on 6d and 5f orbitals, tetravalent thorium compounds are colorless. At pH < 3, solutions of thorium salts are dominated by the cations [Th(H2O)8]4+. The Th4+ ion is relatively large, and depending on the coordination number can have a radius between 0.95 and 1.14 Å. As a result, thorium salts have a weak tendency to hydrolyse. The distinctive ability of thorium salts is their high solubility both in water and polar organic solvents. Protactinium exhibits two valence states; the +5 is stable, and the +4 state easily oxidizes to protactinium(V). Thus tetravalent protactinium in solutions is obtained by the action of strong reducing agents in a hydrogen atmosphere. Tetravalent protactinium is chemically similar to uranium(IV) and thorium(IV). Fluorides, phosphates, hypophosphates, iodates and phenylarsonates of protactinium(IV) are insoluble in water and dilute acids. Protactinium forms soluble carbonates. The hydrolytic properties of pentavalent protactinium are close to those of tantalum(V) and niobium(V). The complex chemical behavior of protactinium is a consequence of the start of the filling of the 5f shell in this element. Uranium has a valence from 3 to 6, the last being most stable. In the hexavalent state, uranium is very similar to the group 6 elements. Many compounds of uranium(IV) and uranium(VI) are non-stoichiometric, i.e. have variable composition. For example, the actual chemical formula of uranium dioxide is UO2+x, where x varies between −0.4 and 0.32. Uranium(VI) compounds are weak oxidants. Most of them contain the linear "uranyl" group, . Between 4 and 6 ligands can be accommodated in an equatorial plane perpendicular to the uranyl group. The uranyl group acts as a hard acid and forms stronger complexes with oxygen-donor ligands than with nitrogen-donor ligands. and are also the common form of Np and Pu in the +6 oxidation state. Uranium(IV) compounds exhibit reducing properties, e.g., they are easily oxidized by atmospheric oxygen. Uranium(III) is a very strong reducing agent. Owing to the presence of d-shell, uranium (as well as many other actinides) forms organometallic compounds, such as UIII(C5H5)3 and UIV(C5H5)4. Neptunium has valence states from 3 to 7, which can be simultaneously observed in solutions. The most stable state in solution is +5, but the valence +4 is preferred in solid neptunium compounds. 
Neptunium metal is very reactive. Ions of neptunium are prone to hydrolysis and formation of coordination compounds. Plutonium also exhibits valence states between 3 and 7 inclusive, and thus is chemically similar to neptunium and uranium. It is highly reactive, and quickly forms an oxide film in air. Plutonium reacts with hydrogen even at temperatures as low as 25–50 °C; it also easily forms halides and intermetallic compounds. Hydrolysis reactions of plutonium ions of different oxidation states are quite diverse. Plutonium(V) can enter polymerization reactions. The largest chemical diversity among actinides is observed in americium, which can have a valence between 2 and 6. Divalent americium is obtained only in dry compounds and non-aqueous solutions (acetonitrile). Oxidation states +3, +5 and +6 are typical for aqueous solutions, but are also found in the solid state. Tetravalent americium forms stable solid compounds (dioxide, fluoride and hydroxide) as well as complexes in aqueous solutions. It was reported that in alkaline solution americium can be oxidized to the heptavalent state, but these data proved erroneous. The most stable valence of americium is 3 in aqueous solution and 3 or 4 in solid compounds. Valence 3 is dominant in all subsequent elements up to lawrencium (with the exception of nobelium). Curium can be tetravalent in solids (fluoride, dioxide). Berkelium, along with a valence of +3, also shows a valence of +4 that is more stable than that of curium; the valence 4 is observed in the solid fluoride and dioxide. The stability of Bk4+ in aqueous solution is close to that of Ce4+. Only valence 3 was observed for californium, einsteinium and fermium. The divalent state is proven for mendelevium and nobelium, and in nobelium it is more stable than the trivalent state. Lawrencium shows valence 3 both in solutions and solids. The redox potential E(AnO2²⁺/An⁴⁺) increases from −0.32 V for uranium, through 0.34 V (Np) and 1.04 V (Pu), to 1.34 V for americium, revealing the increasing reducing ability of the An⁴⁺ ion from americium to uranium. All actinides form AnH3 hydrides of black color with salt-like properties. Actinides also produce carbides with the general formula AnC or AnC2 (U2C3 for uranium) as well as sulfides An2S3 and AnS2. Compounds Oxides and hydroxides Some actinides can exist in several oxide forms, such as An2O3, AnO2, An2O5 and AnO3. For all actinides, the oxides AnO3 are amphoteric, while An2O3, AnO2 and An2O5 are basic; they easily react with water, forming bases: An2O3 + 3 H2O → 2 An(OH)3. These bases are poorly soluble in water, and in their activity they are close to the hydroxides of rare-earth metals. Np(OH)3 has not yet been synthesized; Pu(OH)3 has a blue color, while Am(OH)3 is pink and Cm(OH)3 is colorless. Bk(OH)3 and Cf(OH)3 are also known, as are tetravalent hydroxides for Np, Pu and Am and pentavalent ones for Np and Am. The strongest base is that of actinium. All compounds of actinium are colorless, except for black actinium sulfide (Ac2S3). Dioxides of tetravalent actinides crystallize in the cubic system, with the same structure as calcium fluoride. Thorium, reacting with oxygen, exclusively forms the dioxide: Th + O2 → ThO2 (at about 1000 °C). Thorium dioxide is a refractory material with the highest melting point of any known oxide (3390 °C). Adding 0.8–1% ThO2 to tungsten stabilizes its structure, so the doped filaments have better mechanical stability to vibrations. 
To dissolve ThO2 in acids, it is heated to 500–600 °C; heating above 600 °C produces a very resistant to acids and other reagents form of ThO2. Small addition of fluoride ions catalyses dissolution of thorium dioxide in acids. Two protactinium oxides have been obtained: PaO2 (black) and Pa2O5 (white); the former is isomorphic with ThO2 and the latter is easier to obtain. Both oxides are basic, and Pa(OH)5 is a weak, poorly soluble base. Decomposition of certain salts of uranium, for example UO2(NO3)·6H2O in air at 400 °C, yields orange or yellow UO3. This oxide is amphoteric and forms several hydroxides, the most stable being uranyl hydroxide UO2(OH)2. Reaction of uranium(VI) oxide with hydrogen results in uranium dioxide, which is similar in its properties with ThO2. This oxide is also basic and corresponds to the uranium hydroxide U(OH)4. Plutonium, neptunium and americium form two basic oxides: An2O3 and AnO2. Neptunium trioxide is unstable; thus, only Np3O8 could be obtained so far. However, the oxides of plutonium and neptunium with the chemical formula AnO2 and An2O3 are well characterized. Salts *An – actinide **Depending on the isotopes Actinides easily react with halogens forming salts with the formulas MX3 and MX4 (X = halogen). So the first berkelium compound, BkCl3, was synthesized in 1962 with an amount of 3 nanograms. Like the halogens of rare earth elements, actinide chlorides, bromides, and iodides are water-soluble, and fluorides are insoluble. Uranium easily yields a colorless hexafluoride, which sublimates at a temperature of 56.5 °C; because of its volatility, it is used in the separation of uranium isotopes with gas centrifuge or gaseous diffusion. Actinide hexafluorides have properties close to anhydrides. They are very sensitive to moisture and hydrolyze forming AnO2F2. The pentachloride and black hexachloride of uranium were synthesized, but they are both unstable. Action of acids on actinides yields salts, and if the acids are non-oxidizing then the actinide in the salt is in low-valence state: U + 2 H2SO4 → U(SO4)2 + 2 H2 2 Pu + 6 HCl → 2 PuCl3 + 3 H2 However, in these reactions the regenerating hydrogen can react with the metal, forming the corresponding hydride. Uranium reacts with acids and water much more easily than thorium. Actinide salts can also be obtained by dissolving the corresponding hydroxides in acids. Nitrates, chlorides, sulfates and perchlorates of actinides are water-soluble. When crystallizing from aqueous solutions, these salts form hydrates, such as Th(NO3)4·6H2O, Th(SO4)2·9H2O and Pu2(SO4)3·7H2O. Salts of high-valence actinides easily hydrolyze. So, colorless sulfate, chloride, perchlorate and nitrate of thorium transform into basic salts with formulas Th(OH)2SO4 and Th(OH)3NO3. The solubility and insolubility of trivalent and tetravalent actinides is like that of lanthanide salts. So phosphates, fluorides, oxalates, iodates and carbonates of actinides are weakly soluble in water; they precipitate as hydrates, such as ThF4·3H2O and Th(CrO4)2·3H2O. Actinides with oxidation state +6, except for the AnO22+-type cations, form [AnO4]2−, [An2O7]2− and other complex anions. For example, uranium, neptunium and plutonium form salts of the Na2UO4 (uranate) and (NH4)2U2O7 (diuranate) types. In comparison with lanthanides, actinides more easily form coordination compounds, and this ability increases with the actinide valence. 
Trivalent actinides do not form fluoride coordination compounds, whereas tetravalent thorium forms K2ThF6, KThF5, and even K5ThF9 complexes. Thorium also forms the corresponding sulfates (for example Na2SO4·Th(SO4)2·5H2O), nitrates and thiocyanates. Salts with the general formula An2Th(NO3)6·nH2O are of coordination nature, with the coordination number of thorium equal to 12. Complex salts of pentavalent and hexavalent actinides are even easier to produce. The most stable coordination compounds of actinides – those of tetravalent thorium and uranium – are obtained in reactions with diketones, e.g. acetylacetone. Applications While actinides have some established daily-life applications, such as in smoke detectors (americium) and gas mantles (thorium), they are mostly used in nuclear weapons and as fuel in nuclear reactors. The last two areas exploit the property of actinides to release enormous energy in nuclear reactions, which under certain conditions may become self-sustaining chain reactions. The most important isotope for nuclear power applications is uranium-235. It is used in thermal-neutron reactors, and its concentration in natural uranium does not exceed 0.72%. This isotope strongly absorbs thermal neutrons, releasing much energy upon fission; the fission of 1 gram of 235U releases about 1 MW·day of energy. Importantly, 235U emits more neutrons than it absorbs; upon reaching the critical mass, it enters into a self-sustaining chain reaction. Typically, the uranium nucleus divides into two medium-mass fragments with the release of 2–3 neutrons, for example: 235U + n → two fission fragments + 3 n. Other promising actinide isotopes for nuclear power are thorium-232 and its product from the thorium fuel cycle, uranium-233. Emission of neutrons during the fission of uranium is important not only for maintaining the nuclear chain reaction, but also for the synthesis of the heavier actinides. Uranium-239 converts via two successive β-decays (through neptunium-239) into plutonium-239, which, like uranium-235, is capable of spontaneous fission. The world's first nuclear reactors were built not for energy, but for producing plutonium-239 for nuclear weapons. About half of the thorium produced is used as the light-emitting material of gas mantles. Thorium is also added to multicomponent alloys of magnesium and zinc. Mg-Th alloys are light and strong, and also have a high melting point and ductility, and thus are widely used in the aviation industry and in the production of missiles. Thorium also has good electron emission properties, with a long lifetime and a low potential barrier for emission. The relative content of thorium and uranium isotopes is widely used to estimate the age of various objects, including stars (see radiometric dating). The major application of plutonium has been in nuclear weapons, where the isotope plutonium-239 was a key component due to its ease of fission and availability. Plutonium-based designs allow reducing the critical mass to about a third of that for uranium-235. The "Fat Man"-type plutonium bombs produced during the Manhattan Project used explosive compression of plutonium to obtain significantly higher densities than normal, combined with a central neutron source to begin the reaction and increase efficiency. Thus only 6.2 kg of plutonium was needed for an explosive yield equivalent to 20 kilotons of TNT. 
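The "about 1 MW·day per gram of 235U" figure quoted above can be checked with a back-of-the-envelope calculation. The roughly 200 MeV released per fission is a standard textbook value assumed here, not a number taken from this article; the sketch is only an order-of-magnitude check.

```python
# Rough check that fissioning 1 gram of 235U yields about 1 MW·day of energy.
N_A = 6.022e23             # atoms per mole
E_PER_FISSION_MEV = 200.0  # typical energy release per 235U fission (assumed)
MEV_TO_J = 1.602e-13       # joules per MeV

atoms_per_gram = N_A / 235.0                       # ~2.6e21 nuclei in 1 g
energy_joules = atoms_per_gram * E_PER_FISSION_MEV * MEV_TO_J
mw_day = 1e6 * 86400                               # joules in 1 MW·day

print(f"energy from 1 g of 235U: {energy_joules:.2e} J")
print(f"equivalent to {energy_joules / mw_day:.2f} MW·day")  # about 0.95
```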
Physical sciences
Chemical element groups
null
2341
https://en.wikipedia.org/wiki/Alkaloid
Alkaloid
Alkaloids are a broad class of naturally occurring organic compounds that contain at least one nitrogen atom. Some synthetic compounds of similar structure may also be termed alkaloids. Alkaloids are produced by a large variety of organisms including bacteria, fungi, plants, and animals. They can be purified from crude extracts of these organisms by acid-base extraction, or solvent extractions followed by silica-gel column chromatography. Alkaloids have a wide range of pharmacological activities including antimalarial (e.g. quinine), antiasthma (e.g. ephedrine), anticancer (e.g. homoharringtonine), cholinomimetic (e.g. galantamine), vasodilatory (e.g. vincamine), antiarrhythmic (e.g. quinidine), analgesic (e.g. morphine), antibacterial (e.g. chelerythrine), and antihyperglycemic activities (e.g. berberine). Many have found use in traditional or modern medicine, or as starting points for drug discovery. Other alkaloids possess psychotropic (e.g. psilocin) and stimulant activities (e.g. cocaine, caffeine, nicotine, theobromine), and have been used in entheogenic rituals or as recreational drugs. Alkaloids can be toxic too (e.g. atropine, tubocurarine). Although alkaloids act on a diversity of metabolic systems in humans and other animals, they almost uniformly evoke a bitter taste. The boundary between alkaloids and other nitrogen-containing natural compounds is not clear-cut. Most alkaloids are basic, although some have neutral and even weakly acidic properties. In addition to carbon, hydrogen and nitrogen, alkaloids may also contain oxygen or sulfur. Rarer still, they may contain elements such as phosphorus, chlorine, and bromine. Compounds like amino acid peptides, proteins, nucleotides, nucleic acid, amines, and antibiotics are usually not called alkaloids. Natural compounds containing nitrogen in the exocyclic position (mescaline, serotonin, dopamine, etc.) are usually classified as amines rather than as alkaloids. Some authors, however, consider alkaloids a special case of amines. Naming The name "alkaloids" () was introduced in 1819 by German chemist Carl Friedrich Wilhelm Meissner, and is derived from late Latin root and the Greek-language suffix -('like'). However, the term came into wide use only after the publication of a review article, by Oscar Jacobsen in the chemical dictionary of Albert Ladenburg in the 1880s. There is no unique method for naming alkaloids. Many individual names are formed by adding the suffix "ine" to the species or genus name. For example, atropine is isolated from the plant Atropa belladonna; strychnine is obtained from the seed of the Strychnine tree (Strychnos nux-vomica L.). Where several alkaloids are extracted from one plant their names are often distinguished by variations in the suffix: "idine", "anine", "aline", "inine" etc. There are also at least 86 alkaloids whose names contain the root "vin" because they are extracted from vinca plants such as Vinca rosea (Catharanthus roseus); these are called vinca alkaloids. History Alkaloid-containing plants have been used by humans since ancient times for therapeutic and recreational purposes. For example, medicinal plants have been known in Mesopotamia from about 2000 BC. The Odyssey of Homer referred to a gift given to Helen by the Egyptian queen, a drug bringing oblivion. It is believed that the gift was an opium-containing drug. A Chinese book on houseplants written in 1st–3rd centuries BC mentioned a medical use of ephedra and opium poppies. 
Also, coca leaves have been used by Indigenous South Americans since ancient times. Extracts from plants containing toxic alkaloids, such as aconitine and tubocurarine, were used since antiquity for poisoning arrows. Studies of alkaloids began in the 19th century. In 1804, the German chemist Friedrich Sertürner isolated from opium a "soporific principle" (), which he called "morphium", referring to Morpheus, the Greek god of dreams; in German and some other Central-European languages, this is still the name of the drug. The term "morphine", used in English and French, was given by the French physicist Joseph Louis Gay-Lussac. A significant contribution to the chemistry of alkaloids in the early years of its development was made by the French researchers Pierre Joseph Pelletier and Joseph Bienaimé Caventou, who discovered quinine (1820) and strychnine (1818). Several other alkaloids were discovered around that time, including xanthine (1817), atropine (1819), caffeine (1820), coniine (1827), nicotine (1828), colchicine (1833), sparteine (1851), and cocaine (1860). The development of the chemistry of alkaloids was accelerated by the emergence of spectroscopic and chromatographic methods in the 20th century, so that by 2008 more than 12,000 alkaloids had been identified. The first complete synthesis of an alkaloid was achieved in 1886 by the German chemist Albert Ladenburg. He produced coniine by reacting 2-methylpyridine with acetaldehyde and reducing the resulting 2-propenyl pyridine with sodium. Classifications Compared with most other classes of natural compounds, alkaloids are characterized by a great structural diversity. There is no uniform classification. Initially, when knowledge of chemical structures was lacking, botanical classification of the source plants was relied on. This classification is now considered obsolete. More recent classifications are based on similarity of the carbon skeleton (e.g., indole-, isoquinoline-, and pyridine-like) or biochemical precursor (ornithine, lysine, tyrosine, tryptophan, etc.). However, they require compromises in borderline cases; for example, nicotine contains a pyridine fragment from nicotinamide and a pyrrolidine part from ornithine and therefore can be assigned to both classes. Alkaloids are often divided into the following major groups: "True alkaloids" contain nitrogen in the heterocycle and originate from amino acids. Their characteristic examples are atropine, nicotine, and morphine. This group also includes some alkaloids that besides the nitrogen heterocycle contain terpene (e.g., evonine) or peptide fragments (e.g. ergotamine). The piperidine alkaloids coniine and coniceine may be regarded as true alkaloids (rather than pseudoalkaloids: see below) although they do not originate from amino acids. "Protoalkaloids", which contain nitrogen (but not the nitrogen heterocycle) and also originate from amino acids. Examples include mescaline, adrenaline and ephedrine. Polyamine alkaloids – derivatives of putrescine, spermidine, and spermine. Peptide and cyclopeptide alkaloids. Pseudoalkaloids – alkaloid-like compounds that do not originate from amino acids. This group includes terpene-like and steroid-like alkaloids, as well as purine-like alkaloids such as caffeine, theobromine, theacrine and theophylline. Some authors classify ephedrine and cathinone as pseudoalkaloids. Those originate from the amino acid phenylalanine, but acquire their nitrogen atom not from the amino acid but through transamination. 
Some alkaloids do not have the carbon skeleton characteristic of their group. For example, galantamine and homoaporphines do not contain an isoquinoline fragment but are, in general, attributed to isoquinoline alkaloids. The main classes of monomeric alkaloids are listed in the table below. Properties Most alkaloids contain oxygen in their molecular structure; those compounds are usually colorless crystals at ambient conditions. Oxygen-free alkaloids, such as nicotine or coniine, are typically volatile, colorless, oily liquids. Some alkaloids are colored, like berberine (yellow) and sanguinarine (orange). Most alkaloids are weak bases, but some, such as theobromine and theophylline, are amphoteric. Many alkaloids dissolve poorly in water but readily dissolve in organic solvents, such as diethyl ether, chloroform or 1,2-dichloroethane. Caffeine, cocaine, codeine and nicotine are slightly soluble in water (with a solubility of ≥1 g/L), whereas others, including morphine and yohimbine, are very slightly water-soluble (0.1–1 g/L). Alkaloids and acids form salts of various strengths. These salts are usually freely soluble in water and ethanol and poorly soluble in most organic solvents. Exceptions include scopolamine hydrobromide, which is soluble in organic solvents, and the water-soluble quinine sulfate. Most alkaloids have a bitter taste or are poisonous when ingested. Alkaloid production in plants appears to have evolved in response to feeding by herbivorous animals; however, some animals have evolved the ability to detoxify alkaloids. Some alkaloids can produce developmental defects in the offspring of animals that consume but cannot detoxify the alkaloids. One example is the alkaloid cyclopamine, produced in the leaves of corn lily. During the 1950s, up to 25% of lambs born to sheep that had grazed on corn lily had serious facial deformations. These ranged from deformed jaws to cyclopia. After decades of research, in the 1980s, the compound responsible for these deformities was identified as the alkaloid 11-deoxyjervine, later renamed cyclopamine. Distribution in nature Alkaloids are generated by various living organisms, especially by higher plants – about 10 to 25% of which contain alkaloids. Therefore, in the past the term "alkaloid" was associated with plants. The alkaloid content of plants is usually within a few percent and is inhomogeneous over the plant tissues. Depending on the type of plant, the maximum concentration is observed in the leaves (for example, black henbane), fruits or seeds (Strychnine tree), root (Rauvolfia serpentina) or bark (cinchona). Furthermore, different tissues of the same plant may contain different alkaloids. Besides plants, alkaloids are found in certain types of fungi, such as psilocybin in the fruiting bodies of the genus Psilocybe, and in animals, such as bufotenin in the skin of some toads and a number of insects, notably ants. Many marine organisms also contain alkaloids. Some amines, such as adrenaline and serotonin, which play an important role in higher animals, are similar to alkaloids in their structure and biosynthesis and are sometimes called alkaloids. Extraction Because of the structural diversity of alkaloids, there is no single method of their extraction from natural raw materials. Most methods exploit the property of most alkaloids to be soluble in organic solvents but not in water, and the opposite tendency of their salts. Most plants contain several alkaloids. Their mixture is extracted first and then individual alkaloids are separated. 
Plants are thoroughly ground before extraction. Most alkaloids are present in the raw plants in the form of salts of organic acids. The extracted alkaloids may remain salts or change into bases. Base extraction is achieved by processing the raw material with alkaline solutions and extracting the alkaloid bases with organic solvents, such as 1,2-dichloroethane, chloroform, diethyl ether or benzene. Then, the impurities are dissolved by weak acids; this converts alkaloid bases into salts that are washed away with water. If necessary, an aqueous solution of alkaloid salts is again made alkaline and treated with an organic solvent. The process is repeated until the desired purity is achieved. In the acidic extraction, the raw plant material is processed by a weak acidic solution (e.g., acetic acid in water, ethanol, or methanol). A base is then added to convert alkaloids to basic forms that are extracted with organic solvent (if the extraction was performed with alcohol, it is removed first, and the remainder is dissolved in water). The solution is purified as described above. Alkaloids are separated from their mixture using their different solubility in certain solvents and different reactivity with certain reagents, or by distillation. A number of alkaloids have been identified from insects, among which the fire ant venom alkaloids known as solenopsins have received particular attention from researchers. These insect alkaloids can be efficiently extracted by solvent immersion of live fire ants or by centrifugation of live ants followed by silica-gel chromatography purification. Tracking and dosing the extracted solenopsin ant alkaloids has been described as possible based on their absorbance peak around 232 nanometers. Biosynthesis Biological precursors of most alkaloids are amino acids, such as ornithine, lysine, phenylalanine, tyrosine, tryptophan, histidine, aspartic acid, and anthranilic acid. Nicotinic acid can be synthesized from tryptophan or aspartic acid. The pathways of alkaloid biosynthesis are too numerous to be easily classified. However, there are a few typical reactions involved in the biosynthesis of various classes of alkaloids, including the synthesis of Schiff bases and the Mannich reaction. Synthesis of Schiff bases Schiff bases can be obtained by reacting amines with ketones or aldehydes. These reactions are a common method of producing C=N bonds. In the biosynthesis of alkaloids, such reactions may take place within a molecule, such as in the synthesis of piperidine. Mannich reaction An integral component of the Mannich reaction, in addition to an amine and a carbonyl compound, is a carbanion, which plays the role of the nucleophile in the nucleophilic addition to the ion formed by the reaction of the amine and the carbonyl. The Mannich reaction can proceed both intermolecularly and intramolecularly. Dimer alkaloids In addition to the monomeric alkaloids described above, there are also dimeric, and even trimeric and tetrameric, alkaloids formed upon condensation of two, three, and four monomeric alkaloids. Dimeric alkaloids are usually formed from monomers of the same type through the following mechanisms: the Mannich reaction (resulting in, e.g., voacamine); the Michael reaction (villalstonine); condensation of aldehydes with amines (toxiferine); oxidative addition of phenols (dauricine, tubocurarine); and lactonization (carpaine). 
There are also dimeric alkaloids formed from two distinct monomers, such as the vinca alkaloids vinblastine and vincristine, which are formed from the coupling of catharanthine and vindoline. The newer semi-synthetic chemotherapeutic agent vinorelbine is used in the treatment of non-small-cell lung cancer. It is another derivative dimer of vindoline and catharanthine and is synthesised from anhydrovinblastine, starting either from leurosine or the monomers themselves. Biological role Alkaloids are among the most important and best-known secondary metabolites, i.e. biogenic substances not directly involved in the normal growth, development, or reproduction of the organism. Instead, they generally mediate ecological interactions, which may produce a selective advantage for the organism by increasing its survivability or fecundity. In some cases their function, if any, remains unclear. An early hypothesis, that alkaloids are the final products of nitrogen metabolism in plants, as urea and uric acid are in mammals, was refuted by the finding that their concentration fluctuates rather than steadily increases. Most of the known functions of alkaloids are related to protection. For example, the aporphine alkaloid liriodenine produced by the tulip tree protects it from parasitic fungi. In addition, the presence of alkaloids in the plant prevents insects and chordate animals from eating it. However, some animals are adapted to alkaloids and even use them in their own metabolism. Such alkaloid-related substances as serotonin, dopamine and histamine are important neurotransmitters in animals. Alkaloids are also known to regulate plant growth. One example of an organism that uses alkaloids for protection is Utetheisa ornatrix, more commonly known as the ornate moth. Pyrrolizidine alkaloids render the larvae and adult moths unpalatable to many of their natural enemies, such as coccinellid beetles, green lacewings, insectivorous hemipterans and insectivorous bats. Another example of alkaloids being utilized occurs in the poison hemlock moth (Agonopterix alstroemeriana). This moth feeds on its highly toxic and alkaloid-rich host plant poison hemlock (Conium maculatum) during its larval stage. A. alstroemeriana may benefit twofold from the toxicity of the naturally occurring alkaloids, both through the unpalatability of the species to predators and through the ability of A. alstroemeriana to recognize Conium maculatum as the correct location for oviposition. A fire ant venom alkaloid known as solenopsin has been demonstrated to protect queens of invasive fire ants during the foundation of new nests, thus playing a central role in the spread of this pest ant species around the world. Applications In medicine Medical use of alkaloid-containing plants has a long history, and, thus, when the first alkaloids were isolated in the 19th century, they immediately found application in clinical practice. Many alkaloids are still used in medicine, usually in the form of salts. Many synthetic and semisynthetic drugs are structural modifications of the alkaloids, which were designed to enhance or change the primary effect of the drug and reduce unwanted side effects. For example, naloxone, an opioid receptor antagonist, is a derivative of thebaine, which is present in opium. In agriculture Prior to the development of a wide range of relatively low-toxicity synthetic pesticides, some alkaloids, such as salts of nicotine and anabasine, were used as insecticides. 
Their use was limited by their high toxicity to humans. Use as psychoactive drugs Preparations of plants and fungi containing alkaloids and their extracts, and later pure alkaloids, have long been used as psychoactive substances. Cocaine, caffeine, and cathinone are stimulants of the central nervous system. Mescaline and many indole alkaloids (such as psilocybin, dimethyltryptamine and ibogaine) have hallucinogenic effects. Morphine and codeine are strong narcotic painkillers. There are alkaloids that do not have strong psychoactive effects themselves, but are precursors for semi-synthetic psychoactive drugs. For example, ephedrine and pseudoephedrine are used to produce methcathinone and methamphetamine. Thebaine is used in the synthesis of many painkillers such as oxycodone.
Biology and health sciences
Biochemistry and molecular biology
null
2349
https://en.wikipedia.org/wiki/Abstract%20data%20type
Abstract data type
In computer science, an abstract data type (ADT) is a mathematical model for data types, defined by its behavior (semantics) from the point of view of a user of the data, specifically in terms of possible values, possible operations on data of this type, and the behavior of these operations. This mathematical model contrasts with data structures, which are concrete representations of data, and are the point of view of an implementer, not a user. For example, a stack has push/pop operations that follow a Last-In-First-Out rule, and can be concretely implemented using either a list or an array. Another example is a set, which stores values without any particular order and with no repeated values. Values themselves are not retrieved from sets; rather, one tests a value for membership to obtain a Boolean "in" or "not in". ADTs are a theoretical concept, used in formal semantics and program verification and, less strictly, in the design and analysis of algorithms, data structures, and software systems. Most mainstream programming languages do not directly support formally specifying ADTs. However, various language features correspond to certain aspects of implementing ADTs, and are easily confused with ADTs proper; these include abstract types, opaque data types, protocols, and design by contract. For example, in modular programming, the module declares procedures that correspond to the ADT operations, often with comments that describe the constraints. This information hiding strategy allows the implementation of the module to be changed without disturbing the client programs, but the module only informally defines an ADT. The notion of abstract data types is related to the concept of data abstraction, important in object-oriented programming and design by contract methodologies for software engineering. History ADTs were first proposed by Barbara Liskov and Stephen N. Zilles in 1974, as part of the development of the CLU language. Algebraic specification was an important subject of research in computer science around 1980 and was almost a synonym for abstract data types at that time. It has a mathematical foundation in universal algebra. Definition Formally, an ADT is analogous to an algebraic structure in mathematics, consisting of a domain, a collection of operations, and a set of constraints the operations must satisfy. The domain is often defined implicitly, for example the free object over the set of ADT operations. The interface of the ADT typically refers only to the domain and operations, and perhaps some of the constraints on the operations, such as pre-conditions and post-conditions; but not to other constraints, such as relations between the operations, which are considered behavior. There are two main styles of formal specifications for behavior, axiomatic semantics and operational semantics. Despite not being part of the interface, the constraints are still important to the definition of the ADT; for example, a stack and a queue have similar add element/remove element interfaces, but it is the constraints that distinguish last-in-first-out from first-in-first-out behavior. The constraints do not consist only of equations such as fetch(store(S, v)) = v but also logical formulas. Axiomatic semantics In the spirit of functional programming, each state of an abstract data structure is a separate entity or value. In this view, each operation is modelled as a mathematical function with no side effects. 
Operations that modify the ADT are modeled as functions that take the old state as an argument and return the new state as part of the result. The order in which operations are evaluated is immaterial, and the same operation applied to the same arguments (including the same input states) will always return the same results (and output states). The constraints are specified as axioms or algebraic laws that the operations must satisfy. Operational semantics In the spirit of imperative programming, an abstract data structure is conceived as an entity that is mutable—meaning that there is a notion of time and the ADT may be in different states at different times. Operations then change the state of the ADT over time; therefore, the order in which operations are evaluated is important, and the same operation on the same entities may have different effects if executed at different times. This is analogous to the instructions of a computer or the commands and procedures of an imperative language. To underscore this view, it is customary to say that the operations are executed or applied, rather than evaluated, similar to the imperative style often used when describing abstract algorithms. The constraints are typically specified in prose. Auxiliary operations Presentations of ADTs are often limited in scope to only key operations. More thorough presentations often specify auxiliary operations on ADTs, such as: create(), that yields a new instance of the ADT; equal(s, t), that tests whether two instances' states are equivalent in some sense; hash(s), that computes some standard hash function from the instance's state; print(s) or show(s), that produces a human-readable representation of the instance's state. These names are illustrative and may vary between authors. In imperative-style ADT definitions, one often finds also: initialize(s), that prepares a newly created instance s for further operations, or resets it to some "initial state"; copy(s, t), that puts instance s in a state equivalent to that of t; clone(t), that performs s ← create(), copy(s, t), and returns s; free(s) or destroy(s), that reclaims the memory and other resources used by s. The free operation is not normally relevant or meaningful, since ADTs are theoretical entities that do not "use memory". However, it may be necessary when one needs to analyze the storage used by an algorithm that uses the ADT. In that case, one needs additional axioms that specify how much memory each ADT instance uses, as a function of its state, and how much of it is returned to the pool by free. Restricted types The definition of an ADT often restricts the stored value(s) for its instances to members of a specific set X called the range of those variables. For example, an abstract variable may be constrained to only store integers. As in programming languages, such restrictions may simplify the description and analysis of algorithms, and improve their readability. Aliasing In the operational style, it is often unclear how multiple instances are handled and if modifying one instance may affect others. A common style of defining ADTs writes the operations as if only one instance exists during the execution of the algorithm, and all operations are applied to that instance. For example, a stack may have operations push(x) and pop(), that operate on the only existing stack. ADT definitions in this style can be easily rewritten to admit multiple coexisting instances of the ADT, by adding an explicit instance parameter (like S in the stack example below) to every operation that uses or modifies the implicit instance. 
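As a minimal illustration of this rewriting, the following C sketch shows the same push/pop operations written first against a single implicit instance and then with an explicit instance parameter. The names and the array-based representation are assumptions made purely for illustration, not part of any standard interface.

#include <assert.h>

/* Single-instance style: the operations act on one implicit stack,
   modelled here as file-scope state. */
static int the_items[100];
static int the_count = 0;
static void push(int x) { the_items[the_count++] = x; }
static int  pop(void)   { return the_items[--the_count]; }

/* Multiple-instance style: the same operations with an explicit
   instance parameter S. */
typedef struct { int items[100]; int count; } Stack;
static void push_s(Stack *S, int x) { S->items[S->count++] = x; }
static int  pop_s(Stack *S)         { return S->items[--S->count]; }

int main(void) {
    push(7);
    assert(pop() == 7);              /* acts on "the" stack */

    Stack a = { .count = 0 }, b = { .count = 0 };
    push_s(&a, 1);
    push_s(&b, 2);                   /* operating on b does not affect a */
    assert(pop_s(&a) == 1);
    assert(pop_s(&b) == 2);
    return 0;
}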
Some ADTs cannot be meaningfully defined without allowing multiple instances, for example when a single operation takes two distinct instances of the ADT as parameters, such as a union operation on sets or a concatenation operation on lists. The multiple instance style is sometimes combined with an aliasing axiom, namely that the result of create() is distinct from any instance already in use by the algorithm. Implementations of ADTs may still reuse memory and allow implementations of create() to yield a previously created instance; however, even defining what it means for such an instance to be "reused" is difficult in the ADT formalism. More generally, this axiom may be strengthened to exclude also partial aliasing with other instances, so that composite ADTs (such as trees or records) and reference-style ADTs (such as pointers) may be assumed to be completely disjoint. For example, when extending the definition of an abstract variable to include abstract records, operations upon a field F of a record variable R clearly involve F, which is distinct from, but also a part of, R. A partial aliasing axiom would state that changing a field of one record variable does not affect any other records. Complexity analysis Some authors also include the computational complexity ("cost") of each operation, both in terms of time (for computing operations) and space (for representing values), to aid in analysis of algorithms. For example, one may specify that each operation takes the same time and each value takes the same space regardless of the state of the ADT, or that there is a "size" of the ADT and the operations are linear, quadratic, etc. in the size of the ADT. Alexander Stepanov, designer of the C++ Standard Template Library, included complexity guarantees in the STL specification. Other authors disagree, arguing that a stack ADT is the same whether it is implemented with a linked list or an array, despite the difference in operation costs, and that an ADT specification should be independent of implementation. Examples Abstract variable An abstract variable may be regarded as the simplest non-trivial ADT, with the semantics of an imperative variable. It admits two operations, fetch and store. Operational definitions are often written in terms of abstract variables. In the axiomatic semantics, letting Var be the type of the abstract variable and X be the type of its contents, fetch is a function Var → X and store is a function of type Var × X → Var. The main constraint is that fetch always returns the value x used in the most recent store operation on the same variable V, i.e. fetch(store(V, x)) = x. We may also require that store overwrites the value fully, store(store(V, x), y) = store(V, y). In the operational semantics, fetch(V) is a procedure that returns the current value in the location V, and store(V, x) is a procedure (returning no value) that stores the value x in the location V. The constraints are described informally as requiring that reads be consistent with writes. As in many programming languages, the operation store(V, x) is often written V ← x (or some similar notation), and fetch(V) is implied whenever a variable V is used in a context where a value is required. Thus, for example, V ← V + 1 is commonly understood to be a shorthand for store(V, fetch(V) + 1). In this definition, it is implicitly assumed that names are always distinct: storing a value into a variable U has no effect on the state of a distinct variable V. To make this assumption explicit, one could add the constraint that: if U and V are distinct variables, the sequence { store(U, x); store(V, y) } is equivalent to { store(V, y); store(U, x) }. 
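A minimal operational-style sketch of the abstract variable in C is given below. The names store and fetch follow the text above, while the concrete representation (a plain integer location) is an assumption made only for illustration; the asserts spell out the constraints just described.

#include <assert.h>

typedef int Location;    /* an assumed concrete stand-in for an abstract variable */

static void store(Location *V, int x) { *V = x; }   /* put x in location V */
static int  fetch(const Location *V)  { return *V; } /* read the current value of V */

int main(void) {
    Location U = 0, V = 0;

    store(&V, 42);                 /* fetch returns the most recently stored value */
    assert(fetch(&V) == 42);

    store(&V, 7);                  /* a later store overwrites the value fully */
    assert(fetch(&V) == 7);

    store(&U, 1);                  /* distinct variables do not interfere, so the */
    store(&V, 2);                  /* order of stores to U and V is immaterial     */
    assert(fetch(&U) == 1 && fetch(&V) == 2);

    store(&V, fetch(&V) + 1);      /* V <- V + 1 as shorthand for store(V, fetch(V) + 1) */
    assert(fetch(&V) == 3);
    return 0;
}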
This definition does not say anything about the result of evaluating fetch(V) when V is un-initialized, that is, before performing any store operation on V. Fetching before storing can be disallowed, defined to have a certain result, or left unspecified. There are some algorithms whose efficiency depends on the assumption that such a fetch is legal, and returns some arbitrary value in the variable's range. Abstract stack An abstract stack is a last-in-first-out structure. It is generally defined by three key operations: push, that inserts a data item onto the stack; pop, that removes a data item from it; and top or peek, that accesses a data item on top of the stack without removal. A complete abstract stack definition includes also a Boolean-valued function empty(S) and a create() operation that returns an initial stack instance. In the axiomatic semantics, letting St be the type of stack states and X be the type of values contained in the stack, these could have the types push: St × X → St, pop: St → St, top: St → X, empty: St → Boolean, and create: () → St. In the axiomatic semantics, creating the initial stack is a "trivial" operation, and always returns the same distinguished state. Therefore, it is often designated by a special symbol like Λ or "()". The empty predicate can then be written simply as s = Λ (and its negation as s ≠ Λ). The constraints are then top(push(s, x)) = x, pop(push(s, x)) = s, empty(create()) = T (a newly created stack is empty), and empty(push(s, x)) = F (pushing something into a stack makes it non-empty). These axioms do not define the effect of top(s) or pop(s), unless s is a stack state returned by a push. Since push leaves the stack non-empty, those two operations can be defined to be invalid when s = Λ. From these axioms (and the lack of side effects), it can be deduced that push(Λ, x) ≠ Λ. Also, push(s, x) = push(t, y) if and only if x = y and s = t. As in some other branches of mathematics, it is customary to assume also that the stack states are only those whose existence can be proved from the axioms in a finite number of steps. In this case, it means that every stack is a finite sequence of values, that becomes the empty stack (Λ) after a finite number of pops. By themselves, the axioms above do not exclude the existence of infinite stacks (that can be popped forever, each time yielding a different state) or circular stacks (that return to the same state after a finite number of pops). In particular, they do not exclude states s such that pop(s) = s or push(s, x) = s for some x. However, since one cannot obtain such stack states from the initial stack state with the given operations, they are assumed "not to exist". In the operational definition of an abstract stack, push(S, x) returns nothing and pop(S) yields the value as the result but not the new state of the stack. There is then the constraint that, for any value x and any abstract variable V, the sequence of operations { push(S, x); V ← pop(S) } is equivalent to V ← x. Since the assignment V ← x, by definition, cannot change the state of S, this condition implies that V ← pop(S) restores S to the state it had before the push(S, x). From this condition and from the properties of abstract variables, it follows, for example, that the sequence: { push(S, x); push(S, y); U ← pop(S); push(S, z); V ← pop(S); W ← pop(S) } where x, y, and z are any values, and U, V, W are pairwise distinct variables, is equivalent to: { U ← y; V ← z; W ← x } Unlike the axiomatic semantics, the operational semantics can suffer from aliasing. Here it is implicitly assumed that operations on a stack instance do not modify the state of any other ADT instance, including other stacks; that is: For any values x, y, and any distinct stacks S and T, the sequence { push(S, x); push(T, y) } is equivalent to { push(T, y); push(S, x) }. 
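The axiomatic (functional) reading of the stack can be sketched in C by treating every stack state as an immutable value, so that push and pop return states rather than modifying them. The names follow the text above; the persistent linked-list representation is an assumption made only for illustration, and the asserts restate the axioms just listed.

#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

typedef struct Node { int head; const struct Node *tail; } Node;
typedef const Node *StackState;              /* NULL plays the role of Λ */

static StackState create(void)  { return NULL; }
static bool       empty(StackState s) { return s == NULL; }

static StackState push(StackState s, int x) {
    Node *n = malloc(sizeof *n);             /* error handling omitted in this sketch */
    n->head = x;
    n->tail = s;
    return n;                                /* the old state s is left untouched */
}
static int        top(StackState s) { return s->head; }   /* undefined for Λ */
static StackState pop(StackState s) { return s->tail; }   /* undefined for Λ */

int main(void) {
    StackState lambda = create();
    assert(empty(lambda));                   /* empty(create()) = T        */

    StackState s = push(lambda, 5);
    assert(!empty(s));                       /* empty(push(s, x)) = F      */
    assert(top(s) == 5);                     /* top(push(s, x)) = x        */
    assert(pop(s) == lambda);                /* pop(push(s, x)) = s        */

    /* No side effects: applying push again does not disturb s. */
    assert(top(push(s, 9)) == 9 && top(s) == 5);
    return 0;
}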
Boom hierarchy A more involved example is the Boom hierarchy of the binary tree, list, bag and set abstract data types. All these data types can be declared by three operations: null, which constructs the empty container; single, which constructs a container from a single element; and append, which combines two containers of the same type. The complete specification for the four data types can then be given by successively adding laws over these operations: the binary tree requires none; for lists, append must be associative with null as its neutral element; for bags, append must additionally be commutative; and for sets, it must additionally be idempotent. Access to the data can be specified by pattern-matching over the three operations, e.g. a member function for these containers (a C sketch of such a function is given at the end of this passage). Care must be taken to ensure that the function is invariant under the relevant rules for the data type. Within each of the equivalence classes implied by the chosen subset of equations, it has to yield the same result for all of its members. Common ADTs Some common ADTs, which have proved useful in a great variety of applications, are Collection, Container, List, String, Set, Multiset, Map, Multimap, Graph, Tree, Stack, Queue, Priority queue, Double-ended queue, and Double-ended priority queue. Each of these ADTs may be defined in many ways and variants, not necessarily equivalent. For example, an abstract stack may or may not have a count operation that tells how many items have been pushed and not yet popped. This choice makes a difference not only for its clients but also for the implementation. Abstract graphical data type An extension of ADTs for computer graphics was proposed in 1979: an abstract graphical data type (AGDT). It was introduced by Nadia Magnenat Thalmann and Daniel Thalmann. AGDTs provide the advantages of ADTs with facilities to build graphical objects in a structured way. Implementation Abstract data types are theoretical entities, used (among other things) to simplify the description of abstract algorithms, to classify and evaluate data structures, and to formally describe the type systems of programming languages. However, an ADT may be implemented. This means each ADT instance or state is represented by some concrete data type or data structure, and for each abstract operation there is a corresponding procedure or function, and these implemented procedures satisfy the ADT's specifications and axioms up to some standard. In practice, the implementation is not perfect, and users must be aware of issues due to limitations of the representation and implemented procedures. For example, integers may be specified as an ADT, defined by the distinguished values 0 and 1, the operations of addition, subtraction, multiplication, division (with care for division by zero), comparison, etc., behaving according to the familiar mathematical axioms in abstract algebra such as associativity, commutativity, and so on. However, in a computer, integers are most commonly represented as fixed-width 32-bit or 64-bit binary numbers. Users must be aware of issues with this representation, such as arithmetic overflow, where the ADT specifies a valid result but the representation is unable to accommodate this value. Nonetheless, for many purposes, the user can ignore these infidelities and simply use the implementation as if it were the abstract data type. Usually, there are many ways to implement the same ADT, using several different concrete data structures. Thus, for example, an abstract stack can be implemented by a linked list or by an array. 
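The member function mentioned above for the Boom hierarchy can be sketched in C using a tagged union in place of pattern matching. The constructor names (nil, single, append) and the representation are assumptions made only for illustration; the point is that member is defined by cases over the three constructors, so its result does not depend on how the container was assembled.

#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

typedef enum { NIL, SINGLE, APPEND } Tag;
typedef struct Container {
    Tag tag;
    int value;                                /* used when tag == SINGLE */
    const struct Container *left, *right;     /* used when tag == APPEND */
} Container;

static const Container *nil(void) {
    Container *c = malloc(sizeof *c);
    c->tag = NIL;
    return c;
}
static const Container *single(int x) {
    Container *c = malloc(sizeof *c);
    c->tag = SINGLE;
    c->value = x;
    return c;
}
static const Container *append(const Container *a, const Container *b) {
    Container *c = malloc(sizeof *c);
    c->tag = APPEND;
    c->left = a;
    c->right = b;
    return c;
}

/* member recurses over the three constructors, mirroring pattern matching. */
static bool member(int x, const Container *c) {
    switch (c->tag) {
    case NIL:    return false;
    case SINGLE: return c->value == x;
    case APPEND: return member(x, c->left) || member(x, c->right);
    }
    return false;
}

int main(void) {
    /* Two different ways of building {1, 2, 3} give the same membership answers,
       so member is invariant under the list/bag/set laws described above. */
    const Container *a = append(single(1), append(single(2), single(3)));
    const Container *b = append(append(single(1), single(2)), single(3));
    assert(member(2, a) && member(2, b));
    assert(!member(4, a) && !member(4, b));
    return 0;
}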
Different implementations of the ADT, having all the same properties and abilities, can be considered semantically equivalent and may be used somewhat interchangeably in code that uses the ADT. This provides a form of abstraction or encapsulation, and gives a great deal of flexibility when using ADT objects in different situations. For example, different implementations of the ADT may be more efficient in different situations; it is possible to use each in the situation where they are preferable, thus increasing overall efficiency. Code that uses an ADT implementation according to its interface will continue working even if the implementation of the ADT is changed. In order to prevent clients from depending on the implementation, an ADT is often packaged as an opaque data type or handle of some sort, in one or more modules, whose interface contains only the signature (number and types of the parameters and results) of the operations. The implementation of the module—namely, the bodies of the procedures and the concrete data structure used—can then be hidden from most clients of the module. This makes it possible to change the implementation without affecting the clients. If the implementation is exposed, it is known instead as a transparent data type. Modern object-oriented languages, such as C++ and Java, support a form of abstract data types. When a class is used as a type, it is an abstract type that refers to a hidden representation. In this model, an ADT is typically implemented as a class, and each instance of the ADT is usually an object of that class. The module's interface typically declares the constructors as ordinary procedures, and most of the other ADT operations as methods of that class. Many modern programming languages, such as C++ and Java, come with standard libraries that implement numerous ADTs in this style. However, such an approach does not easily encapsulate multiple representational variants found in an ADT. It also can undermine the extensibility of object-oriented programs. In a pure object-oriented program that uses interfaces as types, types refer to behaviours, not representations. The specification of some programming languages is intentionally vague about the representation of certain built-in data types, defining only the operations that can be done on them. Therefore, those types can be viewed as "built-in ADTs". Examples are the arrays in many scripting languages, such as Awk, Lua, and Perl, which can be regarded as an implementation of the abstract list. In a formal specification language, ADTs may be defined axiomatically, and the language then allows manipulating values of these ADTs, thus providing a straightforward and immediate implementation. The OBJ family of programming languages for instance allows defining equations for specification and rewriting to run them. Such automatic implementations are usually not as efficient as dedicated implementations, however. Example: implementation of the abstract stack As an example, here is an implementation of the abstract stack above in the C programming language. 
Imperative-style interface An imperative-style interface might be:

#include <stdbool.h>                          // for bool
typedef struct stack_Rep stack_Rep;           // type: stack instance representation (opaque record)
typedef stack_Rep* stack_T;                   // type: handle to a stack instance (opaque pointer)
typedef void* stack_Item;                     // type: value stored in stack instance (arbitrary address)
stack_T stack_create(void);                   // creates a new empty stack instance
void stack_push(stack_T s, stack_Item x);     // adds an item at the top of the stack
stack_Item stack_pop(stack_T s);              // removes the top item from the stack and returns it
bool stack_empty(stack_T s);                  // checks whether stack is empty

This interface could be used in the following manner:

#include <stack.h>              // includes the stack interface
stack_T s = stack_create();     // creates a new empty stack instance
int x = 17;
stack_push(s, &x);              // adds the address of x at the top of the stack
void* y = stack_pop(s);         // removes the address of x from the stack and returns it
if (stack_empty(s)) { }         // does something if stack is empty

This interface can be implemented in many ways. The implementation may be arbitrarily inefficient, since the formal definition of the ADT, above, does not specify how much space the stack may use, nor how long each operation should take. It also does not specify whether the stack state s continues to exist after a call x ← pop(s). In practice the formal definition should specify that the space is proportional to the number of items pushed and not yet popped; and that every one of the operations above must finish in a constant amount of time, independently of that number. To comply with these additional specifications, the implementation could use a linked list, or an array (with dynamic resizing) together with two integers (an item count and the array size). Functional-style interface Functional-style ADT definitions are more appropriate for functional programming languages, and vice versa. However, one can provide a functional-style interface even in an imperative language like C. For example:

typedef struct stack_Rep stack_Rep;           // type: stack state representation (opaque record)
typedef stack_Rep* stack_T;                   // type: handle to a stack state (opaque pointer)
typedef void* stack_Item;                     // type: value of a stack state (arbitrary address)
stack_T stack_empty(void);                    // returns the empty stack state
stack_T stack_push(stack_T s, stack_Item x);  // adds an item at the top of the stack state and returns the resulting stack state
stack_T stack_pop(stack_T s);                 // removes the top item from the stack state and returns the resulting stack state
stack_Item stack_top(stack_T s);              // returns the top item of the stack state
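For comparison with the imperative usage snippet above, the functional-style interface might be used as follows. This is a sketch only, under the same assumptions as the interface itself; each operation yields a new state, and the handle is simply rebound to it.

#include <stack.h>                  // the hypothetical functional-style interface above
int x = 17, y = 42;
stack_T s = stack_empty();          // the empty stack state
s = stack_push(s, &x);              // each push yields a new state
s = stack_push(s, &y);
void* t = stack_top(s);             // yields &y, the most recently pushed item
s = stack_pop(s);                   // the state with &y removed; stack_top(s) now yields &x

Earlier states could also be kept around and reused, since no operation modifies a state in place.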
Mathematics
Data structures and types
null
2362
https://en.wikipedia.org/wiki/Antibody
Antibody
An antibody (Ab) or immunoglobulin (Ig) is a large, Y-shaped protein belonging to the immunoglobulin superfamily which is used by the immune system to identify and neutralize antigens such as bacteria and viruses, including those that cause disease. Antibodies can recognize virtually any size antigen, able to perceive diverse chemical compositions. Each antibody recognizes one or more specific antigens. Antigen literally means "antibody generator", as it is the presence of an antigen that drives the formation of an antigen-specific antibody. Each tip of the "Y" of an antibody contains a paratope that specifically binds to one particular epitope on an antigen, allowing the two molecules to bind together with precision. Using this mechanism, antibodies can effectively "tag" a microbe or an infected cell for attack by other parts of the immune system, or can neutralize it directly (for example, by blocking a part of a virus that is essential for its invasion). More narrowly, an antibody (Ab) can refer to the free (secreted) form of these proteins, as opposed to the membrane-bound form found in a B cell receptor. The term immunoglobulin can then refer to both forms. Since they are, broadly speaking, the same protein, the terms are often treated as synonymous. To allow the immune system to recognize millions of different antigens, the antigen-binding sites at both tips of the antibody come in an equally wide variety. The rest of the antibody structure is much less variable; in humans, antibodies occur in five classes, sometimes called isotypes: IgA, IgD, IgE, IgG, and IgM. Human IgG and IgA antibodies are also divided into discrete subclasses (IgG1, IgG2, IgG3, IgG4; IgA1 and IgA2). The class refers to the functions triggered by the antibody (also known as effector functions), in addition to some other structural features. Antibodies from different classes also differ in where they are released in the body and at what stage of an immune response. Between species, while classes and subclasses of antibodies may be shared (at least in name), their functions and distribution throughout the body may be different. For example, mouse IgG1 is closer to human IgG2 than human IgG1 in terms of its function. The term humoral immunity is often treated as synonymous with the antibody response, describing the function of the immune system that exists in the body's humors (fluids) in the form of soluble proteins, as distinct from cell-mediated immunity, which generally describes the responses of T cells (especially cytotoxic T cells). In general, antibodies are considered part of the adaptive immune system, though this classification can become complicated. For example, natural IgM, which are made by B-1 lineage cells that have properties more similar to innate immune cells than adaptive, refers to IgM antibodies made independently of an immune response that demonstrate polyreactivity- they recognize multiple distinct (unrelated) antigens. These can work with the complement system in the earliest phases of an immune response to help facilitate clearance of the offending antigen and delivery of the resulting immune complexes to the lymph nodes or spleen for initiation of an immune response. Hence in this capacity, the function of antibodies is more akin to that of innate immunity than adaptive. 
Nonetheless, in general antibodies are regarded as part of the adaptive immune system because they demonstrate exceptional specificity (with some exceptions), are produced through genetic rearrangements (rather than being encoded directly in the germline), and are a manifestation of immunological memory. In the course of an immune response, B cells can progressively differentiate into antibody-secreting cells or into memory B cells. Antibody-secreting cells comprise plasmablasts and plasma cells, which differ mainly in the degree to which they secrete antibody, their lifespan, metabolic adaptations, and surface markers. Plasmablasts are rapidly proliferating, short-lived cells produced in the early phases of the immune response (classically described as arising extrafollicularly rather than from the germinal center) which have the potential to differentiate further into plasma cells. Occasionally plasmablasts are described as short-lived plasma cells, but formally this is incorrect. Plasma cells, in contrast, do not divide (they are terminally differentiated), and rely on survival niches comprising specific cell types and cytokines to persist. Plasma cells will secrete huge quantities of antibody regardless of whether or not their cognate antigen is present, ensuring that antibody levels to the antigen in question do not fall to zero, provided the plasma cell stays alive. The rate of antibody secretion, however, can be regulated, for example, by the presence of adjuvant molecules that stimulate the immune response, such as TLR ligands. Long-lived plasma cells can live for potentially the entire lifetime of the organism. Classically, the survival niches that house long-lived plasma cells reside in the bone marrow, though it cannot be assumed that any given plasma cell in the bone marrow will be long-lived. However, other work indicates that survival niches can readily be established within the mucosal tissues, though the classes of antibodies involved show a different hierarchy from those in the bone marrow. B cells can also differentiate into memory B cells, which can persist for decades similarly to long-lived plasma cells. These cells can be rapidly recalled in a secondary immune response, undergoing class switching, affinity maturation, and differentiating into antibody-secreting cells. Antibodies are central to the immune protection elicited by most vaccines and infections (although other components of the immune system certainly participate and for some diseases are considerably more important than antibodies in generating an immune response, e.g. herpes zoster). Durable protection from infections caused by a given microbe – that is, the ability of the microbe to enter the body and begin to replicate (not necessarily to cause disease) – depends on sustained production of large quantities of antibodies, meaning that effective vaccines ideally elicit persistent high levels of antibody, which relies on long-lived plasma cells. At the same time, many microbes of medical importance have the ability to mutate to escape antibodies elicited by prior infections, and long-lived plasma cells cannot undergo affinity maturation or class switching. This is compensated for through memory B cells: novel variants of a microbe that still retain structural features of previously encountered antigens can elicit memory B cell responses that adapt to those changes. 
It has been suggested that long-lived plasma cells secrete B cell receptors with higher affinity than those on the surfaces of memory B cells, but findings are not entirely consistent on this point. Structure Antibodies are heavy (~150 kDa) proteins of about 10 nm in size, arranged in three globular regions that roughly form a Y shape. In humans and most other mammals, an antibody unit consists of four polypeptide chains; two identical heavy chains and two identical light chains connected by disulfide bonds. Each chain is a series of domains: somewhat similar sequences of about 110 amino acids each. These domains are usually represented in simplified schematics as rectangles. Light chains consist of one variable domain VL and one constant domain CL, while heavy chains contain one variable domain VH and three to four constant domains CH1, CH2, ... Structurally an antibody is also partitioned into two antigen-binding fragments (Fab), containing one VL, VH, CL, and CH1 domain each, as well as the crystallisable fragment (Fc), forming the trunk of the Y shape. In between them is a hinge region of the heavy chains, whose flexibility allows antibodies to bind to pairs of epitopes at various distances, to form complexes (dimers, trimers, etc.), and to bind effector molecules more easily. In an electrophoresis test of blood proteins, antibodies mostly migrate to the last, gamma globulin fraction. Conversely, most gamma-globulins are antibodies, which is why the two terms were historically used as synonyms, as were the symbols Ig and γ. This variant terminology fell out of use due to the correspondence being inexact and due to confusion with γ (gamma) heavy chains which characterize the IgG class of antibodies. Antigen-binding site The variable domains can also be referred to as the FV region. It is the subregion of Fab that binds to an antigen. More specifically, each variable domain contains three hypervariable regions – the amino acids seen there vary the most from antibody to antibody. When the protein folds, these regions give rise to three loops of β-strands, localized near one another on the surface of the antibody. These loops are referred to as the complementarity-determining regions (CDRs), since their shape complements that of an antigen. Three CDRs from each of the heavy and light chains together form an antibody-binding site whose shape can be anything from a pocket to which a smaller antigen binds, to a larger surface, to a protrusion that sticks out into a groove in an antigen. Typically though, only a few residues contribute to most of the binding energy. The existence of two identical antibody-binding sites allows antibody molecules to bind strongly to multivalent antigen (repeating sites such as polysaccharides in bacterial cell walls, or other sites at some distance apart), as well as to form antibody complexes and larger antigen-antibody complexes. The structures of CDRs have been clustered and classified by Chothia et al. and more recently by North et al. and Nikoloudis et al. However, describing an antibody's binding site using only one single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities. In the framework of the immune network theory, CDRs are also called idiotypes. 
According to immune network theory, the adaptive immune system is regulated by interactions between idiotypes. Fc region The Fc region (the trunk of the Y shape) is composed of constant domains from the heavy chains. Its role is in modulating immune cell activity: it is where effector molecules bind to, triggering various effects after the antibody Fab region binds to an antigen. Effector cells (such as macrophages or natural killer cells) bind via their Fc receptors (FcR) to the Fc region of an antibody, while the complement system is activated by binding the C1q protein complex. IgG or IgM can bind to C1q, but IgA cannot, therefore IgA does not activate the classical complement pathway. Another role of the Fc region is to selectively distribute different antibody classes across the body. In particular, the neonatal Fc receptor (FcRn) binds to the Fc region of IgG antibodies to transport it across the placenta, from the mother to the fetus. In addition to this, binding to FcRn endows IgG with an exceptionally long half-life relative to other plasma proteins of 3-4 weeks. IgG3 in most cases (depending on allotype) has mutations at the FcRn binding site which lower affinity for FcRn, which are thought to have evolved to limit the highly inflammatory effects of this subclass. Antibodies are glycoproteins, that is, they have carbohydrates (glycans) added to conserved amino acid residues. These conserved glycosylation sites occur in the Fc region and influence interactions with effector molecules. Protein structure The N-terminus of each chain is situated at the tip. Each immunoglobulin domain has a similar structure, characteristic of all the members of the immunoglobulin superfamily: it is composed of between 7 (for constant domains) and 9 (for variable domains) β-strands, forming two beta sheets in a Greek key motif. The sheets create a "sandwich" shape, the immunoglobulin fold, held together by a disulfide bond. Antibody complexes Secreted antibodies can occur as a single Y-shaped unit, a monomer. However, some antibody classes also form dimers with two Ig units (as with IgA), tetramers with four Ig units (like teleost fish IgM), or pentamers with five Ig units (like shark IgW or mammalian IgM, which occasionally forms hexamers as well, with six units). IgG can also form hexamers, though no J chain is required. IgA tetramers and pentamers have also been reported. Antibodies also form complexes by binding to antigen: this is called an antigen-antibody complex or immune complex. Small antigens can cross-link two antibodies, also leading to the formation of antibody dimers, trimers, tetramers, etc. Multivalent antigens (e.g., cells with multiple epitopes) can form larger complexes with antibodies. An extreme example is the clumping, or agglutination, of red blood cells with antibodies in blood typing to determine blood groups: the large clumps become insoluble, leading to visually apparent precipitation. B cell receptors The membrane-bound form of an antibody may be called a surface immunoglobulin (sIg) or a membrane immunoglobulin (mIg). It is part of the B cell receptor (BCR), which allows a B cell to detect when a specific antigen is present in the body and triggers B cell activation. The BCR is composed of surface-bound IgD or IgM antibodies and associated Ig-α and Ig-β heterodimers, which are capable of signal transduction. A typical human B cell will have 50,000 to 100,000 antibodies bound to its surface. 
Upon antigen binding, they cluster in large patches, which can exceed 1 micrometer in diameter, on lipid rafts that isolate the BCRs from most other cell signaling receptors. These patches may improve the efficiency of the cellular immune response. In humans, the cell surface is bare around the B cell receptors for several hundred nanometers, which further isolates the BCRs from competing influences. Classes Antibodies can come in different varieties known as isotypes or classes. In humans, there are five antibody classes known as IgA, IgD, IgE, IgG, and IgM, which are further subdivided into subclasses such as IgA1 and IgA2. The prefix "Ig" stands for immunoglobulin, while the suffix denotes the type of heavy chain the antibody contains: the heavy chain types α (alpha), γ (gamma), δ (delta), ε (epsilon), and μ (mu) give rise to IgA, IgG, IgD, IgE, and IgM, respectively. The distinctive features of each class are determined by the part of the heavy chain within the hinge and Fc region. The classes differ in their biological properties, functional locations and ability to deal with different antigens, as depicted in the table. For example, IgE antibodies are responsible for an allergic response consisting of histamine release from mast cells, often the sole contributor to asthma (though other pathways exist, as do conditions with symptoms very similar to, but technically distinct from, asthma). The antibody's variable region binds to the allergenic antigen, for example house dust mite particles, while its Fc region (in the ε heavy chains) binds to Fcε receptors on a mast cell, triggering its degranulation: the release of molecules stored in its granules. The antibody isotype of a B cell changes during cell development and activation. Immature B cells, which have never been exposed to an antigen, express only the IgM isotype in a cell-surface-bound form. The B lymphocyte, in this ready-to-respond form, is known as a "naive B lymphocyte." The naive B lymphocyte expresses both surface IgM and IgD. The co-expression of both of these immunoglobulin isotypes renders the B cell ready to respond to antigen. B cell activation follows engagement of the cell-bound antibody molecule with an antigen, causing the cell to divide and differentiate into an antibody-producing cell called a plasma cell. In this activated form, the B cell starts to produce antibody in a secreted form rather than a membrane-bound form. Some daughter cells of the activated B cells undergo isotype switching, a mechanism that causes the production of antibodies to change from IgM or IgD to the other antibody isotypes, IgE, IgA, or IgG, that have defined roles in the immune system. Light chain types In mammals, there are two types of immunoglobulin light chain, which are called lambda (λ) and kappa (κ). However, there is no known functional difference between them, and both can occur with any of the five major types of heavy chains. Each antibody contains two identical light chains: both κ or both λ. Proportions of κ and λ types vary by species and can be used to detect abnormal proliferation of B cell clones. Other types of light chains, such as the iota (ι) chain, are found in other vertebrates like sharks (Chondrichthyes) and bony fishes (Teleostei). In non-mammalian animals In most placental mammals, the structure of antibodies is generally the same. Jawed fish appear to be the most primitive animals that are able to make antibodies similar to those of mammals, although many features of their adaptive immunity appeared somewhat earlier. 
Cartilaginous fish (such as sharks) produce heavy-chain-only antibodies (i.e., lacking light chains) which moreover feature longer chain pentamers (with five constant units per molecule). Camelids (such as camels, llamas, alpacas) are also notable for producing heavy-chain-only antibodies. Antibody–antigen interactions The antibody's paratope interacts with the antigen's epitope. An antigen usually contains different epitopes along its surface arranged discontinuously, and dominant epitopes on a given antigen are called determinants. Antibody and antigen interact by spatial complementarity (lock and key). The molecular forces involved in the Fab-epitope interaction are weak and non-specific – for example electrostatic forces, hydrogen bonds, hydrophobic interactions, and van der Waals forces. This means binding between antibody and antigen is reversible, and the antibody's affinity towards an antigen is relative rather than absolute. Relatively weak binding also means it is possible for an antibody to cross-react with different antigens of different relative affinities. Function The main categories of antibody action include the following: Neutralisation, in which neutralizing antibodies block parts of the surface of a bacterial cell or virion to render its attack ineffective Agglutination, in which antibodies "glue together" foreign cells into clumps that are attractive targets for phagocytosis Precipitation, in which antibodies "glue together" serum-soluble antigens, forcing them to precipitate out of solution in clumps that are attractive targets for phagocytosis Complement activation (fixation), in which antibodies that are latched onto a foreign cell encourage complement to attack it with a membrane attack complex, which leads to the following: Lysis of the foreign cell Encouragement of inflammation by chemotactically attracting inflammatory cells More indirectly, an antibody can signal immune cells to present antibody fragments to T cells, or downregulate other immune cells to avoid autoimmunity. Activated B cells differentiate into either antibody-producing cells called plasma cells that secrete soluble antibody or memory cells that survive in the body for years afterward in order to allow the immune system to remember an antigen and respond faster upon future exposures. At the prenatal and neonatal stages of life, the presence of antibodies is provided by passive immunization from the mother. Early endogenous antibody production varies for different kinds of antibodies, and usually appear within the first years of life. Since antibodies exist freely in the bloodstream, they are said to be part of the humoral immune system. Circulating antibodies are produced by clonal B cells that specifically respond to only one antigen (an example is a virus capsid protein fragment). Antibodies contribute to immunity in three ways: They prevent pathogens from entering or damaging cells by binding to them; they stimulate removal of pathogens by macrophages and other cells by coating the pathogen; and they trigger destruction of pathogens by stimulating other immune responses such as the complement pathway. Antibodies will also trigger vasoactive amine degranulation to contribute to immunity against certain types of antigens (helminths, allergens). Activation of complement Antibodies that bind to surface antigens (for example, on bacteria) will attract the first component of the complement cascade with their Fc region and initiate activation of the "classical" complement system. 
This results in the killing of bacteria in two ways. First, the binding of the antibody and complement molecules marks the microbe for ingestion by phagocytes in a process called opsonization; these phagocytes are attracted by certain complement molecules generated in the complement cascade. Second, some complement system components form a membrane attack complex to assist antibodies to kill the bacterium directly (bacteriolysis). Activation of effector cells To combat pathogens that replicate outside cells, antibodies bind to pathogens to link them together, causing them to agglutinate. Since an antibody has at least two paratopes, it can bind more than one antigen by binding identical epitopes carried on the surfaces of these antigens. By coating the pathogen, antibodies stimulate effector functions against the pathogen in cells that recognize their Fc region. Those cells that recognize coated pathogens have Fc receptors, which, as the name suggests, interact with the Fc region of IgA, IgG, and IgE antibodies. The engagement of a particular antibody with the Fc receptor on a particular cell triggers an effector function of that cell; phagocytes will phagocytose, mast cells and neutrophils will degranulate, natural killer cells will release cytokines and cytotoxic molecules; that will ultimately result in destruction of the invading microbe. The activation of natural killer cells by antibodies initiates a cytotoxic mechanism known as antibody-dependent cell-mediated cytotoxicity (ADCC) – this process may explain the efficacy of monoclonal antibodies used in biological therapies against cancer. The Fc receptors are isotype-specific, which gives greater flexibility to the immune system, invoking only the appropriate immune mechanisms for distinct pathogens. Natural antibodies Humans and higher primates also produce "natural antibodies" that are present in serum before viral infection. Natural antibodies have been defined as antibodies that are produced without any previous infection, vaccination, other foreign antigen exposure or passive immunization. These antibodies can activate the classical complement pathway leading to lysis of enveloped virus particles long before the adaptive immune response is activated. Antibodies are produced exclusively by B cells in response to antigens where initially, antibodies are formed as membrane-bound receptors, but upon activation by antigens and helper T cells, B cells differentiate to produce soluble antibodies. Many natural antibodies are directed against the disaccharide galactose α(1,3)-galactose (α-Gal), which is found as a terminal sugar on glycosylated cell surface proteins, and generated in response to production of this sugar by bacteria contained in the human gut. These antibodies undergo quality checks in the endoplasmic reticulum (ER), which contains proteins that assist in proper folding and assembly. Rejection of xenotransplantated organs is thought to be, in part, the result of natural antibodies circulating in the serum of the recipient binding to α-Gal antigens expressed on the donor tissue. Immunoglobulin diversity Virtually all microbes can trigger an antibody response. Successful recognition and eradication of many different types of microbes requires diversity among antibodies; their amino acid composition varies allowing them to interact with many different antigens. It has been estimated that humans generate about 10 billion different antibodies, each capable of binding a distinct epitope of an antigen. 
Although a huge repertoire of different antibodies is generated in a single individual, the number of genes available to make these proteins is limited by the size of the human genome. Several complex genetic mechanisms have evolved that allow vertebrate B cells to generate a diverse pool of antibodies from a relatively small number of antibody genes. Domain variability The chromosomal region that encodes an antibody is large and contains several distinct gene loci for each domain of the antibody—the chromosome region containing heavy chain genes (IGH@) is found on chromosome 14, and the loci containing lambda and kappa light chain genes (IGL@ and IGK@) are found on chromosomes 22 and 2 in humans. One of these domains is called the variable domain, which is present in each heavy and light chain of every antibody, but can differ in different antibodies generated from distinct B cells. Differences between the variable domains are located on three loops known as hypervariable regions (HV-1, HV-2 and HV-3) or complementarity-determining regions (CDR1, CDR2 and CDR3). CDRs are supported within the variable domains by conserved framework regions. The heavy chain locus contains about 65 different variable domain genes that all differ in their CDRs. Combining these genes with an array of genes for other domains of the antibody generates a large array of antibodies with a high degree of variability. This combinatorial process, called V(D)J recombination, is discussed below. V(D)J recombination Somatic recombination of immunoglobulins, also known as V(D)J recombination, involves the generation of a unique immunoglobulin variable region. The variable region of each immunoglobulin heavy or light chain is encoded in several pieces—known as gene segments (subgenes). These segments are called variable (V), diversity (D) and joining (J) segments. V, D and J segments are found in Ig heavy chains, but only V and J segments are found in Ig light chains. Multiple copies of the V, D and J gene segments exist, and are tandemly arranged in the genomes of mammals. In the bone marrow, each developing B cell will assemble an immunoglobulin variable region by randomly selecting and combining one V, one D and one J gene segment (or one V and one J segment in the light chain). As there are multiple copies of each type of gene segment, and different combinations of gene segments can be used to generate each immunoglobulin variable region, this process generates a huge number of antibodies, each with different paratopes, and thus different antigen specificities. The rearrangement of several subgenes (e.g. the V2 family) for lambda light chain immunoglobulin is coupled with the activation of microRNA miR-650, which further influences the biology of B cells. RAG proteins play an important role in V(D)J recombination, cutting DNA at particular regions. Without the presence of these proteins, V(D)J recombination would not occur. After a B cell produces a functional immunoglobulin gene during V(D)J recombination, it cannot express any other variable region (a process known as allelic exclusion); thus, each B cell can produce antibodies containing only one kind of variable chain. Somatic hypermutation and affinity maturation Following activation with antigen, B cells begin to proliferate rapidly. In these rapidly dividing cells, the genes encoding the variable domains of the heavy and light chains undergo a high rate of point mutation, by a process called somatic hypermutation (SHM). 
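The scale of the combinatorial diversity produced by V(D)J recombination, described above, can be illustrated with a short back-of-the-envelope calculation. The Python sketch below takes the roughly 65 heavy-chain V genes mentioned in the text; the other segment counts (heavy-chain D and J, and the kappa and lambda light-chain V and J counts) are assumed, approximate figures for functional human gene segments used purely for illustration, so the result should be read as an order-of-magnitude estimate rather than a definitive number:

```python
# Rough, order-of-magnitude estimate of antibody diversity from V(D)J recombination.
# Segment counts other than the ~65 heavy-chain V genes are illustrative assumptions;
# exact numbers of functional segments vary between sources and individuals.

heavy_V, heavy_D, heavy_J = 65, 27, 6      # heavy chain: one V, one D and one J are joined
heavy_combinations = heavy_V * heavy_D * heavy_J

kappa_V, kappa_J = 40, 5                   # light chains use only V and J segments
lambda_V, lambda_J = 30, 5                 # kappa and lambda loci are alternatives
light_combinations = kappa_V * kappa_J + lambda_V * lambda_J

# Any heavy chain can in principle pair with any light chain.
total_pairings = heavy_combinations * light_combinations

print(f"Heavy-chain combinations: {heavy_combinations:,}")   # 10,530
print(f"Light-chain combinations: {light_combinations:,}")   # 350
print(f"Heavy/light pairings:     {total_pairings:,}")       # 3,685,500 (~3.7 million)
```

Junctional imprecision at the segment joints and the somatic hypermutation process just introduced push the achievable diversity several orders of magnitude higher still, toward the roughly 10 billion distinct antibodies mentioned earlier.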
SHM results in approximately one nucleotide change per variable gene, per cell division. As a consequence, any daughter B cells will acquire slight amino acid differences in the variable domains of their antibody chains. This serves to increase the diversity of the antibody pool and impacts the antibody's antigen-binding affinity. Some point mutations will result in the production of antibodies that have a weaker interaction (low affinity) with their antigen than the original antibody, and some mutations will generate antibodies with a stronger interaction (high affinity). B cells that express high affinity antibodies on their surface will receive a strong survival signal during interactions with other cells, whereas those with low affinity antibodies will not, and will die by apoptosis. Thus, B cells expressing antibodies with a higher affinity for the antigen will outcompete those with weaker affinities for function and survival allowing the average affinity of antibodies to increase over time. The process of generating antibodies with increased binding affinities is called affinity maturation. Affinity maturation occurs in mature B cells after V(D)J recombination, and is dependent on help from helper T cells. Class switching Isotype or class switching is a biological process occurring after activation of the B cell, which allows the cell to produce different classes of antibody (IgA, IgE, or IgG). The different classes of antibody, and thus effector functions, are defined by the constant (C) regions of the immunoglobulin heavy chain. Initially, naive B cells express only cell-surface IgM and IgD with identical antigen binding regions. Each isotype is adapted for a distinct function; therefore, after activation, an antibody with an IgG, IgA, or IgE effector function might be required to effectively eliminate an antigen. Class switching allows different daughter cells from the same activated B cell to produce antibodies of different isotypes. Only the constant region of the antibody heavy chain changes during class switching; the variable regions, and therefore antigen specificity, remain unchanged. Thus the progeny of a single B cell can produce antibodies, all specific for the same antigen, but with the ability to produce the effector function appropriate for each antigenic challenge. Class switching is triggered by cytokines; the isotype generated depends on which cytokines are present in the B cell environment. Class switching occurs in the heavy chain gene locus by a mechanism called class switch recombination (CSR). This mechanism relies on conserved nucleotide motifs, called switch (S) regions, found in DNA upstream of each constant region gene (except in the δ-chain). The DNA strand is broken by the activity of a series of enzymes at two selected S-regions. The variable domain exon is rejoined through a process called non-homologous end joining (NHEJ) to the desired constant region (γ, α or ε). This process results in an immunoglobulin gene that encodes an antibody of a different isotype. Specificity designations An antibody can be called monospecific if it has specificity for a single antigen or epitope, or bispecific if it has affinity for two different antigens or two different epitopes on the same antigen. A group of antibodies can be called polyvalent (or unspecific) if they have affinity for various antigens or microorganisms. Intravenous immunoglobulin, if not otherwise noted, consists of a variety of different IgG (polyclonal IgG). 
In contrast, monoclonal antibodies are identical antibodies produced by a single B cell. Asymmetrical antibodies Heterodimeric antibodies, which are also asymmetrical antibodies, allow for greater flexibility and new formats for attaching a variety of drugs to the antibody arms. One of the general formats for a heterodimeric antibody is the "knobs-into-holes" format. This format is specific to the heavy chain part of the constant region in antibodies. The "knobs" part is engineered by replacing a small amino acid with a larger one. It fits into the "hole", which is engineered by replacing a large amino acid with a smaller one. What connects the "knobs" to the "holes" are the disulfide bonds between each chain. The "knobs-into-holes" shape facilitates antibody-dependent cell-mediated cytotoxicity. Single-chain variable fragments (scFv) consist of the variable domains of the heavy and light chains connected via a short linker peptide. The linker is rich in glycine, which gives it more flexibility, and serine/threonine, which gives it solubility. Two different scFv fragments can be connected together, via a hinge region, to the constant domain of the heavy chain or the constant domain of the light chain. This gives the antibody bispecificity, allowing for the binding specificities of two different antigens. The "knobs-into-holes" format enhances heterodimer formation but does not suppress homodimer formation. To further improve the function of heterodimeric antibodies, many scientists are looking towards artificial constructs. Artificial antibodies are largely diverse protein motifs that use the functional strategy of the antibody molecule, but are not limited by the loop and framework structural constraints of the natural antibody. Being able to control the combinational design of the sequence and three-dimensional space could transcend the natural design and allow for the attachment of different combinations of drugs to the arms. Heterodimeric antibodies have a greater range in shapes they can take and the drugs that are attached to the arms do not have to be the same on each arm, allowing for different combinations of drugs to be used in cancer treatment. Pharmaceutical companies are able to produce highly functional bispecific, and even multispecific, antibodies. The degree to which they can function is impressive given that such a change of shape from the natural form should lead to decreased functionality. Interchromosomal DNA Transposition Antibody diversification typically occurs through somatic hypermutation, class switching, and affinity maturation targeting the BCR gene loci, but on occasion more unconventional forms of diversification have been documented. For example, in the case of malaria caused by Plasmodium falciparum, some antibodies from those who had been infected demonstrated an insertion from chromosome 19 containing a 98-amino acid stretch from leukocyte-associated immunoglobulin-like receptor 1, LAIR1, in the elbow joint. This represents a form of interchromosomal transposition. LAIR1 normally binds collagen, but can recognize members of the repetitive interspersed family of polypeptides (RIFINs) that are highly expressed on the surface of P. falciparum-infected red blood cells. In fact, these antibodies underwent affinity maturation that enhanced affinity for RIFIN but abolished affinity for collagen. These "LAIR1-containing" antibodies have been found in 5-10% of donors from Tanzania and Mali, though not in European donors. 
European donors did show 100-1000 nucleotide stretches inside the elbow joints as well, however. This particular phenomenon may be specific to malaria, as infection is known to induce genomic instability. History The first use of the term "antibody" occurred in a text by Paul Ehrlich. The term Antikörper (the German word for antibody) appears in the conclusion of his article "Experimental Studies on Immunity", published in October 1891, which states that, "if two substances give rise to two different Antikörper, then they themselves must be different". However, the term was not accepted immediately and several other terms for antibody were proposed; these included Immunkörper, Amboceptor, Zwischenkörper, substance sensibilisatrice, copula, Desmon, philocytase, fixateur, and Immunisin. The word antibody has formal analogy to the word antitoxin and a similar concept to Immunkörper (immune body in English). As such, the original construction of the word contains a logical flaw; the antitoxin is something directed against a toxin, while the antibody is a body directed against something. The study of antibodies began in 1890 when Emil von Behring and Kitasato Shibasaburō described antibody activity against diphtheria and tetanus toxins. Von Behring and Kitasato put forward the theory of humoral immunity, proposing that a mediator in serum could react with a foreign antigen. This idea prompted Paul Ehrlich to propose the side-chain theory for antibody and antigen interaction in 1897, when he hypothesized that receptors (described as "side-chains") on the surface of cells could bind specifically to toxins – in a "lock-and-key" interaction – and that this binding reaction is the trigger for the production of antibodies. Other researchers believed that antibodies existed freely in the blood and, in 1904, Almroth Wright suggested that soluble antibodies coated bacteria to label them for phagocytosis and killing; a process that he named opsonization. In the 1920s, Michael Heidelberger and Oswald Avery observed that antigens could be precipitated by antibodies and went on to show that antibodies are made of protein. The biochemical properties of antigen-antibody-binding interactions were examined in more detail in the late 1930s by John Marrack. The next major advance was in the 1940s, when Linus Pauling confirmed the lock-and-key theory proposed by Ehrlich by showing that the interactions between antibodies and antigens depend more on their shape than their chemical composition. In 1948, Astrid Fagraeus discovered that B cells, in the form of plasma cells, were responsible for generating antibodies. Further work concentrated on characterizing the structures of the antibody proteins. A major advance in these structural studies was the discovery in the early 1960s by Gerald Edelman and Joseph Gally of the antibody light chain, and their realization that this protein is the same as the Bence-Jones protein described in 1845 by Henry Bence Jones. Edelman went on to discover that antibodies are composed of disulfide bond-linked heavy and light chains. Around the same time, antibody-binding (Fab) and antibody tail (Fc) regions of IgG were characterized by Rodney Porter. Together, these scientists deduced the structure and complete amino acid sequence of IgG, a feat for which they were jointly awarded the 1972 Nobel Prize in Physiology or Medicine. The Fv fragment was prepared and characterized by David Givol. 
While most of these early studies focused on IgM and IgG, other immunoglobulin isotypes were identified in the 1960s: Thomas Tomasi discovered secretory antibody (IgA); David S. Rowe and John L. Fahey discovered IgD; and Kimishige Ishizaka and Teruko Ishizaka discovered IgE and showed it was a class of antibodies involved in allergic reactions. In a landmark series of experiments beginning in 1976, Susumu Tonegawa showed that genetic material can rearrange itself to form the vast array of available antibodies. Medical applications Disease diagnosis Detection of particular antibodies is a very common form of medical diagnostics, and applications such as serology depend on these methods. For example, in biochemical assays for disease diagnosis, a titer of antibodies directed against Epstein-Barr virus or Lyme disease is estimated from the blood. If those antibodies are not present, either the person is not infected or the infection occurred a very long time ago, and the B cells generating these specific antibodies have naturally decayed. In clinical immunology, levels of individual classes of immunoglobulins are measured by nephelometry (or turbidimetry) to characterize the antibody profile of a patient. Elevations in different classes of immunoglobulins are sometimes useful in determining the cause of liver damage in patients for whom the diagnosis is unclear. For example, elevated IgA indicates alcoholic cirrhosis, elevated IgM indicates viral hepatitis and primary biliary cirrhosis, while IgG is elevated in viral hepatitis, autoimmune hepatitis and cirrhosis. Autoimmune disorders can often be traced to antibodies that bind the body's own epitopes; many can be detected through blood tests. Antibodies directed against red blood cell surface antigens in immune mediated hemolytic anemia are detected with the Coombs test. The Coombs test is also used for antibody screening in blood transfusion preparation and also for antibody screening in antenatal women. In practice, several immunodiagnostic methods based on the detection of antigen-antibody complexes are used to diagnose infectious diseases, for example ELISA, immunofluorescence, Western blot, immunodiffusion, immunoelectrophoresis, and magnetic immunoassay. Antibodies raised against human chorionic gonadotropin are used in over-the-counter pregnancy tests. New dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies, which allows for positron emission tomography (PET) imaging of cancer. Disease therapy Targeted monoclonal antibody therapy is employed to treat diseases such as rheumatoid arthritis, multiple sclerosis, psoriasis, and many forms of cancer including non-Hodgkin's lymphoma, colorectal cancer, head and neck cancer and breast cancer. Some immune deficiencies, such as X-linked agammaglobulinemia and hypogammaglobulinemia, result in partial or complete lack of antibodies. These diseases are often treated by inducing a short-term form of immunity called passive immunity. Passive immunity is achieved through the transfer of ready-made antibodies in the form of human or animal serum, pooled immunoglobulin or monoclonal antibodies, into the affected individual. Prenatal therapy Rh factor, also known as Rh D antigen, is an antigen found on red blood cells; individuals that are Rh-positive (Rh+) have this antigen on their red blood cells and individuals that are Rh-negative (Rh–) do not. During normal childbirth, delivery trauma or complications during pregnancy, blood from a fetus can enter the mother's system. 
In the case of an Rh-incompatible mother and child, consequential blood mixing may sensitize an Rh- mother to the Rh antigen on the blood cells of the Rh+ child, putting the remainder of the pregnancy, and any subsequent pregnancies, at risk for hemolytic disease of the newborn. Rho(D) immune globulin antibodies are specific for human RhD antigen. Anti-RhD antibodies are administered as part of a prenatal treatment regimen to prevent sensitization that may occur when a Rh-negative mother has a Rh-positive fetus. Treatment of a mother with anti-RhD antibodies prior to and immediately after trauma and delivery destroys Rh antigen from the fetus that has entered the mother's system. This occurs before the antigen can stimulate maternal B cells to "remember" Rh antigen by generating memory B cells. Therefore, her humoral immune system will not make anti-Rh antibodies, and will not attack the Rh antigens of the current or subsequent babies. Rho(D) Immune Globulin treatment prevents sensitization that can lead to Rh disease, but does not prevent or treat the underlying disease itself. Research applications Specific antibodies are produced by injecting an antigen into a mammal, such as a mouse, rat, rabbit, goat, sheep, or horse; the larger animals are used when large quantities of antibody are required. Blood isolated from these animals contains polyclonal antibodies—multiple antibodies that bind to the same antigen—in the serum, which can now be called antiserum. Antigens are also injected into chickens for generation of polyclonal antibodies in egg yolk. To obtain antibody that is specific for a single epitope of an antigen, antibody-secreting lymphocytes are isolated from the animal and immortalized by fusing them with a cancer cell line. The fused cells are called hybridomas, and will continually grow and secrete antibody in culture. Single hybridoma cells are isolated by dilution cloning to generate cell clones that all produce the same antibody; these antibodies are called monoclonal antibodies. Polyclonal and monoclonal antibodies are often purified using Protein A/G or antigen-affinity chromatography. In research, purified antibodies are used in many applications. Antibodies for research applications can be found directly from antibody suppliers, or through use of a specialist search engine. Research antibodies are most commonly used to identify and locate intracellular and extracellular proteins. Antibodies are used in flow cytometry to differentiate cell types by the proteins they express; different types of cells express different combinations of cluster of differentiation molecules on their surface, and produce different intracellular and secretable proteins. They are also used in immunoprecipitation to separate proteins and anything bound to them (co-immunoprecipitation) from other molecules in a cell lysate, in Western blot analyses to identify proteins separated by electrophoresis, and in immunohistochemistry or immunofluorescence to examine protein expression in tissue sections or to locate proteins within cells with the assistance of a microscope. Proteins can also be detected and quantified with antibodies, using ELISA and ELISpot techniques. Antibodies used in research are some of the most powerful, yet most problematic, reagents, with a tremendous number of factors that must be controlled in any experiment, including cross-reactivity (the antibody recognizing multiple epitopes) and affinity, which can vary widely depending on experimental conditions such as pH, solvent, and the state of the tissue. 
Multiple attempts have been made to improve both the way that researchers validate antibodies and ways in which they report on antibodies. Researchers using antibodies in their work need to record them correctly in order to allow their research to be reproducible (and therefore tested, and qualified by other researchers). Less than half of research antibodies referenced in academic papers can be easily identified. Papers published in F1000 in 2014 and 2015 provide researchers with a guide for reporting research antibody use. The RRID paper is co-published in 4 journals that implemented the RRID standard for research resource citation, which draws data from antibodyregistry.org as the source of antibody identifiers (see also the group at Force11). Antibody regions can be used to further biomedical research by acting as a guide for drugs to reach their target. Several applications involve using bacterial plasmids, such as the pFUSE-Fc plasmid, to tag proteins with the Fc region of the antibody. Regulations Production and testing There are several ways to obtain antibodies, including in vivo techniques like animal immunization and various in vitro approaches, such as the phage display method. Traditionally, most antibodies are produced by hybridoma cell lines through immortalization of antibody-producing cells by chemically induced fusion with myeloma cells. In some cases, additional fusions with other lines have created "triomas" and "quadromas". The manufacturing process should be appropriately described and validated. Validation studies should at least include: Demonstration that the process is able to produce antibody of good quality (the process should be validated) The efficiency of the antibody purification (all impurities and viruses must be eliminated) The characterization of the purified antibody (physicochemical characterization, immunological properties, biological activities, contaminants, ...) Virus clearance studies Before clinical trials Product safety testing: Sterility (bacteria and fungi), in vitro and in vivo testing for adventitious viruses, murine retrovirus testing, and so on. Product safety data are needed before the initiation of feasibility trials in serious or immediately life-threatening conditions; they serve to evaluate the dangerous potential of the product. Feasibility testing: These are pilot studies whose objectives include, among others, early characterization of safety and initial proof of concept in a small specific patient population (in vitro or in vivo testing). Preclinical studies Testing cross-reactivity of antibody: to highlight unwanted interactions (toxicity) of antibodies with previously characterized tissues. This study can be performed in vitro (reactivity of the antibody or immunoconjugate should be determined with quick-frozen adult tissues) or in vivo (with appropriate animal models). Preclinical pharmacology and toxicity testing: preclinical safety testing of antibody is designed to identify possible toxicity in humans, to estimate the likelihood and severity of potential adverse events in humans, and to identify a safe starting dose and dose escalation, when possible. 
Animal toxicity studies: Acute toxicity testing, repeat-dose toxicity testing, long-term toxicity testing Pharmacokinetics and pharmacodynamics testing: used to determine clinical dosages and antibody activities, and to evaluate potential clinical effects Structure prediction and computational antibody design The importance of antibodies in health care and the biotechnology industry demands knowledge of their structures at high resolution. This information is used for protein engineering, modifying the antigen-binding affinity, and identifying an epitope of a given antibody. X-ray crystallography is one commonly used method for determining antibody structures. However, crystallizing an antibody is often laborious and time-consuming. Computational approaches provide a cheaper and faster alternative to crystallography, but their results are more equivocal, since they do not produce empirical structures. Online web servers such as Web Antibody Modeling (WAM) and Prediction of Immunoglobulin Structure (PIGS) enable computational modeling of antibody variable regions. Rosetta Antibody is a novel antibody Fv region structure prediction server, which incorporates sophisticated techniques to minimize CDR loops and optimize the relative orientation of the light and heavy chains, as well as homology models that predict successful docking of antibodies with their unique antigen. However, describing an antibody's binding site using only a single static structure limits the understanding and characterization of the antibody's function and properties. To improve antibody structure prediction and to take the strongly correlated CDR loop and interface movements into account, antibody paratopes should be described as interconverting states in solution with varying probabilities. The ability to describe the antibody through binding affinity to the antigen is supplemented by information on antibody structure and amino acid sequences for the purpose of patent claims. Several methods have been presented for computational design of antibodies based on the structural bioinformatics studies of antibody CDRs. A variety of methods are used to sequence an antibody, including Edman degradation and cDNA-based approaches; however, one of the most common modern methods for peptide/protein identification is liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). High-volume antibody sequencing methods require computational approaches for the data analysis, including de novo sequencing directly from tandem mass spectra and database search methods that use existing protein sequence databases. Many versions of shotgun protein sequencing are able to increase the coverage by utilizing CID/HCD/ETD fragmentation methods and other techniques, and they have achieved substantial progress in attempts to fully sequence proteins, especially antibodies. Other methods have assumed the existence of similar proteins, a known genome sequence, or combined top-down and bottom-up approaches. Current technologies have the ability to assemble protein sequences with high accuracy by integrating de novo sequencing peptides, intensity, and positional confidence scores from database and homology searches. Antibody mimetic Antibody mimetics are organic compounds that, like antibodies, can specifically bind antigens. They consist of artificial peptides or proteins, or aptamer-based nucleic acid molecules, with a molar mass of about 3 to 20 kDa. Antibody fragments, such as Fab fragments and nanobodies, are not considered antibody mimetics. 
Common advantages over antibodies are better solubility, tissue penetration, stability towards heat and enzymes, and comparatively low production costs. Antibody mimetics have been developed and commercialized as research, diagnostic and therapeutic agents. Binding antibody unit BAU (binding antibody unit, often as BAU/mL) is a measurement unit defined by the WHO for the comparison of assays detecting the same class of immunoglobulins with the same specificity.
https://en.wikipedia.org/wiki/Antidepressant
Antidepressant
Antidepressants are a class of medications used to treat major depressive disorder, anxiety disorders, chronic pain, and addiction. Common side effects of antidepressants include dry mouth, weight gain, dizziness, headaches, akathisia, sexual dysfunction, and emotional blunting. There is an increased risk of suicidal thinking and behavior when taken by children, adolescents, and young adults. Discontinuation syndrome, which resembles recurrent depression in the case of the SSRI class, may occur after stopping the intake of any antidepressant, having effects which may be permanent and irreversible. Research regarding the effectiveness of antidepressants for depression in adults is controversial and has found both benefits and drawbacks. Meanwhile, evidence of benefit in children and adolescents is unclear, even though antidepressant use has considerably increased in children and adolescents in the 2000s. While a 2018 study found that the 21 most commonly prescribed antidepressant medications were slightly more effective than placebos for the short-term (acute) treatments of adults with major depressive disorder, other research has found that the placebo effect may account for most or all of the drugs' observed efficacy. Research on the effectiveness of antidepressants is generally done on people who have severe symptoms, a population that exhibits much weaker placebo responses, meaning that the results may not be extrapolated to the general population that has not (or has not yet) been diagnosed with anxiety or depression. Medical uses Antidepressants are prescribed to treat major depressive disorder (MDD), anxiety disorders, chronic pain, and some addictions. Antidepressants are often used in combination with one another. Despite its longstanding prominence in pharmaceutical advertising, the idea that low serotonin levels cause depression is not supported by scientific evidence. Proponents of the monoamine hypothesis of depression recommend choosing an antidepressant which impacts the most prominent symptoms. Under this practice, for example, a person with MDD who is also anxious or irritable would be treated with selective serotonin reuptake inhibitors (SSRIs) or norepinephrine reuptake inhibitors, while a person suffering from loss of energy and enjoyment of life would take a norepinephrine–dopamine reuptake inhibitor. Major depressive disorder The UK National Institute for Health and Care Excellence (NICE)'s 2022 guidelines indicate that antidepressants should not be routinely used for the initial treatment of mild depression, "unless that is the person's preference". The guidelines recommended that antidepressant treatment be considered: For people with a history of moderate or severe depression. For people with mild depression that has been present for an extended period. As a first-line treatment for moderate to severe depression. As a second-line treatment for mild depression that persists after other interventions. The guidelines further note that in most cases, antidepressants should be used in combination with psychosocial interventions and should be continued for at least six months to reduce the risk of relapse and that SSRIs are typically better tolerated than other antidepressants. American Psychiatric Association (APA) treatment guidelines recommend that initial treatment be individually tailored based on factors including the severity of symptoms, co-existing disorders, prior treatment experience, and the person's preference. 
Options may include antidepressants, psychotherapy, electroconvulsive therapy (ECT), transcranial magnetic stimulation (TMS), or light therapy. The APA recommends antidepressant medication as an initial treatment choice in people with mild, moderate, or severe major depression, and that it should be given to all people with severe depression unless ECT is planned. Reviews of antidepressants generally find that they benefit adults with depression. On the other hand, some contend that most studies on antidepressant medication are confounded by several biases: the lack of an active placebo, which means that many people in the placebo arm of a double-blind study may deduce that they are not getting any true treatment, thus destroying double-blindness; a short follow-up after termination of treatment; non-systematic recording of adverse effects; very strict exclusion criteria in samples of patients; studies being paid for by the industry; and selective publication of results. This means that the small beneficial effects that are found may not be statistically significant. Among the 21 most commonly prescribed antidepressants, the most effective and well-tolerated are escitalopram, paroxetine, sertraline, agomelatine, and mirtazapine. For children and adolescents with moderate to severe depressive disorder, some evidence suggests fluoxetine (either with or without cognitive behavioral therapy) is the best treatment, but more research is needed to be certain. Sertraline, escitalopram, and duloxetine may also help reduce symptoms. A 2023 systematic review and meta-analysis of randomized controlled trials of antidepressants for major depressive disorder found that the medications provided only small or doubtful benefits in terms of quality of life. Likewise, a 2022 systematic review and meta-analysis of randomized controlled trials of antidepressants for major depressive disorder in children and adolescents found small improvements in quality of life. Quality of life as an outcome measure is often selectively reported in trials of antidepressants. Anxiety disorders For children and adolescents, fluvoxamine is effective in treating a range of anxiety disorders. Fluoxetine, sertraline, and paroxetine can also help with managing various forms of anxiety in children and adolescents. Meta-analyses of published and unpublished trials have found that antidepressants have a placebo-subtracted effect size (standardized mean difference or SMD) in the treatment of anxiety disorders of around 0.3, which equates to a small improvement and is roughly the same magnitude of benefit as their effectiveness in the treatment of depression. The effect size (SMD) for improvement with placebo in trials of antidepressants for anxiety disorders is approximately 1.0, which is a large improvement in terms of effect size definitions. In relation to this, most of the benefit of antidepressants for anxiety disorders is attributable to placebo responses rather than to the effects of the antidepressants themselves. Generalized anxiety disorder Antidepressants are recommended by the National Institute for Health and Care Excellence (NICE) for the treatment of generalized anxiety disorder (GAD) that has failed to respond to conservative measures such as education and self-help activities. GAD is a common disorder in which the central feature is excessive worry about numerous events. 
Key symptoms include excessive anxiety about events and issues going on around the person, and difficulty controlling worrisome thoughts, persisting for at least 6 months. Antidepressants provide a modest to moderate reduction in anxiety in GAD. The efficacy of different antidepressants is similar. Social anxiety disorder Some antidepressants are used as a treatment for social anxiety disorder, but their efficacy is not entirely convincing, as only a small proportion of antidepressants showed some effectiveness for this condition. Paroxetine was the first drug to be FDA-approved for this disorder. Its efficacy is considered beneficial, although not everyone responds favorably to the drug. Sertraline and fluvoxamine extended-release were later approved for it as well, while escitalopram is used off-label with acceptable efficacy. However, there is not enough evidence to support citalopram for treating social anxiety disorder, and fluoxetine was no better than a placebo in clinical trials. SSRIs are used as a first-line treatment for social anxiety, but they do not work for everyone. One alternative would be venlafaxine, an SNRI, which has shown benefits for social phobia in five clinical trials against a placebo, while the other SNRIs are not considered particularly useful for this disorder as many of them did not undergo testing for it. It remains unclear whether duloxetine and desvenlafaxine can provide benefits for people with social anxiety. However, another class of antidepressants, the MAOIs, is considered effective for social anxiety, but they come with many unwanted side effects and are rarely used. Phenelzine was shown to be a good treatment option, but its use is limited by dietary restrictions. Moclobemide is a RIMA and showed mixed results, but still received approval in some European countries for social anxiety disorder. TCA antidepressants, such as clomipramine and imipramine, are not considered effective for this anxiety disorder in particular. This leaves SSRIs such as paroxetine, sertraline, and fluvoxamine CR as acceptable and well-tolerated treatment options for this disorder. Obsessive–compulsive disorder SSRIs are a second-line treatment for adult obsessive–compulsive disorder (OCD) with mild functional impairment, and a first-line treatment for those with moderate or severe impairment. In children, SSRIs are considered a second-line therapy in those with moderate-to-severe impairment, with close monitoring for psychiatric adverse effects. Sertraline and fluoxetine are effective in treating OCD for children and adolescents. Clomipramine, a TCA drug, is considered effective and useful for OCD. However, it is used as a second-line treatment because it is less well-tolerated than SSRIs. Despite this, it has not shown superiority to fluvoxamine in trials. All SSRIs can be used effectively for OCD. SNRI use may also be attempted, though no SNRIs have been approved for the treatment of OCD. Despite these treatment options, many patients remain symptomatic after initiating the medication, and less than half achieve remission. Placebo responses are a large component of the benefit of antidepressants in the treatment of depression and anxiety. However, placebo responses with antidepressants are lower in magnitude in the treatment of OCD compared to depression and anxiety. A 2019 meta-analysis found placebo improvement effect sizes (SMD) of about 1.2 for depression, 1.0 for anxiety disorders, and 0.6 for OCD with antidepressants. 
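The standardized mean difference (SMD) figures quoted in the anxiety and OCD discussions above can be made concrete with a short calculation. The sketch below uses invented group means, standard deviations, and sample sizes purely for illustration (they are not taken from any trial); it simply shows how an SMD of roughly 0.3 arises from raw trial data:

```python
import math

def pooled_sd(sd1, n1, sd2, n2):
    """Pooled standard deviation of two independent groups."""
    return math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))

def smd(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Cohen's d): mean difference over pooled SD."""
    return (mean1 - mean2) / pooled_sd(sd1, n1, sd2, n2)

# Hypothetical improvements (points on an anxiety rating scale) -- illustrative only.
drug_mean, drug_sd, drug_n = 10.4, 8.0, 150
placebo_mean, placebo_sd, placebo_n = 8.0, 8.0, 150

d = smd(drug_mean, drug_sd, drug_n, placebo_mean, placebo_sd, placebo_n)
print(f"Placebo-subtracted SMD: {d:.2f}")   # 0.30 -- a 'small' effect by convention
```

By the usual convention, an SMD of about 0.2 counts as small, 0.5 as moderate, and 0.8 as large, which is why the roughly 0.3 placebo-subtracted effect and the roughly 1.0 within-placebo improvement quoted above are described as small and large improvements, respectively.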
Post–traumatic stress disorder Antidepressants are one of the treatment options for PTSD. However, their efficacy is not well established. Paroxetine and sertraline have been FDA approved for the treatment of PTSD. Paroxetine has slightly higher response and remission rates than sertraline for this condition. However, neither drug is considered very helpful for a broad patient demographic. Fluoxetine and venlafaxine are used off-label. Fluoxetine has produced unsatisfactory mixed results. Venlafaxine showed response rates of 78%, which is significantly higher than what paroxetine and sertraline achieved. However, it did not address as many symptoms of PTSD as paroxetine and sertraline, in part due to the fact that venlafaxine is an SNRI. This class of drugs inhibits the reuptake of norepinephrine, which may cause anxiety in some patients. Fluvoxamine, escitalopram, and citalopram were not well-tested for this disorder. MAOIs, while some of them may be helpful, are not used much because of their unwanted side effects. This leaves paroxetine and sertraline as acceptable treatment options for some people, although more effective antidepressants are needed. Panic disorder Panic disorder is treated relatively well with medications compared to other disorders. Several classes of antidepressants have shown efficacy for this disorder, with SSRIs and SNRIs used first-line. Paroxetine, sertraline, and fluoxetine are FDA-approved for panic disorder, while fluvoxamine, escitalopram, and citalopram are also considered effective for it. The SNRI venlafaxine is also approved for this condition. Unlike in social anxiety and PTSD, some TCA antidepressants, such as clomipramine and imipramine, have shown efficacy for panic disorder. Moreover, the MAOI phenelzine is also considered useful. Many drugs are thus available for the treatment of panic disorder. However, the starting dose must be lower than that used for major depressive disorder, because people have reported an increase in anxiety as a result of starting the medication. In conclusion, while the treatment options for panic disorder seem acceptable and useful, many people still have residual symptoms after treatment. Eating disorders Antidepressants are recommended as an alternative or additional first step to self-help programs in the treatment of bulimia nervosa. SSRIs (fluoxetine in particular) are preferred over other antidepressants due to their acceptability, tolerability, and superior reduction of symptoms in short-term trials. Long-term efficacy remains poorly characterized. Bupropion is not recommended for the treatment of eating disorders, due to an increased risk of seizure. Similar recommendations apply to binge eating disorder. SSRIs provide short-term reductions in binge eating behavior, but have not been associated with significant weight loss. Clinical trials have generated mostly negative results for the use of SSRIs in the treatment of anorexia nervosa. Treatment guidelines from the National Institute for Health and Care Excellence (NICE) recommend against the use of SSRIs in this disorder. Those from the American Psychiatric Association (APA) note that SSRIs confer no advantage regarding weight gain, but may be used for the treatment of co-existing depressive, anxiety, or obsessive–compulsive disorders. Pain Fibromyalgia A 2012 meta-analysis concluded that antidepressant treatment favorably affects pain, health-related quality of life, depression, and sleep in fibromyalgia syndrome. 
Tricyclics appear to be the most effective class, with moderate effects on pain and sleep, and small effects on fatigue and health-related quality of life. The fraction of people experiencing a 30% pain reduction on tricyclics was 48%, versus 28% on placebo. For SSRIs and SNRIs, the fractions of people experiencing a 30% pain reduction were 36% (20% in the placebo comparator arms) and 42% (32% in the corresponding placebo comparator arms) respectively. Discontinuation of treatment due to side effects was common. Antidepressants including amitriptyline, fluoxetine, duloxetine, milnacipran, moclobemide, and pirlindole are recommended by the European League Against Rheumatism (EULAR) for the treatment of fibromyalgia based on "limited evidence". Neuropathic pain A 2014 meta-analysis from the Cochrane Collaboration found the antidepressant duloxetine to be effective for the treatment of pain resulting from diabetic neuropathy. The same group reviewed data for amitriptyline in the treatment of neuropathic pain and found limited useful randomized clinical trial data. They concluded that the long history of successful use in the community for the treatment of fibromyalgia and neuropathic pain justified its continued use. The group was concerned about the potential overestimation of the amount of pain relief provided by amitriptyline, and highlighted that only a small number of people will experience significant pain relief by taking this medication. Other uses Antidepressants may be modestly helpful for treating people who have both depression and alcohol dependence; however, the evidence supporting this association is of low quality. Bupropion is used to help people stop smoking. Antidepressants are also used to control some symptoms of narcolepsy. Antidepressants may be used to relieve pain in people with active rheumatoid arthritis. However, further research is required. Antidepressants have been shown to be superior to placebo in treating depression in individuals with physical illness, although reporting bias may have exaggerated this finding. Antidepressants have been shown to improve some parts of cognitive functioning for depressed users, such as memory, attention, and processing speed. Certain antidepressants acting as serotonin 5-HT2A receptor antagonists, such as trazodone and mirtazapine, have been used as hallucinogen antidotes or "trip killers" to block the effects of serotonergic psychedelics like psilocybin and lysergic acid diethylamide (LSD). Limitations and strategies Among individuals treated with a given antidepressant, between 30% and 50% do not show a response. Approximately one-third of people achieve a full remission, one-third experience a response, and one-third are non-responders. Partial remission is characterized by the presence of poorly defined residual symptoms. These symptoms typically include depressed mood, anxiety, sleep disturbance, fatigue, and diminished interest or pleasure. It is currently unclear which factors predict partial remission. However, it is clear that residual symptoms are powerful predictors of relapse, with relapse rates three to six times higher in people with residual symptoms than in those who experience full remission. In addition, antidepressant drugs tend to lose efficacy over the course of long-term maintenance therapy. According to data from the Centers for Disease Control and Prevention, less than one-third of Americans taking one antidepressant medication have seen a mental health professional in the previous year. 
Several strategies are used in clinical practice to try to overcome these limits and variations. They include switching medication, augmentation, and combination. There is controversy amongst researchers regarding the efficacy and risk-benefit ratio of antidepressants. Although antidepressants consistently out-perform a placebo in meta-analyses, the difference is modest and it is not clear that their statistical superiority results in clinical efficacy. The aggregate effect of antidepressants typically results in changes below the threshold of clinical significance on depression rating scales. Proponents of antidepressants counter that the most common scale, the HDRS, is not suitable for assessing drug action, that the threshold for clinical significance is arbitrary, and that antidepressants consistently result in significantly raised scores on the mood item of the scale. Assessments of antidepressants using alternative, more sensitive scales, such as the MADRS, do not result in a marked difference from the HDRS and likewise find only a marginal clinical benefit. Another hypothesis proposed to explain the poor performance of antidepressants in clinical trials is a high treatment response heterogeneity. Some patients who differ strongly in their response to antidepressants could influence the average response, while the heterogeneity itself could be obscured by the averaging. Studies have not supported this hypothesis, but it is very difficult to measure treatment effect heterogeneity. Poor and complex clinical trial design might also account for the small effects seen for antidepressants. The randomized controlled trials used to approve drugs are short, and may not capture the full effect of antidepressants. Additionally, the placebo effect might be inflated in these trials by frequent clinical consultation, lowering the comparative performance of antidepressants. Critics agree that current clinical trials are poorly designed, which limits knowledge about antidepressants. More naturalistic studies, such as STAR*D, have produced results suggesting that antidepressants may be less effective in clinical practice than in randomized controlled trials. Critics of antidepressants maintain that the superiority of antidepressants over placebo is the result of systemic flaws in clinical trials and the research literature. Trials conducted with industry involvement tend to produce more favorable results, and accordingly many of the trials included in meta-analyses are at high risk of bias. Additionally, meta-analyses co-authored by industry employees find more favorable results for antidepressants. The results of antidepressant trials are significantly more likely to be published if they are favorable, and unfavorable results are very often left unpublished or misreported, a phenomenon called publication bias or selective publication. Although this issue has diminished with time, it remains an obstacle to accurately assessing the efficacy of antidepressants. Misreporting of clinical trial outcomes and of serious adverse events, such as suicide, is common. Ghostwriting of antidepressant trials is widespread, a practice in which prominent researchers, or so-called key opinion leaders, attach their names to studies actually written by pharmaceutical company employees or consultants. A particular concern is that the psychoactive effects of antidepressants may lead to the unblinding of participants or researchers, enhancing the placebo effect and biasing results. 
Some have therefore maintained that antidepressants may only be active placebos. When these and other flaws in the research literature are not taken into account, meta-analyses may find inflated results on the basis of poor evidence. Critics contend that antidepressants have not been proven sufficiently effective by RCTs or in clinical practice and that the widespread use of antidepressants is not evidence-based. They also note that adverse effects, including withdrawal difficulties, are likely underreported, skewing clinicians' ability to make risk-benefit judgements. Accordingly, they believe antidepressants are overused, particularly for non-severe depression and conditions in which they are not indicated. Critics charge that the widespread use and public acceptance of antidepressants is the result of pharmaceutical advertising, research manipulation, and misinformation. Current mainstream psychiatric opinion recognizes the limitations of antidepressants but recommends their use in adults with more severe depression as a first-line treatment. Switching antidepressants The American Psychiatric Association's 2000 Practice Guideline advises that, where no response is achieved within six to eight weeks of treatment with an antidepressant, clinicians should switch to another antidepressant in the same class, and then to a different class. A 2006 meta-analytic review found wide variation in the findings of prior studies: for people who had failed to respond to an SSRI antidepressant, between 12% and 86% showed a response to a new drug. However, the more antidepressants an individual had previously tried, the less likely they were to benefit from a new antidepressant trial. A later meta-analysis, by contrast, found no difference between switching to a new drug and staying on the old medication: although 34% of treatment-resistant people responded when switched to the new drug, 40% responded without being switched. Augmentation and combination For a partial response, the American Psychiatric Association (APA) guidelines suggest augmentation, or adding a drug from a different class. These include lithium and thyroid augmentation, dopamine agonists, sex steroids, NRIs, glucocorticoid-specific agents, and the newer anticonvulsants. A combination strategy involves adding another antidepressant, usually from a different class, to affect other mechanisms. Although this may be used in clinical practice, there is little evidence for the relative efficacy or adverse effects of this strategy. Other tests conducted include the use of psychostimulants as an augmentation therapy. Several studies have shown the efficacy of adding modafinil for treatment-resistant people. It has been used to help combat SSRI-associated fatigue. Long-term use and stopping The effects of antidepressants typically do not continue once the course of medication ends. This results in a high rate of relapse. In 2003, a meta-analysis found that 18% of people who had responded to an antidepressant relapsed while still taking it, compared to 41% whose antidepressant was switched for a placebo. A gradual loss of therapeutic benefit occurs in a minority of people during the course of treatment. A strategy involving the use of pharmacotherapy in the treatment of the acute episode, followed by psychotherapy in its residual phase, has been suggested by some studies. 
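The relapse figures just quoted can be expressed as an absolute risk reduction and a number needed to treat (NNT), two quantities commonly used to interpret such differences. A minimal sketch; the 18% and 41% values are those given above, and the arithmetic is purely illustrative of how an NNT is derived:

```python
# Relapse rates from the 2003 meta-analysis quoted above.
relapse_on_antidepressant = 0.18   # stayed on the antidepressant
relapse_on_placebo = 0.41          # antidepressant switched for a placebo

# Absolute risk reduction (ARR) and number needed to treat (NNT = 1 / ARR).
arr = relapse_on_placebo - relapse_on_antidepressant   # 0.23
nnt = 1 / arr                                          # ~4.3

print(f"Absolute risk reduction: {arr:.0%}")
print(f"NNT: about {nnt:.1f} people continuing treatment to prevent one relapse")
```

On this reading, roughly four to five people would need to continue their antidepressant, rather than have it switched for placebo, to prevent one relapse over the study period; whether that trade-off is worthwhile depends on the adverse effects and discontinuation considerations discussed elsewhere in this article.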
For patients who wish to stop their antidepressants, engaging in brief psychological interventions such as Preventive Cognitive Therapy or mindfulness-based cognitive therapy while tapering down has been found to diminish the risk for relapse. Adverse effects Antidepressants can cause various adverse effects, depending on the individual and the drug in question. Almost any medication involved with serotonin regulation has the potential to cause serotonin toxicity (also known as serotonin syndrome) – an excess of serotonin that can induce mania, restlessness, agitation, emotional lability, insomnia, and confusion as its primary symptoms. Although the condition is serious, it is not particularly common, generally only appearing at high doses or while on other medications. Assuming proper medical intervention has been taken (within about 24 hours), it is rarely fatal. Antidepressants appear to increase the risk of diabetes by about 1.3-fold. MAOIs tend to have pronounced (sometimes fatal) interactions with a wide variety of medications and over-the-counter drugs. If taken with foods that contain very high levels of tyramine (e.g., mature cheese, cured meats, or yeast extracts), they may cause a potentially lethal hypertensive crisis. At lower doses, the person may only experience a headache due to an increase in blood pressure. In response to these adverse effects, a different type of MAOI, the class of reversible inhibitors of monoamine oxidase A (RIMAs), has been developed. The primary advantage of RIMAs is that they do not require the person to follow a special diet while purportedly being as effective as SSRIs and tricyclics in treating depressive disorders. Tricyclics and SSRIs can cause so-called drug-induced QT prolongation, especially in older adults; this condition can degenerate into a specific type of abnormal heart rhythm called torsades de pointes, which can potentially lead to sudden cardiac arrest. Some antidepressants are also believed to increase suicidal ideation. Antidepressants have been associated with an increased risk of dementia in older adults. Researchers have developed a tool that allows people to rate their concern about common side effects of antidepressants. The tool ranks potential treatment options in a visual display that highlights the drugs with side effects of least concern to an individual. Pregnancy SSRI use in pregnancy has been associated with a variety of risks with varying degrees of proof of causation. As depression is independently associated with negative pregnancy outcomes, determining the extent to which observed associations between antidepressant use and specific adverse outcomes reflect a causative relationship has been difficult in some cases. In other cases, the attribution of adverse outcomes to antidepressant exposure seems fairly clear. SSRI use in pregnancy is associated with an increased risk of spontaneous abortion of about 1.7-fold, and is associated with preterm birth and low birth weight. A systematic review of the risk of major birth defects in antidepressant-exposed pregnancies found a small increase (3% to 24%) in the risk of major malformations and a risk of cardiovascular birth defects that did not differ from non-exposed pregnancies. A study of fluoxetine-exposed pregnancies found a 12% increase in the risk of major malformations that did not reach statistical significance. 
Other studies have found an increased risk of cardiovascular birth defects among depressed mothers not undergoing SSRI treatment, suggesting the possibility of ascertainment bias, e.g. that worried mothers may pursue more aggressive testing of their infants. Another study found no increase in cardiovascular birth defects and a 27% increased risk of major malformations in SSRI-exposed pregnancies. The FDA advises of the risk of birth defects with the use of paroxetine, and MAOIs should be avoided. A 2013 systematic review and meta-analysis found that antidepressant use during pregnancy was statistically significantly associated with some pregnancy outcomes, such as gestational age and preterm birth, but not with other outcomes. The same review cautioned that because differences between the exposed and unexposed groups were small, it was doubtful whether they were clinically significant. A neonate (infant less than 28 days old) may experience a withdrawal syndrome from abrupt discontinuation of the antidepressant at birth. Antidepressants can be present in varying amounts in breast milk, but their effects on infants are currently unknown. Moreover, SSRIs inhibit nitric oxide synthesis, which plays an important role in setting vascular tone. Several studies have pointed to an increased risk of prematurity associated with SSRI use, and this association may be due to an increased risk of pre-eclampsia during pregnancy. Antidepressant-induced mania Another possible problem with antidepressants is the chance of antidepressant-induced mania or hypomania in people with or without a diagnosis of bipolar disorder. Many cases of bipolar depression are very similar to those of unipolar depression. Therefore, the person can be misdiagnosed with unipolar depression and be given antidepressants. Studies have shown that antidepressant-induced mania can occur in 20–40% of people with bipolar disorder. For bipolar depression, antidepressants (most frequently SSRIs) can exacerbate or trigger symptoms of hypomania and mania. Bupropion has been associated with a lower risk of mood switch than other antidepressants. Suicide Studies have shown that the use of antidepressants is correlated with an increased risk of suicidal behavior and thinking (suicidality) in those aged under 25. This problem has been serious enough to warrant government intervention by the US Food and Drug Administration (FDA) to warn of the increased risk of suicidality during antidepressant treatment. According to the FDA, the heightened risk of suicidality occurs within the first one to two months of treatment. The National Institute for Health and Care Excellence (NICE) places the excess risk in the "early stages of treatment". A meta-analysis suggests that the relationship between antidepressant use and suicidal behavior or thoughts is age-dependent. Compared with placebo, the use of antidepressants is associated with an increase in suicidal behavior or thoughts among those 25 years old or younger (OR=1.62). A review of RCTs and epidemiological studies by Healy and Whitaker found an increase in suicidal acts by a factor of 2.4. There is no effect, or possibly a mild protective effect, among those aged 25 to 64 (OR=0.79). Antidepressant treatment has a protective effect against suicidality among those aged 65 and over (OR=0.37). Sexual dysfunction Sexual side effects are also common with SSRIs, such as loss of sexual drive, failure to reach orgasm, and erectile dysfunction. 
Although usually reversible, these sexual side effects can, in rare cases, continue after the drug has been completely withdrawn. In a study of 1,022 outpatients, overall sexual dysfunction with all antidepressants averaged 59.1%, with SSRI values between 57% and 73%, mirtazapine 24%, nefazodone 8%, amineptine 7%, and moclobemide 4%. Moclobemide, a selective reversible MAO-A inhibitor, does not cause sexual dysfunction and can lead to an improvement in all aspects of sexual function. Biochemical mechanisms suggested as causative include increased serotonin, particularly affecting 5-HT2 and 5-HT3 receptors; decreased dopamine; decreased norepinephrine; blockade of cholinergic and α1-adrenergic receptors; inhibition of nitric oxide synthase; and elevation of prolactin levels. Mirtazapine is reported to have fewer sexual side effects, most likely because it antagonizes 5-HT2 and 5-HT3 receptors and may, in some cases, reverse sexual dysfunction induced by SSRIs by the same mechanism. Bupropion, a weak NDRI and nicotinic antagonist, may be useful in treating reduced libido as a result of SSRI treatment. Emotional blunting Certain antidepressants may cause emotional blunting, characterized by a reduced intensity of both positive and negative emotions as well as symptoms of apathy, indifference, and amotivation. It may be experienced as either beneficial or detrimental depending on the situation. This side effect has been particularly associated with serotonergic antidepressants like SSRIs and SNRIs but may be less common with atypical antidepressants like bupropion, agomelatine, and vortioxetine. Higher doses of antidepressants seem to be more likely to produce emotional blunting than lower doses. Emotional blunting can be decreased by reducing dosage, discontinuing the medication, or switching to a different antidepressant that may have less propensity for causing this side effect. Changes in weight Changes in appetite or weight are common among antidepressants but are largely drug-dependent and related to which neurotransmitters they affect. Mirtazapine and paroxetine, for example, may be associated with weight gain and/or increased appetite, while others (such as bupropion and venlafaxine) achieve the opposite effect. The antihistaminic properties of certain TCA- and TeCA-class antidepressants have been shown to contribute to the common side effects of increased appetite and weight gain associated with these classes of medication. Bone loss A 2021 nationwide cohort study in South Korea observed a link between SSRI use and bone loss, particularly in recent users. The study also stressed the need for further research to better understand these effects. A 2012 review found that SSRIs, along with tricyclic antidepressants, were associated with a significant increase in the risk of osteoporotic fractures, peaking in the months after initiation and moving back towards baseline during the year after treatment was stopped. These effects exhibited a dose–response relationship within SSRIs which varied between different drugs of that class. A 2018 meta-analysis of 11 small studies found a reduction in bone density of the lumbar spine in SSRI users, which affected older people the most. Risk of death A 2017 meta-analysis found that antidepressants were associated with a significantly increased risk of death (+33%) and new cardiovascular complications (+14%) in the general population. Conversely, risks were not greater in people with existing cardiovascular disease. 
Discontinuation syndrome Antidepressant discontinuation syndrome, also called antidepressant withdrawal syndrome, is a condition that can occur following the interruption, reduction, or discontinuation of antidepressant medication. The symptoms may include flu-like symptoms, trouble sleeping, nausea, poor balance, sensory changes, and anxiety. The problem usually begins within three days and may last for several months. Rarely, psychosis may occur. A discontinuation syndrome can occur after stopping any antidepressant, including selective serotonin reuptake inhibitors (SSRIs), serotonin–norepinephrine reuptake inhibitors (SNRIs), and tricyclic antidepressants (TCAs). The risk is greater among those who have taken the medication for longer and when the medication in question has a short half-life. The underlying reason for its occurrence is unclear. The diagnosis is based on the symptoms. Methods of prevention include gradually decreasing the dose among those who wish to stop, though it is possible for symptoms to occur with tapering. Treatment may include restarting the medication and slowly decreasing the dose. People may also be switched to the long-acting antidepressant fluoxetine, which can then be gradually decreased. Approximately 20–50% of people who suddenly stop an antidepressant develop an antidepressant discontinuation syndrome. The condition is generally not serious, though about half of people with symptoms describe them as severe. Some restart antidepressants due to the severity of the symptoms. Pharmacology Antidepressants act via a large number of different mechanisms of action. These include serotonin reuptake inhibition (SSRIs, SNRIs, TCAs, vilazodone, vortioxetine), norepinephrine reuptake inhibition (NRIs, SNRIs, TCAs), dopamine reuptake inhibition (bupropion, amineptine, nomifensine), direct modulation of monoamine receptors (vilazodone, vortioxetine, SARIs, agomelatine, TCAs, TeCAs, antipsychotics), monoamine oxidase inhibition (MAOIs), and NMDA receptor antagonism (ketamine, esketamine, dextromethorphan), among others (e.g., brexanolone, tianeptine). Some antidepressants also have additional actions, like sigma receptor modulation (certain SSRIs, TCAs, dextromethorphan) and antagonism of histamine H1 and muscarinic acetylcholine receptors (TCAs, TeCAs). The earliest and most widely known scientific theory of antidepressant action is the monoamine hypothesis, which can be traced back to the 1950s and 1960s. This theory states that depression is due to an imbalance, most often a deficiency, of the monoamine neurotransmitters, namely serotonin, norepinephrine, and/or dopamine. However, serotonin in particular has been implicated, as in the serotonin hypothesis of depression. The monoamine hypothesis was originally proposed based on observations that reserpine, a drug which depletes the monoamine neurotransmitters, produced depressive effects in people, and that certain hydrazine antituberculosis agents like iproniazid, which prevent the breakdown of monoamine neurotransmitters, produced apparent antidepressant effects. Most currently marketed antidepressants, which are monoaminergic in their actions, are theoretically consistent with the monoamine hypothesis. Despite the widespread nature of the monoamine hypothesis, it has a number of limitations: for one, all monoaminergic antidepressants have a delayed onset of action of at least a week; and secondly, many people with depression do not respond to monoaminergic antidepressants. 
A number of alternative hypotheses have been proposed, including hypotheses involving glutamate, neurogenesis, epigenetics, cortisol hypersecretion, and inflammation, among others. In 2022, a major systematic umbrella review by Joanna Moncrieff and colleagues showed that the serotonin theory of depression was not supported by evidence from a wide variety of areas. The authors concluded that there is no association between serotonin and depression, and that there is no evidence that strongly supports the theory that depression is caused by low serotonin activity or concentrations. Other literature had described the lack of support for the theory previously. In many of the expert responses to the review, it was stated that the monoamine hypothesis had already long been abandoned by psychiatry. This is in spite of about 90% of the general public in Western countries believing the theory to be true and many in the field of psychiatry continuing to promote the theory up to recent times. In addition to the serotonin umbrella review, reviews have found that reserpine, a drug that depletes the monoamine neurotransmitters—including serotonin, norepinephrine, and dopamine—shows no consistent evidence of producing depressive effects. Instead, findings of reserpine and mood are highly mixed, with similar proportions of studies finding that it has no influence on mood, produces depressive effects, or actually has antidepressant effects. In relation to this, the general monoamine hypothesis, as opposed to only the serotonin theory of depression, likewise does not appear to be well-supported by evidence. The serotonin and monoamine hypotheses of depression have been heavily promoted by the pharmaceutical industry (e.g., in advertisements) and by the psychiatric profession at large despite the lack of evidence in support of them. In the case of the pharmaceutical industry, this can be attributed to obvious financial incentives, with the theory creating a bias against non-pharmacological treatments for depression. An alternative theory for antidepressant action proposed by certain academics such as Irving Kirsch and Joanna Moncrieff is that they work largely or entirely via placebo mechanisms. This is supported by meta-analyses of randomized controlled trials of antidepressants for depression, which consistently show that placebo groups in trials improve about 80 to 90% as much as antidepressant groups on average and that antidepressants are only marginally more effective for depression than placebos. The difference between antidepressants and placebo corresponds to an effect size (SMD) of about 0.3, which in turn equates to about a 2- to 3-point additional improvement on the 0–52-point (HRSD) and 0–60-point (MADRS) depression rating scales used in trials. Differences in effectiveness between different antidepressants are small and not clinically meaningful. The small advantage of antidepressants over placebo is often statistically significant and is the basis for their regulatory approval, but is sufficiently modest that its clinical significance is doubtful. Moreover, the small advantage of antidepressants over placebo may simply be a methodological artifact caused by unblinding due to the psychoactive effects and side effects of antidepressants, in turn resulting in enhanced placebo effects and apparent antidepressant efficacy. Placebos have been found to modify the activity of several brain regions and to increase levels of dopamine and endogenous opioids in the reward pathways. 
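As a rough worked illustration of the effect-size conversion quoted above (the standard-deviation figure below is an assumption chosen for illustration, not a number given in this article), a standardized mean difference d is converted to raw scale points by multiplying by the scale's standard deviation. Assuming a typical baseline standard deviation of roughly 8 points on the HRSD:

\Delta_{\text{HRSD}} \approx d \times \mathrm{SD} \approx 0.3 \times 8 \approx 2.4 \text{ points}

which is consistent with the 2- to 3-point drug–placebo difference cited for depression trials.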
It has been argued by Kirsch that although antidepressants may be used efficaciously for depression as active placebos, they are limited by significant pharmacological side effects and risks, and therefore non-pharmacological therapies, such as psychotherapy and lifestyle changes, which can have similar efficacy to antidepressants but do not have their adverse effects, ought to be preferred as treatments in people with depression. The placebo response, or the improvement in scores in the placebo group in clinical trials, is not only due to the placebo effect, but is also due to other phenomena such as spontaneous remission and regression to the mean. Depression tends to have an episodic course, with people eventually recovering even with no medical intervention, and people tend to seek treatment, as well as enroll in clinical trials, when they are feeling their worst. In meta-analyses of trials of depression therapies, Kirsch estimated based on improvement in untreated waiting-list controls that spontaneous remission and regression to the mean only account for about 25% of the improvement in depression scores with antidepressant therapy. However, another academic, Michael P. Hengartner, has argued and presented evidence that spontaneous remission and regression to the mean might actually account for most of the improvement in depression scores with antidepressants, and that the substantial placebo effect observed in clinical trials might largely be a methodological artifact. This suggests that antidepressants may be associated with much less genuine treatment benefit, whether due to the placebo effect or to the antidepressant itself, than has been traditionally assumed. Types Selective serotonin reuptake inhibitors Selective serotonin reuptake inhibitors (SSRIs) are believed to increase the extracellular level of the neurotransmitter serotonin by limiting its reabsorption into the presynaptic cell, increasing the level of serotonin in the synaptic cleft available to bind to the postsynaptic receptor. They have varying degrees of selectivity for the other monoamine transporters, with pure SSRIs having only weak affinity for the norepinephrine and dopamine transporters. SSRIs are the most widely prescribed antidepressants in many countries. The efficacy of SSRIs in mild or moderate cases of depression has been disputed. Serotonin–norepinephrine reuptake inhibitors Serotonin–norepinephrine reuptake inhibitors (SNRIs) are potent inhibitors of the reuptake of serotonin and norepinephrine. These neurotransmitters are known to play an important role in mood. SNRIs can be contrasted with the more widely used selective serotonin reuptake inhibitors (SSRIs), which act mostly upon serotonin alone. The human serotonin transporter (SERT) and norepinephrine transporter (NET) are membrane proteins that are responsible for the reuptake of serotonin and norepinephrine. Balanced dual inhibition of monoamine reuptake may offer advantages over other antidepressants drugs by treating a wider range of symptoms. SNRIs are sometimes also used to treat anxiety disorders, obsessive–compulsive disorder (OCD), attention deficit hyperactivity disorder (ADHD), chronic neuropathic pain, and fibromyalgia syndrome (FMS), and for the relief of menopausal symptoms. Serotonin modulators and stimulators Serotonin modulator and stimulators (SMSs), sometimes referred to more simply as "serotonin modulators", are a type of drug with a multimodal action specific to the serotonin neurotransmitter system. 
To be precise, SMSs simultaneously modulate one or more serotonin receptors and inhibit the reuptake of serotonin. The term was coined in reference to the mechanism of action of the serotonergic antidepressant vortioxetine, which acts as a serotonin reuptake inhibitor (SRI), a partial agonist of the 5-HT1A receptor, and antagonist of the 5-HT3 and 5-HT7 receptors. However, it can also technically be applied to vilazodone, which is an antidepressant as well and acts as an SRI and 5-HT1A receptor partial agonist. An alternative term is serotonin partial agonist/reuptake inhibitor (SPARI), which can be applied only to vilazodone. Serotonin antagonists and reuptake inhibitors Serotonin antagonist and reuptake inhibitors (SARIs) while mainly used as antidepressants are also anxiolytics and hypnotics. They act by antagonizing serotonin receptors such as 5-HT2A and inhibiting the reuptake of serotonin, norepinephrine, and/or dopamine. Additionally, most also act as α1-adrenergic receptor antagonists. The majority of the currently marketed SARIs belong to the phenylpiperazine class of compounds. They include trazodone and nefazodone. Tricyclic antidepressants The majority of the tricyclic antidepressants (TCAs) act primarily as serotonin–norepinephrine reuptake inhibitors (SNRIs) by blocking the serotonin transporter (SERT) and the norepinephrine transporter (NET), respectively, which results in an elevation of the synaptic concentrations of these neurotransmitters, and therefore an enhancement of neurotransmission. Notably, with the sole exception of amineptine, the TCAs have weak affinity for the dopamine transporter (DAT), and therefore have low efficacy as dopamine reuptake inhibitors (DRIs). Although TCAs are sometimes prescribed for depressive disorders, they have been largely replaced in clinical use in most parts of the world by newer antidepressants such as selective serotonin reuptake inhibitors (SSRIs), serotonin–norepinephrine reuptake inhibitors (SNRIs), and norepinephrine reuptake inhibitors (NRIs). Adverse effects have been found to be of a similar level between TCAs and SSRIs. Tetracyclic antidepressants Tetracyclic antidepressants (TeCAs) are a class of antidepressants that were first introduced in the 1970s. They are named after their chemical structure, which contains four rings of atoms, and are closely related to tricyclic antidepressants (TCAs), which contain three rings of atoms. Monoamine oxidase inhibitors Monoamine oxidase inhibitors (MAOIs) are chemicals that inhibit the activity of the monoamine oxidase enzyme family. They have a long history of use as medications prescribed for the treatment of depression. They are particularly effective in treating atypical depression. They are also used in the treatment of Parkinson's disease and several other disorders. Because of potentially lethal dietary and drug interactions, MAOIs have historically been reserved as a last line of treatment, used only when other classes of antidepressant drugs (for example selective serotonin reuptake inhibitors and tricyclic antidepressants) have failed. MAOIs have been found to be effective in the treatment of panic disorder with agoraphobia, social phobia, atypical depression or mixed anxiety and depression, bulimia, and post-traumatic stress disorder, as well as borderline personality disorder. MAOIs appear to be particularly effective in the management of bipolar depression according to a retrospective-analysis. 
There are reports of MAOI efficacy in obsessive–compulsive disorder (OCD), trichotillomania, dysmorphophobia, and avoidant personality disorder, but these findings come from uncontrolled case reports. MAOIs can also be used in the treatment of Parkinson's disease by targeting MAO-B in particular (therefore affecting dopaminergic neurons), as well as providing an alternative for migraine prophylaxis. Inhibition of both MAO-A and MAO-B is used in the treatment of clinical depression and anxiety disorders. NMDA receptor antagonists NMDA receptor antagonists like ketamine and esketamine are rapid-acting antidepressants and seem to work via blockade of the ionotropic glutamate NMDA receptor. Other NMDA antagonists may also play a role in treating depression. The combination medication dextromethorphan/bupropion (Auvelity), which contains the NMDA receptor antagonist dextromethorphan, was approved in the United States in 2022 for treating major depressive disorder. Others See the list of antidepressants and management of depression for other drugs that are not specifically characterized. Adjuncts Adjunct medications are an umbrella category of substances that increase the potency of antidepressants or otherwise "enhance" them. They work by affecting variables very close to the antidepressant, sometimes affecting a completely different mechanism of action. This may be attempted when depression treatments have not been successful in the past. Common types of adjunct medication techniques generally fall into the following categories: Two or more antidepressants taken together, from either the same or different classes (affecting the same area of the brain, often at a much higher level). An antipsychotic combined with an antidepressant, particularly atypical antipsychotics such as aripiprazole (Abilify), quetiapine (Seroquel), olanzapine (Zyprexa), and risperidone (Risperdal). It is unknown if undergoing psychological therapy at the same time as taking antidepressants enhances the antidepressive effect of the medication. Less common adjuncts Lithium has been used to augment antidepressant therapy in those who have failed to respond to antidepressants alone. Furthermore, lithium dramatically decreases the suicide risk in recurrent depression. There is some evidence for the addition of a thyroid hormone, triiodothyronine, in patients with normal thyroid function. Psychopharmacologists have also tried adding a stimulant, in particular D-amphetamine. However, the use of stimulants in cases of treatment-resistant depression is relatively controversial. A review article published in 2007 found psychostimulants may be effective in treatment-resistant depression with concomitant antidepressant therapy, but a more certain conclusion could not be drawn due to substantial deficiencies in the studies available for consideration, and the somewhat contradictory nature of their results. History The idea of an antidepressant, if melancholy is thought synonymous with depression, existed at least as early as the 1599 pamphlet A pil to purge melancholie or, A preprative to a pvrgation: or, Topping, copping, and capping: take either or whether: or, Mash them, and squash them, and dash them, and diddle come derrie come daw them, all together... Thomas d'Urfey's Wit and Mirth: Or Pills to Purge Melancholy, the title of a large collection of songs, was published between 1698 and 1720. Before the 1950s, opioids and amphetamines were commonly used as antidepressants. Amphetamine has been described as the first antidepressant. 
Use of opioids and amphetamines for depression was later restricted due to their addictive nature and side effects. Extracts from the herb St John's wort have been used as a "nerve tonic" to alleviate depression. St John's wort fell out of favor in most countries through the 19th and 20th centuries, except in Germany, where Hypericum extracts were eventually licensed, packaged, and prescribed. Small-scale efficacy trials were carried out in the 1970s and 1980s, and attention grew in the 1990s following a meta-analysis. It remains an over-the-counter (OTC) supplement in most countries. Lead contamination associated with its usage has been seen as concerning, as lead levels in women in the United States taking St. John's wort are elevated by about 20% on average. Research continues to investigate its active component hyperforin, and to further understand its mode of action. Isoniazid, iproniazid, and imipramine In 1951, Irving Selikoff and Edward H. Robitzek, working out of Sea View Hospital on Staten Island, began clinical trials on two new anti-tuberculosis agents developed by Hoffman-LaRoche, isoniazid, and iproniazid. Only patients with a poor prognosis were initially treated. Nevertheless, their condition improved dramatically. Selikoff and Robitzek noted "a subtle general stimulation ... the patients exhibited renewed vigor and indeed this occasionally served to introduce disciplinary problems." The promise of a cure for tuberculosis in the Sea View Hospital trials was excitedly discussed in the mainstream press. In 1952, learning of the stimulating side effects of isoniazid, the Cincinnati psychiatrist Max Lurie tried it on his patients. In the following year, he and Harry Salzer reported that isoniazid improved depression in two-thirds of their patients, so they then coined the term antidepressant to refer to its action. A similar incident took place in Paris, where Jean Delay, head of psychiatry at Sainte-Anne Hospital, heard of this effect from his pulmonology colleagues at Cochin Hospital. In 1952 (before Lurie and Salzer), Delay, with the resident Jean-Francois Buisson, reported the positive effect of isoniazid on depressed patients. The mode of antidepressant action of isoniazid is still unclear. It is speculated that its effect is due to the inhibition of diamine oxidase, coupled with a weak inhibition of monoamine oxidase A. Selikoff and Robitzek also experimented with another anti-tuberculosis drug, iproniazid; it showed a greater psychostimulant effect, but more pronounced toxicity. Later, Jackson Smith, Gordon Kamman, George E. Crane, and Frank Ayd, described the psychiatric applications of iproniazid. Ernst Zeller found iproniazid to be a potent monoamine oxidase inhibitor. Nevertheless, iproniazid remained relatively obscure until Nathan S. Kline, the influential head of research at Rockland State Hospital, began to popularize it in the medical and popular press as a "psychic energizer". Roche put a significant marketing effort behind iproniazid. Its sales grew until it was recalled in 1961, due to reports of lethal hepatotoxicity. The antidepressant effect of a tricyclic antidepressant, a three-ringed compound, was first discovered in 1957 by Roland Kuhn in a Swiss psychiatric hospital. Antihistamine derivatives were used to treat surgical shock and later as neuroleptics. Although in 1955, reserpine was shown to be more effective than a placebo in alleviating anxious depression, neuroleptics were being developed as sedatives and antipsychotics. 
Attempting to improve the effectiveness of chlorpromazine, Kuhn, in conjunction with the Geigy Pharmaceutical Company, discovered the compound "G 22355", later renamed imipramine. Imipramine had a beneficial effect on patients with depression who showed mental and motor retardation. In 1955–56, Kuhn described his new compound as a "thymoleptic", "taking hold of the emotions", in contrast with neuroleptics, "taking hold of the nerves". These gradually became established, resulting in the patent and manufacture in the US in 1951 by Häfliger and Schindler. Antidepressants became prescription drugs in the 1950s. It was estimated that no more than fifty to one hundred individuals per million had the kind of depression that these new drugs would treat, and pharmaceutical companies were not enthusiastic about marketing for this small market. Sales through the 1960s remained poor compared to the sales of tranquilizers, which were being marketed for different uses. Imipramine remained in common use and numerous successors were introduced. The use of monoamine oxidase inhibitors (MAOIs) increased after the development and introduction of "reversible" forms that inhibit only the MAO-A subtype, making these drugs safer to use. By the 1960s, it was thought that the mode of action of tricyclics was to inhibit norepinephrine reuptake. However, norepinephrine reuptake became associated with stimulating effects. Later tricyclics were thought to affect serotonin, as proposed in 1969 by Carlsson and Lindqvist as well as Lapin and Oxenkrug. Second-generation antidepressants Researchers began a process of rational drug design to isolate antihistamine-derived compounds that would selectively target these systems. The first such compound to be patented was zimelidine in 1971, while the first released clinically was indalpine. Fluoxetine was approved for commercial use by the US Food and Drug Administration (FDA) in 1988, becoming the first blockbuster SSRI. Fluoxetine was developed at Eli Lilly and Company in the early 1970s by Bryan Molloy, Klaus Schmiegel, David T. Wong, and others. SSRIs became known as "novel antidepressants" along with other newer drugs such as SNRIs and NRIs with various selective effects. Rapid-acting antidepressants Esketamine (brand name Spravato), the first rapid-acting antidepressant to be approved for clinical treatment of depression, was introduced for this indication in March 2019 in the United States. Research A 2016 randomized controlled trial evaluated the rapid antidepressant effects of the psychedelic ayahuasca in treatment-resistant depression with a positive outcome. In 2018, the FDA granted Breakthrough Therapy Designation for psilocybin-assisted therapy for treatment-resistant depression and in 2019, the FDA granted Breakthrough Therapy Designation for psilocybin therapy treating major depressive disorder. Publication bias and aged research A 2018 systematic review published in The Lancet comparing the efficacy of 21 different first- and second-generation antidepressants found that antidepressant drugs tended to perform better and cause fewer adverse events when they were novel or experimental treatments compared to when they were evaluated again years later. Unpublished data were also associated with smaller positive effect sizes. However, the review did not find evidence of bias associated with industry-funded research. 
Society and culture Prescription trends United Kingdom In the UK, figures reported in 2010 indicated that the number of antidepressants prescribed by the National Health Service (NHS) almost doubled over a decade. Further analysis published in 2014 showed that the number of antidepressants dispensed annually in the community went up by 25 million in the 14 years between 1998 and 2012, rising from 15 million to 40 million. Nearly 50% of this rise occurred in the four years after the Great Recession, during which time the annual increase in prescriptions rose from 6.7% to 8.5%. These sources also suggest that, aside from the recession, other factors that may influence changes in prescribing rates include improvements in diagnosis, a reduction of the stigma surrounding mental health, broader prescribing trends, GP characteristics, geographical location, and housing status. Another factor that may contribute to increasing consumption of antidepressants is the fact that these medications are now used for other conditions, including social anxiety and post-traumatic stress disorder. Between 2005 and 2017, the number of adolescents (12 to 17 years) in England who were prescribed antidepressants doubled. On the other hand, antidepressant prescriptions for children aged 5–11 in England decreased between 1999 and 2017. From April 2015, prescriptions increased for both age groups (for people aged 0 to 17) and peaked during the first COVID lockdown in March 2020. According to National Institute for Health and Care Excellence (NICE) guidelines, antidepressants for children and adolescents with depression and obsessive-compulsive disorder (OCD) should be prescribed together with therapy and after being assessed by a child and adolescent psychiatrist. However, between 2006 and 2017, only 1 in 4 of 12–17 year-olds who were prescribed an SSRI by their GP had seen a specialist psychiatrist and 1 in 6 had seen a pediatrician. Half of these prescriptions were for depression and 16% for anxiety, the latter not being a licensed indication for antidepressants. Among the suggested possible reasons why GPs are not following the guidelines are the difficulties of accessing talking therapies, long waiting lists, and the urgency of treatment. According to some researchers, strict adherence to treatment guidelines would limit access to effective medication for young people with mental health problems. United States In the United States, antidepressants were the most commonly prescribed medication in 2013. Of the estimated 16 million "long term" (over 24 months) users, roughly 70 percent are female. About 16.5% of white people in the United States took antidepressants, compared with 5.6% of black people in the United States. United States: The most commonly prescribed antidepressants in the US retail market in 2010 were: Netherlands: In the Netherlands, paroxetine is the most prescribed antidepressant, followed by amitriptyline, citalopram and venlafaxine. Adherence Worldwide, 30% to 60% of people did not follow their practitioner's instructions about taking their antidepressants, and in the US, it appeared that around 50% of people did not take their antidepressants as directed by their practitioner. When people fail to take their antidepressants, there is a greater risk that the drug will not help, that symptoms get worse, that they miss work or are less productive at work, and that the person may be hospitalized. 
Social science perspective Some academics have highlighted the need to examine the use of antidepressants and other medical treatments in cross-cultural terms, because various cultures prescribe and observe different manifestations, symptoms, meanings, and associations of depression and other medical conditions within their populations. These cross-cultural discrepancies, it has been argued, then have implications for the perceived efficacy and use of antidepressants and other strategies in the treatment of depression in these different cultures. In India, antidepressants are largely seen as tools to combat marginality, promising the individual the ability to reintegrate into society through their use, a view and association not observed in the West. Environmental impacts Because most antidepressants function by inhibiting the reuptake of the neurotransmitters serotonin, dopamine, and norepinephrine, these drugs can interfere with natural neurotransmitter levels in other organisms impacted by indirect exposure. The antidepressants fluoxetine and sertraline have been detected in aquatic organisms residing in effluent-dominated streams. The presence of antidepressants in surface waters and aquatic organisms has caused concern because ecotoxicological effects on aquatic organisms due to fluoxetine exposure have been demonstrated. Coral reef fish have been demonstrated to modulate aggressive behavior through serotonin. Artificially increasing serotonin levels in crustaceans can temporarily reverse social status and turn subordinates into aggressive and territorial dominant males. Exposure to fluoxetine has been demonstrated to increase serotonergic activity in fish, subsequently reducing aggressive behavior. Perinatal exposure to fluoxetine at relevant environmental concentrations has been shown to lead to significant modifications of memory processing in 1-month-old cuttlefish. This impairment may disadvantage cuttlefish and decrease their survival. Somewhat less than 10% of orally administered fluoxetine is excreted from humans unchanged or as glucuronide.
Biology and health sciences
Psychiatric drugs
Health
2392
https://en.wikipedia.org/wiki/Anode
Anode
An anode usually is an electrode of a polarized electrical device through which conventional current enters the device. This contrasts with a cathode, which is usually an electrode of the device through which conventional current leaves the device. A common mnemonic is ACID, for "anode current into device". The direction of conventional current (the flow of positive charges) in a circuit is opposite to the direction of electron flow, so (negatively charged) electrons flow from the anode of a galvanic cell, into an outside or external circuit connected to the cell. For example, the end of a household battery marked with a "+" is the cathode (while discharging). In both a galvanic cell and an electrolytic cell, the anode is the electrode at which the oxidation reaction occurs. In a galvanic cell the anode is the wire or plate having excess negative charge as a result of the oxidation reaction. In an electrolytic cell, the anode is the wire or plate upon which excess positive charge is imposed. As a result of this, anions will tend to move towards the anode where they will undergo oxidation. Historically, the anode of a galvanic cell was also known as the zincode because it was usually composed of zinc. Charge flow The terms anode and cathode are not defined by the voltage polarity of electrodes, but are usually defined by the direction of current through the electrode. An anode usually is the electrode of a device through which conventional current (positive charge) flows into the device from an external circuit, while a cathode usually is the electrode through which conventional current flows out of the device. In general, if the current through the electrodes reverses direction, as occurs for example in a rechargeable battery when it is being charged, the roles of the electrodes as anode and cathode are reversed. However, the definition of anode and cathode is different for electrical devices such as diodes and vacuum tubes where the electrode naming is fixed and does not depend on the actual charge flow (current). These devices usually allow substantial current flow in one direction but negligible current in the other direction. Therefore, the electrodes are named based on the direction of this "forward" current. In a diode the anode is the terminal through which current enters and the cathode is the terminal through which current leaves, when the diode is forward biased. The names of the electrodes do not change in cases where reverse current flows through the device. Similarly, in a vacuum tube only one electrode can thermionically emit electrons into the evacuated tube, so electrons can only enter the device from the external circuit through the heated electrode. Therefore, this electrode is permanently named the cathode, and the electrode through which the electrons exit the tube is named the anode. Conventional current depends not only on the direction the charge carriers move, but also the carriers' electric charge. The currents outside the device are usually carried by electrons in a metal conductor. Since electrons have a negative charge, the direction of electron flow is opposite to the direction of conventional current. Consequently, electrons leave the device through the anode and enter the device through the cathode. Examples The polarity of voltage on an anode with respect to an associated cathode varies depending on the device type and on its operating mode. 
In the following examples, the anode is negative in a device that provides power, and positive in a device that consumes power: In a discharging battery or galvanic cell (diagram on left), the anode is the negative terminal: it is where conventional current flows into the cell. This inward current is carried externally by electrons moving outwards. In a recharging battery, or an electrolytic cell, the anode is the positive terminal imposed by an external source of potential difference. The current through a recharging battery is opposite to the direction of current during discharge; in other words, the electrode which was the cathode during battery discharge becomes the anode while the battery is recharging. In battery engineering, it is common to designate one electrode of a rechargeable battery the anode and the other the cathode according to the roles the electrodes play when the battery is discharged. This is despite the fact that the roles are reversed when the battery is charged. When this is done, "anode" simply designates the negative terminal of the battery and "cathode" designates the positive terminal. In a diode, the anode is the terminal represented by the tail of the arrow symbol (flat side of the triangle), where conventional current flows into the device. Note the electrode naming for diodes is always based on the direction of the forward current (that of the arrow, in which the current flows "most easily"), even for types such as Zener diodes or solar cells where the current of interest is the reverse current. In vacuum tubes or gas-filled tubes, the anode is the terminal where current enters the tube. Etymology The word was coined in 1834 from the Greek ἄνοδος (anodos), 'ascent', by William Whewell, who had been consulted by Michael Faraday over some new names needed to complete a paper on the recently discovered process of electrolysis. In that paper Faraday explained that when an electrolytic cell is oriented so that electric current traverses the "decomposing body" (electrolyte) in a direction "from East to West, or, which will strengthen this help to the memory, that in which the sun appears to move", the anode is where the current enters the electrolyte, on the East side: "ano upwards, odos a way; the way which the sun rises". The use of 'East' to mean the 'in' direction (actually 'in' → 'East' → 'sunrise' → 'up') may appear contrived. Previously, as related in the first reference cited above, Faraday had used the more straightforward term "eisode" (the doorway where the current enters). His motivation for changing it to something meaning 'the East electrode' (other candidates had been "eastode", "oriode" and "anatolode") was to make it immune to a possible later change in the direction convention for current, whose exact nature was not known at the time. The reference he used to this effect was the Earth's magnetic field direction, which at that time was believed to be invariant. He fundamentally defined his arbitrary orientation for the cell as being that in which the internal current would run parallel to and in the same direction as a hypothetical magnetizing current loop around the local line of latitude which would induce a magnetic dipole field oriented like the Earth's. This made the internal current East to West as previously mentioned, but in the event of a later convention change it would have become West to East, so that the East electrode would not have been the 'way in' any more. 
Therefore, "eisode" would have become inappropriate, whereas "anode" meaning 'East electrode' would have remained correct with respect to the unchanged direction of the actual phenomenon underlying the current, then unknown but, he thought, unambiguously defined by the magnetic reference. In retrospect the name change was unfortunate, not only because the Greek roots alone do not reveal the anode's function any more, but more importantly because as we now know, the Earth's magnetic field direction on which the "anode" term is based is subject to reversals whereas the current direction convention on which the "eisode" term was based has no reason to change in the future. Since the later discovery of the electron, an easier to remember and more durably correct technically although historically false, etymology has been suggested: anode, from the Greek anodos, 'way up', 'the way (up) out of the cell (or other device) for electrons'. Electrolytic anode In electrochemistry, the anode is where oxidation occurs and is the positive polarity contact in an electrolytic cell. At the anode, anions (negative ions) are forced by the electrical potential to react chemically and give off electrons (oxidation) which then flow up and into the driving circuit. Mnemonics: LEO Red Cat (Loss of Electrons is Oxidation, Reduction occurs at the Cathode), or AnOx Red Cat (Anode Oxidation, Reduction Cathode), or OIL RIG (Oxidation is Loss, Reduction is Gain of electrons), or Roman Catholic and Orthodox (Reduction – Cathode, anode – Oxidation), or LEO the lion says GER (Losing electrons is Oxidation, Gaining electrons is Reduction). This process is widely used in metals refining. For example, in copper refining, copper anodes, an intermediate product from the furnaces, are electrolysed in an appropriate solution (such as sulfuric acid) to yield high purity (99.99%) cathodes. Copper cathodes produced using this method are also described as electrolytic copper. Historically, when non-reactive anodes were desired for electrolysis, graphite (called plumbago in Faraday's time) or platinum were chosen. They were found to be some of the least reactive materials for anodes. Platinum erodes very slowly compared to other materials, and graphite crumbles and can produce carbon dioxide in aqueous solutions but otherwise does not participate in the reaction. Battery or galvanic cell anode In a battery or galvanic cell, the anode is the negative electrode from which electrons flow out towards the external part of the circuit. Internally the positively charged cations are flowing away from the anode (even though it is negative and therefore would be expected to attract them, this is due to electrode potential relative to the electrolyte solution being different for the anode and cathode metal/electrolyte systems); but, external to the cell in the circuit, electrons are being pushed out through the negative contact and thus through the circuit by the voltage potential as would be expected. Battery manufacturers may regard the negative electrode as the anode, particularly in their technical literature. Though from an electrochemical viewpoint incorrect, it does resolve the problem of which electrode is the anode in a secondary (or rechargeable) cell. Using the traditional definition, the anode switches ends between charge and discharge cycles. Vacuum tube anode In electronic vacuum devices such as a cathode-ray tube, the anode is the positively charged electron collector. 
In a tube, the anode is a charged positive plate that collects the electrons emitted by the cathode through electric attraction. It also accelerates the flow of these electrons. Diode anode In a semiconductor diode, the anode is the P-doped layer which initially supplies holes to the junction. In the junction region, the holes supplied by the anode combine with electrons supplied from the N-doped region, creating a depleted zone. As the P-doped layer supplies holes to the depleted region, negative dopant ions are left behind in the P-doped layer ('P' for positive charge-carrier ions). This creates a base negative charge on the anode. When a positive voltage is applied to anode of the diode from the circuit, more holes are able to be transferred to the depleted region, and this causes the diode to become conductive, allowing current to flow through the circuit. The terms anode and cathode should not be applied to a Zener diode, since it allows flow in either direction, depending on the polarity of the applied potential (i.e. voltage). Sacrificial anode In cathodic protection, a metal anode that is more reactive to the corrosive environment than the metal system to be protected is electrically linked to the protected system. As a result, the metal anode partially corrodes or dissolves instead of the metal system. As an example, an iron or steel ship's hull may be protected by a zinc sacrificial anode, which will dissolve into the seawater and prevent the hull from being corroded. Sacrificial anodes are particularly needed for systems where a static charge is generated by the action of flowing liquids, such as pipelines and watercraft. Sacrificial anodes are also generally used in tank-type water heaters. In 1824 to reduce the impact of this destructive electrolytic action on ships hulls, their fastenings and underwater equipment, the scientist-engineer Humphry Davy developed the first and still most widely used marine electrolysis protection system. Davy installed sacrificial anodes made from a more electrically reactive (less noble) metal attached to the vessel hull and electrically connected to form a cathodic protection circuit. A less obvious example of this type of protection is the process of galvanising iron. This process coats iron structures (such as fencing) with a coating of zinc metal. As long as the zinc remains intact, the iron is protected from the effects of corrosion. Inevitably, the zinc coating becomes breached, either by cracking or physical damage. Once this occurs, corrosive elements act as an electrolyte and the zinc/iron combination as electrodes. The resultant current ensures that the zinc coating is sacrificed but that the base iron does not corrode. Such a coating can protect an iron structure for a few decades, but once the protecting coating is consumed, the iron rapidly corrodes. If, conversely, tin is used to coat steel, when a breach of the coating occurs it actually accelerates oxidation of the iron. Impressed current anode Another cathodic protection is used on the impressed current anode. It is made from titanium and covered with mixed metal oxide. Unlike the sacrificial anode rod, the impressed current anode does not sacrifice its structure. This technology uses an external current provided by a DC source to create the cathodic protection. Impressed current anodes are used in larger structures like pipelines, boats, city water tower, water heaters and more. Related antonym The opposite of an anode is a cathode. 
When the current through the device is reversed, the electrodes switch functions, so the anode becomes the cathode and the cathode becomes anode, as long as the reversed current is applied. The exception is diodes where electrode naming is always based on the forward current direction.
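As a concrete illustration of the sign conventions discussed above, the standard textbook zinc–copper galvanic cell (an illustrative example, not one drawn from this article) has oxidation at the zinc anode and reduction at the copper cathode, with electrons leaving the cell through the anode into the external circuit:

\mathrm{Anode\ (oxidation):}\quad \mathrm{Zn(s)} \rightarrow \mathrm{Zn^{2+}(aq)} + 2e^{-}
\mathrm{Cathode\ (reduction):}\quad \mathrm{Cu^{2+}(aq)} + 2e^{-} \rightarrow \mathrm{Cu(s)}

During discharge the zinc electrode is therefore both the anode and the negative terminal; if the same cell were driven in reverse as an electrolytic cell, oxidation, and hence the anode role, would move to the other electrode.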
Physical sciences
Electrochemistry
Chemistry
2393
https://en.wikipedia.org/wiki/Analog%20television
Analog television
Analog television is the original television technology that uses analog signals to transmit video and audio. In an analog television broadcast, the brightness, colors and sound are represented by amplitude, phase and frequency of an analog signal. Analog signals vary over a continuous range of possible values which means that electronic noise and interference may be introduced. Thus with analog, a moderately weak signal becomes snowy and subject to interference. In contrast, picture quality from a digital television (DTV) signal remains good until the signal level drops below a threshold where reception is no longer possible or becomes intermittent. Analog television may be wireless (terrestrial television and satellite television) or can be distributed over a cable network as cable television. All broadcast television systems used analog signals before the arrival of DTV. Motivated by the lower bandwidth requirements of compressed digital signals, beginning just after the year 2000, a digital television transition is proceeding in most countries of the world, with different deadlines for the cessation of analog broadcasts. Several countries have made the switch already, with the remaining countries still in progress mostly in Africa, Asia, and South America. Development The earliest systems of analog television were mechanical television systems that used spinning disks with patterns of holes punched into the disc to scan an image. A similar disk reconstructed the image at the receiver. Synchronization of the receiver disc rotation was handled through sync pulses broadcast with the image information. Camera systems used similar spinning discs and required intensely bright illumination of the subject for the light detector to work. The reproduced images from these mechanical systems were dim, very low resolution and flickered severely. Analog television did not begin in earnest as an industry until the development of the cathode-ray tube (CRT), which uses a focused electron beam to trace lines across a phosphor coated surface. The electron beam could be swept across the screen much faster than any mechanical disc system, allowing for more closely spaced scan lines and much higher image resolution. Also, far less maintenance was required of an all-electronic system compared to a mechanical spinning disc system. All-electronic systems became popular with households after World War II. Standards Broadcasters of analog television encode their signal using different systems. The official systems of transmission were defined by the ITU in 1961 as: A, B, C, D, E, F, G, H, I, K, K1, L, M and N. These systems determine the number of scan lines, frame rate, channel width, video bandwidth, video-audio separation, and so on. A color encoding scheme (NTSC, PAL, or SECAM) could be added to the base monochrome signal. Using RF modulation the signal is then modulated onto a very high frequency (VHF) or ultra high frequency (UHF) carrier wave. Each frame of a television image is composed of scan lines drawn on the screen. The lines are of varying brightness; the whole set of lines is drawn quickly enough that the human eye perceives it as one image. The process repeats and the next sequential frame is displayed, allowing the depiction of motion. The analog television signal contains timing and synchronization information so that the receiver can reconstruct a two-dimensional moving image from a one-dimensional time-varying signal. 
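As an illustrative calculation (the parameters are the widely documented 625-line/25-frame and 525-line/29.97-frame systems, not figures taken from this article), the line-scanning rate and the duration of one scan line follow directly from the line count and frame rate defined by each system:

# Illustrative sketch: derive line rate and line period from a system's
# scan-line count and frame rate.
def line_timing(lines_per_frame, frames_per_second):
    line_rate_hz = lines_per_frame * frames_per_second   # scan lines drawn per second
    line_period_us = 1e6 / line_rate_hz                   # duration of one scan line in microseconds
    return line_rate_hz, line_period_us

for name, lines, fps in [("625/25 (e.g. System B/G)", 625, 25.0),
                         ("525/29.97 (System M)", 525, 30000 / 1001)]:
    rate, period = line_timing(lines, fps)
    print(f"{name}: about {rate:.0f} lines per second, {period:.1f} microseconds per line")

# Expected output (approximately):
#   625/25 (e.g. System B/G): about 15625 lines per second, 64.0 microseconds per line
#   525/29.97 (System M): about 15734 lines per second, 63.6 microseconds per line

The line period is the budget into which active picture, the sync pulse, and the blanking intervals described later must all fit.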
The first commercial television systems were black-and-white; the beginning of color television was in the 1950s. A practical television system needs to take luminance, chrominance (in a color system), synchronization (horizontal and vertical), and audio signals, and broadcast them over a radio transmission. The transmission system must include a means of television channel selection. Analog broadcast television systems come in a variety of frame rates and resolutions. Further differences exist in the frequency and modulation of the audio carrier. The monochrome combinations still existing in the 1950s were standardized by the International Telecommunication Union (ITU) as capital letters A through N. When color television was introduced, the chrominance information was added to the monochrome signals in a way that black and white televisions ignore. In this way backward compatibility was achieved. There are three standards for the way the additional color information can be encoded and transmitted. The first was the American NTSC system. The European and Australian PAL and the French and former Soviet Union SECAM standards were developed later and attempt to cure certain defects of the NTSC system. PAL's color encoding is similar to the NTSC systems. SECAM, though, uses a different modulation approach than PAL or NTSC. PAL had a late evolution called PALplus, allowing widescreen broadcasts while remaining fully compatible with existing PAL equipment. In principle, all three color encoding systems can be used with any scan line/frame rate combination. Therefore, in order to describe a given signal completely, it is necessary to quote the color system plus the broadcast standard as a capital letter. For example, the United States, Canada, Mexico and South Korea used (or use) NTSC-M, Japan used NTSC-J, the UK used PAL-I, France used SECAM-L, much of Western Europe and Australia used (or use) PAL-B/G, most of Eastern Europe uses SECAM-D/K or PAL-D/K and so on. Not all of the possible combinations exist. NTSC is only used with system M, even though there were experiments with NTSC-A (405 line) in the UK and NTSC-N (625 line) in part of South America. PAL is used with a variety of 625-line standards (B, G, D, K, I, N) but also with the North American 525-line standard, accordingly named PAL-M. Likewise, SECAM is used with a variety of 625-line standards. For this reason, many people refer to any 625/25 type signal as PAL and to any 525/30 signal as NTSC, even when referring to digital signals; for example, on DVD-Video, which does not contain any analog color encoding, and thus no PAL or NTSC signals at all. Although a number of different broadcast television systems are in use worldwide, the same principles of operation apply. Displaying an image A cathode-ray tube (CRT) television displays an image by scanning a beam of electrons across the screen in a pattern of horizontal lines known as a raster. At the end of each line, the beam returns to the start of the next line; at the end of the last line, the beam returns to the beginning of the first line at the top of the screen. As it passes each point, the intensity of the beam is varied, varying the luminance of that point. A color television system is similar except there are three beams that scan together and an additional signal known as chrominance controls the color of the spot. 
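The raster principle described above can be sketched in a few lines of code: a two-dimensional grid of luminance samples is read out line by line into a single time-ordered sequence, which is essentially what the camera does and what the receiver reverses. This is a conceptual sketch only, not a model of any actual broadcast standard; the function and variable names are invented for illustration.

# Conceptual sketch of raster scanning: flatten a 2-D luminance image into a
# 1-D time-varying signal by reading it out line by line, top to bottom.
def raster_scan(image):
    """image: list of rows, each row a list of luminance samples (0.0 to 1.0)."""
    signal = []
    for row in image:          # one horizontal scan line per row
        signal.extend(row)     # beam sweeps left to right across the line
        # (a real system would insert a horizontal retrace and sync interval here)
    return signal

def rebuild(signal, samples_per_line):
    """Receiver side: chop the 1-D signal back into scan lines."""
    return [signal[i:i + samples_per_line]
            for i in range(0, len(signal), samples_per_line)]

frame = [[0.0, 0.5, 1.0],
         [1.0, 0.5, 0.0]]      # tiny two-line "image"
assert rebuild(raster_scan(frame), 3) == frame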
When analog television was developed, no affordable technology for storing video signals existed; the luminance signal had to be generated and transmitted at the same time at which it is displayed on the CRT. It was therefore essential to keep the raster scanning in the camera (or other device for producing the signal) in exact synchronization with the scanning in the television. The physics of the CRT require that a finite time interval be allowed for the spot to move back to the start of the next line (horizontal retrace) or the start of the screen (vertical retrace). The timing of the luminance signal must allow for this. The human eye has a characteristic called phi phenomenon. Quickly displaying successive scan images creates the illusion of smooth motion. Flickering of the image can be partially solved using a long persistence phosphor coating on the CRT so that successive images fade slowly. However, slow phosphor has the negative side-effect of causing image smearing and blurring when there is rapid on-screen motion occurring. The maximum frame rate depends on the bandwidth of the electronics and the transmission system, and the number of horizontal scan lines in the image. A frame rate of 25 or 30 hertz is a satisfactory compromise, while the process of interlacing two video fields of the picture per frame is used to build the image. This process doubles the apparent number of video frames per second and further reduces flicker and other defects in transmission. Receiving signals The television system for each country will specify a number of television channels within the UHF or VHF frequency ranges. A channel actually consists of two signals: the picture information is transmitted using amplitude modulation on one carrier frequency, and the sound is transmitted with frequency modulation at a frequency at a fixed offset (typically 4.5 to 6 MHz) from the picture signal. The channel frequencies chosen represent a compromise between allowing enough bandwidth for video (and hence satisfactory picture resolution), and allowing enough channels to be packed into the available frequency band. In practice a technique called vestigial sideband is used to reduce the channel spacing, which would be nearly twice the video bandwidth if pure AM was used. Signal reception is invariably done via a superheterodyne receiver: the first stage is a tuner which selects a television channel and frequency-shifts it to a fixed intermediate frequency (IF). The signal amplifier performs amplification to the IF stages from the microvolt range to fractions of a volt. Extracting the sound At this point the IF signal consists of a video carrier signal at one frequency and the sound carrier at a fixed offset in frequency. A demodulator recovers the video signal. Also at the output of the same demodulator is a new frequency modulated sound carrier at the offset frequency. In some sets made before 1948, this was filtered out, and the sound IF of about 22 MHz was sent to an FM demodulator to recover the basic sound signal. In newer sets, this new carrier at the offset frequency was allowed to remain as intercarrier sound, and it was sent to an FM demodulator to recover the basic sound signal. One particular advantage of intercarrier sound is that when the front panel fine tuning knob is adjusted, the sound carrier frequency does not change with the tuning, but stays at the above-mentioned offset frequency. Consequently, it is easier to tune the picture without losing the sound. 
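A short numerical sketch makes the intercarrier arrangement concrete. It uses values commonly quoted for System M receivers (a 45.75 MHz video IF and the 4.5 MHz picture-to-sound spacing mentioned above); the example RF channel frequency and the amount of mistuning are assumptions chosen for illustration.

VIDEO_IF_MHZ = 45.75        # video carrier intermediate frequency (System M style)
AUDIO_OFFSET_MHZ = 4.5      # audio carrier sits 4.5 MHz above the video carrier

def if_frequencies(video_rf_mhz, lo_error_mhz=0.0):
    # The local oscillator sits VIDEO_IF_MHZ above the video carrier (plus any
    # tuning error); mixing shifts both carriers down to the IF band.
    lo = video_rf_mhz + VIDEO_IF_MHZ + lo_error_mhz
    video_if = lo - video_rf_mhz
    audio_if = lo - (video_rf_mhz + AUDIO_OFFSET_MHZ)
    intercarrier = video_if - audio_if
    return video_if, audio_if, intercarrier

# Perfectly tuned: 45.75 MHz video IF, 41.25 MHz audio IF, 4.5 MHz beat.
assert if_frequencies(61.25) == (45.75, 41.25, 4.5)

# With 0.2 MHz of mistuning both IFs move, but the 4.5 MHz intercarrier beat
# does not, which is why the sound stays put while the picture is fine-tuned.
v, a, beat = if_frequencies(61.25, lo_error_mhz=0.2)
assert abs(beat - 4.5) < 1e-9 and abs(v - 45.95) < 1e-9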
So the FM sound carrier is then demodulated, amplified, and used to drive a loudspeaker. Until the advent of the NICAM and MTS systems, television sound transmissions were monophonic. Structure of a video signal The video carrier is demodulated to give a composite video signal containing luminance, chrominance and synchronization signals. The result is identical to the composite video format used by analog video devices such as VCRs or CCTV cameras. To ensure good linearity and thus fidelity, consistent with affordable manufacturing costs of transmitters and receivers, the video carrier is never modulated to the extent that it is shut off altogether. When intercarrier sound was introduced later in 1948, not completely shutting off the carrier had the side effect of allowing intercarrier sound to be economically implemented. Each line of the displayed image is transmitted using a signal as shown above. The same basic format (with minor differences mainly related to timing and the encoding of color) is used for PAL, NTSC, and SECAM television systems. A monochrome signal is identical to a color one, with the exception that the elements shown in color in the diagram (the colorburst, and the chrominance signal) are not present. The front porch is a brief (about 1.5 microsecond) period inserted between the end of each transmitted line of picture and the leading edge of the next line's sync pulse. Its purpose was to allow voltage levels to stabilise in older televisions, preventing interference between picture lines. The front porch is the first component of the horizontal blanking interval which also contains the horizontal sync pulse and the back porch. The back porch is the portion of each scan line between the end (rising edge) of the horizontal sync pulse and the start of active video. It is used to restore the black level (300 mV) reference in analog video. In signal processing terms, it compensates for the fall time and settling time following the sync pulse. In color television systems such as PAL and NTSC, this period also includes the colorburst signal. In the SECAM system, it contains the reference subcarrier for each consecutive color difference signal in order to set the zero-color reference. In some professional systems, particularly satellite links between locations, the digital audio is embedded within the line sync pulses of the video signal, to save the cost of renting a second channel. The name for this proprietary system is Sound-in-Syncs. Monochrome video signal extraction The luminance component of a composite video signal varies between 0 V and approximately 0.7 V above the black level. In the NTSC system, there is a blanking signal level used during the front porch and back porch, and a black signal level 75 mV above it; in PAL and SECAM these are identical. In a monochrome receiver, the luminance signal is amplified to drive the control grid in the electron gun of the CRT. This changes the intensity of the electron beam and therefore the brightness of the spot being scanned. Brightness and contrast controls determine the DC shift and amplification, respectively. Color video signal extraction U and V signals A color signal conveys picture information for each of the red, green, and blue components of an image. However, these are not simply transmitted as three separate signals, because: such a signal would not be compatible with monochrome receivers, an important consideration when color broadcasting was first introduced. 
It would also occupy three times the bandwidth of existing television, requiring a decrease in the number of television channels available. Instead, the RGB signals are converted into YUV form, where the Y signal represents the luminance of the colors in the image. Because the rendering of colors in this way is the goal of both monochrome film and television systems, the Y signal is ideal for transmission as the luminance signal. This ensures a monochrome receiver will display a correct picture in black and white, where a given color is reproduced by a shade of gray that correctly reflects how light or dark the original color is. The U and V signals are color difference signals. The U signal is the difference between the B signal and the Y signal, also known as B minus Y (B-Y), and the V signal is the difference between the R signal and the Y signal, also known as R minus Y (R-Y). The U signal then represents how purplish-blue or its complementary color, yellowish-green, the color is, and the V signal how purplish-red or its complementary, greenish-cyan, it is. The advantage of this scheme is that the U and V signals are zero when the picture has no color content. Since the human eye is more sensitive to detail in luminance than in color, the U and V signals can be transmitted with reduced bandwidth with acceptable results. In the receiver, a single demodulator can extract an additive combination of U plus V. An example is the X demodulator used in the X/Z demodulation system. In that same system, a second demodulator, the Z demodulator, also extracts an additive combination of U plus V, but in a different ratio. The X and Z color difference signals are further matrixed into three color difference signals, (R-Y), (B-Y), and (G-Y). The combinations of usually two, but sometimes three demodulators were: In the end, further matrixing of the above color-difference signals c through f yielded the three color-difference signals, (R-Y), (B-Y), and (G-Y). The R, G, and B signals in the receiver needed for the display device (CRT, Plasma display, or LCD display) are electronically derived by matrixing as follows: R is the additive combination of (R-Y) with Y, G is the additive combination of (G-Y) with Y, and B is the additive combination of (B-Y) with Y. All of this is accomplished electronically. It can be seen that in the combining process, the low-resolution portion of the Y signals cancel out, leaving R, G, and B signals able to render a low-resolution image in full color. However, the higher resolution portions of the Y signals do not cancel out, and so are equally present in R, G, and B, producing the higher-resolution image detail in monochrome, although it appears to the human eye as a full-color and full-resolution picture. NTSC and PAL systems In the NTSC and PAL color systems, U and V are transmitted by using quadrature amplitude modulation of a subcarrier. This kind of modulation applies two independent signals to one subcarrier, with the idea that both signals will be recovered independently at the receiving end. For NTSC, the subcarrier is at 3.58 MHz. For the PAL system it is at 4.43 MHz. The subcarrier itself is not included in the modulated signal (suppressed carrier), it is the subcarrier sidebands that carry the U and V information. The usual reason for using suppressed carrier is that it saves on transmitter power. In this application a more important advantage is that the color signal disappears entirely in black and white scenes. 
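That behavior, the chrominance vanishing on colorless scenes, follows directly from how U and V are formed. The sketch below implements the conversion described above using the classic luma weights (0.299, 0.587, 0.114) and commonly quoted scale factors for the color-difference signals; the exact constants differ slightly between systems, so treat them as assumptions.

def rgb_to_yuv(r, g, b):
    # Luminance: the weighted sum a monochrome receiver displays directly.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = 0.492 * (b - y)    # scaled B minus Y
    v = 0.877 * (r - y)    # scaled R minus Y
    return y, u, v

def yuv_to_rgb(y, u, v):
    b = y + u / 0.492
    r = y + v / 0.877
    g = (y - 0.299 * r - 0.114 * b) / 0.587   # recover G from Y, R and B
    return r, g, b

# Greys (r = g = b) give u = v = 0, so the chroma signal disappears on
# black-and-white scenes, as noted above.
assert all(abs(c) < 1e-9 for c in rgb_to_yuv(0.5, 0.5, 0.5)[1:])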
The subcarrier is within the bandwidth of the main luminance signal and consequently can cause undesirable artifacts on the picture, all the more noticeable in black and white receivers. A small sample of the subcarrier, the colorburst, is included in the horizontal blanking portion, which is not visible on the screen. This is necessary to give the receiver a phase reference for the modulated signal. Under quadrature amplitude modulation the modulated chrominance signal changes phase as compared to its subcarrier and also changes amplitude. The chrominance amplitude (when considered together with the Y signal) represents the approximate saturation of a color, and the chrominance phase against the subcarrier reference approximately represents the hue of the color. For particular test colors found in the test color bar pattern, exact amplitudes and phases are sometimes defined for test and troubleshooting purposes only. Due to the nature of the quadrature amplitude modulation process that created the chrominance signal, at certain times, the signal represents only the U signal, and 70 nanoseconds (NTSC) later, it represents only the V signal. About 70 nanoseconds later still, -U, and another 70 nanoseconds, -V. So to extract U, a synchronous demodulator is utilized, which uses the subcarrier to briefly gate the chroma every 280 nanoseconds, so that the output is only a train of discrete pulses, each having an amplitude that is the same as the original U signal at the corresponding time. In effect, these pulses are discrete-time analog samples of the U signal. The pulses are then low-pass filtered so that the original analog continuous-time U signal is recovered. For V, a 90-degree shifted subcarrier briefly gates the chroma signal every 280 nanoseconds, and the rest of the process is identical to that used for the U signal. Gating at any other time than those times mentioned above will yield an additive mixture of any two of U, V, -U, or -V. One of these off-axis (that is, of the U and V axis) gating methods is called I/Q demodulation. Another much more popular off-axis scheme was the X/Z demodulation system. Further matrixing recovered the original U and V signals. This scheme was actually the most popular demodulator scheme throughout the 1960s. The above process uses the subcarrier. But as previously mentioned, it was deleted before transmission, and only the chroma is transmitted. Therefore, the receiver must reconstitute the subcarrier. For this purpose, a short burst of the subcarrier, known as the colorburst, is transmitted during the back porch (re-trace blanking period) of each scan line. A subcarrier oscillator in the receiver locks onto this signal (see phase-locked loop) to achieve a phase reference, resulting in the oscillator producing the reconstituted subcarrier. NTSC uses this process unmodified. Unfortunately, this often results in poor color reproduction due to phase errors in the received signal, caused sometimes by multipath, but mostly by poor implementation at the studio end. With the advent of solid-state receivers, cable TV, and digital studio equipment for conversion to an over-the-air analog signal, these NTSC problems have been largely fixed, leaving operator error at the studio end as the sole color rendition weakness of the NTSC system. In any case, the PAL D (delay) system mostly corrects these kinds of errors by reversing the phase of the signal on each successive line, and averaging the results over pairs of lines. 
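The modulation, synchronous demodulation and pair-of-lines averaging just described can be modeled in a few lines of arithmetic. The sketch below is a simplified numerical model rather than a circuit description: the subcarrier is handled in normalized form, the low-pass filter is a plain average over whole cycles, and the U and V values and the 10-degree phase error are assumptions chosen for illustration.

import math

SAMPLES_PER_CYCLE = 100
N = SAMPLES_PER_CYCLE * 50              # simulate 50 whole subcarrier cycles
W = 2 * math.pi / SAMPLES_PER_CYCLE     # subcarrier phase advance per sample

def modulate(u, v, phase_error=0.0):
    # Suppressed-carrier quadrature modulation of constant U and V values.
    return [u * math.sin(W * n + phase_error) + v * math.cos(W * n + phase_error)
            for n in range(N)]

def demodulate(chroma):
    # Multiply by the regenerated subcarrier (in phase for U, shifted 90
    # degrees for V) and low-pass filter, here simply an average.
    u = 2 * sum(c * math.sin(W * n) for n, c in enumerate(chroma)) / N
    v = 2 * sum(c * math.cos(W * n) for n, c in enumerate(chroma)) / N
    return u, v

u, v, err = 0.3, 0.4, math.radians(10)        # a 10-degree differential phase error
u1, v1 = demodulate(modulate(u, v, err))       # line n: V transmitted normally
u2, v2 = demodulate(modulate(u, -v, err))      # line n+1: V axis inverted (PAL)
u_avg, v_avg = (u1 + u2) / 2, (v1 - v2) / 2    # average the pair, re-inverting V

# One line alone shows a hue (phase) error; the PAL pair-average restores the
# correct U-to-V ratio at the cost of slightly reduced saturation.
assert abs(v1 / u1 - v / u) > 0.1
assert abs(v_avg / u_avg - v / u) < 1e-6

In a real receiver this pair averaging is carried out by the one-line delay element described next.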
This process is achieved by the use of a 1H (where H = horizontal scan frequency) duration delay line. Phase shift errors between successive lines are therefore canceled out and the wanted signal amplitude is increased when the two in-phase (coincident) signals are re-combined. NTSC is more spectrum efficient than PAL, giving more picture detail for a given bandwidth. This is because sophisticated comb filters in receivers are more effective with NTSC's 4 color frame sequence compared to PAL's 8-field sequence. However, in the end, the larger channel width of most PAL systems in Europe still gives PAL systems the edge in transmitting more picture detail. SECAM system In the SECAM television system, U and V are transmitted on alternate lines, using simple frequency modulation of two different color subcarriers. In some analog color CRT displays, starting in 1956, the brightness control signal (luminance) is fed to the cathode connections of the electron guns, and the color difference signals (chrominance signals) are fed to the control grids connections. This simple CRT matrix mixing technique was replaced in later solid state designs of signal processing with the original matrixing method used in the 1954 and 1955 color TV receivers. Synchronization Synchronizing pulses added to the video signal at the end of every scan line and video frame ensure that the sweep oscillators in the receiver remain locked in step with the transmitted signal so that the image can be reconstructed on the receiver screen. A sync separator circuit detects the sync voltage levels and sorts the pulses into horizontal and vertical sync. Horizontal synchronization The horizontal sync pulse separates the scan lines. The horizontal sync signal is a single short pulse that indicates the start of every line. The rest of the scan line follows, with the signal ranging from 0.3 V (black) to 1 V (white), until the next horizontal or vertical synchronization pulse. The format of the horizontal sync pulse varies. In the 525-line NTSC system it is a 4.85 μs pulse at 0 V. In the 625-line PAL system the pulse is 4.7 μs at 0 V. This is lower than the amplitude of any video signal (blacker than black) so it can be detected by the level-sensitive sync separator circuit of the receiver. Two timing intervals are defined – the front porch between the end of the displayed video and the start of the sync pulse, and the back porch after the sync pulse and before the displayed video. These and the sync pulse itself are called the horizontal blanking (or retrace) interval and represent the time that the electron beam in the CRT is returning to the start of the next display line. Vertical synchronization Vertical synchronization separates the video fields. In PAL and NTSC, the vertical sync pulse occurs within the vertical blanking interval. The vertical sync pulses are made by prolonging the length of horizontal sync pulses through almost the entire length of the scan line. The vertical sync signal is a series of much longer pulses, indicating the start of a new field. The sync pulses occupy the whole line interval of a number of lines at the beginning and end of a scan; no picture information is transmitted during vertical retrace. The pulse sequence is designed to allow horizontal sync to continue during vertical retrace; it also indicates whether each field represents even or odd lines in interlaced systems (depending on whether it begins at the start of a horizontal line, or midway through). 
The format of such a signal in 525-line NTSC is: pre-equalizing pulses (6 to start scanning odd lines, 5 to start scanning even lines) long-sync pulses (5 pulses) post-equalizing pulses (5 to start scanning odd lines, 4 to start scanning even lines) Each pre- or post-equalizing pulse consists of half a scan line of black signal: 2 μs at 0 V, followed by 30 μs at 0.3 V. Each long sync pulse consists of an equalizing pulse with timings inverted: 30 μs at 0 V, followed by 2 μs at 0.3 V. In video production and computer graphics, changes to the image are often performed during the vertical blanking interval to avoid visible discontinuity of the image. If this image in the framebuffer is updated with a new image while the display is being refreshed, the display shows a mishmash of both frames, producing page tearing partway down the image. Horizontal and vertical hold The sweep (or deflection) oscillators were designed to run without a signal from the television station (or VCR, computer, or other composite video source). This allows the television receiver to display a raster and to allow an image to be presented during antenna placement. With sufficient signal strength, the receiver's sync separator circuit would split timebase pulses from the incoming video and use them to reset the horizontal and vertical oscillators at the appropriate time to synchronize with the signal from the station. The free-running oscillation of the horizontal circuit is especially critical, as the horizontal deflection circuits typically power the flyback transformer (which provides acceleration potential for the CRT) as well as the filaments for the high voltage rectifier tube and sometimes the filament(s) of the CRT itself. Without the operation of the horizontal oscillator and output stages in these television receivers, there would be no illumination of the CRT's face. The lack of precision timing components in early equipment meant that the timebase circuits occasionally needed manual adjustment. If their free-run frequencies were too far from the actual line and field rates, the circuits would not be able to follow the incoming sync signals. Loss of horizontal synchronization usually resulted in an unwatchable picture; loss of vertical synchronization would produce an image rolling up or down the screen. Older analog television receivers often provide manual controls to adjust horizontal and vertical timing. The adjustment takes the form of horizontal hold and vertical hold controls, usually on the front panel along with other common controls. These adjust the free-run frequencies of the corresponding timebase oscillators. A slowly rolling vertical picture demonstrates that the vertical oscillator is nearly synchronized with the television station but is not locking to it, often due to a weak signal or a failure in the sync separator stage not resetting the oscillator. Horizontal sync errors cause the image to be torn diagonally and repeated across the screen as if it were wrapped around a screw or a barber's pole; the greater the error, the more copies of the image will be seen at once wrapped around the barber pole. By the early 1980s the efficacy of the synchronization circuits, plus the inherent stability of the sets' oscillators, had been improved to the point where these controls were no longer necessary. Integrated Circuits which eliminated the horizontal hold control were starting to appear as early as 1969. 
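The blanking and synchronization structure described in the preceding sections can be summarized by writing each part of the waveform as (duration in microseconds, level in volts) segments. The sketch below uses rounded figures drawn from the values quoted in this article, mixing 625-line-style line timings with the 525-line field-sync pulse counts above, so the numbers are illustrative assumptions rather than a standard.

SYNC_V, BLANK_V, WHITE_V = 0.0, 0.3, 1.0

def horizontal_line(pixels, line_us=64.0, sync_us=4.7, front_us=1.5, back_us=5.7):
    # One luminance-only scan line: sync pulse, back porch, active video
    # (one segment per luminance sample), then the front porch.
    active_us = line_us - (sync_us + front_us + back_us)
    per_pixel = active_us / len(pixels)
    segments = [(sync_us, SYNC_V), (back_us, BLANK_V)]
    segments += [(per_pixel, BLANK_V + y * (WHITE_V - BLANK_V)) for y in pixels]
    segments.append((front_us, BLANK_V))
    return segments

EQUALIZING = [(2.0, SYNC_V), (30.0, BLANK_V)]   # half a line: short low, long high
SERRATED = [(30.0, SYNC_V), (2.0, BLANK_V)]     # long sync pulse: timings inverted

def vertical_sync(odd_field):
    # 525-line style field sync: pre-equalizing, long sync, post-equalizing
    # pulses, with the counts given above for odd and even fields.
    pre, post = (6, 5) if odd_field else (5, 4)
    return EQUALIZING * pre + SERRATED * 5 + EQUALIZING * post

ramp = horizontal_line([i / 15 for i in range(16)])   # a simple grey ramp
assert abs(sum(d for d, _ in ramp) - 64.0) < 1e-9     # segments fill the line
# The odd and even field sequences differ by a half line at each end, which is
# how an interlaced receiver tells the two fields apart.
assert len(vertical_sync(True)) - len(vertical_sync(False)) == 4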
The final generations of analog television receivers used IC-based designs where the receiver's timebases were derived from accurate crystal oscillators. With these sets, adjustment of the free-running frequency of either sweep oscillator was unnecessary and unavailable. Horizontal and vertical hold controls were rarely used in CRT-based computer monitors, as the quality and consistency of components were quite high by the advent of the computer age, but might be found on some composite monitors used with the 1970s–80s home or personal computers. Other technical information Components of a television system The tuner is the stage that, with the aid of an antenna, isolates the television signals received over the air. There are two types of tuners in analog television: VHF and UHF. The VHF tuner selects a VHF television channel, which consists of about 4 MHz of video bandwidth and about 100 kHz of audio bandwidth. It then amplifies the signal and converts it to a 45.75 MHz intermediate frequency (IF) amplitude-modulated video carrier and a 41.25 MHz IF frequency-modulated audio carrier. The IF amplifiers are centered at 44 MHz for optimal frequency transference of the audio and video carriers. Like radio, television has automatic gain control (AGC). This controls the gain of the IF amplifier stages and the tuner. The video amplifier and output stage are implemented using a pentode or a power transistor. The filter and demodulator separate the 45.75 MHz video from the 41.25 MHz audio, and a simple diode then detects the video signal. After the video detector, the video is amplified and sent to the sync separator and then to the picture tube. The audio signal goes to a 4.5 MHz amplifier. This amplifier prepares the signal for the 4.5 MHz detector, passing through a 4.5 MHz IF transformer on the way. In television there are two common ways of detecting FM signals. One is the ratio detector, which is simple but very hard to align. The other is the quadrature detector, invented in 1954; the first tube designed for this purpose was the 6BN6 type. It is easy to align and simple in circuitry, and the design proved good enough that it is still used in integrated-circuit form. After the detector, the signal goes to the audio amplifier. Image synchronization is achieved by transmitting negative-going pulses. The horizontal sync signal is a single short pulse that indicates the start of every line. Two timing intervals are defined – the front porch between the end of the displayed video and the start of the sync pulse, and the back porch after the sync pulse and before the displayed video. These and the sync pulse itself are called the horizontal blanking (or retrace) interval and represent the time that the electron beam in the CRT is returning to the start of the next display line. The vertical sync signal is a series of much longer pulses, indicating the start of a new field. The vertical sync pulses occupy the whole line interval of a number of lines at the beginning and end of a scan; no picture information is transmitted during vertical retrace. The pulse sequence is designed to allow horizontal sync to continue during vertical retrace. A sync separator circuit detects the sync voltage levels and extracts and conditions signals that the horizontal and vertical oscillators can use to keep in sync with the video. It also forms the AGC voltage. The horizontal and vertical oscillators form the raster on the CRT. 
They are driven by the sync separator. There are many ways to create these oscillators. The earliest is the thyratron oscillator. Although it is known to drift, it makes a perfect sawtooth wave. This sawtooth wave is so good that no linearity control is needed. This oscillator was designed for the electrostatic deflection CRTs but also found some use in electromagnetically deflected CRTs. The next oscillator developed was the blocking oscillator which uses a transformer to create a sawtooth wave. This was only used for a brief time period and never was very popular. Finally the multivibrator was probably the most successful. It needed more adjustment than the other oscillators, but it is very simple and effective. This oscillator was so popular that it was used from the early 1950s until today. Two oscillator amplifiers are needed. The vertical amplifier directly drives the yoke. Since it operates at 50 or 60 Hz and drives an electromagnet, it is similar to an audio amplifier. Because of the rapid deflection required, the horizontal oscillator requires a high-power flyback transformer driven by a high-powered tube or transistor. Additional windings on this flyback transformer typically power other parts of the system. Loss of horizontal synchronization usually results in a scrambled and unwatchable picture; loss of vertical synchronization produces an image rolling up or down the screen. Timebase circuits In an analog receiver with a CRT display sync pulses are fed to horizontal and vertical timebase circuits (commonly called sweep circuits in the United States), each consisting of an oscillator and an amplifier. These generate modified sawtooth and parabola current waveforms to scan the electron beam. Engineered waveform shapes are necessary to make up for the distance variations from the electron beam source and the screen surface. The oscillators are designed to free-run at frequencies very close to the field and line rates, but the sync pulses cause them to reset at the beginning of each scan line or field, resulting in the necessary synchronization of the beam sweep with the originating signal. The output waveforms from the timebase amplifiers are fed to the horizontal and vertical deflection coils wrapped around the CRT tube. These coils produce magnetic fields proportional to the changing current, and these deflect the electron beam across the screen. In the 1950s, the power for these circuits was derived directly from the mains supply. A simple circuit consisted of a series voltage dropper resistance and a rectifier. This avoided the cost of a large high-voltage mains supply (50 or 60 Hz) transformer. It was inefficient and produced a lot of heat. In the 1960s, semiconductor technology was introduced into timebase circuits. During the late 1960s in the UK, synchronous (with the scan line rate) power generation was introduced into solid state receiver designs. In the UK use of the simple (50 Hz) types of power, circuits were discontinued as thyristor based switching circuits were introduced. The reason for design changes arose from the electricity supply contamination problems arising from EMI, and supply loading issues due to energy being taken from only the positive half cycle of the mains supply waveform. CRT flyback power supply Most of the receiver's circuitry (at least in transistor- or IC-based designs) operates from a comparatively low-voltage DC power supply. 
However, the anode connection for a cathode-ray tube requires a very high voltage (typically 10–30 kV) for correct operation. This voltage is not directly produced by the main power supply circuitry; instead, the receiver makes use of the circuitry used for horizontal scanning. Direct current (DC), is switched through the line output transformer, and alternating current (AC) is induced into the scan coils. At the end of each horizontal scan line the magnetic field, which has built up in both transformer and scan coils by the current, is a source of latent electromagnetic energy. This stored collapsing magnetic field energy can be captured. The reverse flow, short duration, (about 10% of the line scan time) current from both the line output transformer and the horizontal scan coil is discharged again into the primary winding of the flyback transformer by the use of a rectifier which blocks this counter-electromotive force. A small value capacitor is connected across the scan-switching device. This tunes the circuit inductances to resonate at a much higher frequency. This lengthens the flyback time from the extremely rapid decay rate that would result if they were electrically isolated during this short period. One of the secondary windings on the flyback transformer then feeds this brief high-voltage pulse to a Cockcroft–Walton generator design voltage multiplier. This produces the required high-voltage supply. A flyback converter is a power supply circuit operating on similar principles. A typical modern design incorporates the flyback transformer and rectifier circuitry into a single unit with a captive output lead, known as a diode split line output transformer or an Integrated High Voltage Transformer (IHVT), so that all high-voltage parts are enclosed. Earlier designs used a separate line output transformer and a well-insulated high-voltage multiplier unit. The high frequency (15 kHz or so) of the horizontal scanning allows reasonably small components to be used. Transition to digital In many countries, over-the-air broadcast television of analog audio and analog video signals has been discontinued to allow the re-use of the television broadcast radio spectrum for other services. The first country to make a wholesale switch to digital over-the-air (terrestrial television) broadcasting was Luxembourg in 2006, followed later in 2006 by the Netherlands. The Digital television transition in the United States for high-powered transmission was completed on 12 June 2009, the date that the Federal Communications Commission (FCC) set. Almost two million households could no longer watch television because they had not prepared for the transition. The switchover had been delayed by the DTV Delay Act. While the majority of the viewers of over-the-air broadcast television in the U.S. watch full-power stations (which number about 1800), there are three other categories of television stations in the U.S.: low-power broadcasting stations, class A stations, and television translator stations. These were given later deadlines. In Japan, the switch to digital began in northeastern Ishikawa Prefecture on 24 July 2010 and ended in 43 of the country's 47 prefectures (including the rest of Ishikawa) on 24 July 2011, but in Fukushima, Iwate, and Miyagi prefectures, the conversion was delayed to 31 March 2012, due to complications from the 2011 Tōhoku earthquake and tsunami and its related nuclear accidents. In Canada, most of the larger cities turned off analog broadcasts on 31 August 2011. 
China had scheduled to end analog broadcasting between 2015 and 2021. Brazil switched to digital television on 2 December 2007 in São Paulo and planned to end analog broadcasting nationwide by 30 June 2016. However, the Ministry of Communications announced in 2012 that the deadline would be delayed. As of 2024, Brazil is in the process of implementing its next-generation digital television system, known as TV 3.0. In July 2024, ATSC 3.0 standard was officially selected for the country's next-generation digital television system. The transition to TV 3.0 is expected to begin in 2025, with initial deployments planned for key cities such as São Paulo, Rio de Janeiro, and Brasília. In Malaysia, the Malaysian Communications and Multimedia Commission advertised for tender bids to be submitted in the third quarter of 2009 for the 470 through 742 MHz UHF allocation, to enable Malaysia's broadcast system to move into DTV. The new broadcast band allocation would result in Malaysia's having to build an infrastructure for all broadcasters, using a single digital terrestrial television broadcast channel. Large portions of Malaysia are covered by television broadcasts from Singapore, Thailand, Brunei, and Indonesia (from Borneo and Batam). Starting from 1 November 2019, all regions in Malaysia were no longer using the analog system after the states of Sabah and Sarawak finally turned it off on 31 October 2019. In Singapore, digital television under DVB-T2 began on 16 December 2013. The switchover was delayed many times until analog TV was switched off at midnight on 2 January 2019. In the Philippines, the National Telecommunications Commission required all broadcasting companies to end analog broadcasting on 31 December 2015 at 11:59 p.m. Due to delay of the release of the implementing rules and regulations for digital television broadcast, the target date was moved to 2020. Full digital broadcast was expected in 2021 and all of the analog TV services were to be shut down by the end of 2023. However, in February 2023, the NTC postponed the ASO/DTV transition to 2025 due to many provincial television stations not being ready to start their digital TV transmissions. In the Russian Federation, the Russian Television and Radio Broadcasting Network (RTRS) disabled analog broadcasting of federal channels in five stages, shutting down broadcasting in multiple federal subjects at each stage. The first region to have analog broadcasting disabled was Tver Oblast on 3 December 2018, and the switchover was completed on 14 October 2019. During the transition, DVB-T2 receivers and monetary compensations for purchasing of terrestrial or satellite digital TV reception equipment were provided to disabled people, World War II veterans, certain categories of retirees and households with income per member below living wage.
Technology
Broadcasting
null
2396
https://en.wikipedia.org/wiki/Adhesive
Adhesive
Adhesive, also known as glue, cement, mucilage, or paste, is any non-metallic substance applied to one or both surfaces of two separate items that binds them together and resists their separation. The use of adhesives offers certain advantages over other binding techniques such as sewing, mechanical fastenings, and welding. These include the ability to bind different materials together, the more efficient distribution of stress across a joint, the cost-effectiveness of an easily mechanized process, and greater flexibility in design. Disadvantages of adhesive use include decreased stability at high temperatures, relative weakness in bonding large objects with a small bonding surface area, and greater difficulty in separating objects during testing. Adhesives are typically organized by the method of adhesion followed by reactive or non-reactive, a term which refers to whether the adhesive chemically reacts in order to harden. Alternatively, they can be organized either by their starting physical phase or whether their raw stock is of natural or synthetic origin. Adhesives may be found naturally or produced synthetically. The earliest human use of adhesive-like substances was approximately 200,000 years ago, when Neanderthals produced tar from the dry distillation of birch bark for use in binding stone tools to wooden handles. The first references to adhesives in literature appeared approximately 2000 BC. The Greeks and Romans made great contributions to the development of adhesives. In Europe, glue was not widely used until the period AD 1500–1700. From then until the 1900s increases in adhesive use and discovery were relatively gradual. Only since the 20th century has the development of synthetic adhesives accelerated rapidly, and innovation in the field continues to the present. History The earliest evidence of human adhesive use was discovered in central Italy when three stone implements were discovered with birch bark tar indications. The tools were dated to about 200,000 before present in the Middle Paleolithic. It is the earliest example of tar-hafted stone tools. An experimental archeology study published in 2019 demonstrated how birch bark tar can be produced in an easier, more discoverable process. It involves directly burning birch bark under an overhanging rock surface in an open-air environment and collecting the tar that builds up on the rock. Although sticky enough, plant-based, single-component adhesives can be brittle and vulnerable to environmental conditions. The first use of compound adhesives was discovered in Sibudu, South Africa. Here, 70,000-year-old stone segments that were once inserted in axe hafts were discovered covered with an adhesive composed of plant gum and red ochre (natural iron oxide) as adding ochre to plant gum produces a stronger product and protects the gum from disintegrating under wet conditions. The ability to produce stronger adhesives allowed middle Stone Age humans to attach stone segments to sticks in greater variations, which led to the development of new tools. A study of material from Le Moustier indicates that Middle Paleolithic people, possibly Neanderthals, used glue made from a mixture of ocher and bitumen to make hand grips for cutting and scraping stone tools. More recent examples of adhesive use by prehistoric humans have been found at the burial sites of ancient tribes. 
Archaeologists studying the sites found that approximately 6,000 years ago the tribesmen had buried their dead together with food found in broken clay pots repaired with tree resins. Another investigation by archaeologists uncovered the use of bituminous cements to fasten ivory eyeballs to statues in Babylonian temples dating to approximately 4000 BC. In 2000, a paper revealed the discovery of a 5,200-year-old man nicknamed the "Tyrolean Iceman" or "Ötzi", who was preserved in a glacier near the Austria-Italy border. Several of his belongings were found with him including two arrows with flint arrowheads and a copper hatchet, each with evidence of organic glue used to connect the stone or metal parts to the wooden shafts. The glue was analyzed as pitch, which requires the heating of tar during its production. The retrieval of this tar requires a transformation of birch bark by means of heat, in a process known as pyrolysis. The first references to adhesives in literature appeared in approximately 2000 BC. Further historical records of adhesive use are found from the period spanning 1500–1000 BC. Artifacts from this period include paintings depicting wood gluing operations and a casket made of wood and glue in King Tutankhamun's tomb. Other ancient Egyptian artifacts employ animal glue for bonding or lamination. Such lamination of wood for bows and furniture is thought to have extended their life and was accomplished using casein (milk protein)-based glues. The ancient Egyptians also developed starch-based pastes for the bonding of papyrus to clothing and a plaster of Paris-like material made of calcined gypsum. From AD 1 to 500 the Greeks and Romans made great contributions to the development of adhesives. Wood veneering and marquetry were developed, the production of animal and fish glues refined, and other materials utilized. Egg-based pastes were used to bond gold leaves, and incorporated various natural ingredients such as blood, bone, hide, milk, cheese, vegetables, and grains. The Greeks began the use of slaked lime as mortar while the Romans furthered mortar development by mixing lime with volcanic ash and sand. This material, known as pozzolanic cement, was used in the construction of the Roman Colosseum and Pantheon. The Romans were also the first people known to have used tar and beeswax as caulk and sealant between the wooden planks of their boats and ships. In Central Asia, the rise of the Mongols in approximately AD 1000 can be partially attributed to the good range and power of the bows of Genghis Khan's hordes. These bows were made of a bamboo core, with horn on the belly (facing towards the archer) and sinew on the back, bound together with animal glue. In Europe, glue fell into disuse until the period AD 1500–1700. At this time, world-renowned cabinet and furniture makers such as Thomas Chippendale and Duncan Phyfe began to use adhesives to hold their products together. In 1690, the first commercial glue plant was established in The Netherlands. This plant produced glues from animal hides. In 1750, the first British glue patent was issued for fish glue. The following decades of the next century witnessed the manufacture of casein glues in German and Swiss factories. In 1876, the first U.S. patent (number 183,024) was issued to the Ross brothers for the production of casein glue. The first U.S. postage stamps used starch-based adhesives when issued in 1847. The first US patent (number 61,991) on dextrin (a starch derivative) adhesive was issued in 1867. 
Natural rubber was first used as material for adhesives in 1830, which marked the starting point of the modern adhesive. In 1862, a British patent (number 3288) was issued for the plating of metal with brass by electrodeposition to obtain a stronger bond to rubber. The development of the automobile and the need for rubber shock mounts required stronger and more durable bonds of rubber and metal. This spurred the development of cyclized rubber treated in strong acids. By 1927, this process was used to produce solvent-based thermoplastic rubber cements for metal to rubber bonding. Natural rubber-based sticky adhesives were first used on a backing by Henry Day (US Patent 3,965) in 1845. Later these kinds of adhesives were used in cloth backed surgical and electric tapes. By 1925, the pressure-sensitive tape industry was born. Today, sticky notes, Scotch Tape, and other tapes are examples of pressure-sensitive adhesives (PSA). A key step in the development of synthetic plastics was the introduction of a thermoset plastic known as Bakelite phenolic in 1910. Within two years, phenolic resin was applied to plywood as a coating varnish. In the early 1930s, phenolics gained importance as adhesive resins. The 1920s, 1930s, and 1940s witnessed great advances in the development and production of new plastics and resins due to the First and Second World Wars. These advances greatly improved the development of adhesives by allowing the use of newly developed materials that exhibited a variety of properties. With changing needs and ever evolving technology, the development of new synthetic adhesives continues to the present. However, due to their low cost, natural adhesives are still more commonly used. Types Adhesives are typically organized by the method of adhesion. These are then organized into reactive and non-reactive adhesives, which refers to whether the adhesive chemically reacts in order to harden. Alternatively they can be organized by whether the raw stock is of natural, or synthetic origin, or by their starting physical phase. By reactiveness Non-reactive Drying There are two types of adhesives that harden by drying: solvent-based adhesives and polymer dispersion adhesives, also known as emulsion adhesives. Solvent-based adhesives are a mixture of ingredients (typically polymers) dissolved in a solvent. White glue, contact adhesives and rubber cements are members of the drying adhesive family. As the solvent evaporates, the adhesive hardens. Depending on the chemical composition of the adhesive, they will adhere to different materials to greater or lesser degrees. Polymer dispersion adhesives are milky-white dispersions often based on polyvinyl acetate (PVAc). They are used extensively in the woodworking and packaging industries. They are also used with fabrics and fabric-based components, and in engineered products such as loudspeaker cones. Pressure-sensitive Pressure-sensitive adhesives (PSA) form a bond by the application of light pressure to bind the adhesive with the adherend. They are designed to have a balance between flow and resistance to flow. The bond forms because the adhesive is soft enough to flow (i.e., "wet") to the adherend. The bond has strength because the adhesive is hard enough to resist flow when stress is applied to the bond. Once the adhesive and the adherend are in close proximity, molecular interactions, such as van der Waals forces, become involved in the bond, contributing significantly to its ultimate strength. 
PSAs are designed for either permanent or removable applications. Examples of permanent applications include safety labels for power equipment, foil tape for HVAC duct work, automotive interior trim assembly, and sound/vibration damping films. Some high performance permanent PSAs exhibit high adhesion values and can support kilograms of weight per square centimeter of contact area, even at elevated temperatures. Permanent PSAs may initially be removable (for example to recover mislabeled goods) and build adhesion to a permanent bond after several hours or days. Removable adhesives are designed to form a temporary bond, and ideally can be removed after months or years without leaving residue on the adherend. Removable adhesives are used in applications such as surface protection films, masking tapes, bookmark and note papers, barcode labels, price marking labels, promotional graphics materials, and for skin contact (wound care dressings, EKG electrodes, athletic tape, analgesic and trans-dermal drug patches, etc.). Some removable adhesives are designed to repeatedly stick and unstick. They have low adhesion, and generally cannot support much weight. Pressure-sensitive adhesive is used in Post-it notes. Pressure-sensitive adhesives are manufactured with either a liquid carrier or in 100% solid form. Articles are made from liquid PSAs by coating the adhesive and drying off the solvent or water carrier. They may be further heated to initiate a cross-linking reaction and increase molecular weight. 100% solid PSAs may be low viscosity polymers that are coated and then reacted with radiation to increase molecular weight and form the adhesive, or they may be high viscosity materials that are heated to reduce viscosity enough to allow coating, and then cooled to their final form. Major raw material for PSA's are acrylate-based polymers. Contact Contact adhesives form high shear-resistance bonds with a rapid cure time. They are often applied in thin layers for use with laminates, such as bonding Formica to countertops, and in footwear, as in attaching outsoles to uppers. Natural rubber and polychloroprene (Neoprene) are commonly used contact adhesives. Both of these elastomers undergo strain crystallization. Contact adhesives must be applied to both surfaces and allowed some time to dry before the two surfaces are pushed together. Some contact adhesives require as long as 24 hours to dry completely before the surfaces are to be held together. Once the surfaces are pushed together, the bond forms very quickly. Clamps are typically not needed due to the rapid bond formation. Hot Hot adhesives, also known as hot melt adhesives, are thermoplastics applied in molten form (in the 65–180 °C range) which solidify on cooling to form strong bonds between a wide range of materials. Ethylene-vinyl acetate-based hot-melts are particularly popular for crafts because of their ease of use and the wide range of common materials they can join. A glue gun (shown at right) is one method of applying hot adhesives. The glue gun melts the solid adhesive, then allows the liquid to pass through its barrel onto the material, where it solidifies. Thermoplastic glue may have been invented around 1940 by Procter & Gamble as a solution to the problem that water-based adhesives, commonly used in packaging at that time, failed in humid climates, causing packages to open. However, water-based adhesives are still of strong interest as they typically do not contain volatile solvents. 
Reactive Anaerobic Anaerobic adhesives cure when in contact with metal, in the absence of oxygen. They work well in a close-fitting space, as when used as a Thread-locking fluid. Multi-part Multi-component adhesives harden by mixing two or more components which chemically react. This reaction causes polymers to cross-link into acrylates, urethanes, and epoxies . There are several commercial combinations of multi-component adhesives in use in industry. Some of these combinations are: Polyester resin & polyurethane resin Polyols & polyurethane resin Acrylic polymers & polyurethane resins The individual components of a multi-component adhesive are not adhesive by nature. The individual components react with each other after being mixed and show full adhesion only on curing. The multi-component resins can be either solvent-based or solvent-less. The solvents present in the adhesives are a medium for the polyester or the polyurethane resin. The solvent is dried during the curing process. Pre-mixed and frozen adhesives Pre-mixed and frozen adhesives (PMFs) are adhesives that are mixed, deaerated, packaged, and frozen. As it is necessary for PMFs to remain frozen before use, once they are frozen at −80 °C they are shipped with dry ice and are required to be stored at or below −40 °C. PMF adhesives eliminate mixing mistakes by the end user and reduce exposure of curing agents that can contain irritants or toxins. PMFs were introduced commercially in the 1960s and are commonly used in aerospace and defense. One-part One-part adhesives harden via a chemical reaction with an external energy source, such as radiation, heat, and moisture. Ultraviolet (UV) light curing adhesives, also known as light curing materials (LCM), have become popular within the manufacturing sector due to their rapid curing time and strong bond strength. Light curing adhesives can cure in as little as one second and many formulations can bond dissimilar substrates (materials) and withstand harsh temperatures. These qualities make UV curing adhesives essential to the manufacturing of items in many industrial markets such as electronics, telecommunications, medical, aerospace, glass, and optical. Unlike traditional adhesives, UV light curing adhesives not only bond materials together but they can also be used to seal and coat products. They are generally acrylic-based. Heat curing adhesives consist of a pre-made mixture of two or more components. When heat is applied the components react and cross-link. This type of adhesive includes thermoset epoxies, urethanes, and polyimides. Moisture curing adhesives cure when they react with moisture present on the substrate surface or in the air. This type of adhesive includes cyanoacrylates and urethanes. By origin Natural Natural adhesives are made from organic sources such as vegetable starch (dextrin), natural resins, or animals (e.g. the milk protein casein and hide-based animal glues). These are often referred to as bioadhesives. One example is a simple paste made by cooking flour in water. Starch-based adhesives are used in corrugated board and paper sack production, paper tube winding, and wallpaper adhesives. Casein glue is mainly used to adhere glass bottle labels. Animal glues have traditionally been used in bookbinding, wood joining, and many other areas but now are largely replaced by synthetic glues except in specialist applications like the production and repair of stringed instruments. Albumen made from the protein component of blood has been used in the plywood industry. 
Masonite, a wood hardboard, was originally bonded using natural wood lignin, an organic polymer, though most modern particle boards such as MDF use synthetic thermosetting resins. Synthetic Synthetic adhesives are made out of organic compounds. Many are based on elastomers, thermoplastics, emulsions, and thermosets. Examples of thermosetting adhesives are: epoxy, polyurethane, cyanoacrylate and acrylic polymers. The first commercially produced synthetic adhesive was Karlsons Klister in the 1920s. Application Applicators of different adhesives are designed according to the adhesive being used and the size of the area to which the adhesive will be applied. The adhesive is applied to either one or both of the materials being bonded. The pieces are aligned and pressure is added to aid in adhesion and rid the bond of air bubbles. Common ways of applying an adhesive include brushes, rollers, using films or pellets, spray guns and applicator guns (e.g., caulk gun). All of these can be used manually or automated as part of a machine. Mechanisms of adhesion For an adhesive to be effective it must have three main properties. Firstly, it must be able to wet the base material. Wetting is the ability of a liquid to maintain contact with a solid surface. It must also increase in strength after application, and finally it must be able to transmit load between the two surfaces/substrates being adhered. Adhesion, the attachment between adhesive and substrate may occur either by mechanical means, in which the adhesive works its way into small pores of the substrate, or by one of several chemical mechanisms. The strength of adhesion depends on many factors, including the means by which it occurs. In some cases, an actual chemical bond occurs between adhesive and substrate. Thiolated polymers, for example, form chemical bonds with endogenous proteins such as mucus glycoproteins, integrins or keratins via disulfide bridges. Because of their comparatively high adhesive properties, these polymers find numerous biomedical applications. In others, electrostatic forces, as in static electricity, hold the substances together. A third mechanism involves the van der Waals forces that develop between molecules. A fourth means involves the moisture-aided diffusion of the glue into the substrate, followed by hardening. Methods to improve adhesion The quality of adhesive bonding depends strongly on the ability of the adhesive to efficiently cover (wet) the substrate area. This happens when the surface energy of the substrate is greater than the surface energy of the adhesive. However, high-strength adhesives have high surface energy. Thus, they bond poorly to low-surface-energy polymers or other materials. To solve this problem, surface treatment can be used to increase the surface energy as a preparation step before adhesive bonding. Importantly, surface preparation provides a reproducible surface allowing consistent bonding results. The commonly used surface activation techniques include plasma activation, flame treatment and wet chemistry priming. Failure There are several factors that could contribute to the failure of two adhered surfaces. Sunlight and heat may weaken the adhesive. Solvents can deteriorate or dissolve adhesive. Physical stresses may also cause the separation of surfaces. When subjected to loading, debonding may occur at different locations in the adhesive joint. 
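The wetting criterion described above lends itself to a simple rule-of-thumb check. The surface-energy figures below (in mN/m) are rough, commonly quoted values and serve only as illustrative assumptions.

SURFACE_ENERGY_MN_PER_M = {
    "PTFE": 19.0,                         # low-energy fluoropolymer, hard to bond
    "polyethylene": 31.0,                 # low-energy polyolefin
    "epoxy adhesive": 45.0,               # typical structural adhesive
    "plasma-treated polyethylene": 60.0,  # surface activation raises the energy
}

def expected_to_wet(adhesive, substrate):
    # Rule of thumb from the text: wetting is expected when the substrate's
    # surface energy exceeds the adhesive's.
    return SURFACE_ENERGY_MN_PER_M[substrate] > SURFACE_ENERGY_MN_PER_M[adhesive]

assert not expected_to_wet("epoxy adhesive", "PTFE")             # poor wetting
assert not expected_to_wet("epoxy adhesive", "polyethylene")     # poor wetting
assert expected_to_wet("epoxy adhesive", "plasma-treated polyethylene")

Surface preparation addresses how a bond forms; the fracture types that follow describe how a formed bond can fail.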
The major fracture types are the following: Cohesive fracture Cohesive fracture is obtained if a crack propagates in the bulk polymer which constitutes the adhesive. In this case the surfaces of both adherends after debonding will be covered by fractured adhesive. The crack may propagate in the center of the layer or near an interface. For this last case, the cohesive fracture can be said to be "cohesive near the interface". Adhesive fracture Adhesive fracture (sometimes referred to as interfacial fracture) is when debonding occurs between the adhesive and the adherend. In most cases, the occurrence of adhesive fracture for a given adhesive goes along with smaller fracture toughness. Other types of fracture Other types of fracture include: The mixed type, which occurs if the crack propagates at some spots in a cohesive and in others in an interfacial manner. Mixed fracture surfaces can be characterised by a certain percentage of adhesive and cohesive areas. The alternating crack path type which occurs if the cracks jump from one interface to the other. This type of fracture appears in the presence of tensile pre-stresses in the adhesive layer. Fracture can also occur in the adherend if the adhesive is tougher than the adherend. In this case, the adhesive remains intact and is still bonded to one substrate and remnants of the other. For example, when one removes a price label, the adhesive usually remains on the label and the surface. This is cohesive failure. If, however, a layer of paper remains stuck to the surface, the adhesive has not failed. Another example is when someone tries to pull apart Oreo cookies and all the filling remains on one side; this is an adhesive failure, rather than a cohesive failure. Design of adhesive joints As a general design rule, the material properties of the object need to be greater than the forces anticipated during its use. (i.e. geometry, loads, etc.). The engineering work will consist of having a good model to evaluate the function. For most adhesive joints, this can be achieved using fracture mechanics. Concepts such as the stress concentration factor and the strain energy release rate can be used to predict failure. In such models, the behavior of the adhesive layer itself is neglected and only the adherents are considered. Failure will also very much depend on the opening mode of the joint. Mode I is an opening or tensile mode where the loadings are normal to the crack. Mode II is a sliding or in-plane shear mode where the crack surfaces slide over one another in direction perpendicular to the leading edge of the crack. This is typically the mode for which the adhesive exhibits the highest resistance to fracture. Mode III is a tearing or antiplane shear mode. As the loads are usually fixed, an acceptable design will result from combination of a material selection procedure and geometry modifications, if possible. In adhesively bonded structures, the global geometry and loads are fixed by structural considerations and the design procedure focuses on the material properties of the adhesive and on local changes on the geometry. Increasing the joint resistance is usually obtained by designing its geometry so that: The bonded zone is large It is mainly loaded in mode II Stable crack propagation will follow the appearance of a local failure. Shelf life Some glues and adhesives have a limited shelf life. Shelf life is dependent on multiple factors, the foremost of which being temperature. 
Adhesives may lose their effectiveness at high temperatures and may become increasingly stiff. Other factors affecting shelf life include exposure to oxygen or water vapor.
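As a rough numerical illustration of the design rule discussed above under "Design of adhesive joints", the following Python sketch compares the average shear stress in a single-lap joint with an allowable stress for the adhesive. The load, dimensions and 10 MPa shear strength are hypothetical values chosen for illustration; a real design would rely on fracture-mechanics quantities such as the strain energy release rate rather than this simple average-stress check.

```python
# Hypothetical single-lap shear joint: the anticipated stress must stay below
# what the adhesive can carry. All values are illustrative only.

def average_shear_stress(load_newton, overlap_mm, width_mm):
    """Average shear stress (MPa) over the bonded area of a single-lap joint."""
    bonded_area_mm2 = overlap_mm * width_mm
    return load_newton / bonded_area_mm2   # N/mm^2 is numerically equal to MPa

load = 1500.0                          # N, anticipated service load (assumed)
tau = average_shear_stress(load, overlap_mm=25.0, width_mm=20.0)
tau_allowable = 10.0 / 2.5             # MPa, assumed shear strength with a safety factor of 2.5

print(f"average shear stress: {tau:.2f} MPa, allowable: {tau_allowable:.2f} MPa")
print("acceptable" if tau <= tau_allowable else "redesign: enlarge the bonded zone")
```

Enlarging the bonded zone lowers the average stress for a given load, which is one reason the design guidance above favours a large bonded area.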
Technology
Material and chemical
null
2408
https://en.wikipedia.org/wiki/Analytical%20chemistry
Analytical chemistry
Analytical chemistry studies and uses instruments and methods to separate, identify, and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or be combined with another method. Separation isolates analytes. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration. Analytical chemistry consists of classical, wet chemical methods and modern analytical techniques. Classical qualitative methods use separations such as precipitation, extraction, and distillation. Identification may be based on differences in color, odor, melting point, boiling point, solubility, radioactivity or reactivity. Classical quantitative analysis uses mass or volume changes to quantify amount. Instrumental methods may be used to separate samples using chromatography, electrophoresis or field flow fractionation. Then qualitative and quantitative analysis can be performed, often with the same instrument and may use light interaction, heat interaction, electric fields or magnetic fields. Often the same instrument can separate, identify and quantify an analyte. Analytical chemistry is also focused on improvements in experimental design, chemometrics, and the creation of new measurement tools. Analytical chemistry has broad applications to medicine, science, and engineering. History Analytical chemistry has been important since the early days of chemistry, providing methods for determining which elements and chemicals are present in the object in question. During this period, significant contributions to analytical chemistry included the development of systematic elemental analysis by Justus von Liebig and systematized organic analysis based on the specific reactions of functional groups. The first instrumental analysis was flame emissive spectrometry developed by Robert Bunsen and Gustav Kirchhoff who discovered rubidium (Rb) and caesium (Cs) in 1860. Most of the major developments in analytical chemistry took place after 1900. During this period, instrumental analysis became progressively dominant in the field. In particular, many of the basic spectroscopic and spectrometric techniques were discovered in the early 20th century and refined in the late 20th century. The separation sciences follow a similar time line of development and also became increasingly transformed into high performance instruments. In the 1970s many of these techniques began to be used together as hybrid techniques to achieve a complete characterization of samples. Starting in the 1970s, analytical chemistry became progressively more inclusive of biological questions (bioanalytical chemistry), whereas it had previously been largely focused on inorganic or small organic molecules. Lasers have been increasingly used as probes and even to initiate and influence a wide variety of reactions. The late 20th century also saw an expansion of the application of analytical chemistry from somewhat academic chemical questions to forensic, environmental, industrial and medical questions, such as in histology. Modern analytical chemistry is dominated by instrumental analysis. Many analytical chemists focus on a single type of instrument. Academics tend to either focus on new applications and discoveries or on new methods of analysis. The discovery of a chemical present in blood that increases the risk of cancer would be a discovery that an analytical chemist might be involved in. 
An effort to develop a new method might involve the use of a tunable laser to increase the specificity and sensitivity of a spectrometric method. Many methods, once developed, are kept purposely static so that data can be compared over long periods of time. This is particularly true in industrial quality assurance (QA), forensic and environmental applications. Analytical chemistry plays an increasingly important role in the pharmaceutical industry where, aside from QA, it is used in the discovery of new drug candidates and in clinical applications where understanding the interactions between the drug and the patient is critical. Classical methods Although modern analytical chemistry is dominated by sophisticated instrumentation, the roots of analytical chemistry and some of the principles used in modern instruments are from traditional techniques, many of which are still used today. These techniques also tend to form the backbone of most undergraduate analytical chemistry educational labs. Qualitative analysis Qualitative analysis determines the presence or absence of a particular compound, but not the mass or concentration. By definition, qualitative analyses do not measure quantity. Chemical tests There are numerous qualitative chemical tests, for example, the acid test for gold and the Kastle-Meyer test for the presence of blood. Flame test Inorganic qualitative analysis generally refers to a systematic scheme to confirm the presence of certain aqueous ions or elements by performing a series of reactions that eliminate a range of possibilities and then confirm suspected ions with a confirming test. Sometimes small carbon-containing ions are included in such schemes. With modern instrumentation, these tests are rarely used but can be useful for educational purposes and in fieldwork or other situations where access to state-of-the-art instruments is not available or expedient. Quantitative analysis Quantitative analysis is the measurement of the quantities of particular chemical constituents present in a substance. Quantities can be measured by mass (gravimetric analysis) or volume (volumetric analysis). Gravimetric analysis Gravimetric analysis involves determining the amount of material present by weighing the sample before and/or after some transformation. A common example used in undergraduate education is the determination of the amount of water in a hydrate by heating the sample to remove the water such that the difference in weight is due to the loss of water. Volumetric analysis Titration involves the gradual addition of a measurable reactant to an exact volume of a solution being analyzed until some equivalence point is reached. Titration is a family of techniques used to determine the concentration of an analyte. Titrating accurately to either the half-equivalence point or the endpoint of a titration allows the chemist to determine the number of moles of titrant used, which can then be used to determine the concentration or composition of the analyte. Most familiar to those who have taken chemistry during secondary education is the acid-base titration involving a color-changing indicator, such as phenolphthalein. There are many other types of titrations, for example, potentiometric titrations or precipitation titrations. Chemists might also create titration curves by systematically recording the pH after each addition of titrant in order to understand different properties of the analyte. 
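The two classical quantitative approaches just described lend themselves to short worked calculations. The following Python sketch uses hypothetical teaching values: a hydrate weighed before and after heating, and an acid-base titration with 1:1 stoichiometry.

```python
# Worked sketch of the two classical quantitative methods described above.
# All masses, volumes and concentrations are hypothetical teaching numbers.

# Gravimetric analysis: water of hydration from the mass lost on heating.
mass_hydrate = 2.500       # g, hydrated sample before heating
mass_anhydrous = 1.580     # g, residue after heating to constant mass
water_fraction = (mass_hydrate - mass_anhydrous) / mass_hydrate
print(f"water content: {water_fraction * 100:.1f} % by mass")

# Volumetric analysis: titration of an HCl aliquot with standardized NaOH (1:1).
# At the endpoint, moles of titrant delivered equal moles of analyte present.
c_naoh = 0.1000            # mol/L, standardized titrant
v_naoh = 0.02315           # L, volume delivered at the phenolphthalein endpoint
v_hcl = 0.02500            # L, aliquot of the unknown acid
moles_naoh = c_naoh * v_naoh
c_hcl = moles_naoh / v_hcl
print(f"HCl concentration: {c_hcl:.4f} mol/L")
```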
Instrumental methods Spectroscopy Spectroscopy measures the interaction of the molecules with electromagnetic radiation. Spectroscopy consists of many different applications such as atomic absorption spectroscopy, atomic emission spectroscopy, ultraviolet-visible spectroscopy, X-ray spectroscopy, fluorescence spectroscopy, infrared spectroscopy, Raman spectroscopy, dual polarization interferometry, nuclear magnetic resonance spectroscopy, photoemission spectroscopy, Mössbauer spectroscopy and so on. Mass spectrometry Mass spectrometry measures the mass-to-charge ratio of molecules using electric and magnetic fields. In a mass spectrometer, a small amount of sample is ionized and converted to gaseous ions, which are then separated and analyzed according to their mass-to-charge ratios. There are several ionization methods: electron ionization, chemical ionization, electrospray ionization, fast atom bombardment, matrix-assisted laser desorption/ionization, and others. Also, mass spectrometry is categorized by approaches of mass analyzers: magnetic-sector, quadrupole mass analyzer, quadrupole ion trap, time-of-flight, Fourier transform ion cyclotron resonance, and so on. Electrochemical analysis Electroanalytical methods measure the potential (volts) and/or current (amps) in an electrochemical cell containing the analyte. These methods can be categorized according to which aspects of the cell are controlled and which are measured. The four main categories are potentiometry (the difference in electrode potentials is measured), coulometry (the transferred charge is measured over time), amperometry (the cell's current is measured over time), and voltammetry (the cell's current is measured while actively altering the cell's potential). In short, potentiometry measures a potential, coulometry measures accumulated charge, amperometry measures current over time, and voltammetry measures how the current changes as the applied potential is varied. Thermal analysis Calorimetry and thermogravimetric analysis measure the interaction of a material and heat. Separation Separation processes are used to decrease the complexity of material mixtures. Chromatography, electrophoresis and field flow fractionation are representative of this field. Chromatographic assays Chromatography can be used to determine the presence of substances in a sample as different components in a mixture have different tendencies to adsorb onto the stationary phase or dissolve in the mobile phase. Thus, different components of the mixture move at different speeds. Different components of a mixture can therefore be identified by their respective Rƒ values, the ratio between the migration distance of the substance and the migration distance of the solvent front during chromatography. In combination with the instrumental methods, chromatography can be used in quantitative determination of the substances. Chromatography separates the analyte from the rest of the sample so that it may be measured without interference from other compounds. There are different types of chromatography that differ in the media they use to separate the analyte from the rest of the sample. In thin-layer chromatography, the analyte mixture moves up and separates along the coated sheet, carried by the volatile mobile phase. In gas chromatography, a gaseous mobile phase separates the volatile analytes. A common method for chromatography using a liquid mobile phase is high-performance liquid chromatography. Hybrid techniques Combinations of the above techniques produce a "hybrid" or "hyphenated" technique. 
Several examples are in popular use today and new hybrid techniques are under development. For example, gas chromatography-mass spectrometry, gas chromatography-infrared spectroscopy, liquid chromatography-mass spectrometry, liquid chromatography-NMR spectroscopy, liquid chromatography-infrared spectroscopy, and capillary electrophoresis-mass spectrometry. Hyphenated separation techniques refer to a combination of two (or more) techniques to detect and separate chemicals from solutions. Most often the other technique is some form of chromatography. Hyphenated techniques are widely used in chemistry and biochemistry. A slash is sometimes used instead of a hyphen, especially if the name of one of the methods contains a hyphen itself. Microscopy The visualization of single molecules, single cells, biological tissues, and nanomaterials is an important and attractive approach in analytical science. Also, hybridization with other traditional analytical tools is revolutionizing analytical science. Microscopy can be categorized into three different fields: optical microscopy, electron microscopy, and scanning probe microscopy. This field has been progressing rapidly because of the rapid development of the computer and camera industries. Lab-on-a-chip Lab-on-a-chip devices integrate (multiple) laboratory functions on a single chip of only millimeters to a few square centimeters in size and are capable of handling extremely small fluid volumes, down to less than picoliters. Errors Error can be defined as the numerical difference between the observed value and the true value. The experimental error can be divided into two types, systematic error and random error. Systematic error results from a flaw in equipment or the design of an experiment while random error results from uncontrolled or uncontrollable variables in the experiment. In chemical analysis, the true value and observed value are related by the equation x_o = x_t + e, where e is the absolute error, x_t is the true value, and x_o is the observed value. The error of a measurement is an inverse measure of its accuracy: the smaller the error, the greater the accuracy of the measurement. Errors can also be expressed relatively. The relative error is e_r = e / x_t = (x_o − x_t) / x_t. The percent error can also be calculated: % error = e_r × 100%. If we want to use these values in a function, we may also want to calculate the error of the function. Let f be a function of the variables x_1, x_2, ..., x_n, each with an uncertainty Δx_1, ..., Δx_n. The propagation of uncertainty must then be calculated in order to know the error in f: Δf = sqrt( (∂f/∂x_1 · Δx_1)^2 + ... + (∂f/∂x_n · Δx_n)^2 ). Standards Standard curve A general method for analysis of concentration involves the creation of a calibration curve. This allows for the determination of the amount of a chemical in a material by comparing the results of an unknown sample to those of a series of known standards. If the concentration of the element or compound in a sample is too high for the detection range of the technique, it can simply be diluted in a pure solvent. If the amount in the sample is below an instrument's range of measurement, the method of addition can be used. In this method, a known quantity of the element or compound under study is added, and the difference between the concentration added and the concentration observed is the amount actually in the sample. Internal standards Sometimes an internal standard is added at a known concentration directly to an analytical sample to aid in quantitation. The amount of analyte present is then determined relative to the internal standard as a calibrant. 
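The error definitions and the propagation-of-uncertainty rule given above can be checked numerically. The Python sketch below uses hypothetical measured values and assumes the uncertainties are independent.

```python
# Numerical sketch of the error definitions and propagation rule above.
# Values are hypothetical; the propagation formula assumes independent uncertainties.
import math

x_true, x_obs = 5.00, 5.12
e_abs = x_obs - x_true                    # absolute error e = x_o - x_t
e_rel = e_abs / x_true                    # relative error
print(f"absolute {e_abs:+.3f}, relative {e_rel:+.3%}")

# Propagation of uncertainty for f(x, y) = x * y with independent uncertainties dx, dy:
# df = sqrt((y*dx)^2 + (x*dy)^2), i.e. each partial derivative weights its contribution.
x, dx = 2.50, 0.02
y, dy = 1.30, 0.05
f = x * y
df = math.sqrt((y * dx) ** 2 + (x * dy) ** 2)
print(f"f = {f:.3f} +/- {df:.3f}")
```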
An ideal internal standard is an isotopically enriched analyte, which gives rise to the method of isotope dilution. Standard addition The method of standard addition is used in instrumental analysis to determine the concentration of a substance (analyte) in an unknown sample by comparison to a set of samples of known concentration, similar to using a calibration curve. Standard addition can be applied to most analytical techniques and is used instead of a calibration curve to solve the matrix effect problem. Signals and noise One of the most important components of analytical chemistry is maximizing the desired signal while minimizing the associated noise. The analytical figure of merit is known as the signal-to-noise ratio (S/N or SNR). Noise can arise from environmental factors as well as from fundamental physical processes. Thermal noise Thermal noise results from the random thermal motion of charge carriers (usually electrons) in an electrical circuit. Thermal noise is white noise, meaning that the power spectral density is constant throughout the frequency spectrum. The root mean square value of the thermal noise voltage in a resistor is given by v_RMS = sqrt(4 k_B T R Δf), where k_B is the Boltzmann constant, T is the temperature, R is the resistance, and Δf is the bandwidth over which the noise is measured. Shot noise Shot noise is a type of electronic noise that occurs when the finite number of particles (such as electrons in an electronic circuit or photons in an optical device) is small enough to give rise to statistical fluctuations in a signal. Shot noise is a Poisson process, and the charge carriers that make up the current follow a Poisson distribution. The root mean square current fluctuation is given by i_RMS = sqrt(2 e I Δf), where e is the elementary charge, I is the average current, and Δf is the measurement bandwidth. Shot noise is white noise. Flicker noise Flicker noise is electronic noise with a 1/ƒ frequency spectrum; as the frequency f increases, the noise decreases. Flicker noise arises from a variety of sources, such as impurities in a conductive channel and generation and recombination noise in a transistor due to base current, and so on. This noise can be avoided by modulation of the signal at a higher frequency, for example, through the use of a lock-in amplifier. Environmental noise Environmental noise arises from the surroundings of the analytical instrument. Sources of electromagnetic noise are power lines, radio and television stations, wireless devices, compact fluorescent lamps and electric motors. Many of these noise sources are narrow bandwidth and, therefore, can be avoided. Temperature and vibration isolation may be required for some instruments. Noise reduction Noise reduction can be accomplished either in hardware or in software. Examples of hardware noise reduction are the use of shielded cable, analog filtering, and signal modulation. Examples of software noise reduction are digital filtering, ensemble averaging, boxcar averaging, and correlation methods. Applications Analytical chemistry has applications in forensic science, bioanalysis, clinical analysis, environmental analysis, and materials analysis, among other fields. Analytical chemistry research is largely driven by performance (sensitivity, detection limit, selectivity, robustness, dynamic range, linear range, accuracy, precision, and speed) and cost (purchase, operation, training, time, and space). Among the main branches of contemporary analytical atomic spectrometry, the most widespread and universal are optical and mass spectrometry. 
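The thermal-noise and shot-noise expressions above are straightforward to evaluate for a concrete case. The following sketch assumes a hypothetical detector circuit: a 1 MΩ resistor at room temperature and a 1 nA average current, observed over a 1 kHz bandwidth.

```python
# Sketch evaluating the two noise expressions above for a hypothetical circuit.
import math

k_B = 1.380649e-23        # J/K, Boltzmann constant
e = 1.602176634e-19       # C, elementary charge

T = 298.0                 # K, temperature
R = 1.0e6                 # ohm, resistance (hypothetical)
df = 1.0e3                # Hz, measurement bandwidth
I = 1.0e-9                # A, average current (hypothetical)

v_thermal = math.sqrt(4 * k_B * T * R * df)   # RMS Johnson (thermal) noise voltage
i_shot = math.sqrt(2 * e * I * df)            # RMS shot-noise current

print(f"thermal noise: {v_thermal * 1e6:.2f} uV rms")
print(f"shot noise:    {i_shot * 1e12:.2f} pA rms")
```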
In the direct elemental analysis of solid samples, the new leaders are laser-induced breakdown and laser ablation mass spectrometry, and the related techniques with transfer of the laser ablation products into inductively coupled plasma. Advances in design of diode lasers and optical parametric oscillators promote developments in fluorescence and ionization spectrometry and also in absorption techniques where uses of optical cavities for increased effective absorption pathlength are expected to expand. The use of plasma- and laser-based methods is increasing. An interest towards absolute (standardless) analysis has revived, particularly in emission spectrometry. Great effort is being put into shrinking the analysis techniques to chip size. Although there are few examples of such systems competitive with traditional analysis techniques, potential advantages include size/portability, speed, and cost. (micro total analysis system (μTAS) or lab-on-a-chip). Microscale chemistry reduces the amounts of chemicals used. Many developments improve the analysis of biological systems. Examples of rapidly expanding fields in this area are genomics, DNA sequencing and related research in genetic fingerprinting and DNA microarray; proteomics, the analysis of protein concentrations and modifications, especially in response to various stressors, at various developmental stages, or in various parts of the body, metabolomics, which deals with metabolites; transcriptomics, including mRNA and associated fields; lipidomics - lipids and its associated fields; peptidomics - peptides and its associated fields; and metallomics, dealing with metal concentrations and especially with their binding to proteins and other molecules. Analytical chemistry has played a critical role in the understanding of basic science to a variety of practical applications, such as biomedical applications, environmental monitoring, quality control of industrial manufacturing, forensic science, and so on. The recent developments in computer automation and information technologies have extended analytical chemistry into a number of new biological fields. For example, automated DNA sequencing machines were the basis for completing human genome projects leading to the birth of genomics. Protein identification and peptide sequencing by mass spectrometry opened a new field of proteomics. In addition to automating specific processes, there is effort to automate larger sections of lab testing, such as in companies like Emerald Cloud Lab and Transcriptic. Analytical chemistry has been an indispensable area in the development of nanotechnology. Surface characterization instruments, electron microscopes and scanning probe microscopes enable scientists to visualize atomic structures with chemical characterizations.
Physical sciences
Analytical chemistry
null
2428
https://en.wikipedia.org/wiki/Analog%20computer
Analog computer
An analog computer or analogue computer is a type of computation machine (computer) that uses physical phenomena such as electrical, mechanical, or hydraulic quantities behaving according to the mathematical principles in question (analog signals) to model the problem being solved. In contrast, digital computers represent varying quantities symbolically and by discrete values of both time and amplitude (digital signals). Analog computers can have a very wide range of complexity. Slide rules and nomograms are the simplest, while naval gunfire control computers and large hybrid digital/analog computers were among the most complicated. Complex mechanisms for process control and protective relays used analog computation to perform control and protective functions. Analog computers were widely used in scientific and industrial applications even after the advent of digital computers, because at the time they were typically much faster, but they started to become obsolete as early as the 1950s and 1960s, although they remained in use in some specific applications, such as aircraft flight simulators, the flight computer in aircraft, and for teaching control systems in universities. Perhaps the most relatable example of analog computers are mechanical watches where the continuous and periodic rotation of interlinked gears drives the second, minute and hour needles in the clock. More complex applications, such as aircraft flight simulators and synthetic-aperture radar, remained the domain of analog computing (and hybrid computing) well into the 1980s, since digital computers were insufficient for the task. Timeline of analog computers Precursors This is a list of examples of early computation devices considered precursors of the modern computers. Some of them may even have been dubbed 'computers' by the press, though they may fail to fit modern definitions. The Antikythera mechanism, a type of device used to determine the positions of heavenly bodies known as an orrery, was described as an early mechanical analog computer by British physicist, information scientist, and historian of science Derek J. de Solla Price. It was discovered in 1901, in the Antikythera wreck off the Greek island of Antikythera, between Kythera and Crete, and has been dated to , during the Hellenistic period. Devices of a level of complexity comparable to that of the Antikythera mechanism would not reappear until a thousand years later. Many mechanical aids to calculation and measurement were constructed for astronomical and navigation use. The planisphere was first described by Ptolemy in the 2nd century AD. The astrolabe was invented in the Hellenistic world in either the 1st or 2nd centuries BC and is often attributed to Hipparchus. A combination of the planisphere and dioptra, the astrolabe was effectively an analog computer capable of working out several different kinds of problems in spherical astronomy. The sector, a calculating instrument used for solving problems in proportion, trigonometry, multiplication and division, and for various functions, such as squares and cube roots, was developed in the late 16th century and found application in gunnery, surveying and navigation. The planimeter was a manual instrument to calculate the area of a closed figure by tracing over it with a mechanical linkage. The slide rule was invented around 1620–1630, shortly after the publication of the concept of the logarithm. It is a hand-operated analog computer for doing multiplication and division. 
As slide rule development progressed, added scales provided reciprocals, squares and square roots, cubes and cube roots, as well as transcendental functions such as logarithms and exponentials, circular and hyperbolic trigonometry and other functions. Aviation is one of the few fields where slide rules are still in widespread use, particularly for solving time–distance problems in light aircraft. In 1831–1835, mathematician and engineer Giovanni Plana devised a perpetual-calendar machine, which, through a system of pulleys and cylinders, could predict the perpetual calendar for every year from AD 0 (that is, 1 BC) to AD 4000, keeping track of leap years and varying day length. The tide-predicting machine invented by Sir William Thomson in 1872 was of great utility to navigation in shallow waters. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location. The differential analyser, a mechanical analog computer designed to solve differential equations by integration, used wheel-and-disc mechanisms to perform the integration. In 1876 James Thomson had already discussed the possible construction of such calculators, but he had been stymied by the limited output torque of the ball-and-disk integrators. Several systems followed, notably those of Spanish engineer Leonardo Torres Quevedo, who built various analog machines for solving real and complex roots of polynomials; and Michelson and Stratton, whose Harmonic Analyser performed Fourier analysis, but using an array of 80 springs rather than Kelvin integrators. This work led to the mathematical understanding of the Gibbs phenomenon of overshoot in Fourier representation near discontinuities. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output. The torque amplifier was the advance that allowed these machines to work. Starting in the 1920s, Vannevar Bush and others developed mechanical differential analyzers. Modern era The Dumaresq was a mechanical calculating device invented around 1902 by Lieutenant John Dumaresq of the Royal Navy. It was an analog computer that related vital variables of the fire control problem to the movement of one's own ship and that of a target ship. It was often used with other devices, such as a Vickers range clock to generate range and deflection data so the gun sights of the ship could be continuously set. A number of versions of the Dumaresq were produced of increasing complexity as development proceeded. By 1912, Arthur Pollen had developed an electrically driven mechanical analog computer for fire-control systems, based on the differential analyser. It was used by the Imperial Russian Navy in World War I. Starting in 1929, AC network analyzers were constructed to solve calculation problems related to electrical power systems that were too large to solve with numerical methods at the time. These were essentially scale models of the electrical properties of the full-size system. Since network analyzers could handle problems too large for analytic methods or hand computation, they were also used to solve problems in nuclear physics and in the design of structures. More than 50 large network analyzers were built by the end of the 1950s. World War II era gun directors, gun data computers, and bomb sights used mechanical analog computers. 
In 1942 Helmut Hölzer built a fully electronic analog computer at Peenemünde Army Research Center as an embedded control system (mixing device) to calculate V-2 rocket trajectories from the accelerations and orientations (measured by gyroscopes) and to stabilize and guide the missile. Mechanical analog computers were very important in gun fire control in World War II, the Korean War and well past the Vietnam War; they were made in significant numbers. In the period 1930–1945 in the Netherlands, Johan van Veen developed an analogue computer to calculate and predict tidal currents when the geometry of the channels are changed. Around 1950, this idea was developed into the Deltar, a hydraulic analogy computer supporting the closure of estuaries in the southwest of the Netherlands (the Delta Works). The FERMIAC was an analog computer invented by physicist Enrico Fermi in 1947 to aid in his studies of neutron transport. Project Cyclone was an analog computer developed by Reeves in 1950 for the analysis and design of dynamic systems. Project Typhoon was an analog computer developed by RCA in 1952. It consisted of over 4,000 electron tubes and used 100 dials and 6,000 plug-in connectors to program. The MONIAC Computer was a hydraulic analogy of a national economy first unveiled in 1949. Computer Engineering Associates was spun out of Caltech in 1950 to provide commercial services using the "Direct Analogy Electric Analog Computer" ("the largest and most impressive general-purpose analyzer facility for the solution of field problems") developed there by Gilbert D. McCann, Charles H. Wilts, and Bart Locanthi. Educational analog computers illustrated the principles of analog calculation. The Heathkit EC-1, a $199 educational analog computer, was made by the Heath Company, US . It was programmed using patch cords that connected nine operational amplifiers and other components. General Electric also marketed an "educational" analog computer kit of a simple design in the early 1960s consisting of two transistor tone generators and three potentiometers wired such that the frequency of the oscillator was nulled when the potentiometer dials were positioned by hand to satisfy an equation. The relative resistance of the potentiometer was then equivalent to the formula of the equation being solved. Multiplication or division could be performed, depending on which dials were inputs and which was the output. Accuracy and resolution was limited and a simple slide rule was more accurate. However, the unit did demonstrate the basic principle. Analog computer designs were published in electronics magazines. One example is the PEAC (Practical Electronics analogue computer), published in Practical Electronics in the January 1968 edition. Another more modern hybrid computer design was published in Everyday Practical Electronics in 2002. An example described in the EPE hybrid computer was the flight of a VTOL aircraft such as the Harrier jump jet. The altitude and speed of the aircraft were calculated by the analog part of the computer and sent to a PC via a digital microprocessor and displayed on the PC screen. In industrial process control, analog loop controllers were used to automatically regulate temperature, flow, pressure, or other process conditions. The technology of these controllers ranged from purely mechanical integrators, through vacuum-tube and solid-state devices, to emulation of analog controllers by microprocessors. 
Electronic analog computers The similarity between linear mechanical components, such as springs and dashpots (viscous-fluid dampers), and electrical components, such as capacitors, inductors, and resistors is striking in terms of mathematics. They can be modeled using equations of the same form. However, the difference between these systems is what makes analog computing useful. Complex systems often are not amenable to pen-and-paper analysis, and require some form of testing or simulation. Complex mechanical systems, such as suspensions for racing cars, are expensive to fabricate and hard to modify. And taking precise mechanical measurements during high-speed tests adds further difficulty. By contrast, it is very inexpensive to build an electrical equivalent of a complex mechanical system, to simulate its behavior. Engineers arrange a few operational amplifiers (op amps) and some passive linear components to form a circuit that follows the same equations as the mechanical system being simulated. All measurements can be taken directly with an oscilloscope. In the circuit, the (simulated) stiffness of the spring, for instance, can be changed by adjusting the parameters of an integrator. The electrical system is an analogy to the physical system, hence the name, but it is much less expensive than a mechanical prototype, much easier to modify, and generally safer. The electronic circuit can also be made to run faster or slower than the physical system being simulated. Experienced users of electronic analog computers said that they offered a comparatively intimate control and understanding of the problem, relative to digital simulations. Electronic analog computers are especially well-suited to representing situations described by differential equations. Historically, they were often used when a system of differential equations proved very difficult to solve by traditional means. As a simple example, the dynamics of a spring-mass system can be described by the equation m·y″ + d·y′ + k·y = m·g, with y as the vertical position of a mass m, d the damping coefficient, k the spring constant and g the gravity of Earth. For analog computing, the equation is programmed as y″ = −(d/m)·y′ − (k/m)·y + g. The equivalent analog circuit consists of two integrators for the state variables y′ (speed) and y (position), one inverter, and three potentiometers. Electronic analog computers have drawbacks: the value of the circuit's supply voltage limits the range over which the variables may vary (since the value of a variable is represented by a voltage on a particular wire). Therefore, each problem must be scaled so its parameters and dimensions can be represented using voltages that the circuit can supply, e.g., the expected magnitudes of the velocity and the position of a spring pendulum. Improperly scaled variables can have their values "clamped" by the limits of the supply voltage. Or if scaled too small, they can suffer from higher noise levels. Either problem can cause the circuit to produce an incorrect simulation of the physical system. (Modern digital simulations are much more robust to widely varying values of their variables, but are still not entirely immune to these concerns: floating-point digital calculations support a huge dynamic range, but can suffer from imprecision if tiny differences of huge values lead to numerical instability.) The precision of the analog computer readout was limited chiefly by the precision of the readout equipment used, generally three or four significant figures. (Modern digital simulations are much better in this area. 
Digital arbitrary-precision arithmetic can provide any desired degree of precision.) However, in most cases the precision of an analog computer is absolutely sufficient given the uncertainty of the model characteristics and its technical parameters. Many small computers dedicated to specific computations are still part of industrial regulation equipment, but from the 1950s to the 1970s, general-purpose analog computers were the only systems fast enough for real time simulation of dynamic systems, especially in the aircraft, military and aerospace field. In the 1960s, the major manufacturer was Electronic Associates of Princeton, New Jersey, with its 231R Analog Computer (vacuum tubes, 20 integrators) and subsequently its EAI 8800 Analog Computer (solid state operational amplifiers, 64 integrators). Its challenger was Applied Dynamics of Ann Arbor, Michigan. Although the basic technology for analog computers is usually operational amplifiers (also called "continuous current amplifiers" because they have no low frequency limitation), in the 1960s an attempt was made in the French ANALAC computer to use an alternative technology: medium frequency carrier and non dissipative reversible circuits. In the 1970s, every large company and administration concerned with problems in dynamics had an analog computing center, such as: In the US: NASA (Huntsville, Houston), Martin Marietta (Orlando), Lockheed, Westinghouse, Hughes Aircraft In Europe: CEA (French Atomic Energy Commission), MATRA, Aérospatiale, BAC (British Aircraft Corporation). Construction An analog computing machine consists of several main components: Signal sources: These are blocks that generate analog signals, such as voltage or current, to represent input data and operations. Amplifiers: Amplifiers are used to boost analog signals and maintain their amplitudes throughout the system. They amplify weak input signals and compensate for signal losses during transmission. Filters: Filters are used to modify the spectrum of signals by suppressing or amplifying specific frequencies. They allow the isolation or suppression of certain signal components depending on the computational requirements. Modulators and demodulators: Modulators convert information into analog signals that can be transmitted through a communication channel, and demodulators perform the reverse transformation, recovering the original data from modulated signals. Adders, multipliers, log converters, and other calculation stages: These perform arithmetic operations on analog signals. They can be used for mathematical operations such as addition, multiplication, exponentiation, integration, and differentiation. Storage and memory: Analog computing machines can use various forms of information storage, such as capacitors or inductors, to store intermediate results and memory. Feedback and control: Feedback and control blocks are used to maintain the stability and accuracy of the analog computing machine. They may include regulation systems and error correction. Patch panel: Analog computing machines also feature a patch panel or patch field. A patch panel is a physical structure on which connectors or contacts are placed to interconnect various components and modules within the system. On the patch panel, various connections and routes can be set and switched to configure the machine and determine signal flows. This allows users to flexibly configure and reconfigure the analog computing system to perform specific tasks. 
Patch panels are used to control data flows, connect and disconnect connections between various blocks of the system, including signal sources, amplifiers, filters, and other components. They provide convenience and flexibility in configuring and experimenting with analog computations. Patch panels can be presented as a physical panel with connectors or, in more modern systems, as a software interface that allows virtual management of signal connections and routes. Hardware interfaces: Interfaces provide means of interaction with the machine, for example, for parameter control or data transmission. Output device: this device is designed to present the results of analog computations in a convenient form for the user or to transmit the obtained data to other systems. Output devices in analog machines can vary depending on the specific goals of the system. For example, they could be graphical indicators, oscilloscopes, graphic recording devices, TV connection module, voltmeter, etc. These devices allow for the visualization of analog signals and the representation of the results of measurements or mathematical operations. Power source and stabilizers. These are just general blocks that can be found in a typical analog computing machine. The actual configuration and components may vary depending on the specific implementation and the intended use of the machine. Analog–digital hybrids Analog computing devices are fast; digital computing devices are more versatile and accurate. The idea behind an analog-digital hybrid is to combine the two processes for the best efficiency. An example of such hybrid elementary device is the hybrid multiplier, where one input is an analog signal, the other input is a digital signal and the output is analog. It acts as an analog potentiometer, upgradable digitally. This kind of hybrid technique is mainly used for fast dedicated real time computation when computing time is very critical, as signal processing for radars and generally for controllers in embedded systems. In the early 1970s, analog computer manufacturers tried to tie together their analog computers with a digital computers to get the advantages of the two techniques. In such systems, the digital computer controlled the analog computer, providing initial set-up, initiating multiple analog runs, and automatically feeding and collecting data. The digital computer may also participate to the calculation itself using analog-to-digital and digital-to-analog converters. The largest manufacturer of hybrid computers was Electronic Associates. Their hybrid computer model 8900 was made of a digital computer and one or more analog consoles. These systems were mainly dedicated to large projects such as the Apollo program and Space Shuttle at NASA, or Ariane in Europe, especially during the integration step where at the beginning everything is simulated, and progressively real components replace their simulated parts. Only one company was known as offering general commercial computing services on its hybrid computers, CISI of France, in the 1970s. The best reference in this field is the 100,000 simulation runs for each certification of the automatic landing systems of Airbus and Concorde aircraft. After 1980, purely digital computers progressed more and more rapidly and were fast enough to compete with analog computers. One key to the speed of analog computers was their fully parallel computation, but this was also a limitation. 
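The spring-mass example given earlier under "Electronic analog computers" can also be integrated digitally, which makes the structure of the analog program explicit: one summing stage forms the highest derivative, and two cascaded integrations recover speed and position, each of which would occupy its own hardware block on an analog machine. The Python sketch below uses simple Euler integration and hypothetical parameter values; it is a digital stand-in for illustration, not an analog-computer program.

```python
# Digital counterpart (a sketch) of the spring-mass equation
#   y'' = -(d/m)*y' - (k/m)*y + g
# integrated with two cascaded integrations per step, mirroring the
# two-integrator analog patch. Parameter values are hypothetical.
m, d, k, g = 1.0, 2.0, 20.0, 9.81      # mass, damping, spring constant, gravity
y, v = 0.0, 0.0                        # position and speed (the two integrator outputs)
dt, t_end = 1e-3, 5.0

t = 0.0
while t < t_end:
    a = -(d / m) * v - (k / m) * y + g # summing junction: acceleration
    v += a * dt                        # first integrator:  speed
    y += v * dt                        # second integrator: position
    t += dt

print(f"y(t={t_end}) ~= {y:.4f} m; static deflection m*g/k = {m * g / k:.4f} m")
```

After the transient dies out, the computed position approaches the static deflection m·g/k, the same steady state the analog circuit would settle to.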
The more equations required for a problem, the more analog components were needed, even when the problem wasn't time critical. "Programming" a problem meant interconnecting the analog operators; even with a removable wiring panel this was not very versatile. Implementations Mechanical analog computers While a wide variety of mechanisms have been developed throughout history, some stand out because of their theoretical importance, or because they were manufactured in significant quantities. Most practical mechanical analog computers of any significant complexity used rotating shafts to carry variables from one mechanism to another. Cables and pulleys were used in a Fourier synthesizer, a tide-predicting machine, which summed the individual harmonic components. Another category, not nearly as well known, used rotating shafts only for input and output, with precision racks and pinions. The racks were connected to linkages that performed the computation. At least one U.S. Naval sonar fire control computer of the later 1950s, made by Librascope, was of this type, as was the principal computer in the Mk. 56 Gun Fire Control System. Online, there is a remarkably clear illustrated reference (OP 1140) that describes the fire control computer mechanisms. For adding and subtracting, precision miter-gear differentials were in common use in some computers; the Ford Instrument Mark I Fire Control Computer contained about 160 of them. Integration with respect to another variable was done by a rotating disc driven by one variable. Output came from a pick-off device (such as a wheel) positioned at a radius on the disc proportional to the second variable. (A carrier with a pair of steel balls supported by small rollers worked especially well. A roller, its axis parallel to the disc's surface, provided the output. It was held against the pair of balls by a spring.) Arbitrary functions of one variable were provided by cams, with gearing to convert follower movement to shaft rotation. Functions of two variables were provided by three-dimensional cams. In one good design, one of the variables rotated the cam. A hemispherical follower moved its carrier on a pivot axis parallel to that of the cam's rotating axis. Pivoting motion was the output. The second variable moved the follower along the axis of the cam. One practical application was ballistics in gunnery. Coordinate conversion from polar to rectangular was done by a mechanical resolver (called a "component solver" in US Navy fire control computers). Two discs on a common axis positioned a sliding block with pin (stubby shaft) on it. One disc was a face cam, and a follower on the block in the face cam's groove set the radius. The other disc, closer to the pin, contained a straight slot in which the block moved. The input angle rotated the latter disc (the face cam disc, for an unchanging radius, rotated with the other (angle) disc; a differential and a few gears did this correction). Referring to the mechanism's frame, the location of the pin corresponded to the tip of the vector represented by the angle and magnitude inputs. Mounted on that pin was a square block. Rectilinear-coordinate outputs (both sine and cosine, typically) came from two slotted plates, each slot fitting on the block just mentioned. The plates moved in straight lines, the movement of one plate at right angles to that of the other. The slots were at right angles to the direction of movement. Each plate, by itself, was like a Scotch yoke, known to steam engine enthusiasts. 
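The "component solver" described above computes the rectangular components of a vector from its magnitude and angle. Done digitally, the same computation is a direct use of sine and cosine; the range and bearing values below are hypothetical.

```python
# Digital sketch of the mechanical resolver ("component solver") described above:
# an angle and magnitude in, rectangular components out.
import math

def resolve(magnitude, angle_deg):
    """Return the (x, y) components of a vector, as the resolver's slotted plates would."""
    theta = math.radians(angle_deg)
    return magnitude * math.cos(theta), magnitude * math.sin(theta)

rng, bearing = 12000.0, 37.5      # hypothetical target range (yards) and bearing (degrees)
x, y = resolve(rng, bearing)
print(f"x = {x:.1f}, y = {y:.1f}")
```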
During World War II, a similar mechanism converted rectilinear to polar coordinates, but it was not particularly successful and was eliminated in a significant redesign (USN, Mk. 1 to Mk. 1A). Multiplication was done by mechanisms based on the geometry of similar right triangles. Using the trigonometric terms for a right triangle, specifically opposite, adjacent, and hypotenuse, the adjacent side was fixed by construction. One variable changed the magnitude of the opposite side. In many cases, this variable changed sign; the hypotenuse could coincide with the adjacent side (a zero input), or move beyond the adjacent side, representing a sign change. Typically, a pinion-operated rack moving parallel to the (trig.-defined) opposite side would position a slide with a slot coincident with the hypotenuse. A pivot on the rack let the slide's angle change freely. At the other end of the slide (the angle, in trig. terms), a block on a pin fixed to the frame defined the vertex between the hypotenuse and the adjacent side. At any distance along the adjacent side, a line perpendicular to it intersects the hypotenuse at a particular point. The distance between that point and the adjacent side is some fraction that is the product of 1 the distance from the vertex, and 2 the magnitude of the opposite side. The second input variable in this type of multiplier positions a slotted plate perpendicular to the adjacent side. That slot contains a block, and that block's position in its slot is determined by another block right next to it. The latter slides along the hypotenuse, so the two blocks are positioned at a distance from the (trig.) adjacent side by an amount proportional to the product. To provide the product as an output, a third element, another slotted plate, also moves parallel to the (trig.) opposite side of the theoretical triangle. As usual, the slot is perpendicular to the direction of movement. A block in its slot, pivoted to the hypotenuse block positions it. A special type of integrator, used at a point where only moderate accuracy was needed, was based on a steel ball, instead of a disc. It had two inputs, one to rotate the ball, and the other to define the angle of the ball's rotating axis. That axis was always in a plane that contained the axes of two movement pick-off rollers, quite similar to the mechanism of a rolling-ball computer mouse (in that mechanism, the pick-off rollers were roughly the same diameter as the ball). The pick-off roller axes were at right angles. A pair of rollers "above" and "below" the pick-off plane were mounted in rotating holders that were geared together. That gearing was driven by the angle input, and established the rotating axis of the ball. The other input rotated the "bottom" roller to make the ball rotate. Essentially, the whole mechanism, called a component integrator, was a variable-speed drive with one motion input and two outputs, as well as an angle input. The angle input varied the ratio (and direction) of coupling between the "motion" input and the outputs according to the sine and cosine of the input angle. Although they did not accomplish any computation, electromechanical position servos (aka. torque amplifiers) were essential in mechanical analog computers of the "rotating-shaft" type for providing operating torque to the inputs of subsequent computing mechanisms, as well as driving output data-transmission devices such as large torque-transmitter synchros in naval computers. 
Other readout mechanisms, not directly part of the computation, included internal odometer-like counters with interpolating drum dials for indicating internal variables, and mechanical multi-turn limit stops. Considering that accurately controlled rotational speed in analog fire-control computers was a basic element of their accuracy, there was a motor with its average speed controlled by a balance wheel, hairspring, jeweled-bearing differential, a twin-lobe cam, and spring-loaded contacts (ship's AC power frequency was not necessarily accurate, nor dependable enough, when these computers were designed). Electronic analog computers Electronic analog computers typically have front panels with numerous jacks (single-contact sockets) that permit patch cords (flexible wires with plugs at both ends) to create the interconnections that define the problem setup. In addition, there are precision high-resolution potentiometers (variable resistors) for setting up (and, when needed, varying) scale factors. In addition, there is usually a zero-center analog pointer-type meter for modest-accuracy voltage measurement. Stable, accurate voltage sources provide known magnitudes. Typical electronic analog computers contain anywhere from a few to a hundred or more operational amplifiers ("op amps"), named because they perform mathematical operations. Op amps are a particular type of feedback amplifier with very high gain and stable input (low and stable offset). They are always used with precision feedback components that, in operation, all but cancel out the currents arriving from input components. The majority of op amps in a representative setup are summing amplifiers, which add and subtract analog voltages, providing the result at their output jacks. As well, op amps with capacitor feedback are usually included in a setup; they integrate the sum of their inputs with respect to time. Integrating with respect to another variable is the nearly exclusive province of mechanical analog integrators; it is almost never done in electronic analog computers. However, given that a problem solution does not change with time, time can serve as one of the variables. Other computing elements include analog multipliers, nonlinear function generators, and analog comparators. Electrical elements such as inductors and capacitors used in electrical analog computers had to be carefully manufactured to reduce non-ideal effects. For example, in the construction of AC power network analyzers, one motive for using higher frequencies for the calculator (instead of the actual power frequency) was that higher-quality inductors could be more easily made. Many general-purpose analog computers avoided the use of inductors entirely, re-casting the problem in a form that could be solved using only resistive and capacitive elements, since high-quality capacitors are relatively easy to make. The use of electrical properties in analog computers means that calculations are normally performed in real time (or faster), at a speed determined mostly by the frequency response of the operational amplifiers and other computing elements. In the history of electronic analog computers, there were some special high-speed types. Nonlinear functions and calculations can be constructed to a limited precision (three or four digits) by designing function generators—special circuits of various combinations of resistors and diodes to provide the nonlinearity. Typically, as the input voltage increases, progressively more diodes conduct. 
When compensated for temperature, the forward voltage drop of a transistor's base-emitter junction can provide a usably accurate logarithmic or exponential function. Op amps scale the output voltage so that it is usable with the rest of the computer. Any physical process that models some computation can be interpreted as an analog computer. Some examples, invented for the purpose of illustrating the concept of analog computation, include using a bundle of spaghetti as a model of sorting numbers; a board, a set of nails, and a rubber band as a model of finding the convex hull of a set of points; and strings tied together as a model of finding the shortest path in a network. These are all described in Dewdney (1984). Components Analog computers often have a complicated framework, but they have, at their core, a set of key components that perform the calculations. The operator manipulates these through the computer's framework. Key hydraulic components might include pipes, valves and containers. Key mechanical components might include rotating shafts for carrying data within the computer, miter gear differentials, disc/ball/roller integrators, cams (2-D and 3-D), mechanical resolvers and multipliers, and torque servos. Key electrical/electronic components might include: precision resistors and capacitors operational amplifiers multipliers potentiometers fixed-function generators The core mathematical operations used in an electric analog computer are: addition integration with respect to time inversion multiplication exponentiation logarithm division In some analog computer designs, multiplication is much preferred to division. Division is carried out with a multiplier in the feedback path of an Operational Amplifier. Differentiation with respect to time is not frequently used, and in practice is avoided by redefining the problem when possible. It corresponds in the frequency domain to a high-pass filter, which means that high-frequency noise is amplified; differentiation also risks instability. Limitations In general, analog computers are limited by non-ideal effects. An analog signal is composed of four basic components: DC and AC magnitudes, frequency, and phase. The real limits of range on these characteristics limit analog computers. Some of these limits include the operational amplifier offset, finite gain, and frequency response, noise floor, non-linearities, temperature coefficient, and parasitic effects within semiconductor devices. For commercially available electronic components, ranges of these aspects of input and output signals are always figures of merit. Decline In the 1950s to 1970s, digital computers based on first vacuum tubes, transistors, integrated circuits and then micro-processors became more economical and precise. This led digital computers to largely replace analog computers. Even so, some research in analog computation is still being done. A few universities still use analog computers to teach control system theory. The American company Comdyna manufactured small analog computers. At Indiana University Bloomington, Jonathan Mills has developed the Extended Analog Computer based on sampling voltages in a foam sheet. At the Harvard Robotics Laboratory, analog computation is a research topic. Lyric Semiconductor's error correction circuits use analog probabilistic signals. Slide rules are still used as flight computers in flight training. 
Resurgence With the development of very-large-scale integration (VLSI) technology, Yannis Tsividis' group at Columbia University has been revisiting analog/hybrid computers design in standard CMOS process. Two VLSI chips have been developed, an 80th-order analog computer (250 nm) by Glenn Cowan in 2005 and a 4th-order hybrid computer (65 nm) developed by Ning Guo in 2015, both targeting at energy-efficient ODE/PDE applications. Glenn's chip contains 16 macros, in which there are 25 analog computing blocks, namely integrators, multipliers, fanouts, few nonlinear blocks. Ning's chip contains one macro block, in which there are 26 computing blocks including integrators, multipliers, fanouts, ADCs, SRAMs and DACs. Arbitrary nonlinear function generation is made possible by the ADC+SRAM+DAC chain, where the SRAM block stores the nonlinear function data. The experiments from the related publications revealed that VLSI analog/hybrid computers demonstrated about 1–2 orders magnitude of advantage in both solution time and energy while achieving accuracy within 5%, which points to the promise of using analog/hybrid computing techniques in the area of energy-efficient approximate computing. In 2016, a team of researchers developed a compiler to solve differential equations using analog circuits. Analog computers are also used in neuromorphic computing, and in 2021 a group of researchers have shown that a specific type of artificial neural network called a spiking neural network was able to work with analog neuromorphic computers. In 2021, the German company anabrid GmbH began to produce THE ANALOG THING (abbreviated THAT), a small low-cost analog computer mainly for educational and scientific use. The company is also constructing analog mainframes and hybrid computers. Practical examples These are examples of analog computers that have been constructed or practically used: Analog Paradim, a modular analog computer produced by anabrid Boeing B-29 Superfortress Central Fire Control System Deltar E6B flight computer Ishiguro Storm Surge Computer Kerrison Predictor Leonardo Torres y Quevedo's Analogue Calculating Machines based on "fusee sans fin" Librascope, aircraft weight and balance computer Mechanical computer Mechanical watch Mechanical integrators, for example, the planimeter Mischgerät (V-2 guidance computer) MONIAC, economic modelling Nomogram Norden bombsight Rangekeeper, and related fire control computers Scanimate SR-71 inlet control system (fast adjustment of inlet geometry to prevent super-sonic shock waves from causing engine flame-out at high mach numbers) THE ANALOG THING, a small analog computer by anabrid Torpedo Data Computer Torquetum Water integrator Analog (audio) synthesizers can also be viewed as a form of analog computer, and their technology was originally based in part on electronic analog computer technology. The ARP 2600's Ring Modulator was actually a moderate-accuracy analog multiplier. The Simulation Council (or Simulations Council) was an association of analog computer users in US. It is now known as The Society for Modeling and Simulation International. The Simulation Council newsletters from 1952 to 1963 are available online and show the concerns and technologies at the time, and the common use of analog computers for missilry.
Technology
Computer hardware
null
2431
https://en.wikipedia.org/wiki/Minute%20and%20second%20of%20arc
Minute and second of arc
A minute of arc, arcminute (arcmin), arc minute, or minute arc, denoted by the symbol , is a unit of angular measurement equal to of one degree. Since one degree is of a turn, or complete rotation, one arcminute is of a turn. The nautical mile (nmi) was originally defined as the arc length of a minute of latitude on a spherical Earth, so the actual Earth's circumference is very near . A minute of arc is of a radian. A second of arc, arcsecond (arcsec), or arc second, denoted by the symbol , is of an arcminute, of a degree, of a turn, and (about ) of a radian. These units originated in Babylonian astronomy as sexagesimal (base 60) subdivisions of the degree; they are used in fields that involve very small angles, such as astronomy, optometry, ophthalmology, optics, navigation, land surveying, and marksmanship. To express even smaller angles, standard SI prefixes can be employed; the milliarcsecond (mas) and microarcsecond (μas), for instance, are commonly used in astronomy. For a three-dimensional area such as on a sphere, square arcminutes or seconds may be used. Symbols and abbreviations The prime symbol () designates the arcminute, though a single quote (U+0027) is commonly used where only ASCII characters are permitted. One arcminute is thus written as 1′. It is also abbreviated as arcmin or amin. Similarly, double prime (U+2033) designates the arcsecond, though a double quote (U+0022) is commonly used where only ASCII characters are permitted. One arcsecond is thus written as 1″. It is also abbreviated as arcsec or asec. In celestial navigation, seconds of arc are rarely used in calculations, the preference usually being for degrees, minutes, and decimals of a minute, for example, written as 42° 25.32′ or 42° 25.322′. This notation has been carried over into marine GPS and aviation GPS receivers, which normally display latitude and longitude in the latter format by default. Common examples The average apparent diameter of the full Moon is about 31 arcminutes, or 0.52°. One arcminute is the approximate distance two contours can be separated by, and still be distinguished by, a person with 20/20 vision. One arcsecond is the approximate angle subtended by a U.S. dime coin (18 mm) at a distance of . An arcsecond is also the angle subtended by an object of diameter at a distance of one astronomical unit, an object of diameter at one light-year, an object of diameter one astronomical unit () at a distance of one parsec, per the definition of the latter. One milliarcsecond is about the size of a half dollar, seen from a distance equal to that between the Washington Monument and the Eiffel Tower. One microarcsecond is about the size of a period at the end of a sentence in the Apollo mission manuals left on the Moon as seen from Earth. One nanoarcsecond is about the size of a penny on Neptune's moon Triton as observed from Earth. Also notable examples of size in arcseconds are: Hubble Space Telescope has calculational resolution of 0.05 arcseconds and actual resolution of almost 0.1 arcseconds, which is close to the diffraction limit. At crescent phase, Venus measures between 60.2 and 66 seconds of arc. History The concepts of degrees, minutes, and seconds—as they relate to the measure of both angles and time—derive from Babylonian astronomy and time-keeping. Influenced by the Sumerians, the ancient Babylonians divided the Sun's perceived motion across the sky over the course of one full day into 360 degrees. Each degree was subdivided into 60 minutes and each minute into 60 seconds. 
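The common examples above all follow from the small-angle relation between linear size, distance, and subtended angle. The Python sketch below applies it; the dime diameter (18 mm) comes from the text, while the Moon figures (diameter ≈ 3,474 km at ≈ 384,400 km) are approximate round numbers used only for illustration.

```python
import math

ARCSEC_PER_RAD = 180 / math.pi * 3600   # ≈ 206,265 arcseconds per radian

def subtended_arcsec(size, distance):
    """Small-angle approximation: angle in arcseconds subtended by an
    object of linear 'size' seen at 'distance' (same units for both)."""
    return (size / distance) * ARCSEC_PER_RAD

def distance_for_arcsec(size, arcsec):
    """Distance at which an object of 'size' subtends 'arcsec' seconds of arc."""
    return size * ARCSEC_PER_RAD / arcsec

# A U.S. dime (18 mm = 0.018 m) subtending one arcsecond:
print(f"{distance_for_arcsec(0.018, 1.0) / 1000:.1f} km")     # ≈ 3.7 km

# The full Moon (≈ 3,474 km diameter at ≈ 384,400 km), in arcminutes:
print(f"{subtended_arcsec(3474, 384400) / 60:.1f} arcmin")     # ≈ 31.1 arcmin
```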
Thus, one Babylonian degree was equal to four minutes in modern terminology, one Babylonian minute to four modern seconds, and one Babylonian second to (approximately 0.067) of a modern second. Uses Astronomy Since antiquity, the arcminute and arcsecond have been used in astronomy: in the ecliptic coordinate system as latitude (β) and longitude (λ); in the horizon system as altitude (Alt) and azimuth (Az); and in the equatorial coordinate system as declination (δ). All are measured in degrees, arcminutes, and arcseconds. The principal exception is right ascension (RA) in equatorial coordinates, which is measured in time units of hours, minutes, and seconds. Contrary to what one might assume, minutes and seconds of arc do not directly relate to minutes and seconds of time, in either the rotational frame of the Earth around its own axis (day), or the Earth's rotational frame around the Sun (year). The Earth's rotational rate around its own axis is 15 minutes of arc per minute of time (360 degrees / 24 hours in day); the Earth's rotational rate around the Sun (not entirely constant) is roughly 24 minutes of time per minute of arc (from 24 hours in day), which tracks the annual progression of the Zodiac. Both of these factor in what astronomical objects you can see from surface telescopes (time of year) and when you can best see them (time of day), but neither are in unit correspondence. For simplicity, the explanations given assume a degree/day in the Earth's annual rotation around the Sun, which is off by roughly 1%. The same ratios hold for seconds, due to the consistent factor of 60 on both sides. The arcsecond is also often used to describe small astronomical angles such as the angular diameters of planets (e.g. the angular diameter of Venus which varies between 10″ and 60″); the proper motion of stars; the separation of components of binary star systems; and parallax, the small change of position of a star or Solar System body as the Earth revolves about the Sun. These small angles may also be written in milliarcseconds (mas), or thousandths of an arcsecond. The unit of distance called the parsec, abbreviated from the parallax angle of one arc second, was developed for such parallax measurements. The distance from the Sun to a celestial object is the reciprocal of the angle, measured in arcseconds, of the object's apparent movement caused by parallax. The European Space Agency's astrometric satellite Gaia, launched in 2013, can approximate star positions to 7 microarcseconds (μas). Apart from the Sun, the star with the largest angular diameter from Earth is R Doradus, a red giant with a diameter of 0.05″. Because of the effects of atmospheric blurring, ground-based telescopes will smear the image of a star to an angular diameter of about 0.5″; in poor conditions this increases to 1.5″ or even more. The dwarf planet Pluto has proven difficult to resolve because its angular diameter is about 0.1″. Techniques exist for improving seeing on the ground. Adaptive optics, for example, can produce images around 0.05″ on a 10 m class telescope. Space telescopes are not affected by the Earth's atmosphere but are diffraction limited. For example, the Hubble Space Telescope can reach an angular size of stars down to about 0.1″. Cartography Minutes (′) and seconds (″) of arc are also used in cartography and navigation. 
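Two of the relations described above lend themselves to a short worked example: right ascension expressed in time units (24 hours of RA correspond to 360°, so one minute of time corresponds to 15 minutes of arc) and the parsec, whose defining property is that distance in parsecs is the reciprocal of the parallax angle in arcseconds. The sketch below assumes an illustrative parallax of about 0.77″ (roughly that of Proxima Centauri) and an arbitrary RA value.

```python
def parallax_to_parsecs(parallax_arcsec):
    """Distance in parsecs is the reciprocal of the parallax angle in arcseconds."""
    return 1.0 / parallax_arcsec

def ra_time_to_degrees(hours, minutes=0.0, seconds=0.0):
    """Right ascension given in time units: 24 h of RA = 360°, so
    1 h = 15°, 1 minute of time = 15 arcmin, 1 second of time = 15 arcsec."""
    return (hours + minutes / 60 + seconds / 3600) * 15

# Parallax of about 0.77 arcsec (illustrative, roughly Proxima Centauri):
print(f"{parallax_to_parsecs(0.77):.2f} pc")      # ≈ 1.30 pc

# An RA of 5 h 35 m expressed in degrees:
print(f"{ra_time_to_degrees(5, 35):.2f} deg")     # 83.75 deg
```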
At sea level one minute of arc along the equator equals exactly one geographical mile (not to be confused with international mile or statute mile) along the Earth's equator or approximately . A second of arc, one sixtieth of this amount, is roughly . The exact distance varies along meridian arcs or any other great circle arcs because the figure of the Earth is slightly oblate (bulges a third of a percent at the equator). Positions are traditionally given using degrees, minutes, and seconds of arcs for latitude, the arc north or south of the equator, and for longitude, the arc east or west of the Prime Meridian. Any position on or above the Earth's reference ellipsoid can be precisely given with this method. However, when it is inconvenient to use base-60 for minutes and seconds, positions are frequently expressed as decimal fractional degrees to an equal amount of precision. Degrees given to three decimal places ( of a degree) have about the precision of degrees-minutes-seconds ( of a degree) and specify locations within about . For navigational purposes positions are given in degrees and decimal minutes, for instance The Needles lighthouse is at 50º 39.734’N 001º 35.500’W. Property cadastral surveying Related to cartography, property boundary surveying using the metes and bounds system and cadastral surveying relies on fractions of a degree to describe property lines' angles in reference to cardinal directions. A boundary "mete" is described with a beginning reference point, the cardinal direction North or South followed by an angle less than 90 degrees and a second cardinal direction, and a linear distance. The boundary runs the specified linear distance from the beginning point, the direction of the distance being determined by rotating the first cardinal direction the specified angle toward the second cardinal direction. For example, North 65° 39′ 18″ West 85.69 feet would describe a line running from the starting point 85.69 feet in a direction 65° 39′ 18″ (or 65.655°) away from north toward the west. Firearms The arcminute is commonly found in the firearms industry and literature, particularly concerning the precision of rifles, though the industry refers to it as minute of angle (MOA). It is especially popular as a unit of measurement with shooters familiar with the imperial measurement system because 1 MOA subtends a circle with a diameter of 1.047 inches (which is often rounded to just 1 inch) at 100 yards ( at or 2.908 cm at 100 m), a traditional distance on American target ranges. The subtension is linear with the distance, for example, at 500 yards, 1 MOA subtends 5.235 inches, and at 1000 yards 1 MOA subtends 10.47 inches. Since many modern telescopic sights are adjustable in half (), quarter () or eighth () MOA increments, also known as clicks, zeroing and adjustments are made by counting 2, 4 and 8 clicks per MOA respectively. For example, if the point of impact is 3 inches high and 1.5 inches left of the point of aim at 100 yards (which for instance could be measured by using a spotting scope with a calibrated reticle, or a target delineated for such purposes), the scope needs to be adjusted 3 MOA down, and 1.5 MOA right. Such adjustments are trivial when the scope's adjustment dials have a MOA scale printed on them, and even figuring the right number of clicks is relatively easy on scopes that click in fractions of MOA. 
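Since positions are quoted either in degrees-minutes-seconds or as decimal degrees, a conversion helper is often needed in practice. The sketch below is a minimal Python version; the 6,371 km mean Earth radius is an assumption used to estimate the ground distance of one arcsecond of latitude, and the lighthouse coordinates are the ones quoted in the text (degrees and decimal minutes).

```python
import math

def dms_to_decimal(degrees, minutes=0.0, seconds=0.0):
    """Convert degrees, (decimal) minutes and seconds of arc to decimal degrees."""
    sign = -1 if degrees < 0 else 1
    return sign * (abs(degrees) + minutes / 60 + seconds / 3600)

# Rough ground distance of one second of latitude, assuming a spherical Earth
# with a 6,371 km mean radius (the true figure varies slightly because the
# Earth is oblate, as noted above).
EARTH_RADIUS_M = 6_371_000
metres_per_arcsec = EARTH_RADIUS_M * math.radians(1 / 3600)
print(f"1 arcsecond of latitude ≈ {metres_per_arcsec:.1f} m")   # ≈ 30.9 m

# The Needles lighthouse position from the text (degrees and decimal minutes):
lat = dms_to_decimal(50, 39.734)
lon = -dms_to_decimal(1, 35.500)          # west longitude taken as negative
print(f"{lat:.5f}, {lon:.5f}")            # ≈ 50.66223, -1.59167
```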
This makes zeroing and adjustments much easier: To adjust a MOA scope 3 MOA down and 1.5 MOA right, the scope needs to be adjusted 3 × 2 = 6 clicks down and 1.5 x 2 = 3 clicks right To adjust a MOA scope 3 MOA down and 1.5 MOA right, the scope needs to be adjusted 3 x 4 = 12 clicks down and 1.5 × 4 = 6 clicks right To adjust a MOA scope 3 MOA down and 1.5 MOA right, the scope needs to be adjusted 3 x 8 = 24 clicks down and 1.5 × 8 = 12 clicks right Another common system of measurement in firearm scopes is the milliradian (mrad). Zeroing an mrad based scope is easy for users familiar with base ten systems. The most common adjustment value in mrad based scopes is  mrad (which approximates MOA). To adjust a  mrad scope 0.9 mrad down and 0.4 mrad right, the scope needs to be adjusted 9 clicks down and 4 clicks right (which equals approximately 3 and 1.5 MOA respectively). One thing to be aware of is that some MOA scopes, including some higher-end models, are calibrated such that an adjustment of 1 MOA on the scope knobs corresponds to exactly 1 inch of impact adjustment on a target at 100 yards, rather than the mathematically correct 1.047 inches. This is commonly known as the Shooter's MOA (SMOA) or Inches Per Hundred Yards (IPHY). While the difference between one true MOA and one SMOA is less than half of an inch even at 1000 yards, this error compounds significantly on longer range shots that may require adjustment upwards of 20–30 MOA to compensate for the bullet drop. If a shot requires an adjustment of 20 MOA or more, the difference between true MOA and SMOA will add up to 1 inch or more. In competitive target shooting, this might mean the difference between a hit and a miss. The physical group size equivalent to m minutes of arc can be calculated as follows: group size = tan() × distance. In the example previously given, for 1 minute of arc, and substituting 3,600 inches for 100 yards, 3,600 tan() ≈ 1.047 inches. In metric units 1 MOA at 100 metres ≈ 2.908 centimetres. Sometimes, a precision-oriented firearm's performance will be measured in MOA. This simply means that under ideal conditions (i.e. no wind, high-grade ammo, clean barrel, and a stable mounting platform such as a vise or a benchrest used to eliminate shooter error), the gun is capable of producing a group of shots whose center points (center-to-center) fit into a circle, the average diameter of circles in several groups can be subtended by that amount of arc. For example, a 1 MOA rifle should be capable, under ideal conditions, of repeatably shooting 1-inch groups at 100 yards. Most higher-end rifles are warrantied by their manufacturer to shoot under a given MOA threshold (typically 1 MOA or better) with specific ammunition and no error on the shooter's part. For example, Remington's M24 Sniper Weapon System is required to shoot 0.8 MOA or better, or be rejected from sale by quality control. Rifle manufacturers and gun magazines often refer to this capability as sub-MOA, meaning a gun consistently shooting groups under 1 MOA. This means that a single group of 3 to 5 shots at 100 yards, or the average of several groups, will measure less than 1 MOA between the two furthest shots in the group, i.e. all shots fall within 1 MOA. If larger samples are taken (i.e., more shots per group) then group size typically increases, however this will ultimately average out. If a rifle was truly a 1 MOA rifle, it would be just as likely that two consecutive shots land exactly on top of each other as that they land 1 MOA apart. 
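The subtension formula quoted above translates directly into code. The sketch below reproduces the 1.047-inch figure for 1 MOA at 100 yards and, under the assumption of a quarter-MOA-per-click scope, the 12-click and 6-click corrections from the worked example; the function names and the click value are illustrative.

```python
import math

def moa_subtension(moa, distance):
    """Linear span covered by 'moa' minutes of arc at 'distance'
    (same unit in and out): group size = tan(moa/60 degrees) * distance."""
    return math.tan(math.radians(moa / 60)) * distance

def clicks_for_correction(correction_moa, click_value_moa=0.25):
    """Turret clicks for a given correction; a quarter-MOA click is assumed."""
    return round(correction_moa / click_value_moa)

# 1 MOA at 100 yards (3,600 inches), as in the text:
print(f"{moa_subtension(1, 3600):.3f} inches")      # ≈ 1.047 inches

# The worked example: 3 MOA down and 1.5 MOA right on a 1/4 MOA scope.
print(clicks_for_correction(3.0))    # 12 clicks down
print(clicks_for_correction(1.5))    # 6 clicks right
```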
For 5-shot groups, based on 95% confidence, a rifle that normally shoots 1 MOA can be expected to shoot groups between 0.58 MOA and 1.47 MOA, although the majority of these groups will be under 1 MOA. What this means in practice is that if a rifle that shoots 1-inch groups on average at 100 yards shoots a group measuring 0.7 inches followed by a group that is 1.3 inches, this is not statistically abnormal. The metric system counterpart of the MOA is the milliradian (mrad or 'mil'), equal to 1/1000 of the target range, laid out on a circle that has the observer as centre and the target range as radius. The number of milliradians on such a full circle is therefore always equal to 2 × π × 1000, regardless of the target range. Therefore, 1 MOA ≈ 0.2909 mrad. This means that an object which spans 1 mrad on the reticle is at a range that is in metres equal to the object's linear size in millimetres (e.g. an object of 100 mm subtending 1 mrad is 100 metres away), so no conversion factor is required, contrary to the MOA system. A reticle with markings (hashes or dots) spaced one mrad apart (or a fraction of a mrad) is called a mrad reticle. If the markings are round they are called mil-dots. In the conversions below, values from mrad to metric units are exact (e.g. 0.1 mrad equals exactly 10 mm at 100 metres), while conversions of minutes of arc to both metric and imperial values are approximate: 1′ at 100 yards is about 1.047 inches; 1′ ≈ 0.291 mrad (or 29.1 mm at 100 m, approximately 30 mm at 100 m); 1 mrad ≈ 3.44′, so 0.1 mrad ≈ 0.344′; 0.1 mrad equals exactly 1 cm at 100 m, or exactly 0.36 inches at 100 yards. Human vision In humans, 20/20 vision is the ability to resolve a spatial pattern separated by a visual angle of one minute of arc, from a distance of twenty feet. A 20/20 letter subtends 5 minutes of arc total. Materials The deviation from parallelism between two surfaces, for instance in optical engineering, is usually measured in arcminutes or arcseconds. In addition, arcseconds are sometimes used in rocking curve (ω-scan) X-ray diffraction measurements of high-quality epitaxial thin films. Manufacturing Some measurement devices make use of arcminutes and arcseconds to measure angles when the object being measured is too small for direct visual inspection. For instance, a toolmaker's optical comparator will often include an option to measure in "minutes and seconds".
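The mrad relations above can be illustrated the same way: ranging a target of known size and converting between MOA and mrad using the 1 MOA ≈ 0.2909 mrad figure quoted in the text. The target size in the example is an arbitrary assumption.

```python
def range_from_mrad(size_mm, angle_mrad):
    """Milliradian ranging: a target of 'size_mm' millimetres spanning
    'angle_mrad' on the reticle is size_mm / angle_mrad metres away."""
    return size_mm / angle_mrad

def moa_to_mrad(moa):
    """1 MOA ≈ 0.2909 mrad (figure quoted in the text)."""
    return moa * 0.2909

# A 1,000 mm tall target spanning 2 mrad on the reticle:
print(f"{range_from_mrad(1000, 2):.0f} m")     # 500 m

# A 0.1 mrad click expressed in MOA (about a third of a minute of arc):
print(f"{0.1 / moa_to_mrad(1):.2f} MOA")       # ≈ 0.34 MOA
```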
Physical sciences
Angle
Basics and measurement
2443
https://en.wikipedia.org/wiki/Acceleration
Acceleration
In mechanics, acceleration is the rate of change of the velocity of an object with respect to time. Acceleration is one of several components of kinematics, the study of motion. Accelerations are vector quantities (in that they have magnitude and direction). The orientation of an object's acceleration is given by the orientation of the net force acting on that object. The magnitude of an object's acceleration, as described by Newton's Second Law, is the combined effect of two causes: the net balance of all external forces acting onto that object — magnitude is directly proportional to this net resulting force; that object's mass, depending on the materials out of which it is made — magnitude is inversely proportional to the object's mass. The SI unit for acceleration is metre per second squared (, ). For example, when a vehicle starts from a standstill (zero velocity, in an inertial frame of reference) and travels in a straight line at increasing speeds, it is accelerating in the direction of travel. If the vehicle turns, an acceleration occurs toward the new direction and changes its motion vector. The acceleration of the vehicle in its current direction of motion is called a linear (or tangential during circular motions) acceleration, the reaction to which the passengers on board experience as a force pushing them back into their seats. When changing direction, the effecting acceleration is called radial (or centripetal during circular motions) acceleration, the reaction to which the passengers experience as a centrifugal force. If the speed of the vehicle decreases, this is an acceleration in the opposite direction of the velocity vector (mathematically a negative, if the movement is unidimensional and the velocity is positive), sometimes called deceleration or retardation, and passengers experience the reaction to deceleration as an inertial force pushing them forward. Such negative accelerations are often achieved by retrorocket burning in spacecraft. Both acceleration and deceleration are treated the same, as they are both changes in velocity. Each of these accelerations (tangential, radial, deceleration) is felt by passengers until their relative (differential) velocity are neutralised in reference to the acceleration due to change in speed. Definition and properties Average acceleration An object's average acceleration over a period of time is its change in velocity, , divided by the duration of the period, . Mathematically, Instantaneous acceleration Instantaneous acceleration, meanwhile, is the limit of the average acceleration over an infinitesimal interval of time. In the terms of calculus, instantaneous acceleration is the derivative of the velocity vector with respect to time: As acceleration is defined as the derivative of velocity, , with respect to time and velocity is defined as the derivative of position, , with respect to time, acceleration can be thought of as the second derivative of with respect to : (Here and elsewhere, if motion is in a straight line, vector quantities can be substituted by scalars in the equations.) By the fundamental theorem of calculus, it can be seen that the integral of the acceleration function is the velocity function ; that is, the area under the curve of an acceleration vs. time ( vs. ) graph corresponds to the change of velocity. 
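As a small numerical illustration of these definitions, the sketch below computes an average acceleration as the change in velocity divided by the duration, and approximates the instantaneous acceleration by a finite difference; the velocity profile v(t) = 3t² is purely illustrative.

```python
# Average acceleration = Δv / Δt; the instantaneous value is approximated
# here by a central finite difference over a small time step.

def average_acceleration(v_initial, v_final, dt):
    return (v_final - v_initial) / dt

def velocity(t):
    """Example velocity profile v(t) = 3*t**2 (purely illustrative)."""
    return 3 * t**2

# Average acceleration between t = 1 s and t = 2 s:
print(average_acceleration(velocity(1), velocity(2), 1.0))    # 9.0 m/s^2

# Instantaneous acceleration at t = 1 s, approximated numerically
# (the exact derivative of 3*t**2 is 6*t, i.e. 6 m/s^2 here):
h = 1e-6
print((velocity(1 + h) - velocity(1 - h)) / (2 * h))          # ≈ 6.0 m/s^2
```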
Likewise, the integral of the jerk function , the derivative of the acceleration function, can be used to find the change of acceleration at a certain time: Units Acceleration has the dimensions of velocity (L/T) divided by time, i.e. L T−2. The SI unit of acceleration is the metre per second squared (m s−2); or "metre per second per second", as the velocity in metres per second changes by the acceleration value, every second. Other forms An object moving in a circular motion—such as a satellite orbiting the Earth—is accelerating due to the change of direction of motion, although its speed may be constant. In this case it is said to be undergoing centripetal (directed towards the center) acceleration. Proper acceleration, the acceleration of a body relative to a free-fall condition, is measured by an instrument called an accelerometer. In classical mechanics, for a body with constant mass, the (vector) acceleration of the body's center of mass is proportional to the net force vector (i.e. sum of all forces) acting on it (Newton's second law): where is the net force acting on the body, is the mass of the body, and is the center-of-mass acceleration. As speeds approach the speed of light, relativistic effects become increasingly large. Tangential and centripetal acceleration The velocity of a particle moving on a curved path as a function of time can be written as: with equal to the speed of travel along the path, and a unit vector tangent to the path pointing in the direction of motion at the chosen moment in time. Taking into account both the changing speed and the changing direction of , the acceleration of a particle moving on a curved path can be written using the chain rule of differentiation for the product of two functions of time as: where is the unit (inward) normal vector to the particle's trajectory (also called the principal normal), and is its instantaneous radius of curvature based upon the osculating circle at time . The components are called the tangential acceleration and the normal or radial acceleration (or centripetal acceleration in circular motion, see also circular motion and centripetal force), respectively. Geometrical analysis of three-dimensional space curves, which explains tangent, (principal) normal and binormal, is described by the Frenet–Serret formulas. Special cases Uniform acceleration Uniform or constant acceleration is a type of motion in which the velocity of an object changes by an equal amount in every equal time period. A frequently cited example of uniform acceleration is that of an object in free fall in a uniform gravitational field. The acceleration of a falling body in the absence of resistances to motion is dependent only on the gravitational field strength (also called acceleration due to gravity). By Newton's Second Law the force acting on a body is given by: Because of the simple analytic properties of the case of constant acceleration, there are simple formulas relating the displacement, initial and time-dependent velocities, and acceleration to the time elapsed: where is the elapsed time, is the initial displacement from the origin, is the displacement from the origin at time , is the initial velocity, is the velocity at time , and is the uniform rate of acceleration. In particular, the motion can be resolved into two orthogonal parts, one of constant velocity and the other according to the above equations. 
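The constant-acceleration relations referred to above (displacement growing with the square of time, velocity growing linearly with time) can be written out as a short sketch. The free-fall example assumes g ≈ 9.81 m/s² and neglects air resistance.

```python
def position(x0, v0, a, t):
    """Displacement under uniform acceleration: x = x0 + v0*t + 0.5*a*t**2."""
    return x0 + v0 * t + 0.5 * a * t**2

def speed(v0, a, t):
    """Velocity under uniform acceleration: v = v0 + a*t."""
    return v0 + a * t

# Free fall from rest for 3 s, taking g ≈ 9.81 m/s^2 downward:
g = 9.81
print(f"fallen {position(0, 0, g, 3):.1f} m")    # ≈ 44.1 m
print(f"moving at {speed(0, g, 3):.1f} m/s")     # ≈ 29.4 m/s
```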
As Galileo showed, the net result is parabolic motion, which describes, e.g., the trajectory of a projectile in vacuum near the surface of Earth. Circular motion In uniform circular motion, that is moving with constant speed along a circular path, a particle experiences an acceleration resulting from the change of the direction of the velocity vector, while its magnitude remains constant. The derivative of the location of a point on a curve with respect to time, i.e. its velocity, turns out to be always exactly tangential to the curve, respectively orthogonal to the radius in this point. Since in uniform motion the velocity in the tangential direction does not change, the acceleration must be in radial direction, pointing to the center of the circle. This acceleration constantly changes the direction of the velocity to be tangent in the neighbouring point, thereby rotating the velocity vector along the circle. For a given speed , the magnitude of this geometrically caused acceleration (centripetal acceleration) is inversely proportional to the radius of the circle, and increases as the square of this speed: For a given angular velocity , the centripetal acceleration is directly proportional to radius . This is due to the dependence of velocity on the radius . Expressing centripetal acceleration vector in polar components, where is a vector from the centre of the circle to the particle with magnitude equal to this distance, and considering the orientation of the acceleration towards the center, yields As usual in rotations, the speed of a particle may be expressed as an angular speed with respect to a point at the distance as Thus This acceleration and the mass of the particle determine the necessary centripetal force, directed toward the centre of the circle, as the net force acting on this particle to keep it in this uniform circular motion. The so-called 'centrifugal force', appearing to act outward on the body, is a so-called pseudo force experienced in the frame of reference of the body in circular motion, due to the body's linear momentum, a vector tangent to the circle of motion. In a nonuniform circular motion, i.e., the speed along the curved path is changing, the acceleration has a non-zero component tangential to the curve, and is not confined to the principal normal, which directs to the center of the osculating circle, that determines the radius for the centripetal acceleration. The tangential component is given by the angular acceleration , i.e., the rate of change of the angular speed times the radius . That is, The sign of the tangential component of the acceleration is determined by the sign of the angular acceleration (), and the tangent is always directed at right angles to the radius vector. Coordinate systems In multi-dimensional Cartesian coordinate systems, acceleration is broken up into components that correspond with each dimensional axis of the coordinate system. In a two-dimensional system, where there is an x-axis and a y-axis, corresponding acceleration components are defined as The two-dimensional acceleration vector is then defined as . 
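For uniform circular motion, the two equivalent expressions for the centripetal acceleration, v²/r and ω²r, are easy to check numerically. The speed and radius in the sketch below are arbitrary illustrative values.

```python
def centripetal_acceleration(speed, radius):
    """a_c = v**2 / r, directed toward the centre of the circle."""
    return speed**2 / radius

def centripetal_from_angular(omega, radius):
    """Equivalent form a_c = omega**2 * r, using angular speed omega = v / r."""
    return omega**2 * radius

# A body moving at 20 m/s around a 50 m radius circle (illustrative numbers):
v, r = 20.0, 50.0
print(centripetal_acceleration(v, r))          # 8.0 m/s^2
print(centripetal_from_angular(v / r, r))      # 8.0 m/s^2 (same result)
```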
The magnitude of this vector is found by the distance formula as |a| = √(ax² + ay²). In three-dimensional systems, where there is an additional z-axis, the corresponding acceleration component az is defined in the same way, and the three-dimensional acceleration vector is (ax, ay, az), with its magnitude determined by |a| = √(ax² + ay² + az²). Relation to relativity Special relativity The special theory of relativity describes the behaviour of objects travelling relative to other objects at speeds approaching that of light in vacuum. Newtonian mechanics is revealed to be an approximation to reality that remains valid to great accuracy at lower speeds. As the relevant speeds increase toward the speed of light, acceleration no longer follows classical equations. As speeds approach that of light, the acceleration produced by a given force decreases, becoming infinitesimally small as light speed is approached; an object with mass can approach this speed asymptotically, but never reach it. General relativity Unless the state of motion of an object is known, it is impossible to distinguish whether an observed force is due to gravity or to acceleration—gravity and inertial acceleration have identical effects. Albert Einstein called this the equivalence principle, and said that only observers who feel no force at all—including the force of gravity—are justified in concluding that they are not accelerating. Conversions
Physical sciences
Classical mechanics
null
2457
https://en.wikipedia.org/wiki/Apoptosis
Apoptosis
Apoptosis (from ) is a form of programmed cell death that occurs in multicellular organisms and in some eukaryotic, single-celled microorganisms such as yeast. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, DNA fragmentation, and mRNA decay. The average adult human loses 50 to 70 billion cells each day due to apoptosis. For the average human child between 8 and 14 years old, each day the approximate loss is 20 to 30 billion cells. In contrast to necrosis, which is a form of traumatic cell death that results from acute cellular injury, apoptosis is a highly regulated and controlled process that confers advantages during an organism's life cycle. For example, the separation of fingers and toes in a developing human embryo occurs because cells between the digits undergo apoptosis. Unlike necrosis, apoptosis produces cell fragments called apoptotic bodies that phagocytes are able to engulf and remove before the contents of the cell can spill out onto surrounding cells and cause damage to them. Because apoptosis cannot stop once it has begun, it is a highly regulated process. Apoptosis can be initiated through one of two pathways. In the intrinsic pathway the cell kills itself because it senses cell stress, while in the extrinsic pathway the cell kills itself because of signals from other cells. Weak external signals may also activate the intrinsic pathway of apoptosis. Both pathways induce cell death by activating caspases, which are proteases, or enzymes that degrade proteins. The two pathways both activate initiator caspases, which then activate executioner caspases, which then kill the cell by degrading proteins indiscriminately. In addition to its importance as a biological phenomenon, defective apoptotic processes have been implicated in a wide variety of diseases. Excessive apoptosis causes atrophy, whereas an insufficient amount results in uncontrolled cell proliferation, such as cancer. Some factors like Fas receptors and caspases promote apoptosis, while some members of the Bcl-2 family of proteins inhibit apoptosis. Discovery and etymology German scientist Carl Vogt was first to describe the principle of apoptosis in 1842. In 1885, anatomist Walther Flemming delivered a more precise description of the process of programmed cell death. However, it was not until 1965 that the topic was resurrected. While studying tissues using electron microscopy, John Kerr at the University of Queensland was able to distinguish apoptosis from traumatic cell death. Following the publication of a paper describing the phenomenon, Kerr was invited to join Alastair Currie, as well as Andrew Wyllie, who was Currie's graduate student, at the University of Aberdeen. In 1972, the trio published a seminal article in the British Journal of Cancer. Kerr had initially used the term programmed cell necrosis, but in the article, the process of natural cell death was called apoptosis. Kerr, Wyllie and Currie credited James Cormack, a professor of Greek language at University of Aberdeen, with suggesting the term apoptosis. Kerr received the Paul Ehrlich and Ludwig Darmstaedter Prize on March 14, 2000, for his description of apoptosis. He shared the prize with Boston biologist H. Robert Horvitz. For many years, neither "apoptosis" nor "programmed cell death" was a highly cited term. 
Two discoveries brought cell death from obscurity to a major field of research: identification of the first component of the cell death control and effector mechanisms, and linkage of abnormalities in cell death to human disease, in particular cancer. This occurred in 1988 when it was shown that BCL2, the gene responsible for follicular lymphoma, encoded a protein that inhibited cell death. The 2002 Nobel Prize in Medicine was awarded to Sydney Brenner, H. Robert Horvitz and John Sulston for their work identifying genes that control apoptosis. The genes were identified by studies in the nematode C. elegans and homologues of these genes function in humans to regulate apoptosis. In Greek, apoptosis translates to the "falling off" of leaves from a tree. Cormack, professor of Greek language, reintroduced the term for medical use as it had a medical meaning for the Greeks over two thousand years before. Hippocrates used the term to mean "the falling off of the bones". Galen extended its meaning to "the dropping of the scabs". Cormack was no doubt aware of this usage when he suggested the name. Debate continues over the correct pronunciation, with opinion divided between a pronunciation with the second p silent ( ) and the second p pronounced (). In English, the p of the Greek -pt- consonant cluster is typically silent at the beginning of a word (e.g. pterodactyl, Ptolemy), but articulated when used in combining forms preceded by a vowel, as in helicopter or the orders of insects: diptera, lepidoptera, etc. In the original Kerr, Wyllie & Currie paper, there is a footnote regarding the pronunciation: We are most grateful to Professor James Cormack of the Department of Greek, University of Aberdeen, for suggesting this term. The word "apoptosis" () is used in Greek to describe the "dropping off" or "falling off" of petals from flowers, or leaves from trees. To show the derivation clearly, we propose that the stress should be on the penultimate syllable, the second half of the word being pronounced like "ptosis" (with the "p" silent), which comes from the same root "to fall", and is already used to describe the drooping of the upper eyelid. Activation mechanisms The initiation of apoptosis is tightly regulated by activation mechanisms, because once apoptosis has begun, it inevitably leads to the death of the cell. The two best-understood activation mechanisms are the intrinsic pathway (also called the mitochondrial pathway) and the extrinsic pathway. The intrinsic pathway is activated by intracellular signals generated when cells are stressed and depends on the release of proteins from the intermembrane space of mitochondria. The extrinsic pathway is activated by extracellular ligands binding to cell-surface death receptors, which leads to the formation of the death-inducing signaling complex (DISC). A cell initiates intracellular apoptotic signaling in response to a stress, which may bring about cell death. The binding of nuclear receptors by glucocorticoids, heat, radiation, nutrient deprivation, viral infection, hypoxia, increased intracellular concentration of free fatty acids and increased intracellular calcium concentration, for example, by damage to the membrane, can all trigger the release of intracellular apoptotic signals by a damaged cell. A number of cellular components, such as poly ADP ribose polymerase, may also help regulate apoptosis. Single cell fluctuations have been observed in experimental studies of stress induced apoptosis. 
Before the actual process of cell death is precipitated by enzymes, apoptotic signals must cause regulatory proteins to initiate the apoptosis pathway. This step allows those signals to cause cell death, or the process to be stopped, should the cell no longer need to die. Several proteins are involved, but two main methods of regulation have been identified: the targeting of mitochondria functionality, or directly transducing the signal via adaptor proteins to the apoptotic mechanisms. An extrinsic pathway for initiation identified in several toxin studies is an increase in calcium concentration within a cell caused by drug activity, which also can cause apoptosis via a calcium binding protease calpain. Intrinsic pathway The intrinsic pathway is also known as the mitochondrial pathway. Mitochondria are essential to multicellular life. Without them, a cell ceases to respire aerobically and quickly dies. This fact forms the basis for some apoptotic pathways. Apoptotic proteins that target mitochondria affect them in different ways. They may cause mitochondrial swelling through the formation of membrane pores, or they may increase the permeability of the mitochondrial membrane and cause apoptotic effectors to leak out. There is also a growing body of evidence indicating that nitric oxide is able to induce apoptosis by helping to dissipate the membrane potential of mitochondria and therefore make it more permeable. Nitric oxide has been implicated in initiating and inhibiting apoptosis through its possible action as a signal molecule of subsequent pathways that activate apoptosis. During apoptosis, cytochrome c is released from mitochondria through the actions of the proteins Bax and Bak. The mechanism of this release is enigmatic, but appears to stem from a multitude of Bax/Bak homo- and hetero-dimers of Bax/Bak inserted into the outer membrane. Once cytochrome c is released it binds with Apoptotic protease activating factor – 1 (Apaf-1) and ATP, which then bind to pro-caspase-9 to create a protein complex known as an apoptosome. The apoptosome cleaves the pro-caspase to its active form of caspase-9, which in turn cleaves and activates pro-caspase into the effector caspase-3. Mitochondria also release proteins known as SMACs (second mitochondria-derived activator of caspases) into the cell's cytosol following the increase in permeability of the mitochondria membranes. SMAC binds to proteins that inhibit apoptosis (IAPs) thereby deactivating them, and preventing the IAPs from arresting the process and therefore allowing apoptosis to proceed. IAP also normally suppresses the activity of a group of cysteine proteases called caspases, which carry out the degradation of the cell. Therefore, the actual degradation enzymes can be seen to be indirectly regulated by mitochondrial permeability. Extrinsic pathway Two theories of the direct initiation of apoptotic mechanisms in mammals have been suggested: the TNF-induced (tumor necrosis factor) model and the Fas-Fas ligand-mediated model, both involving receptors of the TNF receptor (TNFR) family coupled to extrinsic signals. TNF pathway TNF-alpha is a cytokine produced mainly by activated macrophages, and is the major extrinsic mediator of apoptosis. Most cells in the human body have two receptors for TNF-alpha: TNFR1 and TNFR2. 
The binding of TNF-alpha to TNFR1 has been shown to initiate the pathway that leads to caspase activation via the intermediate membrane proteins TNF receptor-associated death domain (TRADD) and Fas-associated death domain protein (FADD). cIAP1/2 can inhibit TNF-α signaling by binding to TRAF2. FLIP inhibits the activation of caspase-8. Binding of this receptor can also indirectly lead to the activation of transcription factors involved in cell survival and inflammatory responses. However, signalling through TNFR1 might also induce apoptosis in a caspase-independent manner. The link between TNF-alpha and apoptosis shows why an abnormal production of TNF-alpha plays a fundamental role in several human diseases, especially in autoimmune diseases. The TNF-alpha receptor superfamily also includes death receptors (DRs), such as DR4 and DR5. These receptors bind to the protein TRAIL and mediate apoptosis. Apoptosis is known to be one of the primary mechanisms of targeted cancer therapy. Luminescent iridium complex-peptide hybrids (IPHs) have recently been designed, which mimic TRAIL and bind to death receptors on cancer cells, thereby inducing their apoptosis. Fas pathway The fas receptor (First apoptosis signal) – (also known as Apo-1 or CD95) is a transmembrane protein of the TNF family which binds the Fas ligand (FasL). The interaction between Fas and FasL results in the formation of the death-inducing signaling complex (DISC), which contains the FADD, caspase-8 and caspase-10. In some types of cells (type I), processed caspase-8 directly activates other members of the caspase family, and triggers the execution of apoptosis of the cell. In other types of cells (type II), the Fas-DISC starts a feedback loop that spirals into increasing release of proapoptotic factors from mitochondria and the amplified activation of caspase-8. Common components Following TNF-R1 and Fas activation in mammalian cells a balance between proapoptotic (BAX, BID, BAK, or BAD) and anti-apoptotic (Bcl-Xl and Bcl-2) members of the Bcl-2 family are established. This balance is the proportion of proapoptotic homodimers that form in the outer-membrane of the mitochondrion. The proapoptotic homodimers are required to make the mitochondrial membrane permeable for the release of caspase activators such as cytochrome c and SMAC. Control of proapoptotic proteins under normal cell conditions of nonapoptotic cells is incompletely understood, but in general, Bax or Bak are activated by the activation of BH3-only proteins, part of the Bcl-2 family. Caspases Caspases play the central role in the transduction of ER apoptotic signals. Caspases are proteins that are highly conserved, cysteine-dependent aspartate-specific proteases. There are two types of caspases: initiator caspases (caspases 2, 8, 9, 10, 11, and 12) and effector caspases (caspases 3, 6, and 7). The activation of initiator caspases requires binding to specific oligomeric activator protein. Effector caspases are then activated by these active initiator caspases through proteolytic cleavage. The active effector caspases then proteolytically degrade a host of intracellular proteins to carry out the cell death program. Caspase-independent apoptotic pathway There also exists a caspase-independent apoptotic pathway that is mediated by AIF (apoptosis-inducing factor). Apoptosis model in amphibians The frog Xenopus laevis serves as an ideal model system for the study of the mechanisms of apoptosis. 
In fact, iodine and thyroxine also stimulate the spectacular apoptosis of the cells of the larval gills, tail and fins in amphibian's metamorphosis, and stimulate the evolution of their nervous system transforming the aquatic, vegetarian tadpole into the terrestrial, carnivorous frog. Negative regulators of apoptosis Negative regulation of apoptosis inhibits cell death signaling pathways, helping tumors to evade cell death and developing drug resistance. The ratio between anti-apoptotic (Bcl-2) and pro-apoptotic (Bax) proteins determines whether a cell lives or dies. Many families of proteins act as negative regulators categorized into either antiapoptotic factors, such as IAPs and Bcl-2 proteins or prosurvival factors like cFLIP, BNIP3, FADD, Akt, and NF-κB. Proteolytic caspase cascade: Killing the cell Many pathways and signals lead to apoptosis, but these converge on a single mechanism that actually causes the death of the cell. After a cell receives stimulus, it undergoes organized degradation of cellular organelles by activated proteolytic caspases. In addition to the destruction of cellular organelles, mRNA is rapidly and globally degraded by a mechanism that is not yet fully characterized. mRNA decay is triggered very early in apoptosis. A cell undergoing apoptosis shows a series of characteristic morphological changes. Early alterations include: Cell shrinkage and rounding occur because of the retraction of lamellipodia and the breakdown of the proteinaceous cytoskeleton by caspases. The cytoplasm appears dense, and the organelles appear tightly packed. Chromatin undergoes condensation into compact patches against the nuclear envelope (also known as the perinuclear envelope) in a process known as pyknosis, a hallmark of apoptosis. The nuclear envelope becomes discontinuous and the DNA inside it is fragmented in a process referred to as karyorrhexis. The nucleus breaks into several discrete chromatin bodies or nucleosomal units due to the degradation of DNA. Apoptosis progresses quickly and its products are quickly removed, making it difficult to detect or visualize on classical histology sections. During karyorrhexis, endonuclease activation leaves short DNA fragments, regularly spaced in size. These give a characteristic "laddered" appearance on agar gel after electrophoresis. Tests for DNA laddering differentiate apoptosis from ischemic or toxic cell death. Apoptotic cell disassembly Before the apoptotic cell is disposed of, there is a process of disassembly. There are three recognized steps in apoptotic cell disassembly: Membrane blebbing: The cell membrane shows irregular buds known as blebs. Initially these are smaller surface blebs. Later these can grow into larger so-called dynamic membrane blebs. An important regulator of apoptotic cell membrane blebbing is ROCK1 (rho associated coiled-coil-containing protein kinase 1). Formation of membrane protrusions: Some cell types, under specific conditions, may develop different types of long, thin extensions of the cell membrane called membrane protrusions. Three types have been described: microtubule spikes, apoptopodia (feet of death), and beaded apoptopodia (the latter having a beads-on-a-string appearance). Pannexin 1 is an important component of membrane channels involved in the formation of apoptopodia and beaded apoptopodia. Fragmentation: The cell breaks apart into multiple vesicles called apoptotic bodies, which undergo phagocytosis. The plasma membrane protrusions may help bring apoptotic bodies closer to phagocytes. 
Removal of dead cells The removal of dead cells by neighboring phagocytic cells has been termed efferocytosis. Dying cells that undergo the final stages of apoptosis display phagocytotic molecules, such as phosphatidylserine, on their cell surface. Phosphatidylserine is normally found on the inner leaflet surface of the plasma membrane, but is redistributed during apoptosis to the extracellular surface by a protein known as scramblase. These molecules mark the cell for phagocytosis by cells possessing the appropriate receptors, such as macrophages. The removal of dying cells by phagocytes occurs in an orderly manner without eliciting an inflammatory response. During apoptosis cellular RNA and DNA are separated from each other and sorted to different apoptotic bodies; separation of RNA is initiated as nucleolar segregation. Pathway knock-outs Many knock-outs have been made in the apoptosis pathways to test the function of each of the proteins. Several caspases, in addition to APAF1 and FADD, have been mutated to determine the new phenotype. In order to create a tumor necrosis factor (TNF) knockout, an exon containing the nucleotides 3704–5364 was removed from the gene. This exon encodes a portion of the mature TNF domain, as well as the leader sequence, which is a highly conserved region necessary for proper intracellular processing. TNF-/- mice develop normally and have no gross structural or morphological abnormalities. However, upon immunization with SRBC (sheep red blood cells), these mice demonstrated a deficiency in the maturation of an antibody response; they were able to generate normal levels of IgM, but could not develop specific IgG levels. Apaf-1 is the protein that turns on caspase 9 by cleavage to begin the caspase cascade that leads to apoptosis. Since a -/- mutation in the APAF-1 gene is embryonic lethal, a gene trap strategy was used in order to generate an APAF-1 -/- mouse. This assay is used to disrupt gene function by creating an intragenic gene fusion. When an APAF-1 gene trap is introduced into cells, many morphological changes occur, such as spina bifida, the persistence of interdigital webs, and open brain. In addition, after embryonic day 12.5, the brain of the embryos showed several structural changes. APAF-1 cells are protected from apoptosis stimuli such as irradiation. A BAX-1 knock-out mouse exhibits normal forebrain formation and a decreased programmed cell death in some neuronal populations and in the spinal cord, leading to an increase in motor neurons. The caspase proteins are integral parts of the apoptosis pathway, so it follows that knock-outs made have varying damaging results. A caspase 9 knock-out leads to a severe brain malformation . A caspase 8 knock-out leads to cardiac failure and thus embryonic lethality . However, with the use of cre-lox technology, a caspase 8 knock-out has been created that exhibits an increase in peripheral T cells, an impaired T cell response, and a defect in neural tube closure . These mice were found to be resistant to apoptosis mediated by CD95, TNFR, etc. but not resistant to apoptosis caused by UV irradiation, chemotherapeutic drugs, and other stimuli. Finally, a caspase 3 knock-out was characterized by ectopic cell masses in the brain and abnormal apoptotic features such as membrane blebbing or nuclear fragmentation . 
A remarkable feature of these KO mice is that they have a very restricted phenotype: Casp3, 9, APAF-1 KO mice have deformations of neural tissue and FADD and Casp 8 KO showed defective heart development, however, in both types of KO other organs developed normally and some cell types were still sensitive to apoptotic stimuli suggesting that unknown proapoptotic pathways exist. Methods for distinguishing apoptotic from necrotic cells Label-free live cell imaging, time-lapse microscopy, flow fluorocytometry, and transmission electron microscopy can be used to compare apoptotic and necrotic cells. There are also various biochemical techniques for analysis of cell surface markers (phosphatidylserine exposure versus cell permeability by flow cytometry), cellular markers such as DNA fragmentation (flow cytometry), caspase activation, Bid cleavage, and cytochrome c release (Western blotting). Supernatant screening for caspases, HMGB1, and cytokeratin 18 release can identify primary from secondary necrotic cells. However, no distinct surface or biochemical markers of necrotic cell death have been identified yet, and only negative markers are available. These include absence of apoptotic markers (caspase activation, cytochrome c release, and oligonucleosomal DNA fragmentation) and differential kinetics of cell death markers (phosphatidylserine exposure and cell membrane permeabilization). A selection of techniques that can be used to distinguish apoptosis from necroptotic cells could be found in these references. Implication in disease Defective pathways The many different types of apoptotic pathways contain a multitude of different biochemical components, many of them not yet understood. As a pathway is more or less sequential in nature, removing or modifying one component leads to an effect in another. In a living organism, this can have disastrous effects, often in the form of disease or disorder. A discussion of every disease caused by modification of the various apoptotic pathways would be impractical, but the concept overlying each one is the same: The normal functioning of the pathway has been disrupted in such a way as to impair the ability of the cell to undergo normal apoptosis. This results in a cell that lives past its "use-by date" and is able to replicate and pass on any faulty machinery to its progeny, increasing the likelihood of the cell's becoming cancerous or diseased. A recently described example of this concept in action can be seen in the development of a lung cancer called NCI-H460. The X-linked inhibitor of apoptosis protein (XIAP) is overexpressed in cells of the H460 cell line. XIAPs bind to the processed form of caspase-9 and suppress the activity of apoptotic activator cytochrome c, therefore overexpression leads to a decrease in the number of proapoptotic agonists. As a consequence, the balance of anti-apoptotic and proapoptotic effectors is upset in favour of the former, and the damaged cells continue to replicate despite being directed to die. Defects in regulation of apoptosis in cancer cells occur often at the level of control of transcription factors. As a particular example, defects in molecules that control transcription factor NF-κB in cancer change the mode of transcriptional regulation and the response to apoptotic signals, to curtail dependence on the tissue that the cell belongs. This degree of independence from external survival signals, can enable cancer metastasis. 
Dysregulation of p53 The tumor-suppressor protein p53 accumulates when DNA is damaged due to a chain of biochemical factors. Part of this pathway includes alpha-interferon and beta-interferon, which induce transcription of the p53 gene, resulting in the increase of p53 protein level and enhancement of cancer cell-apoptosis. p53 prevents the cell from replicating by stopping the cell cycle at G1, or interphase, to give the cell time to repair; however, it will induce apoptosis if damage is extensive and repair efforts fail. Any disruption to the regulation of the p53 or interferon genes will result in impaired apoptosis and the possible formation of tumors. Inhibition Inhibition of apoptosis can result in a number of cancers, inflammatory diseases, and viral infections. It was originally believed that the associated accumulation of cells was due to an increase in cellular proliferation, but it is now known that it is also due to a decrease in cell death. The most common of these diseases is cancer, the disease of excessive cellular proliferation, which is often characterized by an overexpression of IAP family members. As a result, the malignant cells experience an abnormal response to apoptosis induction: Cycle-regulating genes (such as p53, ras or c-myc) are mutated or inactivated in diseased cells, and further genes (such as bcl-2) also modify their expression in tumors. Some apoptotic factors are vital during mitochondrial respiration e.g. cytochrome C. Pathological inactivation of apoptosis in cancer cells is correlated with frequent respiratory metabolic shifts toward glycolysis (an observation known as the "Warburg hypothesis". HeLa cell Apoptosis in HeLa cells is inhibited by proteins produced by the cell; these inhibitory proteins target retinoblastoma tumor-suppressing proteins. These tumor-suppressing proteins regulate the cell cycle, but are rendered inactive when bound to an inhibitory protein. HPV E6 and E7 are inhibitory proteins expressed by the human papillomavirus, HPV being responsible for the formation of the cervical tumor from which HeLa cells are derived. HPV E6 causes p53, which regulates the cell cycle, to become inactive. HPV E7 binds to retinoblastoma tumor suppressing proteins and limits its ability to control cell division. These two inhibitory proteins are partially responsible for HeLa cells' immortality by inhibiting apoptosis to occur. Treatments The main method of treatment for potential death from signaling-related diseases involves either increasing or decreasing the susceptibility of apoptosis in diseased cells, depending on whether the disease is caused by either the inhibition of or excess apoptosis. For instance, treatments aim to restore apoptosis to treat diseases with deficient cell death and to increase the apoptotic threshold to treat diseases involved with excessive cell death. To stimulate apoptosis, one can increase the number of death receptor ligands (such as TNF or TRAIL), antagonize the anti-apoptotic Bcl-2 pathway, or introduce Smac mimetics to inhibit the inhibitor (IAPs). The addition of agents such as Herceptin, Iressa, or Gleevec works to stop cells from cycling and causes apoptosis activation by blocking growth and survival signaling further upstream. Finally, adding p53-MDM2 complexes displaces p53 and activates the p53 pathway, leading to cell cycle arrest and apoptosis. Many different methods can be used either to stimulate or to inhibit apoptosis in various places along the death signaling pathway. 
Apoptosis is a multi-step, multi-pathway cell-death programme that is inherent in every cell of the body. In cancer, the apoptosis cell-division ratio is altered. Cancer treatment by chemotherapy and irradiation kills target cells primarily by inducing apoptosis. Hyperactive apoptosis On the other hand, loss of control of cell death (resulting in excess apoptosis) can lead to neurodegenerative diseases, hematologic diseases, and tissue damage. Neurons that rely on mitochondrial respiration undergo apoptosis in neurodegenerative diseases such as Alzheimer's and Parkinson's (an observation known as the "Inverse Warburg hypothesis"). Moreover, there is an inverse epidemiological comorbidity between neurodegenerative diseases and cancer. The progression of HIV is directly linked to excess, unregulated apoptosis. In a healthy individual, the number of CD4+ lymphocytes is in balance with the cells generated by the bone marrow; however, in HIV-positive patients, this balance is lost due to an inability of the bone marrow to regenerate CD4+ cells. In the case of HIV, CD4+ lymphocytes die at an accelerated rate through uncontrolled apoptosis when stimulated. At the molecular level, hyperactive apoptosis can be caused by defects in signaling pathways that regulate the Bcl-2 family proteins. Increased expression of apoptotic proteins such as BIM, or their decreased proteolysis, leads to cell death and can cause a number of pathologies, depending on the cells where excessive activity of BIM occurs. Cancer cells can escape apoptosis through mechanisms that suppress BIM expression or by increased proteolysis of BIM. Treatments Treatments aiming to inhibit apoptosis work by blocking specific caspases. Finally, the Akt protein kinase promotes cell survival through two pathways. Akt phosphorylates and inhibits Bad (a Bcl-2 family member), causing Bad to interact with the 14-3-3 scaffold, resulting in Bcl dissociation and thus cell survival. Akt also activates IKKα, which leads to NF-κB activation and cell survival. Active NF-κB induces the expression of anti-apoptotic genes such as Bcl-2, resulting in inhibition of apoptosis. NF-κB has been found to play both an antiapoptotic role and a proapoptotic role depending on the stimuli utilized and the cell type. HIV progression The progression of the human immunodeficiency virus infection into AIDS is due primarily to the depletion of CD4+ T-helper lymphocytes in a manner that is too rapid for the body's bone marrow to replenish the cells, leading to a compromised immune system. One of the mechanisms by which T-helper cells are depleted is apoptosis, which results from a series of biochemical pathways: HIV enzymes deactivate anti-apoptotic Bcl-2. This does not directly cause cell death but primes the cell for apoptosis should the appropriate signal be received. In parallel, these enzymes activate proapoptotic procaspase-8, which does directly activate the mitochondrial events of apoptosis. HIV may increase the level of cellular proteins that prompt Fas-mediated apoptosis. HIV proteins decrease the amount of CD4 glycoprotein marker present on the cell membrane. Released viral particles and proteins present in extracellular fluid are able to induce apoptosis in nearby "bystander" T helper cells. HIV decreases the production of molecules involved in marking the cell for apoptosis, giving the virus time to replicate and continue releasing apoptotic agents and virions into the surrounding tissue. The infected CD4+ cell may also receive the death signal from a cytotoxic T cell.
Cells may also die as direct consequences of viral infections. HIV-1 expression induces tubular cell G2/M arrest and apoptosis. The progression from HIV to AIDS is not immediate or even necessarily rapid; HIV's cytotoxic activity toward CD4+ lymphocytes is classified as AIDS once a given patient's CD4+ cell count falls below 200 cells per microliter. Researchers from Kumamoto University in Japan have developed a new method to eradicate HIV in viral reservoir cells, named "Lock-in and apoptosis." Using the synthesized compound Heptanoylphosphatidyl L-Inositol Pentakisphosphate (or L-Hippo) to bind strongly to the HIV protein PR55Gag, they were able to suppress viral budding. By suppressing viral budding, the researchers were able to trap the virus in the cell and allow the cell to undergo apoptosis (natural cell death). Associate Professor Mikako Fujita has stated that the approach is not yet available to HIV patients because the research team has to conduct further research on combining existing drug therapy with this "Lock-in and apoptosis" approach to lead to complete recovery from HIV. Viral infection Viral induction of apoptosis occurs when one or several cells of a living organism are infected with a virus, leading to cell death. Cell death in organisms is necessary for the normal development of cells and for cell-cycle maturation. It is also important in maintaining the regular functions and activities of cells. Viruses can trigger apoptosis of infected cells via a range of mechanisms, including receptor binding, activation of protein kinase R (PKR), interaction with p53, and expression of viral proteins coupled to MHC proteins on the surface of the infected cell, which allows recognition by cells of the immune system (such as natural killer and cytotoxic T cells) that then induce the infected cell to undergo apoptosis. Canine distemper virus (CDV) is known to cause apoptosis in the central nervous system and lymphoid tissue of infected dogs in vivo and in vitro. Apoptosis caused by CDV is typically induced via the extrinsic pathway, which activates caspases that disrupt cellular function and eventually lead to the cell's death. In normal cells, CDV activates caspase-8 first, which works as the initiator protein, followed by the executioner protein caspase-3. However, apoptosis induced by CDV in HeLa cells does not involve the initiator protein caspase-8. HeLa cell apoptosis caused by CDV follows a different mechanism than that in Vero cell lines. This change in the caspase cascade suggests CDV induces apoptosis via the intrinsic pathway, excluding the need for the initiator caspase-8. The executioner protein is instead activated by the internal stimuli caused by viral infection, not by a caspase cascade. The Oropouche virus (OROV) is found in the family Bunyaviridae. The study of apoptosis brought on by Bunyaviridae was initiated in 1996, when it was observed that the La Crosse virus induced apoptosis in the kidney cells of baby hamsters and in the brains of baby mice. OROV is transmitted between humans by the biting midge (Culicoides paraensis). It is referred to as a zoonotic arbovirus and causes a febrile illness, characterized by the sudden onset of fever, known as Oropouche fever. The Oropouche virus also causes disruption in cultured cells – cells that are cultivated in distinct and specific conditions. An example of this can be seen in HeLa cells, whereby the cells begin to degenerate shortly after they are infected. 
With the use of gel electrophoresis, it can be observed that OROV causes DNA fragmentation in HeLa cells. The fragmentation can be interpreted by counting, measuring, and analyzing the cells of the sub-G1 cell population. When HeLa cells are infected with OROV, cytochrome c is released from the mitochondrial membrane into the cytosol of the cells. This type of interaction shows that apoptosis is activated via an intrinsic pathway. For OROV to induce apoptosis, viral uncoating, viral internalization, and replication are necessary. In some viral infections, apoptosis is activated by extracellular stimuli; however, studies have demonstrated that OROV infection activates apoptosis through intracellular stimuli involving the mitochondria. Many viruses encode proteins that can inhibit apoptosis. Several viruses encode viral homologs of Bcl-2. These homologs can inhibit proapoptotic proteins such as BAX and BAK, which are essential for the activation of apoptosis. Examples of viral Bcl-2 proteins include the Epstein-Barr virus BHRF1 protein and the adenovirus E1B 19K protein. Some viruses express caspase inhibitors, such as the CrmA protein of cowpox virus, while a number of viruses can block the effects of TNF and Fas. For example, the M-T2 protein of myxoma viruses can bind TNF, preventing it from binding the TNF receptor and inducing a response. Furthermore, many viruses express p53 inhibitors that can bind p53 and inhibit its transcriptional transactivation activity. As a consequence, p53 cannot induce apoptosis, since it cannot induce the expression of proapoptotic proteins. The adenovirus E1B-55K protein and the hepatitis B virus HBx protein are examples of viral proteins that can perform such a function. Viruses can remain intact during apoptosis, particularly in the latter stages of infection. They can be exported in the apoptotic bodies that pinch off from the surface of the dying cell, and the fact that they are engulfed by phagocytes prevents the initiation of a host response. This favours the spread of the virus. Prions can cause apoptosis in neurons. Plants Programmed cell death in plants has a number of molecular similarities to animal apoptosis, but it also has differences, notable ones being the presence of a cell wall and the lack of an immune system that removes the pieces of the dead cell. Instead of an immune response, the dying cell synthesizes substances to break itself down and places them in a vacuole that ruptures as the cell dies. Additionally, plants do not contain phagocytic cells, which are essential in the process of breaking down and removing apoptotic bodies. Whether this whole process resembles animal apoptosis closely enough to warrant using the name apoptosis (as opposed to the more general programmed cell death) is unclear. Caspase-independent apoptosis The characterization of the caspases allowed the development of caspase inhibitors, which can be used to determine whether a cellular process involves active caspases. Using these inhibitors, it was discovered that cells can die while displaying a morphology similar to apoptosis without caspase activation. Later studies linked this phenomenon to the release of AIF (apoptosis-inducing factor) from the mitochondria and its translocation into the nucleus mediated by its NLS (nuclear localization signal). Inside the mitochondria, AIF is anchored to the inner membrane. 
In order to be released, the protein is cleaved by a calcium-dependent calpain protease.
Biology and health sciences
Cell processes
Biology
2494
https://en.wikipedia.org/wiki/Aurochs
Aurochs
The aurochs (Bos primigenius) ( or , plural aurochs or aurochsen) is an extinct species of bovine, considered to be the wild ancestor of modern domestic cattle. With a shoulder height of up to in bulls and in cows, it was one of the largest herbivores in the Holocene; it had massive elongated and broad horns that reached in length. The aurochs was part of the Pleistocene megafauna. It probably evolved in Asia and migrated west and north during warm interglacial periods. The oldest-known aurochs fossils date to the Middle Pleistocene. The species had an expansive range spanning from Western Europe and North Africa to the Indian subcontinent and East Asia. The distribution of the aurochs progressively contracted during the Holocene due to habitat loss and hunting, with the last known individual dying in the Jaktorów forest in Poland in 1627. There is a long history of interaction between aurochs and humans, including archaic humans like Neanderthals. The aurochs is depicted in Paleolithic cave paintings, Neolithic petroglyphs, Ancient Egyptian reliefs and Bronze Age figurines. It symbolised power, sexual potency and prowess in religions of the ancient Near East. Its horns were used in votive offerings, as trophies and drinking horns. Two aurochs domestication events occurred during the Neolithic Revolution. One gave rise to the domestic taurine cattle (Bos taurus) in the Fertile Crescent in the Near East that was introduced to Europe via the Balkans and the coast of the Mediterranean Sea. Hybridisation between aurochs and early domestic cattle occurred during the early Holocene. Domestication of the Indian aurochs led to the zebu cattle (Bos indicus) that hybridised with early taurine cattle in the Near East about 4,000 years ago. Some modern cattle breeds exhibit features reminiscent of the aurochs, such as the dark colour and light eel stripe along the back of bulls, the lighter colour of cows, or an aurochs-like horn shape. Etymology Both "aur" and "ur" are Germanic or Celtic words meaning "wild ox". In Old High German, this word was compounded with ohso ('ox') to ūrohso, which became the early modern Aurochs. The Latin word "urus" was used for wild ox from the Gallic Wars onwards. The use of the plural form in English is a direct parallel of the German plural Ochsen and recreates the same distinction by analogy as English singular ox and plural oxen, although aurochs may stand for both the singular and the plural term; both are attested. Taxonomy and evolution The scientific name Bos taurus was introduced by Carl Linnaeus in 1758 for feral cattle in Poland. The scientific name Bos primigenius was proposed for the aurochs by Ludwig Heinrich Bojanus who described the skeletal differences between the aurochs and domestic cattle in 1825, published in 1827. The name Bos namadicus was used by Hugh Falconer in 1859 for cattle fossils found in Nerbudda deposits. Bos primigenius mauritanicus was coined by Philippe Thomas in 1881 who described fossils found in deposits near Oued Seguen west of Constantine, Algeria. In 2003, the International Commission on Zoological Nomenclature placed Bos primigenius on the Official List of Specific Names in Zoology and thereby recognized the validity of this name for a wild species. Subspecies Three aurochs subspecies have traditionally been recognised to have existed in historical times: The Eurasian aurochs (B. p. primigenius) was part of the Pleistocene megafauna in Eurasia and survived until the 17th century in Eastern Europe. The Indian aurochs (B. p. 
namadicus) lived on the Indian subcontinent. The North African aurochs (B. p. mauritanicus) lived north of the Sahara. This subspecies has also been called B. p. opisthonomus. In the 21st century, Chinese geneticists published mitochondrial DNA evidence indicating that Eurasian aurochs populations from northern China were genetically isolated for large stretches of the Pleistocene, and as a result distinctive enough to be considered a separate subspecies, the East Asian aurochs (B. p. sinensis), even if the animals were not morphologically distinct. At least two dwarf subspecies of aurochs developed on Mediterranean islands as a result of sea level changes during the Pleistocene: B. p. siciliae on the Italian island of Sicily, and B. p. thrinacius on the Greek island of Kythira. Evolution Calibrations using fossils of 16 Bovidae species indicate that the Bovini tribe evolved about . The Bos and Bison genetic lineages are estimated to have genetically diverged from the Bovini about . The cladogram of the Bovini tribe, based on analysis of nuclear and mitochondrial genomes, shows the phylogenetic relationships of the aurochs. The cold Pliocene climate caused an extension of open grassland, which enabled the evolution of large grazers. The origin of the aurochs is unclear, with authors suggesting either an African or Asian origin for the species. Bos acutifrons is considered to be a possible ancestor of the aurochs; a fossil skull of this species excavated in the Sivalik Hills in India dates to the Early Pleistocene, about . An aurochs skull excavated in Tunisia's Kef Governorate from early Middle Pleistocene strata dating about is the oldest well-dated fossil specimen to date. The authors of the study proposed that Bos might have evolved in Africa and migrated to Eurasia during the Middle Pleistocene. Middle Pleistocene aurochs fossils were also excavated in a Saharan erg in the Hoggar Mountains. Fossils of the Indian subspecies (Bos primigenius namadicus) were excavated in alluvial deposits in South India dating to the Middle Pleistocene. Remains of aurochs are common in Late Pleistocene sites across the Indian subcontinent. The earliest fossils in Europe date to the Middle Pleistocene. One site widely suggested historically to represent the first appearance of aurochs in Europe was the Notarchirico site in southern Italy, dating to around 600,000 years ago; however, a 2024 re-examination of the site found that the presence of aurochs at the locality was unsupported, with the oldest records of aurochs now placed at the Ponte Molle site in central Italy, dating to around 550,000–450,000 years ago. Aurochs were present in Britain by Marine Isotope Stage 11, about 400,000 years ago. The earliest remains of aurochs in East Asia are uncertain, but may date to the late Middle Pleistocene. Late Pleistocene aurochs fossils were found at Affad 23 in Sudan, dating to 50,000 years ago, when the climate in this region was more humid than during the African humid period. Following the most recent deglaciation, the range of the aurochs expanded into Denmark and southern Sweden at the beginning of the Holocene, around 12,000–11,000 years ago. Description According to a 16th-century description by Sigismund von Herberstein, the aurochs was pitch-black with a grey streak along the back; his wood carving made in 1556 was based on a culled aurochs, which he had received in Mazovia. 
In 1827, Charles Hamilton Smith published an image of an aurochs that was based on an oil painting that he had purchased from a merchant in Augsburg, which is thought to have been made in the early 16th century. This painting is thought to have shown an aurochs, although some authors suggested it may have shown a hybrid between an aurochs and domestic cattle, or a Polish steer. Contemporary reconstructions of the aurochs are based on skeletons and the information derived from contemporaneous artistic depictions and historic descriptions of the animal. Coat colour Remains of aurochs hair were not known until the early 1980s. Depictions show that the North African aurochs may have had a light saddle marking on its back. Calves were probably born with a chestnut colour, and young bulls changed to black with a white eel stripe running down the spine, while cows retained a reddish-brown colour. Both sexes had a light-coloured muzzle, but evidence for variation in coat colour does not exist. Egyptian grave paintings show cattle with a reddish-brown coat colour in both sexes, with a light saddle, but the horn shape of these suggest that they may depict domesticated cattle. Many primitive cattle breeds, particularly those from Southern Europe, display similar coat colours to the aurochs, including the black colour in bulls with a light eel stripe, a pale mouth, and similar sexual dimorphism in colour. A feature often attributed to the aurochs is blond forehead hairs. According to historical descriptions of the aurochs, it had long and curly forehead hair, but none mentions a certain colour. Although the colour is present in a variety of primitive cattle breeds, it is probably a discolouration that appeared after domestication. Body shape The proportions and body shape of the aurochs were strikingly different from many modern cattle breeds. For example, the legs were considerably longer and more slender, resulting in a shoulder height that nearly equalled the trunk length. The skull, carrying the large horns, was substantially larger and more elongated than in most cattle breeds. As in other wild bovines, the body shape of the aurochs was athletic, and especially in bulls, showed a strongly expressed neck and shoulder musculature. Therefore, the fore hand was larger than the rear, similar to the wisent, but unlike many domesticated cattle. Even in carrying cows, the udder was small and hardly visible from the side; this feature is equal to that of other wild bovines. Size The aurochs was one of the largest herbivores in Holocene Europe. The size of an aurochs appears to have varied by region, with larger specimens in northern Europe than farther south. Aurochs in Denmark and Germany ranged in height at the shoulders between in bulls and in cows, while aurochs bulls in Hungary reached . The African aurochs was similar in size to the European aurochs in the Pleistocene, but declined in size during the transition to the Holocene; it may have also varied in size geographically. The body mass of aurochs appears to have shown some variability. Some individuals reached around , whereas those from the late Middle Pleistocene are estimated to have weighed up to . The aurochs exhibited considerable sexual dimorphism in the size of males and females. Horns The horns were massive, reaching in length and between in diameter. 
Its horns grew from the skull at a 60-degree angle to the muzzle, facing forwards, and were curved in three directions, namely upwards and outwards at the base, then swinging forwards and inwards, then inwards and upwards. The curvature of bull horns was more strongly expressed than that of cow horns. The basal circumference of horn cores reached in the largest Chinese specimen and in a French specimen. Some cattle breeds still show horn shapes similar to that of the aurochs, such as the Spanish fighting bull, and occasionally so do individuals of derived breeds. Genetics A well-preserved aurochs bone yielded sufficient mitochondrial DNA for a sequence analysis in 2010, which showed that its mitochondrial genome consists of 16,338 base pairs. Further studies using the aurochs whole genome sequence have identified candidate microRNA-regulated domestication genes. A comprehensive sequence analysis of Late Pleistocene and Holocene aurochs published in 2024 suggested that Indian aurochs (represented by modern zebu cattle) were the most genetically divergent aurochs population, having diverged from other aurochs around 300,000–166,000 years ago, with other aurochs populations spanning Europe and the Middle East to East Asia sharing much more recent common ancestry within the last 100,000 years. Late Pleistocene European aurochs were found to have a small (~3%) ancestry component from a divergent lineage that split prior to the divergence of Indian and other aurochs, suggested to be residual from earlier European aurochs populations. Towards the end of the Late Pleistocene, European aurochs experienced considerable gene flow from Middle Eastern aurochs. European Holocene aurochs primarily descend from those that were present in the Iberian Peninsula during the Last Glacial Maximum, with the Holocene also seeing mixing between previously isolated aurochs populations. Distribution and habitat The aurochs was widely distributed in North Africa, Mesopotamia, and throughout Europe, to the Pontic–Caspian steppe, Caucasus and Western Siberia in the east and to the Gulf of Finland and Lake Ladoga in the north. Fossil horns attributed to the aurochs were found in Late Pleistocene deposits at an elevation of on the eastern margin of the Tibetan plateau close to the Heihe River in Zoigê County that date to about 26,620±600 years BP. Most fossils in China were found in plains below in Heilongjiang, Yushu, Jilin, northeastern Manchuria, Inner Mongolia, near Beijing, Yangyuan County in Hebei province, Datong and Dingcun in Shanxi province, Huan County in Gansu and in Guizhou provinces. Ancient DNA in aurochs fossils found in Northeast China indicates that the aurochs survived in the region until at least 5,000 years BP. Fossils were also excavated on the Korean Peninsula and in the Japanese archipelago. During warm interglacial periods, the aurochs was widespread across Europe, but during glacial periods it retreated into southern refugia in the Iberian, Italian and Balkan peninsulas. Landscapes in Europe probably consisted of dense forests throughout much of the last few thousand years. The aurochs is likely to have used riparian forests and wetlands along lakes. Analysis of specimens found in Britain suggests that aurochs preferred inhabiting low-lying, relatively flat landscapes. Pollen of mostly small shrubs found in fossiliferous sediments with aurochs remains in China indicates that it preferred temperate grassy plains or grasslands bordering woodlands. It may have also lived in open grasslands. 
In the warm Atlantic period of the Holocene, it was restricted to remaining open country and forest margins, where competition with livestock and humans gradually increased, leading to a progressive decline of the aurochs. Behaviour and ecology Aurochs formed small herds mainly in winter, but typically lived singly or in smaller groups during the summer. If aurochs had social behaviour similar to their descendants, social status would have been gained through displays and fights, in which both cows and bulls engaged. Since it had a hypsodont jaw, the aurochs has been suggested to have been a grazer, with a food selection very similar to that of domesticated cattle, feeding on grass, twigs and acorns. Mesowear analysis of Holocene Danish aurochs premolar teeth indicates that it changed from an abrasion-dominated grazer in the Danish Preboreal to a mixed feeder in the Boreal, Atlantic and Subboreal periods. Dental microwear and mesowear analysis of specimens from the Pleistocene of Britain has found these aurochs had mixed feeding to browsing diets, rather than being strict grazers. Mating season was in September, and calves were born in spring. Rutting bulls had violent fights, and evidence from the Jaktorów forest shows that they were fully capable of mortally wounding one another. In autumn, aurochs fed up for the winter, gaining weight and developing a shinier coat than during the rest of the year. Calves stayed with their mothers until they were strong enough to join and keep up with the herd on the feeding grounds. Aurochs calves would have been vulnerable to predation by grey wolves (Canis lupus) and brown bears (Ursus arctos), while the immense size and strength of healthy adult aurochs meant they likely did not need to fear most predators. According to historical descriptions, the aurochs was swift despite its build, could be very aggressive if provoked, and was not generally fearful of humans. In Middle Pleistocene Europe, aurochs were likely preyed upon by the "European jaguar" Panthera gombaszoegensis and the scimitar-toothed cat (Homotherium latidens), and evidence for the consumption of aurochs by cave hyenas (Crocuta (Crocuta) spelaea) has been found from Late Pleistocene Italy. The lion (Panthera leo), tiger (Panthera tigris) and wolf are thought to have been the aurochs' main predators during the Holocene. During interglacial periods in the Middle Pleistocene and early Late Pleistocene in Europe, the aurochs occurred alongside other large temperate-adapted megafauna species, including the straight-tusked elephant (Palaeoloxodon antiquus), Merck's rhinoceros (Stephanorhinus kirchbergensis), the narrow-nosed rhinoceros (Stephanorhinus hemitoechus) and the Irish elk/giant deer (Megaloceros giganteus). Relationship with humans In Asia Acheulean layers in Hunasagi on India's southern Deccan Plateau yielded aurochs bones with cut marks. An aurochs bone with cut marks made by flint was found in a Middle Paleolithic layer at the Nesher Ramla Homo site in Israel; it was dated to Marine Isotope Stage 5, about 120,000 years ago. An archaeological excavation in Israel found traces of a feast held by the Natufian culture around 12,000 years BP, in which three aurochs were eaten. This appears to be an uncommon occurrence in the culture and was held in conjunction with the burial of an older woman, presumably of some social status. Petroglyphs depicting aurochs in Gobustan Rock Art in Azerbaijan date to the Upper Paleolithic to Neolithic periods. 
Aurochs bones and skulls found at the settlements of Mureybet, Hallan Çemi and Çayönü indicate that people stored and shared food in the Pre-Pottery Neolithic B culture. Remains of an aurochs were also found in a necropolis in Sidon, Lebanon, dating to around 3,700 years BP; the aurochs was buried together with numerous animals, a few human bones and foods. Seals dating to the Indus Valley civilisation found in Harappa and Mohenjo-daro show an animal with curved horns like an aurochs. Aurochs figurines were made by the Maykop culture in the Western Caucasus. The aurochs is denoted in the Akkadian words rīmu and rēmu, both used in the context of hunts by rulers such as Naram-Sin of Akkad, Tiglath-Pileser I and Shalmaneser III; in Mesopotamia, it symbolised power and sexual potency, was an epithet of the gods Enlil and Shamash, denoted prowess as an epithet of the king Sennacherib and the hero Gilgamesh. Wild bulls are frequently referred to in Ugaritic texts as hunted by and sacrificed to the god Baal. An aurochs is depicted on Babylon's Ishtar Gate, constructed in the 6th century BC. In Africa Petroglyphs depicting aurochs found in Qurta in the upper Nile valley were dated to the Late Pleistocene about 19–15,000 years BP using luminescence dating and are the oldest engravings found to date in Africa. Aurochs are part of hunting scenes in reliefs in a tomb at Thebes, Egypt dating to the 20th century BC, and in the mortuary temple of Ramesses III at Medinet Habu dating to around 1175 BC. The latter is the youngest depiction of aurochs in Ancient Egyptian art to date. In Europe Evidence has been found for the butchery of aurochs by archaic humans in Europe during the Middle Palaeolithic, such as the Biache-Saint-Vaast site in northern France dating to around 240,000 years ago, where bones of aurochs have been found burnt by fire and with cut marks, thought to have been created by Neanderthals. At the late Middle Palaeolithic Cueva Des-Cubierta site in Spain, Neanderthals are proposed to have kept the skulls of aurochs as hunting trophies. The aurochs is widely represented in Upper Paleolithic cave paintings in the Chauvet and Lascaux caves in southern France dating to 36,000 and 21,000 years BP, respectively. Two Paleolithic rock engravings in the Calabrian Romito Cave depict an aurochs. Palaeolithic engravings showing aurochs were also found in the Grotta del Genovese on the Italian island of Levanzo. Upper Paleolithic rock engravings and paintings depicting the aurochs were also found in caves on the Iberian Peninsula dating from the Gravettian to the Magdalenian cultures. Aurochs bones with chop and cut marks were found at various Mesolithic hunting and butchering sites in France, Luxemburg, Germany, the Netherlands, England and Denmark. Aurochs bones were also found in Mesolithic settlements by the Narva and Emajõgi rivers in Estonia. Aurochs and human bones were uncovered from pits and burnt mounds at several Neolithic sites in England. A cup found in the Greek site of Vaphio shows a hunting scene, in which people try to capture an aurochs. One of the bulls throws one hunter on the ground while attacking the second with its horns. The cup seems to date to Mycenaean Greece. Greeks and Paeonians hunted aurochs and used their huge horns as trophies, cups for wine, and offerings to the gods and heroes. 
The ox mentioned by Samus, Philippus of Thessalonica and Antipater as killed by Philip V of Macedon on the foothills of mountain Orvilos, was actually an aurochs; Philip offered the horns, which were long and the skin to a temple of Hercules. The aurochs was described in Julius Caesar's Commentarii de Bello Gallico. Aurochs were occasionally captured and exhibited in venatio shows in Roman amphitheatres such as the Colosseum. Aurochs horns were often used by Romans as hunting horns. In the , Sigurd kills four aurochs. During the Middle Ages, aurochs horns were used as drinking horns including the horn of the last bull; many aurochs horn sheaths are preserved today. The aurochs drinking horn at Corpus Christi College, Cambridge was engraved with the college's coat of arms in the 17th century. An aurochs head with a star between its horns and Christian iconographic elements represents the official coat of arms of Moldavia perpetuated for centuries. Aurochs were hunted with arrows, nets and hunting dogs, and its hair on the forehead was cut from the living animal; belts were made out of this hair and believed to increase the fertility of women. When the aurochs was slaughtered, the os cordis was extracted from the heart; this bone contributed to the mystique and magical powers that were attributed to it. In eastern Europe, the aurochs has left traces in expressions like "behaving like an aurochs" for a drunken person behaving badly, and "a bloke like an aurochs" for big and strong people. Domestication The earliest-known domestication of the aurochs dates to the Neolithic Revolution in the Fertile Crescent, where cattle hunted and kept by Neolithic farmers gradually decreased in size between 9800 and 7500 BC. Aurochs bones found at Mureybet and Göbekli Tepe are larger in size than cattle bones from later Neolithic settlements in northern Syria like Dja'de el-Mughara and Tell Halula. In Late Neolithic sites of northern Iraq and western Iran dating to the sixth millennium BC, cattle remains are also smaller but more frequent, indicating that domesticated cattle were imported during the Halaf culture from the central Fertile Crescent region. Results of genetic research indicate that the modern taurine cattle (Bos taurus) arose from 80 aurochs tamed in southeastern Anatolia and northern Syria about 10,500 years ago. Taurine cattle spread into the Balkans and northern Italy along the Danube River and the coast of the Mediterranean Sea. Hybridisation between male aurochs and early domestic cattle occurred in central Europe between 9500 and 1000 BC. Analyses of mitochondrial DNA sequences of Italian aurochs specimens dated to 17–7,000 years ago and 51 modern cattle breeds revealed some degree of introgression of aurochs genes into south European cattle, indicating that female aurochs had contact with free-ranging domestic cattle. Cattle bones of various sizes found at a Chalcolithic settlement in the Kutná Hora District provide further evidence for hybridisation of aurochs and domestic cattle between 3000 and 2800 BC in the Bohemian region. Whole genome sequencing of a 6,750-year-old aurochs bone found in England was compared with genome sequence data of 81 cattle and single-nucleotide polymorphism data of 1,225 cattle. Results revealed that British and Irish cattle breeds share some genetic variants with the aurochs specimen; early herders in Britain might have been responsible for the local gene flow from aurochs into the ancestors of British and Irish cattle. 
The Murboden cattle breed also exhibits sporadic introgression of female European aurochs into domestic cattle in the Alps. Domestic cattle continued to diminish in both body and horn size until the Middle Ages. Comparative analysis of single-nucleotide polymorphisms and shared alleles revealed admixture between East Asian aurochs and introduced taurine cattle in ancient China, for example at Shimao. This suggested the incorporation of local aurochs into domestic cattle as far back as 4,000 years BP, either through spontaneous introgression, or the capture of different aurochs groups to supplement domestic stocks. The same study detected derived alleles shared by aurochs and modern taurine cattle in East Asia, especially among Tibetan breeds. Introgression with local aurochs could have facilitated rapid adaptation to new environments. The Indian aurochs is thought to have been domesticated 10,000–8,000 years ago. Aurochs fossils found at the Neolithic site of Mehrgarh in Pakistan are dated to around 8,000 years BP and represent some of the earliest evidence for its domestication on the Indian subcontinent. Female Indian aurochs contributed to the gene pool of zebu (Bos indicus) between 5,500 and 4,000 years BP during the expansion of pastoralism in northern India. The zebu initially spread eastwards to Southeast Asia. Hybridisation between zebu and early taurine cattle occurred in the Near East after 4,000 years BP coinciding with the drought period during the 4.2-kiloyear event. The zebu was introduced to East Africa about 3,500–2,500 years ago, and reached Mongolia in the 13th and 14th centuries. A third domestication event thought to have occurred in Egypt's Western Desert is not supported by results of an analysis of genetic admixture, introgression and migration patterns of 3,196 domestic cattle representing 180 populations. However, the same study supported extensive hybridization between taurine cattle in Africa, arrived from the Near East after domestication, and local wild African aurochs prior to the entry of the zebu in Africa. The zebu was introduced through Ancient Egypt and started to spread comprehensively through West Africa in the last 1,400 years, along with Arabic cultural influences. Most modern African cattle breeds are hybridized to a variable extent with Indicine cattle, with introgression being most reduced in areas of West Africa where the tse-tse fly is present. Extinction The Indian aurochs (B. p. namadicus) became extinct sometime during the Holocene period, likely due to habitat loss caused by expanding pastoralism and interbreeding with the domestic zebu. The timing of extinction of aurochs in the Indian subcontinent is unclear, due to difficulty distinguishing aurochs remains from those of domestic cattle, with a 2021 review suggesting remains from Mehrgarh, Pakistan, dating to around 8,000 years ago "might constitute the only dated and reliably identified evidence" of Holocene Indian aurochs. The extinction probably predates the historical period, due to a lack of references to the aurochs in Indian texts. A 2014 review suggested that the youngest remains of African aurochs (B. p. mauritanicus) dated to around 6,000 years Before Present (BP), though some authors suggest that it may have survived until at least to the Roman period, as indicated by remains found in Buto and Faiyum in the Nile Delta. In China, aurochs persisted until at least 3,600 BP. The Eurasian aurochs (B. p. 
primigenius) was present in southern Sweden during the Holocene climatic optimum until at least 7,800 years BP. In Denmark, the first-known local extinction of the aurochs occurred after the sea level rise on the newly formed Danish islands about 8,000–7,500 years BP, and the last documented aurochs lived in southern Jutland around 3,000 years BP. The latest-known aurochs fossil in Great Britain dates to 3,245 years BP, and it was probably extinct by 3,000 years ago. Excessive hunting began and continued until the aurochs was nearly extinct. The gradual extinction of the aurochs in Central Europe was concurrent with the clearcutting of large forest tracts between the 9th and 12th centuries. By the 13th century, the aurochs existed only in small numbers in Eastern Europe, and hunting it became a privilege of nobles and later royals. The population in Hungary was declining from at least the 9th century and was extinct in the 13th century. Findings from subfossil records indicate that wild aurochs might have survived in northwestern Transylvania until the 14th to 16th century, in western Moldavia until probably the early 17th century. The last-known aurochs herd lived in a marshy woodland in Poland's Jaktorów Forest. It decreased from around 50 individuals in the mid 16th century to four individuals by 1601. The last aurochs cow died in 1627 from natural causes. A 2021 study argued that the aurochs possibly survived in northeastern Bulgaria until at least the 17th century. A horn-core excavated in 2020 in Sofia was identified as being from an aurochs; the archaeological layer in which it was found was dated to the second half of the 17th or first half of the 18th century, suggesting that aurochs may have survived in Bulgaria until that date. Breeding of aurochs-like cattle In the early 1920s, Heinz Heck initiated a selective breeding program in Hellabrunn Zoo attempting to breed back the aurochs using several cattle breeds; the result is called Heck cattle. Herds of these cattle were released to Oostvaardersplassen, a polder in the Netherlands in the 1980s as aurochs surrogates for naturalistic grazing with the aim to restore prehistorical landscapes. Large numbers of them died of starvation during the cold winters of 2005 and 2010, and the project of no interference ended in 2018. Starting in 1996, Heck cattle were crossed with southern European cattle breeds such as Sayaguesa Cattle, Chianina and to a lesser extent Spanish Fighting Bulls in the hope of creating a more aurochs-like animal. The resulting crossbreeds are called Taurus cattle. Other breeding-back projects are the Tauros Programme and the Uruz Project. However, approaches aiming at breeding an aurochs-like phenotype do not equate to an aurochs-like genotype.
Biology and health sciences
Artiodactyla
null
2500
https://en.wikipedia.org/wiki/Anus
Anus
In mammals, invertebrates and most fish, the anus (plural: anuses or ani; from Latin, 'ring' or 'circle') is the external body orifice at the exit end of the digestive tract (bowel), i.e. the opposite end from the mouth. Its function is to facilitate the expulsion of wastes that remain after digestion. Bowel contents that pass through the anus include the gaseous flatus and the semi-solid feces, which (depending on the type of animal) include: indigestible matter such as bones, hair pellets, endozoochorous seeds and digestive rocks; residual food material after the digestible nutrients have been extracted, for example cellulose or lignin; ingested matter which would be toxic if it remained in the digestive tract; excreted metabolites like bilirubin-containing bile; and dead mucosal epithelia or excess gut bacteria and other endosymbionts. Passage of feces through the anus is typically controlled by muscular sphincters, and failure to prevent unwanted passage results in fecal incontinence. Amphibians, reptiles and birds use a similar orifice (known as the cloaca) for excreting liquid and solid wastes, and for copulation and egg-laying. Monotreme mammals also have a cloaca, which is thought to be a feature inherited from the earliest amniotes. Marsupials have a single orifice for excreting both solids and liquids and, in females, a separate vagina for reproduction. Female placental mammals have completely separate orifices for defecation, urination, and reproduction; males have one opening for defecation and another for both urination and reproduction, although the channels flowing to that orifice are almost completely separate. The development of the anus was an important stage in the evolution of multicellular animals. It appears to have happened at least twice, following different paths in protostomes and deuterostomes. This accompanied or facilitated other important evolutionary developments: the bilaterian body plan, the coelom, and metamerism, in which the body was built of repeated "modules" which could later specialize, such as the heads of most arthropods, which are composed of fused, specialized segments. Among comb jellies, some species have one or sometimes two permanent anuses, while species like the warty comb jelly grow a transient anus, which then disappears when it is no longer needed. Development In animals at least as complex as an earthworm, the embryo forms a dent on one side, the blastopore, which deepens to become the archenteron, the first phase in the growth of the gut. In deuterostomes, the original dent becomes the anus while the gut eventually tunnels through to make another opening, which forms the mouth. The protostomes were so named because it was thought that in their embryos the dent formed the mouth first (proto– meaning "first") and the anus was formed later at the opening made by the other end of the gut. Research from 2001 shows that in protostomes the edges of the dent close up in the middle, leaving openings at the two ends that become the mouth and the anus.
Biology and health sciences
Gastrointestinal tract
Biology
2504
https://en.wikipedia.org/wiki/Amphetamine
Amphetamine
Amphetamine (contracted from alpha-methylphenethylamine) is a central nervous system (CNS) stimulant that is used in the treatment of attention deficit hyperactivity disorder (ADHD), narcolepsy, and obesity; it is also used to treat binge eating disorder in the form of its inactive prodrug lisdexamfetamine. Amphetamine was discovered as a chemical in 1887 by Lazăr Edeleanu, and then as a drug in the late 1920s. It exists as two enantiomers: levoamphetamine and dextroamphetamine. Amphetamine properly refers to a specific chemical, the racemic free base, which is equal parts of the two enantiomers in their pure amine forms. The term is frequently used informally to refer to any combination of the enantiomers, or to either of them alone. Historically, it has been used to treat nasal congestion and depression. Amphetamine is also used as an athletic performance enhancer and cognitive enhancer, and recreationally as an aphrodisiac and euphoriant. It is a prescription drug in many countries, and unauthorized possession and distribution of amphetamine are often tightly controlled due to the significant health risks associated with recreational use. The first amphetamine pharmaceutical was Benzedrine, a brand which was used to treat a variety of conditions. Pharmaceutical amphetamine is prescribed as racemic amphetamine, Adderall, dextroamphetamine, or the inactive prodrug lisdexamfetamine. Amphetamine increases monoamine and excitatory neurotransmission in the brain, with its most pronounced effects targeting the norepinephrine and dopamine neurotransmitter systems. At therapeutic doses, amphetamine causes emotional and cognitive effects such as euphoria, change in desire for sex, increased wakefulness, and improved cognitive control. It induces physical effects such as improved reaction time, fatigue resistance, decreased appetite, elevated heart rate, and increased muscle strength. Larger doses of amphetamine may impair cognitive function and induce rapid muscle breakdown. Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses. Very high doses can result in psychosis (e.g., hallucinations, delusions and paranoia), which rarely occurs at therapeutic doses even during long-term use. Recreational doses are generally much larger than prescribed therapeutic doses and carry a far greater risk of serious side effects. Amphetamine belongs to the phenethylamine class. It is also the parent compound of its own structural class, the substituted amphetamines, which includes prominent substances such as bupropion, cathinone, MDMA, and methamphetamine. As a member of the phenethylamine class, amphetamine is also chemically related to the naturally occurring trace amine neuromodulators, specifically phenethylamine and N-methylphenethylamine, both of which are produced within the human body. Phenethylamine is the parent compound of amphetamine, while N-methylphenethylamine is a positional isomer of amphetamine that differs only in the placement of the methyl group. Uses Medical Amphetamine is used to treat attention deficit hyperactivity disorder (ADHD), narcolepsy, obesity, and, in the form of lisdexamfetamine, binge eating disorder. It is sometimes prescribed for its past medical indications, particularly for depression and chronic pain. 
ADHD Long-term amphetamine exposure at sufficiently high doses in some animal species is known to produce abnormal dopamine system development or nerve damage, but, in humans with ADHD, long-term use of pharmaceutical amphetamines at therapeutic doses appears to improve brain development and nerve growth. Reviews of magnetic resonance imaging (MRI) studies suggest that long-term treatment with amphetamine decreases abnormalities in brain structure and function found in subjects with ADHD, and improves function in several parts of the brain, such as the right caudate nucleus of the basal ganglia. Reviews of clinical stimulant research have established the safety and effectiveness of long-term continuous amphetamine use for the treatment of ADHD. Randomized controlled trials of continuous stimulant therapy for the treatment of ADHD spanning 2 years have demonstrated treatment effectiveness and safety. Two reviews have indicated that long-term continuous stimulant therapy for ADHD is effective for reducing the core symptoms of ADHD (i.e., hyperactivity, inattention, and impulsivity), enhancing quality of life and academic achievement, and producing improvements in a large number of functional outcomes across 9 categories of outcomes related to academics, antisocial behavior, driving, non-medicinal drug use, obesity, occupation, self-esteem, service use (i.e., academic, occupational, health, financial, and legal services), and social function. Additionally, a 2024 meta-analytic systematic review reported moderate improvements in quality of life when amphetamine treatment is used for ADHD. One review highlighted a nine-month randomized controlled trial of amphetamine treatment for ADHD in children that found an average increase of 4.5 IQ points, continued increases in attention, and continued decreases in disruptive behaviors and hyperactivity. Another review indicated that, based upon the longest follow-up studies conducted to date, lifetime stimulant therapy that begins during childhood is continuously effective for controlling ADHD symptoms and reduces the risk of developing a substance use disorder as an adult. A 2025 meta-analytic systematic review of 113 randomized controlled trials demonstrated that stimulant medications significantly improved core ADHD symptoms in adults over a three-month period, with good acceptability compared to other pharmacological and non-pharmacological treatments. Models of ADHD suggest that it is associated with functional impairments in some of the brain's neurotransmitter systems; these functional impairments involve impaired dopamine neurotransmission in the mesocorticolimbic projection and norepinephrine neurotransmission in the noradrenergic projections from the locus coeruleus to the prefrontal cortex. Stimulants like methylphenidate and amphetamine are effective in treating ADHD because they increase neurotransmitter activity in these systems. Approximately 80% of those who use these stimulants see improvements in ADHD symptoms. Children with ADHD who use stimulant medications generally have better relationships with peers and family members, perform better in school, are less distractible and impulsive, and have longer attention spans. 
The Cochrane reviews on the treatment of ADHD in children, adolescents, and adults with pharmaceutical amphetamines stated that short-term studies have demonstrated that these drugs decrease the severity of symptoms, but they have higher discontinuation rates than non-stimulant medications due to their adverse side effects. A Cochrane review on the treatment of ADHD in children with tic disorders such as Tourette syndrome indicated that stimulants in general do not make tics worse, but high doses of dextroamphetamine could exacerbate tics in some individuals. Binge eating disorder Binge eating disorder (BED) is characterized by recurrent and persistent episodes of compulsive binge eating. These episodes are often accompanied by marked distress and a feeling of loss of control over eating. The pathophysiology of BED is not fully understood, but it is believed to involve dysfunctional dopaminergic reward circuitry along the cortico-striatal-thalamic-cortical loop. As of July 2024, lisdexamfetamine is the only USFDA- and TGA-approved pharmacotherapy for BED. Evidence suggests that lisdexamfetamine's treatment efficacy in BED is underpinned at least in part by a psychopathological overlap between BED and ADHD, with the latter conceptualized as a cognitive control disorder that also benefits from treatment with lisdexamfetamine. Lisdexamfetamine's therapeutic effects for BED primarily involve direct action in the central nervous system after conversion to its pharmacologically active metabolite, dextroamphetamine. Centrally, dextroamphetamine increases neurotransmitter activity of dopamine and norepinephrine in prefrontal cortical regions that regulate cognitive control of behavior. Similar to its therapeutic effect in ADHD, dextroamphetamine enhances cognitive control and may reduce impulsivity in patients with BED by enhancing the cognitive processes responsible for overriding prepotent feeding responses that precede binge eating episodes. In addition, dextroamphetamine's actions outside of the central nervous system may also contribute to its treatment effects in BED. Peripherally, dextroamphetamine triggers lipolysis through noradrenergic signaling in adipose fat cells, leading to the release of triglycerides into blood plasma to be utilized as a fuel substrate. Dextroamphetamine also activates TAAR1 in peripheral organs along the gastrointestinal tract that are involved in the regulation of food intake and body weight. Together, these actions confer an anorexigenic effect that promotes satiety in response to feeding and may decrease binge eating as a secondary effect. Medical reviews of randomized controlled trials have demonstrated that lisdexamfetamine, at doses between 50–70 mg, is safe and effective for the treatment of moderate-to-severe BED in adults. These reviews suggest that lisdexamfetamine is persistently effective at treating BED and is associated with significant reductions in the number of binge eating days and binge eating episodes per week. Furthermore, a meta-analytic systematic review highlighted an open-label, 12-month extension safety and tolerability study that reported lisdexamfetamine remained effective at reducing the number of binge eating days for the duration of the study. 
In addition, both a review and a meta-analytic systematic review found lisdexamfetamine to be superior to placebo in several secondary outcome measures, including persistent binge eating cessation, reduction of obsessive-compulsive-related binge eating symptoms, reduction of body weight, and reduction of triglycerides. Lisdexamfetamine, like all pharmaceutical amphetamines, has direct appetite-suppressant effects that may be therapeutically useful in both BED and its comorbidities. Based on reviews of neuroimaging studies involving BED-diagnosed participants, therapeutic neuroplasticity in dopaminergic and noradrenergic pathways from long-term use of lisdexamfetamine may be implicated in lasting improvements in the regulation of eating behaviors that are observed even after the drug is discontinued. Narcolepsy Narcolepsy is a chronic sleep-wake disorder that is associated with excessive daytime sleepiness, cataplexy, and sleep paralysis. Patients with narcolepsy are diagnosed as either type 1 or type 2, with only the former presenting cataplexy symptoms. Type 1 narcolepsy results from the loss of approximately 70,000 orexin-releasing neurons in the lateral hypothalamus, leading to significantly reduced cerebrospinal fluid orexin levels; this reduction is a diagnostic biomarker for type 1 narcolepsy. Lateral hypothalamic orexin neurons innervate every component of the ascending reticular activating system (ARAS), which includes noradrenergic, dopaminergic, histaminergic, and serotonergic nuclei that promote wakefulness. Amphetamine's therapeutic mode of action in narcolepsy primarily involves increasing monoamine neurotransmitter activity in the ARAS. This includes noradrenergic neurons in the locus coeruleus, dopaminergic neurons in the ventral tegmental area, histaminergic neurons in the tuberomammillary nucleus, and serotonergic neurons in the dorsal raphe nucleus. Dextroamphetamine, the more dopaminergic enantiomer of amphetamine, is particularly effective at promoting wakefulness because dopamine release has the greatest influence on cortical activation and cognitive arousal, relative to other monoamines. In contrast, levoamphetamine may have a greater effect on cataplexy, a symptom more sensitive to the effects of norepinephrine and serotonin. Noradrenergic and serotonergic nuclei in the ARAS are involved in the regulation of the REM sleep cycle and function as "REM-off" cells, with amphetamine's effect on norepinephrine and serotonin contributing to the suppression of REM sleep and a possible reduction of cataplexy at high doses. The American Academy of Sleep Medicine (AASM) 2021 clinical practice guideline conditionally recommends dextroamphetamine for the treatment of both type 1 and type 2 narcolepsy. Treatment with pharmaceutical amphetamines is generally less preferred than other stimulants (e.g., modafinil) and is considered a third-line treatment option. Medical reviews indicate that amphetamine is safe and effective for the treatment of narcolepsy. Amphetamine appears to be most effective at improving symptoms associated with hypersomnolence, with three reviews finding clinically significant reductions in daytime sleepiness in patients with narcolepsy. Additionally, these reviews suggest that amphetamine may dose-dependently improve cataplexy symptoms. However, the quality of evidence for these findings is low, which is reflected in the AASM's conditional recommendation of dextroamphetamine as a treatment option for narcolepsy. 
Enhancing performance Cognitive performance In 2015, a systematic review and a meta-analysis of high quality clinical trials found that, when used at low (therapeutic) doses, amphetamine produces modest yet unambiguous improvements in cognition, including working memory, long-term episodic memory, inhibitory control, and some aspects of attention, in normal healthy adults; these cognition-enhancing effects of amphetamine are known to be partially mediated through the indirect activation of both dopamine D1 receptor and α2-adrenergic receptor in the prefrontal cortex. A systematic review from 2014 found that low doses of amphetamine also improve memory consolidation, in turn leading to improved recall of information. Therapeutic doses of amphetamine also enhance cortical network efficiency, an effect which mediates improvements in working memory in all individuals. Amphetamine and other ADHD stimulants also improve task saliency (motivation to perform a task) and increase arousal (wakefulness), in turn promoting goal-directed behavior. Stimulants such as amphetamine can improve performance on difficult and boring tasks and are used by some students as a study and test-taking aid. Based upon studies of self-reported illicit stimulant use, of college students use diverted ADHD stimulants, which are primarily used for enhancement of academic performance rather than as recreational drugs. However, high amphetamine doses that are above the therapeutic range can interfere with working memory and other aspects of cognitive control. Physical performance Amphetamine is used by some athletes for its psychological and athletic performance-enhancing effects, such as increased endurance and alertness; however, non-medical amphetamine use is prohibited at sporting events that are regulated by collegiate, national, and international anti-doping agencies. In healthy people at oral therapeutic doses, amphetamine has been shown to increase muscle strength, acceleration, athletic performance in anaerobic conditions, and endurance (i.e., it delays the onset of fatigue), while improving reaction time. Amphetamine improves endurance and reaction time primarily through reuptake inhibition and release of dopamine in the central nervous system. Amphetamine and other dopaminergic drugs also increase power output at fixed levels of perceived exertion by overriding a "safety switch", allowing the core temperature limit to increase in order to access a reserve capacity that is normally off-limits. At therapeutic doses, the adverse effects of amphetamine do not impede athletic performance; however, at much higher doses, amphetamine can induce effects that severely impair performance, such as rapid muscle breakdown and elevated body temperature. Recreational Amphetamine, specifically the more dopaminergic dextrorotatory enantiomer (dextroamphetamine), is also used recreationally as a euphoriant and aphrodisiac, and like other amphetamines; is used as a club drug for its energetic and euphoric high. Dextroamphetamine (d-amphetamine) is considered to have a high potential for misuse in a recreational manner since individuals typically report feeling euphoric, more alert, and more energetic after taking the drug. A notable part of the 1960s mod subculture in the UK was recreational amphetamine use, which was used to fuel all-night dances at clubs like Manchester's Twisted Wheel. Newspaper reports described dancers emerging from clubs at 5 a.m. with dilated pupils. 
Mods used the drug for stimulation and alertness, which they viewed as different from the intoxication caused by alcohol and other drugs. Dr. Andrew Wilson argues that for a significant minority, "amphetamines symbolised the smart, on-the-ball, cool image" and that they sought "stimulation not intoxication [...] greater awareness, not escape" and "confidence and articulacy" rather than the "drunken rowdiness of previous generations." Dextroamphetamine's dopaminergic (rewarding) properties affect the mesocorticolimbic circuit, a group of neural structures responsible for incentive salience (i.e., "wanting"; desire or craving for a reward and motivation), positive reinforcement and positively valenced emotions, particularly ones involving pleasure. Large recreational doses of dextroamphetamine may produce symptoms of dextroamphetamine overdose. Recreational users sometimes open Dexedrine capsules and crush the contents in order to insufflate (snort) them or subsequently dissolve them in water and inject them. Immediate-release formulations have higher potential for abuse via insufflation (snorting) or intravenous injection due to a more favorable pharmacokinetic profile and easy crushability (especially tablets). Injection into the bloodstream can be dangerous because insoluble fillers within the tablets can block small blood vessels. Chronic overuse of dextroamphetamine can lead to severe drug dependence, resulting in withdrawal symptoms when drug use stops. Contraindications According to the International Programme on Chemical Safety (IPCS) and the U.S. Food and Drug Administration (FDA), amphetamine is contraindicated in people with a history of drug abuse, cardiovascular disease, severe agitation, or severe anxiety. It is also contraindicated in individuals with advanced arteriosclerosis (hardening of the arteries), glaucoma (increased eye pressure), hyperthyroidism (excessive production of thyroid hormone), or moderate to severe hypertension. These agencies indicate that people who have experienced allergic reactions to other stimulants or who are taking monoamine oxidase inhibitors (MAOIs) should not take amphetamine, although safe concurrent use of amphetamine and monoamine oxidase inhibitors has been documented. These agencies also state that anyone with anorexia nervosa, bipolar disorder, depression, hypertension, liver or kidney problems, mania, psychosis, Raynaud's phenomenon, seizures, thyroid problems, tics, or Tourette syndrome should monitor their symptoms while taking amphetamine. Evidence from human studies indicates that therapeutic amphetamine use does not cause developmental abnormalities in the fetus or newborns (i.e., it is not a human teratogen), but amphetamine abuse does pose risks to the fetus. Amphetamine has also been shown to pass into breast milk, so the IPCS and the FDA advise mothers to avoid breastfeeding when using it. Due to the potential for reversible growth impairments, the FDA advises monitoring the height and weight of children and adolescents prescribed an amphetamine pharmaceutical. Adverse effects The adverse side effects of amphetamine are many and varied, and the amount of amphetamine used is the primary factor in determining the likelihood and severity of adverse effects. Amphetamine products such as Adderall, Dexedrine, and their generic equivalents are currently approved by the U.S. FDA for long-term therapeutic use.
Recreational use of amphetamine generally involves much larger doses, which have a greater risk of serious adverse drug effects than dosages used for therapeutic purposes. Physical Cardiovascular side effects can include hypertension or hypotension from a vasovagal response, Raynaud's phenomenon (reduced blood flow to the hands and feet), and tachycardia (increased heart rate). Sexual side effects in males may include erectile dysfunction, frequent erections, or prolonged erections. Gastrointestinal side effects may include abdominal pain, constipation, diarrhea, and nausea. Other potential physical side effects include appetite loss, blurred vision, dry mouth, excessive grinding of the teeth, nosebleed, profuse sweating, rhinitis medicamentosa (drug-induced nasal congestion), reduced seizure threshold, tics (a type of movement disorder), and weight loss. Dangerous physical side effects are rare at typical pharmaceutical doses. Amphetamine stimulates the medullary respiratory centers, producing faster and deeper breaths. In a normal person at therapeutic doses, this effect is usually not noticeable, but when respiration is already compromised, it may be evident. Amphetamine also induces contraction in the urinary bladder sphincter, the muscle which controls urination, which can result in difficulty urinating. This effect can be useful in treating bed wetting and loss of bladder control. The effects of amphetamine on the gastrointestinal tract are unpredictable. If intestinal activity is high, amphetamine may reduce gastrointestinal motility (the rate at which content moves through the digestive system); however, amphetamine may increase motility when the smooth muscle of the tract is relaxed. Amphetamine also has a slight analgesic effect and can enhance the pain relieving effects of opioids. FDA-commissioned studies from 2011 indicate that in children, young adults, and adults there is no association between serious adverse cardiovascular events (sudden death, heart attack, and stroke) and the medical use of amphetamine or other ADHD stimulants. However, amphetamine pharmaceuticals are contraindicated in individuals with cardiovascular disease. Psychological At normal therapeutic doses, the most common psychological side effects of amphetamine include increased alertness, apprehension, concentration, initiative, self-confidence and sociability, mood swings (elated mood followed by mildly depressed mood), insomnia or wakefulness, and decreased sense of fatigue. Less common side effects include anxiety, change in libido, grandiosity, irritability, repetitive or obsessive behaviors, and restlessness; these effects depend on the user's personality and current mental state. Amphetamine psychosis (e.g., delusions and paranoia) can occur in heavy users. Although very rare, this psychosis can also occur at therapeutic doses during long-term therapy. According to the FDA, "there is no systematic evidence" that stimulants produce aggressive behavior or hostility. Amphetamine has also been shown to produce a conditioned place preference in humans taking therapeutic doses, meaning that individuals acquire a preference for spending time in places where they have previously used amphetamine. 
Reinforcement disorders Addiction Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at therapeutic doses; in fact, lifetime stimulant therapy for ADHD that begins during childhood reduces the risk of developing substance use disorders as an adult. Pathological overactivation of the mesolimbic pathway, a dopamine pathway that connects the ventral tegmental area to the nucleus accumbens, plays a central role in amphetamine addiction. Individuals who frequently self-administer high doses of amphetamine have a high risk of developing an amphetamine addiction, since chronic use at high doses gradually increases the level of accumbal ΔFosB, a "molecular switch" and "master control protein" for addiction. Once nucleus accumbens ΔFosB is sufficiently overexpressed, it begins to increase the severity of addictive behavior (i.e., compulsive drug-seeking) with further increases in its expression. While there are currently no effective drugs for treating amphetamine addiction, regularly engaging in sustained aerobic exercise appears to reduce the risk of developing such an addiction. Exercise therapy improves clinical treatment outcomes and may be used as an adjunct therapy with behavioral therapies for addiction. Biomolecular mechanisms Chronic use of amphetamine at excessive doses causes alterations in gene expression in the mesocorticolimbic projection, which arise through transcriptional and epigenetic mechanisms. The most important transcription factors that produce these alterations are Delta FBJ murine osteosarcoma viral oncogene homolog B (ΔFosB), cAMP response element binding protein (CREB), and nuclear factor-kappa B (NF-κB). ΔFosB is the most significant biomolecular mechanism in addiction because ΔFosB overexpression (i.e., an abnormally high level of gene expression which produces a pronounced gene-related phenotype) in the D1-type medium spiny neurons in the nucleus accumbens is necessary and sufficient for many of the neural adaptations and regulates multiple behavioral effects (e.g., reward sensitization and escalating drug self-administration) involved in addiction. Once ΔFosB is sufficiently overexpressed, it induces an addictive state that becomes increasingly more severe with further increases in ΔFosB expression. It has been implicated in addictions to alcohol, cannabinoids, cocaine, methylphenidate, nicotine, opioids, phencyclidine, propofol, and substituted amphetamines, among others. ΔJunD, a transcription factor, and G9a, a histone methyltransferase enzyme, both oppose the function of ΔFosB and inhibit increases in its expression. Sufficiently overexpressing ΔJunD in the nucleus accumbens with viral vectors can completely block many of the neural and behavioral alterations seen in chronic drug abuse (i.e., the alterations mediated by ΔFosB). Similarly, accumbal G9a hyperexpression results in markedly increased histone 3 lysine residue 9 dimethylation (H3K9me2) and blocks the induction of ΔFosB-mediated neural and behavioral plasticity by chronic drug use, which occurs via H3K9me2-mediated repression of transcription factors for ΔFosB and H3K9me2-mediated repression of various ΔFosB transcriptional targets (e.g., CDK5). ΔFosB also plays an important role in regulating behavioral responses to natural rewards, such as palatable food, sex, and exercise. 
Since both natural rewards and addictive drugs induce the expression of ΔFosB (i.e., they cause the brain to produce more of it), chronic acquisition of these rewards can result in a similar pathological state of addiction. Consequently, ΔFosB is the most significant factor involved in both amphetamine addiction and amphetamine-induced sexual addictions, which are compulsive sexual behaviors that result from excessive sexual activity and amphetamine use. These sexual addictions are associated with a dopamine dysregulation syndrome which occurs in some patients taking dopaminergic drugs. The effects of amphetamine on gene regulation are both dose- and route-dependent. Most of the research on gene regulation and addiction is based upon animal studies with intravenous amphetamine administration at very high doses. The few studies that have used equivalent (weight-adjusted) human therapeutic doses and oral administration show that these changes, if they occur, are relatively minor. This suggests that medical use of amphetamine does not significantly affect gene regulation. Pharmacological treatments There is no effective pharmacotherapy for amphetamine addiction. Reviews from 2015 and 2016 indicated that TAAR1-selective agonists have significant therapeutic potential as a treatment for psychostimulant addictions; however, the only compounds which are known to function as TAAR1-selective agonists are experimental drugs. Amphetamine addiction is largely mediated through increased activation of dopamine receptors and NMDA receptors in the nucleus accumbens; magnesium ions inhibit NMDA receptors by blocking the receptor calcium channel. One review suggested that, based upon animal testing, pathological (addiction-inducing) psychostimulant use significantly reduces the level of intracellular magnesium throughout the brain. Supplemental magnesium treatment has been shown to reduce amphetamine self-administration (i.e., doses given to oneself) in humans, but it is not an effective monotherapy for amphetamine addiction. A systematic review and meta-analysis from 2019 assessed the efficacy of 17 different pharmacotherapies used in randomized controlled trials (RCTs) for amphetamine and methamphetamine addiction; it found only low-strength evidence that methylphenidate might reduce amphetamine or methamphetamine self-administration. There was low- to moderate-strength evidence of no benefit for most of the other medications used in RCTs, which included antidepressants (bupropion, mirtazapine, sertraline), antipsychotics (aripiprazole), anticonvulsants (topiramate, baclofen, gabapentin), naltrexone, varenicline, citicoline, ondansetron, Prometa, riluzole, atomoxetine, dextroamphetamine, and modafinil. Behavioral treatments A 2018 systematic review and network meta-analysis of 50 trials involving 12 different psychosocial interventions for amphetamine, methamphetamine, or cocaine addiction found that combination therapy with both contingency management and community reinforcement approach had the highest efficacy (i.e., abstinence rate) and acceptability (i.e., lowest dropout rate). Other treatment modalities examined in the analysis included monotherapy with contingency management or community reinforcement approach, cognitive behavioral therapy, 12-step programs, non-contingent reward-based therapies, psychodynamic therapy, and other combination therapies involving these.
Additionally, research on the neurobiological effects of physical exercise suggests that daily aerobic exercise, especially endurance exercise (e.g., marathon running), prevents the development of drug addiction and is an effective adjunct therapy (i.e., a supplemental treatment) for amphetamine addiction. Exercise leads to better treatment outcomes when used as an adjunct treatment, particularly for psychostimulant addictions. In particular, aerobic exercise decreases psychostimulant self-administration, reduces the reinstatement (i.e., relapse) of drug-seeking, and induces increased dopamine receptor D2 (DRD2) density in the striatum. This is the opposite of pathological stimulant use, which induces decreased striatal DRD2 density. One review noted that exercise may also prevent the development of a drug addiction by altering ΔFosB or immunoreactivity in the striatum or other parts of the reward system. Dependence and withdrawal Drug tolerance develops rapidly in amphetamine abuse (i.e., recreational amphetamine use), so periods of extended abuse require increasingly larger doses of the drug in order to achieve the same effect. According to a Cochrane review on withdrawal in individuals who compulsively use amphetamine and methamphetamine, "when chronic heavy users abruptly discontinue amphetamine use, many report a time-limited withdrawal syndrome that occurs within 24 hours of their last dose." This review noted that withdrawal symptoms in chronic, high-dose users are frequent, occurring in roughly 88% of cases, and persist for  weeks with a marked "crash" phase occurring during the first week. Amphetamine withdrawal symptoms can include anxiety, drug craving, depressed mood, fatigue, increased appetite, increased movement or decreased movement, lack of motivation, sleeplessness or sleepiness, and lucid dreams. The review indicated that the severity of withdrawal symptoms is positively correlated with the age of the individual and the extent of their dependence. Mild withdrawal symptoms from the discontinuation of amphetamine treatment at therapeutic doses can be avoided by tapering the dose. Overdose An amphetamine overdose can lead to many different symptoms, but is rarely fatal with appropriate care. The severity of overdose symptoms increases with dosage and decreases with drug tolerance to amphetamine. Tolerant individuals have been known to take as much as 5 grams of amphetamine in a day, which is roughly 100 times the maximum daily therapeutic dose. Symptoms of a moderate and extremely large overdose are listed below; fatal amphetamine poisoning usually also involves convulsions and coma. In 2013, overdose on amphetamine, methamphetamine, and other compounds implicated in an "amphetamine use disorder" resulted in an estimated 3,788 deaths worldwide ( deaths, 95% confidence). Toxicity In rodents and primates, sufficiently high doses of amphetamine cause dopaminergic neurotoxicity, or damage to dopamine neurons, which is characterized by dopamine terminal degeneration and reduced transporter and receptor function. There is no evidence that amphetamine is directly neurotoxic in humans. However, large doses of amphetamine may indirectly cause dopaminergic neurotoxicity as a result of hyperpyrexia, the excessive formation of reactive oxygen species, and increased autoxidation of dopamine. 
Animal models of neurotoxicity from high-dose amphetamine exposure indicate that the occurrence of hyperpyrexia (i.e., core body temperature ≥ 40 °C) is necessary for the development of amphetamine-induced neurotoxicity. Prolonged elevations of brain temperature above 40 °C likely promote the development of amphetamine-induced neurotoxicity in laboratory animals by facilitating the production of reactive oxygen species, disrupting cellular protein function, and transiently increasing blood–brain barrier permeability. Psychosis An amphetamine overdose can result in a stimulant psychosis that may involve a variety of symptoms, such as delusions and paranoia. A Cochrane review on treatment for amphetamine, dextroamphetamine, and methamphetamine psychosis states that about of users fail to recover completely. According to the same review, there is at least one trial that shows antipsychotic medications effectively resolve the symptoms of acute amphetamine psychosis. Psychosis rarely arises from therapeutic use. Drug interactions Many types of substances are known to interact with amphetamine, resulting in altered drug action or metabolism of amphetamine, the interacting substance, or both. Inhibitors of enzymes that metabolize amphetamine (e.g., CYP2D6 and FMO3) will prolong its elimination half-life, meaning that its effects will last longer. Amphetamine also interacts with monoamine oxidase inhibitors (MAOIs), particularly monoamine oxidase A inhibitors, since both MAOIs and amphetamine increase plasma catecholamines (i.e., norepinephrine and dopamine); therefore, concurrent use of both is dangerous. Amphetamine modulates the activity of most psychoactive drugs. In particular, amphetamine may decrease the effects of sedatives and depressants and increase the effects of stimulants and antidepressants. Amphetamine may also decrease the effects of antihypertensives and antipsychotics due to its effects on blood pressure and dopamine, respectively. Zinc supplementation may reduce the minimum effective dose of amphetamine when it is used for the treatment of ADHD. Norepinephrine reuptake inhibitors (NRIs) like atomoxetine prevent norepinephrine release induced by amphetamines and have been found to reduce the stimulant, euphoriant, and sympathomimetic effects of dextroamphetamine in humans. In general, there is no significant interaction when consuming amphetamine with food, but the pH of gastrointestinal content and urine affects the absorption and excretion of amphetamine, respectively. Acidic substances reduce the absorption of amphetamine and increase urinary excretion, and alkaline substances do the opposite. Due to the effect pH has on absorption, amphetamine also interacts with gastric acid reducers such as proton pump inhibitors and H2 antihistamines, which increase gastrointestinal pH (i.e., make it less acidic). Pharmacology Pharmacodynamics Amphetamine exerts its behavioral effects by altering the use of monoamines as neuronal signals in the brain, primarily in catecholamine neurons in the reward and executive function pathways of the brain. The concentrations of the main neurotransmitters involved in reward circuitry and executive functioning, dopamine and norepinephrine, are increased dramatically by amphetamine in a dose-dependent manner because of its effects on monoamine transporters. The reinforcing and motivational salience-promoting effects of amphetamine are due mostly to enhanced dopaminergic activity in the mesolimbic pathway.
The euphoric and locomotor-stimulating effects of amphetamine are dependent upon the magnitude and speed by which it increases synaptic dopamine and norepinephrine concentrations in the striatum. Amphetamine has been identified as a potent full agonist of trace amine-associated receptor 1 (TAAR1), a G protein-coupled receptor (GPCR) discovered in 2001, which is important for regulation of brain monoamines. Activation of TAAR1 increases cyclic AMP (cAMP) production via adenylyl cyclase activation and inhibits monoamine transporter function. Monoamine autoreceptors (e.g., D2 short, presynaptic α2, and presynaptic 5-HT1A) have the opposite effect of TAAR1, and together these receptors provide a regulatory system for monoamines. Notably, amphetamine and trace amines possess high binding affinities for TAAR1, but not for monoamine autoreceptors. Imaging studies indicate that monoamine reuptake inhibition by amphetamine and trace amines is site specific and depends upon the presence of TAAR1 in the associated monoamine neurons. In addition to the neuronal monoamine transporters, amphetamine also inhibits both vesicular monoamine transporters, VMAT1 and VMAT2, as well as SLC1A1, SLC22A3, and SLC22A5. SLC1A1 is excitatory amino acid transporter 3 (EAAT3), a glutamate transporter located in neurons, SLC22A3 is an extraneuronal monoamine transporter that is present in astrocytes, and SLC22A5 is a high-affinity carnitine transporter. Amphetamine is known to strongly induce gene expression of cocaine- and amphetamine-regulated transcript (CART), a neuropeptide involved in feeding behavior, stress, and reward, which induces observable increases in neuronal development and survival in vitro. The CART receptor has yet to be identified, but there is significant evidence that CART binds to a unique receptor. Amphetamine also inhibits monoamine oxidases at very high doses, resulting in less monoamine and trace amine metabolism and consequently higher concentrations of synaptic monoamines. In humans, the only post-synaptic receptor at which amphetamine is known to bind is the 5-HT1A receptor, where it acts as an agonist with low micromolar affinity. The full profile of amphetamine's short-term drug effects in humans is mostly derived through increased cellular communication or neurotransmission of dopamine, serotonin, norepinephrine, epinephrine, histamine, CART peptides, endogenous opioids, adrenocorticotropic hormone, corticosteroids, and glutamate, which it affects through interactions with the transporters, receptors, and enzymes described in this section, and possibly other biological targets. Amphetamine also activates seven human carbonic anhydrase enzymes, several of which are expressed in the human brain. Dextroamphetamine is a more potent agonist of TAAR1 than levoamphetamine. Consequently, dextroamphetamine produces greater stimulation than levoamphetamine, roughly three to four times more, but levoamphetamine has slightly stronger cardiovascular and peripheral effects. Dopamine In certain brain regions, amphetamine increases the concentration of dopamine in the synaptic cleft. Amphetamine can enter the presynaptic neuron either through the dopamine transporter (DAT) or by diffusing across the neuronal membrane directly. As a consequence of DAT uptake, amphetamine produces competitive reuptake inhibition at the transporter. Upon entering the presynaptic neuron, amphetamine activates TAAR1 which, through protein kinase A (PKA) and protein kinase C (PKC) signaling, causes DAT phosphorylation.
Phosphorylation by either protein kinase can result in DAT internalization (reuptake inhibition), but phosphorylation alone induces the reversal of dopamine transport through DAT (i.e., dopamine efflux). Amphetamine is also known to increase intracellular calcium, an effect which is associated with DAT phosphorylation through an unidentified Ca2+/calmodulin-dependent protein kinase (CAMK)-dependent pathway, in turn producing dopamine efflux. Through direct activation of G protein-coupled inwardly-rectifying potassium channels, TAAR1 reduces the firing rate of dopamine neurons, preventing a hyper-dopaminergic state. Amphetamine is also a substrate for the presynaptic vesicular monoamine transporter, VMAT2. Following amphetamine uptake at VMAT2, amphetamine induces the collapse of the vesicular pH gradient, which results in the release of dopamine molecules from synaptic vesicles into the cytosol via dopamine efflux through VMAT2. Subsequently, the cytosolic dopamine molecules are released from the presynaptic neuron into the synaptic cleft via reverse transport at DAT. Norepinephrine Similar to dopamine, amphetamine dose-dependently increases the level of synaptic norepinephrine, the direct precursor of epinephrine. Based upon neuronal expression, amphetamine is thought to affect norepinephrine analogously to dopamine. In other words, amphetamine induces TAAR1-mediated efflux and reuptake inhibition at the phosphorylated norepinephrine transporter (NET), competitive NET reuptake inhibition, and norepinephrine release from VMAT2. Serotonin Amphetamine exerts analogous, yet less pronounced, effects on serotonin as on dopamine and norepinephrine. Amphetamine affects serotonin via the serotonin transporter (SERT) and, like norepinephrine, is thought to phosphorylate SERT via TAAR1. Like dopamine, amphetamine has low, micromolar affinity at the human 5-HT1A receptor. Other neurotransmitters, peptides, hormones, and enzymes Acute amphetamine administration in humans increases endogenous opioid release in several brain structures in the reward system. Extracellular levels of glutamate, the primary excitatory neurotransmitter in the brain, have been shown to increase in the striatum following exposure to amphetamine. This increase in extracellular glutamate presumably occurs via the amphetamine-induced internalization of EAAT3, a glutamate reuptake transporter, in dopamine neurons. Amphetamine also induces the selective release of histamine from mast cells and efflux from histaminergic neurons through VMAT2. Acute amphetamine administration can also increase adrenocorticotropic hormone and corticosteroid levels in blood plasma by stimulating the hypothalamic–pituitary–adrenal axis. In December 2017, the first study assessing the interaction between amphetamine and human carbonic anhydrase enzymes was published; of the eleven carbonic anhydrase enzymes it examined, it found that amphetamine potently activates seven, four of which are highly expressed in the human brain, with low nanomolar through low micromolar activating effects. Based upon preclinical research, cerebral carbonic anhydrase activation has cognition-enhancing effects, but, based upon the clinical use of carbonic anhydrase inhibitors, carbonic anhydrase activation in other tissues may be associated with adverse effects, such as ocular activation exacerbating glaucoma. Pharmacokinetics The oral bioavailability of amphetamine varies with gastrointestinal pH; it is well absorbed from the gut, and bioavailability is typically 90%.
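The pH dependence of absorption and excretion described here and in the following paragraph can be made concrete with the Henderson–Hasselbalch relation. The following is a minimal worked sketch, assuming only that amphetamine behaves as a weak base with the pKa of about 9.9 stated below; the example pH values are illustrative assumptions rather than figures reported in this article:

\[
\frac{[\text{ionized}]}{[\text{un-ionized}]} = 10^{\,\mathrm{p}K_\mathrm{a} - \mathrm{pH}},
\qquad
f_{\text{un-ionized}} = \frac{1}{1 + 10^{\,\mathrm{p}K_\mathrm{a} - \mathrm{pH}}}
\]

With pKa ≈ 9.9, an assumed acidic urine pH of 5 gives an un-ionized fraction of roughly 0.001%, so nearly all of the drug is in the water-soluble ionized form and is readily excreted, whereas an assumed alkaline pH of 8 gives roughly 1%, leaving far more lipid-soluble free base available for reabsorption; this is consistent with the longer half-lives and reduced excretion at alkaline urine pH described below.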
Amphetamine is a weak base with a pKa of 9.9; consequently, when the pH is basic, more of the drug is in its lipid soluble free base form, and more is absorbed through the lipid-rich cell membranes of the gut epithelium. Conversely, an acidic pH means the drug is predominantly in a water-soluble cationic (salt) form, and less is absorbed. Approximately of amphetamine circulating in the bloodstream is bound to plasma proteins. Following absorption, amphetamine readily distributes into most tissues in the body, with high concentrations occurring in cerebrospinal fluid and brain tissue. The half-lives of amphetamine enantiomers differ and vary with urine pH. At normal urine pH, the half-lives of dextroamphetamine and levoamphetamine are  hours and  hours, respectively. Highly acidic urine will reduce the enantiomer half-lives to 7 hours; highly alkaline urine will increase the half-lives up to 34 hours. The immediate-release and extended release variants of salts of both isomers reach peak plasma concentrations at 3 hours and 7 hours post-dose respectively. Amphetamine is eliminated via the kidneys, with of the drug being excreted unchanged at normal urinary pH. When the urinary pH is basic, amphetamine is in its free base form, so less is excreted. When urine pH is abnormal, the urinary recovery of amphetamine may range from a low of 1% to a high of 75%, depending mostly upon whether urine is too basic or acidic, respectively. Following oral administration, amphetamine appears in urine within 3 hours. Roughly 90% of ingested amphetamine is eliminated 3 days after the last oral dose. CYP2D6, dopamine β-hydroxylase (DBH), flavin-containing monooxygenase 3 (FMO3), butyrate-CoA ligase (XM-ligase), and glycine N-acyltransferase (GLYAT) are the enzymes known to metabolize amphetamine or its metabolites in humans. Amphetamine has a variety of excreted metabolic products, including , , , benzoic acid, hippuric acid, norephedrine, and phenylacetone. Among these metabolites, the active sympathomimetics are , , and norephedrine. The main metabolic pathways involve aromatic para-hydroxylation, aliphatic alpha- and beta-hydroxylation, N-oxidation, N-dealkylation, and deamination. The known metabolic pathways, detectable metabolites, and metabolizing enzymes in humans include the following: Pharmacomicrobiomics The human metagenome (i.e., the genetic composition of an individual and all microorganisms that reside on or within the individual's body) varies considerably between individuals. Since the total number of microbial and viral cells in the human body (over 100 trillion) greatly outnumbers human cells (tens of trillions), there is considerable potential for interactions between drugs and an individual's microbiome, including: drugs altering the composition of the human microbiome, drug metabolism by microbial enzymes modifying the drug's pharmacokinetic profile, and microbial drug metabolism affecting a drug's clinical efficacy and toxicity profile. The field that studies these interactions is known as pharmacomicrobiomics. Similar to most biomolecules and other orally administered xenobiotics (i.e., drugs), amphetamine is predicted to undergo promiscuous metabolism by human gastrointestinal microbiota (primarily bacteria) prior to absorption into the blood stream. The first amphetamine-metabolizing microbial enzyme, tyramine oxidase from a strain of E. coli commonly found in the human gut, was identified in 2019. 
This enzyme was found to metabolize amphetamine, tyramine, and phenethylamine with roughly the same binding affinity for all three compounds. Related endogenous compounds Amphetamine has a very similar structure and function to the endogenous trace amines, which are naturally occurring neuromodulator molecules produced in the human body and brain. Among this group, the most closely related compounds are phenethylamine, the parent compound of amphetamine, and , a structural isomer of amphetamine (i.e., it has an identical molecular formula). In humans, phenethylamine is produced directly from by the aromatic amino acid decarboxylase (AADC) enzyme, which converts into dopamine as well. In turn, is metabolized from phenethylamine by phenylethanolamine N-methyltransferase, the same enzyme that metabolizes norepinephrine into epinephrine. Like amphetamine, both phenethylamine and regulate monoamine neurotransmission via ; unlike amphetamine, both of these substances are broken down by monoamine oxidase B, and therefore have a shorter half-life than amphetamine. Chemistry Amphetamine is a methyl homolog of the mammalian neurotransmitter phenethylamine with the chemical formula . The carbon atom adjacent to the primary amine is a stereogenic center, and amphetamine is composed of a racemic 1:1 mixture of two enantiomers. This racemic mixture can be separated into its optical isomers: levoamphetamine and dextroamphetamine. At room temperature, the pure free base of amphetamine is a mobile, colorless, and volatile liquid with a characteristically strong amine odor, and acrid, burning taste. Frequently prepared solid salts of amphetamine include amphetamine adipate, aspartate, hydrochloride, phosphate, saccharate, sulfate, and tannate. Dextroamphetamine sulfate is the most common enantiopure salt. Amphetamine is also the parent compound of its own structural class, which includes a number of psychoactive derivatives. In organic chemistry, amphetamine is an excellent chiral ligand for the stereoselective synthesis of . Substituted derivatives The substituted derivatives of amphetamine, or "substituted amphetamines", are a broad range of chemicals that contain amphetamine as a "backbone"; specifically, this chemical class includes derivative compounds that are formed by replacing one or more hydrogen atoms in the amphetamine core structure with substituents. The class includes amphetamine itself, stimulants like methamphetamine, serotonergic empathogens like MDMA, and decongestants like ephedrine, among other subgroups. Synthesis Since the first preparation was reported in 1887, numerous synthetic routes to amphetamine have been developed. The most common route of both legal and illicit amphetamine synthesis employs a non-metal reduction known as the Leuckart reaction (method 1). In the first step, a reaction between phenylacetone and formamide, either using additional formic acid or formamide itself as a reducing agent, yields . This intermediate is then hydrolyzed using hydrochloric acid, and subsequently basified, extracted with organic solvent, concentrated, and distilled to yield the free base. The free base is then dissolved in an organic solvent, sulfuric acid added, and amphetamine precipitates out as the sulfate salt. A number of chiral resolutions have been developed to separate the two enantiomers of amphetamine. For example, racemic amphetamine can be treated with to form a diastereoisomeric salt which is fractionally crystallized to yield dextroamphetamine. 
Chiral resolution remains the most economical method for obtaining optically pure amphetamine on a large scale. In addition, several enantioselective syntheses of amphetamine have been developed. In one example, an optically pure amine is condensed with phenylacetone to yield a chiral Schiff base. In the key step, this intermediate is reduced by catalytic hydrogenation with a transfer of chirality to the carbon atom alpha to the amino group. Cleavage of the benzylic amine bond by hydrogenation yields optically pure dextroamphetamine. A large number of alternative synthetic routes to amphetamine have been developed based on classic organic reactions. One example is the Friedel–Crafts alkylation of benzene by allyl chloride to yield beta-chloropropylbenzene, which is then reacted with ammonia to produce racemic amphetamine (method 2). Another example employs the Ritter reaction (method 3). In this route, allylbenzene is reacted with acetonitrile in sulfuric acid to yield an organosulfate, which in turn is treated with sodium hydroxide to give amphetamine via an acetamide intermediate. A third route starts with a precursor which, through a double alkylation with methyl iodide followed by benzyl chloride, can be converted into a carboxylic acid. This synthetic intermediate can be transformed into amphetamine using either a Hofmann or Curtius rearrangement (method 4). A significant number of amphetamine syntheses feature a reduction of nitro, imine, oxime, or other nitrogen-containing functional groups. In one such example, a Knoevenagel condensation of benzaldehyde with nitroethane yields a nitroalkene intermediate. The double bond and nitro group of this intermediate are reduced using either catalytic hydrogenation or treatment with lithium aluminium hydride (method 5). Another method is the reaction of phenylacetone with ammonia, producing an imine intermediate that is reduced to the primary amine using hydrogen over a palladium catalyst or lithium aluminum hydride (method 6). Detection in body fluids Amphetamine is frequently measured in urine or blood as part of a drug test for sports, employment, poisoning diagnostics, and forensics. Techniques such as immunoassay, which is the most common form of amphetamine test, may cross-react with a number of sympathomimetic drugs. Chromatographic methods specific for amphetamine are employed to prevent false positive results. Chiral separation techniques may be employed to help distinguish the source of the drug, whether prescription amphetamine, prescription amphetamine prodrugs (e.g., selegiline), over-the-counter drug products that contain levomethamphetamine, or illicitly obtained substituted amphetamines. Several prescription drugs produce amphetamine as a metabolite, including benzphetamine, clobenzorex, famprofazone, fenproporex, lisdexamfetamine, mesocarb, methamphetamine, prenylamine, and selegiline, among others. These compounds may produce positive results for amphetamine on drug tests. Amphetamine is generally only detectable by a standard drug test for approximately 24 hours, although a high dose may be detectable for  days. For the assays, a study noted that an enzyme multiplied immunoassay technique (EMIT) assay for amphetamine and methamphetamine may produce more false positives than liquid chromatography–tandem mass spectrometry. Gas chromatography–mass spectrometry (GC–MS) of amphetamine and methamphetamine with a chloride derivatizing agent allows for the detection of methamphetamine in urine.
GC–MS of amphetamine and methamphetamine with the chiral derivatizing agent Mosher's acid chloride allows for the detection of both dextroamphetamine and dextromethamphetamine in urine. Hence, the latter method may be used on samples that test positive using other methods to help distinguish between the various sources of the drug. History, society, and culture Amphetamine was first synthesized in 1887 in Germany by Romanian chemist Lazăr Edeleanu who named it phenylisopropylamine; its stimulant effects remained unknown until 1927, when it was independently resynthesized by Gordon Alles and reported to have sympathomimetic properties. Amphetamine had no medical use until late 1933, when Smith, Kline and French began selling it as an inhaler under the brand name Benzedrine as a decongestant. Benzedrine sulfate was introduced 3 years later and was used to treat a wide variety of medical conditions, including narcolepsy, obesity, low blood pressure, low libido, and chronic pain, among others. During World War II, amphetamine and methamphetamine were used extensively by both the Allied and Axis forces for their stimulant and performance-enhancing effects. As the addictive properties of the drug became known, governments began to place strict controls on the sale of amphetamine. For example, during the early 1970s in the United States, amphetamine became a schedule II controlled substance under the Controlled Substances Act. In spite of strict government controls, amphetamine has been used legally or illicitly by people from a variety of backgrounds, including authors, musicians, mathematicians, and athletes. Amphetamine is illegally synthesized in clandestine labs and sold on the black market, primarily in European countries. Among European Union (EU) member states 11.9 million adults of ages have used amphetamine or methamphetamine at least once in their lives and 1.7 million have used either in the last year. During 2012, approximately 5.9 metric tons of illicit amphetamine were seized within EU member states; the "street price" of illicit amphetamine within the EU ranged from  per gram during the same period. Outside Europe, the illicit market for amphetamine is much smaller than the market for methamphetamine and MDMA. Legal status As a result of the United Nations 1971 Convention on Psychotropic Substances, amphetamine became a schedule II controlled substance, as defined in the treaty, in all 183 state parties. Consequently, it is heavily regulated in most countries. Some countries, such as South Korea and Japan, have banned substituted amphetamines even for medical use. In other nations, such as Brazil (class A3), Canada (schedule I drug), the Netherlands (List I drug), the United States (schedule II drug), Australia (schedule 8), Thailand (category 1 narcotic), and United Kingdom (class B drug), amphetamine is in a restrictive national drug schedule that allows for its use as a medical treatment. Pharmaceutical products Several currently marketed amphetamine formulations contain both enantiomers, including those marketed under the brand names Adderall, Adderall XR, Mydayis, Adzenys ER, , Dyanavel XR, Evekeo, and Evekeo ODT. Of those, Evekeo (including Evekeo ODT) is the only product containing only racemic amphetamine (as amphetamine sulfate), and is therefore the only one whose active moiety can be accurately referred to simply as "amphetamine". Dextroamphetamine, marketed under the brand names Dexedrine and Zenzedi, is the only enantiopure amphetamine product currently available. 
A prodrug form of dextroamphetamine, lisdexamfetamine, is also available and is marketed under the brand name Vyvanse. As it is a prodrug, lisdexamfetamine is structurally different from dextroamphetamine, and is inactive until it metabolizes into dextroamphetamine. The free base of racemic amphetamine was previously available as Benzedrine, Psychedrine, and Sympatedrine. Levoamphetamine was previously available as Cydril. Many current amphetamine pharmaceuticals are salts due to the comparatively high volatility of the free base. However, oral suspension and orally disintegrating tablet (ODT) dosage forms composed of the free base were introduced in 2015 and 2016, respectively. Some of the current brands and their generic equivalents are listed below.
Biology and health sciences
Drugs and pharmacology
null
4083803
https://en.wikipedia.org/wiki/Fieldnotes
Fieldnotes
Fieldnotes refer to qualitative notes recorded by scientists or researchers in the course of field research, during or after their observation of a specific organism or phenomenon they are studying. The notes are intended to be read as evidence that gives meaning and aids in the understanding of the phenomenon. Fieldnotes allow researchers to access the subject and record what they observe in an unobtrusive manner. One major disadvantage of taking fieldnotes is that they are recorded by an observer and are thus subject to (a) memory and (b) possibly the conscious or unconscious bias of the observer. It is best to record fieldnotes while making observations in the field or immediately after leaving the site to avoid forgetting important details. Some suggest immediately transcribing one's notes from a smaller pocket-sized notebook to something more legible in the evening or as soon as possible. Errors that occur from transcription often outweigh the errors which stem from illegible writing in the actual "field" notebook. Fieldnotes are particularly valued in descriptive sciences such as ethnography, biology, ecology, geology, and archaeology, each of which has long traditions in this area. Structure The structure of fieldnotes can vary depending on the field. Generally, there are two components of fieldnotes: descriptive information and reflective information. Descriptive information is factual data that is being recorded. Factual data includes time and date, the state of the physical setting, social environment, descriptions of the subjects being studied and their roles in the setting, and the impact that the observer may have had on the environment. Reflective information is the observer's reflections about the observation being conducted. These reflections are ideas, questions, concerns, and other related thoughts. Fieldnotes can also include sketches, diagrams, and other drawings. Visually capturing a phenomenon requires the observer to pay attention to every detail so as not to overlook anything. An author does not necessarily need to possess great artistic abilities to craft an exceptional note. In many cases, a rudimentary drawing or sketch can greatly assist in later data collection and synthesis. Increasingly, photographs may be included as part of a fieldnote when collected in a digital format. Other observers may further subdivide the structure of fieldnotes. Nigel Rapport said that fieldnotes in anthropology transition rapidly among three types. Inscription – where the writer records notes, impressions, and potentially important keywords. Transcription – where the author writes down dictated local text. Description – a reflective type of writing that synthesizes previous observations and analysis for a later situation in which a more coherent conclusion can be made of the notes. Value Fieldnotes are extremely valuable for scientists at each step of their training. In an article on fieldnotes, James Van Remsen Jr. discussed the tragic loss of information from birdwatchers in his study area who could have taken detailed fieldnotes but neglected to do so. This comment points to a larger issue regarding how often one should be taking fieldnotes. In this case, Remsen was upset because of the multitudes of "eyes and ears" that could have supplied potentially important information for his bird surveys but instead remained with the observers. Scientists like Remsen believe observations can be easily lost if notes are not taken.
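The two-component structure described earlier in this section (descriptive versus reflective information, plus optional sketches or photographs) maps naturally onto a structured digital record. The following is a minimal sketch of how such a record might be represented; all field names and example values are illustrative assumptions, not a standard schema from the literature.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class FieldNote:
    """One fieldnote entry, split into descriptive and reflective parts."""
    # Descriptive information: factual data recorded at the time of observation.
    timestamp: datetime                      # time and date of the observation
    physical_setting: str                    # state of the physical setting
    social_environment: str                  # who was present and their roles
    observer_impact: str                     # impact the observer may have had
    subjects: List[str] = field(default_factory=list)   # subjects being studied
    # Reflective information: the observer's own ideas, questions, and concerns.
    reflections: List[str] = field(default_factory=list)
    # Optional attachments such as sketches, diagrams, or photographs.
    attachments: List[str] = field(default_factory=list)  # file paths or labels

# Example usage (values invented for illustration):
note = FieldNote(
    timestamp=datetime(2024, 5, 12, 6, 30),
    physical_setting="Riparian woodland, overcast, light wind",
    social_environment="Two other observers present, minimal disturbance",
    observer_impact="Birds briefly flushed on arrival",
    subjects=["Cassin's sparrow"],
    reflections=["Is the same singing perch reused daily?"],
    attachments=["sketch_territory_map.png"],
)
```

A record like this keeps the factual observations separate from the observer's interpretations, which is the same separation the note-taking guidance in this article emphasizes.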
Nature phone apps and digital citizen science databases (like eBird) are changing the form and frequency of field data collection and may contribute to de-emphasizing the importance of hand-written notes. Apps may open up new possibilities for citizen science, but taking time to handwrite fieldnotes can help with the synthesis of details that one may not remember as well from data entry in an app. Writing in such a detailed manner may contribute to the personal growth of a scientist. Nigel Rapport, an anthropological field writer, said that fieldnotes are filled with the conventional realities of "two forms of life": local and academic. The lives are different and often contradictory but are often brought together through the efforts of a "field writer". The academic side refers to one's professional involvements, and fieldnotes take a certain official tone. The local side reflects more of the personal aspects of a writer and so the fieldnotes may also relate more to personal entries. In biology and ecology Fieldnotes taken in biology and other natural sciences differ slightly from those taken in the social sciences, as they may be limited to interactions regarding a focal species and/or subject. An example of an ornithological fieldnote was reported by Remsen (1977) regarding a sighting of a Cassin's sparrow, a relatively rare bird for the region where it was found. Grinnell method of note-taking An important teacher of efficient and accurate note-taking was Joseph Grinnell. The Grinnell technique has been regarded by many ornithologists as one of the best standardized methods for taking accurate fieldnotes. The technique has four main parts: A field-worthy notebook where one records direct observations as they are being observed. A larger, more substantial journal containing written entries on observations and information, transcribed from the smaller field notebook as soon as possible. Species accounts of the notes taken on specific species. A catalog to record the location and date of collected specimens. In social sciences Grounded theory Methods for analyzing and integrating fieldnotes into qualitative or quantitative research are continuing to develop. Grounded theory is a method for integrating data in qualitative research done primarily by social scientists. This may have implications for fieldnotes in the natural sciences as well. Considerations when recording fieldnotes The decisions about choosing what is recorded may have a significant impact on the ultimate findings. As such, creating and adhering to a systematic method for recording fieldnotes is an important consideration for qualitative research. American social scientist Robert K. Yin recommended the following considerations as best practices when recording qualitative fieldnotes. Create vivid images: Focus on recording vivid descriptions of actions that take place in the field, instead of recording an interpretation of them. This is particularly important early in the research process. Immediately trying to interpret events can lead to premature conclusions that can prevent later insight when more observation has occurred. Focusing on the actions taking place in the field, instead of trying to describe people or scenes, can be a useful tool to minimize personal stereotyping of the situation. The verbatim principle: Similar to the vivid images, the goal is to accurately record what is happening in the field, not a personal paraphrasing (and possible unconscious stereotyping) of those events.
Additionally, in social science research that involves studying culture, it is important to faithfully capture language and habits as a first step toward full understanding. Include drawings and sketches: These can quickly and accurately capture important aspects of field activity that are difficult to record in words and can be very helpful for recall when reviewing fieldnotes. Develop one's own transcribing language: While no one technique of transcribing (or "jotting") is perfect, most qualitative researchers develop a systematic approach to their own note-taking. Considering the multiple competing demands on attention (the simultaneous observation, processing, and recording of rich qualitative data in an unfamiliar environment), perfecting a system that can be automatically used and that will be interpretable later allows one to allocate one's full attention to observation. The ability to distinguish notes about events themselves from other notes to oneself is a key feature. Prior to engaging in qualitative research for the first time, practicing a transcribing format can improve the likelihood of successful observation. Convert fieldnotes to full notes daily: Prior to discussing one's observations with anyone else, one should set aside time each day to convert fieldnotes. At the very least, any unclear abbreviations, illegible words, or unfinished thoughts that would be uninterpretable later should be completed. In addition, the opportunity to collect one's thoughts and reflect on that day's events can lead to recalling additional details, uncovering emerging themes, leading to new understanding, and helping plan for future observations. This is also a good time to add the day's notes to one's total collection in an organized manner. Verify notes during collection: Converting fieldnotes as described above will likely lead the researcher to discover key points and themes that can then be checked while still present in the field. If conflicting themes are emerging, further data collection can be directed in a manner to help resolve the discrepancy. Obtain permission to record: While electronic devices and audiovisual recording can be useful tools in performing field research, there are some common pitfalls to avoid. Ensure that permission is obtained for the use of these devices beforehand and ensure that the devices to be used for recording have been previously tested and can be used inconspicuously. Keep a personal journal in addition to fieldnotes: As the researcher is the main instrument, insight into one's own reactions to and initial interpretations of events can help the researcher identify any undesired personal biases that might have influenced the research. This is useful for reflexivity.
Physical sciences
Research methods
Basics and measurement
566405
https://en.wikipedia.org/wiki/Saltwater%20crocodile
Saltwater crocodile
The saltwater crocodile (Crocodylus porosus) is a crocodilian native to saltwater habitats, brackish wetlands and freshwater rivers from India's east coast across Southeast Asia and the Sundaland to northern Australia and Micronesia. It has been listed as Least Concern on the IUCN Red List since 1996. It was hunted for its skin throughout its range up to the 1970s, and is threatened by illegal killing and habitat loss. It is regarded as dangerous to humans. The saltwater crocodile is the largest living reptile. Males can grow up to a weight of and a length of , rarely exceeding . Females are much smaller and rarely surpass . It is also called the estuarine crocodile, Indo-Pacific crocodile, marine crocodile, sea crocodile, and, informally, the saltie. A large and opportunistic hypercarnivorous apex predator, it ambushes most of its prey and then drowns or swallows it whole. It will prey on almost any animal that enters its territory, including other predators such as sharks, varieties of freshwater and saltwater fish including pelagic species, invertebrates such as crustaceans, various amphibians, other reptiles, birds, and mammals. Taxonomy and evolution Crocodilus porosus was the scientific name proposed by Johann Gottlob Theaenus Schneider, who described a zoological specimen in 1801. In the 19th and 20th centuries, several saltwater crocodile specimens were described with the following names: Crocodilus biporcatus proposed by Georges Cuvier in 1807 was based on 23 saltwater crocodile specimens from India, Java and Timor. Crocodilus biporcatus raninus proposed by Salomon Müller and Hermann Schlegel in 1844 was a crocodile from Borneo. Crocodylus porosus australis proposed by Paulus Edward Pieris Deraniyagala in 1953 was a specimen from Australia. Crocodylus pethericki proposed by Richard Wells and C. Ross Wellington in 1985 was a large-bodied, relatively large-headed and short-tailed crocodile specimen collected in 1979 in the Finnis River, Northern Territory. This purported species was later considered to be a misinterpretation of the physiological changes that very large male crocodiles undergo. However, Wells and Wellington's assertion that the Australian saltwater crocodiles may be distinctive enough from northern Asian saltwater crocodiles to warrant subspecies status, as could raninus from other Asian saltwater crocodiles, has been considered possibly valid. Currently, the saltwater crocodile is considered a monotypic species. Evolution Fossil remains of a saltwater crocodile excavated in northern Queensland were dated to the Pliocene. The saltwater crocodile's closest extant (living) relatives are the Siamese crocodile and the mugger crocodile. The genus Crocodylus was thought to have evolved in Australia and Asia. Results of a phylogenetic study support its likely origin in Africa and later radiation towards Southeast Asia and the Americas; it genetically diverged from its closest recent relative, the extinct Voay of Madagascar, near the boundary between the Oligocene and Miocene. Phylogeny Below is a cladogram based on a 2018 tip dating study by Lee & Yates simultaneously using morphological, molecular (DNA sequencing), and stratigraphic (fossil age) data, as revised in 2021 after a paleogenomics study using DNA extracted from the extinct Voay. Description The saltwater crocodile has a wide snout compared to most crocodiles. However, it has a longer snout than the mugger crocodile (C. palustris); its length is twice its width at the base.
A pair of ridges runs from the eyes along the centre of the snout. The scales are oval in shape and the scutes are either small compared to other species or commonly are entirely absent. In addition, an obvious gap is also present between the cervical and dorsal shields, and small, triangular scutes are present between the posterior edges of the large, transversely arranged scutes in the dorsal shield. The relative lack of scutes is considered an asset useful to distinguish saltwater crocodiles in captivity or in illicit leather trading, as well as in the few areas in the field where sub-adult or younger saltwater crocodiles may need to be distinguished from other crocodiles. It has fewer armour plates on its neck than other crocodilians. The adult saltwater crocodile's broad body contrasts with that of most other lean crocodiles, leading to early unverified assumptions the reptile was an alligator. Young saltwater crocodiles are pale yellow in colour with black stripes and spots on their bodies and tails. This colouration lasts for several years until the crocodiles mature into adults. The colour as an adult is much darker greenish-drab, with a few lighter tan or grey areas sometimes apparent. Several colour variations are known and some adults may retain fairly pale skin, whereas others may be so dark as to appear blackish. The ventral surface is white or yellow in colour in saltwater crocodiles of all ages. Stripes are present on the lower sides of their bodies, but do not extend onto their bellies. Their tails are grey with dark bands. Size The weight of a crocodile increases approximately cubically as length increases (see square–cube law). This explains why individuals at weigh more than twice as much as individuals at . In crocodiles, linear growth eventually decreases and they start getting bulkier at a certain point. Saltwater crocodiles are the largest extant riparian predators in the world. However, they start life fairly small. Newly hatched saltwater crocodiles measure about long and weigh an average of . These sizes and ages are almost identical to those at average sexual maturity in Nile crocodiles, despite the fact that average adult male saltwater crocodiles are considerably larger than average adult male Nile crocodiles. The largest skull of a saltwater crocodile that could be scientifically verified was of a specimen in the Muséum national d'Histoire naturelle, collected in Cambodia. Its skull was long and wide near its base, with long mandibles; its length is not known, but based on skull-to-length ratios of large saltwater crocodiles its length was presumably in the range, though it could have had an exceptionally large skull or may not have the same skull-to-total-length ratios as other large saltwater crocodiles. If detached from the body, the head of a large male crocodile can weigh over , including the large muscles and tendons at the base of the skull that lend the crocodile its massive biting strength. The largest tooth measured in length. Other crocodilians like the gharial (Gavialis gangeticus) and the false gharial (Tomistoma schlegelii) have a proportionately longer skull, but both their skulls and their bodies are less massive than in the saltwater crocodile. Male size An adult male saltwater crocodile, from young adults to older individuals, typically ranges in length and weighs . On average, adult males range in length and weigh . 
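The cubic mass–length scaling mentioned under Size above can be illustrated with a small worked example; the two lengths used here are assumptions chosen only to show the arithmetic, not measurements reported in this article:

\[
\frac{m_2}{m_1} \approx \left(\frac{L_2}{L_1}\right)^{3},
\qquad
\left(\frac{5\ \text{m}}{4\ \text{m}}\right)^{3} \approx 1.95
\]

so an increase of only 25% in total length corresponds to very nearly a doubling of body mass, which is why modest differences in length between large males translate into large differences in weight.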
However, average size largely depends on the location, habitat, and human interactions, and thus varies from one study to another. In a study conducted in 1993 (published in 1998), eleven saltwater crocodiles were found to have measured and weighed between . Very large, aged males can exceed in length and presumably weigh up to . The largest confirmed saltwater crocodile on record drowned in a fishing net in Papua New Guinea, in 1979. Its dried skin plus head measured in length and it was estimated to have been when accounting for shrinkage and a missing tail tip. Projected from their skull lengths, multiple specimens from Singapore were estimated to belong in life to male crocodiles measuring more than . A large Vietnamese saltwater crocodile was reliably estimated, based on its skull after its death, at . However, according to evidence in the form of skulls coming from some of the largest crocodiles ever shot, the maximum possible size attained by the largest members of this species is considered to be . A governmental study from Australia accepts that the very largest members of the species are likely to measure in length and weigh . Furthermore, a research paper on the morphology and physiology of crocodilians by the same organisation estimates that saltwater crocodiles reaching sizes of would weigh around . Due to the extreme size and highly aggressive nature of the species, weight in larger specimens is frequently poorly documented. A long individual named "Sweetheart" was found to have weighed . Another large crocodile named "Gomek", measuring in length, weighed around . In 1992, a notorious man-eater named "Bujang Senang" was killed in Sarawak, Malaysia. It measured in length and weighed more than . A saltwater–Siamese hybrid named "Yai" (, meaning big; born 10 June 1972) at the Samutprakarn Crocodile Farm and Zoo, Thailand, was claimed to be the largest crocodile ever held in captivity. It measured in length and weighed approximately . In 1962, a large male saltwater crocodile was shot in Adelaide River, Northern Territory. It was recorded to be long and weighed . A large male in the Philippines, named Lolong, was one of the largest saltwater crocodiles ever caught and placed in captivity. He was long and weighed . Following his death in 2013, the largest living crocodile in captivity was "Cassius", who was kept at Marineland Crocodile Park, a zoo located at Green Island, Queensland, Australia. He measured 5.48 m (18 ft 0 in) in length and weighed approximately 1,300 kg (2,870 lb) before his death in November 2024. Female size Adult females typically measure from in total length and weigh . Large mature females reach and weigh up to . The largest female on record measured about in total length. Females are thus similar in size to other species of large crocodiles and average slightly smaller than females of some other species, like the Nile crocodile. The saltwater crocodile has the greatest size sexual dimorphism, by far, of any extant crocodilian, as males average about 4 to 5 times as massive as adult females and can sometimes measure twice their total length. The reason for the male-skewed dimorphism in this species is not definitively known, but it might be correlated with sex-specific territoriality and the need for adult male saltwater crocodiles to monopolise large stretches of habitat. 
Due to the extreme sexual dimorphism of the species, as contrasted with the more modest dimorphism of other species, the average length of the species is only slightly greater than that of some other extant crocodilians, at . Reported sizes Distribution and habitat The saltwater crocodile inhabits coastal brackish mangrove swamps, river deltas and freshwater rivers from India's east coast, Sri Lanka and Bangladesh to Myanmar, Malaysia, Brunei, Indonesia, Philippines, Timor Leste, Palau, Solomon Islands, Singapore, Papua New Guinea, Vanuatu and Australia's north coast. The southernmost population in India lives in Odisha's Bhitarkanika Wildlife Sanctuary; in northern Odisha, it has not been recorded since the 1930s. It occurs along the Andaman and Nicobar Islands coasts and in the Sundarbans. In Sri Lanka, it occurs foremost in western and southern parts of the country. In Myanmar, it inhabits the Ayeyarwady Delta. In southern Thailand, it was recorded in Phang Nga Province. In Singapore, it inhabits the Sungei Buloh Wetland Reserve and marshes near Kranji and Mandai. It is locally extinct in Cambodia, China, Seychelles, Thailand and Vietnam. In China, it may have once inhabited coastal areas from Fujian province in the north to the border of Vietnam.
Biology and health sciences
Crocodilia
Animals
566692
https://en.wikipedia.org/wiki/Bait%20%28luring%20substance%29
Bait (luring substance)
Bait is any appetizing substance (e.g. food) used to attract prey when hunting or fishing, most commonly in the form of trapping (e.g. mousetrap and bird trap), ambushing (e.g. from a hunting blind) and angling. Baiting is a ubiquitous practice in both recreational (especially angling) and commercial fishing, but the use of live bait can be deemed illegal under certain fisheries laws and local jurisdictions. For hunting, however, baiting can often be controversial as it violates the principles of fair chase, although it is still a commonly accepted practice in varmint hunting, culling and pest control. Uses Fishing Baiting is ubiquitously practised to catch fish. Traditionally, nightcrawlers, small baitfish, insect adults and larvae have been used as standard hookbait, and offal is commonly used as groundbait (a.k.a. chumming) in blue water fishing. Modern fishermen have also begun using more plastic bait and lures, and more recently, electronic bionic baits, to attract the more territorial and aggressive predatory fishes. Because of the risk of transmitting Myxobolus cerebralis (whirling disease), trout and salmon should not be used as bait. There are various types of natural saltwater bait. Studies show that natural baits like croaker and shrimp are better recognized and therefore more readily accepted by fish. The best baits for red drum (redfish) are pogy (menhaden) and, in the fall, specks like croaker. Hunting Baiting is a common practice in leopard hunting on a safari. A dead, smaller-sized antelope is usually placed high in a tree to lure the otherwise overcautious leopard. The hunter either watches the bait from a point within firing range or stalks the animal if it has come for the bait during the night. In areas where bears are hunted, bait can be found for sale at gas stations and hunting supply stores. Often consisting of some sweet substance, such as frosting or molasses, combined with an aromatic like rotten meat or fish, the bait is spread and the hunter waits under cover for his prey. Cecil the Lion, who was infamously poached by an American trophy bowhunter in 2015, was baited out of the protected area into an ambush at the margin of private land by a deliberately planted elephant carcass. Pest control Poisoned bait is a common method for controlling rats, mice, birds, slugs, snails, ants, cockroaches, and other pests. The basic granules, or other formulation, contains a food attractant for the target species and a suitable poison. For ants, a slow-acting toxin is needed so that the workers have time to carry the substance back to the colony, and for flies, a quick-acting substance is needed to prevent further egg-laying and nuisance. Baits for slugs and snails often contain the molluscicide metaldehyde, which is dangerous to children and household pets. Legal usage In Australia Baiting in Australia refers to specific campaigns to control foxes, wild dogs and dingos by poisoning in areas where they are a problem. These programs are held in conjunction with the local Department of Primary Industries, Rural Lands Protection Board (RLPB) and National Parks and Wildlife Service (NPWS) to facilitate a neighbourhood baiting campaign. Australian hunters often use carcasses when hunting feral pigs. Shot feral animals are often left in the field, and the decaying smell attracts more pigs to scavenge over the subsequent days.
Technology
Hunting and fishing
null
566959
https://en.wikipedia.org/wiki/Standard%20electrode%20potential
Standard electrode potential
In electrochemistry, the standard electrode potential is a measure of the reducing power of any element or compound. The IUPAC "Gold Book" defines it as "the value of the standard emf (electromotive force) of a cell in which molecular hydrogen under standard pressure is oxidized to solvated protons at the left-hand electrode". Background The basis for an electrochemical cell, such as the galvanic cell, is always a redox reaction which can be broken down into two half-reactions: oxidation at the anode (loss of electrons) and reduction at the cathode (gain of electrons). Electricity is produced due to the difference of electric potential between the individual potentials of the two metal electrodes with respect to the electrolyte. Although the overall potential of a cell can be measured, there is no simple way to accurately measure the electrode/electrolyte potentials in isolation. The electric potential also varies with temperature, concentration and pressure. Since the oxidation potential of a half-reaction is the negative of the reduction potential in a redox reaction, it is sufficient to calculate either one of the potentials. Therefore, the standard electrode potential is commonly written as the standard reduction potential. Calculation The galvanic cell potential results from the voltage difference of a pair of electrodes. It is not possible to measure an absolute value for each electrode separately. However, the potential of a reference electrode, the standard hydrogen electrode (SHE), is defined to be 0.00 V. An electrode with an unknown electrode potential can be paired with either the standard hydrogen electrode, or another electrode whose potential has already been measured, to determine its "absolute" potential. Since electrode potentials are conventionally defined as reduction potentials, the sign of the potential for the metal electrode being oxidized must be reversed when calculating the overall cell potential. The electrode potentials are independent of the number of electrons transferred—they are expressed in volts, which measure energy per electron transferred—and so the two electrode potentials can be simply combined to give the overall cell potential even if different numbers of electrons are involved in the two electrode reactions. For practical measurements, the electrode in question is connected to the positive terminal of the electrometer, while the standard hydrogen electrode is connected to the negative terminal. Reversible electrode A reversible electrode is an electrode that owes its potential to changes of a reversible nature. A first condition to be fulfilled is that the system is close to chemical equilibrium. A second condition is that the system is subjected only to very small perturbations, spread over a sufficient period of time, so that the conditions of chemical equilibrium nearly always prevail. In theory, it is very difficult to achieve reversible conditions experimentally, because any perturbation imposed on a system near equilibrium in a finite time forces it out of equilibrium. However, if the perturbations exerted on the system are sufficiently small and applied slowly, one can consider an electrode to be reversible. By nature, electrode reversibility depends on the experimental conditions and the way the electrode is operated. For example, electrodes used in electroplating are operated with a high over-potential to force the reduction of a given metal cation, depositing the metal onto a metallic surface to be protected. 
Such a system is far from equilibrium and is continuously subjected to large and constant changes over a short period of time. Standard reduction potential table The larger the value of the standard reduction potential, the more easily the species is reduced (gains electrons); in other words, it is a better oxidizing agent. For example, F2 has a standard reduction potential of +2.87 V and Li+ has −3.05 V: F2(g) + 2 e− → 2 F−, E° = +2.87 V; Li+ + e− → Li(s), E° = −3.05 V. The highly positive standard reduction potential of F2 means it is reduced easily and is therefore a good oxidizing agent. In contrast, the greatly negative standard reduction potential of Li+ indicates that it is not easily reduced. Instead, Li(s) would rather undergo oxidation (hence it is a good reducing agent). Zn2+ has a standard reduction potential of −0.76 V; the zinc couple can thus be oxidized by any other electrode whose standard reduction potential is greater than −0.76 V (e.g., H+ (0 V), Cu2+ (0.34 V), F2 (2.87 V)) and can be reduced by any electrode with a standard reduction potential less than −0.76 V (e.g. H2 (−2.23 V), Na+ (−2.71 V), Li+ (−3.05 V)). In a galvanic cell, where a spontaneous redox reaction drives the cell to produce an electric potential, the Gibbs free energy change must be negative, in accordance with the following equation: ΔG°cell = −nFE°cell (unit: joule = coulomb × volt), where n is the number of moles of electrons per mole of products and F is the Faraday constant, about 96,485 C/mol. As such, the following rules apply: If E°cell > 0, then the process is spontaneous (galvanic cell): ΔG° < 0, and energy is liberated. If E°cell < 0, then the process is non-spontaneous (electrolytic cell): ΔG° > 0, and energy is consumed. Thus, in order to have a spontaneous reaction (ΔG° < 0), E°cell must be positive, where E°cell = E°cathode − E°anode, where E°cathode is the standard potential at the cathode (also called the standard cathodic potential or standard reduction potential) and E°anode is the standard potential at the anode (also called the standard anodic potential or standard oxidation potential), as given in the table of standard electrode potentials.
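The sign conventions above can be checked with a short calculation. The following Python sketch is illustrative only and not part of the source text; it assumes the Zn2+/Zn (−0.76 V) and Cu2+/Cu (+0.34 V) reduction potentials quoted above, a two-electron transfer, and F ≈ 96,485 C/mol, and simply evaluates E°cell = E°cathode − E°anode and ΔG° = −nFE°cell for a Daniell (Zn/Cu) cell.

# Minimal sketch (not from the source article): standard cell potential and
# Gibbs free energy for a Daniell cell, using the reduction potentials above.
FARADAY = 96485.0  # C per mole of electrons (assumed value)

def cell_potential(e_cathode, e_anode):
    # E°cell = E°cathode − E°anode, both given as standard reduction potentials
    return e_cathode - e_anode

def gibbs_free_energy(n_electrons, e_cell):
    # ΔG° = −n·F·E°cell, in joules per mole of reaction
    return -n_electrons * FARADAY * e_cell

e_zn = -0.76  # Zn2+ + 2 e− → Zn(s)
e_cu = +0.34  # Cu2+ + 2 e− → Cu(s)
e_cell = cell_potential(e_cathode=e_cu, e_anode=e_zn)  # +1.10 V
dG = gibbs_free_energy(n_electrons=2, e_cell=e_cell)   # about −212 kJ/mol
print(f"E°cell = {e_cell:.2f} V, ΔG° = {dG/1000:.0f} kJ/mol")

A positive E°cell of +1.10 V yields a ΔG° of roughly −212 kJ/mol, consistent with the rule above that a positive cell potential corresponds to a spontaneous (galvanic) reaction.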
Physical sciences
Electrochemistry
Chemistry
567471
https://en.wikipedia.org/wiki/Sea%20otter
Sea otter
The sea otter (Enhydra lutris) is a marine mammal native to the coasts of the northern and eastern North Pacific Ocean. Adult sea otters typically weigh between , making them the heaviest members of the weasel family, but among the smallest marine mammals. Unlike most marine mammals, the sea otter's primary form of insulation is an exceptionally thick coat of fur, the densest in the animal kingdom. Although it can walk on land, the sea otter is capable of living exclusively in the ocean. The sea otter inhabits nearshore environments, where it dives to the sea floor to forage. It preys mostly on marine invertebrates such as sea urchins, various mollusks and crustaceans, and some species of fish. Its foraging and eating habits are noteworthy in several respects. Its use of rocks to dislodge prey and to open shells makes it one of the few mammal species to use tools. In most of its range, it is a keystone species, controlling sea urchin populations which would otherwise inflict extensive damage to kelp forest ecosystems. Its diet includes prey species that are also valued by humans as food, leading to conflicts between sea otters and fisheries. Sea otters, whose numbers were once estimated at 150,000–300,000, were hunted extensively for their fur between 1741 and 1911, and the world population fell to 1,000–2,000 individuals living in a fraction of their historic range. A subsequent international ban on hunting, sea otter conservation efforts, and reintroduction programs into previously populated areas have contributed to numbers rebounding, and the species occupies about two-thirds of its former range. The recovery of the sea otter is considered an important success in marine conservation, although populations in the Aleutian Islands and California have recently declined or have plateaued at depressed levels. For these reasons, the sea otter remains classified as an endangered species. Evolution The sea otter is the heaviest (the giant otter is longer, but significantly slimmer) member of the family Mustelidae, a diverse group that includes the 13 otter species and terrestrial animals such as weasels, badgers, and minks. It is unique among the mustelids in not making dens or burrows, in having no functional anal scent glands, and in being able to live its entire life without leaving the water. The only living member of the genus Enhydra, the sea otter is so different from other mustelid species that, as recently as 1982, some scientists believed it was more closely related to the earless seals. Genetic analysis indicates the sea otter and its closest extant relatives, which include the African speckle-throated otter, Eurasian otter, African clawless otter and Asian small-clawed otter, shared an ancestor approximately 5 million years ago. Fossil evidence indicates the Enhydra lineage became isolated in the North Pacific approximately 2 million years ago, giving rise to the now-extinct Enhydra macrodonta and the modern sea otter, Enhydra lutris. One related species has been described, Enhydra reevei, from the Pleistocene of East Anglia. The modern sea otter evolved initially in northern Hokkaidō and Russia, and then spread east to the Aleutian Islands, mainland Alaska, and down the North American coast. In comparison to cetaceans, sirenians, and pinnipeds, which entered the water approximately 50, 40, and 20 million years ago, respectively, the sea otter is a relative newcomer to a marine existence. 
In some respects, though, the sea otter is more fully adapted to water than pinnipeds, which must haul out on land or ice to give birth. The full genome of the northern sea otter (Enhydra lutris kenyoni) was sequenced in 2017 and may allow for examination of the sea otter's evolutionary divergence from terrestrial mustelids. Following their divergence from their most recent common ancestor five million years ago, sea otters have developed traits dependent on polygenic selection, or the evolution of numerous traits to create hallmark features like thick and oily fur and large bones, compared to their freshwater sister species. Sea otters require these traits to survive the cold waters of the northern Pacific Ocean, in which they spend their entire lives despite occasionally coming out of the water as pups. Sea otters have the thickest fur of any animal (~1,000,000 hairs per square inch), as they do not have a blubber layer, while their oil glands help mat down their fur and keep it from holding air. Thick bones also prove crucial in increasing buoyancy, as sea otters spend long hours floating atop the ocean. In a study, southern and northern sea otter populations were compared against the African clawless otter, and it was determined that aquatic traits like loss of smell and hair thickness evolved independently, evidence of polygenic selection acting on many traits. This study was only possible after the sequencing of sea otter nuclear genomes and the use of phylogeny to find a close relative with which to compare genomes. Previously, it was suspected that sea otters came from the same evolutionary branch as earless seals, such as harbor and monk seals. Sea otters have experienced numerous population bottlenecks throughout their history, with significant numbers being wiped out 9,000–10,000 generations ago and 300–700 generations ago, long before the fur trade. These previous genetic bottlenecks are responsible for already low genetic diversity amongst species members, making the secondary bottleneck caused by the fur trade more significant. These primary bottlenecks were most likely caused by disease, a common cause of genetic bottlenecks. Estimates suggest these bottlenecks left around ten to forty animals for about eight to forty-four years. This led to genetic drift, as the populations of northern and southern sea otters were cut off from one another by thousands of miles, leading to significant genomic differences. However, the modern population bottleneck caused by the fur trade of the eighteenth to early twentieth centuries presents the most significant concern to scientists and conservationists attempting to recover population numbers and genetic diversity. Each bottleneck has lowered genomic diversity and thus increased the chance of deleterious genetic drift. Taxonomy The first scientific description of the sea otter is contained in the field notes of Georg Steller from 1751, and the species was described by Carl Linnaeus in his landmark 1758 10th edition of Systema Naturae. Originally named Lutra marina, it underwent numerous name changes before being accepted as Enhydra lutris in 1922. The generic name Enhydra derives from the Ancient Greek en/εν "in" and hydra/ύδρα "water", meaning "in the water", and the Latin word lutris, meaning "otter". It was formerly sometimes referred to as the "sea beaver". Subspecies Three subspecies of the sea otter are recognized with distinct geographical distributions. 
Enhydra lutris lutris (nominate), the Asian sea otter, ranges across Russia's Kuril Islands northeast of Japan, and the Commander Islands in the northwestern Pacific Ocean. In the eastern Pacific Ocean, E. l. kenyoni, the northern sea otter, is found from Alaska's Aleutian Islands to Oregon, and E. l. nereis, the southern sea otter, is native to central and southern California. The Asian sea otter is the largest subspecies and has a slightly wider skull and shorter nasal bones than both other subspecies. Northern sea otters possess longer mandibles (lower jaws), while southern sea otters have longer rostrums and smaller teeth. Description The sea otter is one of the smallest marine mammal species, but it is the heaviest mustelid. Male sea otters usually weigh and are in length, though specimens up to have been recorded. Females are smaller, weighing and measuring in length. The average weight of adult sea otters in more densely populated areas, at in males and in females, was considerably lighter than the average weight of otters in sparser populations, at in males and in females. Presumably, otters in less populous areas are more able to monopolize food sources. For its size, the male otter's baculum is very large, massive and bent upwards, measuring in length and at the base. Unlike most other marine mammals, the sea otter has no blubber and relies on its exceptionally thick fur to keep warm. With up to , its fur is the densest of any animal. The fur consists of long, waterproof guard hairs and short underfur; the guard hairs keep the dense underfur layer dry. There is an air compartment between the thick fur and the skin where air is trapped and heated by the body. Cold water is kept completely away from the skin and heat loss is limited. However, a potential disadvantage of this form of insulation is compression of the air layer as the otter dives, thereby reducing the insulating quality of fur at depth when the animal forages. The fur is thick year-round, as it is shed and replaced gradually rather than in a distinct molting season. As the ability of the guard hairs to repel water depends on utmost cleanliness, the sea otter has the ability to reach and groom the fur on any part of its body, taking advantage of its loose skin and an unusually supple skeleton. The coloration of the pelage is usually deep brown with silver-gray speckles, but it can range from yellowish or grayish brown to almost black. In adults, the head, throat, and chest are lighter in color than the rest of the body. The sea otter displays numerous adaptations to its marine environment. The nostrils and small ears can close. The hind feet, which provide most of its propulsion in swimming, are long, broadly flattened, and fully webbed. The fifth digit on each hind foot is longest, facilitating swimming while on its back, but making walking difficult. The tail is fairly short, thick, slightly flattened, and muscular. The front paws are short with retractable claws, with tough pads on the palms that enable gripping slippery prey. The bones show osteosclerosis, increasing their density to reduce buoyancy. The sea otter presents an insight into the evolutionary process of the mammalian invasion of the aquatic environment, which has occurred numerous times over the course of mammalian evolution. Having only returned to the sea about 3 million years ago, sea otters represent a snapshot at the earliest point of the transition from fur to blubber. 
In sea otters, fur is still advantageous, given their small nature and division of lifetime between the aquatic and terrestrial environments. However, as sea otters evolve and adapt to spending more and more of their lifetimes in the sea, the convergent evolution of blubber suggests that the reliance on fur for insulation would be replaced by a dependency on blubber. This is particularly true due to the diving nature of the sea otter; as dives become lengthier and deeper, the air layer's ability to retain heat or buoyancy decreases, while blubber remains efficient at both of those functions. Blubber can also additionally serve as an energy source for deep dives, which would most likely prove advantageous over fur in the evolutionary future of sea otters. The sea otter propels itself underwater by moving the rear end of its body, including its tail and hind feet, up and down, and is capable of speeds of up to . When underwater, its body is long and streamlined, with the short forelimbs pressed closely against the chest. When at the surface, it usually floats on its back and moves by sculling its feet and tail from side to side. At rest, all four limbs can be folded onto the torso to conserve heat, whereas on particularly hot days, the hind feet may be held underwater for cooling. The sea otter's body is highly buoyant because of its large lung capacity – about 2.5 times greater than that of similar-sized land mammals – and the air trapped in its fur. The sea otter walks with a clumsy, rolling gait on land, and can run in a bounding motion. Long, highly sensitive whiskers and front paws help the sea otter find prey by touch when waters are dark or murky. Researchers have noted when they approach in plain view, sea otters react more rapidly when the wind is blowing towards the animals, indicating the sense of smell is more important than sight as a warning sense. Other observations indicate the sea otter's sense of sight is useful above and below the water, although not as good as that of seals. Its hearing is neither particularly acute nor poor. An adult's 32 teeth, particularly the molars, are flattened and rounded for crushing rather than cutting food. Seals and sea otters are the only carnivores with two pairs of lower incisor teeth rather than three; the adult dental formula is . The teeth and bones are sometimes stained purple as a result of ingesting sea urchins. The sea otter has a metabolic rate two or three times that of comparatively sized terrestrial mammals. It must eat an estimated 25 to 38% of its own body weight in food each day to burn the calories necessary to counteract the loss of heat due to the cold water environment. Its digestive efficiency is estimated at 80 to 85%, and food is digested and passed in as little as three hours. Most of its need for water is met through food, although, in contrast to most other marine mammals, it also drinks seawater. Its relatively large kidneys enable it to derive fresh water from sea water and excrete concentrated urine. Behavior The sea otter is diurnal. It has a period of foraging and eating in the morning, starting about an hour before sunrise, then rests or sleeps in mid-day. Foraging resumes for a few hours in the afternoon and subsides before sunset, and a third foraging period may occur around midnight. Females with pups appear to be more inclined to feed at night. Observations of the amount of time a sea otter must spend each day foraging range from 24 to 60%, apparently depending on the availability of food in the area. 
Sea otters spend much of their time grooming, which consists of cleaning the fur, untangling knots, removing loose fur, rubbing the fur to squeeze out water and introduce air, and blowing air into the fur. To casual observers, it appears as if the animals are scratching, but they are not known to have lice or other parasites in the fur. When eating, sea otters roll in the water frequently, apparently to wash food scraps from their fur. Foraging The sea otter hunts in short dives, often to the sea floor. Although it can hold its breath for up to five minutes, its dives typically last about one minute and not more than four minutes. It is the only marine animal capable of lifting and turning over rocks, which it often does with its front paws when searching for prey. The sea otter may also pluck snails and other organisms from kelp and dig deep into underwater mud for clams. It is the only marine mammal that catches fish with its forepaws rather than with its teeth. Under each foreleg, the sea otter has a loose pouch of skin that extends across the chest. In this pouch (preferentially the left one), the animal stores collected food to bring to the surface. This pouch also holds a rock, unique to the otter, that is used to break open shellfish and clams. At the surface, the sea otter eats while floating on its back, using its forepaws to tear food apart and bring it to its mouth. It can chew and swallow small mussels with their shells, whereas large mussel shells may be twisted apart. It uses its lower incisor teeth to access the meat in shellfish. To eat large sea urchins, which are mostly covered with spines, the sea otter bites through the underside where the spines are shortest, and licks the soft contents out of the urchin's shell. The sea otter's use of rocks when hunting and feeding makes it one of the few mammal species to use tools. To open hard shells, it may pound its prey with both paws against a rock on its chest. To pry an abalone off its rock, it hammers the abalone shell using a large stone, with observed rates of 45 blows in 15 seconds. Releasing an abalone, which can cling to rock with a force equal to 4,000 times its own body weight, requires multiple dives. Social structure Although each adult and independent juvenile forages alone, sea otters tend to rest together in single-sex groups called rafts. A raft typically contains 10 to 100 animals, with male rafts being larger than female ones. The largest raft ever seen contained over 2000 sea otters. To keep themselves from drifting out to sea when resting and eating, sea otters may wrap themselves in kelp. A male sea otter is most likely to mate if he maintains a breeding territory in an area that is also favored by females. As autumn is the peak breeding season in most areas, males typically defend their territory only from spring to autumn. During this time, males patrol the boundaries of their territories to exclude other males, although actual fighting is rare. Adult females move freely between male territories, where they outnumber adult males by an average of five to one. Males that do not have territories tend to congregate in large, male-only groups, and swim through female areas when searching for a mate. The species exhibits a variety of vocal behaviors. The cry of a pup is often compared to that of a gull. Females coo when they are apparently content; males may grunt instead. Distressed or frightened adults may whistle, hiss, or in extreme circumstances, scream. 
Although sea otters can be playful and sociable, they are not considered to be truly social animals. They spend much time alone, and each adult can meet its own hunting, grooming, and defense needs. Reproduction and life cycle Sea otters are polygynous: males have multiple female partners, typically those that inhabit their territory. If no territory is established, they seek out females in estrus. When a male sea otter finds a receptive female, the two engage in playful and sometimes aggressive behavior. They bond for the duration of estrus, or 3 days. The male holds the female's head or nose with his jaws during copulation. Visible scars are often present on females from this behavior. Births occur year-round, with peaks between May and June in northern populations and between January and March in southern populations. Gestation appears to vary from four to twelve months, as the species is capable of delayed implantation followed by four months of pregnancy. In California, sea otters usually breed every year, about twice as often as those in Alaska. Birth usually takes place in the water and typically produces a single pup weighing . Twins occur in 2% of births; however, usually only one pup survives. At birth, the eyes are open, ten teeth are visible, and the pup has a thick coat of baby fur. Mothers have been observed to lick and fluff a newborn for hours; after grooming, the pup's fur retains so much air, the pup floats like a cork and cannot dive. The fluffy baby fur is replaced by adult fur after about 13 weeks. Nursing lasts six to eight months in Californian populations and four to twelve months in Alaska, with the mother beginning to offer bits of prey at one to two months. The milk from a sea otter's two abdominal nipples is rich in fat and more similar to the milk of other marine mammals than to that of other mustelids. A pup, with guidance from its mother, practices swimming and diving for several weeks before it is able to reach the sea floor. Initially, the objects it retrieves are of little food value, such as brightly colored starfish and pebbles. Juveniles are typically independent at six to eight months, but a mother may be forced to abandon a pup if she cannot find enough food for it; at the other extreme, a pup may be nursed until it is almost adult size. Pup mortality is high, particularly during an individual's first winter – by one estimate, only 25% of pups survive their first year. Pups born to experienced mothers have the highest survival rates. Females perform all tasks of feeding and raising offspring, and have occasionally been observed caring for orphaned pups. Much has been written about the level of devotion of sea otter mothers for their pups – a mother gives her infant almost constant attention, cradling it on her chest away from the cold water and attentively grooming its fur. When foraging, she leaves her pup floating on the water, sometimes wrapped in kelp to keep it from floating away; if the pup is not sleeping, it cries loudly until she returns. Mothers have been known to carry their pups for days after the pups' deaths. Females become sexually mature at around three or four years of age and males at around five; however, males often do not successfully breed until a few years later. A captive male sired offspring at age 19. In the wild, sea otters live to a maximum age of 23 years, with lifespans ranging from 10 to 15 years for males and 15–20 years for females. Several captive individuals have lived past 20 years. 
The Seattle Aquarium was home to both the oldest recorded female, Etika, who lived to the age of 28, and the oldest recorded male, Adaa, who lived to be 22 years 8 months. Sea otters in the wild often develop worn teeth, which may account for their apparently shorter lifespans. Population and distribution Sea otters live in coastal waters deep, and usually stay within a kilometre ( mi) of the shore. They are found most often in areas with protection from the most severe ocean winds, such as rocky coastlines, thick kelp forests, and barrier reefs. Although they are most strongly associated with rocky substrates, sea otters can also live in areas where the sea floor consists primarily of mud, sand, or silt. Their northern range is limited by ice, as sea otters can survive amidst drift ice but not land-fast ice. Individuals generally occupy a home range a few kilometres long, and remain there year-round. The sea otter population is thought to have once been 150,000 to 300,000, stretching in an arc across the North Pacific from northern Japan to the central Baja California Peninsula in Mexico. The fur trade that began in the 1740s reduced the sea otter's numbers to an estimated 1,000 to 2,000 members in 13 colonies. Hunting records researched by historian Adele Ogden place the westernmost limit of the hunting grounds off the northern Japanese island of Hokkaido and the easternmost limit off Punta Morro Hermosa about south of Punta Eugenia, Baja California's westernmost headland in Mexico. In about two-thirds of its former range, the species is at varying levels of recovery, with high population densities in some areas and threatened populations in others. Sea otters currently have stable populations in parts of the Russian east coast, Alaska, British Columbia, Washington, and California, with reports of recolonizations in Mexico and Japan. Population estimates made between 2004 and 2007 give a worldwide total of approximately 107,000 sea otters. Japan Adele Ogden wrote in The California Sea Otter Trade that western sea otter were hunted "from Yezo northeastward past the Kuril Group and Kamchatka to the Aleutian Chain". "Yezo" refers to the island province of Hokkaido, in northern Japan, where the country's only confirmed population of western sea otter resides. Sightings have been documented in the waters of Cape Nosappu, Erimo, Hamanaka and Nemuro, among other locations in the region. Russia Currently, the most stable and secure part of the western sea otter's range is along the Russian Far East coastline, in the northwestern Pacific waters off of the country (namely Kamchatka and Sakhalin Island), occasionally being seen in and around the Sea of Okhotsk. Before the 19th century, around 20,000 to 25,000 sea otters lived near the Kuril Islands, with more near Kamchatka and the Commander Islands. After the years of the Great Hunt, the population in these areas, currently part of Russia, was only 750. By 2004, sea otters had repopulated all of their former habitat in these areas, with an estimated total population of about 27,000. Of these, about 19,000 are at the Kurils, 2,000 to 3,500 at Kamchatka and another 5,000 to 5,500 at the Commander Islands. Growth has slowed slightly, suggesting the numbers are reaching carrying capacity. British Columbia Along the North American coast south of Alaska, the sea otter's range is discontinuous. 
A remnant population survived off Vancouver Island into the 20th century, but it died out despite the 1911 international protection treaty, with the last sea otter taken near Kyuquot in 1929. From 1969 to 1972, 89 sea otters were flown or shipped from Alaska to the west coast of Vancouver Island. This population increased to over 5,600 in 2013 with an estimated annual growth rate of 7.2%, and their range on the island's west coast extended north to Cape Scott and across the Queen Charlotte Strait to the Broughton Archipelago and south to Clayoquot Sound and Tofino. In 1989, a separate colony was discovered in the central British Columbia coast. It is not known if this colony, which numbered about 300 animals in 2004, was founded by transplanted otters or was a remnant population that had gone undetected. By 2013, this population exceeded 1,100 individuals, was increasing at an estimated 12.6% annual rate, and its range included Aristazabal Island, and Milbanke Sound south to Calvert Island. In 2008, Canada determined the status of sea otters to be "special concern". United States Alaska Alaska is the central area of the sea otter's range. In 1973, the population in Alaska was estimated at between 100,000 and 125,000 animals. By 2006, though, the Alaska population had fallen to an estimated 73,000 animals. A massive decline in sea otter populations in the Aleutian Islands accounts for most of the change; the cause of this decline is not known, although orca predation is suspected. The sea otter population in Prince William Sound was also hit hard by the Exxon Valdez oil spill, which killed thousands of sea otters in 1989. Washington In 1969 and 1970, 59 sea otters were translocated from Amchitka Island to Washington, and released near La Push and Point Grenville. The translocated population is estimated to have declined to between 10 and 43 individuals before increasing, reaching 208 individuals in 1989. As of 2017, the population was estimated at over 2,000 individuals, and their range extends from Point Grenville in the south to Cape Flattery in the north and east to Pillar Point along the Strait of Juan de Fuca. In Washington, sea otters are found almost exclusively on the outer coasts. They can swim as close as six feet off shore along the Olympic coast. Reported sightings of sea otters in the San Juan Islands and Puget Sound almost always turn out to be North American river otters, which are commonly seen along the seashore. However, biologists have confirmed isolated sightings of sea otters in these areas since the mid-1990s. Oregon The last native sea otter in Oregon was probably shot and killed in 1906. In 1970 and 1971, a total of 95 sea otters were transplanted from Amchitka Island, Alaska to the Southern Oregon coast. However, this translocation effort failed and otters soon again disappeared from the state. In 2004, a male sea otter took up residence at Simpson Reef off of Cape Arago for six months. This male is thought to have originated from a colony in Washington, but disappeared after a coastal storm. On 18 February 2009, a male sea otter was spotted in Depoe Bay off the Oregon Coast. It could have traveled to the state from either California or Washington. California The historic population of California sea otters was estimated at 16,000 before the fur trade decimated the population, leading to their assumed extinction. Today's population of California sea otters are the descendants of a single colony of about 50 sea otters located near Bixby Creek Bridge in March 1938. 
Their principal range has gradually expanded and extends from Pigeon Point in San Mateo County to Santa Barbara County. Sea otters were once numerous in San Francisco Bay. Historical records revealed the Russian-American Company snuck Aleuts into San Francisco Bay multiple times, despite the Spanish capturing or shooting them while hunting sea otters in the estuaries of San Jose, San Mateo, San Bruno and around Angel Island. The founder of Fort Ross, Ivan Kuskov, finding otters scarce on his second voyage to Bodega Bay in 1812, sent a party of Aleuts to San Francisco Bay, where they met another Russian party and an American party, and caught 1,160 sea otters in three months. By 1817, sea otters in the area were practically eliminated and the Russians sought permission from the Spanish and the Mexican governments to hunt further and further south of San Francisco. In 1833, fur trappers George Nidever and George Yount canoed "along the Petaluma side of [the] Bay, and then proceeded to the San Joaquin River", returning with sea otter, beaver, and river otter pelts. Remnant sea otter populations may have survived in the bay until 1840, when the Rancho Punta de Quentin was granted to Captain John B. R. Cooper, a sea captain from Boston, by Mexican Governor Juan Bautista Alvarado along with a license to hunt sea otters, reportedly then prevalent at the mouth of Corte Madera Creek. In the late 1980s, the USFWS relocated about 140 southern sea otters to San Nicolas Island in southern California, in the hope of establishing a reserve population should the mainland be struck by an oil spill. To the surprise of biologists, the majority of the San Nicolas sea otters swam back to the mainland. Another group of twenty swam north to San Miguel Island, where they were captured and removed. By 2005, only 30 sea otters remained at San Nicolas, although they were slowly increasing as they thrived on the abundant prey around the island. The plan that authorized the translocation program had predicted the carrying capacity would be reached within five to 10 years. The spring 2016 count at San Nicolas Island was 104 sea otters, continuing a 5-year positive trend of over 12% per year. Sea otters were observed twice in Southern California in 2011, once near Laguna Beach and once at Zuniga Point Jetty, near San Diego. These are the first documented sightings of otters this far south in 30 years. When the USFWS implemented the translocation program, it also attempted, in 1986, to implement "zonal management" of the Californian population. To manage the competition between sea otters and fisheries, it declared an "otter-free zone" stretching from Point Conception to the Mexican border. In this zone, only San Nicolas Island was designated as sea otter habitat, and sea otters found elsewhere in the area were supposed to be captured and relocated. These plans were abandoned after many translocated otters died and also as it proved impractical to capture the hundreds of otters which ignored regulations and swam into the zone. However, after engaging in a period of public commentary in 2005, the Fish and Wildlife Service failed to release a formal decision on the issue. Then, in response to lawsuits filed by the Santa Barbara-based Environmental Defense Center and the Otter Project, on 19 December 2012 the USFWS declared that the "no otter zone" experiment was a failure, and will protect the otters re-colonizing the coast south of Point Conception as threatened species. 
Although abalone fisherman blamed the incursions of sea otters for the decline of abalone, commercial abalone fishing in southern California came to an end from overfishing in 1997, years before significant otter moved south of Point Conception. In addition, white abalone (Haliotis sorenseni), a species never overlapping with sea otter, had declined in numbers 99% by 1996, and became the first marine invertebrate to be federally listed as endangered. Although the southern sea otter's range has continuously expanded from the remnant population of about 50 individuals in Big Sur since protection in 1911, from 2007 to 2010, the otter population and its range contracted and since 2010 has made little progress. As of spring 2010, the northern boundary had moved from about Tunitas Creek to a point southeast of Pigeon Point, and the southern boundary has moved along the Gaviota Coast from approximately Coal Oil Point to Gaviota State Park. A toxin called microcystin, produced by a type of cyanobacteria (Microcystis), seems to be concentrated in the shellfish the otters eat, poisoning them. Cyanobacteria are found in stagnant water enriched with nitrogen and phosphorus from septic tank and agricultural fertilizer runoff, and may be flushed into the ocean when streamflows are high in the rainy season. A record number of sea otter carcasses were found on California's coastline in 2010, with increased shark attacks an increasing component of the mortality. Great white sharks do not consume relatively fat-poor sea otters but shark-bitten carcasses have increased from 8% in the 1980s to 15% in the 1990s and to 30% in 2010 and 2011. For southern sea otters to be considered for removal from threatened species listing, the U.S. Fish and Wildlife Service (USFWS) determined that the population should exceed 3,090 for three consecutive years. In response to recovery efforts, the population climbed steadily from the mid-20th century through the early 2000s, then remained relatively flat from 2005 to 2014 at just under 3,000. There was some contraction from the northern (now Pigeon Point) and southern limits of the sea otter's range during the end of this period, circumstantially related to an increase in lethal shark bites, raising concerns that the population had reached a plateau. However, the population increased markedly from 2015 to 2016, with the United States Geological Survey (USGS) California sea otter survey 3-year average reaching 3,272 in 2016, the first time it exceeded the threshold for delisting from the Endangered Species Act (ESA). If populations continued to grow and ESA delisting occurred, southern sea otters would still be fully protected by state regulations and the Marine Mammal Protection Act, which set higher thresholds for protection, at approximately 8,400 individuals. However, ESA delisting seems unlikely due to a precipitous population decline recorded in the spring 2017 USGS sea otter survey count, from the 2016 high of 3,615 individuals to 2,688, a loss of 25% of the California sea otter population. Mexico Historian Adele Ogden described sea otters as being particularly abundant in "Lower California", now the Baja California Peninsula, where "seven bays...were main centers". The southernmost limit was Punta Morro Hermoso about south of Punta Eugenia, in turn a headland at the southwestern end of Sebastián Vizcaíno Bay, on the west coast of the Baja Peninsula. Otter were also taken from San Benito Island, Cedros Island, and Isla Natividad in the Bay. 
By the early 1900s, Baja's sea otters were extirpated by hunting. In a 1997 survey, small numbers of sea otters, including pups, were reported by local fishermen, but scientists could not confirm these accounts. However, male and female otters have been confirmed by scientists off shores of the Baja Peninsula in a 2014 study, who hypothesize that otter dispersed there beginning in 2005. These sea otters may have dispersed from San Nicolas Island, which is away, as individuals have been recorded traversing distances of over . Genetic analysis of most of these animals were consistent with California, i.e. United States, otter origins, however one otter had a haplotype not previously reported, and could represent a remnant of the original native Mexican otter population. Ecology Diet High energetic requirements of sea otter metabolism require them to consume at least 20% of their body weight a day. Surface swimming and foraging are major factors in their high energy expenditure due to drag on the surface of the water when swimming and the thermal heat loss from the body during deep dives when foraging. Sea otter muscles are specially adapted to generate heat without physical activity. Sea otters are apex predators that consume over 100 prey species. In most of its range, the sea otter's diet consists almost exclusively of marine benthic invertebrates, including sea urchins (such as Strongylocentrotus franciscanus and S. purpuratus), sea cucumbers, fat innkeeper worms, crustaceans, a variety of mollusks such as chitons (such as Katharina tunicata), snails such as abalones and limpets (such as Diodora aspera), and bivalves such as clams, mussels (such as Mytilus edulis), and scallops (such as Crassadoma gigantea). Its prey ranges in size from tiny limpets and crabs to giant octopuses. Where prey such as sea urchins, clams, and abalone are present in a range of sizes, sea otters tend to select larger items over smaller ones of similar type. In California, they have been noted to ignore Pismo clams smaller than across. In a few northern areas, fish are also eaten. In studies performed at Amchitka Island in the 1960s, where the sea otter population was at carrying capacity, 50% of food found in sea otter stomachs was fish. The fish species were usually bottom-dwelling and sedentary or sluggish forms, such as Hemilepidotus hemilepidotus and family Tetraodontidae. However, south of Alaska on the North American coast, fish are a negligible or extremely minor part of the sea otter's diet. Contrary to popular depictions, sea otters rarely eat starfish, and any kelp that is consumed apparently passes through the sea otter's system undigested. Sea otters will also occasionally prey on seabirds. In California, the most commonly eaten species were western grebes, although cormorants, gulls, common loons, and surf scoters were also consumed. The individuals within a particular area often differ in their foraging methods and prey types, and tend to follow the same patterns as their mothers. The diet of local populations also changes over time, as sea otters can significantly deplete populations of highly preferred prey such as large sea urchins, and prey availability is also affected by other factors such as fishing by humans. Sea otters can thoroughly remove abalone from an area except for specimens in deep rock crevices, however, they never completely wipe out a prey species from an area. A 2007 Californian study demonstrated, in areas where food was relatively scarce, a wider variety of prey was consumed. 
Surprisingly, though, the diets of individuals were more specialized in these areas than in areas where food was plentiful. As a keystone species Sea otters are a classic example of a keystone species; their presence affects the ecosystem more profoundly than their size and numbers would suggest. They keep the population of certain benthic (sea floor) herbivores, particularly sea urchins, in check. Sea urchins graze on the lower stems of kelp, causing the kelp to drift away and die. Loss of the habitat and nutrients provided by kelp forests leads to profound cascade effects on the marine ecosystem. North Pacific areas that do not have sea otters often turn into urchin barrens, with abundant sea urchins and no kelp forest. Kelp forests are extremely productive ecosystems. Kelp forests sequester (absorb and capture) CO2 from the atmosphere through photosynthesis. Sea otters may help mitigate effects of climate change by their cascading trophic influence. Reintroduction of sea otters to British Columbia has led to a dramatic improvement in the health of coastal ecosystems, and similar changes have been observed as sea otter populations recovered in the Aleutian and Commander Islands and the Big Sur coast of California. However, some kelp forest ecosystems in California have also thrived without sea otters, with sea urchin populations apparently controlled by other factors. The role of sea otters in maintaining kelp forests has been observed to be more important in areas of open coast than in more protected bays and estuaries. Sea otters affect rocky ecosystems that are dominated by mussel beds by removing mussels from rocks. This allows space for competing species and increases species diversity. Predators The leading mammalian predators of this species is the orca. Sea lions and bald eagles may prey on pups. On land, young sea otters may face attack from bears and coyotes. In California, great white sharks are their primary predator, though this is the result of mistaking otters for seals and they do not consume otters after biting them. In Katmai National Park, grey wolves have been recorded to hunt and kill sea otters. Urban runoff transporting cat feces into the ocean brings Toxoplasma gondii, an obligate parasite of felids, which has killed sea otters. Parasitic infections of Sarcocystis neurona are also associated with human activity. According to the U.S. Geological Survey and the CDC, northern sea otters off Washington have been infected with the H1N1 flu virus and "may be a newly identified animal host of influenza viruses". Relationship with humans Fur trade Sea otters have the thickest fur of any mammal, which makes them a common target for many hunters. Archaeological evidence indicates that for thousands of years, indigenous peoples have hunted sea otters for food and fur. Large-scale hunting, part of the Maritime Fur Trade, which would eventually kill approximately one million sea otters, began in the 18th century when hunters and traders began to arrive from all over the world to meet foreign demand for otter pelts, which were one of the world's most valuable types of fur. In the early 18th century, Russians began to hunt sea otters in the Kuril Islands and sold them to the Chinese at Kyakhta. Russia was also exploring the far northern Pacific at this time, and sent Vitus Bering to map the Arctic coast and find routes from Siberia to North America. 
In 1741, on his second North Pacific voyage, Bering was shipwrecked off Bering Island in the Commander Islands, where he and many of his crew died. The surviving crew members, which included naturalist Georg Steller, discovered sea otters on the beaches of the island and spent the winter hunting sea otters and gambling with otter pelts. They returned to Siberia, having killed nearly 1,000 sea otters, and were able to command high prices for the pelts. Thus began what is sometimes called the "Great Hunt", which would continue for another hundred years. The Russians found the sea otter far more valuable than the sable skins that had driven and paid for most of their expansion across Siberia. If the sea otter pelts brought back by Bering's survivors had been sold at Kyakhta prices they would have paid for one tenth the cost of Bering's expedition. Russian fur-hunting expeditions soon depleted the sea otter populations in the Commander Islands, and by 1745, they began to move on to the Aleutian Islands. The Russians initially traded with the Aleuts inhabitants of these islands for otter pelts, but later enslaved the Aleuts, taking women and children hostage and torturing and killing Aleut men to force them to hunt. Many Aleuts were either murdered by the Russians or died from diseases the hunters had introduced. The Aleut population was reduced, by the Russians' own estimate, from 20,000 to 2,000. By the 1760s, the Russians had reached Alaska. In 1799, Tsar Paul I consolidated the rival fur-hunting companies into the Russian-American Company, granting it an imperial charter and protection, and a monopoly over trade rights and territorial acquisition. Under Aleksander I, the administration of the merchant-controlled company was transferred to the Imperial Navy, largely due to the alarming reports by naval officers of native abuse; in 1818, the indigenous peoples of Alaska were granted civil rights equivalent to a townsman status in the Russian Empire. Other nations joined in the hunt in the south. Along the coasts of what is now Mexico and California, Spanish explorers bought sea otter pelts from Native Americans and sold them in Asia. In 1778, British explorer Captain James Cook reached Vancouver Island and bought sea otter furs from the First Nations people. When Cook's ship later stopped at a Chinese port, the pelts rapidly sold at high prices, and were soon known as "soft gold". As word spread, people from all over Europe and North America began to arrive in the Pacific Northwest to trade for sea otter furs. Russian hunting expanded to the south, initiated by American ship captains, who subcontracted Russian supervisors and Aleut hunters in what are now Washington, Oregon, and California. Between 1803 and 1846, 72 American ships were involved in the otter hunt in California, harvesting an estimated 40,000 skins and tails, compared to only 13 ships of the Russian-American Company, which reported 5,696 otter skins taken between 1806 and 1846. In 1812, the Russians founded an agricultural settlement at what is now Fort Ross in northern California, as their southern headquarters. Eventually, sea otter populations became so depleted, commercial hunting was no longer viable. It had stopped in the Aleutian Islands, by 1808, as a conservation measure imposed by the Russian-American Company. Further restrictions were ordered by the company in 1834. 
When Russia sold Alaska to the United States in 1867, the Alaska population had recovered to over 100,000, but Americans resumed hunting and quickly extirpated the sea otter again. Prices rose as the species became rare. During the 1880s, a pelt brought $105 to $165 in the London market, but by 1903, a pelt could be worth as much as $1,125. In 1911, Russia, Japan, Great Britain (for Canada) and the United States signed the Treaty for the Preservation and Protection of Fur Seals, imposing a moratorium on the harvesting of sea otters. So few remained, perhaps only 1,000–2,000 individuals in the wild, that many believed the species would become extinct. Recovery and conservation During the 20th century, sea otter numbers rebounded in about two-thirds of their historic range, a recovery considered one of the greatest successes in marine conservation. However, the IUCN still lists the sea otter as an endangered species, and describes the significant threats to sea otters as oil pollution, predation by orcas, poaching, and conflicts with fisheries – sea otters can drown if entangled in fishing gear. The hunting of sea otters is no longer legal except for limited harvests by indigenous peoples in the United States. Poaching was a serious concern in the Russian Far East immediately after the collapse of the Soviet Union in 1991; however, it has declined significantly with stricter law enforcement and better economic conditions. The most significant threat to sea otters is oil spills, to which they are particularly vulnerable, since they rely on their fur to keep warm. When their fur is soaked with oil, it loses its ability to retain air, and the animals can quickly die from hypothermia. The liver, kidneys, and lungs of sea otters also become damaged after they inhale oil or ingest it when grooming. The Exxon Valdez oil spill of 24 March 1989 killed thousands of sea otters in Prince William Sound, and as of 2006, the lingering oil in the area continues to affect the population. A U.S. Fish and Wildlife Service spokesperson described the public sympathy for sea otters that developed from media coverage of the event. The small geographic ranges of the sea otter populations in California, Washington, and British Columbia mean a single major spill could be catastrophic for that state or province. Prevention of oil spills and preparation to rescue otters if one happens is a major focus for conservation efforts. Increasing the size and range of sea otter populations would also reduce the risk of an oil spill wiping out a population. However, because of the species' reputation for depleting shellfish resources, advocates for commercial, recreational, and subsistence shellfish harvesting have often opposed allowing the sea otter's range to increase, and there have even been instances of fishermen and others illegally killing them. Reduced to a population of around fifty individuals after the fur trade but before the remnant population was rediscovered, the species passed through an evolutionary bottleneck that left it with low genetic diversity. These recent population constraints have left the sea otter with low genomic diversity and considerable evidence of inbreeding. This inbreeding has led to the accumulation of deleterious missense mutations, which may make rapid population growth more difficult and complicates conservation. 
While longer-term recovery measures that bolster genetic diversity, such as managed breeding between populations, are costly and challenging, they could significantly aid in avoiding the further evolution of deleterious variation, thus aiding sea otter population stabilization. This method has already been utilized in returning cheetah populations to higher numbers and higher genetic diversity, and captive breeding programs through organizations such as the Monterey Bay Aquarium and The Marine Mammal Center improve the chances of restoring sea otter populations to pre-fur trade numbers. The population of sea otters in California has risen to around 3,000 in the wild. While this figure is far below pre-fur trade numbers, it represents a major improvement in the conservation of the species and an increase in genetic diversity. Northern sea otters, by contrast, have returned to pre-fur trade population numbers, with populations living all along the Alaskan coast from Ketchikan in the south to Attu in the west. Historical populations, however, are estimated to have been between 150,000 and 300,000 individuals living along the northern Pacific rim from Baja California to Hokkaido Island in Japan. Modern conservation techniques have included breeding northern and southern populations of sea otters to increase genetic diversity and prevent both inbreeding and genetic drift. Moreover, the introduction of the Marine Mammal Protection Act in the 1970s made hunting them illegal in the United States. In the Aleutian Islands, a massive and unexpected disappearance of sea otters has occurred in recent decades. In the 1980s, the area was home to an estimated 55,000 to 100,000 sea otters, but the population fell to around 6,000 animals by 2000. The most widely accepted, but still controversial, hypothesis is that killer whales have been eating the otters. The pattern of disappearances is consistent with a rise in predation, but there has been no direct evidence of orcas preying on sea otters to any significant extent. Another area of concern is California, where recovery began to fluctuate or decline in the late 1990s. Unusually high mortality rates amongst adult and subadult otters, particularly females, have been reported. In 2017, the US Geological Survey found a 3% drop in the sea otter population along the California coast. This figure still keeps the population on track for removal from the endangered species list, although just barely. Necropsies of dead sea otters indicate that diseases, particularly Toxoplasma gondii and acanthocephalan parasite infections, are major causes of sea otter mortality in California. The Toxoplasma gondii parasite, which is often fatal to sea otters, is carried by wild and domestic cats and may be transmitted by domestic cat droppings flushed into the ocean via sewage systems. Although disease has clearly contributed to the deaths of many of California's sea otters, it is not known why the California population is apparently more affected by disease than populations in other areas. Sea otter habitat is preserved through several protected areas in the United States, Russia and Canada. In marine protected areas, polluting activities such as dumping of waste and oil drilling are typically prohibited. An estimated 1,200 sea otters live within the Monterey Bay National Marine Sanctuary, and more than 500 live within the Olympic Coast National Marine Sanctuary. 
Economic impact Some of the sea otter's preferred prey species, particularly abalone, clams, and crabs, are also food sources for humans. In some areas, massive declines in shellfish harvests have been blamed on the sea otter, and intense public debate has taken place over how to manage the competition between sea otters and humans for seafood. The debate is complicated because sea otters have sometimes been held responsible for declines of shellfish stocks that were more likely caused by overfishing, disease, pollution, and seismic activity. Shellfish declines have also occurred in many parts of the North American Pacific coast that do not have sea otters, and conservationists sometimes note that the existence of large concentrations of shellfish on the coast is a recent development resulting from the fur trade's near-extirpation of the sea otter. Although many factors affect shellfish stocks, sea otter predation can deplete a fishery to the point where it is no longer commercially viable. Scientists agree that sea otters and abalone fisheries cannot exist in the same area, and the same is likely true for certain other types of shellfish, as well. Many facets of the interaction between sea otters and the human economy are not as immediately felt. Sea otters have been credited with contributing to the kelp harvesting industry via their well-known role in controlling sea urchin populations; kelp is used in the production of diverse food and pharmaceutical products. Although human divers harvest red sea urchins both for food and to protect the kelp, sea otters hunt more sea urchin species and are more consistently effective in controlling these populations. E. lutris is a controlling predator of the red king crab (Paralithodes camtschaticus) in the Bering Sea, which would otherwise be out of control as it is in its invasive range, the Barents Sea. (In the Barents Sea, Eurasian otters, Lutra lutra, occupy the same ecological niche and so are believed to help control the crabs there, but this has not been studied.) The health of the kelp forest ecosystem is significant in nurturing populations of fish, including commercially important fish species. In some areas, sea otters are popular tourist attractions, bringing visitors to local hotels, restaurants, and sea otter-watching expeditions. Roles in human cultures For many maritime indigenous cultures throughout the North Pacific, especially the Ainu in the Kuril Islands, the Koryaks and Itelmen of Kamchatka, the Aleut in the Aleutian Islands, the Haida of Haida Gwaii and a host of tribes on the Pacific coast of North America, the sea otter has played an important role as a cultural, as well as material, resource. These cultures, many of which have strongly animist traditions full of legends and stories in which many aspects of the natural world are associated with spirits, regarded the sea otter as particularly kin to humans. The Nuu-chah-nulth, Haida, and other First Nations of coastal British Columbia used the warm and luxurious pelts as chiefs' regalia. Sea-otter pelts were given in potlatches to mark coming-of-age ceremonies, weddings, and funerals. The Aleuts carved sea otter bones for use as ornaments and in games, and used powdered sea-otter baculum as a medicine for fever. Some Ainu folk-tales portray the sea-otter as an occasional messenger between humans and the creator. The sea otter is a recurring figure in Ainu folklore. A major Ainu epic, the Kutune Shirka, tells the tale of wars and struggles over a golden sea-otter. 
Versions of a widespread Aleut legend tell of lovers or despairing women who plunge into the sea and become otters. These stories have been associated with the many human-like behavioral features of the sea otter, including apparent playfulness, strong mother-pup bonds and tool use, which lend themselves readily to anthropomorphism. The beginning of commercial exploitation had a great impact on the human, as well as animal, populations. The Ainu and Aleuts have been displaced or their numbers are dwindling, while the coastal tribes of North America, where the otter is in any case greatly depleted, no longer rely as intimately on sea mammals for survival. Since the mid-1970s, the beauty and charisma of the species have gained wide appreciation, and the sea otter has become an icon of environmental conservation. The round, expressive face and soft, furry body of the sea otter are depicted in a wide variety of souvenirs, postcards, clothing, and stuffed toys. Aquariums and zoos Sea otters can do well in captivity, and are featured in over 40 public aquariums and zoos. The Seattle Aquarium became the first institution to raise sea otters from conception to adulthood with the birth of Tichuk in 1979, followed by three more pups in the early 1980s. In 2007, a YouTube video of two sea otters holding paws drew 1.5 million viewers in two weeks, and had accumulated over 22 million views. Filmed five years previously at the Vancouver Aquarium, it was YouTube's most popular animal video at the time, although it has since been surpassed. The lighter-colored otter in the video is Nyac, a survivor of the 1989 Exxon Valdez oil spill. Nyac died in September 2008, at the age of 20. Milo, the darker one, died of lymphoma in January 2012. Other sea otters at the Vancouver Aquarium have also gone viral. During the 2020 COVID-19 pandemic, the livestream of Joey, a rescued sea otter pup at the Marine Mammal Rescue Center, attracted millions of viewers from across the world on YouTube and Twitch. Many viewers said the stream helped them cope with the anxiety and depression caused by the pandemic lockdowns. In June 2024, a video of another rescued sea otter pup, Tofino, received over 120,000,000 views and 5,000,000 likes on Instagram. Beginning in 2019, the streamer Douglas Wreden, popularly known as DougDoug, has held charity streams for the Monterey Bay Aquarium to celebrate the birthday of Rosa the sea otter. As of 2024, DougDoug and his community have raised over $1,000,000 in Rosa's name. Current conservation Sea otters, as a keystone species, require sustained human effort to protect them from the endangerment caused by "unregulated human exploitation". The species has increasingly been affected by large oil spills, environmental degradation caused by overfishing, and entanglement in fishing gear. Conservation efforts have been made through legislation and international agreements, including the international Fur Seal Treaty, the Endangered Species Act, the IUCN/World Conservation Union, the Convention on International Trade in Endangered Species of Wild Fauna and Flora, and the Marine Mammal Protection Act of 1972. Other conservation efforts are carried out through reintroduction programs and zoological parks. Sea Otter Awareness Week is held every year during the last full week of September. Zoos, aquariums, and other educational institutions hold events highlighting sea otters, their ecological importance, and the challenges facing their conservation. 
It is organized and sponsored by Defenders of Wildlife, the Monterey Bay Aquarium, the California Department of Parks and Recreation, Sea Otter Savvy, and the Elakha Alliance.
Biology and health sciences
Carnivora
null
567489
https://en.wikipedia.org/wiki/Pliers
Pliers
Pliers are a hand tool used to hold objects firmly, possibly developed from tongs used to handle hot metal in Bronze Age Europe. They are also useful for bending and physically compressing a wide range of materials. Generally, pliers consist of a pair of metal first-class levers joined at a fulcrum positioned closer to one end of the levers, creating short jaws on one side of the fulcrum, and longer handles on the other side. This arrangement creates a mechanical advantage, allowing the force of the hand's grip to be amplified and focused on an object with precision. The jaws can also be used to manipulate objects too small or unwieldy to be manipulated with the fingers. Diagonal pliers, also called side cutters, are a similarly shaped tool used for cutting rather than holding, having a pair of stout blades, similar to scissors except that the cutting surfaces meet parallel to each other rather than overlapping. Ordinary (holding/squeezing) pliers may incorporate a small pair of such cutting blades. Pincers are a similar tool with a different type of head used for cutting and pulling, rather than squeezing. Tools designed for safely handling hot objects are usually called tongs. Special tools for making crimp connections in electrical and electronic applications are often called crimping pliers or crimpers; each type of connection uses its own dedicated tool. Parallel pliers have jaws that close in parallel to each other, as opposed to the scissor-type action of traditional pliers. They use a box joint system to do this, and it allows them to generate more grip from friction on square and hexagonal fastenings. There are many kinds of pliers made for various general and specific purposes. History As pliers in the general sense are an ancient and simple invention, no single inventor can be credited. Early metal working processes from several millennia BCE would have required plier-like devices to handle hot materials in the process of smithing or casting. Development from wooden to bronze pliers would have probably happened sometime prior to 3000 BCE. Among the oldest illustrations of pliers are those showing the Greek god Hephaestus in his forge. The number of different designs of pliers grew with the invention of the different objects which they were used to handle: horseshoes, fasteners, wire, pipes, and electrical and electronic components. Design The basic design of pliers has changed little since their origins, with the pair of handles, the pivot (often formed by a rivet), and the head section with the gripping jaws or cutting edges forming the three elements. The materials used to make pliers consist mainly of steel alloys with additives such as vanadium or chromium, to improve strength and prevent corrosion. The metal handles of pliers are often fitted with grips of other materials to ensure better handling; grips are usually insulated and additionally protect against electric shock. The jaws vary widely in size, from delicate needle-nose pliers to heavy jaws capable of exerting much pressure, and in shape, from basic flat jaws to various specialized and often asymmetrical jaw configurations for specific manipulations. The surfaces are typically textured rather than smooth, to minimize slipping. A plier-like tool designed for cutting wires is often called diagonal pliers. Some pliers for electrical work are fitted with wire-cutter blades either built into the jaws or on the handles just below the pivot. 
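As a rough worked illustration of the leverage described above (the dimensions are illustrative, not taken from any particular tool): for pliers whose handles are gripped 150 mm from the pivot and whose jaws hold the work 30 mm from the pivot, the ideal mechanical advantage of the first-class lever is

$$\mathrm{MA} = \frac{d_\text{handle}}{d_\text{jaw}} = \frac{150\ \text{mm}}{30\ \text{mm}} = 5,$$

so a 100 N squeeze at the handles delivers on the order of 500 N at the jaws, before losses to friction at the pivot.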
Where it is necessary to avoid scratching or damaging the workpiece, as for example in jewellery and musical instrument repair, pliers with a layer of softer material such as aluminium, brass, or plastic over the jaws are used. Ergonomics Much research has been undertaken to improve the design of pliers, to make them easier to use in often difficult circumstances (such as restricted spaces). The handles can be bent, for example, so that the load applied by the hand is aligned with the arm, rather than at an angle, thus reducing muscle fatigue. It is especially important for factory workers who use pliers continuously and helps prevent carpal tunnel syndrome. Types
Technology
Hand tools
null
568715
https://en.wikipedia.org/wiki/Packaging
Packaging
Packaging is the science, art and technology of enclosing or protecting products for distribution, storage, sale, and use. Packaging also refers to the process of designing, evaluating, and producing packages. Packaging can be described as a coordinated system of preparing goods for transport, warehousing, logistics, sale, and end use. Packaging contains, protects, preserves, transports, informs, and sells. In many countries it is fully integrated into government, business, institutional, industrial, and personal use. Package labeling (American English) or labelling (British English) is any written, electronic, or graphic communication on the package or on a separate but associated label. Many countries or regions have regulations governing the content of package labels. Merchandising, branding, and persuasive graphics are not covered in this article. History of packaging Ancient era The first packages used the natural materials available at the time: baskets of reeds, wineskins (bota bags), wooden boxes, pottery vases, ceramic amphorae, wooden barrels, woven bags, etc. Processed materials were used to form packages as they were developed: first glass and bronze vessels. The study of old packages is an essential aspect of archaeology. The first usage of paper for packaging was sheets of treated mulberry bark used by the Chinese to wrap foods as early as the first or second century BC. In Europe, paper-like material came into use when the Romans used low-grade and recycled papyrus for the packaging of incense. The earliest recorded use of paper for packaging dates back to 1035, when a Persian traveller visiting markets in Cairo, Arab Egypt, noted that vegetables, spices and hardware were wrapped in paper for the customers after they were sold. Modern era Tinplate The use of tinplate for packaging dates back to the 18th century. The manufacturing of tinplate was the monopoly of Bohemia for a long time; in 1667 Andrew Yarranton, an English engineer, and Ambrose Crowley brought the method to England where it was improved by ironmasters including Philip Foley. By 1697, John Hanbury had a rolling mill at Pontypool for making "Pontypoole Plates". The method pioneered there of rolling iron plates by means of cylinders enabled more uniform black plates to be produced than was possible with the former practice of hammering. Tinplate boxes first began to be sold from ports in the Bristol Channel in 1725. The tinplate was shipped from Newport, Monmouthshire. By 1805, 80,000 boxes were made and 50,000 exported. Tobacconists in London began packaging snuff in metal-plated canisters from the 1760s onwards. Canning With the discovery of the importance of airtight containers for food preservation by French inventor Nicolas Appert, the tin canning process was patented by British merchant Peter Durand in 1810. After receiving the patent, Durand did not himself follow up with canning food. He sold his patent in 1812 to two other Englishmen, Bryan Donkin and John Hall, who refined the process and product and set up the world's first commercial canning factory on Southwark Park Road, London. By 1813, they were producing the first canned goods for the Royal Navy. The progressive improvement in canning stimulated the 1855 invention of the can opener. Robert Yeates, a cutlery and surgical instrument maker of Trafalgar Place West, Hackney Road, Middlesex, UK, devised a claw-ended can opener, a hand-operated tool that haggled its way around the top of metal cans. 
In 1858, another lever-type opener of a more complex shape was patented in the United States by Ezra Warner of Waterbury, Connecticut. Paper-based packaging Set-up boxes were first used in the 16th century and modern folding cartons date back to 1839. The first corrugated box was produced commercially in 1817 in England. Corrugated (also called pleated) paper received a British patent in 1856 and was used as a liner for tall hats. Scottish-born Robert Gair invented the pre-cut paperboard box in 1890: flat pieces manufactured in bulk that folded into boxes. Gair's invention came about as a result of an accident: as a Brooklyn printer and paper-bag maker during the 1870s, he was once printing an order of seed bags, and the metal ruler, commonly used to crease bags, shifted in position and cut them. Gair discovered that by cutting and creasing in one operation he could make prefabricated paperboard boxes. Commercial paper bags were first manufactured in Bristol, England, in 1844, and the American Francis Wolle patented a machine for automated bag-making in 1852. 20th century Packaging advancements in the early 20th century included Bakelite closures on bottles, transparent cellophane overwraps and panels on cartons. These innovations increased processing efficiency and improved food safety. As additional materials such as aluminum and several types of plastic were developed, they were incorporated into packages to improve performance and functionality. In 1952, Michigan State University became the first university in the world to offer a degree in Packaging Engineering. In-plant recycling has long been typical for producing packaging materials. Post-consumer recycling of aluminum and paper-based products has been economical for many years: since the 1980s, post-consumer recycling has increased due to curbside recycling, consumer awareness, and regulatory pressure. Many prominent innovations in the packaging industry were developed first for military use. Some military supplies are packaged in the same commercial packaging used for general industry. Other military packaging must transport materiel, supplies, foods, etc. under severe distribution and storage conditions. Packaging problems encountered in World War II led to Military Standard or "mil spec" regulations being applied to packaging, which was then designated "military specification packaging". As a prominent concept in the military, mil spec packaging officially came into being around 1941, due to operations in Iceland experiencing critical losses, ultimately attributed to bad packaging. In most cases, mil spec packaging solutions (such as barrier materials, field rations, antistatic bags, and various shipping crates) are similar to commercial grade packaging materials, but subject to more stringent performance and quality requirements. In recent years, the packaging sector has accounted for about two percent of the gross national product in developed countries. About half of this market has been related to food packaging. In 2019 the global food packaging market size was estimated at USD 303.26 billion, exhibiting a CAGR of 5.2% over the forecast period. Growing demand for packaged food by consumers owing to the quickening pace of life and changing eating habits is expected to have a major impact on the market. 
The purposes of packaging and package labels Packaging and package labeling have several objectives Physical protection – The objects enclosed in the package may require protection from, among other things, mechanical shock, vibration, electrostatic discharge, abrasion, compression, temperature, etc. Barrier protection – A barrier to oxygen, water vapor, sunlight, dust, etc., is often required. Permeation is a critical factor in design. Some packages contain desiccants or oxygen absorbers to help extend shelf life. Modified atmospheres or controlled atmospheres are also maintained in some food packages. Keeping the contents clean, fresh, sterile and safe for the duration of the intended shelf life is a primary function. A barrier is also implemented in cases where segregation of two materials prior to end use is required, as in the case of special paints, glues, medical fluids, etc. Containment or agglomeration – liquids and powders need to be contained for shipment and sale. Small objects are typically grouped together in one package for reasons of storage and selling efficiency. For example, a single box of 1000 marbles requires less physical handling than 1000 single marbles. Liquids, powders, and granular materials need containment. Information transmission – Packages and labels communicate how to use, transport, recycle, or dispose of the package or product. With pharmaceuticals, food, medical, and chemical products, some types of information are required by government legislation. Some packages and labels also are used for track and trace purposes. Most items include their serial and lot numbers on the packaging, and in the case of food products, medicine, and some chemicals the packaging often contains an expiry/best-before date, usually in a shorthand form. Packages may indicate their construction material with a symbol. Marketing – Packaging and labels can be used by marketers to encourage potential buyers to purchase a product. Package graphic design and physical design have been important and constantly evolving phenomena for several decades. Marketing communications and graphic design are applied to the surface of the package and often to the point of sale display. Most packaging is designed to reflect the brand's message and identity on the one hand while highlighting the respective product concept on the other hand. Security – Packaging can play an important role in reducing the security risks of shipment. Packages can be made with improved tamper resistance to deter manipulation and they can also have tamper-evident features indicating that tampering has taken place. Packages can be engineered to help reduce the risks of package pilferage or the theft and resale of products: Some package constructions are more resistant to pilferage than other types, and some have pilfer-indicating seals. Counterfeit consumer goods, unauthorized sales (diversion), material substitution and tampering can all be minimized or prevented with such anti-counterfeiting technologies. Packages may include authentication seals and use security printing to help indicate that the package and contents are not counterfeit. Packages also can include anti-theft devices such as dye-packs, RFID tags, or electronic article surveillance tags that can be activated or detected by devices at exit points and require specialized tools to deactivate. Using packaging in this way is a means of retail loss prevention. 
Convenience – Packages can have features that add convenience in distribution, handling, stacking, display, sale, opening, reclosing, using, dispensing, reusing, recycling, and ease of disposal Portion control – Single serving or single dosage packaging has a precise amount of contents to control usage. Bulk commodities (such as salt) can be divided into packages that are a more suitable size for individual households. It also aids the control of inventory: selling sealed one-liter bottles of milk, rather than having people bring their own bottles to fill themselves. Branding/Positioning – Packaging and labels are increasingly used to go beyond marketing to brand positioning, with the materials used and design chosen key to the storytelling element of brand development. Due to the increasingly fragmented media landscape in the digital age this aspect of packaging is of growing importance. Packaging types Packaging may be of several different types. For example, a transport package or distribution package can be the shipping container used to ship, store, and handle the product or inner packages. Some identify a consumer package as one which is directed toward a consumer or household. Packaging may be described in relation to the type of product being packaged: medical device packaging, bulk chemical packaging, over-the-counter drug packaging, retail food packaging, military materiel packaging, pharmaceutical packaging, etc. It is sometimes convenient to categorize packages by layer or function: primary, secondary, tertiary,etc. Primary packaging is the material that first envelops the product and holds it. This usually is the smallest unit of distribution or use and is the package which is in direct contact with the contents. Secondary packaging is outside the primary packaging, and may be used to prevent pilferage or to group primary packages together. Tertiary or transit packaging is used for bulk handling, warehouse storage and transport shipping. The most common form is a palletized unit load that packs tightly into containers. These broad categories can be somewhat arbitrary. For example, depending on the use, a shrink wrap can be primary packaging when applied directly to the product, secondary packaging when used to combine smaller packages, or tertiary packaging when used to facilitate some types of distribution, such as to affix a number of cartons on a pallet. Packaging can also have categories based on the package form. For example, thermoform packaging and flexible packaging describe broad usage areas. Labels and symbols used on packages Many types of symbols for package labeling are nationally and internationally standardized. For consumer packaging, symbols exist for product certifications (such as the FCC and TÜV marks), trademarks, proof of purchase, etc. Some requirements and symbols exist to communicate aspects of consumer rights and safety, for example the CE marking or the estimated sign that notes conformance to EU weights and measures accuracy regulations. Examples of environmental and recycling symbols include the recycling symbol, the recycling code (which could be a resin identification code), and the "Green Dot". Food packaging may show food contact material symbols. In the European Union, products of animal origin which are intended to be consumed by humans have to carry standard, oval-shaped EC identification and health marks for food safety and quality insurance reasons. 
Bar codes, Universal Product Codes, and RFID labels are common to allow automated information management in logistics and retailing. Country-of-origin labeling is often used. Some products might use QR codes or similar matrix barcodes. Packaging may have visible registration marks and other printing calibration and troubleshooting cues. The labelling of medical devices includes many symbols, many of them covered by international standards, foremost ISO 15223-1. Consumer package contents Several aspects of consumer package labeling are subject to regulation. One of the most important is to accurately state the quantity (weight, volume, count) of the package contents. Consumers expect that the label accurately reflects the actual contents. Manufacturers and packagers must have effective quality assurance procedures and accurate equipment; even so, there is inherent variability in all processes. Regulations attempt to handle both sides of this. In the US, the Fair Packaging and Labeling Act provides requirements for many types of products. Also, NIST has Handbook 133, Checking the Net Contents of Packaged Goods. This is a procedural guide for compliance testing of net contents and is referenced by several other regulatory agencies. Other regions and countries have their own regulatory requirements. For example, the UK has its Weights and Measures (Packaged Goods) Regulations as well as several other regulations. In the EEA, products with hazardous formulas need to have a UFI. Shipping container labeling Technologies related to shipping containers are identification codes, bar codes, and electronic data interchange (EDI). These three core technologies serve to enable the business functions in the process of shipping containers throughout the distribution channel. Each has an essential function: identification codes either relate product information or serve as keys to other data, bar codes allow for the automated input of identification codes and other data, and EDI moves data between trading partners within the distribution channel. Elements of these core technologies include UPC and EAN item identification codes, the SCC-14 (UPC shipping container code), the SSCC-18 (Serial Shipping Container Codes), Interleaved 2-of-5 and UCC/EAN-128 (newly designated GS1-128) bar code symbologies, and ANSI ASC X12 and UN/EDIFACT EDI standards. Small parcel carriers often have their own formats. For example, United Parcel Service has a MaxiCode 2-D code for parcel tracking. RFID labels for shipping containers are also increasingly used. A Wal-Mart division, Sam's Club, has also moved in this direction and is putting pressure on its suppliers to comply. Shipments of hazardous materials or dangerous goods have special information and symbols (labels, placards, etc.) as required by UN, country, and specific carrier requirements. On transport packages, standardized symbols are also used to communicate handling needs. Some are defined in the ASTM D5445 "Standard Practice for Pictorial Markings for Handling of Goods", ISO 780 "Pictorial marking for handling of goods", and GHS hazard pictograms. Package development considerations Package design and development are often thought of as an integral part of the new product development process. Alternatively, the development of a package (or component) can be a separate process but must be linked closely with the product to be packaged. 
Package design starts with the identification of all the requirements: structural design, marketing, shelf life, quality assurance, logistics, legal, regulatory, graphic design, end-use, environmental, etc. The design criteria, performance (specified by package testing), completion time targets, resources, and cost constraints need to be established and agreed upon. Package design processes often employ rapid prototyping, computer-aided design, computer-aided manufacturing and document automation. An example of how package design is affected by other factors is its relationship to logistics. When the distribution system includes individual shipments by a small parcel carrier, the sorting, handling, and mixed stacking make severe demands on the strength and protective ability of the transport package. If the logistics system consists of uniform palletized unit loads, the structural design of the package can be designed to meet those specific needs, such as vertical stacking for a longer time frame. A package designed for one mode of shipment may not be suited to another. With some types of products, the design process involves detailed regulatory requirements for the packaging. For example, any package components that may contact foods are designated food contact materials. Toxicologists and food scientists need to verify that such packaging materials are allowed by applicable regulations. Packaging engineers need to verify that the completed package will keep the product safe for its intended shelf life with normal usage. Packaging processes, labeling, distribution, and sale need to be validated to assure that they comply with regulations that have the well being of the consumer in mind. Sometimes the objectives of package development seem contradictory. For example, regulations for an over-the-counter drug might require the package to be tamper-evident and child resistant: These intentionally make the package difficult to open. The intended consumer, however, might be disabled or elderly and unable to readily open the package. Meeting all goals is a challenge. Package design may take place within a company or with various degrees of external packaging engineering: independent contractors, consultants, vendor evaluations, independent laboratories, contract packagers, total outsourcing, etc. Some sort of formal project planning and project management methodology is required for all but the simplest package design and development programs. An effective quality management system and Verification and Validation protocols are mandatory for some types of packaging and recommended for all. Environmental considerations Package development involves considerations of sustainability, environmental responsibility, and applicable environmental and recycling regulations. It may involve a life cycle assessment which considers the material and energy inputs and outputs to the package, the packaged product (contents), the packaging process, the logistics system, waste management, etc. It is necessary to know the relevant regulatory requirements for point of manufacture, sale, and use. The traditional "three R's" of reduce, reuse, and recycle are part of a waste hierarchy which may be considered in product and package development. Prevention – Waste prevention is a primary goal. Packaging should be used only where needed. Proper packaging can also help prevent waste. Packaging plays an important part in preventing loss or damage to the packaged product (contents). 
Usually, the energy content and material usage of the product being packaged are much greater than that of the package. A vital function of the package is to protect the product for its intended use: if the product is damaged or degraded, its entire energy and material content may be lost. Minimization (also "source reduction") – Eliminate overpackaging. The mass and volume of packaging (per unit of contents) can be measured and used as criteria for minimizing the package in the design process. Usually "reduced" packaging also helps minimize costs. Packaging engineers continue to work toward reduced packaging. Reuse – Reusable packaging is encouraged. Returnable packaging has long been useful (and economically viable) for closed-loop logistics systems. Inspection, cleaning, repair, and recouperage are often needed. Some manufacturers re-use the packaging of the incoming parts for a product, either as packaging for the outgoing product or as part of the product itself. Recycling – Recycling is the reprocessing of materials (pre- and post-consumer) into new products. Emphasis is focused on recycling the largest primary components of a package: steel, aluminum, papers, plastics, etc. Small components can be chosen which are not difficult to separate and do not contaminate recycling operations. Packages can sometimes be designed to separate components to better facilitate recycling. Energy recovery – Waste-to-energy and refuse-derived fuel in approved facilities make use of the heat available from incinerating the packaging components. Disposal – Incineration, and placement in a sanitary landfill are undertaken for some materials. Certain US states regulate packages for toxic contents, which have the potential to contaminate emissions and ash from incineration and leachate from landfill. Packages should not be littered. Development of sustainable packaging is an area of considerable interest to standards organizations, governments, consumers, packagers, and retailers. Sustainability is the fastest-growing driver for packaging development, particularly for packaging manufacturers that work with the world's leading brands, as their CSR (Corporate Social Responsibility) targets often exceed those of the EU Directive. Packaging machinery Choosing packaging machinery includes an assessment of technical capabilities, labor requirements, worker safety, maintainability, serviceability, reliability, ability to integrate into the packaging line, capital cost, floorspace, flexibility (change-over, materials, multiple products, etc.), energy requirements, quality of outgoing packages, qualifications (for food, pharmaceuticals, etc.), throughput, efficiency, productivity, ergonomics, return on investment, etc. Packaging machinery can be: purchased as standard, off-the-shelf equipment purchased custom-made or custom-tailored to specific operations manufactured or modified by in-house engineers and maintenance staff Efforts at packaging line automation increasingly use programmable logic controllers and robotics. 
Packaging machines may be of the following general types: Accumulating and collating machines Blister packs, skin packs and vacuum packaging machines Bottle caps equipment, over-capping, lidding, closing, seaming and sealing machines Box, case, tray, and carrier forming, packing, unpacking, closing, and sealing machines Cartoning machines Cleaning, sterilizing, cooling and drying machines Coding, printing, marking, stamping, and imprinting machines Converting machines Conveyor belts, accumulating and related machines Feeding, orienting, placing and related machines Filling machines: handling dry, powdered, solid, liquid, gas, or viscous products Inspecting: visual, sound, metal detecting, etc. Label dispenser Orienting, unscrambling machines Package filling and closing machines Palletizing, depalletizing, unit load assembly Product identification: labeling, marking, etc. Sealing machines: heat sealer or glue units Slitting machines Weighing machines: check weigher, multihead weigher Wrapping machines: stretch wrapping, shrink wrap, banding Form, fill and seal machines Other specialty machinery: slitters, perforating, laser cutters, parts attachment, etc.
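As an aside on the shipping-container and item identification codes discussed under "Shipping container labeling" above: the UPC, EAN/GTIN, and SSCC numbers all end in a mod-10 check digit computed with alternating weights of 3 and 1. The sketch below is a minimal, illustrative implementation of that standard GS1 calculation rather than anything taken from the article; the sample number is a commonly cited example UPC-A, not a real product reference from this text.

```python
def gs1_check_digit(digits: str) -> int:
    """Compute the GS1 mod-10 check digit for a numeric code body
    (UPC-A, EAN-13, GTIN-14, or SSCC-18 without its final digit)."""
    total = 0
    # Weights of 3 and 1 alternate, starting with 3 at the rightmost digit.
    for position, ch in enumerate(reversed(digits)):
        weight = 3 if position % 2 == 0 else 1
        total += int(ch) * weight
    return (10 - total % 10) % 10

# Eleven-digit UPC-A body used as a textbook example; the full code
# appends the computed check digit.
body = "03600029145"
print(body + str(gs1_check_digit(body)))  # prints 036000291452
```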
Technology
Containers
null
568726
https://en.wikipedia.org/wiki/Giant%20star
Giant star
A giant star has a substantially larger radius and luminosity than a main-sequence (or dwarf) star of the same surface temperature. They lie above the main sequence (luminosity class V in the Yerkes spectral classification) on the Hertzsprung–Russell diagram and correspond to luminosity classes II and III. The terms giant and dwarf were coined for stars of quite different luminosity despite similar temperature or spectral type (namely K and M) by Ejnar Hertzsprung in 1905 or 1906. Giant stars have radii up to a few hundred times that of the Sun and luminosities between 10 and a few thousand times that of the Sun. Stars still more luminous than giants are referred to as supergiants and hypergiants. A hot, luminous main-sequence star may also be referred to as a giant, but any main-sequence star is properly called a dwarf, regardless of how large and luminous it is. Formation A star becomes a giant after all the hydrogen available for fusion at its core has been depleted and, as a result, leaves the main sequence. The behaviour of a post-main-sequence star depends largely on its mass. Intermediate-mass stars For a star with a mass above about 0.25 solar masses, once the core is depleted of hydrogen it contracts and heats up so that hydrogen starts to fuse in a shell around the core. The portion of the star outside the shell expands and cools, but with only a small increase in luminosity, and the star becomes a subgiant. The inert helium core continues to grow and increase in temperature as it accretes helium from the shell, but in stars below a certain mass it does not become hot enough to start helium burning (higher-mass stars are supergiants and evolve differently). Instead, after just a few million years the core reaches the Schönberg–Chandrasekhar limit, rapidly collapses, and may become degenerate. This causes the outer layers to expand even further and generates a strong convective zone that brings heavy elements to the surface in a process called the first dredge-up. This strong convection also increases the transport of energy to the surface, the luminosity increases dramatically, and the star moves onto the red-giant branch where it will stably burn hydrogen in a shell for a substantial fraction of its entire life (roughly 10% for a Sun-like star). The core continues to gain mass, contract, and increase in temperature, whereas there is some mass loss in the outer layers. If the star's main-sequence mass was sufficiently low, it will never reach the central temperatures necessary to fuse helium. It will therefore remain a hydrogen-fusing red giant until it runs out of hydrogen, at which point it will become a helium white dwarf. According to stellar evolution theory, no star of such low mass can have evolved to that stage within the age of the Universe. In more massive stars the core temperature eventually reaches about 10⁸ K and helium will begin to fuse to carbon and oxygen in the core by the triple-alpha process. When the core is degenerate, helium fusion begins explosively, but most of the energy goes into lifting the degeneracy and the core becomes convective. The energy generated by helium fusion reduces the pressure in the surrounding hydrogen-burning shell, which reduces its energy-generation rate. The overall luminosity of the star decreases, its outer envelope contracts again, and the star moves from the red-giant branch to the horizontal branch. 
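As a rough quantitative aside on the definition at the start of this article (the numbers are illustrative, not taken from the text): the Stefan–Boltzmann law gives a star's luminosity as

$$L = 4\pi R^{2}\sigma T_{\mathrm{eff}}^{4},$$

so at a fixed surface temperature, luminosity scales with the square of the radius. A giant with, say, 30 times the Sun's radius and the same effective temperature as the Sun would therefore be roughly 30² ≈ 900 times as luminous, consistent with the range of giant luminosities quoted above.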
When the core helium is exhausted, a star below a certain mass has a carbon–oxygen core that becomes degenerate and starts helium burning in a shell. As with the earlier collapse of the helium core, this starts convection in the outer layers, triggers a second dredge-up, and causes a dramatic increase in size and luminosity. This is the asymptotic giant branch (AGB) analogous to the red-giant branch but more luminous, with a hydrogen-burning shell contributing most of the energy. Stars only remain on the AGB for around a million years, becoming increasingly unstable until they exhaust their fuel, go through a planetary nebula phase, and then become a carbon–oxygen white dwarf. High-mass stars Main-sequence stars above a certain mass are already very luminous and they move horizontally across the HR diagram when they leave the main sequence, briefly becoming blue giants before they expand further into blue supergiants. They start core-helium burning before the core becomes degenerate and develop smoothly into red supergiants without a strong increase in luminosity. At this stage they have comparable luminosities to bright AGB stars although they have much higher masses, but will further increase in luminosity as they burn heavier elements and eventually become a supernova. Stars in an intermediate mass range have somewhat intermediate properties and have been called super-AGB stars. They largely follow the tracks of lighter stars through RGB, HB, and AGB phases, but are massive enough to initiate core carbon burning and even some neon burning. They form oxygen–magnesium–neon cores, which may collapse in an electron-capture supernova, or they may leave behind an oxygen–neon white dwarf. O class main sequence stars are already highly luminous. The giant phase for such stars is a brief phase of slightly increased size and luminosity before developing a supergiant spectral luminosity class. Type O giants may be more than a hundred thousand times as luminous as the Sun, brighter than many supergiants. Classification is complex and difficult with small differences between luminosity classes and a continuous range of intermediate forms. The most massive stars develop giant or supergiant spectral features while still burning hydrogen in their cores, due to mixing of heavy elements to the surface and high luminosity which produces a powerful stellar wind and causes the star's atmosphere to expand. Low-mass stars A star of sufficiently low initial mass will not become a giant star at all. For most of their lifetimes, such stars have their interior thoroughly mixed by convection and so they can continue fusing hydrogen for an extremely long time, much longer than the current age of the Universe. They steadily become hotter and more luminous throughout this time. Eventually they do develop a radiative core, subsequently exhausting hydrogen in the core and burning hydrogen in a shell surrounding the core. (Stars at the upper end of this mass range may expand at this point, but will never become very large.) Shortly thereafter, the star's supply of hydrogen will be completely exhausted and it is expected to become a helium white dwarf, although the universe is too young for any such star to exist yet, so no star with that history has ever been observed. Subclasses There is a wide range of giant-class stars, and several subdivisions are commonly used to identify smaller groups of stars. 
Subgiants Subgiants are an entirely separate spectroscopic luminosity class (IV) from giants, but share many features with them. Although some subgiants are simply over-luminous main-sequence stars due to chemical variation or age, others are on a distinct evolutionary track towards true giants. Examples: Gamma Geminorum (γ Gem), an A-type subgiant; Eta Bootis (η Boo), a G-type subgiant. Delta Scorpii (δ Sco), a B-type subgiant. Bright giants Bright giants are stars of luminosity class II in the Yerkes spectral classification. These are stars which straddle the boundary between ordinary giants and supergiants, based on the appearance of their spectra. The bright giant luminosity class was first defined in 1943. Well known stars which are classified as bright giants include: Canopus Albireo Epsilon Canis Majoris Theta Scorpii Beta Draconis Alpha Herculis Gamma Canis Majoris Red giants Within any giant luminosity class, the cooler stars of spectral class K, M, S, and C (and sometimes some G-type stars) are called red giants. Red giants include stars in a number of distinct evolutionary phases of their lives: a main red-giant branch (RGB); a red horizontal branch or red clump; the asymptotic giant branch (AGB), although AGB stars are often large enough and luminous enough to get classified as supergiants; and sometimes other large cool stars such as immediate post-AGB stars. The RGB stars are by far the most common type of giant star due to their moderate mass, relatively long stable lives, and luminosity. They are the most obvious grouping of stars after the main sequence on most HR diagrams, although the more numerous white dwarfs are far less luminous. Examples: Pollux, a K-type giant. Epsilon Ophiuchi, a G-type red giant. Arcturus (α Boötis), a K-type giant. R Doradus, an M-type giant. Mira (ο Ceti), an M-type giant and prototype Mira variable. Aldebaran, a K-type giant. Yellow giants Giant stars with intermediate temperatures (spectral class G, F, and at least some A) are called yellow giants. They are far less numerous than red giants, partly because they only form from stars with somewhat higher masses, and partly because they spend less time in that phase of their lives. However, they include a number of important classes of variable stars. High-luminosity yellow stars are generally unstable, leading to the instability strip on the HR diagram where the majority of stars are pulsating variables. The instability strip reaches from the main sequence up to hypergiant luminosities, but at the luminosities of giants there are several classes of pulsating variable stars: RR Lyrae variables, pulsating horizontal-branch class A (sometimes F) stars with periods less than a day and amplitudes of a magnitude or less; W Virginis variables, more-luminous pulsating variables also known as type II Cepheids, with periods of 10–20 days; Type I Cepheid variables, more luminous still and mostly supergiants, with even longer periods; Delta Scuti variables, which include subgiant and main-sequence stars. Yellow giants may be moderate-mass stars evolving for the first time towards the red-giant branch, or they may be more evolved stars on the horizontal branch. Evolution towards the red-giant branch for the first time is very rapid, whereas stars can spend much longer on the horizontal branch. Horizontal-branch stars, with more heavy elements and lower mass, are more unstable. Examples: Sigma Octantis (σ Octantis), an F-type giant and a Delta Scuti variable; Capella Aa (α Aurigae Aa), a G-type giant. 
Beta Corvi (β Corvi), a G-type bright giant. Blue (and sometimes white) giants The hottest giants, of spectral classes O, B, and sometimes early A, are called blue giants. Sometimes A- and late-B-type stars may be referred to as white giants. The blue giants are a very heterogeneous grouping, ranging from high-mass, high-luminosity stars just leaving the main sequence to low-mass, horizontal-branch stars. Higher-mass stars leave the main sequence to become blue giants, then bright blue giants, and then blue supergiants, before expanding into red supergiants, although at the very highest masses the giant stage is so brief and narrow that it can hardly be distinguished from a blue supergiant. Lower-mass, core-helium-burning stars evolve from red giants along the horizontal branch and then back again to the asymptotic giant branch, and depending on mass and metallicity they can become blue giants. It is thought that some post-AGB stars experiencing a late thermal pulse can become peculiar blue giants. Examples: Meissa (λ Orionis A), an O-type giant. Alcyone (η Tauri), a B-type giant, the brightest star in the Pleiades; Thuban (α Draconis), an A-type giant.
Physical sciences
Stellar astronomy
null
569005
https://en.wikipedia.org/wiki/Gmail
Gmail
Gmail is the email service provided by Google. With about 1.5 billion active users worldwide, it is the largest email service in the world. It provides a webmail interface accessible through a web browser, as well as an official mobile application. Google also supports the use of third-party email clients via the POP and IMAP protocols. At its launch in 2004, Gmail (or Google Mail at the time) provided a storage capacity of one gigabyte per user, which was significantly higher than its competitors offered at the time. Today, the service comes with 15 gigabytes of storage for free for individual users, which is shared with other Google services, such as Google Drive and Google Photos. Users in need of more storage can purchase Google One to increase this 15 GB limit across most Google services. Users can receive emails up to 50 megabytes in size, including attachments, and can send emails up to 25 megabytes. Gmail supports integration with Google Drive, allowing for larger attachments. Gmail has a search-oriented interface and supports a "conversation view" similar to an Internet forum. The service is notable among website developers for its early adoption of Ajax. Google's mail servers automatically scan emails for multiple purposes, including to filter spam and malware and, prior to June 2017, to add context-sensitive advertisements next to emails. This advertising practice has been heavily criticized by privacy advocates with concerns over unlimited data retention, ease of monitoring by third parties, users of other email providers not having agreed to the policy upon sending emails to Gmail addresses, and the potential for Google to change its policies to further decrease privacy by combining information with other Google data usage. The company has been the subject of lawsuits concerning these issues. Google has stated that email users must "necessarily expect" their emails to be subject to automated processing and claims that the service refrains from displaying ads next to potentially sensitive messages, such as those mentioning race, religion, sexual orientation, health, or financial statements. In June 2017, Google announced the end of the use of contextual Gmail content for advertising purposes, relying instead on data gathered from the use of its other services. Features Storage On April 1, 2004, Gmail was launched with one gigabyte (GB) of storage space, a significantly higher amount than competitors offered at the time. The limit was doubled to two gigabytes of storage on April 1, 2005, the first anniversary of Gmail. Georges Harik, the product management director for Gmail, stated that Google would "keep giving people more space forever." In October 2007, Gmail increased storage to 4 gigabytes, after recent changes from competitors Yahoo and Microsoft. On April 24, 2012, Google announced the increase of storage included in Gmail from 7.5 to 10 gigabytes ("and counting") as part of the launch of Google Drive. On May 13, 2013, Google announced the overall merge of storage across Gmail, Google Drive, and Google+ Photos, allowing users 15 gigabytes of included storage among three services. On August 15, 2018, Google launched Google One, a service where users can pay for additional storage, shared among Gmail, Google Drive and Google Photos, through a monthly subscription plan. Storage of up to 15 gigabytes is included, and paid plans are available for up to 2 terabytes for personal use. There are also storage limits to individual Gmail messages. 
Initially, one message, including all attachments, could not be larger than 25 megabytes. This was changed in March 2017 to allow receiving an email of up to 50 megabytes, while the limit for sending an email stayed at 25 megabytes. In order to send larger files, users can insert files from Google Drive into the message. Interface The Gmail user interface initially differed from other web-mail systems with its focus on search and conversation threading of emails, grouping several messages between two or more people onto a single page, an approach that was later copied by its competitors. Gmail's user interface designer, Kevin Fox, intended users to feel as if they were always on one page and just changing things on that page, rather than having to navigate to other places. Gmail's interface also makes use of 'labels' (tags) – that replace the conventional folders and provide a more flexible method of organizing emails; filters for automatically organizing, deleting or forwarding incoming emails to other addresses; and importance markers for automatically marking messages as 'important'. In November 2011, Google began rolling out a redesign of its interface that "simplified" the look of Gmail into a more minimalist design to provide a more consistent look throughout its products and services as part of an overall Google design change. Majorly redesigned elements included a streamlined conversation view, configurable density of information, new higher-quality themes, a resizable navigation bar with always-visible labels and contacts, and better search. Users were able to preview the new interface design for months prior to the official release, as well as revert to the old interface, until March 2012, when Google discontinued the ability to revert and completed the transition to the new design for all users. In May 2013, Google updated the Gmail inbox with tabs which allow the application to categorize the user's emails. The five tabs are: Primary, Social, Promotions, Updates, and Forums. In addition to customization options, the entire update can be disabled, allowing users to return to the traditional inbox structure. In April 2018, Google introduced a new web UI for Gmail. The new redesign follows Google's Material Design, and changes in the user interface include the use of Google's Product Sans font. Other updates include a Confidential mode, which allows the sender to set an expiration date for a sensitive message or to revoke it entirely, integrated rights management and two-factor authentication. On 16 November 2020, Google announced new settings for smart features and personalization in Gmail. Under the new settings users were given control of their data in Gmail, Chat, and Meet, offering smart features like Smart Compose and Smart Reply. On 6 April 2021, Google rolled out Google Chat and Room (early access) feature to all Gmail users. On 28 July 2022, Google rolled out Material You to all Gmail users. Spam filter Gmail's spam filtering features a community-driven system: when any user marks an email as spam, this provides information to help the system identify similar future messages for all Gmail users. In the April 2018 update, the spam filtering banners got a redesign, with bigger and bolder lettering. Gmail Labs The Gmail Labs feature, introduced on June 5, 2008, allows users to test new or experimental features of Gmail. Users can enable or disable Labs features selectively and provide feedback about each of them. 
This allows Gmail engineers to obtain user input about new features, both to improve them and to assess their popularity. Popular features, like the "Undo Send" option, often "graduate" from Gmail Labs to become a formal setting in Gmail. All Labs features are experimental and are subject to termination at any time. Search Gmail incorporates a search bar for searching emails. The search bar can also search contacts, files stored in Google Drive, events from Google Calendar, and Google Sites. In May 2012, Gmail improved the search functionality to include auto-complete predictions from the user's emails. Gmail's search functionality does not support searching for word fragments (also known as 'substring search' or partial word search). Workarounds exist. Language support The Gmail interface supports 72 languages, including: Arabic, Basque, Bulgarian, Catalan, Chinese (simplified), Chinese (traditional), Croatian, Czech, Danish, Dutch, English (UK), English (US), Estonian, Finnish, French, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Icelandic, Indonesian, Italian, Japanese, Kannada, Korean, Latvian, Lithuanian, Malay, Malayalam, Marathi, Norwegian (Bokmål), Odia, Persian, Polish, Punjabi, Portuguese (Brazil), Portuguese (Portugal), Romanian, Russian, Serbian, Sinhala, Slovak, Slovenian, Spanish, Swedish, Tagalog (Filipino), Tamil, Telugu, Thai, Turkish, Ukrainian, Urdu, Vietnamese, Welsh and Zulu. Language input styles In October 2012, Google added over 100 virtual keyboards, transliterations, and input method editors to Gmail, giving users different input styles for different languages in an effort to help users write in languages that are not "limited by the language of your keyboard." In October 2013, Google added handwriting input support to Gmail. In August 2014, Gmail became the first major email provider to let users send and receive emails from addresses with accent marks and letters from outside the Latin alphabet. Platforms Web browsers The modern AJAX version is officially supported in the current and previous major releases of Google Chrome, Firefox, Microsoft Edge and Safari web browsers on a rolling basis. Gmail's "basic HTML" version works on almost all browsers. This version of Gmail was discontinued in January 2024. In August 2011, Google introduced Gmail Offline, an HTML5-powered app for providing access to the service while offline. Gmail Offline runs on the Google Chrome browser and can be downloaded from the Chrome Web Store. In addition to the native apps on iOS and Android, users can access Gmail through the web browser on a mobile device. Mobile Gmail has native applications for iOS devices (including iPhone, iPad, and iPod Touch) and for Android devices. In November 2014, Google introduced functionality in the Gmail Android app that enabled sending and receiving emails from non-Gmail addresses (such as Yahoo! Mail and Outlook.com) through POP or IMAP. In November 2016, Google redesigned the Gmail app for the iOS platform, bringing the first complete visual overhaul in "nearly four years". The update added much greater use of color, sleeker transitions, and several "highly requested" features, including Undo Send, faster search with instant results and spelling suggestions, and Swipe to Archive/Delete. In May 2017, Google updated Gmail on Android to feature protection from phishing attacks. 
Media outlets noticed that the new protection was announced amid a widespread phishing attack on a combination of Gmail and Google's Docs document service that occurred on the same day. Later in May, Google announced the addition of "Smart Reply" to Gmail on Android and iOS. "Smart Reply", a feature originally launched for Google's Inbox by Gmail service, scans a message for information and uses machine intelligence to offer three responses the user can optionally edit and send. The feature is limited to the English language at launch, with additional support for Spanish, followed by other languages arriving later. Inbox by Gmail, another app from the Gmail team, was also available for iOS and Android devices. It was discontinued in April 2019. Third-party programs can be used to access Gmail, using the POP or IMAP protocols. In 2019, Google rolled out dark mode for its mobile apps in Android and iOS. Inbox by Gmail In October 2014, Google introduced Inbox by Gmail on an invitation-only basis. Developed by the Gmail team, but serving as a "completely different type of inbox", the service is made to help users deal with the challenges of an active email. Citing issues such as distractions, difficulty in finding important information buried in messages, and receiving more emails than ever, Inbox by Gmail has several important differences from Gmail, including bundles that automatically sort emails of the same topic together, highlights that surface key information from messages, and reminders, assists, and snooze, that help the user in handling incoming emails at appropriate times. Inbox by Gmail became publicly available in May 2015. In September 2018, Google announced it would end the service at the end of March 2019, most of its key features having been incorporated into the standard Gmail service. The service was discontinued on April 2, 2019. Integration with Google products In August 2010, Google released a plugin that provides integrated telephone service within Gmail's Google Chat interface. The feature initially lacked an official name, with Google referring to it as both "Google Voice in Gmail chat" and "Call Phones in Gmail". The service logged over one million calls in 24 hours. In March 2014, Google Voice was discontinued, and replaced with functionality from Google Hangouts, another communication platform from Google. On February 9, 2010, Google commenced its new social networking tool, Google Buzz, which integrated with Gmail, allowing users to share links and media, as well as status updates. Google Buzz was discontinued in October 2011, replaced with new functionality in Google+, Google's then-new social networking platform. Gmail was integrated with Google+ in December 2011, as part of an effort to have all Google information across one Google account, with a centralized Google+ user profile. Backlash from the move caused Google to step back and remove the requirement of a Google+ user account, keeping only a private Google account without a public-facing profile, starting in July 2015. In May 2013, Google announced the integration between Google Wallet and Gmail, which would allow Gmail users to send money as email attachments. Although the sender must use a Gmail account, the recipient does not need to be using a Gmail address. The feature has no transaction fees, but there are limits to the amount of money that can be sent. Initially only available on the web, the feature was expanded to the Android app in March 2017, for people living in the United States. 
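As noted earlier, third-party programs can access Gmail over the POP or IMAP protocols. The following is a minimal sketch, not an official Google example, of reading a mailbox with Python's standard imaplib module; the account name and app password are placeholders, and it assumes IMAP access is enabled for the account. Gmail's documented IMAP host is imap.gmail.com, reachable over SSL on port 993.

```python
# Minimal sketch: reading Gmail over IMAP from a third-party program.
# Assumes IMAP access is enabled for the account and that an app password
# (or other suitable credential) is used; the address and password below
# are placeholders.
import imaplib
import email

HOST = "imap.gmail.com"
USER = "someone@example.com"      # placeholder address
APP_PASSWORD = "app-password"     # placeholder credential

with imaplib.IMAP4_SSL(HOST, 993) as conn:
    conn.login(USER, APP_PASSWORD)
    conn.select("INBOX", readonly=True)
    # Search for unread messages and print their subjects.
    status, data = conn.search(None, "UNSEEN")
    for num in data[0].split():
        status, msg_data = conn.fetch(num, "(RFC822)")
        msg = email.message_from_bytes(msg_data[0][1])
        print(msg.get("Subject"))
```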
In September 2016, Google released Google Trips, an app that, based on information from a user's Gmail messages, automatically generates travel cards. A travel card contains itinerary details, such as plane tickets and car rentals, and recommends activities, food and drinks, and attractions based on location, time, and interests. The app also has offline functionality. In April 2017, Google Trips received an update adding several significant features. The app now also scans Gmail for bus and train tickets, and allows users to manually input trip reservations. Users can send trip details to other users' email, and if the recipient also has Google Trips, the information will be automatically available in their apps as well. Security History Google has supported the secure HTTPS since the day it launched. In the beginning, it was only default on the login page, a reason that Google engineer Ariel Rideout stated was because HTTPS made "your mail slower". However, users could manually switch to secure HTTPS mode inside the inbox after logging in. In July 2008, Google simplified the ability to manually enable secure mode, with a toggle in the settings menu. In 2007, Google fixed a cross-site scripting security issue that could let attackers collect information from Gmail contact lists. In January 2010, Google began rolling out HTTPS as the default for all users. In June 2012, a new security feature was introduced to protect users from state-sponsored attacks. A banner will appear at the top of the page that warns users of an unauthorized account compromise. In March 2014, Google announced that an encrypted HTTPS connection would be used for the sending and receiving of all Gmail emails, and "every single email message you send or receive —100% of them —is encrypted while moving internally" through the company's systems. Whenever possible, Gmail uses transport layer security (TLS) to automatically encrypt emails sent and received. On the web and on Android devices, users can check if a message is encrypted by checking if the message has a closed or open red padlock. Gmail automatically scans all incoming and outgoing e-mails for viruses in email attachments. For security reasons, some file types, including executables, are not allowed to be sent in emails. At the end of May 2017, Google announced that it had applied machine learning technology to identify emails with phishing and spam, having a 99.9% detection accuracy. The company also announced that Gmail would selectively delay some messages, approximately 0.05% of all, to perform more detailed analysis and aggregate details to improve its algorithms. In November 2020, Google started adding click-time link protection by redirecting clicked links to Google in official Gmail clients. Third-party encryption in transit In Google's Transparency Report under the Safer email section, it provides information on the percentage of emails encrypted in transit between Gmail and third-party email providers. Two-step verification Gmail supports two-step verification, an optional additional measure for users to protect their accounts when logging in. Once enabled, users are required to verify their identity using a second method after entering their username and password when logging in on a new device. 
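Codes of the kind generated by authenticator apps for two-step verification are typically time-based one-time passwords (TOTP, RFC 6238). The sketch below, using only the Python standard library, shows how such a six-digit code is derived from a shared secret; the base32 secret is a placeholder, and this illustrates the general algorithm rather than Google's own implementation.

```python
# Minimal RFC 6238 TOTP sketch: derive a six-digit code from a shared secret.
# The secret below is a placeholder, not a real credential.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # current 30-second time step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # placeholder secret
```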
Common methods include entering a code sent to a user's mobile phone through a text message, entering a code using the Google Authenticator smartphone app, responding to a prompt on an Android/iOS device or by inserting a physical security key into the computer's USB port. Using a security key for two-step verification was made available as an option in October 2014. 24-hour lockdowns If an algorithm detects what Google calls "abnormal usage that may indicate that your account has been compromised", the account can be automatically locked down for between one minute and 24 hours, depending on the type of activity detected. Listed reasons for a lock-down include: Receiving, deleting, or downloading large amounts of mail from POP/IMAP client within a short period of time. Sending a large number of messages which fail to deliver. Using software which automatically logs into one's account. Leaving multiple instances of Gmail open. Anti-child pornography policy Google combats child pornography through Gmail's servers in conjunction with the National Center for Missing & Exploited Children (NCMEC) to find children suffering abuse around the world. In collaboration with the NCMEC, Google creates a database of child pornography pictures. Each one of the images is given a unique numerical number known as a hash. Google then scans Gmail looking for the unique hashes. When suspicious images are located Google reports the incident to the appropriate national authorities. History The idea for Gmail was developed by Paul Buchheit several years before it was announced to the public. The project was known by the code name Caribou. During early development, the project was kept secret from most of Google's own engineers. This changed once the project improved, and by early 2004, most employees were using it to access the company's internal email system. Gmail was announced to the public by Google on April 1, 2004, as a limited beta release. In November 2006, Google began offering a Java-based application of Gmail for mobile phones. In October 2007, Google began a process of rewriting parts of the code that Gmail used, which would make the service faster and add new features, such as custom keyboard shortcuts and the ability to bookmark specific messages and email searches. Gmail also added IMAP support in October 2007. An update around January 2008 changed elements of Gmail's use of JavaScript, and resulted in the failure of a third-party script some users had been using. Google acknowledged the issue and helped users with workarounds. Gmail exited the beta status on July 7, 2009. Prior to December 2013, users had to approve to see images in emails, which acted as a security measure. This changed in December 2013, when Google, citing improved image handling, enabled images to be visible without user approval. Images are now routed through Google's secure proxy servers rather than the original external host servers. MarketingLand noted that the change to image handling means email marketers will no longer be able to track the recipient's IP address or information about what kind of device the recipient is using. However, Wired stated that the new change means senders can track the time when an email is first opened, as the initial loading of the images requires the system to make a "callback" to the original server. Growth In June 2012, Google announced that Gmail had 425 million active users globally. 
In May 2015, Google announced that Gmail had 900 million active users, 75% of whom were using the service on mobile devices. In February 2016, Google announced that Gmail had passed 1 billion active users. In July 2017, Google announced that Gmail had passed 1.2 billion active users. In the business sector, Quartz reported in August 2014 that, among 150 companies checked in three major categories in the United States (Fortune 50 largest companies, mid-size tech and media companies, and startup companies from the last Y Combinator incubator class), only one Fortune 50 company used Gmail – Google itself – while 60% of mid-sized companies and 92% of startup companies were using Gmail. In May 2014, Gmail became the first app on the Google Play Store to hit one billion installations on Android devices. Gamil Design company and misspellings Before the introduction of Gmail, the website of product and graphic design from Gamil Design in Raleigh, North Carolina, received 3,000 hits per month. In May 2004, a Google engineer who had accidentally gone to the Gamil site a number of times contacted the company and asked if the site had experienced an increase in traffic. In fact, the site's activity had doubled. Two years later, with 600,000 hits per month, the Internet service provider wanted to charge more, and Gamil posted the message on its site "You may have arrived here by misspelling Gmail. We understand. Typing fast is not our strongest skill. But since you've typed your way here, let's share." Google Workspace As part of Google Workspace (formerly G Suite), Google's business-focused offering, Gmail comes with additional features, including: Email addresses with the customer's domain name (@yourcompany.com) 99.9% guaranteed uptime with zero scheduled downtime for maintenance Either 30 GB or unlimited storage shared with Google Drive, depending on the plan 24/7 phone and email support Synchronization compatibility with Microsoft Outlook and other email providers Support for add-ons that integrate third-party apps purchased from the Google Workspace Marketplace with Gmail Reception Gmail is noted by web developers for its early adoption of Ajax. Awards Gmail was ranked second in PC World'''s "100 Best Products of 2005", behind Firefox. Gmail also won 'Honorable Mention' in the Bottom Line Design Awards 2005. In September 2006, Forbes declared Gmail to be the best webmail application for small businesses. In November 2006, Gmail received PC World's 4-star rating. Criticism Privacy Google has one privacy policy that covers all of its services. Google claims that they "will not target ads based on sensitive information, such as race, religion, sexual orientation, health, or sensitive financial categories." Automated scanning of email content Google's mail servers automatically scan emails for multiple purposes, including filtering spam and malware, and (until 2017) adding context-sensitive advertisements next to emails. 
Privacy advocates raised concerns about this practice; concerns included that allowing email content to be read by a machine (as opposed to a person) can allow Google to keep unlimited amounts of information forever; the automated background scanning of data raises the risk that the expectation of privacy in email usage will be reduced or eroded; information collected from emails could be retained by Google for years after its current relevancy to build complete profiles on users; emails sent by users from other email providers get scanned despite never having agreed to Google's privacy policy or terms of service; Google can change its privacy policy unilaterally, and for minor changes to the policy it can do so without informing users; in court cases, governments and organizations can potentially find it easier to legally monitor email communications; at any time, Google can change its current company policies to allow combining information from emails with data gathered from use of its other services; and any internal security problem on Google's systems can potentially expose many – or all – of its users. In 2004, thirty-one privacy and civil liberties organizations wrote a letter calling upon Google to suspend its Gmail service until the privacy issues were adequately addressed. The letter also called upon Google to clarify its written information policies regarding data retention and data sharing among its business units. The organizations also voiced their concerns about Google's plan to scan the text of all incoming messages for the purposes of ad placement, noting that the scanning of confidential email for inserting third-party ad content violates the implicit trust of an email service provider. On June 23, 2017, Google announced that, later in 2017, it would phase out the scanning of email content to generate contextual advertising, relying on personal data collected through other Google services instead. The company stated that this change was meant to clarify its practices and quell concerns among enterprise G Suite (now Google Workspace) customers who felt an ambiguous distinction between the free consumer and paid professional variants, the latter being advertising-free. Lawsuits In March 2011, a former Gmail user in Texas sued Google, claiming that its Gmail service violates users' privacy by scanning e-mail messages to serve relevant ads. In July 2012, some California residents filed two class action lawsuits against Google and Yahoo!, claiming that they illegally intercept emails sent by individual non-Gmail or non-Yahoo! email users to Gmail and Yahoo! recipients without the senders' knowledge, consent or permission. A motion filed by Google's attorneys in the case concedes that Gmail users have "no expectation of privacy". A court filing uncovered by advocacy group Consumer Watchdog in August 2013 revealed that Google stated in a court filing that no "reasonable expectation" exists among Gmail users in regard to the assured confidentiality of their emails. In response to a lawsuit filed in May 2013, Google explained:"... all users of email must necessarily expect that their emails will be subject to automated processing ...  
Just as a sender of a letter to a business colleague cannot be surprised that the recipient's assistant opens the letter, people who use web-based email today cannot be surprised if their communications are processed by the recipient's ECS [electronic communications service] provider in the course of delivery.A Google spokesperson stated to the media on August 15, 2013, that the corporation takes the privacy and security concerns of Gmail users "very seriously". April 2014 Terms of service update Google updated its terms of service for Gmail in April 2014 to create full transparency for its users in regard to the scanning of email content. The relevant revision states: "Our automated systems analyze your content (including emails) to provide you personally relevant product features, such as customized search results, tailored advertising, and spam and malware detection. This analysis occurs as the content is sent, received, and when it is stored." A Google spokesperson explained that the corporation wishes for its policies "to be simple and easy for users to understand." In response to the update, Jim Killock, executive director of the Open Rights Group, stated: "The really dangerous things that Google is doing are things like the information held in Analytics, cookies in advertising and the profiling that it is able to do on individual accounts". Microsoft ad campaign against Google In 2013, Microsoft launched an advertising campaign to attack Google for scanning email messages, arguing that most consumers are not aware that Google monitors their personal messages to deliver targeted ads. Microsoft claims that its email service Outlook does not scan the contents of messages and a Microsoft spokesperson called the issue of privacy "Google's kryptonite". In response, Google stated; "We work hard to make sure that ads are safe, unobtrusive and relevant ... No humans read your e-mail or Google Account information in order to show you advertisements or related information. An automated algorithm — similar to that used for features like Priority Inbox or spam filtering — determines which ads are shown." The New York Times cites "Google supporters", who say that "Microsoft's ads are distasteful, the last resort of a company that has been unsuccessful at competing against Google on the more noble battleground of products". Other privacy issues 2010 attack from China In January 2010, Google detected a "highly sophisticated" cyberattack on its infrastructure that originated from China. The targets of the attack were Chinese human rights activists, but Google discovered that accounts belonging to European, American and Chinese activists for human rights in China had been "routinely accessed by third parties". Additionally, Google stated that their investigation revealed that "at least" 20 other large companies from a "wide range of businesses" - including the Internet, finance, technology, media and chemical sectors – had been similarly targeted. Google was in the process of notifying those companies and it had also worked with relevant US authorities. In light of the attacks, Google enhanced the security and architecture of its infrastructure, and advised individual users to install anti-virus and anti-spyware on their computers, update their operating systems and web browsers, and be cautious when clicking on Internet links or when sharing personal information in instant messages and emails. 
Social network integration The February 2010 launch of Google Buzz, a now-defunct social network linked to Gmail, immediately drew criticism for publicly sharing details of users' contacts unless the default settings were changed. A new Gmail feature was launched in January 2014, whereby users could email people with Google+ accounts even if they did not know the recipient's email address. Marc Rotenberg, President of the Electronic Privacy Information Center, called the feature "troubling", and compared it to the initial privacy flaw of Google Buzz's launch. Update to DoubleClick privacy policy In June 2016, Julia Angwin of ProPublica wrote about Google's updated privacy policy, which deleted a clause that had stated Google would not combine DoubleClick web browsing cookie information with personally identifiable information from its other services. This change has allowed Google to merge users' personally identifiable information from different Google services to create one unified ad profile for each user. After publication of the article, Google reached out to ProPublica to say that the merge would not include Gmail keywords in ad targeting. Outages Gmail suffered at least seven outages in 2009, causing doubts about the reliability of its service. It suffered a new outage on February 28, 2011, in which a bug caused Gmail accounts to appear empty. Google stated in a blog post that "email was never lost" and restoration was in progress. Other outages occurred on April 17, 2012, September 24, 2013, January 24, 2014, January 29, 2019 and August 20, 2020. Google has stated that "Gmail remains more than 99.9% available to all users, and we're committed to keeping events like [the 2009 outage] notable for their rarity." "On behalf of" tag In May 2009, Farhad Manjoo wrote on The New York Times blog about Gmail's "on behalf of" tag. Manjoo explained: "The problems [sic] is, when you try to send outbound mail from your Gmail universal inbox, Gmail adds a tag telling your recipients that you're actually using Gmail and not your office e-mail. If your recipient is using Microsoft Outlook, he'll see a message like, 'From youroffice@domain.com on behalf of yourgmail@gmail.com.'" Manjoo further wrote that "Google explains that it adds the tag in order to prevent your e-mail from being considered spam by your recipient; the theory is that if the e-mail is honest about its origins, it shouldn't arouse suspicion by spam checking software". The following July, Google announced a new option that would remove the "On behalf of" tag, by sending the email from the server of the other email address instead of using Gmail's servers.
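The "on behalf of" behaviour described above arises when mail for another address is relayed through Gmail's own SMTP servers rather than the other provider's. For illustration only, a minimal sketch of sending a message through Gmail's documented SMTP endpoint (smtp.gmail.com, SSL on port 465) using Python's standard smtplib; the addresses and app password are placeholders.

```python
# Minimal sketch: sending a message through Gmail's SMTP server.
# The addresses and app password below are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "yourgmail@gmail.com"       # placeholder sender
msg["To"] = "recipient@example.com"       # placeholder recipient
msg["Subject"] = "Test"
msg.set_content("Sent via Gmail's SMTP server.")

with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
    server.login("yourgmail@gmail.com", "app-password")  # placeholder credential
    server.send_message(msg)
```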
Technology
Communication
null
569180
https://en.wikipedia.org/wiki/Plant%20propagation
Plant propagation
Plant propagation is the process by which new plants grow from various sources, including seeds, cuttings, and other plant parts. Plant propagation can refer to both man-made and natural processes. Propagation typically occurs as a step in the overall cycle of plant growth. For seeds, it happens after ripening and dispersal; for vegetative parts, it happens after detachment or pruning; for asexually-reproducing plants, such as strawberry, it happens as the new plant develops from existing parts. Countless plants are propagated each day in horticulture and agriculture. Plant propagation is vital to agriculture and horticulture, not just for human food production but also for forest and fibre crops, as well as traditional and herbal medicine. It is also important for plant breeding. Sexual propagation Seeds and spores can be used for reproduction (e.g. sowing). Seeds are typically produced from sexual reproduction within a species because genetic recombination has occurred. A plant grown from seeds may have different characteristics from its parents. Some species produce seeds that require special conditions to germinate, such as cold treatment. The seeds of many Australian plants and plants from southern Africa and the American west require smoke or fire to germinate. Some plant species, including many trees, do not produce seeds until they reach maturity, which may take many years. Seeds can be difficult to acquire, and some plants do not produce seed at all. Some plants (like certain plants modified using genetic use restriction technology) may produce seed, but not a fertile seed. In certain cases, this is done to prevent the accidental spreading of these plants, for example by birds and other animals. Asexual propagation Plant roots, stems, and leaves have a number of mechanisms for asexual or vegetative reproduction, which horticulturists employ to multiply or clone plants rapidly, such as in tissue culture and grafting. Plants are produced using material from a single parent and as such, there is no exchange of genetic material, therefore vegetative propagation methods almost always produce plants that are identical to the parent. In some plants, seeds can be produced without fertilization and the seeds contain only the genetic material of the parent plant. Therefore, propagation via asexual seeds or apomixis is asexual reproduction but not vegetative propagation. Techniques for vegetative propagation include: Air or ground layering Division Grafting and bud grafting, widely used in fruit tree propagation Micropropagation Offsets Stolons (runners) Storage organs such as bulbs, corms, tubers, and rhizomes Striking or cuttings Twin-scaling Heated propagator A heated propagator is a horticultural device to maintain a warm and damp environment for seeds and cuttings to grow in. They generally provide bottom heat (maintained at a particular temperature) and high humidity, which is essential in successful seed germination and in helping cuttings to take root. In colder climates they are sometimes used for plants like peppers and sweet peas which need warmer environments (about 15°C, for the plants listed) in order to germinate. If excessive condensation forms on the inside of the lid, the gardener can open the ventilating holes to regulate the temperature a little. 
Non-electric propagators (mainly a seed tray and a clear plastic lid) are a lot cheaper to purchase than a heated propagator, but without the constant regulated warmth and bottom heat provided by a heated propagator, growth of seedlings tends to be slower and less consistent (with increased risk of seeds failing to germinate). Seed propagation mat An electric seed-propagation mat is a heated rubber mat covered by a metal cage that is used in gardening. The mats are made so that planters containing seedlings can be placed on top of the metal cage without the risk of starting a fire. Another example is a seedling heat mat, multiple layers of durable, water resistant plastic material with insulated heating coils embedded inside (similar to underfloor heating systems, but with rubber mat instead of flooring). In extreme cold, gardeners place a loose plastic cover over the planters/mats which creates a sort of miniature greenhouse. The constant and predictable heat allows people to raise seedlings in the winter months when the weather is generally too cold for seedlings to survive naturally outside. When combined with a lighting system, many plants can be grown indoors using these mats. This can increase the variety of plants that a gardener can use.
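The bottom heat described above is essentially thermostatic: the mat or propagator heats until the set temperature is reached, then idles. The following is a generic, illustrative simulation of such on/off (hysteresis) control, not a description of any particular commercial propagator; the 15 °C set point follows the germination temperature mentioned above, and the other constants are arbitrary assumptions.

```python
# Illustrative simulation of on/off (hysteresis) temperature control, the
# general principle behind heated propagators and seedling heat mats.
# Heating/cooling rates and ambient temperature are arbitrary assumptions.
SET_POINT = 15.0    # target compost temperature, degrees C
HYSTERESIS = 0.5    # switching band to avoid rapid on/off cycling
AMBIENT = 8.0       # surrounding air temperature
HEAT_RATE = 0.3     # warming per minute while the element is on
LOSS_RATE = 0.05    # fraction of excess heat lost to the surroundings per minute

def simulate(minutes=180, start_temp=8.0):
    temp, heater_on = start_temp, False
    for minute in range(minutes):
        if temp < SET_POINT - HYSTERESIS:
            heater_on = True
        elif temp > SET_POINT + HYSTERESIS:
            heater_on = False
        if heater_on:
            temp += HEAT_RATE
        temp -= LOSS_RATE * (temp - AMBIENT)   # Newtonian cooling toward ambient
        if minute % 30 == 0:
            print(f"minute {minute:3d}: {temp:5.2f} C, heater {'on' if heater_on else 'off'}")

simulate()
```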
Technology
Horticulture
null
569213
https://en.wikipedia.org/wiki/Dimethyl%20sulfoxide
Dimethyl sulfoxide
Dimethyl sulfoxide (DMSO) is an organosulfur compound with the formula (CH3)2SO. This colorless liquid is the sulfoxide most widely used commercially. It is an important polar aprotic solvent that dissolves both polar and nonpolar compounds and is miscible with a wide range of organic solvents as well as water. It has a relatively high boiling point. DMSO is metabolized to compounds that leave a garlic-like taste in the mouth after DMSO is absorbed through the skin. In terms of chemical structure, the molecule has idealized Cs symmetry. It has a trigonal pyramidal molecular geometry consistent with other three-coordinate S(IV) compounds, with a nonbonded electron pair on the approximately tetrahedral sulfur atom. Synthesis and production Dimethyl sulfoxide was first synthesized in 1866 by the Russian scientist Alexander Zaytsev, who reported his findings in 1867. Its modern use as an industrial solvent began through popularization by Thor Smedslund at the Stepan Chemical Company. Dimethyl sulfoxide is produced industrially from dimethyl sulfide, a by-product of the Kraft process, by oxidation with oxygen or nitrogen dioxide. Reactions Reactions with electrophiles The sulfur center in DMSO is nucleophilic toward soft electrophiles and the oxygen is nucleophilic toward hard electrophiles. With methyl iodide it forms trimethylsulfoxonium iodide, [(CH3)3SO]I: (CH3)2SO + CH3I → [(CH3)3SO]I. This salt can be deprotonated with sodium hydride to form the sulfur ylide: [(CH3)3SO]I + NaH → (CH3)2S(CH2)O + NaI + H2. Acidity The methyl groups of DMSO are only weakly acidic, with a pKa of about 35. For this reason, the basicities of many weakly basic organic compounds have been examined in this solvent. Deprotonation of DMSO requires strong bases like lithium diisopropylamide and sodium hydride. Stabilization of the resultant carbanion is provided by the S(O)R group. The sodium derivative of DMSO formed in this way is referred to as dimsyl sodium. It is a base, e.g., for the deprotonation of ketones to form sodium enolates, phosphonium salts to form Wittig reagents, and formamidinium salts to form diaminocarbenes. It is also a potent nucleophile. Oxidant In organic synthesis, DMSO is used as a mild oxidant. It forms the basis of several selective sulfonium-based oxidation reactions, including the Pfitzner–Moffatt oxidation, the Corey–Kim oxidation, and the Swern oxidation. The Kornblum oxidation is conceptually similar. These all involve formation of an intermediate sulfonium species (R2S+X, where X is a heteroatom). Ligand and Lewis base Related to its ability to dissolve many salts, DMSO is a common ligand in coordination chemistry. Illustrative is the complex dichlorotetrakis(dimethyl sulfoxide)ruthenium(II) (RuCl2(dmso)4). In this complex, three DMSO ligands are bonded to ruthenium through sulfur. The fourth DMSO is bonded through oxygen. In general, the oxygen-bonded mode is more common. In carbon tetrachloride solutions, DMSO functions as a Lewis base with a variety of Lewis acids such as I2, phenols, trimethyltin chloride, metalloporphyrins, and the dimer Rh2Cl2(CO)4. The donor properties are discussed in the ECW model. The relative donor strength of DMSO toward a series of acids, versus other Lewis bases, can be illustrated by C-B plots. Applications Solvent DMSO is a polar aprotic solvent and is less toxic than other members of this class, such as dimethylformamide, dimethylacetamide, N-methyl-2-pyrrolidone, and hexamethylphosphoramide (HMPA). 
DMSO is frequently used as a solvent for chemical reactions involving salts, most notably Finkelstein reactions and other nucleophilic substitutions. It is also extensively used as an extractant in biochemistry and cell biology. Because DMSO is only weakly acidic, it tolerates relatively strong bases and as such has been extensively used in the study of carbanions. A set of non-aqueous pKa values (C-H, O-H, S-H and N-H acidities) for thousands of organic compounds has been determined in DMSO solution. Because of its high boiling point of 189 °C, DMSO evaporates slowly at normal atmospheric pressure. Samples dissolved in DMSO cannot be recovered as easily as from other solvents, as it is very difficult to remove all traces of DMSO by conventional rotary evaporation. One technique to fully recover samples is removal of the organic solvent by evaporation followed by addition of water (to dissolve DMSO) and cryodesiccation to remove both DMSO and water. Reactions conducted in DMSO are often diluted with water to precipitate or phase-separate products. The relatively high freezing point of DMSO, about 18.5 °C, means that at, or just below, room temperature it is a solid, which can limit its utility in some chemical processes (e.g. crystallization with cooling). In its deuterated form (DMSO-d6), it is a useful solvent for NMR spectroscopy, again due to its ability to dissolve a wide range of analytes, the simplicity of its own spectrum, and its suitability for high-temperature NMR spectroscopic studies. Disadvantages of the use of DMSO-d6 are its high viscosity, which broadens signals, and its hygroscopicity, which leads to an overwhelming H2O resonance in the 1H-NMR spectrum. It is often mixed with CDCl3 or CD2Cl2 for lower viscosity and melting points. DMSO is used to dissolve test compounds in in vitro drug discovery and drug design screening programs, including high-throughput screening programs. This is because it is able to dissolve both polar and nonpolar compounds, can be used to maintain stock solutions of test compounds (important when working with a large chemical library), is readily miscible with water and cell culture media, and has a high boiling point (this improves the accuracy of test compound concentrations by reducing room temperature evaporation). One limitation with DMSO is that it can affect cell line growth and viability, with low DMSO concentrations sometimes stimulating cell growth, and high DMSO concentrations sometimes inhibiting or killing cells. DMSO is used as a vehicle in in vivo studies of test compounds. It has, for example, been employed as a co-solvent to assist absorption of the flavonol glycoside Icariin in the nematode worm Caenorhabditis elegans. As with its use in in vitro studies, DMSO has some limitations in animal models. Pleiotropic effects can occur and, if DMSO control groups are not carefully planned, then solvent effects can falsely be attributed to the prospective drug. For example, even a very low dose of DMSO has a powerful protective effect against paracetamol (acetaminophen)-induced liver injury in mice. DMSO is finding increased use in manufacturing processes to produce microelectronic devices. It is widely used to strip photoresist in TFT-LCD 'flat panel' displays and advanced packaging applications (such as wafer-level packaging / solder bump patterning). DMSO is an effective paint stripper, being safer than many others, such as nitromethane and dichloromethane. 
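In the screening context described above, a practical concern is how much DMSO the final assay contains after a compound stock is diluted. The arithmetic is simple; the sketch below is a generic illustration, and the 10 mM stock and 10 µM assay concentrations are example values rather than figures from the text.

```python
# Illustrative calculation: final DMSO content after diluting a compound
# stock (dissolved in 100% DMSO) into an assay. Example values are assumptions.
def final_dmso_percent(stock_mM: float, assay_uM: float,
                       stock_dmso_percent: float = 100.0) -> float:
    """Return the resulting DMSO content (v/v %) after dilution."""
    dilution = (stock_mM * 1000.0) / assay_uM   # convert mM to uM, then divide
    return stock_dmso_percent / dilution

# A 10 mM stock diluted to 10 uM in the assay is a 1000-fold dilution,
# leaving about 0.1% DMSO, usually low enough to limit the cell-viability
# effects mentioned above.
print(final_dmso_percent(stock_mM=10, assay_uM=10))   # -> 0.1
```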
Biology DMSO is used in polymerase chain reaction (PCR) to inhibit secondary structures in the DNA template or the DNA primers. It is added to the PCR mix before reacting, where it interferes with the self-complementarity of the DNA, minimizing interfering reactions. DMSO in a PCR is applicable for supercoiled plasmids (to relax before amplification) or DNA templates with high GC-content (to decrease thermostability). For example, 10% final concentration of DMSO in the PCR mixture with Phusion decreases primer annealing temperature (i.e. primer melting temperature) by . It is well known as a reversible cell cycle arrester at phase G1 of human lymphoid cells. DMSO may also be used as a cryoprotectant, added to cell media to reduce ice formation and thereby prevent cell death during the freezing process. Approximately 10% may be used with a slow-freeze method, and the cells may be frozen at or stored in liquid nitrogen safely. In cell culture, DMSO is used to induce differentiation of P19 embryonic carcinoma cells into cardiomyocytes and skeletal muscle cells. Medicine Use of DMSO in medicine dates from around 1963, when an Oregon Health & Science University Medical School team, headed by Stanley Jacob, discovered it could penetrate the skin and other membranes without damaging them and could carry other compounds into a biological system. In medicine, DMSO is predominantly used as a topical analgesic, a vehicle for topical application of pharmaceuticals, as an anti-inflammatory, and an antioxidant. Because DMSO increases the rate of absorption of some compounds through biological tissues, including skin, it is used in some transdermal drug delivery systems. Its effect may be enhanced with the addition of EDTA. It is frequently compounded with antifungal medications, enabling them to penetrate not just skin but also toenails and fingernails. DMSO has been examined for the treatment of numerous conditions and ailments, but the U.S. Food and Drug Administration (FDA) has approved its use only for the symptomatic relief of patients with interstitial cystitis. A 1978 study concluded that DMSO brought significant relief to the majority of the 213 patients with inflammatory genitourinary disorders that were studied. The authors recommended DMSO for genitourinary inflammatory conditions not caused by infection or tumor in which symptoms were severe or patients failed to respond to conventional therapy. In interventional radiology, DMSO is used as a solvent for ethylene vinyl alcohol in the Onyx liquid embolic agent, which is used in embolization, the therapeutic occlusion of blood vessels. In cryobiology DMSO has been used as a cryoprotectant and is still an important constituent of cryoprotectant vitrification mixtures used to preserve organs, tissues, and cell suspensions. Without it, up to 90% of frozen cells will become inactive. It is particularly important in the freezing and long-term storage of embryonic stem cells and hematopoietic stem cells, which are often frozen in a mixture of 10% DMSO, a freezing medium, and 30% fetal bovine serum. In the cryogenic freezing of heteroploid cell lines (MDCK, VERO, etc.) a mixture of 10% DMSO with 90% EMEM (70% EMEM + 30% fetal bovine serum + antibiotic mixture) is used. As part of an autologous bone marrow transplant the DMSO is re-infused along with the patient's own hematopoietic stem cells. DMSO is metabolized by disproportionation to dimethyl sulfide and dimethyl sulfone. It is subject to renal and pulmonary excretion. 
A possible side effect of DMSO is therefore elevated blood dimethyl sulfide, which may cause a blood borne halitosis symptom. Alternative medicine DMSO's popularity as an alternative medicine is stated to stem from a March 1980 60 Minutes documentary "The Riddle of DMSO" and April 1980 Time magazine article covering the treatments of ardent DMSO advocate Dr. Stanley Jacob beginning in the 1960s. The use of DMSO as an alternative treatment for cancer is of particular concern, as it has been shown to interfere with a variety of chemotherapy drugs, including cisplatin, carboplatin, and oxaliplatin. There is insufficient evidence to support the hypothesis that DMSO has any effect, and most sources agree that its history of side effects when tested warrants caution when using it as a dietary supplement, for which it is marketed heavily with the usual disclaimer. DMSO is an ingredient in some products listed by the U.S. FDA as fake cancer cures and the FDA has had a running battle with distributors. One such distributor is Mildred Miller, who promoted DMSO for a variety of disorders and was consequently convicted of Medicare fraud. Veterinary medicine DMSO is commonly used in veterinary medicine as a liniment for horses, alone or in combination with other ingredients. In the latter case, often, the intended function of the DMSO is as a solvent, to carry the other ingredients across the skin. Also in horses, DMSO is used intravenously, again alone or in combination with other drugs. It is used alone for the treatment of increased intracranial pressure and/or cerebral edema in horses. Taste The perceived garlic taste upon skin contact with DMSO may be due to nonolfactory activation of TRPA1 receptors in trigeminal ganglia. Unlike dimethyl and diallyl disulfides (which have odors resembling garlic), mono- and tri- sulfides (which typically have foul odors), and similar odiferous sulfur compounds, the pure chemical DMSO is odorless. Safety Toxicity DMSO is a non-toxic solvent with a median lethal dose higher than ethanol (DMSO: LD50, oral, rat, 14,500 mg/kg; ethanol: LD50, oral, rat, 7,060 mg/kg). DMSO can cause contaminants, toxins, and medicines to be absorbed through the skin, which may cause unexpected effects. DMSO is thought to increase the effects of blood thinners, steroids, heart medicines, sedatives, and other drugs. In some cases this could be harmful or dangerous. Because DMSO easily penetrates the skin, substances dissolved in DMSO may quickly be absorbed. Glove selection is important when working with DMSO. Butyl rubber, fluoroelastomer, neoprene, or thick (15mil / 0.4mm) latex gloves are recommended. Nitrile gloves, which are very commonly used in chemical laboratories, may protect from brief contact but have been found to degrade rapidly with exposure to DMSO. Regulation In Australia, it is listed as a Schedule 4 (S4) Drug, and a company has been prosecuted for adding it to products as a preservative. Clinical safety Early clinical trials with DMSO were stopped because of questions about its safety, especially its ability to harm the eye. The most commonly reported side effects include headaches and burning and itching on contact with the skin. Strong allergic reactions have been reported. On September 9, 1965, The Wall Street Journal reported that a manufacturer of the chemical warned that the death of an Irish woman after undergoing DMSO treatment for a sprained wrist may have been due to the treatment, although no autopsy was done, nor was a causal relationship established. 
Clinical research using DMSO was halted and did not begin again until the National Academy of Sciences (NAS) published findings in favor of DMSO in 1972. In 1978, the US FDA approved DMSO for treating interstitial cystitis. In 1980, the US Congress held hearings on claims that the FDA was slow in approving DMSO for other medical uses. In 2007, the US FDA granted "fast track" designation on clinical studies of DMSO's use in reducing brain tissue swelling following traumatic brain injury. DMSO exposure to developing mouse brains can produce brain degeneration. This neurotoxicity could be detected at doses as low as 0.3mL/kg, a level exceeded in children exposed to DMSO during bone marrow transplant. Odor problem DMSO disposed into sewers can cause odor problems in municipal effluents: waste water bacteria transform DMSO under hypoxic (anoxic) conditions into dimethyl sulfide (DMS) that has a strong disagreeable odor, similar to rotten cabbage. However, chemically pure DMSO is odorless because of the lack of C-S-C (sulfide) and C-S-H (mercaptan) linkages. Deodorization of DMSO is achieved by removing the odorous impurities it contains. Explosion hazard Dimethyl sulfoxide can produce an explosive reaction when exposed to acyl chlorides; at a low temperature, this reaction produces the oxidant for Swern oxidation. DMSO can decompose at the boiling temperature of 189 °C at normal pressure, possibly leading to an explosion. The decomposition is catalyzed by acids and bases and therefore can be relevant at even lower temperatures. A strong to explosive reaction also takes place in combination with halogen compounds, metal nitrides, metal perchlorates, sodium hydride, periodic acid and fluorinating agents.
Physical sciences
Concepts: General
Chemistry
569315
https://en.wikipedia.org/wiki/Dimethyl%20ether
Dimethyl ether
Dimethyl ether (DME; also known as methoxymethane) is the organic compound with the formula CH3OCH3 (sometimes ambiguously simplified to C2H6O, as it is an isomer of ethanol). The simplest ether, it is a colorless gas that is a useful precursor to other organic compounds and an aerosol propellant that is currently being demonstrated for use in a variety of fuel applications. Dimethyl ether was first synthesised by Jean-Baptiste Dumas and Eugene Péligot in 1835 by distillation of methanol and sulfuric acid. Production Approximately 50,000 tons were produced in 1985 in Western Europe by dehydration of methanol: 2 CH3OH → CH3OCH3 + H2O. The required methanol is obtained from synthesis gas (syngas). Other possible improvements call for a dual catalyst system that permits both methanol synthesis and dehydration in the same process unit, with no methanol isolation and purification. Both the one-step and two-step processes above are commercially available. The two-step process is relatively simple and start-up costs are relatively low. A one-step liquid-phase process is in development. From biomass Dimethyl ether is a synthetic second generation biofuel (BioDME), which can be produced from lignocellulosic biomass. The EU is considering BioDME in its potential biofuel mix in 2030. It can also be made from biogas or methane from animal, food, and agricultural waste, or even from shale gas or natural gas. The Volvo Group is the coordinator for the European Community Seventh Framework Programme project BioDME, in which Chemrec's BioDME pilot plant is based on black liquor gasification in Piteå, Sweden. Applications The largest use of dimethyl ether is as the feedstock for the production of the methylating agent, dimethyl sulfate, which entails its reaction with sulfur trioxide: (CH3)2O + SO3 → (CH3O)2SO2. Dimethyl ether can also be converted into acetic acid using carbonylation technology related to the Monsanto acetic acid process: (CH3)2O + 2 CO + H2O → 2 CH3COOH. Laboratory reagent and solvent Dimethyl ether is a low-temperature solvent and extraction agent, applicable to specialised laboratory procedures. Its usefulness is limited by its low boiling point (about −24 °C), but the same property facilitates its removal from reaction mixtures. Dimethyl ether is the precursor to the useful alkylating agent, trimethyloxonium tetrafluoroborate. Niche applications A mixture of dimethyl ether and propane is used in some over-the-counter "freeze spray" products to treat warts by freezing them. In this role, it has supplanted halocarbon compounds (Freon). Dimethyl ether is also a component of certain high-temperature "Map-Pro" blowtorch gas blends, supplanting the use of methyl acetylene and propadiene mixtures. Dimethyl ether is also used as a propellant in aerosol products. Such products include hair spray, bug spray and some aerosol glue products. Research Fuel A potentially major use of dimethyl ether is as a substitute for propane in LPG used as fuel in households and industry. Dimethyl ether can also be used as a blendstock in propane autogas. It is also a promising fuel in diesel engines and gas turbines. For diesel engines, an advantage is the high cetane number of 55, compared to that of diesel fuel from petroleum, which is 40–53. Only moderate modifications are needed to convert a diesel engine to burn dimethyl ether. The simplicity of this short carbon chain compound leads to very low emissions of particulate matter during combustion. For these reasons as well as being sulfur-free, dimethyl ether meets even the most stringent emission regulations in Europe (EURO5), U.S. (U.S. 
2010), and Japan (2009 Japan). At the European Shell Eco Marathon, an unofficial world championship for mileage, a vehicle running on 100% dimethyl ether drove 589 km/L (169.8 cm3/100 km) in gasoline-equivalent terms, using a 50 cm3 displacement two-stroke engine. As well as winning, they beat the old standing record of 306 km/L (326.8 cm3/100 km), set by the same team in 2007. To study dimethyl ether combustion, a chemical kinetic mechanism is required, which can be used for computational fluid dynamics calculations. Refrigerant Dimethyl ether is a refrigerant with the ASHRAE refrigerant designation R-E170. It is also used in refrigerant blends with, for example, ammonia, carbon dioxide, butane and propene. Dimethyl ether was the first refrigerant. In 1876, the French engineer Charles Tellier bought the ex-Elder-Dempster 690-ton cargo ship Eboe and fitted it with a methyl-ether refrigerating plant of his own design. The ship was renamed Le Frigorifique and successfully imported a cargo of refrigerated meat from Argentina. However, the machinery could be improved, and in 1877 another refrigerated ship, called Paraguay, with a refrigerating plant improved by Ferdinand Carré, was put into service on the South American run. Safety Unlike other alkyl ethers, dimethyl ether resists autoxidation. Dimethyl ether is also relatively non-toxic, although it is highly flammable. On July 28, 1948, a BASF factory in Ludwigshafen suffered an explosion after 30 tonnes of dimethyl ether leaked from a tank and ignited in the air. Two hundred people died, and a third of the industrial plant was destroyed. Data sheet Routes to produce dimethyl ether Vapor pressure
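The mileage figures quoted in the fuel section above use two reciprocal units, km/L and cm3 of fuel per 100 km. The short conversion sketch below simply reproduces those numbers; it is a worked check, not additional data.

```python
# Convert fuel economy between km/L and cm3 of fuel per 100 km,
# used here only to check the Eco Marathon figures quoted above.
def km_per_litre_to_cm3_per_100km(km_per_litre: float) -> float:
    litres_per_100km = 100.0 / km_per_litre
    return litres_per_100km * 1000.0   # 1 litre = 1000 cm3

print(round(km_per_litre_to_cm3_per_100km(589), 1))   # -> 169.8 cm3/100 km
print(round(km_per_litre_to_cm3_per_100km(306), 1))   # -> 326.8 cm3/100 km
```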
Physical sciences
Esters and ethers
Chemistry
569459
https://en.wikipedia.org/wiki/White-tailed%20deer
White-tailed deer
The white-tailed deer (Odocoileus virginianus), also known commonly as the whitetail and the Virginia deer, is a medium-sized species of deer native to North America, Central America, and South America as far south as Peru and Bolivia, where it predominately inhabits high mountain terrains of the Andes. It has also been introduced to New Zealand, all the Greater Antilles in the Caribbean (Cuba, Jamaica, Hispaniola, and Puerto Rico), and some countries in Europe, such as the Czech Republic, Finland, France, Germany, Romania and Serbia. In the Americas, it is the most widely distributed wild ungulate. In North America, the species is widely distributed east of the Rocky Mountains as well as in southwestern Arizona and most of Mexico, except Lower California. It is mostly displaced by the black-tailed or mule deer (Odocoileus hemionus) from that point west except for mixed deciduous riparian corridors, river valley bottomlands, and lower foothills of the northern Rocky Mountain region from Wyoming west to eastern Washington and eastern Oregon and north to northeastern British Columbia and southern Yukon, including in the Montana valley and foothill grasslands. The westernmost population of the species, known as the Columbian white-tailed deer, was once widespread in the mixed forests along the Willamette and Cowlitz River valleys of western Oregon and southwestern Washington, but current numbers are considerably reduced, and it is classified as near-threatened. This population is separated from other white-tailed deer populations. Texas is home to the most white-tailed deer of any U.S. state or Canadian province, with an estimated population of 5.3 million. High populations of white-tailed deer exist in the Edwards Plateau of central Texas. Michigan, Minnesota, Iowa, Mississippi, Missouri, New Jersey, Illinois, Wisconsin, Maryland, New York, North Dakota, Ohio, and Indiana also boast high deer densities. The conversion of land adjacent to the Canadian Rockies to agriculture use and partial clear-cutting of coniferous trees, resulting in widespread deciduous vegetation, has been favorable to the white-tailed deer and has pushed its distribution to as far north as Yukon. Populations of deer around the Great Lakes have expanded their range northwards, also due to conversion of land to agricultural use, with local caribou, elk, and moose populations declining. White-tailed deer are crepuscular, meaning they are most active during the dawn and dusk hours. Taxonomy Some taxonomists have attempted to separate white-tailed deer into a host of subspecies, based largely on morphological differences. Genetic studies, however, suggest fewer subspecies within the animal's range, as compared to the 30 to 40 subspecies that some scientists have described in the last century. The Florida Key deer, O. v. clavium, and the Columbian white-tailed deer, O. v. leucurus, are both listed as endangered under the U.S. Endangered Species Act. In the United States, the Virginia white-tail, O. v. virginianus, is among the most widespread subspecies. Several local deer populations, especially in the Southern United States, are descended from white-tailed deer transplanted from various localities east of the Continental Divide. Some of these deer populations may have been from as far north as the Great Lakes region to as far west as Texas, yet are also quite at home in the Appalachian and Piedmont regions of the south. These deer, over time, have intermixed with the local indigenous deer (O. v. virginianus and/or O. v. 
macrourus) populations. Central and South America have a complex number of white-tailed deer subspecies that range from Guatemala to as far south as Peru. This list of subspecies of deer is more exhaustive than the list of North American subspecies, and the number of subspecies is also questionable. However, the white-tailed deer populations in these areas are difficult to study, due to overhunting in many parts and a lack of protection. Some areas no longer carry deer, so assessing the genetic difference of these animals is difficult. Subspecies There are 26 subspecies; seventeen of these occur in North America, ordered alphabetically. (Numbers in parentheses are range map locations.) North and Central America O. v. acapulcensis  (1)– (Southern coastal Mexico) O. v. borealis  (2)– northern white-tailed deer (the largest and darkest of the white-tailed deer) O. v. carminis  (4)– Carmen Mountains white-tailed deer (Texas-Mexico border) O. v. chiriquensis  (5)– (Panama) O. v. clavium  (6)– Key deer or Florida Keys white-tailed deer O. v. couesi  (7)– Coues' white-tailed deer, Arizona white-tailed deer, or fantail deer O. v. dacotensis  (9)– Dakota white-tailed deer or northern plains white-tailed deer (most northerly distribution, rivals the northern white-tailed deer in size) O. v. hiltonensis  (12)– Hilton Head Island white-tailed deer O. v. leucurus  (13)– Columbian white-tailed deer (Oregon and western coastal area) O. v. macrourus  (14)– Kansas white-tailed deer O. v. mcilhennyi  (15)– Avery Island white-tailed deer O. v. mexicanus  (17)– (central Mexico) O. v. miquihuanensis  (18)– (northern central Mexico) O. v. nelsoni  (19)– (southern Mexico to Nicaragua) O. v. nemoralis  (20)– Nicaraguan white-tailed deer (Gulf of Mexico to Suriname in South America; further restricted from Honduras to Panama) O. v. nigribarbis  (21)– Blackbeard Island white-tailed deer O. v. oaxacensis  (22)– (southern Mexico) O. v. ochrourus  (23)– northwestern white-tailed deer or northern Rocky Mountains white-tailed deer O. v. osceola  (24)– Florida coastal white-tailed deer O. v. rothschildi  (26)– (Coiba Island, Panama) O. v. seminolus  (27)– Florida white-tailed deer O. v. sinaloae  (28)– (southern Mexico) O. v. taurinsulae  (29)– Bulls Island white-tailed deer (Bulls Island, South Carolina) O. v. texanus  (30)– Texas white-tailed deer O. v. thomasi  (31)– (southern Mexico) O. v. toltecu  (32)– (southern Mexico to El Salvador) O. v. venatorius  (35)– Hunting Island white-tailed deer (Hunting Island, South Carolina) O. v. veraecrucis (36)– (eastern coastal Mexico) O. v. virginianus  (37)– Virginia white-tailed deer or southern white-tailed deer O. v. yucatanesis (38)– (northern Yucatán, Mexico) South America O. v. cariacou  (3)– (French Guiana and northern Brazil) O. v. curassavicus  (8)– (Curaçao) O. v. goudotii  (10)– (Colombia (Andes) and western Venezuela) O. v. gymnotis  (11)– South American white-tailed deer (northern half of Venezuela, including Venezuela's Llanos region) O. v. margaritae  (16)– (Margarita Island) O. v. nemoralis  (20)– Nicaraguan white-tailed deer (Gulf of Mexico to Suriname in South America; further restricted from Honduras to Panama) O. v. peruvianus  (25)– South American white-tailed deer or Andean white-tailed deer (most southerly distribution in Peru and possibly Bolivia) O. v. tropicalis  (33)– Peru and Ecuador (possibly Colombia) O. v. 
ustus (34)– Ecuador (possibly southern Colombia and northern Peru) Description The white-tailed deer's coat is a reddish-brown in the spring and summer, and turns to a grey-brown throughout the fall and winter. The white-tailed deer can be recognized by the characteristic white underside to its tail. It raises its tail when it is alarmed to warn the predator that it has been detected. An indication of a deer's age is the length of the snout and the color of the coat, with older deer tending to have longer snouts and grayer coats. A population of white-tailed deer in New York is entirely white except for the nose and hooves – not albino – in color. The former Seneca Army Depot in Romulus, New York, has the largest known concentration of white deer. Strong conservation efforts have allowed white deer to thrive within the confines of the depot. The white-tailed deer's horizontally slit pupil allows for good night vision and color vision during the day. Whitetails process visual images at a much more rapid rate than humans and are better at detecting motion in low-light conditions. Size and weight The white-tailed deer is highly variable in size, generally following both Allen's rule and Bergmann's rule that the average size is larger farther away from the equator. North American male deer (also known as a buck) usually weigh , but mature bucks over have been recorded in the northernmost reaches of their native range, namely Minnesota, Ontario, and Manitoba. In 1926, Carl J. Lenander Jr. took a white-tailed buck near Tofte, Minnesota, that weighed after it was field-dressed (internal organs and blood removed) and was estimated at when alive. The female (doe) in North America usually weighs from . White-tailed deer from the tropics and the Florida Keys are markedly smaller-bodied than temperate populations, averaging , with an occasional adult female as small as . White-tailed deer from the Andes are larger than other tropical deer of this species and have thick, slightly woolly-looking fur. Length ranges from , including a tail of , and the shoulder height is . Including all races, the average summer weight of adult males is and is in adult females. It is among the largest deer species in North America, and is also one of the largest in South America, behind only the marsh deer. Deer have dichromatic (two-color) vision with blue and yellow primaries; humans normally have trichromatic vision. Thus, deer poorly distinguish the oranges and reds that stand out so well to humans. This makes it very convenient to use deer-hunter orange as a safety color on caps and clothing to avoid accidental shootings during hunting seasons. Antlers Males regrow their antlers every year. About one in 10,000 females also has antlers, although this is usually associated with freemartinism. Bucks without branching antlers are often termed "spikehorn", "spiked bucks", "spike bucks", or simply "spikes/spikers". The spikes can be quite long or very short. Length and branching of antlers are determined by nutrition, age, and genetics. Rack growth tends to be very important from late spring until about a month before velvet sheds. Healthy deer in some areas that are well-fed can have eight-point branching antlers as yearlings (1.5 years old). Although antler size typically increases with age, antler characteristics (e.g., number of points, length, or thickness of the antlers) are not good indicators of buck age, in general, because antler development is influenced by the local environment. 
The individual deer's nutritional needs for antler growth are dependent on the diet of the deer, particularly protein intake. Good antler-growth nutritional needs (calcium) and good genetics combine to produce wall trophies in some of their range. Spiked bucks are different from "button bucks" or "nubbin' bucks", which are male fawns and are generally about six to nine months of age during their first winter. They have skin-covered knobs on their heads. They can have bony protrusions up to in length, but that is very rare, and they are not the same as spikes. Antlers begin to grow in late spring, covered with a highly vascularised tissue known as velvet. Bucks have either a typical or an atypical antler arrangement. Typical antlers are symmetrical and the points grow straight up off the main beam. Atypical antlers are asymmetrical and the points may project at any angle from the main beam. These descriptions are not the only criteria for typical and atypical antler arrangements. The Boone and Crockett or Pope and Young scoring systems also define relative degrees of typicality and atypicality by procedures to measure what proportion of the antlers is asymmetrical. Therefore, bucks with only slight asymmetry are scored as "typical". A buck's inside spread can be from . Bucks shed their antlers when all females have been bred, from late December to February. Ecology White-tailed deer are generalists and can adapt to a wide variety of habitats. The largest deer occur in the temperate regions of North America. The northern white-tailed deer (O. v. borealis), Dakota white-tailed deer (O. v. dacotensis), and northwest white-tailed deer (O. v. ochrourus) are some of the largest animals, with large antlers. The smallest deer occur in the Florida Keys and in partially wooded lowlands in the Neotropics. Although most often thought of as forest animals depending on relatively small openings and edges, white-tailed deer can equally adapt themselves to life in more open prairie, savanna woodlands, and sage communities as in the Southwestern United States and northern Mexico. These savanna-adapted deer have relatively large antlers in proportion to their body size and large tails. Also, a noticeable difference exists in size between male and female deer of the savannas. The Texas white-tailed deer (O. v. texanus), of the prairies and oak savannas of Texas and parts of Mexico, are the largest savanna-adapted deer in the Southwest, with impressive antlers that might rival deer found in Canada and the northern United States. Populations of Arizona (O. v. couesi) and Carmen Mountains (O. v. carminis) white-tailed deer inhabit montane mixed oak and pine woodland communities. The Arizona and Carmen Mountains deer are smaller, but may also have impressive antlers, considering their size. The white-tailed deer of the Llanos region of Colombia and Venezuela (O. v. apurensis and O. v. gymnotis) have antler dimensions similar to the Arizona white-tailed deer. In some western regions of North America, the white-tailed deer range overlaps with that of the mule deer. White-tail incursions in the Trans-Pecos region of Texas have resulted in some hybrids. In the extreme north of the range, their habitat is also used by moose in some areas. White-tailed deer may occur in areas that are also exploited by elk (wapiti) such as in mixed deciduous river valley bottomlands and formerly in the mixed deciduous forest of the eastern United States. 
In places such as Glacier National Park in Montana and several national parks in the Columbian Mountains (Mount Revelstoke National Park) and Canadian Rocky Mountains, as well as in the Yukon Territory (Yoho National Park and Kootenay National Park), white-tailed deer are shy and more reclusive than the coexisting mule deer, elk, and moose. Central American white-tailed deer prefer tropical and subtropical dry broadleaf forests, seasonal mixed deciduous forests, savanna, and adjacent wetland habitats over dense tropical and subtropical moist broadleaf forests. South American subspecies of white-tailed deer live in two types of environments. The first type, similar to the Central American deer, consists of savannas, dry deciduous forests, and riparian corridors that cover much of Venezuela and eastern Colombia. The other type is the higher elevation mountain grassland/mixed forest ecozones in the Andes Mountains, from Venezuela to Peru. The Andean white-tailed deer seem to retain gray coats due to the colder weather at high altitudes, whereas the lowland savanna forms retain the reddish brown coats. South American white-tailed deer, like those in Central America, also generally avoid dense moist broadleaf forests. Since the second half of the 19th century, white-tailed deer have been introduced to Europe. A population in the Brdy area remains stable today. In 1935, white-tailed deer were introduced to Finland. The introduction was successful, and the deer began spreading through northern Scandinavia and southern Karelia, competing with, and sometimes displacing, native species. The 2020 population of some 109,000 deer originated from four animals provided by Finnish Americans from Minnesota. Diet White-tailed deer eat large amounts of food, commonly eating legumes and foraging on other plants, including shoots, leaves, cacti (in deserts), prairie forbs, and grasses. They also eat acorns, fruit, and corn. Their multi-chambered stomachs allow them to eat some foods humans cannot, such as mushrooms (even those that are toxic to humans) and poison ivy. Their diets vary by season according to the availability of food sources. They also eat hay, grass, white clover, and other foods they can find in a farmyard. Though almost entirely herbivorous, white-tailed deer have been known to opportunistically feed on nesting songbirds, field mice, and birds trapped in mist nets, if the need arises. When additional amounts of minerals such as calcium are needed in their diet, they can resort to osteophagy, chewing on bones of dead animals. A grown deer can eat around of vegetable matter annually. A population of around can start to destroy the forest environment in their foraging area. Their diet consists mostly of woody shoots, stems, and leaves of woody plants as well as grasses, cultivated crops, nuts, berries, and wildflowers. The items they feed on are not generally abundant in mature forests and are mostly found at "edges". Edges are described as a "mosaic of vegetation types that create numerous interwoven 'edges' where their respective boundaries intersect" and provide optimum cover for browsers such as the white-tailed deer. White-tailed deer can easily thrive in suburban areas, as a combination of increased safety from some predators (including human hunting), high quality and abundance of foods in home gardens, city parks, open farmland, and other factors all create landscapes with an abundance of edge habitat. The white-tailed deer is a ruminant, which means it has a four-chambered stomach. 
Each chamber has a different and specific function that allows the deer to eat a variety of different foods, digesting it at a later time in a safe area of cover. The stomach hosts a complex set of microbes that change as the deer's diet changes through the seasons. If the microbes necessary for digestion of a particular food (e.g., hay) are absent, it will not be digested. Utilizing foregut fermentation, the fermented ingesta (known as cud) is regurgitated and chewed again, to mix it with saliva and reduce the particle size. Smaller particle size allows for increased nutrient absorption and the saliva is important because it provides liquid for the microbial population, recirculates nitrogen and minerals, and acts as a buffer for the rumen pH. Predators There are several natural predators of white-tailed deer, with wolves, cougars, American alligators, jaguars (in the American southwest, Mexico, and Central and South America) and humans being the most effective natural predators. Aside from humans, these predators frequently pick out easily caught young or infirm deer (which is believed to improve the genetic stock of a population), but can and do take healthy adults of any size. Bobcats, Canada lynx, grizzly and American black bears, wolverines, and packs of coyotes usually prey mainly on fawns. Bears may sometimes attack adult deer, while lynxes, coyotes, and wolverines are most likely to take adult deer when the ungulates are weakened by harsh winter weather. Many scavengers rely on deer as carrion, including New World vultures, raptors, red and gray foxes, and corvids. Few wild predators can afford to be picky and any will readily consume deer as carrion. Records exist of American crows and common ravens attempting to prey on white-tailed deer fawns by pecking around their face and eyes, though no accounts of success are given. Occasionally, both golden and bald eagles may capture deer fawns with their talons. In one case, a golden eagle was filmed in Illinois unsuccessfully trying to prey on a large mature white-tailed deer. White-tailed deer typically respond to the presence of potential predators by breathing very heavily (also called blowing) and fleeing. When they blow, the sound alerts other deer in the area. As they run, the flash of their white tails warns other deer. This especially serves to warn fawns when their mother is alarmed. Most natural predators of white-tailed deer hunt by ambush, although canids may engage in an extended chase, hoping to exhaust the prey. Felids typically try to suffocate the deer by biting the throat. Cougars and jaguars will initially knock the deer off balance with their powerful forelegs, whereas the smaller bobcats and lynxes will jump astride the deer to deliver a killing bite. In the case of canids and wolverines, the predators bite at the limbs and flanks, hobbling the deer, until they can reach vital organs and kill it through loss of blood. Bears, which usually target fawns, often simply knock down the prey and then start eating it while it is still alive. Alligators snatch deer as they try to drink from or cross bodies of water, grabbing them with their powerful jaws and dragging them into the water to drown. Most primary natural predators of white-tailed deer have been essentially extirpated in eastern North America, with a very small number of reintroduced critically endangered red wolves, around North Carolina and a small remnant population of Florida panthers, a subspecies of the cougar. 
Gray wolves, the leading cause of deer mortality where they overlap, co-occur with whitetails in northern Minnesota, Wisconsin, Michigan, and most of Canada. This almost certainly plays a role in the overpopulation issues with this species. Coyotes, widespread and with a rapidly expanding population, are often the only major nonhuman predator of the species in the Eastern U.S., besides an occasional domestic dog. In some areas, American black bears are also significant predators. In north-central Pennsylvania, black bears were found to be nearly as common predators of fawns as coyotes. Bobcats, still fairly widespread, usually only exploit deer as prey when smaller prey is scarce. Discussions have occurred regarding the possible reintroduction of gray wolves and cougars to sections of the eastern United States, largely because of the apparent controlling effect they have through deer predation on local ecosystems, as has been illustrated in the reintroduction of wolves to Yellowstone National Park and their controlling effect on previously overpopulated elk. However, due to the heavy urban development in much of the Eastern U.S., and fear for livestock and human lives, such ideas have ultimately been rejected by local communities and/or by government services and have not been carried through. In areas where they are heavily hunted by humans, deer run almost immediately from people and are quite wary even where not heavily hunted. White-tailed deer can run faster than their predators and have been recorded sprinting at speeds of per hour and sustaining speeds of per hour over distances of ; this ranks them amongst the fastest of all deer, alongside the Eurasian roe deer. They can also jump high and up to forward. When shot at, a white-tailed deer will run at high speeds with its tail down. If frightened, the deer will hop in a zig-zag with its tail straight up. If the deer feels extremely threatened, however, it may choose to attack, charging the person or predator posing the threat, using its antlers or, if none are present, its head to fight off its target. Forest alteration In certain parts of eastern North America, high deer densities have caused large reductions in plant biomass, including the density and heights of certain forest wildflowers, tree seedlings, and shrubs. Although they can be seen as a nuisance species, white-tailed deer also play an important role in biodiversity. At the same time, increases in browse-tolerant grasses and sedges and unpalatable ferns have often accompanied intensive deer herbivory. Changes to the structure of forest understories have, in turn, altered the composition and abundance of forest bird communities in some areas. In regions of intermediate density, deer activity has also been shown to increase herbaceous plant diversity, particularly in disturbed areas, by reducing competitively dominant plants; and to increase the growth rates of important canopy trees, perhaps by increased nutrient inputs into the soil. In northeastern hardwood forests, high-density deer populations affect plant succession, particularly following clear-cuts and patch cuts. In succession without deer, annual herbs and woody plants are followed by commercially valuable, shade-tolerant oak and maple. The shade-tolerant trees prevent the invasion of less commercial cherry and American beech, which are stronger nutrient competitors, but not as shade tolerant. 
Although deer eat shade-tolerant plants and acorns, this is not the only way deer can shift the balance in favor of nutrient competitors. Deer consuming earlier-succession plants allows in enough light for nutrient competitors to invade. Since slow-growing oaks need several decades to develop root systems sufficient to compete with faster-growing species, removal of the canopy prior to that point amplifies the effect of deer on succession. High-density deer populations possibly could browse eastern hemlock seedlings out of existence in northern hardwood forests; however, this scenario seems unlikely, given that deer browsing is not considered the critical factor preventing hemlock re-establishment at large scales. Ecologists have also expressed concern over the facilitative effect high deer populations have on invasions of exotic plant species. In a study of eastern hemlock forests, browsing by white-tailed deer caused populations of three exotic plants to rise faster than they do in the areas which are absent of deer. Seedlings of the three invading species rose exponentially with deer density, while the most common native species fell exponentially with deer density, because deer were preferentially eating the native species. The effects of deer on the invasive and native plants were magnified in cases of canopy disturbance. Population and controls The white-tailed deer population in North America has declined by several million since 2000, but as of 2017 is considered healthy and is approximately equal to the historical pre-colonization white-tailed population on the continent. The species has rebounded considerably after being overhunted nearly to extinction in the late 1800s and very early 1900s. By contrast, the species' closest cousins (blacktail deer and mule deer) have seen their populations cut by more than half in North America after peaking in 1960 and have never regained their pre-colonization numbers. In the 21st century, the loss of natural predators has been more than offset by the ongoing loss of natural habitat to human development, and changes to logging operations. Several methods have been developed to curb the population of white-tailed deer in suburban areas where they are perceived as overabundant, and these can be separated into lethal and nonlethal strategies. Most common in the U.S. is the use of extended hunting as population control, as well as a way to provide meat for humans. In Maryland and many other states, a state agency sets regulations on bag limits and hunting in the area depending on the deer population levels assessed. Hunting seasons may fluctuate in duration, or restrictions may be set to affect how many deer or what type of deer can be hunted in certain regions. For the 2015–2016 white-tailed deer-hunting season, some areas allowed only the hunting of antlerless white-tailed deer. These included young bucks and females, which encouraged the culling of does, aiding in population control. A more targeted yet more expensive removal strategy than public hunting is a method referred to as sharpshooting. Sharpshooting can be an option when the area inhabited by the deer is unfit for public hunting. This strategy may work in areas close to human populations, since it is done by professional marksmen, and requires a submitted plan of action to the city with details of the time and location of the action, as well as number of deer to be culled. 
Another controversial method involves trapping the deer in a net or other trap, and then administering a chemical euthanizing agent or exterminating them by firearm. A main issue in questioning the humaneness of this method is the stress that the deer endure while trapped and awaiting extermination. Nonlethal methods include contraceptive injections, sterilization, and translocation of deer. While lethal methods have municipal support as being the most effective in the short term, some opponents of this view suggest that extermination has no significant impact on deer populations. Opponents of contraceptive methods point out that fertility control cannot provide meat and proves ineffective over time as populations in open-field systems move about. Concerns are voiced that the contraceptives have not been adequately researched for the effect they could have on humans. Fertility control also does nothing to affect the current population and the effects their grazing may be having on the forest plant make-up. Translocation has been considered overly costly for the little benefit it provides. Deer experience high stress and are at high risk of dying in the process, putting into question its humaneness. Another concern regarding translocation is the possible spreading of chronic wasting disease to unaffected deer populations and concerns about exposure to human populations. In addition to the danger of deer-vehicle collisions, the National Agricultural Statistics Service (NASS) reported that the estimated loss in field crops, nuts, fruits, and vegetables in 2001 was near $765 million. Behavior Males compete for the opportunity of breeding females. Sparring among males determines a dominance hierarchy. Bucks attempt to copulate with as many females as possible, losing physical condition, since they rarely eat or rest during the rut. The general geographical trend is for the rut to be shorter in duration at increased latitude. Many factors determine how intense the "rutting season" will be; air temperature is a major one. Any time the temperature rises above , the males do much less traveling to look for females, or else they risk overheating or dehydration. Another factor for the strength of rutting activity is competition. If numerous males are in a particular area, then they compete more for the females. If fewer males or more females are present, then the selection process will not need to be as competitive. Reproduction Females enter estrus, colloquially called the rut, in the autumn, normally in late October or early November, triggered mainly by the declining photoperiod. Sexual maturation of females depends on population density, as well as the availability of food. Young females often flee from an area heavily populated with males. Some does may be as young as six months when they reach sexual maturity, but the average age of maturity is 18 months. Copulation consists of a brief copulatory jump. Females give birth to one to three spotted young, known as fawns, in mid-to-late spring, generally in May or June. Fawns lose their spots during the first summer and weigh from by the first winter. Male fawns tend to be slightly larger and heavier than females. For the first four weeks, fawns are hidden in vegetation by their mothers, who nurse them four to five times a day. This strategy keeps scent levels low to avoid predators. After about a month, the fawns are then able to follow their mothers on foraging trips. 
They are usually weaned after 8–10 weeks, but cases have been seen where mothers have continued to allow nursing long after the fawns have lost their spots (for several months, or until the end of fall) as seen by rehabilitators and other studies. Males leave their mothers after a year and females leave after two. Bucks are generally sexually mature at 1.5 years old and begin to breed even in populations stacked with older bucks. Communication White-tailed deer have many forms of communication involving sounds, scent, body language, and marking. In addition to the blowing as mentioned above in the presence of danger, all white-tailed deer can produce audible noises unique to each animal. Fawns release a high-pitched squeal, known as a bleat, to call out to their mothers. This bleat deepens as the fawn grows until it becomes the grunt of the mature deer, a guttural sound that attracts the attention of any other deer in the area. A doe makes maternal grunts when searching for her bedded fawns. Bucks also grunt, at a pitch lower than that of the doe; this grunt deepens as the buck matures. In addition to grunting, both does and bucks also snort, a sound that often signals an imminent threat. Mature bucks also produce a grunt-snort-wheeze pattern, unique to each animal, that asserts its dominance, aggression, and hostility. White-tailed deer also use "tail-flagging," a behavior where the tail is raised when they detect a threat. However, the function of this behavior is disputed, and it appears to be a signal to predators more than an intraspecific communication warning other deer. Marking White-tailed deer possess many glands that allow them to produce scents, some of which are so potent they can be detected by the human nose. Four major glands are the preorbital, forehead, tarsal, and metatarsal glands. Secretions from the preorbital glands (in front of the eye) were thought to be rubbed on tree branches, but research suggests this is not so. Scent from the forehead or sudoriferous glands (found on the head, between the antlers and eyes) is used to deposit scent on branches that overhang scrapes (areas scraped by the deer's front hooves before rub-urination). The tarsal glands are found on the upper inside of the hock (middle joint) on each hind leg. The scent is deposited from these glands when deer walk through and rub against vegetation. These scrapes are used by bucks as a sort of "sign-post" by which bucks know which other bucks are in the area, and to let does know a buck is regularly passing through the area—for breeding purposes. The scent from the metatarsal glands, found on the outside of each hind leg, between the ankle and hooves, may be used as an alarm scent. The scent from the interdigital glands, which are located between the hooves of each foot, emit a yellow waxy substance with an offensive odor. Deer can be seen stomping their hooves if they sense danger through sight, sound, or smell; this action leaves an excessive amount of odor for warning other deer of possible danger. Throughout the year, deer rub-urinate, a process during which a deer squats while urinating so the urine will run down the insides of the deer's legs, over the tarsal glands, and onto the hair covering these glands. Bucks rub-urinate more frequently during the breeding season. Secretions from the preputial glands and tarsal glands mix with the urine and bacteria to produce a strong-smelling odor. 
During the breeding season, does release hormones and pheromones that tell bucks a doe is in heat and able to breed. Bucks also rub trees and shrubs with their antlers and heads during the breeding season, possibly transferring scent from the forehead glands to the tree, leaving a scent other deer can detect. Sign-post marking (scrapes and rubs) is a very obvious way white-tailed deer communicate. Although bucks do most of the marking, does visit these locations often. To make a rub, a buck uses his antlers to strip the bark off small-diameter trees, helping to mark his territory and polish his antlers. To mark areas they regularly pass through, bucks make scrapes. Often occurring in patterns known as scrape lines, scrapes are areas where a buck has used his front hooves to expose bare earth. They often rub-urinate into these scrapes, which are often found under twigs that have been marked with scent from the forehead glands. Hunting White-tailed deer have long been hunted as game, for pure sport and for their commodities, and are probably the most hunted native big game species in the Americas. In Mesoamerica, white-tailed deer (Odocoileus virginianus) were hunted from very early times. Rites and rituals in preparation for deer hunting and celebration for an auspicious hunt are still practiced in the area today. Ancient hunters ask their gods for permission to hunt, and some deer rites take place in caves. Venison, or deer meat, is a nutritious form of lean animal protein. In some areas where their populations are very high, white-tailed deer are considered a pest, and hunting is used as a method to control them. In 1884, one of the first hunts of white-tailed deer in Europe was conducted in Opočno and Dobříš (Brdy Mountains area), in what is now the Czech Republic. In the same era, white-tailed deer were hunted to near extinction in North America, but numbers have since rebounded to approximate pre-colonization levels. In the United States, whitetail hunting is far more popular in some states than others. The top five states for whitetail hunter concentrations are all in the Northeast and Midwest (Pennsylvania, Rhode Island, New York, Wisconsin, and Ohio). The Northeast in particular has twice the hunter density of the Midwest and Southeast and ten times that of the West. Since whitetail deer are very adaptable, inhabiting diverse regions ranging from tropical rain forests to high-altitude mountain chains of the Andes Mountains at more than 13,000 feet, different hunting methods as well as types of guns and ammunition may be used. The most common cartridges used include the .243 Winchester, .308 Winchester, .25-06 Remington, 6.5mm Creedmoor, .270 Winchester, 7mm Remington Magnum, .30-06 Springfield, .30-30 Winchester (.30 WCF), .300 Winchester Magnum, and 12 gauge shotshells. Due to the whitetail deer's frame and weight, cup-and-core bullets are the most commonly recommended for taking clean, ethical shots. Sport hunting for whitetail deer serves both as a means of conserving natural habitats and as a form of population management. Human interactions By the early 20th century, commercial exploitation and unregulated hunting had severely depressed deer populations in much of their range. For example, by about 1930, the U.S. population was thought to number about 300,000. After an outcry by hunters and conservation ecologists, commercial exploitation of deer became illegal and conservation programs along with regulated hunting were introduced. 
In 2005, estimates put the deer population in the United States at around 30 million. Conservation practices have proved so successful, in parts of their range, the white-tailed deer populations currently far exceed their cultural carrying capacity and the animal may be considered a nuisance. A reduction in non-human predators (which normally cull young, sick, or infirm specimens) has also contributed to locally abundant populations. At high population densities, farmers can suffer economic damage from deer feeding on cash crops, especially in corn and orchards. It has become nearly impossible to grow some crops in some areas unless very burdensome deer-deterring measures are taken. Deer can easily jump fences, and their fear of motion and sounds meant to scare them away is soon dulled. Timber harvesting and forest clearance have historically resulted in increased deer population densities, which in turn have slowed the rate of reforestation following logging in some areas. High densities of deer can have severe impacts on native plants and animals in parks and natural areas; however, deer browsing can also promote plant and animal diversity in some areas. Deer can also cause substantial damage to landscape plants in suburban areas, leading to limited hunting or trapping to relocate or sterilize them. In parts of the Eastern US with high deer populations and fragmented woodlands, deer often wander into suburban and urban habitats that are less than ideal for the species. Farming In New Zealand, the United States, and Canada, white-tailed deer are kept as livestock, and are extensively as well as intensively farmed for their meat, antlers, and pelts. The industry for farming white-tailed deer has grown significantly in the past two decades. In recent years, sales of white-tailed deer have generated up to $44 million in revenue. They are a good business venture because they have a high fertility rate and long reproductive life, can tolerate all weather, can be raised on land that is not suitable for agriculture and offer many by-products that can be sold. The North American white-tailed deer industry is split between breeding farms and hunting ranches. While some people care about the by-products produced by the deer, some people just care for the pursuit of a hunt. In the United States alone, around 13-14 million hunting licenses are sold every year. This could be a very profitable industry, especially considering the invasiveness of this species and the rate they have shown they are able to reproduce. However, this industry could have great repercussions on the ecosystem the farms are placed in because overpopulation of deer causes damage to local fauna. Deer–vehicle collisions Motor vehicle collisions with deer are a significant issue in many parts of their range, especially at night and during rutting season, causing injuries and fatalities among both deer and humans. Vehicular damage can be substantial in some cases. In the United States, such collisions increased from 200,000 in 1980 to 500,000 in 1991. By 2009, the insurance industry estimated 2.4 million deer–vehicle collisions had occurred over the past two years, estimating damage cost to be over 7 billion dollars and 300 human deaths. Despite the high rate of these accidents, the effect on deer density is still quite low. Vehicle collisions of deer were monitored for two years in Virginia, and the collective annual mortality did not surpass 20% of the estimated deer population. 
Many techniques have been investigated to prevent roadside mortality. Fences and road under- or overpasses have been shown to decrease deer-vehicle collisions, but are expensive and difficult to implement on a large scale. Roadside habitat modifications could also successfully decrease the number of collisions along roadways. An essential procedure in understanding factors resulting in accidents is to quantify risks, which involves the driver's behavior in terms of safe speed and ability to observe the deer. Some have suggested that reducing speed limits during the winter months when deer density is exceptionally high would likely reduce deer-vehicle collisions, but this may be an impractical solution. Diseases Another issue that exists with high deer density is the spreading of infectious diseases. Increased deer populations lead to increased transmission of tick-borne diseases, which pose a threat to human health, to livestock, and to other deer. Deer are the primary host and vector for the adult black-legged tick, which transmits the Lyme disease bacterium to humans. Lyme disease is the most common vector-borne disease in the United States, with confirmed cases reported, according to 2019 CDC data, in virtually every state; incidence is highest in the states from Maine to Virginia, along with Minnesota and Wisconsin. In 2019 the number of confirmed and probable cases totaled about 35,000. Furthermore, the incidence of Lyme disease seems to reflect deer density in the eastern United States, which suggests a strong correlation. White-tailed deer also serve as intermediate hosts for many diseases that infect humans through ticks, such as Rocky Mountain spotted fever. Newer evidence suggests the white-footed mouse is the most significant vector. SARS-CoV-2 Blood samples gathered by USDA researchers in 2021 also showed that 40% of sampled white-tailed deer demonstrated evidence of SARS-CoV-2 antibodies, with the highest percentages in Michigan, at 67%, and Pennsylvania, at 44%. A later study by Penn State University and wildlife officials in Iowa showed that up to 80 percent of Iowa deer sampled from April 2020 through January 2021 had tested positive for active SARS-CoV-2 infection, rather than solely antibodies from prior infection. These data, confirmed by the National Veterinary Services Laboratory, alerted scientists to the possibility that white-tailed deer had become a natural reservoir for the coronavirus, serving as a potential "variant factory" for eventual retransmission back into humans. An Ohio State University study further showed that humans had transmitted SARS-CoV-2 to white-tailed deer on at least six separate occasions and that deer possessed six mutations that were uncommon in humans at the time of the study. Infected deer can shed virus via nasal secretions and feces for five to six days and frequently engage in activities conducive to viral spread, such as sniffing food intermingled with waste, nuzzling noses, polygamy, and the sharing of salt licks. Canadian researchers uncovered an entirely new SARS-CoV-2 variant within a November–December 2021 study of Ontario white-tailed deer. The new COVID variant had also infected a person who had close contact with local deer, potentially marking the first instance of deer-to-human transmission. 
Cultural significance In the U.S., the species is the state animal of Arkansas, Georgia, Illinois, Michigan, Mississippi, Nebraska, New Hampshire, Ohio, Pennsylvania, and South Carolina, the game animal of Oklahoma, and the wildlife symbol of Wisconsin. The white-tailed deer is also the inspiration for the professional basketball team the Milwaukee Bucks. The profile of a white-tailed deer buck caps the coat of arms of Vermont, is on the flag of Vermont, and is in stained glass at the Vermont State House. It is the national animal of Honduras and Costa Rica and the provincial animal of Canadian Saskatchewan and Finnish Pirkanmaa. It appears on the reverse side of the Costa Rican 1,000 colón note. The 1942 Disney film adaptation of Bambi famously changed Bambi's species from the novel's roe deer into a white-tailed deer. Climate change Migration patterns Climate change is affecting the white-tailed deer by changing their migration patterns and increasing their population size. This species of deer is restricted from moving northward by cold, harsh winters. Consequently, as climate change warms the Earth, these deer are able to migrate farther north, which will result in white-tailed deer populations increasing. In a study by Dawe and Boutin covering 1980 to 2000, the presence of white-tailed deer in Alberta, Canada, was driven primarily by changes in the climate. Populations of white-tailed deer have also moved anywhere from 50 to 250 km north of the eastern Alberta study site. Another study, by Kennedy-Slaney, Bowman, Walpole, and Pond, found that if current CO2 emissions remained the same, global warming resulting from the increased greenhouse gases in the atmosphere would allow white-tailed deer to survive farther and farther north by 2100. Food web When species are introduced to foreign ecosystems, they can potentially wreak havoc on the existing food web. For example, when the deer moved north in Alberta, gray wolf populations increased. This cascading effect was also demonstrated in Yellowstone National Park, where the rivers changed after wolves were reintroduced to the ecosystem. It is also possible that the increasing white-tailed deer populations could result in them becoming an invasive species for various plants in Alberta, Canada. Disease The species is vulnerable to diseases that are more prevalent in the summer. Insects carrying these diseases are usually killed during the first snowfall. However, as time goes on, they will be able to live longer than they used to, meaning the deer are at higher risk of getting sick. It is possible that this will increase the deer's mortality rate from disease. Examples of these diseases are hemorrhagic disease (HD), epizootic hemorrhagic disease, and bluetongue viruses, which are transmitted by biting midges. The hotter summers, longer droughts, and more intense rains create the perfect environment for the midges to thrive in. Ticks also thrive in warmer weather; heat results in faster development in all of their life stages. Eighteen different species of tick infest white-tailed deer in the United States alone. Ticks are parasitic to white-tailed deer and transmit diseases causing irritation, anemia, and infections.
Biology and health sciences
Deer
Animals
569480
https://en.wikipedia.org/wiki/Receptor%20%28biochemistry%29
Receptor (biochemistry)
In biochemistry and pharmacology, receptors are chemical structures, composed of protein, that receive and transduce signals that may be integrated into biological systems. These signals are typically chemical messengers which bind to a receptor and produce physiological responses such as change in the electrical activity of a cell. For example, GABA, an inhibitory neurotransmitter, inhibits electrical activity of neurons by binding to GABA receptors. There are three main ways the action of the receptor can be classified: relay of signal, amplification, or integration. Relaying sends the signal onward, amplification increases the effect of a single ligand, and integration allows the signal to be incorporated into another biochemical pathway. Receptor proteins can be classified by their location. Cell surface receptors, also known as transmembrane receptors, include ligand-gated ion channels, G protein-coupled receptors, and enzyme-linked hormone receptors. Intracellular receptors are those found inside the cell, and include cytoplasmic receptors and nuclear receptors. A molecule that binds to a receptor is called a ligand and can be a protein, peptide (short protein), or another small molecule, such as a neurotransmitter, hormone, pharmaceutical drug, toxin, calcium ion or parts of the outside of a virus or microbe. An endogenously produced substance that binds to a particular receptor is referred to as its endogenous ligand. E.g. the endogenous ligand for the nicotinic acetylcholine receptor is acetylcholine, but it can also be activated by nicotine and blocked by curare. Receptors of a particular type are linked to specific cellular biochemical pathways that correspond to the signal. While numerous receptors are found in most cells, each receptor will only bind with ligands of a particular structure. This has been analogously compared to how locks will only accept specifically shaped keys. When a ligand binds to a corresponding receptor, it activates or inhibits the receptor's associated biochemical pathway, which may also be highly specialised. Receptor proteins can be also classified by the property of the ligands. Such classifications include chemoreceptors, mechanoreceptors, gravitropic receptors, photoreceptors, magnetoreceptors and gasoreceptors. Structure The structures of receptors are very diverse and include the following major categories, among others: Type 1: Ligand-gated ion channels (ionotropic receptors) – These receptors are typically the targets of fast neurotransmitters such as acetylcholine (nicotinic) and GABA; activation of these receptors results in changes in ion movement across a membrane. They have a heteromeric structure in that each subunit consists of the extracellular ligand-binding domain and a transmembrane domain which includes four transmembrane alpha helices. The ligand-binding cavities are located at the interface between the subunits. Type 2: G protein-coupled receptors (metabotropic receptors) – This is the largest family of receptors and includes the receptors for several hormones and slow transmitters e.g. dopamine, metabotropic glutamate. They are composed of seven transmembrane alpha helices. The loops connecting the alpha helices form extracellular and intracellular domains. The binding-site for larger peptide ligands is usually located in the extracellular domain whereas the binding site for smaller non-peptide ligands is often located between the seven alpha helices and one extracellular loop. 
The aforementioned receptors are coupled to different intracellular effector systems via G proteins. G proteins are heterotrimers made up of three subunits: α (alpha), β (beta), and γ (gamma). In the inactive state, the three subunits associate together and the α-subunit binds GDP. G protein activation causes a conformational change, which leads to the exchange of GDP for GTP. GTP-binding to the α-subunit causes dissociation of the β- and γ-subunits. Furthermore, G proteins are divided into four main classes based on the primary sequence of the α-subunit: Gs, Gi, Gq, and G12. Type 3: Kinase-linked and related receptors (see "Receptor tyrosine kinase" and "Enzyme-linked receptor") – They are composed of an extracellular domain containing the ligand binding site and an intracellular domain, often with enzymatic function, linked by a single transmembrane alpha helix. The insulin receptor is an example. Type 4: Nuclear receptors – While they are called nuclear receptors, they are actually located in the cytoplasm and migrate to the nucleus after binding with their ligands. They are composed of a C-terminal ligand-binding region, a core DNA-binding domain (DBD), and an N-terminal domain that contains the AF1 (activation function 1) region. The core region has two zinc fingers that are responsible for recognizing the DNA sequences specific to this receptor. The N terminus interacts with other cellular transcription factors in a ligand-independent manner; and, depending on these interactions, it can modify the binding/activity of the receptor. Steroid and thyroid-hormone receptors are examples of such receptors. Membrane receptors may be isolated from cell membranes by complex extraction procedures using solvents, detergents, and/or affinity purification. The structures and actions of receptors may be studied by using biophysical methods such as X-ray crystallography, NMR, circular dichroism, and dual polarisation interferometry. Computer simulations of the dynamic behavior of receptors have been used to gain understanding of their mechanisms of action. Binding and activation Ligand binding is an equilibrium process. Ligands bind to receptors and dissociate from them according to the law of mass action: for a ligand L and receptor R, L + R ⇌ LR, with dissociation constant Kd = [L][R]/[LR]. The brackets around chemical species denote their concentrations. One measure of how well a molecule fits a receptor is its binding affinity, which is inversely related to the dissociation constant Kd. A good fit corresponds with high affinity and low Kd. The final biological response (e.g. second messenger cascade, muscle contraction) is only achieved after a significant number of receptors are activated. Affinity is a measure of the tendency of a ligand to bind to its receptor. Efficacy is a measure of the ability of the bound ligand to activate its receptor. Agonists versus antagonists Not every ligand that binds to a receptor also activates that receptor. The following classes of ligands exist: (Full) agonists are able to activate the receptor and result in a strong biological response. The natural endogenous ligand with the greatest efficacy for a given receptor is by definition a full agonist (100% efficacy). Partial agonists do not activate receptors with maximal efficacy, even with maximal binding, causing partial responses compared to those of full agonists (efficacy between 0 and 100%). Antagonists bind to receptors but do not activate them. 
This results in a receptor blockade, inhibiting the binding of agonists and inverse agonists. Receptor antagonists can be competitive (or reversible), and compete with the agonist for the receptor, or they can be irreversible antagonists that form covalent bonds (or extremely high affinity non-covalent bonds) with the receptor and completely block it. The proton pump inhibitor omeprazole is an example of an irreversible antagonist. The effects of irreversible antagonism can only be reversed by synthesis of new receptors. Inverse agonists reduce the activity of receptors by inhibiting their constitutive activity (negative efficacy). Allosteric modulators: They do not bind to the agonist-binding site of the receptor but instead on specific allosteric binding sites, through which they modify the effect of the agonist. For example, benzodiazepines (BZDs) bind to the BZD site on the GABAA receptor and potentiate the effect of endogenous GABA. Note that the idea of receptor agonism and antagonism only refers to the interaction between receptors and ligands and not to their biological effects. Constitutive activity A receptor which is capable of producing a biological response in the absence of a bound ligand is said to display "constitutive activity". The constitutive activity of a receptor may be blocked by an inverse agonist. The anti-obesity drugs rimonabant and taranabant are inverse agonists at the cannabinoid CB1 receptor and though they produced significant weight loss, both were withdrawn owing to a high incidence of depression and anxiety, which are believed to relate to the inhibition of the constitutive activity of the cannabinoid receptor. The GABAA receptor has constitutive activity and conducts some basal current in the absence of an agonist. This allows beta carboline to act as an inverse agonist and reduce the current below basal levels. Mutations in receptors that result in increased constitutive activity underlie some inherited diseases, such as precocious puberty (due to mutations in luteinizing hormone receptors) and hyperthyroidism (due to mutations in thyroid-stimulating hormone receptors). Theories of drug-receptor interaction Occupation Early forms of the receptor theory of pharmacology stated that a drug's effect is directly proportional to the number of receptors that are occupied. Furthermore, a drug effect ceases as a drug-receptor complex dissociates. Ariëns & Stephenson introduced the terms "affinity" & "efficacy" to describe the action of ligands bound to receptors. Affinity: The ability of a drug to combine with a receptor to create a drug-receptor complex. Efficacy: The ability of drug to initiate a response after the formation of drug-receptor complex. Rate In contrast to the accepted Occupation Theory, Rate Theory proposes that the activation of receptors is directly proportional to the total number of encounters of a drug with its receptors per unit time. Pharmacological activity is directly proportional to the rates of dissociation and association, not the number of receptors occupied: Agonist: A drug with a fast association and a fast dissociation. Partial-agonist: A drug with an intermediate association and an intermediate dissociation. Antagonist: A drug with a fast association & slow dissociation Induced-fit As a drug approaches a receptor, the receptor alters the conformation of its binding site to produce drug—receptor complex. Spare Receptors In some receptor systems (e.g. 
acetylcholine at the neuromuscular junction in smooth muscle), agonists are able to elicit maximal response at very low levels of receptor occupancy (<1%). Thus, that system has spare receptors or a receptor reserve. This arrangement produces an economy of neurotransmitter production and release. Receptor regulation Cells can increase (upregulate) or decrease (downregulate) the number of receptors to a given hormone or neurotransmitter to alter their sensitivity to different molecules. This is a locally acting feedback mechanism. Change in the receptor conformation such that binding of the agonist does not activate the receptor. This is seen with ion channel receptors. Uncoupling of the receptor effector molecules is seen with G protein-coupled receptors. Receptor sequestration (internalization), e.g. in the case of hormone receptors. Examples and ligands The ligands for receptors are as diverse as their receptors. GPCRs (7TMs) are a particularly vast family, with at least 810 members. There are also LGICs for at least a dozen endogenous ligands, and many more receptors possible through different subunit compositions. Some common examples of ligands and receptors include: Ion channels and G protein coupled receptors Some example ionotropic (LGIC) and metabotropic (specifically, GPCRs) receptors are shown in the table below. The chief neurotransmitters are glutamate and GABA; other neurotransmitters are neuromodulatory. This list is by no means exhaustive. Enzyme linked receptors Enzyme linked receptors include Receptor tyrosine kinases (RTKs), serine/threonine-specific protein kinase, as in bone morphogenetic protein and guanylate cyclase, as in atrial natriuretic factor receptor. Of the RTKs, 20 classes have been identified, with 58 different RTKs as members. Some examples are shown below: Intracellular Receptors Receptors may be classed based on their mechanism or on their position in the cell. 4 examples of intracellular LGIC are shown below: Role in health and disease In genetic disorders Many genetic disorders involve hereditary defects in receptor genes. Often, it is hard to determine whether the receptor is nonfunctional or the hormone is produced at decreased level; this gives rise to the "pseudo-hypo-" group of endocrine disorders, where there appears to be a decreased hormonal level while in fact it is the receptor that is not responding sufficiently to the hormone. In the immune system The main receptors in the immune system are pattern recognition receptors (PRRs), toll-like receptors (TLRs), killer activated and killer inhibitor receptors (KARs and KIRs), complement receptors, Fc receptors, B cell receptors and T cell receptors.
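To make the mass-action and occupation-theory relationships described earlier in this article concrete, the following sketch computes fractional receptor occupancy and a simple occupancy-proportional response. It is a minimal illustration only: the Kd value, ligand concentrations, and efficacy figure are assumed example numbers rather than data from this article, and real dose-response behaviour (spare receptors, amplification, desensitization) departs from strict proportionality.

```python
# Minimal sketch of mass-action receptor occupancy (illustrative values only).
# At equilibrium, L + R <=> LR with Kd = [L][R]/[LR], so the fraction of
# receptors occupied is [LR]/([R] + [LR]) = [L]/([L] + Kd).

def fractional_occupancy(ligand_conc: float, kd: float) -> float:
    """Fraction of receptors bound at equilibrium."""
    return ligand_conc / (ligand_conc + kd)

def response(ligand_conc: float, kd: float, efficacy: float = 1.0) -> float:
    """Classical occupation-theory response: proportional to occupancy,
    scaled by efficacy (1.0 for a full agonist, 0-1 for a partial agonist)."""
    return efficacy * fractional_occupancy(ligand_conc, kd)

if __name__ == "__main__":
    kd = 1e-8  # molar; a high-affinity ligand has a low Kd (assumed example value)
    for conc in (1e-9, 1e-8, 1e-7, 1e-6):
        print(f"[L] = {conc:.0e} M -> occupancy = {fractional_occupancy(conc, kd):.2f}, "
              f"partial agonist (efficacy 0.4) response = {response(conc, kd, 0.4):.2f}")
```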
Biology and health sciences
Cell processes
Biology
2151693
https://en.wikipedia.org/wiki/Cassegrain%20reflector
Cassegrain reflector
The Cassegrain reflector is a combination of a primary concave mirror and a secondary convex mirror, often used in optical telescopes and radio antennas, the main characteristic being that the optical path folds back onto itself, relative to the optical system's primary mirror entrance aperture. This design puts the focal point at a convenient location behind the primary mirror and the convex secondary adds a telephoto effect creating a much longer focal length in a mechanically short system. In a symmetrical Cassegrain both mirrors are aligned about the optical axis, and the primary mirror usually contains a hole in the center, thus permitting the light to reach an eyepiece, a camera, or an image sensor. Alternatively, as in many radio telescopes, the final focus may be in front of the primary. In an asymmetrical Cassegrain, the mirror(s) may be tilted to avoid obscuration of the primary or to avoid the need for a hole in the primary mirror (or both). The classic Cassegrain configuration uses a parabolic reflector as the primary while the secondary mirror is hyperbolic. Modern variants may have a hyperbolic primary for increased performance (for example, the Ritchey–Chrétien design); and either or both mirrors may be spherical or elliptical for ease of manufacturing. The Cassegrain reflector is named after a published reflecting telescope design that appeared in the April 25, 1672 Journal des sçavans which has been attributed to Laurent Cassegrain. Similar designs using convex secondary mirrors have been found in the Bonaventura Cavalieri's 1632 writings describing burning mirrors and Marin Mersenne's 1636 writings describing telescope designs. James Gregory's 1662 attempts to create a reflecting telescope included a Cassegrain configuration, judging by a convex secondary mirror found among his experiments. The Cassegrain design is also used in catadioptric systems. Cassegrain designs "Classic" Cassegrain telescopes The "classic" Cassegrain has a parabolic primary mirror and a hyperbolic secondary mirror that reflects the light back down through a hole in the primary. Folding the optics makes this a compact design. On smaller telescopes, and camera lenses, the secondary is often mounted on an optically flat, optically clear glass plate that closes the telescope tube. This support eliminates the "star-shaped" diffraction effects caused by a straight-vaned support spider. The closed tube stays clean, and the primary is protected, at the cost of some loss of light-gathering power. It makes use of the special properties of parabolic and hyperbolic reflectors. A concave parabolic reflector will reflect all incoming light rays parallel to its axis of symmetry to a single point, the focus. A convex hyperbolic reflector has two foci and will reflect all light rays directed at one of its two foci towards its other focus. The mirrors in this type of telescope are designed and positioned so that they share one focus and so that the second focus of the hyperbolic mirror will be at the same point at which the image is to be observed, usually just outside the eyepiece. In most Cassegrain systems, the secondary mirror blocks a central portion of the aperture. This ring-shaped entrance aperture significantly reduces a portion of the modulation transfer function (MTF) over a range of low spatial frequencies, compared to a full-aperture design such as a refractor or an offset Cassegrain. This MTF notch has the effect of lowering image contrast when imaging broad features. 
In addition, the support for the secondary (the spider) may introduce diffraction spikes in images. The radii of curvature of the primary and secondary mirrors, respectively, in the classic configuration are R_1 = -2DF/(F - B) and R_2 = -2DB/(F - B - D), where F is the effective focal length of the system, B is the back focal length (the distance from the secondary to the focus), D is the distance between the two mirrors and M = (F - B)/D = F/f_1 is the secondary magnification. If, instead of B and D, the known quantities are the focal length of the primary mirror, f_1, and the distance to the focus behind the primary mirror, b, then D = f_1(F - b)/(F + f_1) and B = D + b. The conic constant of the primary mirror is that of a parabola, K_1 = -1. Thanks to that there is no spherical aberration introduced by the primary mirror. The secondary mirror, however, is of a hyperbolic shape with one focus coinciding with that of the primary mirror and the other focus being at the back focal length B. Thus, the classical Cassegrain has ideal focus for the chief ray (the center spot diagram is one point). We have K_2 = -((B + f_1 - D)/(B - f_1 + D))^2, where f_1 - D is the distance from the secondary mirror to the focus of the primary. Actually, as the conic constants should not depend on scaling, the formulae for both R_2 and K_2 can be greatly simplified and presented only as functions of the secondary magnification. Finally, R_2 = -2B/(M - 1) and K_2 = -((M + 1)/(M - 1))^2. Ritchey-Chrétien The Ritchey-Chrétien is a specialized Cassegrain reflector which has two hyperbolic mirrors (instead of a parabolic primary). It is free of coma and spherical aberration at a flat focal plane, making it well suited for wide field and photographic observations. It was invented by George Willis Ritchey and Henri Chrétien in the early 1910s. This design is very common in large professional research telescopes, including the Hubble Space Telescope, the Keck Telescopes, and the Very Large Telescope (VLT); it is also found in high-grade amateur telescopes. Dall-Kirkham The Dall-Kirkham Cassegrain telescope design was created by Horace Dall in 1928 and took on the name in an article published in Scientific American in 1930 following discussion between amateur astronomer Allan Kirkham and Albert G. Ingalls, the magazine's astronomy editor at the time. It uses a concave elliptical primary mirror and a convex spherical secondary. While this system is easier to polish than a classic Cassegrain or Ritchey-Chrétien system, the off-axis coma is significantly worse, so the image degrades quickly off-axis. Because this is less noticeable at longer focal ratios, Dall-Kirkhams are seldom faster than f/15. Off-axis configurations An unusual variant of the Cassegrain is the Schiefspiegler telescope ("skewed" or "oblique reflector"; also known as the "Kutter telescope" after its inventor, Anton Kutter) which uses tilted mirrors to avoid the secondary mirror casting a shadow on the primary. However, while eliminating diffraction patterns this leads to several other aberrations that must be corrected. Several different off-axis configurations are used for radio antennas. Another off-axis, unobstructed design and variant of the Cassegrain is the 'Yolo' reflector invented by Arthur Leonard. This design uses a spherical or parabolic primary and a mechanically warped spherical secondary to correct for off-axis induced astigmatism. When set up correctly the Yolo can give uncompromising unobstructed views of planetary objects and non-wide field targets, with no lack of contrast or image quality caused by spherical aberration. The lack of obstruction also eliminates the diffraction associated with Cassegrain and Newtonian reflector astrophotography.
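As a rough illustration of the classic-Cassegrain relations given above, the short sketch below plugs invented numbers (they do not describe any real telescope) into the expressions for D, B, R_1, R_2 and the conic constants.

```python
# Worked example of the classic Cassegrain relations given above.
# All numbers are hypothetical; units are millimetres.

F = 2000.0   # effective focal length of the system
f1 = 500.0   # focal length of the parabolic primary
b = 150.0    # distance from the primary mirror to the final focus (behind it)

# Mirror separation and back focal length (distance from secondary to focus)
D = f1 * (F - b) / (F + f1)
B = D + b

# Radii of curvature of the primary and secondary mirrors
R1 = -2 * D * F / (F - B)
R2 = -2 * D * B / (F - B - D)

# Secondary magnification and conic constants
M = (F - B) / D                  # equivalently F / f1
K1 = -1.0                        # parabolic primary
K2 = -((M + 1) / (M - 1)) ** 2   # hyperbolic secondary

print(f"D  = {D:8.2f} mm   (primary-secondary separation)")
print(f"B  = {B:8.2f} mm   (secondary to focus)")
print(f"R1 = {R1:8.2f} mm, R2 = {R2:8.2f} mm")
print(f"M  = {M:.2f}, K1 = {K1:.1f}, K2 = {K2:.3f}")
print(f"simplified R2 = {-2 * B / (M - 1):8.2f} mm (matches the long form)")
```

For these numbers the secondary magnification works out to M = 4, so a 500 mm primary yields a 2000 mm system focal length with a mirror separation of only about 370 mm, which is the telephoto effect mentioned earlier.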
Catadioptric Cassegrains Catadioptric Cassegrains use two mirrors, often with a spherical primary mirror to reduce cost, combined with refractive corrector element(s) to correct the resulting aberrations. Schmidt-Cassegrain The Schmidt-Cassegrain was developed from the wide-field Schmidt camera, although the Cassegrain configuration gives it a much narrower field of view. The first optical element is a Schmidt corrector plate. The plate is figured by placing a vacuum on one side, and grinding the exact correction required to correct the spherical aberration caused by the spherical primary mirror. Schmidt-Cassegrains are popular with amateur astronomers. An early Schmidt-Cassegrain camera was patented in 1946 by artist/architect/physicist Roger Hayward, with the film holder placed outside the telescope. Maksutov-Cassegrain The Maksutov-Cassegrain is a variation of the Maksutov telescope named after the Soviet optician and astronomer Dmitri Dmitrievich Maksutov. It starts with an optically transparent corrector lens that is a section of a hollow sphere. It has a spherical primary mirror, and a spherical secondary that is usually a mirrored section of the corrector lens. Argunov-Cassegrain In the Argunov-Cassegrain telescope all optics are spherical, and the classical Cassegrain secondary mirror is replaced by a sub-aperture corrector consisting of three air spaced lens elements. The element farthest from the primary mirror is a Mangin mirror, which acts as a secondary mirror. Klevtsov-Cassegrain The Klevtsov-Cassegrain, like the Argunov-Cassegrain, uses a sub-aperture corrector consisting of a small meniscus lens and a Mangin mirror as its "secondary mirror". Cassegrain radio antennas Cassegrain designs are also utilized in satellite telecommunication earth station antennas and radio telescopes, ranging in size from 2.4 metres to 70 metres. The centrally located sub-reflector serves to focus radio frequency signals in a similar fashion to optical telescopes. An example of a cassegrain radio antenna is the 70-meter dish at JPL's Goldstone antenna complex. For this antenna, the final focus is in front of the primary, at the top of the pedestal protruding from the mirror.
Technology
Telescope
null
2152225
https://en.wikipedia.org/wiki/Thin-layer%20chromatography
Thin-layer chromatography
Thin-layer chromatography (TLC) is a chromatography technique that separates components in non-volatile mixtures. It is performed on a TLC plate made up of a non-reactive solid coated with a thin layer of adsorbent material. This is called the stationary phase. The sample is deposited on the plate, which is eluted with a solvent or solvent mixture known as the mobile phase (or eluent). This solvent then moves up the plate via capillary action. As with all chromatography, some compounds are more attracted to the mobile phase, while others are more attracted to the stationary phase. Therefore, different compounds move up the TLC plate at different speeds and become separated. To visualize colourless compounds, the plate is viewed under UV light or is stained. Testing different stationary and mobile phases is often necessary to obtain well-defined and separated spots. TLC is quick, simple, and gives high sensitivity for a relatively low cost. It can monitor reaction progress, identify compounds in a mixture, determine purity, or purify small amounts of compound. Procedure The process for TLC is similar to paper chromatography but provides faster runs, better separations, and the choice between different stationary phases. Plates can be labelled before or after the chromatography process with a pencil or other implement that will not interfere with the process. There are four main stages to running a thin-layer chromatography plate: Plate preparation: Using a capillary tube, a small amount of a concentrated solution of the sample is deposited near the bottom edge of a TLC plate. The solvent is allowed to completely evaporate before the next step. A vacuum chamber may be necessary for non-volatile solvents. To make sure there is sufficient compound to obtain a visible result, the spotting procedure can be repeated. Depending on the application, multiple different samples may be placed in a row the same distance from the bottom edge; each sample will move up the plate in its own "lane." Development chamber preparation: The development solvent or solvent mixture is placed into a transparent container (separation/development chamber) to a depth of less than 1 centimetre. A strip of filter paper (aka "wick") is also placed along the container wall. This filter paper should touch the solvent and almost reach the top of the container. The container is covered with a lid and the solvent vapors are allowed to saturate the atmosphere of the container. Failure to do so results in poor separation and non-reproducible results. Development: The TLC plate is placed in the container such that the sample spot(s) are not submerged into the mobile phase. The container is covered to prevent solvent evaporation. The solvent migrates up the plate by capillary action, meets the sample mixture, and carries it up the plate (elutes the sample). The plate is removed from the container before the solvent reaches the top of the plate; otherwise, the results will be misleading. The solvent front, the highest mark the solvent has travelled along the plate, is marked. Visualization: The solvent evaporates from the plate. Visualization methods include UV light, staining, and many more. Separation process and principle The separation of compounds is due to the differences in their attraction to the stationary phase and because of differences in solubility in the solvent. As a result, the compounds and the mobile phase compete for binding sites on the stationary phase. 
Different compounds in the sample mixture travel at different rates due to the differences in their partition coefficients. Different solvents, or different solvent mixtures, give different separations. The retardation factor (Rf), or retention factor, quantifies the results. It is the distance traveled by a given substance divided by the distance traveled by the mobile phase. In normal-phase TLC, the stationary phase is polar. Silica gel is very common in normal-phase TLC. More polar compounds in a sample mixture interact more strongly with the polar stationary phase. As a result, more-polar compounds move less (resulting in smaller Rf) while less-polar compounds move higher up the plate (higher Rf). A more-polar mobile phase also binds more strongly to the plate, competing more with the compound for binding sites; a more-polar mobile phase also dissolves polar compounds more. As such, all compounds on the TLC plate move higher up the plate in polar solvent mixtures. "Strong" solvents move compounds higher up the plate, whereas "weak" solvents move them less. If the stationary phase is non-polar, like C18-functionalized silica plates, it is called reverse-phase TLC. In this case, non-polar compounds move less and polar compounds move more. The solvent mixture will also be much more polar than in normal-phase TLC. Solvent choice An eluotropic series, which orders solvents by how much they move compounds, can help in selecting a mobile phase. Solvents are also divided into solvent selectivity groups. Using solvents with different elution strengths or different selectivity groups can often give very different results. While single-solvent mobile phases can sometimes give good separation, some cases may require solvent mixtures. In normal-phase TLC, the most common solvent mixtures include ethyl acetate/hexanes (EtOAc/Hex) for less-polar compounds and methanol/dichloromethane (MeOH/DCM) for more polar compounds. Different solvent mixtures and solvent ratios can help give better separation. In reverse-phase TLC, solvent mixtures are typically water with a less-polar solvent: typical choices are water with tetrahydrofuran (THF), acetonitrile (ACN), or methanol. Analysis As the chemicals being separated may be colourless, several methods exist to visualise the spots: Placing the plate under blacklight (366 nm light) makes fluorescent compounds glow. TLC plates containing a small amount of fluorescent compound (usually manganese-activated zinc silicate) in the adsorbent layer allow for visualisation of some compounds under UV-C light (254 nm). The adsorbent layer will fluoresce light-green, while spots containing compounds that absorb UV-C light will not. Placing the plate in a container filled with iodine vapours temporarily stains the spots. They typically become a yellow or brown colour. The TLC plate can either be dipped in or sprayed with a stain and sometimes heated depending on the stain used. Many stains exist for a large range of chemical moieties but some examples include: Potassium permanganate (no heating, for oxidisable groups) Ninhydrin (heating, amines and amino-acids) Acidic vanillin (heating, general reagent) Phosphomolybdic acid (no heating, general reagent) In the case of lipids, the chromatogram may be transferred to a polyvinylidene fluoride membrane and then subjected to further analysis, for example, mass spectrometry. This technique is known as far-eastern blot.
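The retardation factor defined above is straightforward to compute. The sketch below uses made-up distances rather than data from the article: Rf is simply the distance a spot has travelled divided by the distance the solvent front has travelled, both measured from the origin (spotting) line.

```python
# Minimal sketch: computing retardation factors (Rf) for spots on a TLC plate.
# Distances are hypothetical, measured in cm from the origin (spotting) line.

def retardation_factor(spot_distance_cm: float, solvent_front_cm: float) -> float:
    """Rf = distance travelled by the compound / distance travelled by the mobile phase."""
    if not 0 <= spot_distance_cm <= solvent_front_cm:
        raise ValueError("a spot cannot travel farther than the solvent front")
    return spot_distance_cm / solvent_front_cm

solvent_front = 6.0                              # cm travelled by the mobile phase
spots = {"compound A": 4.5, "compound B": 1.2}   # cm travelled by each spot

for name, distance in spots.items():
    print(f"{name}: Rf = {retardation_factor(distance, solvent_front):.2f}")
# In normal-phase TLC the less polar compound (here A) ends up with the larger Rf.
```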
Plate production TLC plates are usually commercially available, with standard particle size ranges to improve reproducibility. They are prepared by mixing the adsorbent, such as silica gel, with a small amount of inert binder like calcium sulfate (gypsum) and water. This mixture is spread as a thick slurry on an unreactive carrier sheet, usually glass, thick aluminum foil, or plastic. The resultant plate is dried and activated by heating in an oven for thirty minutes at 110 °C. The thickness of the adsorbent layer is typically around 0.1–0.25 mm for analytical purposes and around 0.5–2.0 mm for preparative TLC. Other adsorbent coatings include aluminium oxide (alumina) or cellulose. Applications Reaction monitoring and characterization TLC is a useful tool for reaction monitoring. For this, the plate normally contains a spot of starting material, a spot from the reaction mixture, and a co-spot (or cross-spot) containing both. The analysis will show if the starting material disappeared and if any new products appeared. This provides a quick and easy way to estimate how far a reaction has proceeded. In one study, TLC was applied to the screening of organic reactions: the researchers reacted an alcohol and a catalyst directly in the co-spot of a TLC plate before developing it. This provides quick and easy small-scale testing of different reagents. Compound characterization with TLC is also possible and is similar to reaction monitoring. However, rather than spotting the plate with starting material and reaction mixture, it is spotted with an unknown and a known compound. They may be the same compound if both spots have the same Rf and look the same under the chosen visualization method. However, co-elution complicates both reaction monitoring and characterization. This is because different compounds will move to the same spot on the plate. In such cases, different solvent mixtures may provide better separation. Purity and purification TLC helps show the purity of a sample. A pure sample should only contain one spot by TLC. TLC is also useful for small-scale purification. Because the separated compounds will be on different areas of the plate, a scientist can scrape off the stationary phase particles containing the desired compound and dissolve them into an appropriate solvent. Once all the compound dissolves in the solvent, they filter out the silica particles, then evaporate the solvent to isolate the product. Large preparative TLC plates with thick silica gel coatings can separate more than 100 mg of material. For larger-scale purification and isolation, TLC is useful to quickly test solvent mixtures before running flash column chromatography on a large batch of impure material. A compound elutes from a column when the number of column volumes of solvent collected is equal to 1/Rf. The eluent from flash column chromatography gets collected across several containers (for example, test tubes) called fractions. TLC helps show which fractions contain impurities and which contain pure compound. Furthermore, two-dimensional TLC can help check if a compound is stable on a particular stationary phase. This test requires two runs on a square-shaped TLC plate. The plate is rotated by 90° before the second run. If the target compound appears on the diagonal of the square, it is stable on the chosen stationary phase. Otherwise, it is decomposing on the plate. If this is the case, an alternative stationary phase may prevent this decomposition.
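The 1/Rf rule of thumb mentioned above for moving from a TLC result to flash column chromatography can likewise be turned into a quick estimate. This sketch uses invented numbers and assumes that the Rf was measured in the same solvent system that will be used on the column and that one "column volume" means the solvent volume needed to pass through the packed column once.

```python
# Minimal sketch: using a TLC Rf value to estimate when a compound should elute
# from a flash column. All numbers are hypothetical.

def column_volumes_to_elution(rf: float) -> float:
    """Rule of thumb: a compound elutes after roughly 1/Rf column volumes of solvent."""
    if not 0 < rf <= 1:
        raise ValueError("Rf must lie in (0, 1]")
    return 1.0 / rf

rf = 0.25                 # measured by TLC in the chosen solvent mixture
column_volume_ml = 120.0  # solvent volume of one column volume for this column
fraction_size_ml = 15.0   # volume collected per test tube

solvent_needed = column_volumes_to_elution(rf) * column_volume_ml
print(f"expect elution after ~{column_volumes_to_elution(rf):.1f} column volumes "
      f"(~{solvent_needed:.0f} mL, around fraction {solvent_needed / fraction_size_ml:.0f})")
```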
TLC is also an analytical method for the direct separation of enantiomers and the control of enantiomeric purity, e.g. active pharmaceutical ingredients (APIs) that are chiral.
Physical sciences
Chromatography
Chemistry
2152928
https://en.wikipedia.org/wiki/Archaeognatha
Archaeognatha
The Archaeognatha are an order of apterygotes, known by various common names such as jumping bristletails. Among extant insect taxa they are some of the most evolutionarily primitive; they appeared in the Middle Devonian period at about the same time as the arachnids. Specimens that closely resemble extant species have been found as both body and trace fossils (the latter including body imprints and trackways) in strata from the remainder of the Paleozoic Era and more recent periods. For historical reasons an alternative name for the order is Microcoryphia. Until the late 20th century the suborders Zygentoma and Archaeognatha comprised the order Thysanura; both orders possess three-pronged tails comprising two lateral cerci and a medial epiproct or appendix dorsalis. Of the three organs, the appendix dorsalis is considerably longer than the two cerci; in this the Archaeognatha differ from the Zygentoma, in which the three organs are subequal in length. In the late 20th century, it was recognized that the order Thysanura was paraphyletic, thus the two suborders were each raised to the status of an independent monophyletic order, with Archaeognatha sister taxon to the Dicondylia, including the Zygentoma. The order Archaeognatha is cosmopolitan; it includes roughly 500 species in two families. No species is currently evaluated as being at conservation risk. Description Archaeognatha are small insects with elongated bodies and backs that are arched, especially over the thorax. Their abdomen ends in three long tail-like structures, of which the lateral two are cerci, while the medial filament, which is longest, is an epiproct. The tenth abdominal segment is reduced. The antennae are flexible. The two large compound eyes meet at the top of the head, and there are three ocelli. The mouthparts are partly retractable, with simple chewing mandibles and seven-segmented maxillary palps which are commonly longer than the legs. Unlike other insect orders, they do not have olfactory receptor-coreceptors (Orco), which have either been lost or were never present in the first place. Archaeognatha differ from Zygentoma in various ways, such as their relatively small head, their bodies being compressed laterally (from side to side) instead of flattened dorsiventrally, and in their being able to use their tails to spring into the air if disturbed. They have eight pairs of short appendages called styli on abdominal segments 2 to 9. Family Machilidae is also unique among insects in possessing small muscleless styli on the second and third thoracic legs, though these are absent from the second pair of thoracic legs in some genera. Similar stylets on the legs are absent in family Meinertellidae. They have one or two pairs of eversible membranous vesicles on the underside of abdominal segments 1 to 7, which are used to absorb water and to assist with molting. There are nine pairs of spiracles; two pairs on the thorax, and seven pairs on abdominal segments 2 to 8. The pair of spiracles on the first abdominal segment has been lost. Further unusual features are that the abdominal sternites are each composed of three sclerites, and they cement themselves to the substrate before molting, often using their own feces as glue. The body is covered with readily detached scales that make the animals difficult to grip and also may protect the exoskeleton from abrasion.
The thin exoskeleton offers little protection against dehydration, and they accordingly must remain in moist air, such as in cool, damp situations under stones or bark. Etymology The name Archaeognatha is derived from the Greek ἀρχαῖος (archaios), meaning 'ancient', and γνάθος (gnathos), meaning 'jaw'. This refers to the articulation of the mandibles, which are different from those of other insects. It was originally believed that Archaeognatha possessed a single phylogenetically primitive condyle each (thus the name "Monocondylia"), where all more derived insects have two, but this has since been shown to be incorrect; all insects, including Archaeognatha, have dicondylic mandibles, but archaeognaths possess two articulations that are homologous to those in other insects, though slightly different. An alternative name, Microcoryphia, comes from the Greek μικρός (mikros) and κορυφή (koryphē), which in context means 'head'. Taxonomy Biology Archaeognatha occur in a wide range of habitats. While most species live in moist soil, others have adapted to chaparral, and even sandy deserts. They feed primarily on algae, but also on lichens, mosses, or decaying organic detritus. Three types of mating behavior are known. In some species the male spins a silk thread (carrier thread) stretched out on the ground. On the thread there are droplets of sperm which the female will take up when her ovipositor makes contact. In others a packet of sperm (spermatophore) is deposited on the top of a short stalk. Whether or not a female takes up the sperm is often random, but in many species the male will try to lead the female's genitals over the sperm during courtship. A more direct way of fertilization occurs in species of the genus Petrobius, where the male places a droplet of sperm directly on the female's ovipositor. One hypothesis is that the external genitals of insects started as structures specialized for water-uptake, which could reach deeper crevices than the coxal vesicles, and over time the female would use them to take up sperm from the ground instead of water. After fertilization she lays a batch of around 30 eggs in a suitable crevice. The young resemble the adults, and take up to two years to reach sexual maturity, depending on the species and conditions such as temperature and available food. Unlike most insects, the adults continue to moult after reaching adulthood, and typically mate once at each instar. Archaeognaths may have a total lifespan of up to four years, longer than most larger insects.
Biology and health sciences
Insects: General
Animals
2154225
https://en.wikipedia.org/wiki/Psocoptera
Psocoptera
Psocoptera () are a paraphyletic group of insects that are commonly known as booklice, barklice or barkflies. The name Psocoptera has been replaced with Psocodea in recent literature, with the inclusion of the former order Phthiraptera into Psocodea (as part of the suborder Troctomorpha). These insects first appeared in the Permian period, 295–248 million years ago. They are often regarded as the most primitive of the hemipteroids. Their name originates from the Greek word ψῶχος (psokhos), meaning "gnawed" or "rubbed", and πτερά (ptera), meaning "wings". There are more than 5,500 species in 41 families in three suborders. Many of these species have only been described in the early twenty-first century. They range in size from in length. The species known as booklice received their common name because they are commonly found amongst old books; they feed upon the paste used in binding. The barklice are found on trees, feeding on algae and lichen. Anatomy and biology Psocids are small, scavenging insects with a relatively generalized body plan. They feed primarily on fungi, algae, lichen, and organic detritus in nature but are also known to feed on starch-based household items like grains, wallpaper glue and book bindings. They have chewing mandibles, and the central lobe of the maxilla is modified into a slender rod. This rod is used to brace the insect while it scrapes up detritus with its mandibles. They also have a swollen forehead, large compound eyes, and three ocelli. Their bodies are soft with a segmented abdomen. Some species can spin silk from glands in their mouth. They may festoon large sections of trunk and branches in dense swathes of silk. Some psocids have small ovipositors; the forewings are up to 1.5 times as long as the hindwings, and all four wings have a relatively simple venation pattern, with few cross-veins. The wings, if present, are held tent-like over the body. The legs are slender and adapted for jumping, rather than gripping, as in the true lice. The abdomen has nine segments, and no cerci. There is often considerable variation in the appearance of individuals within the same species. Many have no wings or ovipositors, and may have a differently shaped thorax. Other, more subtle, variations are also known, such as changes to the development of the setae. The significance of such changes is uncertain, but their function appears to be different from similar variations in, for example, aphids. Like aphids, however, many psocids are parthenogenetic, and the presence of males may even vary between different races of the same species. Psocids lay their eggs in minute crevices or on foliage, although a few species are known to be viviparous. The young are born as miniature, wingless versions of the adult. These nymphs typically molt six times before reaching full adulthood. The total lifespan of a psocid is rarely more than a few months. Booklice range from approximately . Some species are wingless and they are easily mistaken for bedbug nymphs and vice versa. Booklouse eggs take two to four weeks to hatch, and the nymphs reach adulthood approximately two months later. Adult booklice can live for six months. Besides damaging books, they also sometimes infest food storage areas, where they feed on dry, starchy materials. Although some psocids feed on starchy household products, the majority of psocids are woodland insects with little to no contact with humans, and they are therefore of little economic importance. They are scavengers and do not bite humans.
Psocids can affect the ecosystems in which they reside. Many psocids can affect decomposition by feeding on detritus, especially in environments with lower densities of predacious micro arthropods that may eat psocids. The nymph of a psocid species, Psilopsocus mimulus, is the first known wood-boring psocopteran. These nymphs make their own burrows in woody material, rather than inhabiting vacated, existing burrows. This boring activity can create habitats that other organisms may use. Interaction with humans Some species of psocids, such as Liposcelis bostrychophila, are common pests of stored products. Psocids, among other arthropods, have been studied to develop new pest control techniques in food manufacturing. One study found that modified atmospheres during packing (MAP) helped to control the reoccurrence of pests during the manufacturing process and prevented further infestation in the final products that go to consumers. Classification In the 2000s, morphological and molecular phylogenetic evidence has shown that the parasitic lice (Phthiraptera) evolved from within the psocopteran suborder Troctomorpha, thus making Psocoptera paraphyletic with respect to Phthiraptera. In modern systematics, Psocoptera and Phthiraptera are therefore treated together in the order Psocodea. Here is a cladogram showing the relationships within Psocodea, with the former grouping Psocoptera highlighted:
Biology and health sciences
Insects: General
Animals
2154325
https://en.wikipedia.org/wiki/Cannabis%20indica
Cannabis indica
Cannabis indica is an annual plant species in the family Cannabaceae indigenous to the Hindu Kush mountains of Southern Asia. The plant produces large amounts of tetrahydrocannabinol (THC) and tetrahydrocannabivarin (THCV), with total cannabinoid levels being as high as 53.7%. It is now widely grown in China, India, Nepal, Thailand, Afghanistan, and Pakistan, as well as southern and western Africa, and is cultivated for purposes including hashish in India. The high concentrations of THC or THCV provide euphoric effects making it popular for use for several purposes, not only simple pleasure but also clinical drug research, potential new drug research, and use in alternative medicine, among many others. Taxonomy In 1785, Jean-Baptiste Lamarck published a description of a second species of Cannabis, which he named Cannabis indica. Lamarck based his description of the newly named species on plant specimens collected in India. Richard Evans Schultes described C. indica as relatively short, conical, and densely branched, whereas C. sativa was described as tall and laxly branched. Loran C. Anderson described C. indica plants as having short, broad leaflets whereas those of C. sativa were characterized as relatively long and narrow. C. indica plants conforming to Schultes's and Anderson's descriptions originated from the Hindu Kush mountain range. Because of the often harsh and variable climate of those parts (extremely cold winters and warm summers), C. indica is well-suited for cultivation in temperate climates. The specific epithet indica is Latin for "of India" and has come to be synonymous with the cannabis strain. There was very little debate about the taxonomy of Cannabis until the 1970s, when botanists like Richard Evans Schultes began testifying in court on behalf of accused persons who sought to avoid criminal charges of possession of C. sativa by arguing that the plant material could instead be C. indica. Cultivation Broad-leafed C. indica plants in the Indian Subcontinent are traditionally cultivated for the production of charas, a form of hashish. Pharmacologically, C. indica landraces tend to have higher THC content than C. sativa strains. Some users report more of a "stoned" feeling and less of a "high" from C. indica when compared to C. sativa. (The terms sativa and indica, used in this sense, are more appropriately termed "narrow-leaflet" and "wide-leaflet" drug type, respectively.) The C. indica high is often referred to as a "body buzz" and has beneficial properties such as pain relief in addition to being an effective treatment for insomnia and an anxiolytic, as opposed to C. sativa's more common reports of a cerebral, creative and energetic high, and even (albeit rarely) including hallucinations. Differences in the terpenoid content of the essential oil may account for some of these differences in effect. Common C. indica strains for recreational or medicinal use include Kush and Northern Lights. A recent genetic analysis included both the narrow-leaflet and wide-leaflet drug "biotypes" under C. indica, as well as southern and eastern Asian hemp (fiber/seed) landraces and wild Himalayan populations. Genome In 2011, a team of Canadian researchers led by Andrew Sud announced that they had sequenced a draft genome of the Purple Kush strain of C. indica. Gallery
Biology and health sciences
Rosales
Plants
2154347
https://en.wikipedia.org/wiki/Cannabis%20ruderalis
Cannabis ruderalis
Cannabis ruderalis is a variety, subspecies, or species of Cannabis native to Central and Eastern Europe and Russia. It contains a relatively low quantity of psychoactive compound tetrahydrocannabinol (THC) and does not require photoperiod to blossom (unlike C. indica and C. sativa). Some scholars accept C. ruderalis as its own species due to its unique traits and phenotypes which distinguish it from C. indica and C. sativa; others debate whether ruderalis is a subspecies under C. sativa. Description This species is smaller than other species of the genus, rarely growing over in height. The plants have "thin, slightly fibrous stems" with little branching. The foliage is typically open with large leaves. C. ruderalis reaches maturity much quicker than other species of Cannabis, typically 5–7 weeks after being planted from seed. Unlike other species of the genus, C. ruderalis enters the flowering stage based on the plant's maturity rather than its light cycle. With C. sativa and C. indica varieties, the plant stays in the vegetative state indefinitely as long as a long daylight cycle is maintained. Cannabis geneticists today refer to this feature as "autoflowering" when C. ruderalis is cross-bred. Regarding its cannabinoid profile, it usually contains less tetrahydrocannabinol (THC) in its resin compared to other Cannabis species but is often high in cannabidiol (CBD). Taxonomy Species description There is no consensus in the botany community that C. ruderalis is one separate species, rather than a subspecies from C. sativa. It was first described in 1924 by D. E. Janischewsky, noting the visible differences in the fruits' seed (an achene), shape and size from previously classified Cannabis sativa. Genomic studies Recently, genomic DNA studies utilizing molecular markers and different varieties of plants from diverse geographical origins have been employed to enrich the Cannabis taxonomy discussion. In 2005, Hillig reinforced the polytypic classification system based on allozyme variation at 17 genomic loci. Hillig's approach, proposed a more detailed taxonomy encompassing three species with seven subspecies or varieties: C. sativa C. sativa subsp. sativa var. sativa C. sativa subsp. sativa var. spontanea C. sativa subsp. indica var. kafiristanica C. indica C. indica C. indica sensu C. chinensis C. ruderalis. Clarke and Merlin carried out more studies in 2013 to analyze the genus mixing molecular markers, chemotypes and morphological characteristics. They proposed a refinement in Hillig's hypothesis and suggested that C. ruderalis could be the wild ancestor of C. sativa and C. indica. However, these affirmations were based on a limited sample size. Etymology The term ruderalis is derived from the Latin rūdera, which is the plural form of rūdus, meaning "rubble", "lump", or "rough piece of bronze". In botanical Latin, ruderalis means "weedy" or "growing among waste". A ruderal species refers to any plant that is the first to colonize land after a disturbance removing competition. Distribution and habitat C. ruderalis was first scientifically described in 1924 (from plants collected in southern Siberia), although it grows wild in other areas of Russia. The Russian botanist, Janischewski, was studying wild Cannabis in the Volga River system and realized he had come upon a third species. C. ruderalis is a hardier variety grown in the northern Himalayas and southern states of the former Soviet Union, characterized by a more sparse, "weedy" growth. Similar C. 
ruderalis populations can be found in most of the areas where hemp cultivation was once prevalent. The most notable region in North America is the midwestern United States, though populations occur sporadically throughout the United States and Canada. Large wild C. ruderalis populations are found in central and eastern Europe, most of them in Ukraine, Lithuania, Belarus, Latvia, Estonia and adjacent countries. Without human selection, these plants have lost many of the traits they were originally selected for, and have acclimated to their environment. Cultivation Seeds of C. ruderalis were brought to Amsterdam in the early 1980s in order to enhance the breeding program of seed banks. C. ruderalis has lower THC content than either C. sativa or C. indica, so it is rarely grown for recreational use. Also, the shorter stature of C. ruderalis limits its application for hemp production. C. ruderalis strains are high in the cannabinoid cannabidiol (CBD), so they are grown by some medical marijuana users. Because C. ruderalis transitions from the vegetative stage to the flowering stage with age, as opposed to the light cycle required with photoperiod strains, it is bred with other household sativa and indica strains of cannabis to create "auto-flowering cannabis strains". This trait offers breeders some agricultural possibilities and advantages over the photoperiodic flowering varieties, as well as some resistance to insect and disease pressures. C. indica strains are frequently cross-bred with C. ruderalis to produce autoflowering plants with high THC content, improved hardiness and reduced height. Cannabis x intersita Sojak, a strain identified in 1960, is a cross between C. sativa and C. ruderalis. Attempts to produce a Cannabis strain with a shorter growing season are another application of cultivating C. ruderalis. C. ruderalis when crossed with sativa and indica strains will carry the recessive autoflowering trait. Further crosses will stabilise this trait and give a plant which flowers automatically and can be fully mature in as little as 10 weeks. Cultivators also favor ruderalis plants due to their reduced production time, typically finishing in 3–4 months rather than 6–8 months. The auto-flowering trait is extremely beneficial because it allows for multiple harvests in one outdoor growing season without the use of light deprivation techniques necessary for multiple harvests of photoperiod-dependent strains. Uses C. ruderalis is traditionally used in Russian and Mongolian folk medicine, especially for treating depression. Because C. ruderalis is among the lowest THC producing biotypes of Cannabis, C. ruderalis is rarely used for recreational purposes. In modern use, C. ruderalis has been crossed with Bedrocan strains to produce the strain Bediol for patients with medical prescriptions. The typically higher concentration of CBD may make ruderalis plants viable for the treatment of anxiety or epilepsy. Bibliography Books Articles
Biology and health sciences
Rosales
Plants
13224331
https://en.wikipedia.org/wiki/Ecological%20resilience
Ecological resilience
In ecology, resilience is the capacity of an ecosystem to respond to a perturbation or disturbance by resisting damage and subsequently recovering. Such perturbations and disturbances can include stochastic events such as fires, flooding, windstorms, insect population explosions, and human activities such as deforestation, fracking of the ground for oil extraction, pesticide sprayed in soil, and the introduction of exotic plant or animal species. Disturbances of sufficient magnitude or duration can profoundly affect an ecosystem and may force an ecosystem to reach a threshold beyond which a different regime of processes and structures predominates. When such thresholds are associated with a critical or bifurcation point, these regime shifts may also be referred to as critical transitions. Human activities that adversely affect ecological resilience such as reduction of biodiversity, exploitation of natural resources, pollution, land use, and anthropogenic climate change are increasingly causing regime shifts in ecosystems, often to less desirable and degraded conditions. Interdisciplinary discourse on resilience now includes consideration of the interactions of humans and ecosystems via socio-ecological systems, and the need for shift from the maximum sustainable yield paradigm to environmental resource management and ecosystem management, which aim to build ecological resilience through "resilience analysis, adaptive resource management, and adaptive governance". Ecological resilience has inspired other fields and continues to challenge the way they interpret resilience, e.g. supply chain resilience. Definitions The IPCC Sixth Assessment Report defines resilience as, “not just the ability to maintain essential function, identity and structure, but also the capacity for transformation.” The IPCC considers resilience both in terms of ecosystem recovery as well as the recovery and adaptation of human societies to natural disasters. The concept of resilience in ecological systems was first introduced by the Canadian ecologist C.S. Holling in order to describe the persistence of natural systems in the face of changes in ecosystem variables due to natural or anthropogenic causes. Resilience has been defined in two ways in ecological literature: as the time required for an ecosystem to return to an equilibrium or steady-state following a perturbation (which is also defined as stability by some authors). This definition of resilience is used in other fields such as physics and engineering, and hence has been termed ‘engineering resilience’ by Holling. as "the capacity of a system to absorb disturbance and reorganize while undergoing change so as to still retain essentially the same function, structure, identity, and feedbacks". The second definition has been termed ‘ecological resilience’, and it presumes the existence of multiple stable states or regimes. For example, some shallow temperate lakes can exist within either clear water regime, which provides many ecosystem services, or a turbid water regime, which provides reduced ecosystem services and can produce toxic algae blooms. The regime or state is dependent upon lake phosphorus cycles, and either regime can be resilient dependent upon the lake's ecology and management. Likewise, Mulga woodlands of Australia can exist in a grass-rich regime that supports sheep herding, or a shrub-dominated regime of no value for sheep grazing. Regime shifts are driven by the interaction of fire, herbivory, and variable rainfall. 
Either state can be resilient dependent upon management. Theory Ecologists Brian Walker, C S Holling and others describe four critical aspects of resilience: latitude, resistance, precariousness, and panarchy. The first three can apply both to a whole system or the sub-systems that make it up. Latitude: the maximum amount a system can be changed before losing its ability to recover (before crossing a threshold which, if breached, makes recovery difficult or impossible). Resistance: the ease or difficulty of changing the system; how “resistant” it is to being changed. Precariousness: how close the current state of the system is to a limit or “threshold.”. Panarchy: the degree to which a certain hierarchical level of an ecosystem is influenced by other levels. For example, organisms living in communities that are in isolation from one another may be organized differently from the same type of organism living in a large continuous population, thus the community-level structure is influenced by population-level interactions. Closely linked to resilience is adaptive capacity, which is the property of an ecosystem that describes change in stability landscapes and resilience. Adaptive capacity in socio-ecological systems refers to the ability of humans to deal with change in their environment by observation, learning and altering their interactions. Human impacts Resilience refers to ecosystem's stability and capability of tolerating disturbance and restoring itself.  If the disturbance is of sufficient magnitude or duration, a threshold may be reached where the ecosystem undergoes a regime shift, possibly permanently. Sustainable use of environmental goods and services requires understanding and consideration of the resilience of the ecosystem and its limits. However, the elements which influence ecosystem resilience are complicated. For example, various elements such as the water cycle, fertility, biodiversity, plant diversity and climate, interact fiercely and affect different systems. There are many areas where human activity impacts upon and is also dependent upon the resilience of terrestrial, aquatic and marine ecosystems. These include agriculture, deforestation, pollution, mining, recreation, overfishing, dumping of waste into the sea and climate change. Agriculture Agriculture can be used as a significant case study in which the resilience of terrestrial ecosystems should be considered. The organic matter (elements carbon and nitrogen) in soil, which is supposed to be recharged by multiple plants, is the main source of nutrients for crop growth. In response to global food demand and shortages, however, intensive agriculture practices including the application of herbicides to control weeds, fertilisers to accelerate and increase crop growth and pesticides to control insects, reduce plant biodiversity while the supply of organic matter to replenish soil nutrients and prevent surface runoff is diminished. This leads to a reduction in soil fertility and productivity. More sustainable agricultural practices would take into account and estimate the resilience of the land and monitor and balance the input and output of organic matter. Deforestation The term deforestation has a meaning that covers crossing the threshold of forest's resilience and losing its ability to return to its originally stable state. To recover itself, a forest ecosystem needs suitable interactions among climate conditions and bio-actions, and enough area. 
In addition, generally, the resilience of a forest system allows recovery from a relatively small scale of damage (such as lightning or landslide) of up to 10 percent of its area. The larger the scale of damage, the more difficult it is for the forest ecosystem to restore and maintain its balance. Deforestation also decreases biodiversity of both plant and animal life and can lead to an alteration of the climatic conditions of an entire area. According to the IPCC Sixth Assessment Report, carbon emissions due to land use and land use changes predominantly come from deforestation, thereby increasing the long-term exposure of forest ecosystems to drought and other climate change-induced damages. Deforestation can also lead to species extinction, which can have a domino effect particularly when keystone species are removed or when a significant number of species is removed and their ecological function is lost. Climate change Overfishing It has been estimated by the United Nations Food and Agriculture Organisation that over 70% of the world's fish stocks are either fully exploited or depleted which means overfishing threatens marine ecosystem resilience and this is mostly by rapid growth of fishing technology. One of the negative effects on marine ecosystems is that over the last half-century the stocks of coastal fish have had a huge reduction as a result of overfishing for its economic benefits. Blue fin tuna is at particular risk of extinction. Depletion of fish stocks results in lowered biodiversity and consequently imbalance in the food chain, and increased vulnerability to disease. In addition to overfishing, coastal communities are suffering the impacts of growing numbers of large commercial fishing vessels in causing reductions of small local fishing fleets. Many local lowland rivers which are sources of fresh water have become degraded because of the inflows of pollutants and sediments. Dumping of waste into the sea Dumping both depends upon ecosystem resilience whilst threatening it. Dumping of sewage and other contaminants into the ocean is often undertaken for the dispersive nature of the oceans and adaptive nature and ability for marine life to process the marine debris and contaminants. However, waste dumping threatens marine ecosystems by poisoning marine life and eutrophication. Poisoning marine life According to the International Maritime Organisation oil spills can have serious effects on marine life. The OILPOL Convention recognized that most oil pollution resulted from routine shipboard operations such as the cleaning of cargo tanks.  In the 1950s, the normal practice was simply to wash the tanks out with water and then pump the resulting mixture of oil and water into the sea. OILPOL 54   prohibited the dumping of oily wastes within a certain distance from land and in 'special areas' where the danger to the environment was especially acute. In 1962 the limits were extended by means of an amendment adopted at a conference organized by IMO. Meanwhile, IMO in 1965 set up a Subcommittee on Oil Pollution, under the auspices of its Maritime Safety committee, to address oil pollution issues. The threat of oil spills to marine life is recognised by those likely to be responsible for the pollution, such as the International Tanker Owners Pollution Federation: The marine ecosystem is highly complex and natural fluctuations in species composition, abundance and distribution are a basic feature of its normal function. 
The extent of damage can therefore be difficult to detect against this background variability. Nevertheless, the key to understanding damage and its importance is whether spill effects result in a downturn in breeding success, productivity, diversity and the overall functioning of the system. Spills are not the only pressure on marine habitats; chronic urban and industrial contamination or the exploitation of the resources they provide are also serious threats. Eutrophication and algal blooms The Woods Hole Oceanographic Institution calls nutrient pollution the most widespread, chronic environmental problem in the coastal ocean. The discharges of nitrogen, phosphorus, and other nutrients come from agriculture, waste disposal, coastal development, and fossil fuel use. Once nutrient pollution reaches the coastal zone, it stimulates harmful overgrowths of algae, which can have direct toxic effects and ultimately result in low-oxygen conditions. Certain types of algae are toxic. Overgrowths of these algae result in harmful algal blooms, which are more colloquially referred to as "red tides" or "brown tides". Zooplankton eat the toxic algae and begin passing the toxins up the food chain, affecting edibles like clams, and ultimately working their way up to seabirds, marine mammals, and humans. The result can be illness and sometimes death. Sustainable development There is increasing awareness that a greater understanding and emphasis of ecosystem resilience is required to reach the goal of sustainable development. A similar conclusion is drawn by Perman et al. who use resilience to describe one of 6 concepts of sustainability; "A sustainable state is one which satisfies minimum conditions for ecosystem resilience through time". Resilience science has been evolving over the past decade, expanding beyond ecology to reflect systems of thinking in fields such as economics and political science. And, as more and more people move into densely populated cities, using massive amounts of water, energy, and other resources, the need to combine these disciplines to consider the resilience of urban ecosystems and cities is of paramount importance. Academic perspectives The interdependence of ecological and social systems has gained renewed recognition since the late 1990s by academics including Berkes and Folke and developed further in 2002 by Folke et al. As the concept of sustainable development has evolved beyond the 3 pillars of sustainable development to place greater political emphasis on economic development. This is a movement which causes wide concern in environmental and social forums and which Clive Hamilton describes as "the growth fetish". The purpose of ecological resilience that is proposed is ultimately about averting our extinction as Walker cites Holling in his paper: "[..] "resilience is concerned with [measuring] the probabilities of extinction” (1973, p. 20)". Becoming more apparent in academic writing is the significance of the environment and resilience in sustainable development. Folke et al state that the likelihood of sustaining development is raised by "Managing for resilience" whilst Perman et al. propose that safeguarding the environment to "deliver a set of services" should be a "necessary condition for an economy to be sustainable". The growing application of resilience to sustainable development has produced a diversity of approaches and scholarly debates. 
The flaw of the free market The challenge of applying the concept of ecological resilience to the context of sustainable development is that it sits at odds with conventional economic ideology and policy making. Resilience questions the free market model within which global markets operate. Inherent to the successful operation of a free market is specialisation which is required to achieve efficiency and increase productivity. This very act of specialisation weakens resilience by permitting systems to become accustomed to and dependent upon their prevailing conditions. In the event of unanticipated shocks; this dependency reduces the ability of the system to adapt to these changes. Correspondingly; Perman et al. note that; "Some economic activities appear to reduce resilience, so that the level of disturbance to which the ecosystem can be subjected to without parametric change taking place is reduced". Moving beyond sustainable development Berkes and Folke table a set of principles to assist with "building resilience and sustainability" which consolidate approaches of adaptive management, local knowledge-based management practices and conditions for institutional learning and self-organisation. More recently, it has been suggested by Andrea Ross that the concept of sustainable development is no longer adequate in assisting policy development fit for today's global challenges and objectives. This is because the concept of sustainable development is "based on weak sustainability" which doesn't take account of the reality of "limits to earth's resilience". Ross draws on the impact of climate change on the global agenda as a fundamental factor in the "shift towards ecological sustainability" as an alternative approach to that of sustainable development. Because climate change is a major and growing driver of biodiversity loss, and that biodiversity and ecosystem functions and services, significantly contribute to climate change adaptation, mitigation and disaster risk reduction, proponents of ecosystem-based adaptation suggest that the resilience of vulnerable human populations and the ecosystem services upon which they depend are critical factors for sustainable development in a changing climate. In environmental policy Scientific research associated with resilience is beginning to play a role in influencing policy-making and subsequent environmental decision making. This occurs in a number of ways: Observed resilience within specific ecosystems drives management practice. When resilience is observed to be low, or impact seems to be reaching the threshold, management response can be to alter human behavior to result in less adverse impact to the ecosystem. Ecosystem resilience impacts upon the way that development is permitted/environmental decision making is undertaken, similar to the way that existing ecosystem health impacts upon what development is permitted. For instance, remnant vegetation in the states of Queensland and New South Wales are classified in terms of ecosystem health and abundance. Any impact that development has upon threatened ecosystems must consider the health and resilience of these ecosystems. This is governed by the Threatened Species Conservation Act 1995 in New South Wales and the Vegetation Management Act 1999 in Queensland. International level initiatives aim at improving socio-ecological resilience worldwide through the cooperation and contributions of scientific and other experts. 
An example of such an initiative is the Millennium Ecosystem Assessment, whose objective is "to assess the consequences of ecosystem change for human well-being and the scientific basis for action needed to enhance the conservation and sustainable use of those systems and their contribution to human well-being". Similarly, the stated aim of the United Nations Environment Programme is "to provide leadership and encourage partnership in caring for the environment by inspiring, informing, and enabling nations and peoples to improve their quality of life without compromising that of future generations". Environmental management in legislation Ecological resilience and the thresholds by which resilience is defined are closely interrelated in the way that they influence environmental policy-making, legislation and subsequently environmental management. The ability of ecosystems to recover from certain levels of environmental impact is not explicitly noted in legislation; however, because of ecosystem resilience, some levels of environmental impact associated with development are made permissible by environmental policy-making and ensuing legislation. Some examples of the consideration of ecosystem resilience within legislation include: Environmental Planning and Assessment Act 1979 (NSW) – A key goal of the Environmental Assessment procedure is to determine whether proposed development will have a significant impact upon ecosystems. Protection of the Environment (Operations) Act 1997 (NSW) – Pollution control is dependent upon keeping levels of pollutants emitted by industrial and other human activities below levels which would be harmful to the environment and its ecosystems. Environmental protection licenses are administered to maintain the environmental objectives of the POEO Act, and breaches of license conditions can attract heavy penalties and in some cases criminal convictions. Threatened Species Conservation Act 1995 (NSW) – This Act seeks to protect threatened species while balancing their protection with development. History The theoretical basis for many of the ideas central to climate resilience has existed since the 1960s. Originally an idea defined for strictly ecological systems, resilience in ecology was initially outlined by C.S. Holling as the capacity for ecological systems and relationships within those systems to persist and absorb changes to "state variables, driving variables, and parameters." This definition helped form the foundation for the notion of ecological equilibrium: the idea that the behavior of natural ecosystems is dictated by a homeostatic drive towards some stable set point. Under this school of thought (which maintained quite a dominant status during this time period), ecosystems were perceived to respond to disturbances largely through negative feedback systems – if there was a change, the ecosystem would act to mitigate that change as much as possible and attempt to return to its prior state. As more scientific research in ecological adaptation and natural resource management was conducted, it became clear that natural systems were often subject to dynamic, transient behaviors that changed how they reacted to significant changes in state variables: rather than work back towards a predetermined equilibrium, the absorbed change was harnessed to establish a new baseline to operate under. Rather than minimize imposed changes, ecosystems could integrate and manage those changes, and use them to fuel the evolution of novel characteristics. 
This new perspective of resilience, as a concept that works synergistically with elements of uncertainty and entropy, first began to facilitate changes in the fields of adaptive management and environmental resource management, again building on foundations laid by Holling and colleagues. By the mid-1970s, resilience began gaining momentum as an idea in anthropology, culture theory, and other social sciences. There was significant work in these relatively non-traditional fields that helped facilitate the evolution of the resilience perspective as a whole. Part of the reason resilience began moving away from an equilibrium-centric view and towards a more flexible, malleable description of social-ecological systems was work such as that of Andrew Vayda and Bonnie McCay in the field of social anthropology, where more modern versions of resilience were deployed to challenge traditional ideals of cultural dynamics.
Biology and health sciences
Ecology
Biology
13233321
https://en.wikipedia.org/wiki/Clostridium%20tetani
Clostridium tetani
Clostridium tetani is a common soil bacterium and the causative agent of tetanus. Vegetative cells of Clostridium tetani are usually rod-shaped and up to 2.5 μm long, but they become enlarged and tennis racket- or drumstick-shaped when forming spores. C. tetani spores are extremely hardy and can be found globally in soil or in the gastrointestinal tract of animals. If inoculated into a wound, C. tetani can grow and produce a potent toxin, tetanospasmin, which interferes with motor neurons, causing tetanus. The toxin's action can be prevented with tetanus toxoid vaccines, which are often administered to children worldwide. Characteristics Clostridium tetani is a rod-shaped, Gram-positive bacterium, typically up to 0.5 μm wide and 2.5 μm long. It is motile by way of various flagella that surround its body. C. tetani cannot grow in the presence of oxygen. It grows best at temperatures ranging from 33 to 37 °C. Upon exposure to various conditions, C. tetani can shed its flagella and form a spore. Each cell can form a single spore, generally at one end of the cell, giving the cell a distinctive drumstick shape. C. tetani spores are extremely hardy and are resistant to heat, various antiseptics, and boiling for several minutes. The spores are long-lived and are distributed worldwide in soils as well as in the intestines of various livestock and companion animals. Evolution Clostridium tetani is classified within the genus Clostridium, a broad group of over 150 species of Gram-positive bacteria. C. tetani falls within a cluster of nearly 100 species that are more closely related to each other than they are to any other genus. This cluster includes other pathogenic Clostridium species such as C. botulinum and C. perfringens. The closest relative to C. tetani is C. cochlearium. Other Clostridium species can be divided into a number of genetically related groups, many of which are more closely related to members of other genera than they are to C. tetani. Examples of this include the human pathogen C. difficile, which is more closely related to members of the genus Peptostreptococcus than to C. tetani. Role in disease While C. tetani is frequently benign in the soil or in the intestinal tracts of animals, it can sometimes cause the severe disease tetanus. Disease generally begins with spores entering the body through a wound. In deep wounds, such as those from a puncture or a contaminated needle injection, the combination of tissue death and limited exposure to surface air can result in a very low-oxygen environment, allowing C. tetani spores to germinate and grow. As C. tetani grows at the wound site, it releases the toxins tetanolysin and tetanospasmin as cells lyse. The function of tetanolysin is unclear, although it may help C. tetani to establish infection within a wound. Tetanospasmin ("tetanus toxin") is a potent toxin with an estimated lethal dose of less than 2.5 nanograms per kilogram of body weight, and is responsible for the symptoms of tetanus. Tetanospasmin spreads via the lymphatic system and bloodstream throughout the body, where it is taken up into various parts of the nervous system. In the nervous system, tetanospasmin acts by blocking the release of the inhibitory neurotransmitters glycine and gamma-aminobutyric acid at motor nerve endings. This blockade leads to the widespread activation of motor neurons and spasming of muscles throughout the body. 
These muscle spasms generally begin at the top of the body and move down, starting about 8 days after infection with lockjaw (spasm of the jaw muscles), followed by spasms of the abdominal muscles and the limbs. Muscle spasms continue for several weeks. The gene encoding tetanospasmin is found on a plasmid carried by many strains of C. tetani; strains of bacteria lacking the plasmid are unable to produce toxin. The function of tetanospasmin in bacterial physiology is unknown. Treatment and prevention Clostridium tetani is susceptible to a number of antibiotics, including chloramphenicol, clindamycin, erythromycin, penicillin G, and tetracycline. However, the usefulness of treating C. tetani infections with antibiotics remains unclear. Instead, tetanus is often treated with tetanus immune globulin to bind up circulating tetanospasmin. Additionally, benzodiazepines or muscle relaxants may be given to reduce the effects of the muscle spasms. Damage from C. tetani infection is generally prevented by administration of a tetanus vaccine consisting of tetanospasmin inactivated by formaldehyde, called tetanus toxoid. This is made commercially by growing large quantities of C. tetani in fermenters, then purifying the toxin and inactivating it in 40% formaldehyde for 4–6 weeks. The toxoid is generally coadministered with diphtheria toxoid and some form of pertussis vaccine as DPT vaccine or DTaP. This is given in several doses spaced out over months or years to elicit an immune response that protects the host from the effects of the toxin. Research Clostridium tetani can be grown on various anaerobic growth media such as thioglycolate media, casein hydrolysate media, and blood agar. Cultures grow particularly well on media at a neutral to alkaline pH, supplemented with reducing agents. The genome of a C. tetani strain has been sequenced, containing 2.80 million base pairs with 2,373 protein-coding genes. History Clinical descriptions of tetanus associated with wounds are found at least as far back as the 4th century BCE, in Hippocrates' Aphorisms. The first clear connection to the soil was made in 1884, when Arthur Nicolaier showed that animals injected with soil samples would develop tetanus. In 1889, C. tetani was isolated from a human victim by Kitasato Shibasaburō, who later showed that the organism could produce disease when injected into animals, and that the toxin could be neutralized by specific antibodies. In 1897, Edmond Nocard showed that tetanus antitoxin induced passive immunity in humans, and could be used for prophylaxis and treatment. In World War I, injection of tetanus antiserum from horses was widely used as a prophylaxis against tetanus in wounded soldiers, leading to a dramatic decrease in tetanus cases over the course of the war. The modern method of inactivating tetanus toxin with formaldehyde was developed by Gaston Ramon in the 1920s; this led to the development of the tetanus toxoid vaccine by P. Descombey in 1924, which was widely used to prevent tetanus induced by battle wounds during World War II.
Biology and health sciences
Gram-positive bacteria
Plants
7115718
https://en.wikipedia.org/wiki/Ball-and-stick%20model
Ball-and-stick model
In chemistry, the ball-and-stick model is a molecular model of a chemical substance which displays both the three-dimensional position of the atoms and the bonds between them. The atoms are typically represented by spheres, connected by rods which represent the bonds. Double and triple bonds are usually represented by two or three curved rods, respectively, or alternatively by correctly positioned sticks for the sigma and pi bonds. In a good model, the angles between the rods should be the same as the angles between the bonds, and the distances between the centers of the spheres should be proportional to the distances between the corresponding atomic nuclei. The chemical element of each atom is often indicated by the sphere's color. In a ball-and-stick model, the radius of the spheres is usually much smaller than the rod lengths, in order to provide a clearer view of the atoms and bonds throughout the model. As a consequence, the model does not provide a clear insight into the space occupied by the molecule. In this respect, ball-and-stick models are distinct from space-filling (calotte) models, where the sphere radii are proportional to the Van der Waals atomic radii in the same scale as the atom distances, and therefore show the occupied space but not the bonds. Ball-and-stick models can be physical artifacts or virtual computer models. The former are usually built from molecular modeling kits, consisting of a number of coil springs or plastic or wood sticks, and a number of plastic balls with pre-drilled holes. The sphere colors commonly follow the CPK coloring. Some university courses on chemistry require students to buy such models as learning material. History In 1865, German chemist August Wilhelm von Hofmann was the first to make ball-and-stick molecular models. He used such models in lectures at the Royal Institution of Great Britain. Specialist companies manufacture kits and models to order. One of the earlier companies was Woosters at Bottisham, Cambridgeshire, UK. Besides tetrahedral, trigonal and octahedral holes, there were all-purpose balls with 24 holes. These models allowed rotation about the single rod bonds, which could be both an advantage (showing molecular flexibility) and a disadvantage (models are floppy). The approximate scale was 5 cm per ångström (0.5 m/nm or 500,000,000:1), but was not consistent over all elements. The Beevers Miniature Models company in Edinburgh (now operating as Miramodus) produced small models beginning in 1961 using PMMA balls and stainless steel rods. In these models, the use of individually drilled balls with precise bond angles and bond lengths enabled large crystal structures to be accurately created in a light and rigid form.
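The 5 cm per ångström scale quoted above makes the conversion from real bond lengths to physical model dimensions a simple multiplication. The following is a minimal sketch of that arithmetic; the scale constant, example bond lengths and function names are illustrative assumptions, not values taken from any particular kit.

```python
# A minimal sketch of the model-scale arithmetic (assumed scale: 5 cm per angstrom,
# roughly 500,000,000:1). Names and example bond lengths are illustrative only.
SCALE_CM_PER_ANGSTROM = 5.0

def model_rod_length_cm(bond_length_angstrom):
    """Rod length in a physical ball-and-stick model for a given bond length."""
    return bond_length_angstrom * SCALE_CM_PER_ANGSTROM

if __name__ == "__main__":
    # Typical covalent bond lengths: C-H about 1.09 angstroms, C-C about 1.54 angstroms.
    for name, length in [("C-H", 1.09), ("C-C", 1.54)]:
        print(f"{name}: {model_rod_length_cm(length):.1f} cm in the model")
```

At this scale a C-C bond would be represented by a rod roughly 7.7 cm long, which is why such kits produce large, hand-sized models of even small molecules.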
Physical sciences
Substance
Chemistry
14383139
https://en.wikipedia.org/wiki/Allotropes%20of%20sulfur
Allotropes of sulfur
The element sulfur exists as many allotropes. In number of allotropes, sulfur is second only to carbon. In addition to the allotropes, each allotrope often exists in polymorphs (different crystal structures of the same covalently bonded Sn molecules) delineated by Greek prefixes (α, β, etc.). Furthermore, because elemental sulfur has been an item of commerce for centuries, its various forms are given traditional names. Early workers identified some forms that have later proved to be single or mixtures of allotropes. Some forms have been named for their appearance, e.g. "mother of pearl sulfur", or alternatively named for a chemist who was pre-eminent in identifying them, e.g. "Muthmann's sulfur I" or "Engel's sulfur". The most commonly encountered form of sulfur is the orthorhombic polymorph of , which adopts a puckered ring – or "crown" – structure. Two other polymorphs are known, also with nearly identical molecular structures. In addition to , sulfur rings of 6, 7, 9–15, 18, and 20 atoms are known. At least five allotropes are uniquely formed at high pressures, two of which are metallic. The number of sulfur allotropes reflects the relatively strong S−S bond of 265 kJ/mol. Furthermore, unlike most elements, the allotropes of sulfur can be manipulated in solutions of organic solvents and are analysed by HPLC. Phase diagram The pressure-temperature (P-T) phase diagram for sulfur is complex (see image). The region labeled I (a solid region), is α-sulfur. High-pressure solid allotropes In a high-pressure study at ambient temperatures, four new solid forms, termed II, III, IV, V have been characterized, where α-sulfur is form I. Solid forms II and III are polymeric, while IV and V are metallic (and are superconductive below 10 K and 17 K, respectively). Laser irradiation of solid samples produces three sulfur forms below 200–300 kbar (20–30 GPa). Solid cyclo allotrope preparation Two methods exist for the preparation of the cyclo-sulfur allotropes. One of the methods, which is most famous for preparing hexasulfur, is to treat hydrogen polysulfides with polysulfur dichloride: A second strategy uses titanocene pentasulfide as a source of the unit. This complex is easily made from polysulfide solutions: Titanocene pentasulfide reacts with polysulfur chloride: Solid cyclo-sulfur allotropes Cyclo-hexasulfur, cyclo- This allotrope was first prepared by M. R. Engel in 1891 by treating thiosulfate with HCl. Cyclo- is orange-red and forms a rhombohedral crystal. It is called ρ-sulfur, ε-sulfur, Engel's sulfur and Aten's sulfur. Another method of preparation involves the reaction of a polysulfane with sulfur monochloride: (dilute solution in diethyl ether) The sulfur ring in cyclo- has a "chair" conformation, reminiscent of the chair form of cyclohexane. All of the sulfur atoms are equivalent. Cyclo-heptasulfur, cyclo- It is a bright yellow solid. Four (α-, β-, γ-, δ-) forms of cyclo-heptasulfur are known. Two forms (γ-, δ-) have been characterized. The cyclo- ring has an unusual range of bond lengths of 199.3–218.1 pm. It is said to be the least stable of all of the sulfur allotropes. Cyclo-octasulfur, cyclo- Octasulfur contains puckered rings, and is known in three forms that differ only in the way the rings are packed in the crystal. α-Sulfur α-Sulfur is the form most commonly found in nature. When pure it has a greenish-yellow colour (traces of cyclo- in commercially available samples make it appear yellower). 
It is practically insoluble in water and is a good electrical insulator with poor thermal conductivity. It is quite soluble in carbon disulfide: 35.5 g/100 g solvent at 25 °C. It has an orthorhombic crystal structure. α-Sulfur is the predominant form found in "flowers of sulfur", "roll sulfur" and "milk of sulfur". It contains puckered rings, alternatively called a crown shape. The S–S bond lengths are all 203.7 pm and the S-S-S angles are 107.8° with a dihedral angle of 98°. At 95.3 °C, α-sulfur converts to β-sulfur. β-Sulfur β-Sulfur is a yellow solid with a monoclinic crystal form and is less dense than α-sulfur. It is unusual because it is only stable above 95.3 °C; below this temperature it converts to α-sulfur. β-Sulfur can be prepared by crystallising at 100 °C and cooling rapidly to slow down formation of α-sulfur. It has a melting point variously quoted as 119.6 °C and 119.8 °C but as it decomposes to other forms at around this temperature the observed melting point can vary. The 119 °C melting point has been termed the "ideal melting point" and the typical lower value (114.5 °C) when decomposition occurs, the "natural melting point". γ-Sulfur γ-Sulfur was first prepared by F.W. Muthmann in 1890. It is sometimes called "nacreous sulfur" or "mother of pearl sulfur" because of its appearance. It crystallises in pale yellow monoclinic needles. It is the densest form of the three. It can be prepared by slowly cooling molten sulfur that has been heated above 150 °C or by chilling solutions of sulfur in carbon disulfide, ethyl alcohol or hydrocarbons. It is found in nature as the mineral rosickyite. It has been tested in carbon fiber-stabilized form as a cathode in lithium-sulfur (Li-S) batteries and was observed to stop the formation of polysulfides that compromise battery life. Cyclo- (n = 9–15, 18, 20) These allotropes have been synthesised by various methods for example, treating titanocene pentasulfide and a dichlorosulfane of suitable sulfur chain length, : or alternatively treating a dichlorosulfane, and a polysulfane, : , , and can also be prepared from . With the exception of cyclo-, the rings contain S–S bond lengths and S-S-S bond angle that differ one from another. Cyclo- is the most stable cyclo-allotrope. Its structure can be visualised as having sulfur atoms in three parallel planes, 3 in the top, 6 in the middle and three in the bottom. Two forms (α-, β-) of cyclo- are known, one of which has been characterized. Two forms of cyclo- are known where the conformation of the ring is different. To differentiate these structures, rather than using the normal crystallographic convention of α-, β-, etc., which in other cyclo- compounds refer to different packings of essentially the same conformer, these two conformers have been termed endo- and exo-. Cyclo-·cyclo- adduct This adduct is produced from a solution of cyclo- and cyclo- in . It has a density midway between cyclo- and cyclo-. The crystal consists of alternate layers of cyclo- and cyclo-. This material is a rare example of an allotrope that contains molecules of different sizes. Catena sulfur forms The term "Catena sulfur forms" refers to mixtures of sulfur allotropes that are high in catena (polymer chain) sulfur. The naming of the different forms is very confusing and care has to be taken to determine what is being described because some names are used interchangeably. 
Amorphous sulfur Amorphous sulfur is the quenched product from molten sulfur hotter than the λ-transition at 160 °C, where polymerization yields catena sulfur molecules. (Above this temperature, the properties of the liquid melt change remarkably. For example, the viscosity increases more than 10000-fold as the temperature increases through the transition). As it anneals, solid amorphous sulfur changes from its initial glassy form, to a plastic form, hence its other names of plastic, and glassy or vitreous sulfur. The plastic form is also called χ-sulfur. Amorphous sulfur contains a complex mixture of catena-sulfur forms mixed with cyclo-forms. Insoluble sulfur Insoluble sulfur is obtained by washing quenched liquid sulfur with . It is sometimes called polymeric sulfur, μ-S or ω-S. Fibrous (φ-) sulfur Fibrous (φ-) sulfur is a mixture of the allotropic ψ- form and γ-cyclo-. ω-Sulfur ω-Sulfur is a commercially available product prepared from amorphous sulfur that has not been stretched prior to extraction of soluble forms with . It sometimes called "white sulfur of Das" or supersublimated sulfur. It is a mixture of ψ-sulfur and lamina sulfur. The composition depends on the exact method of production and the sample's history. One well known commercial form is "Crystex". ω-sulfur is used in the vulcanization of rubber. λ-Sulfur λ-Sulfur is molten sulfur just above the melting temperature. It is a mixture containing mostly cyclo-. Cooling λ-sulfur slowly gives predominantly β-sulfur. μ-Sulfur μ-Sulfur is the name applied to solid insoluble sulfur and the melt prior to quenching. π-Sulfur π-Sulfur is a dark-coloured liquid formed when λ-sulfur is left to stay molten. It contains mixture of rings. Biradical catena () chains This term is applied to biradical catena-chains in sulfur melts or the chains in the solid. Solid catena allotropes The production of pure forms of catena-sulfur has proved to be extremely difficult. Complicating factors include the purity of the starting material and the thermal history of the sample. ψ-Sulfur This form, also called fibrous sulfur or ω1-sulfur, has been well characterized. It has a density of 2.01 g·cm−3 (α-sulfur 2.069 g·cm−3) and decomposes around its melting point of 104 °C. It consists of parallel helical sulfur chains. These chains have both left and right-handed "twists" and a radius of 95 pm. The S–S bond length is 206.6 pm, the S-S-S bond angle is 106° and the dihedral angle is 85.3°, (comparable figures for α-sulfur are 203.7 pm, 107.8° and 98.3°). Lamina sulfur Lamina sulfur has not been well characterized but is believed to consist of criss-crossed helices. It is also called χ-sulfur or ω2-sulfur. High-temperature gaseous allotropes Monatomic sulfur can be produced from photolysis of carbonyl sulfide. Disulfur, Disulfur, , is the predominant species in sulfur vapour above 720 °C (a temperature above that shown in the phase diagram); at low pressure (1 mmHg) at 530 °C, it comprises 99% of the vapor. It is a triplet diradical (like dioxygen and sulfur monoxide), with an S−S bond length of 188.7 pm. The blue colour of burning sulfur is due to the emission of light by the molecule produced in the flame. The molecule has been trapped in the compound (E = As, Sb) for crystallographic measurements, produced by treating elemental sulfur with excess iodine in liquid sulfur dioxide. The cation has an "open-book" structure, in which each ion donates the unpaired electron in the π* molecular orbital to a vacant orbital of the molecule. 
Trisulfur, is found in sulfur vapour, comprising 10% of vapour species at 440 °C and 10 mmHg. It is cherry red in colour, with a bent structure, similar to ozone, . Tetrasulfur, has been detected in the vapour phase, but it has not been well characterized. Diverse structures (e.g. chains, branched chains and rings) have been proposed. Theoretical calculations suggest a cyclic structure. Pentasulfur, Pentasulfur has been detected in sulfur vapours but has not been isolated in pure form. List of allotropes and forms Allotropes are in Bold.
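The two preparation routes described earlier (treating a hydrogen polysulfane with a dichlorosulfane, or treating titanocene pentasulfide with a dichlorosulfane) are generally written as schemes of the following form. The stoichiometries shown here are a reconstruction offered as an assumption, not a quotation from the source:

\[ \mathrm{H_2S_x + S_yCl_2 \longrightarrow \mathit{cyclo}\mbox{-}S_{x+y} + 2\,HCl} \]

\[ \mathrm{(C_5H_5)_2TiS_5 + S_yCl_2 \longrightarrow \mathit{cyclo}\mbox{-}S_{y+5} + (C_5H_5)_2TiCl_2} \]

In both cases the ring size of the product is set by the combined chain lengths of the two reagents, which is why these routes allow specific cyclo-sulfur allotropes to be targeted.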
Physical sciences
Group 16
Chemistry
5444531
https://en.wikipedia.org/wiki/Oriental%20rat%20flea
Oriental rat flea
The Oriental rat flea (Xenopsylla cheopis), also known as the tropical rat flea or the rat flea, is a parasite of rodents, primarily of the genus Rattus, and is a primary vector for bubonic plague and murine typhus. This occurs when a flea that has fed on an infected rodent bites a human, although this flea can live on any warm blooded mammal. Body structure The Oriental rat flea has no genal or pronotal combs. This characteristic can be used to differentiate the Oriental rat flea from the cat flea, dog flea, and other fleas. The flea's body is about one tenth of an inch long (about ). Its body is constructed to make it easier to jump long distances. The flea's body consists of three regions: head, thorax, and abdomen. The head and the thorax have rows of bristles (called combs), and the abdomen consists of eight visible segments. A flea's mouth has two functions: one for squirting saliva or partly digested blood into the bite, and one for sucking up blood from the host. This process mechanically transmits pathogens that may cause diseases it might carry. Fleas smell exhaled carbon dioxide from humans and animals and jump rapidly to the source to feed on the newly found host. The flea is wingless so it can not fly, but it can jump long distances with the help of small, powerful legs. A flea's leg consists of four parts: the part that is closest to the body is the coxa; next are the femur, tibia, and tarsus. A flea can use its legs to jump up to 200 times its own body length (about ). Life cycle There are four stages in a flea's life. The first stage is the egg stage. Microscopic white eggs fall easily from the female to the ground or from the animal she lays on. If they are laid on an animal, they soon fall off in the dust or in the animal's bedding. If the eggs do fall immediately on the ground, then they fall into crevices on the floor where they will be safe until they hatch one to ten days later (depending on the environment that they live in, it may take longer to hatch). They hatch into a larva that looks very similar to a worm and is about two millimeters long. It only has a small body and a mouth part. At this stage, the flea does not drink blood; instead it eats dead skin cells, flea droppings, and other smaller parasites lying around them in the dust. When the larva is mature it makes a silken cocoon around itself and pupates. The flea remains a pupa from one week to six months changing in a process called metamorphosis. When the flea emerges, it begins the final cycle, called the adult stage. A flea can now suck blood from hosts and mate with other fleas. A single female flea can mate once and lay eggs every day with up to 50 eggs per day. Experimentally, it has been shown that the fleas flourish in dry climatic conditions with temperatures of , they can live up to a year and can stay in the cocoon stage for up to a year if the conditions are not favourable. History The Oriental rat flea was collected in Shendi, Sudan by Charles Rothschild along with Karl Jordan and described in 1903. He named it cheopis after the Cheops pyramids. Disease transmission This species can act as a vector for plague, Yersinia pestis, Rickettsia typhi and also act as a host for the tapeworms Hymenolepis diminuta and Hymenolepis nana. Diseases can be transmitted from one generation of fleas to the next through the eggs. Gallery
Biology and health sciences
Insects: General
Animals
5446107
https://en.wikipedia.org/wiki/Plain%20weave
Plain weave
Plain weave (also called tabby weave, linen weave or taffeta weave) is the most basic of the three fundamental types of textile weaves (along with satin weave and twill). It is strong and hard-wearing, and is used for fashion and furnishing fabrics. Fabrics with a plain weave are generally strong, durable, and have a smooth surface. They are often used for a variety of applications, including clothing, home textiles, and industrial fabrics. In plain weave cloth, the warp and weft threads cross at right angles, aligned so that they form a simple criss-cross pattern. Each weft thread crosses the warp threads by going over one, then under the next, and so on. The next weft thread goes under the warp threads that its neighbor went over, and vice versa. Balanced plain weaves are fabrics in which the warp and weft are made of threads of the same weight (size) and the same number of ends per inch as picks per inch. Basketweave is a variation of plain weave in which two or more threads are bundled and then woven as one in the warp or weft, or both. A balanced plain weave can be identified by its checkerboard-like appearance. It is also known as one-up-one-down weave or over-and-under pattern. Examples of fabric with plain weave are chiffon, organza, percale and taffeta. Etymology According to the 12th-century geographer al-Idrīsī, in Andalusī-era Almería, imitations of Iraqī and Persian silks called ʻattābī (عَتَّابِيِّ) were manufactured, which David Jacoby identifies as "a taffeta fabric made of silk and cotton (natural fibers) originally produced in Attabiya, a district of Baghdad." The word was adopted into Medieval Latin as attabi, then French as tabis and English as tabby, as in "tabby weave". End uses Its uses range from heavy and coarse canvas and blankets made of thick yarns to the lightest and finest cambrics and muslins made of extremely fine yarns. Chiffon, organza, percale and taffeta are also plain weave fabrics.
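The over-one, under-one interlacement described above can be visualised as a simple alternating grid. The short sketch below is only an illustration of that pattern; the function and variable names are my own and are not drawn from any weaving software.

```python
# A small illustrative sketch: print the interlacement grid of a plain weave,
# where "X" marks a cell in which the warp end passes over the weft pick and
# "." marks a cell in which it passes under.
def plain_weave(picks, ends):
    grid = []
    for pick in range(picks):          # each weft pick (row)
        row = []
        for end in range(ends):        # each warp end (column)
            # Adjacent picks are offset by one, giving the checkerboard pattern.
            row.append("X" if (pick + end) % 2 == 0 else ".")
        grid.append("".join(row))
    return grid

if __name__ == "__main__":
    for line in plain_weave(6, 8):
        print(line)
```

Running the sketch prints the checkerboard-like appearance by which a balanced plain weave is identified; a basketweave would be obtained by repeating each row and column pattern two (or more) cells at a time.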
Technology
Weaving
null
5449464
https://en.wikipedia.org/wiki/Vizing%27s%20theorem
Vizing's theorem
In graph theory, Vizing's theorem states that every simple undirected graph may be edge colored using a number of colors that is at most one larger than the maximum degree Δ of the graph. At least Δ colors are always necessary, so the undirected graphs may be partitioned into two classes: "class one" graphs for which Δ colors suffice, and "class two" graphs for which Δ + 1 colors are necessary. A more general version of Vizing's theorem states that every undirected multigraph without loops can be colored with at most Δ + μ colors, where μ is the multiplicity of the multigraph. The theorem is named for Vadim G. Vizing, who published it in 1964. Discovery The theorem discovered by Soviet mathematician Vadim G. Vizing was published in 1964 when Vizing was working in Novosibirsk and became known as Vizing's theorem. Indian mathematician R. P. Gupta independently discovered the theorem while undertaking his doctorate (1965–1967). Examples When Δ = 1, the graph must itself be a matching, with no two edges adjacent, and its edge chromatic number is one. That is, all graphs with Δ = 1 are of class one. When Δ = 2, the graph must be a disjoint union of paths and cycles. If all cycles are even, they can be 2-edge-colored by alternating the two colors around each cycle. However, if there exists at least one odd cycle, then no 2-edge-coloring is possible. That is, a graph with Δ = 2 is of class one if and only if it is bipartite. Proof This proof is inspired by Diestel (2000). Let G = (V, E) be a simple undirected graph. We proceed by induction on m, the number of edges. If the graph is empty, the theorem trivially holds. Let m > 0 and suppose a proper (Δ+1)-edge-coloring exists for all G − xy where xy ∈ E. We say that color α ∈ {1, ..., Δ+1} is missing in x ∈ V with respect to a proper (Δ+1)-edge-coloring c if c(xy) ≠ α for all y ∈ N(x). Also, let the α/β-path from x denote the unique maximal path starting in x with an α-colored edge and alternating the colors of edges (the second edge has color β, the third edge has color α and so on); its length can be 0. Note that if c is a proper (Δ+1)-edge-coloring of G then every vertex has a missing color with respect to c. Suppose that no proper (Δ+1)-edge-coloring of G exists. This is equivalent to this statement: (1) Let xy ∈ E, let c be an arbitrary proper (Δ+1)-edge-coloring of G − xy, and let α be missing from x and β be missing from y with respect to c. Then the α/β-path from y ends in x. This is equivalent, because if (1) doesn't hold, then we can interchange the colors α and β on the α/β-path and set the color of xy to be α, thus creating a proper (Δ+1)-edge-coloring of G from c. The other way around, if a proper (Δ+1)-edge-coloring exists, then we can delete xy, restrict the coloring, and (1) won't hold either. Now, let xy_0 ∈ E, let c_0 be a proper (Δ+1)-edge-coloring of G − xy_0 and let α be missing in x with respect to c_0. We define y_0, ..., y_k to be a maximal sequence of neighbours of x such that c_0(xy_i) is missing in y_{i−1} with respect to c_0 for all 0 < i ≤ k. We define colorings c_1, ..., c_k by c_i(xy_j) = c_0(xy_{j+1}) for all 0 ≤ j < i, with c_i(xy_i) not defined, and c_i equal to c_0 otherwise. Then c_i is a proper (Δ+1)-edge-coloring of G − xy_i due to the definition of y_0, ..., y_k. Also, note that the missing colors in x are the same with respect to c_i for all 0 ≤ i ≤ k. Let β be the color missing in y_k with respect to c_0; then β is also missing in y_k with respect to c_i for all 0 ≤ i ≤ k. Note that β cannot be missing in x, otherwise we could easily extend c_k; therefore an edge with color β is incident to x for all c_j. From the maximality of k, there exists 1 ≤ i ≤ k such that c_0(xy_i) = β. From the definition of c_1, ..., c_k this holds: c_0(xy_i) = c_{i−1}(xy_i) = c_k(xy_{i−1}) = β. Let P be the α/β-path from y_k with respect to c_k. From (1), P has to end in x. But α is missing in x, so it has to end with an edge of color β. Therefore, the last edge of P is y_{i−1}x. Now, let P' be the α/β-path from y_{i−1} with respect to c_{i−1}. 
Since P' is uniquely determined and the inner edges of P are not changed in c_0, ..., c_k, the path P' uses the same edges as P in reverse order and visits y_k. The edge leading to y_k clearly has color α. But β is missing in y_k, so P' ends in y_k, which is a contradiction with (1) above. Classification of graphs Several authors have provided additional conditions that classify some graphs as being of class one or class two, but do not provide a complete classification. For instance, if the vertices of maximum degree Δ in a graph G form an independent set, or more generally if the induced subgraph for this set of vertices is a forest, then G must be of class one. Erdős and Wilson (1977) showed that almost all graphs are of class one. That is, in the Erdős–Rényi model of random graphs, in which all n-vertex graphs are equally likely, let p(n) be the probability that an n-vertex graph drawn from this distribution is of class one; then p(n) approaches one in the limit as n goes to infinity. More precise bounds are known on the rate at which p(n) converges to one. Planar graphs Vizing (1965) showed that a planar graph is of class one if its maximum degree is at least eight. In contrast, he observed that for any maximum degree in the range from two to five, there exist planar graphs of class two. For degree two, any odd cycle is such a graph, and for degree three, four, and five, these graphs can be constructed from platonic solids by replacing a single edge by a path of two adjacent edges. Vizing's planar graph conjecture states that all simple, planar graphs with maximum degree six or seven are of class one, closing the remaining possible cases. Independently, Zhang (2000) and Sanders and Zhao (2001) partially proved Vizing's planar graph conjecture by showing that all planar graphs with maximum degree seven are of class one. Thus, the only case of the conjecture that remains unsolved is that of maximum degree six. This conjecture has implications for the total coloring conjecture. The planar graphs of class two constructed by subdivision of the platonic solids are not regular: they have vertices of degree two as well as vertices of higher degree. The four color theorem (proved by Appel and Haken) on vertex coloring of planar graphs is equivalent to the statement that every bridgeless 3-regular planar graph is of class one. Graphs on nonplanar surfaces In 1969, Branko Grünbaum conjectured that every 3-regular graph with a polyhedral embedding on any two-dimensional oriented manifold such as a torus must be of class one. In this context, a polyhedral embedding is a graph embedding such that every face of the embedding is topologically a disk and such that the dual graph of the embedding is simple, with no self-loops or multiple adjacencies. If true, this would be a generalization of the four color theorem, which was shown by Tait to be equivalent to the statement that 3-regular graphs with a polyhedral embedding on a sphere are of class one. However, Kochol showed the conjecture to be false by finding snarks that have polyhedral embeddings on high-genus orientable surfaces. Based on this construction, he also showed that it is NP-complete to tell whether a polyhedrally embedded graph is of class one. Algorithms Misra and Gries (1992) describe a polynomial time algorithm for coloring the edges of any graph with Δ + 1 colors, where Δ is the maximum degree of the graph. That is, the algorithm uses the optimal number of colors for graphs of class two, and uses at most one more color than necessary for all graphs. 
Their algorithm follows the same strategy as Vizing's original proof of his theorem: it starts with an uncolored graph, and then repeatedly finds a way of recoloring the graph in order to increase the number of colored edges by one. More specifically, suppose that xy is an uncolored edge in a partially colored graph. The algorithm of Misra and Gries may be interpreted as constructing a directed pseudoforest P (a graph in which each vertex has at most one outgoing edge) on the neighbors of x: for each neighbor z of x, the algorithm finds a color that is not used by any of the edges incident to z, finds the vertex w (if it exists) for which the edge xw has that color, and adds zw as an edge to P. There are two cases: If the pseudoforest P constructed in this way contains a path from y to a vertex v that has no outgoing edges in P, then there is a color c that is available both at x and v. Recoloring edge xv with color c allows the remaining edge colors to be shifted one step along this path: for each vertex u in the path, edge xu takes the color that was previously used by the successor of u in the path. This leads to a new coloring that includes edge xy. If, on the other hand, the path starting from y in the pseudoforest leads to a cycle, let u be the neighbor of x at which the path joins the cycle, let c be the color of edge xu, and let d be a color that is not used by any of the edges at vertex x. Then swapping colors c and d on a Kempe chain either breaks the cycle or the edge on which the path joins the cycle, leading to the previous case. With some simple data structures to keep track of the colors that are used and available at each vertex, the construction of P and the recoloring steps of the algorithm can all be implemented in time O(n), where n is the number of vertices in the input graph. Since these steps need to be repeated m times, with each repetition increasing the number of colored edges by one, the total time is O(mn). In an unpublished technical report, Gabow et al. (1985) claimed a faster time bound for the same problem of coloring with Δ + 1 colors. History Vizing mentioned that his work was motivated by a theorem of Shannon (1949) showing that multigraphs could be colored with at most (3/2)Δ colors. Although Vizing's theorem is now standard material in many graph theory textbooks, Vizing had trouble publishing the result initially, and his paper on it appears in an obscure journal, Diskret. Analiz.
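The class one / class two distinction stated above can be made concrete with a brute-force check on small graphs. The sketch below only illustrates the statement of the theorem, not the Misra–Gries algorithm; the function and variable names are my own.

```python
# A minimal sketch (not the Misra-Gries algorithm): brute-force edge coloring of a
# small simple graph to observe Vizing's bound. Names are illustrative only.
from itertools import product

def max_degree(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return max(deg.values())

def is_proper(edges, coloring):
    # A proper edge coloring gives distinct colors to edges sharing a vertex.
    for i, (u1, v1) in enumerate(edges):
        for j in range(i + 1, len(edges)):
            u2, v2 = edges[j]
            if {u1, v1} & {u2, v2} and coloring[i] == coloring[j]:
                return False
    return True

def edge_chromatic_number(edges):
    # Try Delta colors first, then Delta + 1; Vizing's theorem guarantees success.
    delta = max_degree(edges)
    for k in (delta, delta + 1):
        for coloring in product(range(k), repeat=len(edges)):
            if is_proper(edges, coloring):
                return k
    raise AssertionError("unreachable for simple graphs, by Vizing's theorem")

if __name__ == "__main__":
    triangle = [(0, 1), (1, 2), (2, 0)]     # odd cycle: class two
    path = [(0, 1), (1, 2), (2, 3)]         # path: class one
    print(edge_chromatic_number(triangle))  # prints 3 = Delta + 1
    print(edge_chromatic_number(path))      # prints 2 = Delta
```

The triangle, an odd cycle with Δ = 2, needs three colors and is therefore of class two, while the path with the same maximum degree can be colored with two colors, matching the Δ = 2 example discussed earlier.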
Mathematics
Graph theory
null
971658
https://en.wikipedia.org/wiki/Helminthiasis
Helminthiasis
Helminthiasis, also known as worm infection, is any macroparasitic disease of humans and other animals in which a part of the body is infected with parasitic worms, known as helminths. There are numerous species of these parasites, which are broadly classified into tapeworms, flukes, and roundworms. They often live in the gastrointestinal tract of their hosts, but they may also burrow into other organs, where they induce physiological damage. Soil-transmitted helminthiasis and schistosomiasis are the most important helminthiases, and are among the neglected tropical diseases. These group of helminthiases have been targeted under the joint action of the world's leading pharmaceutical companies and non-governmental organizations through a project launched in 2012 called the London Declaration on Neglected Tropical Diseases, which aimed to control or eradicate certain neglected tropical diseases by 2020. Helminthiasis has been found to result in poor birth outcome, poor cognitive development, poor school and work performance, poor socioeconomic development, and poverty. Chronic illness, malnutrition, and anemia are further examples of secondary effects. Soil-transmitted helminthiases are responsible for parasitic infections in as much as a quarter of the human population worldwide. One well-known example of soil-transmitted helminthiases is ascariasis. Types of parasitic helminths Of all the known helminth species, the most important helminths with respect to understanding their transmission pathways, their control, inactivation and enumeration in samples of human excreta from dried feces, faecal sludge, wastewater, and sewage sludge are: soil-transmitted helminths, including Ascaris lumbricoides (the most common worldwide), Trichuris trichiura, Necator americanus, Strongyloides stercoralis and Ancylostoma duodenale Hymenolepis nana Taenia saginata Enterobius Fasciola hepatica Schistosoma mansoni Toxocara canis Toxocara cati Helminthiases are classified as follows (the disease names end with "-sis" and the causative worms are in brackets): Roundworm infection (nematodiasis) Filariasis (Wuchereria bancrofti, Brugia malayi infection) Onchocerciasis (Onchocerca volvulus infection) Soil-transmitted helminthiasis – this includes ascariasis (Ascaris lumbricoides infection), trichuriasis (Trichuris infection), and hookworm infection (includes necatoriasis and Ancylostoma duodenale infection) Trichostrongyliasis (Trichostrongylus spp. infection) Dracunculiasis (guinea worm infection) Baylisascaris (raccoon roundworm, may be transmitted to pets, livestock, and humans) Tapeworm infection (cestodiasis) Echinococcosis (Echinococcus infection) Hymenolepiasis (Hymenolepis infection) Taeniasis/cysticercosis (Taenia infection) Coenurosis (T. multiceps, T. serialis, T. glomerata, and T. brauni infection) Trematode infection (trematodiasis) Amphistomiasis (amphistomes infection) Clonorchiasis (Clonorchis sinensis infection) Fascioliasis (Fasciola infection) Fasciolopsiasis (Fasciolopsis buski infection) Opisthorchiasis (Opisthorchis infection) Paragonimiasis (Paragonimus infection) Schistosomiasis/bilharziasis (Schistosoma infection) Acanthocephala infection Moniliformis infection Signs and symptoms The signs and symptoms of helminthiasis depend on a number of factors including: the site of the infestation within the body; the type of worm involved; the number of worms and their volume; the type of damage the infesting worms cause; and, the immunological response of the body. 
Where the burden of parasites in the body is light, there may be no symptoms. Certain worms may cause particular constellations of symptoms. For instance, taeniasis can lead to seizures due to neurocysticercosis. Mass and volume In extreme cases of intestinal infestation, the mass and volume of the worms may cause the outer layers of the intestinal wall, such as the muscular layer, to tear. This may lead to peritonitis, volvulus, and gangrene of the intestine. Immunological response As pathogens in the body, helminths induce an immune response. Immune-mediated inflammatory changes occur in the skin, lung, liver, intestine, central nervous system, and eyes. Signs of the body's immune response may include eosinophilia, edema, and arthritis. An example of the immune response is the hypersensitivity reaction that may lead to anaphylaxis. Another example is the migration of Ascaris larvae through the bronchi of the lungs causing asthma. Secondary effects Immune changes In humans, T helper cells and eosinophils respond to helminth infestation. It is well established that T helper 2 cells are the central players of protective immunity to helminths, while the roles for B cells and antibodies are context-dependent. Inflammation leads to encapsulation of egg deposits throughout the body. Helminths excrete into the intestine toxic substances after they feed. These substances then enter the circulatory and lymphatic systems of the host body. Chronic immune responses to helminthiasis may lead to increased susceptibility to other infections such as tuberculosis, HIV, and malaria. There is conflicting information about whether deworming reduces HIV progression and viral load and increases CD4 counts in antiretroviral naive and experienced individuals, although the most recent Cochrane review found some evidence that this approach might have favorable effects. Helminth infection also lowers the immune responses to vaccination for other diseases such as BCG, measles, and Hepatitis B. Chronic illness Chronic helminthiasis may cause severe morbidity. Helminthiasis has been found to result in poor birth outcome, poor cognitive development, poor school and work performance, decreased productivity, poor socioeconomic development, and poverty. Malnutrition Helminthiasis may cause chronic illness through malnutrition including vitamin deficiencies, stunted growth, anemia, and protein-energy malnutrition. Worms compete directly with their hosts for nutrients, but the magnitude of this effect is likely minimal as the nutritional requirements of worms is relatively small. In pigs and humans, Ascaris has been linked to lactose intolerance and vitamin A, amino acid, and fat malabsorption. Impaired nutrient uptake may result from direct damage to the intestinal mucosal wall or from more subtle changes such as chemical imbalances and changes in gut flora. Alternatively, the worms’ release of protease inhibitors to defend against the body's digestive processes may impair the breakdown of other nutrients. In addition, worm induced diarrhoea may shorten gut transit time, thus reducing absorption of nutrients. Malnutrition due to worms can give rise to anorexia. A study of 459 children in Zanzibar revealed spontaneous increases in appetite after deworming. Anorexia might be a result of the body's immune response and the stress of combating infection. Specifically, some of the cytokines released in the immune response to worm infestation have been linked to anorexia in animals. 
Anemia Helminths may cause iron-deficiency anemia. This is most severe in heavy hookworm infections, as Necator americanus and Ancylostoma duodenale feed directly on the blood of their hosts. Although the daily consumption of an individual worm (0.02–0.07 ml and 0.14–0.26 ml respectively) is small, the collective consumption under heavy infection can be clinically significant. Intestinal whipworm may also cause anemia. Anemia has also been associated with reduced stamina for physical labor, a decline in the ability to learn new information, and apathy, irritability, and fatigue. A study of the effect of deworming and iron supplementation in 47 students from the Democratic Republic of the Congo found that the intervention improved cognitive function. Another study found that in 159 Jamaican schoolchildren, deworming led to better auditory short-term memory and scanning and retrieval of long-term memory over a period of nine-weeks. Cognitive changes Malnutrition due to helminths may affect cognitive function leading to low educational performance, decreased concentration and difficulty with abstract cognitive tasks. Iron deficiency in infants and preschoolers is associated with "lower scores ... on tests of mental and motor development ... [as well as] increased fearfulness, inattentiveness, and decreased social responsiveness". Studies in the Philippines and Indonesia found a significant correlation between helminthiasis and decreased memory and fluency. Large parasite burdens, particularly severe hookworm infections, are also associated with absenteeism, under-enrollment, and attrition in school children. Transmission Helminths are transmitted to the final host in several ways. The most common infection is through ingestion of contaminated vegetables, drinking water, and raw or undercooked meat. Contaminated food may contain eggs of nematodes such as Ascaris, Enterobius, and Trichuris; cestodes such as Taenia, Hymenolepis, and Echinococcus; and trematodes such as Fasciola. Raw or undercooked meats are the major sources of Taenia (pork, beef and venison), Trichinella (pork and bear), Diphyllobothrium (fish), Clonorchis (fish), and Paragonimus (crustaceans). Schistosomes and nematodes such as hookworms (Ancylostoma and Necator) and Strongyloides can penetrate the skin directly. The roundworm, Dracunculus has a complex mode of transmission: it is acquired from drinking infested water or eating frogs and fish that contain (had eaten) infected crustaceans (copepods); and can also be transmitted from infected pets (cats and dogs). Roundworms such as Brugia, Wuchereria and Onchocerca are directly transmitted by mosquitoes. In the developing world, the use of contaminated water is a major risk factor for infection. Infection can also take place through the practice of geophagy, which is not uncommon in parts of sub-Saharan Africa. Soil is eaten, for example, by children or pregnant women to counteract a real or perceived deficiency of minerals in their diet. Diagnosis Specific helminths can be identified through microscopic examination of their eggs (ova) found in faecal samples. The number of eggs is measured in units of eggs per gram. However, it does not quantify mixed infections, and in practice, is inaccurate for quantifying the eggs of schistosomes and soil-transmitted helminths. Sophisticated tests such as serological assays, antigen tests, and molecular diagnosis are also available; however, they are time-consuming, expensive and not always reliable. 
Prevention Disrupting the cycle of the worm will prevent infestation and re-infestation. Prevention of infection can largely be achieved by addressing the issues of WASH—water, sanitation and hygiene. The reduction of open defecation is particularly called for, as is stopping the use of human waste as fertilizer. Further preventive measures include adherence to appropriate food hygiene, wearing of shoes, regular deworming of pets, and the proper disposal of their feces. Scientists are also searching for a vaccine against helminths, such as a hookworm vaccine. Treatment Medications Broad-spectrum benzimidazoles (such as albendazole and mebendazole) are the first line treatment of intestinal roundworm and tapeworm infections. Macrocyclic lactones (such as ivermectin) are effective against adult and migrating larval stages of nematodes. Praziquantel is the drug of choice for schistosomiasis, taeniasis, and most types of food-borne trematodiases. Oxamniquine is also widely used in mass deworming programmes. Pyrantel is commonly used for veterinary nematodiasis. Artemisinins and derivatives are proving to be candidates as drugs of choice for trematodiasis. Mass deworming In regions where helminthiasis is common, mass deworming treatments may be performed, particularly among school-age children, who are a high-risk group. Most of these initiatives are undertaken by the World Health Organization (WHO) with positive outcomes in many regions. Deworming programs can improve school attendance by 25 percent. Although deworming improves the health of an individual, outcomes from mass deworming campaigns, such as reduced deaths or increases in cognitive ability, nutritional benefits, physical growth, and performance, are uncertain or not apparent. Surgery If complications of helminthiasis, such as intestinal obstruction occur, emergency surgery may be required. Patients who require non-emergency surgery, for instance for removal of worms from the biliary tree, can be pre-treated with the anthelmintic drug albendazole. Epidemiology Areas with the highest prevalence of helminthiasis are tropical and subtropical areas including sub-Saharan Africa, central and east Asia, and the Americas. Neglected tropical diseases Some types of helminthiases are classified as neglected tropical diseases. They include: Soil-transmitted helminthiases Roundworm infections such as lymphatic filariasis, dracunculiasis, and onchocerciasis Trematode infections, such as schistosomiasis, and food-borne trematodiases, including fascioliasis, clonorchiasis, opisthorchiasis, and paragonimiasis Tapeworm infections such as cysticercosis, taeniasis, and echinococcosis Prevalence The soil-transmitted helminths (A. lumbricoides, T. trichiura, N. americanus, A. duodenale), schistosomes, and filarial worms collectively infect more than a quarter of the human population worldwide at any one time, far surpassing HIV and malaria together. Schistosomiasis is the second most prevalent parasitic disease of humans after malaria. In 2014–15, the WHO estimated that approximately 2 billion people were infected with soil-transmitted helminthiases, 249 million with schistosomiasis, 56 million people with food-borne trematodiasis, 120 million with lymphatic filariasis, 37 million people with onchocerciasis, and 1 million people with echinococcosis. Another source estimated a much higher figure of 3.5 billion infected with one or more soil-transmitted helminths. 
In 2014, only 148 people were reported to have dracunculiasis because of a successful eradication campaign for that particular helminth, which is easier to eradicate than other helminths as it is transmitted only by drinking contaminated water. Because of their high mobility and lower standards of hygiene, school-age children are particularly vulnerable to helminthiasis. Most children from developing nations will have at least one infestation. Multi-species infections are very common. The most common intestinal parasites in the United States are Enterobius vermicularis, Giardia lamblia, Ancylostoma duodenale, Necator americanus, and Entamoeba histolytica. In a developing country like Bangladesh, the most common species are round worm (Ascaris lumbricoides), whipworm (Tricurias tricuras) and hookworm (Ancylostoma duodenalis). Variations within communities Even in areas of high prevalence, the frequency and severity of infection is not uniform within communities or families. A small proportion of community members harbour the majority of worms, and this depends on age. The maximum worm burden is at five to ten years of age, declining rapidly thereafter. Individual predisposition to helminthiasis for people with the same sanitation infrastructure and hygiene behavior is thought to result from differing immunocompetence, nutritional status, and genetic factors. Because individuals are predisposed to a high or a low worm burden, the burden reacquired after successful treatment is proportional to that before treatment. Disability-adjusted life years It is estimated that intestinal nematode infections cause 5 million disability-adjusted life years (DALYS) to be lost, of which hookworm infections account for more than 3 million DALYS and ascaris infections more than 1 million. There are also signs of progress: The Global Burden of Disease Study published in 2015 estimates a 46 percent (59 percent when age standardised) reduction in years lived with disability (YLD) for the 13-year time period from 1990 to 2013 for all intestinal/nematode infections, and even a 74 percent (80 percent when age standardised) reduction in YLD from ascariasis. Deaths As many as 135,000 die annually from soil transmitted helminthiasis. The 1990–2013 Global Burden of Disease Study estimated 5,500 direct deaths from schistosomiasis, while more than 200,000 people were estimated in 2013 to die annually from causes related to schistosomiasis. Another 20 million have severe consequences from the disease. It is the most deadly of the neglected tropical diseases.
Biology and health sciences
Concepts
Health
971961
https://en.wikipedia.org/wiki/Plant%20reproductive%20morphology
Plant reproductive morphology
Plant reproductive morphology is the study of the physical form and structure (the morphology) of those parts of plants directly or indirectly concerned with sexual reproduction. Among all living organisms, flowers, which are the reproductive structures of angiosperms, are the most varied physically and show a correspondingly great diversity in methods of reproduction. Plants that are not flowering plants (green algae, mosses, liverworts, hornworts, ferns and gymnosperms such as conifers) also have complex interplays between morphological adaptation and environmental factors in their sexual reproduction. The breeding system, or how the sperm from one plant fertilizes the ovum of another, depends on the reproductive morphology, and is the single most important determinant of the genetic structure of nonclonal plant populations. Christian Konrad Sprengel (1793) studied the reproduction of flowering plants and for the first time it was understood that the pollination process involved both biotic and abiotic interactions. Charles Darwin's theories of natural selection utilized this work to build his theory of evolution, which includes analysis of the coevolution of flowers and their insect pollinators. Use of sexual terminology Plants have complex lifecycles involving alternation of generations. One generation, the sporophyte, gives rise to the next generation, the gametophyte asexually via spores. Spores may be identical isospores or come in different sizes (microspores and megaspores), but strictly speaking, spores and sporophytes are neither male nor female because they do not produce gametes. The alternate generation, the gametophyte, produces gametes, eggs and/or sperm. A gametophyte can be monoicous (bisexual), producing both eggs and sperm, or dioicous (unisexual), either female (producing eggs) or male (producing sperm). In the bryophytes (liverworts, mosses, and hornworts), the sexual gametophyte is the dominant generation. In ferns and seed plants (including cycads, conifers, flowering plants, etc.) the sporophyte is the dominant generation; the obvious visible plant, whether a small herb or a large tree, is the sporophyte, and the gametophyte is very small. In bryophytes and ferns, the gametophytes are independent, free-living plants, while in seed plants, each female megagametophyte, and the megaspore that gives rise to it, is hidden within the sporophyte and is entirely dependent on it for nutrition. Each male gametophyte typically consists of two to four cells enclosed within the protective wall of a pollen grain. The sporophyte of a flowering plant is often described using sexual terms (e.g. "female" or "male") . For example, a sporophyte that produces spores that give rise only to male gametophytes may be described as "male", even though the sporophyte itself is asexual, producing only spores. Similarly, flowers produced by the sporophyte may be described as "unisexual" or "bisexual", meaning that they give rise to either one sex of gametophyte or both sexes of the gametophyte. Flowering plants Basic flower morphology The flower is the characteristic structure concerned with sexual reproduction in flowering plants (angiosperms). Flowers vary enormously in their structure (morphology). A perfect flower, like that of Ranunculus glaberrimus shown in the figure, has a calyx of outer sepals and a corolla of inner petals and both male and female sex organs. The sepals and petals together form the perianth. 
Next inwards there are numerous stamens, which produce pollen grains, each containing a microscopic male gametophyte. Stamens may be called the "male" parts of a flower and collectively form the androecium. Finally in the middle there are carpels, which at maturity contain one or more ovules, and within each ovule is a tiny female gametophyte. Carpels may be called the "female" parts of a flower and collectively form the gynoecium. Each carpel in Ranunculus species is an achene that produces one ovule, which when fertilized becomes a seed. If the carpel contains more than one seed, as in Eranthis hyemalis, it is called a follicle. Two or more carpels may be fused together to varying degrees and the entire structure, including the fused styles and stigmas may be called a pistil. The lower part of the pistil, where the ovules are produced, is called the ovary. It may be divided into chambers (locules) corresponding to the separate carpels. Variations A perfect flower has both stamens and carpels, and is described as "bisexual" or "hermaphroditic". A unisexual flower is one in which either the stamens or the carpels are missing, vestigial or otherwise non-functional. Each flower is either staminate (having only functional stamens and thus male), or carpellate or pistillate (having only functional carpels and thus female). If separate staminate and carpellate flowers are always found on the same plant, the species is described as monoecious. If separate staminate and carpellate flowers are always found on different plants, the species is described as dioecious. A 1995 study found that about 6% of angiosperm species are dioecious, and that 7% of genera contain some dioecious species. Members of the birch family (Betulaceae) are examples of monoecious plants with unisexual flowers. A mature alder tree (Alnus species) produces long catkins containing only male flowers, each with four stamens and a minute perianth, and separate stalked groups of female flowers, each without a perianth. (See the illustration of Alnus serrulata.) Most hollies (members of the genus Ilex) are dioecious. Each plant produces either functionally male flowers or functionally female flowers. In Ilex aquifolium (see the illustration), the common European holly, both kinds of flower have four sepals and four white petals; male flowers have four stamens, female flowers usually have four non-functional reduced stamens and a four-celled ovary. Since only female plants are able to set fruit and produce berries, this has consequences for gardeners. Amborella represents the first known group of flowering plants to separate from their common ancestor. It too is dioecious; at any one time, each plant produces either flowers with functional stamens but no carpels, or flowers with a few non-functional stamens and a number of fully functional carpels. However, Amborella plants may change their "sex" over time. In one study, five cuttings from a male plant produced only male flowers when they first flowered, but at their second flowering three switched to producing female flowers. In extreme cases, almost all of the parts present in a complete flower may be missing, so long as at least one carpel or one stamen is present. This situation is reached in the female flowers of duckweeds (Lemna), which consist of a single carpel, and in the male flowers of spurges (Euphorbia) which consist of a single stamen. A species such as Fraxinus excelsior, the common ash of Europe, demonstrates one possible kind of variation. 
Ash flowers are wind-pollinated and lack petals and sepals. Structurally, the flowers may be bisexual, consisting of two stamens and an ovary, or may be male (staminate), lacking a functional ovary, or female (carpellate), lacking functional stamens. Different forms may occur on the same tree, or on different trees. The Asteraceae (sunflower family), with close to 22,000 species worldwide, have highly modified inflorescences made up of flowers (florets) collected together into tightly packed heads. Heads may have florets of one sexual morphology – all bisexual, all carpellate or all staminate (when they are called homogamous), or may have mixtures of two or more sexual forms (heterogamous). Thus goatsbeards (Tragopogon species) have heads of bisexual florets, like other members of the tribe Cichorieae, whereas marigolds (Calendula species) generally have heads with the outer florets bisexual and the inner florets staminate (male). Like Amborella, some plants undergo sex-switching. For example, Arisaema triphyllum (Jack-in-the-pulpit) expresses sexual differences at different stages of growth: smaller plants produce all or mostly male flowers; as plants grow larger over the years the male flowers are replaced by more female flowers on the same plant. Arisaema triphyllum thus covers a multitude of sexual conditions in its lifetime: nonsexual juvenile plants, young plants that are all male, larger plants with a mix of both male and female flowers, and large plants that have mostly female flowers. Other plant populations have plants that produce more male flowers early in the year and as plants bloom later in the growing season they produce more female flowers. Terminology The complexity of the morphology of flowers and its variation within populations has led to a rich terminology. Androdioecious: having male flowers on some plants, bisexual ones on others. Androecious: having only male flowers (the male of a dioecious population); producing pollen but no seed. Androgynous: see bisexual. Androgynomonoecious: having male, female, and bisexual flowers on the same plant, also called trimonoecious. Andromonoecious: having both bisexual and male flowers on the same plant. Bisexual: each flower of each individual has both male and female structures, i.e. it combines both sexes in one structure. Flowers of this kind are called perfect, having both stamens and carpels. Other terms used for this condition are androgynous, hermaphroditic, monoclinous and synoecious. Dichogamous: having sexes developing at different times; producing pollen when the stigmas are not receptive, either protandrous or protogynous. This promotes outcrossing by limiting self-pollination. Some dichogamous plants have bisexual flowers, others have unisexual flowers. Diclinous: see Unisexual. Dioecious: having either only male or only female flowers. No individual plant of the population produces both pollen and ovules. (From the Greek for "two households".
Biology and health sciences
Plant reproduction
null
972457
https://en.wikipedia.org/wiki/Mass%20wasting
Mass wasting
Mass wasting, also known as mass movement, is a general term for the movement of rock or soil down slopes under the force of gravity. It differs from other processes of erosion in that the debris transported by mass wasting is not entrained in a moving medium, such as water, wind, or ice. Types of mass wasting include creep, solifluction, rockfalls, debris flows, and landslides, each with its own characteristic features, and taking place over timescales from seconds to hundreds of years. Mass wasting occurs on both terrestrial and submarine slopes, and has been observed on Earth, Mars, Venus, Jupiter's moon Io, and on many other bodies in the Solar System. Subsidence is sometimes regarded as a form of mass wasting. A distinction is then made between mass wasting by subsidence, which involves little horizontal movement, and mass wasting by slope movement. Rapid mass wasting events, such as landslides, can be deadly and destructive. More gradual mass wasting, such as soil creep, poses challenges to civil engineering, as creep can deform roadways and structures and break pipelines. Mitigation methods include slope stabilization, construction of walls, catchment dams, or other structures to contain rockfall or debris flows, afforestation, or improved drainage of source areas. Types Mass wasting is a general term for any process of erosion that is driven by gravity and in which the transported soil and rock is not entrained in a moving medium, such as water, wind, or ice. The presence of water usually aids mass wasting, but the water is not abundant enough to be regarded as a transporting medium. Thus, the distinction between mass wasting and stream erosion lies between a mudflow (mass wasting) and a very muddy stream (stream erosion), without a sharp dividing line. Many forms of mass wasting are recognized, each with its own characteristic features, and taking place over timescales from seconds to hundreds of years. Based on how the soil, regolith or rock moves downslope as a whole, mass movements can be broadly classified as either creeps or landslides. Subsidence is sometimes also regarded as a form of mass wasting. A distinction is then made between mass wasting by subsidence, which involves little horizontal movement, and mass wasting by slope movement. Creep Soil creep is a slow, long-term mass movement. It is the combined effect of many small movements of soil or rock in different directions over time, directed gradually downslope by gravity. The steeper the slope, the faster the creep. Creep makes trees and shrubs curve as they grow in order to remain upright, and they can trigger landslides if they lose their root footing. The surface soil can migrate under the influence of cycles of freezing and thawing, or hot and cold temperatures, inching its way towards the bottom of the slope and forming terracettes. Landslides are often preceded by soil creep accompanied by soil sloughing—loose soil that falls and accumulates at the base of the steepest creep sections. Solifluction Solifluction is a form of creep characteristic of arctic or alpine climates. It occurs when soil saturated with moisture thaws during the summer months and creeps downhill. It takes place on moderate slopes, relatively free of vegetation, that are underlain by permafrost and receive a constant supply of new debris by weathering. Solifluction affects the entire slope rather than being confined to channels and can produce terrace-like landforms or stone rivers. 
Landslide A landslide, also called a landslip, is a relatively rapid movement of a large mass of earth and rocks down a hill or a mountainside. Landslides can be further classified by the importance of water in the mass wasting process. In a narrow sense, landslides are the rapid movement of large amounts of relatively dry debris down moderate to steep slopes. With increasing water content, the mass wasting takes the form of debris avalanches, then earthflows, then mudflows. Further increase in water content produces a sheetflood, which is a form of sheet erosion rather than mass wasting. Occurrences On Earth, mass wasting occurs on both terrestrial and submarine slopes. Submarine mass wasting is particularly common along glaciated coastlines where glaciers are retreating and great quantities of sediments are being released. Submarine slides can transport huge volumes of sediments for hundreds of kilometers in a few hours. Mass wasting is a common phenomenon throughout the Solar System, occurring where volatile materials are lost from a regolith. Such mass wasting has been observed on Mars, Io, Triton, and possibly Europa and Ganymede. Mass wasting also occurs in the equatorial regions of Mars, where slopes of soft sulfate-rich sediments are steepened by wind erosion. Mass wasting on Venus is associated with the rugged terrain of tesserae. Io shows extensive mass wasting of its volcanic mountains. Deposits and landforms Mass wasting affects geomorphology, most often in subtle, small-scale ways, but occasionally more spectacularly. Soil creep is rarely apparent but can produce such subtle effects as curved forest growth and tilted fences and telephone poles. It occasionally produces low scarps and shallow depressions. Solifluction produces lobed or sheetlike deposits, with fairly definite edges, in which clasts (rock fragments) are oriented perpendicular to the contours of the deposit. Rockfall can produce talus slopes at the feet of cliffs. A more dramatic manifestation of rockfall is rock glaciers, which form from rockfall below cliffs oversteepened by glaciers. Landslides can produce scarps and small, step-like terraces. Landslide deposits are poorly sorted. Those rich in clay may show stretched clay lumps (a phenomenon called boudinage) and zones of concentrated shear. Debris flow deposits take the form of long, narrow tracks of very poorly sorted material. These may have natural levees at the sides of the tracks, and sometimes consist of lenses of rock fragments alternating with lenses of fine-grained earthy material. Debris flows often form much of the upper slopes of alluvial fans. Causes Triggers for mass wasting can be divided into passive and activating (initiating) causes. Passive causes include rock and soil lithology (unconsolidated or weak debris is more susceptible to mass wasting, as are materials that lose cohesion when wetted); stratigraphy, such as thinly bedded rock or alternating beds of weak and strong, or impermeable and permeable, rock lithologies; faults or other geologic structures that weaken the rock; topography, such as steep slopes or cliffs; climate, with large temperature swings, frequent freezing and thawing, or abundant rainfall; and lack of vegetation. Activating causes include undercutting of the slope by excavation or erosion; increased overburden from structures; increased soil moisture; and earthquakes. Hazards and mitigation Mass wasting causes problems for civil engineering, particularly highway construction. 
It can displace roads, buildings, and other construction and can break pipelines. Historically, mitigation of landslide hazards on the Gaillard Cut of the Panama Canal accounted for a substantial share of the material removed while excavating the cut. Rockslides or landslides can have disastrous consequences, both immediate and delayed. The Oso disaster of March 2014 was a landslide that caused 43 fatalities in Oso, Washington, US. Delayed consequences of landslides can arise from the formation of landslide dams, as at Thistle, Utah, in April 1983. Volcano flanks can become over-steepened, resulting in instability and mass wasting; this is now recognised as part of the growth of all active volcanoes. It is seen on submarine volcanoes as well as surface volcanoes: Kamaʻehuakanaloa (formerly Loihi) in the Hawaiian–Emperor seamount chain and Kick 'em Jenny in the Lesser Antilles Volcanic Arc are two submarine volcanoes that are known to undergo mass wasting. The failure of the northern flank of Mount St. Helens in 1980 showed how rapidly volcanic flanks can deform and fail. Methods of mitigation of mass wasting hazards include afforestation; construction of fences, walls, or ditches to contain rockfall; construction of catchment dams to contain debris flows; improved drainage of source areas; and slope stabilization.
Physical sciences
Geomorphology: General
Earth science
972601
https://en.wikipedia.org/wiki/Elementary%20equivalence
Elementary equivalence
In model theory, a branch of mathematical logic, two structures M and N of the same signature σ are called elementarily equivalent if they satisfy the same first-order σ-sentences. If N is a substructure of M, one often needs a stronger condition. In this case N is called an elementary substructure of M if every first-order σ-formula φ(a1, …, an) with parameters a1, …, an from N is true in N if and only if it is true in M. If N is an elementary substructure of M, then M is called an elementary extension of N. An embedding h: N → M is called an elementary embedding of N into M if h(N) is an elementary substructure of M. A substructure N of M is elementary if and only if it passes the Tarski–Vaught test: every first-order formula φ(x, b1, …, bn) with parameters in N that has a solution in M also has a solution in N when evaluated in M. One can prove that two structures are elementarily equivalent using Ehrenfeucht–Fraïssé games. Elementary embeddings are used in the study of large cardinals, including rank-into-rank. Elementarily equivalent structures Two structures M and N of the same signature σ are elementarily equivalent if every first-order sentence (formula without free variables) over σ is true in M if and only if it is true in N, i.e. if M and N have the same complete first-order theory. If M and N are elementarily equivalent, one writes M ≡ N. A first-order theory is complete if and only if any two of its models are elementarily equivalent. For example, consider the language with one binary relation symbol '<'. The model R of real numbers with its usual order and the model Q of rational numbers with its usual order are elementarily equivalent, since they both interpret '<' as an unbounded dense linear ordering. This is sufficient to ensure elementary equivalence, because the theory of unbounded dense linear orderings is complete, as can be shown by the Łoś–Vaught test. More generally, any first-order theory with an infinite model has non-isomorphic, elementarily equivalent models, which can be obtained via the Löwenheim–Skolem theorem. Thus, for example, there are non-standard models of Peano arithmetic, which contain other objects than just the numbers 0, 1, 2, etc., and yet are elementarily equivalent to the standard model. Elementary substructures and elementary extensions N is an elementary substructure or elementary submodel of M if N and M are structures of the same signature σ such that for all first-order σ-formulas φ(x1, …, xn) with free variables x1, …, xn, and all elements a1, …, an of N, φ(a1, …, an) holds in N if and only if it holds in M: N ⊨ φ(a1, …, an) if and only if M ⊨ φ(a1, …, an). This definition first appears in Tarski, Vaught (1957). It follows that N is a substructure of M. If N is a substructure of M, then both N and M can be interpreted as structures in the signature σN consisting of σ together with a new constant symbol for every element of N. Then N is an elementary substructure of M if and only if N is a substructure of M and N and M are elementarily equivalent as σN-structures. If N is an elementary substructure of M, one writes N ⪯ M and says that M is an elementary extension of N: M ⪰ N. The downward Löwenheim–Skolem theorem gives a countable elementary substructure for any infinite first-order structure in at most countable signature; the upward Löwenheim–Skolem theorem gives elementary extensions of any infinite first-order structure of arbitrarily large cardinality. 
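In symbols, the two notions above can be written as follows (a standard rendering of the definitions in the surrounding text; the satisfaction symbol ⊨ and the elementary-substructure symbol ⪯ are conventional notation rather than anything specific to this article):

$$M \equiv N \iff \big(M \models \varphi \Longleftrightarrow N \models \varphi\big) \text{ for every first-order } \sigma\text{-sentence } \varphi,$$

$$N \preceq M \iff N \subseteq M \text{ and } \big(N \models \varphi(a_1,\dots,a_n) \Longleftrightarrow M \models \varphi(a_1,\dots,a_n)\big) \text{ for every } \sigma\text{-formula } \varphi \text{ and all } a_1,\dots,a_n \in N.$$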
Tarski–Vaught test The Tarski–Vaught test (or Tarski–Vaught criterion) is a necessary and sufficient condition for a substructure N of a structure M to be an elementary substructure. It can be useful for constructing an elementary substructure of a large structure. Let M be a structure of signature σ and N a substructure of M. Then N is an elementary substructure of M if and only if for every first-order formula φ(x, y1, …, yn) over σ and all elements b1, …, bn from N, if M ⊨ ∃x φ(x, b1, …, bn), then there is an element a in N such that M ⊨ φ(a, b1, …, bn). Elementary embeddings An elementary embedding of a structure N into a structure M of the same signature σ is a map h: N → M such that for every first-order σ-formula φ(x1, …, xn) and all elements a1, …, an of N, N ⊨ φ(a1, …, an) if and only if M ⊨ φ(h(a1), …, h(an)). Every elementary embedding is a strong homomorphism, and its image is an elementary substructure. Elementary embeddings are the most important maps in model theory. In set theory, elementary embeddings whose domain is V (the universe of set theory) play an important role in the theory of large cardinals (see also Critical point).
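As a sketch of how the test is used, consider the dense linear orders (Q, <) ⊆ (R, <) discussed earlier. The case below treats only one simple existential formula, so it illustrates the mechanism rather than giving a full proof that (Q, <) ⪯ (R, <) (a complete argument uses quantifier elimination or a back-and-forth construction). Suppose b1 < b2 are rationals and

$$\mathbb{R} \models \exists x\, (b_1 < x \wedge x < b_2).$$

Any real witness can be traded for a rational one: by the density of ℚ there is a rational a with b1 < a < b2, hence

$$\mathbb{R} \models (b_1 < a \wedge a < b_2) \quad\text{with } a \in \mathbb{Q},$$

which is exactly the witness inside the substructure that the Tarski–Vaught test asks for.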
Mathematics
Model theory
null
972784
https://en.wikipedia.org/wiki/Military%20aviation
Military aviation
Military aviation comprises military aircraft and other flying machines for the purposes of conducting or enabling aerial warfare, including national airlift (air cargo) capacity to provide logistical supply to forces stationed in a war theater or along a front. Airpower includes the national means of conducting such warfare, including the intersection of transport and warcraft. Military aircraft include bombers, fighters, transports, trainer aircraft, and reconnaissance aircraft. History The first military uses of aviation involved lighter-than-air balloons. During the Battle of Fleurus in 1794, the French observation balloon l'Entreprenant was used to monitor Austrian troop movements. The use of lighter-than-air aircraft in warfare became prevalent in the 19th century, including regular use in the American Civil War. Lighter-than-air military aviation persisted until shortly after World War II, gradually being withdrawn from various roles as heavier-than-air aircraft improved. Heavier-than-air aircraft were recognized as having military applications early on, despite resistance from traditionalists and the severe limitations of early aircraft. The U.S. Army Signal Corps purchased a Wright Model A on 2 August 1909 which became the first military aircraft in history. In 1911, the Italians used a variety of aircraft types in reconnaissance, photo-reconnaissance, and bombing roles during the Italo-Turkish War. On October 23, 1911, an Italian pilot, Captain Carlo Piazza, flew over Turkish lines on the world's first aerial reconnaissance mission, and on November 1, the first ever aerial bomb was dropped by Sottotenente Giulio Gavotti, on Turkish troops in Libya, from an early model of Etrich Taube aircraft. The Turks, lacking anti-aircraft weapons, were the first to shoot down an airplane by rifle fire. The earliest military role filled by aircraft was reconnaissance, however, by the end of World War I, military aviation had rapidly embraced many specialized roles, such as artillery spotting, air superiority, bombing, ground attack, and anti-submarine patrols. Technological improvements were made at a frenzied pace, and the first all-metal cantilevered airplanes were going into service as the war ended. Between the major world wars incremental improvements made in many areas, especially powerplants, aerodynamics, structures, and weapons, led to an even more rapid advance in aircraft technology during World War II, with large performance increases and the introduction of aircraft into new roles, including Airborne Early Warning, electronic warfare, weather reconnaissance, and flying lifeboats. Great Britain used aircraft to suppress revolts throughout the Empire during the interwar period and introduced the first military transports, which revolutionized logistics, allowing troops and supplies to be quickly delivered over vastly greater distances. While they first appeared during World War I, ground attack aircraft didn't provide a decisive contribution until the Germans introduced Blitzkrieg during the Invasion of Poland and Battle of France, where aircraft functioned as mobile flying artillery to quickly disrupt defensive formations. The Allies would later use rocket-equipped fighters in the same role, immobilizing German armored divisions during the Battle of Normandy and afterwards. 
World War I also saw the creation of the first strategic bomber units, however, they wouldn't be tested until the Spanish Civil War where the perceived effects of mass bombardment would encourage their widespread use during World War II. Carrier aviation also first appeared during World War I, and likewise came to play a major role during World War II, with most major navies recognizing the aircraft carrier's advantages over the battleship and devoting massive resources to the building of new carriers. During World War II, U-boats threatened the ability of the Allies to transport troops and war materiel to Europe, spurring the development of very long range Maritime patrol aircraft, whose capability of independently detecting and destroying submerged submarines was greatly increased with new detection systems, including sonobuoys, Leigh Lights, and radar, along with better weapons including homing torpedoes and improved depth charges. This played a major role in winning the Battle of the Atlantic. Aircraft also played a much expanded role, with many notable engagements being decided solely through the use of military aircraft, such as the Battle of Britain or the attack on Pearl Harbor, and the conclusion of the Pacific War against Japan was marked by two lone aircraft dropping the atomic bombs, devastating the cities of Hiroshima and Nagasaki. The introduction of the jet engine, radar, early missiles, helicopters, and computers are World War II advancements which are felt to the present day. Post World War II, the development of military aviation was spurred by the Cold War stand-off between the super-powers. The helicopter appeared late in World War II and matured into an indispensable part of military aviation, transporting troops and providing expanded anti-submarine capabilities to smaller warships, negating the need for large numbers of small carriers. The need to out-perform opponents pushed new technology and aircraft developments in the U.S.S.R. and the United States, among others, and the Korean War and the Vietnam War tested the resulting designs. Incredible advances in electronics were made, starting with the first electronic computers during World War II and steadily expanding from its original role of cryptography into communications, data processing, reconnaissance, remotely piloted aircraft, and many other roles until it has become an integral aspect of modern warfare. In the early 1960s, missiles were expected to replace manned interceptors and the guns in other manned aircraft. They failed to live up to expectations as surface-to-air missiles lacked flexibility and were not as effective as manned interceptors, and fighters equipped only with air-to-air missiles had limited effectiveness against opposing aircraft which could avoid being hit. Missiles were also expensive, especially against low-value ground targets. The 1970s saw the return of the gun-armed fighter, and a greater emphasis on maneuverability. The 1980s through to the present day were characterized by stealth technology and other countermeasures. Today, a country's military aviation forces are often the first line of defense against an attack, or the first forces to attack the enemy, and effective military aviation forces (or lack thereof) have proved decisive in several recent conflicts such as the Gulf War. Categories Airborne Early Warning provides advance warning of enemy activities to reduce the chance of being surprised. 
Many also have command functions that allow them to direct or vector friendly fighters onto incoming bogeys. Bombers are capable of carrying large payloads of bombs and may sacrifice speed or maneuverability to maximize payload. Experimental aircraft are designed to test advanced aerodynamic, structural, avionic, or propulsion concepts. These are usually well instrumented, with performance data telemetered on radio-frequency data links to ground stations located at the test ranges where they are flown. Fighters establish and maintain air superiority. Speed and maneuverability are usually requirements, and fighters carry a variety of weapons, including machine guns and guided missiles, to accomplish this. Forward Air Control directs close air support aircraft to ensure that the intended targets are nullified and friendly troops remain uninjured. Ground-attack aircraft support ground troops by weakening or nullifying enemy defenses. Helicopter gunships and specialized ground attack aircraft attack enemy armor or troops and provide close air support for ground troops. Liaison aircraft are usually small, unarmed aircraft used to deliver messages and key personnel. Maritime Patrol Aircraft are used to control sea-lanes and are often equipped with special electronic gear, such as sonar, for detecting and sinking submarines. They are also used for search and rescue missions and fisheries patrols. Multirole combat aircraft combine the capabilities of a fighter and a bomber, depending on what the mission calls for. Reconnaissance aircraft and scout helicopters are primarily used to gather intelligence. They are equipped with photographic, infrared, radar, and television sensors. This role is increasingly being filled by spy satellites and unmanned aerial vehicles. Refueling aircraft are used to refuel fighters and reconnaissance aircraft, extending mission reach and flying range. These aircraft include but are not limited to the KC-135, KC-46, KC-767, A310 MRTT, and the KC-130J, and they form part of many countries' military assets. Training aircraft are used to train recruits to fly aircraft and to provide additional training for specialized roles such as air combat. Transport aircraft transport troops and supplies. Cargo can be carried on pallets for quick unloading, and cargo and personnel may also be discharged from flying aircraft on parachutes. Also included in this category are aerial tankers, which can refuel other aircraft while in flight. Helicopters and gliders can transport troops and supplies to areas where other aircraft would be unable to land. Air forces An air force is the branch of a nation's armed forces that is responsible for aerial warfare as distinct from the army, navy, or other branches. Most nations either maintain an air force or, in the case of smaller and less well-developed countries, an air wing (see List of air forces). Air forces are usually tasked with the air defense of a country, as well as strategic bombing, interdiction, close air support, intelligence gathering, battlespace management, transport functions, and providing services to civil government agencies. Air force operations may also include space-based operations such as reconnaissance or satellite operations. Other branches Other branches of a nation's armed forces may use aviation (naval aviation and army aviation) in addition to, or instead of, a dedicated air force. In some cases, this includes coast guard services that are also an armed service, as well as gendarmeries and equivalent forces.
Technology
Military aviation
null
972800
https://en.wikipedia.org/wiki/Abyssal%20plain
Abyssal plain
An abyssal plain is an underwater plain on the deep ocean floor, usually found at depths between 3,000 and 6,000 metres. Lying generally between the foot of a continental rise and a mid-ocean ridge, abyssal plains cover more than 50% of the Earth's surface. They are among the flattest, smoothest, and least explored regions on Earth. Abyssal plains are key geologic elements of oceanic basins (the other elements being an elevated mid-ocean ridge and flanking abyssal hills). The creation of the abyssal plain is the result of the spreading of the seafloor (plate tectonics) and the melting of the lower oceanic crust. Magma rises from above the asthenosphere (a layer of the upper mantle), and as this basaltic material reaches the surface at mid-ocean ridges, it forms new oceanic crust, which is constantly pulled sideways by spreading of the seafloor. Abyssal plains result from the blanketing of an originally uneven surface of oceanic crust by fine-grained sediments, mainly clay and silt. Much of this sediment is deposited by turbidity currents that have been channelled from the continental margins along submarine canyons into deeper water. The rest is composed chiefly of pelagic sediments. Metallic nodules are common in some areas of the plains, with varying concentrations of metals, including manganese, iron, nickel, cobalt, and copper. The plains also contain carbon, nitrogen, phosphorus and silicon, derived from material that sinks from above and decomposes. Owing in part to their vast size, abyssal plains are believed to be major reservoirs of biodiversity. They also exert significant influence upon ocean carbon cycling, dissolution of calcium carbonate, and atmospheric CO2 concentrations over time scales of a hundred to a thousand years. The structure of abyssal ecosystems is strongly influenced by the rate of flux of food to the seafloor and the composition of the material that settles. Factors such as climate change, fishing practices, and ocean fertilization have a substantial effect on patterns of primary production in the euphotic zone. Animals there absorb dissolved oxygen from the oxygen-poor waters; much of this dissolved oxygen entered the deep ocean long ago, in cold, dense water that sank from the polar regions. Due to the scarcity of oxygen, abyssal plains are inhospitable for organisms that would flourish in the oxygen-enriched waters above. Deep sea coral reefs are mainly found at depths of 3,000 metres and deeper, in the abyssal and hadal zones. Abyssal plains were not recognized as distinct physiographic features of the sea floor until the late 1940s and, until recently, none had been studied on a systematic basis. They are poorly preserved in the sedimentary record, because they tend to be consumed by the subduction process. Due to darkness and a water pressure that can reach about 750 times atmospheric pressure (76 megapascals), abyssal plains are not well explored. Oceanic zones The ocean can be conceptualized as a series of zones, depending on depth and the presence or absence of sunlight. Nearly all life forms in the ocean depend on the photosynthetic activities of phytoplankton and other marine plants to convert carbon dioxide into organic carbon, which is the basic building block of organic matter. Photosynthesis in turn requires energy from sunlight to drive the chemical reactions that produce organic carbon. The stratum of the water column nearest the surface of the ocean (sea level) is referred to as the photic zone. The photic zone can be subdivided into two different vertical regions. 
The uppermost portion of the photic zone, where there is adequate light to support photosynthesis by phytoplankton and plants, is referred to as the euphotic zone (also referred to as the epipelagic zone, or surface zone). The lower portion of the photic zone, where the light intensity is insufficient for photosynthesis, is called the dysphotic zone (dysphotic means "poorly lit" in Greek). The dysphotic zone is also referred to as the mesopelagic zone, or the twilight zone. Its lowermost boundary is at a thermocline of about 12 °C, which in the tropics generally lies between 200 and 1,000 metres. The euphotic zone is somewhat arbitrarily defined as extending from the surface to the depth where the light intensity is approximately 0.1–1% of surface sunlight irradiance, depending on season, latitude and degree of water turbidity. In the clearest ocean water, the euphotic zone may extend to a depth of about 150 metres, or rarely, up to 200 metres. Dissolved substances and solid particles absorb and scatter light, and in coastal regions the high concentration of these substances causes light to be attenuated rapidly with depth. In such areas the euphotic zone may be only a few tens of metres deep or less. The dysphotic zone, where light intensity is considerably less than 1% of surface irradiance, extends from the base of the euphotic zone to about 1,000 metres. Extending from the bottom of the photic zone down to the seabed is the aphotic zone, a region of perpetual darkness. Since the average depth of the ocean is about 4,300 metres, the photic zone represents only a tiny fraction of the ocean's total volume. However, due to its capacity for photosynthesis, the photic zone has the greatest biodiversity and biomass of all oceanic zones. Nearly all primary production in the ocean occurs here. Life forms which inhabit the aphotic zone are often capable of movement upwards through the water column into the photic zone for feeding. Otherwise, they must rely on material sinking from above, or find another source of energy and nutrition, such as occurs in chemosynthetic archaea found near hydrothermal vents and cold seeps. The aphotic zone can be subdivided into three different vertical regions, based on depth and temperature. First is the bathyal zone, extending from a depth of 1,000 metres down to 3,000 metres, with water temperature decreasing gradually as depth increases. Next is the abyssal zone, extending from a depth of 3,000 metres down to 6,000 metres. The final zone includes the deep oceanic trenches, and is known as the hadal zone. This, the deepest oceanic zone, extends from a depth of 6,000 metres down to approximately 11,034 metres, at the very bottom of the Mariana Trench, the deepest point on planet Earth. Abyssal plains are typically in the abyssal zone, at depths from 3,000 to 6,000 metres. The classification of oceanic zones described above can be summarized as follows:
Photic zone: euphotic (epipelagic), surface to about 200 m; dysphotic (mesopelagic), about 200–1,000 m.
Aphotic zone: bathyal, 1,000–3,000 m; abyssal, 3,000–6,000 m; hadal (oceanic trenches), 6,000 m to about 11,000 m.
Formation Oceanic crust, which forms the bedrock of abyssal plains, is continuously being created at mid-ocean ridges (a type of divergent boundary) by a process known as decompression melting. Plume-related decompression melting of solid mantle is responsible for creating ocean islands like the Hawaiian islands, as well as the ocean crust at mid-ocean ridges. This phenomenon is also the most common explanation for flood basalts and oceanic plateaus (two types of large igneous provinces). Decompression melting occurs when the upper mantle is partially melted into magma as it moves upwards under mid-ocean ridges. 
This upwelling magma then cools and solidifies by conduction and convection of heat to form new oceanic crust. Accretion occurs as mantle is added to the growing edges of a tectonic plate, usually associated with seafloor spreading. The age of oceanic crust is therefore a function of distance from the mid-ocean ridge. The youngest oceanic crust is at the mid-ocean ridges, and it becomes progressively older, cooler and denser as it migrates outwards from the mid-ocean ridges as part of the process called mantle convection. The lithosphere, which rides atop the asthenosphere, is divided into a number of tectonic plates that are continuously being created and consumed at their opposite plate boundaries. Oceanic crust and tectonic plates are formed and move apart at mid-ocean ridges. Abyssal hills are formed by stretching of the oceanic lithosphere. Consumption or destruction of the oceanic lithosphere occurs at oceanic trenches (a type of convergent boundary, also known as a destructive plate boundary) by a process known as subduction. Oceanic trenches are found at places where the oceanic lithospheric slabs of two different plates meet, and the denser (older) slab begins to descend back into the mantle. At the consumption edge of the plate (the oceanic trench), the oceanic lithosphere has thermally contracted to become quite dense, and it sinks under its own weight in the process of subduction. The subduction process consumes older oceanic lithosphere, so oceanic crust is seldom more than 200 million years old. The overall process of repeated cycles of creation and destruction of oceanic crust is known as the Supercontinent cycle, first proposed by Canadian geophysicist and geologist John Tuzo Wilson. New oceanic crust, closest to the mid-oceanic ridges, is mostly basalt at shallow levels and has a rugged topography. The roughness of this topography is a function of the rate at which the mid-ocean ridge is spreading (the spreading rate). Magnitudes of spreading rates vary quite significantly. Typical values for fast-spreading ridges are greater than 100 mm/yr, while slow-spreading ridges are typically less than 20 mm/yr. Studies have shown that the slower the spreading rate, the rougher the new oceanic crust will be, and vice versa. It is thought this phenomenon is due to faulting at the mid-ocean ridge when the new oceanic crust was formed. These faults pervading the oceanic crust, along with their bounding abyssal hills, are the most common tectonic and topographic features on the surface of the Earth. The process of seafloor spreading helps to explain the concept of continental drift in the theory of plate tectonics. The flat appearance of mature abyssal plains results from the blanketing of this originally uneven surface of oceanic crust by fine-grained sediments, mainly clay and silt. Much of this sediment is deposited from turbidity currents that have been channeled from the continental margins along submarine canyons down into deeper water. The remainder of the sediment comprises chiefly dust (clay particles) blown out to sea from land, and the remains of small marine plants and animals which sink from the upper layer of the ocean, known as pelagic sediments. The total sediment deposition rate in remote areas is estimated at two to three centimeters per thousand years. Sediment-covered abyssal plains are less common in the Pacific Ocean than in other major ocean basins because sediments from turbidity currents are trapped in oceanic trenches that border the Pacific Ocean. 
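The age–distance relationship mentioned above can be illustrated with a rough order-of-magnitude estimate (the distance and rate below are illustrative values only, with the rate taken as the speed at which crust moves away from the ridge axis, comparable to the spreading rates quoted above):

$$t \approx \frac{d}{v} = \frac{1000\ \text{km}}{50\ \text{mm/yr}} = \frac{10^{9}\ \text{mm}}{50\ \text{mm/yr}} = 2\times10^{7}\ \text{yr} \approx 20\ \text{million years},$$

so crust found 1,000 km from a ridge spreading at this rate would be on the order of 20 million years old, consistent with the observation that even the oldest oceanic crust is only a few hundred million years old.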
Abyssal plains are typically covered by deep sea, but during parts of the Messinian salinity crisis much of the Mediterranean Sea's abyssal plain was exposed to air as an empty deep hot dry salt-floored sink. Discovery The landmark scientific expedition (December 1872 – May 1876) of the British Royal Navy survey ship HMS Challenger yielded a tremendous amount of bathymetric data, much of which has been confirmed by subsequent researchers. Bathymetric data obtained during the course of the Challenger expedition enabled scientists to draw maps, which provided a rough outline of certain major submarine terrain features, such as the edge of the continental shelves and the Mid-Atlantic Ridge. This discontinuous set of data points was obtained by the simple technique of taking soundings by lowering long lines from the ship to the seabed. The Challenger expedition was followed by the 1879–1881 expedition of the Jeannette, led by United States Navy Lieutenant George Washington DeLong. The team sailed across the Chukchi Sea and recorded meteorological and astronomical data in addition to taking soundings of the seabed. The ship became trapped in the ice pack near Wrangel Island in September 1879, and was ultimately crushed and sunk in June 1881. The Jeannette expedition was followed by the 1893–1896 Arctic expedition of Norwegian explorer Fridtjof Nansen aboard the Fram, which proved that the Arctic Ocean was a deep oceanic basin, uninterrupted by any significant land masses north of the Eurasian continent. Beginning in 1916, Canadian physicist Robert William Boyle and other scientists of the Anti-Submarine Detection Investigation Committee (ASDIC) undertook research which ultimately led to the development of sonar technology. Acoustic sounding equipment was developed which could be operated much more rapidly than the sounding lines, thus enabling the German Meteor expedition aboard the German research vessel Meteor (1925–27) to take frequent soundings on east-west Atlantic transects. Maps produced from these techniques show the major Atlantic basins, but the depth precision of these early instruments was not sufficient to reveal the flat featureless abyssal plains. As technology improved, measurement of depth, latitude and longitude became more precise and it became possible to collect more or less continuous sets of data points. This allowed researchers to draw accurate and detailed maps of large areas of the ocean floor. Use of a continuously recording fathometer enabled Tolstoy & Ewing in the summer of 1947 to identify and describe the first abyssal plain. This plain, south of Newfoundland, is now known as the Sohm Abyssal Plain. Following this discovery many other examples were found in all the oceans. The Challenger Deep is the deepest surveyed point of all of Earth's oceans; it is at the south end of the Mariana Trench near the Mariana Islands group. The depression is named after HMS Challenger, whose researchers made the first recordings of its depth on 23 March 1875 at station 225. The reported depth was 4,475 fathoms (8184 meters) based on two separate soundings. On 1 June 2009, sonar mapping of the Challenger Deep by the Simrad EM120 multibeam sonar bathymetry system aboard the R/V Kilo Moana indicated a maximum depth of 10971 meters (6.82 miles). The sonar system uses phase and amplitude bottom detection, with an accuracy of better than 0.2% of water depth (this is an error of about 22 meters at this depth). 
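The quoted accuracy of the multibeam survey can be checked with simple arithmetic on the figures given above:

$$0.2\% \times 10{,}971\ \text{m} = 0.002 \times 10{,}971\ \text{m} \approx 22\ \text{m},$$

which is the stated depth uncertainty at the bottom of the Challenger Deep.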
Terrain features Hydrothermal vents A rare but important terrain feature found in the bathyal, abyssal and hadal zones is the hydrothermal vent. In contrast to the approximately 2 °C ambient water temperature at these depths, water emerges from these vents at temperatures ranging from 60 °C up to as high as 464 °C. Due to the high barometric pressure at these depths, water may exist in either its liquid form or as a supercritical fluid at such temperatures. At a barometric pressure of 218 atmospheres, the critical point of water is 375 °C. At a depth of 3,000 meters, the barometric pressure of sea water is more than 300 atmospheres (as salt water is denser than fresh water). At this depth and pressure, seawater becomes supercritical at a temperature of 407 °C. However, the increase in salinity at this depth pushes the water closer to its critical point. Thus, water emerging from the hottest parts of some hydrothermal vents, black smokers, and submarine volcanoes can be a supercritical fluid, possessing physical properties between those of a gas and those of a liquid. Sister Peak (Comfortless Cove Hydrothermal Field, elevation −2996 m), Shrimp Farm and Mephisto (Red Lion Hydrothermal Field, elevation −3047 m), are three hydrothermal vents of the black smoker category, on the Mid-Atlantic Ridge near Ascension Island. They are presumed to have been active since an earthquake shook the region in 2002. These vents have been observed to vent phase-separated, vapor-type fluids. In 2008, sustained exit temperatures of up to 407 °C were recorded at one of these vents, with a peak recorded temperature of up to 464 °C. These thermodynamic conditions exceed the critical point of seawater, and are the highest temperatures recorded to date from the seafloor. This is the first reported evidence for direct magmatic-hydrothermal interaction on a slow-spreading mid-ocean ridge. The initial stages of a vent chimney begin with the deposition of the mineral anhydrite. Sulfides of copper, iron, and zinc then precipitate in the chimney gaps, making it less porous over the course of time. Vent growths on the order of 30 cm (1 ft) per day have been recorded. An April 2007 exploration of the deep-sea vents off the coast of Fiji found those vents to be a significant source of dissolved iron (see iron cycle). Hydrothermal vents in the deep ocean typically form along the mid-ocean ridges, such as the East Pacific Rise and the Mid-Atlantic Ridge. These are locations where two tectonic plates are diverging and new crust is being formed. Cold seeps Another unusual feature found in the abyssal and hadal zones is the cold seep, sometimes called a cold vent. This is an area of the seabed where seepage of hydrogen sulfide, methane and other hydrocarbon-rich fluid occurs, often in the form of a deep-sea brine pool. The first cold seeps were discovered in 1983, at a depth of 3200 meters in the Gulf of Mexico. Since then, cold seeps have been discovered in many other areas of the World Ocean, including the Monterey Submarine Canyon just off Monterey Bay, California, the Sea of Japan, off the Pacific coast of Costa Rica, off the Atlantic coast of Africa, off the coast of Alaska, and under an ice shelf in Antarctica. Biodiversity Though the plains were once assumed to be vast, desert-like habitats, research over the past decade or so shows that they teem with a wide variety of microbial life. 
However, ecosystem structure and function at the deep seafloor have historically been poorly studied because of the size and remoteness of the abyss. Recent oceanographic expeditions conducted by an international group of scientists from the Census of Diversity of Abyssal Marine Life (CeDAMar) have found an extremely high level of biodiversity on abyssal plains, with up to 2000 species of bacteria, 250 species of protozoans, and 500 species of invertebrates (worms, crustaceans and molluscs), typically found at single abyssal sites. New species make up more than 80% of the thousands of seafloor invertebrate species collected at any abyssal station, highlighting our heretofore poor understanding of abyssal diversity and evolution. Richer biodiversity is associated with areas of known phytodetritus input and higher organic carbon flux. Abyssobrotula galatheae, a species of cusk eel in the family Ophidiidae, is among the deepest-living species of fish. In 1970, one specimen was trawled from a depth of 8370 meters in the Puerto Rico Trench; the animal was dead, however, upon arrival at the surface. In 2008, the hadal snailfish (Pseudoliparis amblystomopsis) was observed and recorded at a depth of 7700 meters in the Japan Trench. In December 2014, a type of snailfish was filmed at a depth of 8145 meters, followed in May 2017 by another snailfish filmed at 8178 meters. These are, to date, the deepest living fish ever recorded. Other fish of the abyssal zone include the fishes of the family Ipnopidae, which includes the abyssal spiderfish (Bathypterois longipes), tripodfish (Bathypterois grallator), feeler fish (Bathypterois longifilis), and the black lizardfish (Bathysauropsis gracilis). Some members of this family have been recorded from depths of more than 6000 meters. CeDAMar scientists have demonstrated that some abyssal and hadal species have a cosmopolitan distribution. One example of this would be protozoan foraminiferans, certain species of which are distributed from the Arctic to the Antarctic. Other faunal groups, such as the polychaete worms and isopod crustaceans, appear to be endemic to certain specific plains and basins. Many apparently unique taxa of nematode worms have also been recently discovered on abyssal plains. This suggests that the deep ocean has fostered adaptive radiations. The taxonomic composition of the nematode fauna in the abyssal Pacific is similar, but not identical to, that of the North Atlantic. Eleven of the 31 described species of Monoplacophora (a class of mollusks) live below 2000 meters. Of these 11 species, two live exclusively in the hadal zone. The greatest number of monoplacophorans are from the eastern Pacific Ocean along the oceanic trenches. However, no abyssal monoplacophorans have yet been found in the Western Pacific and only one abyssal species has been identified in the Indian Ocean. Of the 922 known species of chitons (from the Polyplacophora class of mollusks), 22 species (2.4%) are reported to live below 2000 meters and two of them are restricted to the abyssal plain. Although genetic studies are lacking, at least six of these species are thought to be eurybathic (capable of living in a wide range of depths), having been reported as occurring from the sublittoral to abyssal depths. 
A large number of the polyplacophorans from great depths are herbivorous or xylophagous, which could explain the difference between the distribution of monoplacophorans and polyplacophorans in the world's oceans. Peracarid crustaceans, including isopods, are known to form a significant part of the macrobenthic community that is responsible for scavenging on large food falls onto the sea floor. In 2000, scientists of the Diversity of the deep Atlantic benthos (DIVA 1) expedition (cruise M48/1 of the German research vessel RV Meteor III) discovered and collected three new species of the Asellota suborder of benthic isopods from the abyssal plains of the Angola Basin in the South Atlantic Ocean. In 2003, De Broyer et al. collected some 68,000 peracarid crustaceans from 62 species from baited traps deployed in the Weddell Sea, Scotia Sea, and off the South Shetland Islands. They found that about 98% of the specimens belonged to the amphipod superfamily Lysianassoidea, and 2% to the isopod family Cirolanidae. Half of these species were collected from depths of greater than 1000 meters. In 2005, the Japan Agency for Marine-Earth Science and Technology (JAMSTEC) remotely operated vehicle, KAIKO, collected sediment core from the Challenger Deep. 432 living specimens of soft-walled foraminifera were identified in the sediment samples. Foraminifera are single-celled protists that construct shells. There are an estimated 4,000 species of living foraminifera. Out of the 432 organisms collected, the overwhelming majority of the sample consisted of simple, soft-shelled foraminifera, with others representing species of the complex, multi-chambered genera Leptohalysis and Reophax. Overall, 85% of the specimens consisted of soft-shelled allogromiids. This is unusual compared to samples of sediment-dwelling organisms from other deep-sea environments, where the percentage of organic-walled foraminifera ranges from 5% to 20% of the total. Small organisms with hard calciferous shells have trouble growing at extreme depths because the water at that depth is severely lacking in calcium carbonate. The giant (5–20 cm) foraminifera known as xenophyophores are only found at depths of 500–10,000 metres, where they can occur in great numbers and greatly increase animal diversity due to their bioturbation and provision of living habitat for small animals. While similar lifeforms have been known to exist in shallower oceanic trenches (>7,000 m) and on the abyssal plain, the lifeforms discovered in the Challenger Deep may represent independent taxa from those shallower ecosystems. This preponderance of soft-shelled organisms at the Challenger Deep may be a result of selection pressure. Millions of years ago, the Challenger Deep was shallower than it is now. Over the past six to nine million years, as the Challenger Deep grew to its present depth, many of the species present in the sediment of that ancient biosphere were unable to adapt to the increasing water pressure and changing environment. Those species that were able to adapt may have been the ancestors of the organisms currently endemic to the Challenger Deep. Polychaetes occur throughout the Earth's oceans at all depths, from forms that live as plankton near the surface, to the deepest oceanic trenches. The robot ocean probe Nereus observed a 2–3 cm specimen (still unclassified) of polychaete at the bottom of the Challenger Deep on 31 May 2009. There are more than 10,000 described species of polychaetes; they can be found in nearly every marine environment. 
Some species live in the coldest ocean temperatures of the hadal zone, while others can be found in the extremely hot waters adjacent to hydrothermal vents. Within the abyssal and hadal zones, the areas around submarine hydrothermal vents and cold seeps have by far the greatest biomass and biodiversity per unit area. Fueled by the chemicals dissolved in the vent fluids, these areas are often home to large and diverse communities of thermophilic, halophilic and other extremophilic prokaryotic microorganisms (such as those of the sulfide-oxidizing genus Beggiatoa), often arranged in large bacterial mats near cold seeps. In these locations, chemosynthetic archaea and bacteria typically form the base of the food chain. Although the process of chemosynthesis is entirely microbial, these chemosynthetic microorganisms often support vast ecosystems consisting of complex multicellular organisms through symbiosis. These communities are characterized by species such as vesicomyid clams, mytilid mussels, limpets, isopods, giant tube worms, soft corals, eelpouts, galatheid crabs, and alvinocarid shrimp. The deepest seep community discovered thus far is in the Japan Trench, at a depth of 7700 meters. Probably the most important ecological characteristic of abyssal ecosystems is energy limitation. Abyssal seafloor communities are considered to be food limited because benthic production depends on the input of detrital organic material produced in the euphotic zone, thousands of meters above. Most of the organic flux arrives as an attenuated rain of small particles (typically, only 0.5–2% of net primary production in the euphotic zone), which decreases inversely with water depth. The small particle flux can be augmented by the fall of larger carcasses and downslope transport of organic material near continental margins. Exploitation of resources In addition to their high biodiversity, abyssal plains are of great current and future commercial and strategic interest. For example, they may be used for the legal and illegal disposal of large structures such as ships and oil rigs, radioactive waste and other hazardous waste, such as munitions. They may also be attractive sites for deep-sea fishing, and extraction of oil and gas and other minerals. Future deep-sea waste disposal activities that could be significant by 2025 include emplacement of sewage and sludge, carbon sequestration, and disposal of dredge spoils. As fish stocks dwindle in the upper ocean, deep-sea fisheries are increasingly being targeted for exploitation. Because deep sea fish are long-lived and slow growing, these deep-sea fisheries are not thought to be sustainable in the long term given current management practices. Changes in primary production in the photic zone are expected to alter the standing stocks in the food-limited aphotic zone. Hydrocarbon exploration in deep water occasionally results in significant environmental degradation resulting mainly from accumulation of contaminated drill cuttings, but also from oil spills. While the oil blowout involved in the Deepwater Horizon oil spill in the Gulf of Mexico originates from a wellhead only 1500 meters below the ocean surface, it nevertheless illustrates the kind of environmental disaster that can result from mishaps related to offshore drilling for oil and gas. Sediments of certain abyssal plains contain abundant mineral resources, notably polymetallic nodules. 
These potato-sized concretions of manganese, iron, nickel, cobalt, and copper, distributed on the seafloor at depths of greater than 4000 meters, are of significant commercial interest. The area of maximum commercial interest for polymetallic nodule mining (called the Pacific nodule province) lies in international waters of the Pacific Ocean, stretching from 118°W to 157°W and from 9°N to 16°N, an area of more than 3 million km2. The abyssal Clarion-Clipperton fracture zone (CCFZ) is an area within the Pacific nodule province that is currently under exploration for its mineral potential. Eight commercial contractors are currently licensed by the International Seabed Authority (an intergovernmental organization established to organize and control all mineral-related activities in the international seabed area beyond the limits of national jurisdiction) to explore nodule resources and to test mining techniques in eight claim areas, each covering 150,000 km2. When mining ultimately begins, each mining operation is projected to directly disrupt 300–800 km2 of seafloor per year and disturb the benthic fauna over an area 5–10 times that size due to redeposition of suspended sediments. Thus, over the 15-year projected duration of a single mining operation, nodule mining might severely damage abyssal seafloor communities over areas of 20,000 to 45,000 km2 (a zone at least the size of Massachusetts). Limited knowledge of the taxonomy, biogeography and natural history of deep-sea communities prevents accurate assessment of the risk of species extinctions from large-scale mining. Data acquired from the abyssal North Pacific and North Atlantic suggest that deep-sea ecosystems may be adversely affected by mining operations on decadal time scales. In 1978, a dredge aboard the Hughes Glomar Explorer, operated by the American mining consortium Ocean Minerals Company (OMCO), made a mining track at a depth of 5000 meters in the nodule fields of the CCFZ. In 2004, the French Research Institute for Exploitation of the Sea (IFREMER) conducted the Nodinaut expedition to this mining track (which is still visible on the seabed) to study the long-term effects of this physical disturbance on the sediment and its benthic fauna. Samples of the superficial sediment revealed that its physical and chemical properties had not shown any recovery since the disturbance made 26 years earlier. On the other hand, the biological activity measured in the track by instruments aboard the crewed submersible bathyscaphe Nautile did not differ from that at a nearby unperturbed site. These data suggest that the benthic fauna and nutrient fluxes at the water–sediment interface have fully recovered.
Physical sciences
Oceanic and coastal landforms
null