Fields: id, url, title, text, topic, section, sublist.
46981
https://en.wikipedia.org/wiki/Scutum%20%28constellation%29
Scutum (constellation)
Scutum is a small constellation. Its name is Latin for shield, and it was originally named Scutum Sobiescianum by Johannes Hevelius in 1684. Located just south of the celestial equator, its four brightest stars form a narrow diamond shape. It is one of the 88 IAU designated constellations defined in 1922. History Scutum was named in 1684 by Polish astronomer Johannes Hevelius (Jan Heweliusz), who originally named it Scutum Sobiescianum (Shield of Sobieski) to commemorate the victory of the Christian forces led by Polish King John III Sobieski (Jan III Sobieski) in the Battle of Vienna in 1683. Later, the name was shortened to Scutum. Five bright stars of Scutum (α Sct, β Sct, δ Sct, ε Sct and η Sct) were previously known as 1, 6, 2, 3, and 9 Aquilae respectively. The constellation of Scutum was adopted by the International Astronomical Union in 1922 as one of the 88 constellations covering the entire sky, with the official abbreviation of "Sct". The constellation boundaries are defined by a quadrilateral. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −3.83° and −15.94°. Coincidentally, the Chinese also associated these stars with battle armor, incorporating them into the larger asterism known as Tien Pien, i.e., the Heavenly Casque (or Helmet). Features Stars Scutum is not a bright constellation: its brightest star, Alpha Scuti, is a K-type giant star at magnitude 3.85. Some stars in the constellation are nonetheless notable. Beta Scuti, also known as 6 Aquilae, is the second brightest at magnitude 4.22, followed by Delta Scuti at magnitude 4.72. Beta Scuti is a binary system whose primary has a spectral type similar to the Sun's, although it is 1,270 times brighter. Delta Scuti is a bluish-white giant star that is currently approaching the Solar System; within 1.3 million years it will pass within about 10 light-years of Earth, by which time it will appear much brighter than Sirius. UY Scuti is a red supergiant and one of the largest stars currently known, with a radius over 900 times that of the Sun. RSGC1-F01 is another red supergiant whose radius is over 1,450 times that of the Sun. Scutum contains several clusters of supergiant stars, including RSGC1, Stephenson 2 and RSGC3. Deep sky objects Although not a large constellation, Scutum contains several open clusters, as well as a globular cluster and a planetary nebula. The two best known deep sky objects in Scutum are M11 (the Wild Duck Cluster) and the open cluster M26 (NGC 6694). The globular cluster NGC 6712 and the planetary nebula IC 1295 can be found in the eastern part of the constellation, only 24 arcminutes apart. The most prominent open cluster in Scutum is the Wild Duck Cluster, M11. It was named by William Henry Smyth in 1844 for its resemblance in the eyepiece to a flock of ducks in flight. The cluster, 6200 light-years from Earth and 20 light-years in diameter, contains approximately 3000 stars, making it a particularly rich cluster. It is around 220 million years old, although some studies give older estimates. Estimates for the mass of the star cluster vary between studies. Space exploration The space probe Pioneer 11 is moving in the direction of this constellation. It will not pass near the closest star in this constellation for over a million years at its present speed, by which time its batteries will be long dead.
Physical sciences
Other
Astronomy
46999
https://en.wikipedia.org/wiki/Buffer%20solution
Buffer solution
A buffer solution is a solution whose pH does not change significantly on dilution or when a small amount of strong acid or base is added at constant temperature. Buffer solutions are used as a means of keeping pH at a nearly constant value in a wide variety of chemical applications. In nature, there are many living systems that use buffering for pH regulation. For example, the bicarbonate buffering system is used to regulate the pH of blood, and bicarbonate also acts as a buffer in the ocean. Principles of buffering Buffer solutions resist pH change because of a chemical equilibrium between the weak acid HA and its conjugate base A−: HA ⇌ H+ + A−. When some strong acid is added to an equilibrium mixture of the weak acid and its conjugate base, hydrogen ions (H+) are added and the equilibrium is shifted to the left, in accordance with Le Chatelier's principle. Because of this, the hydrogen ion concentration increases by less than the amount expected for the quantity of strong acid added. Similarly, if strong alkali is added to the mixture, the hydrogen ion concentration decreases by less than the amount expected for the quantity of alkali added. In Figure 1, the effect is illustrated by the simulated titration of a weak acid with pKa = 4.7. The relative concentration of undissociated acid is shown in blue, and of its conjugate base in red. The pH changes relatively slowly in the buffer region, pH = pKa ± 1, centered at pH = 4.7, where [HA] = [A−]. The hydrogen ion concentration decreases by less than the amount expected because most of the added hydroxide ion is consumed in the reaction HA + OH− → A− + H2O and only a little is consumed in the neutralization reaction H+ + OH− → H2O (the reaction that results in an increase in pH). Once the acid is more than 95% deprotonated, the pH rises rapidly because most of the added alkali is then consumed in the neutralization reaction. Buffer capacity Buffer capacity is a quantitative measure of the resistance to change of pH of a solution containing a buffering agent with respect to a change of acid or alkali concentration. It can be defined as β = dn/d(pH), where dn is an infinitesimal amount of added base, or as β = −dnH/d(pH), where dnH is an infinitesimal amount of added acid. pH is defined as −log10[H+], and d(pH) is an infinitesimal change in pH. With either definition the buffer capacity for a weak acid HA with dissociation constant Ka can be expressed as β = 2.303([H+] + CA·Ka·[H+]/(Ka + [H+])² + Kw/[H+]), where [H+] is the concentration of hydrogen ions and CA is the total concentration of added acid (the buffering agent). Kw is the equilibrium constant for self-ionization of water, equal to 1.0 × 10−14 at 25 °C. Note that in solution H+ exists as the hydronium ion H3O+, and further aquation of the hydronium ion has negligible effect on the dissociation equilibrium, except at very high acid concentration. This equation shows that there are three regions of raised buffer capacity (see Figure 2). In the central region of the curve (coloured green on the plot), the second term is dominant, so β ≈ 2.303·CA·Ka·[H+]/(Ka + [H+])². Buffer capacity rises to a local maximum at pH = pKa. The height of this peak depends on the value of CA: buffer capacity is negligible when the concentration of buffering agent is very small and increases with increasing concentration of the buffering agent. Some authors show only this region in graphs of buffer capacity. Buffer capacity falls to 33% of the maximum value at pH = pKa ± 1, to 10% at pH = pKa ± 1.5 and to 1% at pH = pKa ± 2. For this reason the most useful range is approximately pKa ± 1.
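To make the three regions of the buffer-capacity expression concrete, here is a minimal Python sketch that evaluates β(pH) = 2.303([H+] + CA·Ka·[H+]/(Ka + [H+])² + Kw/[H+]) for an acetic-acid-like buffer; the pKa of 4.76 comes from the table below, while the 0.10 M concentration and the sampled pH values are assumed purely for illustration.

```python
# Illustrative sketch: buffer capacity of a weak-acid buffer as a function of pH.
# Assumed inputs (not from the article): pKa = 4.76 and C_A = 0.10 M.

Ka = 10 ** -4.76      # acid dissociation constant
Kw = 1.0e-14          # water self-ionization constant at 25 degC
C_A = 0.10            # total concentration of the buffering agent, mol/L

def buffer_capacity(pH: float) -> float:
    """beta = 2.303 * ([H+] + C_A*Ka*[H+]/(Ka + [H+])**2 + Kw/[H+])."""
    h = 10 ** -pH
    return 2.303 * (h + C_A * Ka * h / (Ka + h) ** 2 + Kw / h)

if __name__ == "__main__":
    beta_max = buffer_capacity(4.76)   # local maximum at pH = pKa
    for pH in (1.0, 3.76, 4.76, 5.76, 7.0, 13.0):
        beta = buffer_capacity(pH)
        print(f"pH {pH:5.2f}  beta = {beta:.4f} mol/L per pH unit "
              f"({100 * beta / beta_max:6.1f}% of the pH = pKa peak)")
```

Evaluated this way, the value at pH = pKa ± 1 comes out close to the 33% figure quoted above, and the rise at very low and very high pH shows the two concentration-independent regions.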
When choosing a buffer for use at a specific pH, it should have a pKa value as close as possible to that pH. With strongly acidic solutions, pH less than about 2 (coloured red on the plot), the first term in the equation dominates, and buffer capacity rises exponentially with decreasing pH: β ≈ 2.303[H+]. This results from the fact that the second and third terms become negligible at very low pH. This term is independent of the presence or absence of a buffering agent. With strongly alkaline solutions, pH more than about 12 (coloured blue on the plot), the third term in the equation dominates, and buffer capacity rises exponentially with increasing pH: β ≈ 2.303Kw/[H+] = 2.303[OH−]. This results from the fact that the first and second terms become negligible at very high pH. This term is also independent of the presence or absence of a buffering agent. Applications of buffers The pH of a solution containing a buffering agent can only vary within a narrow range, regardless of what else may be present in the solution. In biological systems this is an essential condition for enzymes to function correctly. For example, in human blood a mixture of carbonic acid (H2CO3) and bicarbonate (HCO3−) is present in the plasma fraction; this constitutes the major mechanism for maintaining the pH of blood between 7.35 and 7.45. Outside this narrow range (7.40 ± 0.05 pH unit), the metabolic conditions of acidosis and alkalosis rapidly develop, ultimately leading to death if the correct buffering capacity is not rapidly restored. If the pH value of a solution rises or falls too much, the effectiveness of an enzyme decreases in a process known as denaturation, which is usually irreversible. The majority of biological samples that are used in research are kept in a buffer solution, often phosphate buffered saline (PBS) at pH 7.4. In industry, buffering agents are used in fermentation processes and in setting the correct conditions for dyes used in colouring fabrics. They are also used in chemical analysis and calibration of pH meters. Simple buffering agents {| class="wikitable" ! Buffering agent !! pKa !! Useful pH range |- | Citric acid || 3.13, 4.76, 6.40 || 2.1–7.4 |- | Acetic acid || 4.8 || 3.8–5.8 |- | KH2PO4 || 7.2 || 6.2–8.2 |- | CHES || 9.3 || 8.3–10.3 |- | Borate || 9.24 || 8.25–10.25 |} For buffers in acid regions, the pH may be adjusted to a desired value by adding a strong acid such as hydrochloric acid to the particular buffering agent. For alkaline buffers, a strong base such as sodium hydroxide may be added. Alternatively, a buffer mixture can be made from a mixture of an acid and its conjugate base. For example, an acetate buffer can be made from a mixture of acetic acid and sodium acetate. Similarly, an alkaline buffer can be made from a mixture of the base and its conjugate acid. "Universal" buffer mixtures By combining substances with pKa values differing by only two or less and adjusting the pH, a wide range of buffers can be obtained. Citric acid is a useful component of a buffer mixture because it has three pKa values, separated by less than two. The buffer range can be extended by adding other buffering agents. The following mixtures (McIlvaine's buffer solutions) have a buffer range of pH 3 to 8. {| class="wikitable" ! 0.2 M Na2HPO4 (mL) ! 0.1 M citric acid (mL) ! 
pH |- | 20.55 | 79.45 | style="background:#ff0000; color:white" | 3.0 |- | 38.55 | 61.45 | style="background:#ff7777; color:white" |4.0 |- | 51.50 | 48.50 | style="background:#ff7700;" | 5.0 |- | 63.15 | 36.85 | style="background:#ffff00;" |6.0 |- | 82.35 | 17.65 | style="background:#007777; color:white" | 7.0 |- | 97.25 | 2.75 |style="background:#0077ff; color:white" | 8.0 |} A mixture containing citric acid, monopotassium phosphate, boric acid, and diethyl barbituric acid can be made to cover the pH range 2.6 to 12. Other universal buffers are the Carmody buffer and the Britton–Robinson buffer, developed in 1931. Common buffer compounds used in biology For effective range see Buffer capacity, above. Also see Good's buffers for the historic design principles and favourable properties of these buffer substances in biochemical applications. Calculating buffer pH Monoprotic acids First write down the equilibrium expression HA ⇌ A− + H+. This shows that when the acid dissociates, equal amounts of hydrogen ion and anion are produced. The equilibrium concentrations of these three components can be calculated in an ICE table (ICE standing for "initial, change, equilibrium"). {| class="wikitable" |+ ICE table for a monoprotic acid |- ! ! [HA] !! [A−] !! [H+] |- ! I | C0 || 0 || y |- ! C | −x || x || x |- ! E | C0 − x || x || x + y |} The first row, labelled I, lists the initial conditions: the concentration of acid is C0, initially undissociated, so the concentrations of A− and H+ would be zero; y is the initial concentration of added strong acid, such as hydrochloric acid. If strong alkali, such as sodium hydroxide, is added, then y will have a negative sign because alkali removes hydrogen ions from the solution. The second row, labelled C for "change", specifies the changes that occur when the acid dissociates. The acid concentration decreases by an amount −x, and the concentrations of A− and H+ both increase by an amount +x. This follows from the equilibrium expression. The third row, labelled E for "equilibrium", adds together the first two rows and shows the concentrations at equilibrium. To find x, use the formula for the equilibrium constant in terms of concentrations: Ka = [H+][A−]/[HA]. Substitute the concentrations with the values found in the last row of the ICE table: Ka = x(x + y)/(C0 − x). Simplify to x² + (Ka + y)x − KaC0 = 0. With specific values for C0, Ka and y, this equation can be solved for x. Assuming that pH = −log10[H+], the pH can be calculated as pH = −log10(x + y). Polyprotic acids Polyprotic acids are acids that can lose more than one proton. The constant for dissociation of the first proton may be denoted as Ka1, and the constants for dissociation of successive protons as Ka2, etc. Citric acid is an example of a polyprotic acid H3A, as it can lose three protons. {| class="wikitable" style="width: 230px;" |+ Stepwise dissociation constants |- ! Equilibrium !! Citric acid |- | H3A ⇌ H2A− + H+ || pKa1 = 3.13 |- | H2A− ⇌ HA2− + H+ || pKa2 = 4.76 |- | HA2− ⇌ A3− + H+ || pKa3 = 6.40 |} When the difference between successive pKa values is less than about 3, there is overlap between the pH range of existence of the species in equilibrium. The smaller the difference, the more the overlap. In the case of citric acid, the overlap is extensive and solutions of citric acid are buffered over the whole range of pH 2.5 to 7.5. Calculation of the pH with a polyprotic acid requires a speciation calculation to be performed. 
In the case of citric acid, this entails the solution of the two equations of mass balance: CA = [A3−] + β1[A3−][H+] + β2[A3−][H+]² + β3[A3−][H+]³ and CH = [H+] + β1[A3−][H+] + 2β2[A3−][H+]² + 3β3[A3−][H+]³ − Kw/[H+]. CA is the analytical concentration of the acid, CH is the analytical concentration of added hydrogen ions, βq are the cumulative association constants, and Kw is the constant for self-ionization of water. These are two non-linear simultaneous equations in the two unknown quantities [A3−] and [H+]. Many computer programs are available to do this calculation. The speciation diagram for citric acid was produced with the program HySS. N.B. The numbering of the cumulative, overall constants is the reverse of the numbering of the stepwise dissociation constants. {| class="wikitable" |+ Relationship between cumulative association constant (β) values and stepwise dissociation constant (K) values for a tribasic acid. ! Equilibrium !! Relationship |- | A3− + H+ ⇌ HA2− || log β1 = pKa3 |- | A3− + 2H+ ⇌ H2A− || log β2 = pKa2 + pKa3 |- | A3− + 3H+ ⇌ H3A || log β3 = pKa1 + pKa2 + pKa3 |} Cumulative association constants are used in general-purpose computer programs such as the one used to obtain the speciation diagram above.
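As an illustration of such a speciation calculation, the Python sketch below solves the mass-balance equations for a citric-acid buffer using the pKa values from the stepwise table above. The chosen mixture (0.05 M citric acid plus 0.05 M trisodium citrate) is an assumed example, and the simple root-finding approach merely stands in for dedicated programs such as HySS.

```python
# Sketch of a speciation calculation for citric acid (H3A), following the
# mass-balance equations above. Assumed example: 0.05 M citric acid mixed with
# 0.05 M trisodium citrate, so C_A = 0.10 M and C_H = 3 * 0.05 = 0.15 M.
from scipy.optimize import brentq

pKa1, pKa2, pKa3 = 3.13, 4.76, 6.40           # stepwise dissociation constants
beta1 = 10 ** pKa3                             # log beta1 = pKa3
beta2 = 10 ** (pKa2 + pKa3)                    # log beta2 = pKa2 + pKa3
beta3 = 10 ** (pKa1 + pKa2 + pKa3)             # log beta3 = pKa1 + pKa2 + pKa3
Kw = 1.0e-14

C_A = 0.05 + 0.05          # total citrate from acid + citrate salt, mol/L
C_H = 3 * 0.05             # protons carried in by the fully protonated acid

def proton_balance(pH: float) -> float:
    """Residual of the proton mass balance at a trial pH."""
    h = 10 ** -pH
    # Free citrate [A3-] from the citrate mass balance:
    a = C_A / (1 + beta1 * h + beta2 * h ** 2 + beta3 * h ** 3)
    bound = a * (beta1 * h + 2 * beta2 * h ** 2 + 3 * beta3 * h ** 3)
    return h - Kw / h + bound - C_H

pH = brentq(proton_balance, 0.0, 14.0)   # the residual changes sign over this range
print(f"calculated pH = {pH:.2f}")       # comes out close to pKa2 for this mixture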
Physical sciences
Concepts
Chemistry
47011
https://en.wikipedia.org/wiki/Arrhenius%20equation
Arrhenius equation
In physical chemistry, the Arrhenius equation is a formula for the temperature dependence of reaction rates. The equation was proposed by Svante Arrhenius in 1889, based on the work of Dutch chemist Jacobus Henricus van 't Hoff who had noted in 1884 that the van 't Hoff equation for the temperature dependence of equilibrium constants suggests such a formula for the rates of both forward and reverse reactions. This equation has a vast and important application in determining the rate of chemical reactions and for calculation of the energy of activation. Arrhenius provided a physical justification and interpretation for the formula. Currently, it is best seen as an empirical relationship. It can be used to model the temperature variation of diffusion coefficients, population of crystal vacancies, creep rates, and many other thermally induced processes and reactions. The Eyring equation, developed in 1935, also expresses the relationship between rate and energy. Formulation The Arrhenius equation describes the exponential dependence of the rate constant of a chemical reaction on the absolute temperature as k = A·exp(−Ea/(RT)), where k is the rate constant (frequency of collisions resulting in a reaction), T is the absolute temperature, and A is the pre-exponential factor or Arrhenius factor or frequency factor. Arrhenius originally considered A to be a temperature-independent constant for each chemical reaction; however, more recent treatments include some temperature dependence – see below. Ea is the molar activation energy for the reaction, and R is the universal gas constant. Alternatively, the equation may be expressed as k = A·exp(−Ea/(kB·T)), where Ea is the activation energy for the reaction (in the same units as kB·T) and kB is the Boltzmann constant. The only difference is the unit of Ea: the former form uses energy per mole, which is common in chemistry, while the latter form uses energy per molecule directly, which is common in physics. The different units are accounted for in using either the gas constant, R, or the Boltzmann constant, kB, as the multiplier of temperature T. The units of the pre-exponential factor A are identical to those of the rate constant and will vary depending on the order of the reaction. If the reaction is first order it has the unit s−1, and for that reason it is often called the frequency factor or attempt frequency of the reaction. Most simply, k is the number of collisions that result in a reaction per second, A is the number of collisions (leading to a reaction or not) per second occurring with the proper orientation to react, and exp(−Ea/(RT)) is the probability that any given collision will result in a reaction. It can be seen that either increasing the temperature or decreasing the activation energy (for example through the use of catalysts) will result in an increase in rate of reaction. Given the small temperature range of kinetic studies, it is reasonable to approximate the activation energy as being independent of the temperature. Similarly, under a wide range of practical conditions, the weak temperature dependence of the pre-exponential factor is negligible compared to the temperature dependence of the factor exp(−Ea/(RT)); except in the case of "barrierless" diffusion-limited reactions, in which case the pre-exponential factor is dominant and is directly observable. With this equation it can be roughly estimated that the rate of reaction increases by a factor of about 2 to 3 for every 10 °C rise in temperature, for common values of activation energy and temperature range. The factor exp(−Ea/(RT)) denotes the fraction of molecules with energy greater than or equal to Ea. 
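As a quick numerical check of the "2 to 3 times per 10 °C" rule of thumb mentioned above, the short Python sketch below evaluates k = A·exp(−Ea/(RT)) at 25 °C and 35 °C for an assumed activation energy of 60 kJ/mol; both the activation energy and the pre-exponential factor are chosen only for illustration, and A cancels out of the ratio.

```python
# Illustrative use of the Arrhenius equation k = A * exp(-Ea / (R * T)).
# Assumed values (not from the article): Ea = 60 kJ/mol, arbitrary A.
import math

R = 8.314          # universal gas constant, J/(mol K)
Ea = 60_000.0      # assumed activation energy, J/mol
A = 1.0e12         # arbitrary pre-exponential factor, 1/s (first-order units)

def k(T: float) -> float:
    """Arrhenius rate constant at absolute temperature T (kelvin)."""
    return A * math.exp(-Ea / (R * T))

T1, T2 = 298.15, 308.15                            # 25 degC and 35 degC
print(f"k(35 degC) / k(25 degC) = {k(T2) / k(T1):.2f}")  # about 2.2 for Ea = 60 kJ/mol
```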
Derivation Van 't Hoff argued that the temperature of a reaction and the standard equilibrium constant K exhibit the relation d(ln K)/dT = ΔU°/(RT²), where ΔU° denotes the apposite standard internal energy change value. Let kf and kb respectively denote the forward and backward rate constants of the reaction of interest; then K = kf/kb, an equation from which d(ln kf)/dT − d(ln kb)/dT = ΔU°/(RT²) naturally follows. The preceding equation can be broken down into the following two equations: d(ln kf)/dT = constant + Ef/(RT²) and d(ln kb)/dT = constant + Eb/(RT²), where Ef and Eb are the activation energies associated with the forward and backward reactions respectively, with ΔU° = Ef − Eb. Experimental findings suggest that the constants in these two equations can be treated as being equal to zero, so that d(ln kf)/dT = Ef/(RT²) and d(ln kb)/dT = Eb/(RT²). Integrating these equations and taking the exponential yields the results kf = Af·exp(−Ef/(RT)) and kb = Ab·exp(−Eb/(RT)), where each pre-exponential factor Af or Ab is mathematically the exponential of the constant of integration for the respective indefinite integral in question. Arrhenius plot Taking the natural logarithm of the Arrhenius equation yields ln k = ln A − Ea/(RT). Rearranging yields ln k = (−Ea/R)(1/T) + ln A. This has the same form as the equation for a straight line, y = mx + c, where x is the reciprocal of T. So, when a reaction has a rate constant obeying the Arrhenius equation, a plot of ln k versus T−1 gives a straight line, whose slope and intercept can be used to determine Ea and A respectively. This procedure is common in experimental chemical kinetics. The activation energy is simply obtained by multiplying by (−R) the slope of the straight line drawn from a plot of ln k versus (1/T): Ea = −R·(slope). Modified Arrhenius equation The modified Arrhenius equation makes explicit the temperature dependence of the pre-exponential factor. The modified equation is usually of the form k = A·(T/T0)^n·exp(−Ea/(RT)), where T0 is a reference temperature. The original Arrhenius expression above corresponds to n = 0. Fitted rate constants typically lie in the range −1 < n < 1. Theoretical analyses yield various predictions for n. It has been pointed out that "it is not feasible to establish, on the basis of temperature studies of the rate constant, whether the predicted T^(1/2) dependence of the pre-exponential factor is observed experimentally". However, if additional evidence is available, from theory and/or from experiment (such as density dependence), there is no obstacle to incisive tests of the Arrhenius law. Another common modification is the stretched exponential form k = A·exp[−(Ea/(RT))^β], where β is a dimensionless number of order 1. This is typically regarded as a purely empirical correction or fudge factor to make the model fit the data, but can have theoretical meaning, for example showing the presence of a range of activation energies or in special cases like the Mott variable range hopping. Theoretical interpretation Arrhenius's concept of activation energy Arrhenius argued that for reactants to transform into products, they must first acquire a minimum amount of energy, called the activation energy Ea. At an absolute temperature T, the fraction of molecules that have a kinetic energy greater than Ea can be calculated from statistical mechanics. The concept of activation energy explains the exponential nature of the relationship, and in one way or another, it is present in all kinetic theories. The calculations for reaction rate constants involve an energy averaging over a Maxwell–Boltzmann distribution with Ea as lower bound and so are often of the type of incomplete gamma functions, which turn out to be proportional to exp(−Ea/(RT)). Collision theory One approach is the collision theory of chemical reactions, developed by Max Trautz and William Lewis in the years 1916–18. 
In this theory, molecules are supposed to react if they collide with a relative kinetic energy along their line of centers that exceeds Ea. The number of binary collisions between two unlike molecules per second per unit volume is found to be zAB = NA·dAB²·√(8π·kB·T/μAB), where NA is the Avogadro constant, dAB is the average diameter of A and B, T is the temperature (which is multiplied by the Boltzmann constant kB to convert it to energy), and μAB is the reduced mass. The rate constant is then calculated as k = zAB·exp(−Ea/(RT)), so that collision theory predicts that the pre-exponential factor is equal to the collision number zAB. However, for many reactions this agrees poorly with experiment, so the rate constant is written instead as k = ρ·zAB·exp(−Ea/(RT)). Here ρ is an empirical steric factor, often much less than 1.00, which is interpreted as the fraction of sufficiently energetic collisions in which the two molecules have the correct mutual orientation to react. Transition state theory The Eyring equation, another Arrhenius-like expression, appears in the "transition state theory" of chemical reactions, formulated by Eugene Wigner, Henry Eyring, Michael Polanyi and M. G. Evans in the 1930s. The Eyring equation can be written k = (kB·T/h)·exp(−ΔG‡/(RT)) = (kB·T/h)·exp(ΔS‡/R)·exp(−ΔH‡/(RT)), where ΔG‡ is the Gibbs energy of activation, ΔS‡ is the entropy of activation, ΔH‡ is the enthalpy of activation, kB is the Boltzmann constant, and h is the Planck constant. At first sight this looks like an exponential multiplied by a factor that is linear in temperature. However, free energy is itself a temperature-dependent quantity. The free energy of activation is the difference of an enthalpy term and an entropy term multiplied by the absolute temperature (ΔG‡ = ΔH‡ − T·ΔS‡). The pre-exponential factor depends primarily on the entropy of activation. The overall expression again takes the form of an Arrhenius exponential (of enthalpy rather than energy) multiplied by a slowly varying function of T. The precise form of the temperature dependence depends upon the reaction, and can be calculated using formulas from statistical mechanics involving the partition functions of the reactants and of the activated complex. Limitations of the idea of Arrhenius activation energy Both the Arrhenius activation energy and the rate constant k are experimentally determined, and represent macroscopic reaction-specific parameters that are not simply related to threshold energies and the success of individual collisions at the molecular level. Consider a particular collision (an elementary reaction) between molecules A and B. The collision angle, the relative translational energy, the internal (particularly vibrational) energy will all determine the chance that the collision will produce a product molecule AB. Macroscopic measurements of Ea and k are the result of many individual collisions with differing collision parameters. To probe reaction rates at molecular level, experiments are conducted under near-collisional conditions and this subject is often called molecular reaction dynamics. Another situation where the explanation of the Arrhenius equation parameters falls short is in heterogeneous catalysis, especially for reactions that show Langmuir-Hinshelwood kinetics. Clearly, molecules on surfaces do not "collide" directly, and a simple molecular cross-section does not apply here. Instead, the pre-exponential factor reflects the travel across the surface towards the active site. There are deviations from the Arrhenius law during the glass transition in all classes of glass-forming matter. 
The Arrhenius law predicts that the motion of the structural units (atoms, molecules, ions, etc.) should slow down at a slower rate through the glass transition than is experimentally observed. In other words, the structural units slow down at a faster rate than is predicted by the Arrhenius law. This observation is made reasonable assuming that the units must overcome an energy barrier by means of a thermal activation energy. The thermal energy must be high enough to allow for translational motion of the units which leads to viscous flow of the material.
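To connect the Arrhenius-plot procedure described above to practice, here is a minimal Python sketch that fits ln k against 1/T with a straight line and recovers Ea from the slope and A from the intercept. The rate-constant data are synthetic values generated for illustration (from an assumed Ea of 80 kJ/mol and A of 1e10 s−1), not measurements from the article.

```python
# Minimal Arrhenius-plot fit: the slope of ln k vs 1/T gives -Ea/R.
# The "measured" rate constants below are synthetic, generated from assumed
# values Ea = 80 kJ/mol and A = 1e10 1/s purely to demonstrate the procedure.
import numpy as np

R = 8.314                                        # J/(mol K)
T = np.array([300.0, 320.0, 340.0, 360.0])       # temperatures, K
k = np.array([1.2e-4, 8.7e-4, 5.1e-3, 2.5e-2])   # synthetic rate constants, 1/s

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)  # straight-line fit
Ea = -slope * R                                  # activation energy from the slope
A = np.exp(intercept)                            # pre-exponential factor from the intercept

print(f"Ea = {Ea / 1000:.1f} kJ/mol, A = {A:.2e} 1/s")  # recovers roughly the assumed inputs
```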
Physical sciences
Kinetics
Chemistry
47139
https://en.wikipedia.org/wiki/Incandescent%20light%20bulb
Incandescent light bulb
An incandescent light bulb, incandescent lamp or incandescent light globe is an electric light with a filament that is heated until it glows. The filament is enclosed in a glass bulb that is either evacuated or filled with inert gas to protect the filament from oxidation. Electric current is supplied to the filament by terminals or wires embedded in the glass. A bulb socket provides mechanical support and electrical connections. Incandescent bulbs are manufactured in a wide range of sizes, light output, and voltage ratings, from 1.5 volts to about 300 volts. They require no external regulating equipment, have low manufacturing costs, and work equally well on either alternating current or direct current. As a result, the incandescent bulb became widely used in household and commercial lighting, for portable lighting such as table lamps, car headlamps, and flashlights, and for decorative and advertising lighting. Incandescent bulbs are much less efficient than other types of electric lighting. Less than 5% of the energy they consume is converted into visible light; the rest is lost as heat. The luminous efficacy of a typical incandescent bulb for 120 V operation is 16 lumens per watt (lm/W), compared with 60 lm/W for a compact fluorescent bulb or 100 lm/W for typical white LED lamps. The heat produced by filaments is used in some applications, such as heat lamps in incubators, lava lamps, Edison effect bulbs, and the Easy-Bake Oven toy. Quartz envelope halogen infrared heaters are used for industrial processes such as paint curing and space heating. Incandescent bulbs typically have shorter lifetimes compared to other types of lighting; around 1,000 hours for home light bulbs versus typically 10,000 hours for compact fluorescents and 20,000–30,000 hours for lighting LEDs. Most incandescent bulbs can be replaced by fluorescent lamps, high-intensity discharge lamps, and light-emitting diode lamps (LED). Some governments have begun a phase-out of incandescent light bulbs to reduce energy consumption. History Historians Robert Friedel and Paul Israel list inventors of incandescent lamps prior to Joseph Swan and Thomas Edison of General Electric. They conclude that Edison's version was the first practical implementation, able to outstrip the others because of a combination of four factors: an effective incandescent material; a vacuum higher than other implementations which was achieved through the use of a Sprengel pump; a high resistance that made power distribution from a centralized source economically viable, and the development of the associated components required for a large-scale lighting system. Historian Thomas Hughes has attributed Edison's success to his development of an entire, integrated system of electric lighting. Early pre-commercial research In 1761, Ebenezer Kinnersley demonstrated heating a wire to incandescence. However such wires tended to melt or oxidize very rapidly (burn) in the presence of air. Limelight became a popular form of stage lighting in the early 19th century, by heating a piece of calcium oxide to incandescence with an oxyhydrogen torch. In 1802, Humphry Davy used what he described as "a battery of immense size", consisting of 2,000 cells housed in the basement of the Royal Institution of Great Britain, to create an incandescent light by passing the current through a thin strip of platinum, chosen because the metal had an extremely high melting point. 
It was not bright enough nor did it last long enough to be practical, but it was the precedent behind the efforts of scores of experimenters over the next 75 years. Davy also demonstrated the electric arc, by passing high current between two pieces of charcoal. For the next 40 years much research was given to turning the carbon arc lamp into a practical means of lighting. The carbon arc itself was dim and violet in color, emitting most of its energy in the ultraviolet, but the positive electrode was heated to just below the melting point of carbon and glowed very brightly with incandescence very close to that of sunlight. Arc lamps burned up their carbon rods very rapidly, expelled dangerous carbon monoxide, and tended to produce outputs in the tens of kilowatts. Therefore, they were only practical for lighting large areas, so researchers continued to search for a way to make lamps suitable for home use. Over the first three-quarters of the 19th century, many experimenters worked with various combinations of platinum or iridium wires, carbon rods, and evacuated or semi-evacuated enclosures. Many of these devices were demonstrated and some were patented. In 1835, James Bowman Lindsay demonstrated a constant electric light at a public meeting in Dundee, Scotland. He stated that he could "read a book at a distance of one and a half feet". However he did not develop the electric light any further. In 1838, Belgian lithographer Marcellin Jobard invented an incandescent light bulb with a vacuum atmosphere using a carbon filament. In 1840, British scientist Warren De la Rue enclosed a coiled platinum filament in a vacuum tube and passed an electric current through it. The design was based on the concept that the high melting point of platinum would allow it to operate at high temperatures and that the evacuated chamber would contain fewer gas molecules to react with the platinum, improving its longevity. Although a workable design, the cost of the platinum made it impractical for commercial use. In 1841, Frederick de Moleyns of England was granted the first patent for an incandescent lamp, with a design using platinum wires contained within a vacuum bulb. He also used carbon. In 1845, American John W. Starr patented an incandescent light bulb using carbon filaments. His invention was never produced commercially. In 1851, Jean Eugène Robert-Houdin publicly demonstrated incandescent light bulbs on his estate in Blois, France. His light bulbs are on display in the museum of the Château de Blois. In 1859, Moses G. Farmer built an electric incandescent light bulb using a platinum filament. Thomas Edison later saw one of these bulbs in a shop in Boston, and asked Farmer for advice on the electric light business. In 1872, Russian Alexander Lodygin invented an incandescent light bulb and obtained a Russian patent in 1874. He used as a burner two carbon rods of diminished section in a glass receiver, hermetically sealed, and filled with nitrogen, electrically arranged so that the current could be passed to the second carbon when the first had been consumed. Later he lived in the US, changed his name to Alexander de Lodyguine and applied for and obtained patents for incandescent lamps having chromium, iridium, rhodium, ruthenium, osmium, molybdenum and tungsten filaments. On 24 July 1874, a Canadian patent was filed by Henry Woodward and Mathew Evans for a lamp consisting of carbon rods mounted in a nitrogen-filled glass cylinder. 
They were unsuccessful at commercializing their lamp, and sold rights to their patent to Thomas Edison in 1879. (Edison needed ownership of the novel claim of lamps connected in a parallel circuit.) The government of Canada maintains that it is Woodward and Evans who invented the lightbulb. On 4 March 1880, just five months after Edison's light bulb, Alessandro Cruto created his first incandescent lamp. Cruto produced a filament by deposition of graphite on thin platinum filaments, by heating it with an electric current in the presence of gaseous ethyl alcohol. Heating this platinum at high temperatures leaves behind thin filaments of platinum coated with pure graphite. By September 1881 he had achieved a successful version of this the first synthetic filament. The light bulb invented by Cruto lasted five hundred hours as opposed to the forty of Edison's original version. In 1882 Munich Electrical Exhibition in Bavaria, Germany Cruto's lamp was more efficient than the Edison's one and produced a better, white light. In 1893, Heinrich Göbel claimed he had designed the first incandescent light bulb in 1854, with a thin carbonized bamboo filament of high resistance, platinum lead-in wires in an all-glass envelope, and a high vacuum. Judges of four courts raised doubts about the alleged Göbel anticipation, but there was never a decision in a final hearing due to the expiration of Edison's patent. Research work published in 2007 concluded that the story of the Göbel lamps in the 1850s is fictitious. Commercialization Carbon filament and vacuum Joseph Swan (1828–1914) was a British physicist and chemist. In 1850, he began working with carbonized paper filaments in an evacuated glass bulb. By 1860, he was able to demonstrate a working device but the lack of a good vacuum and an adequate supply of electricity resulted in a short lifetime for the bulb and an inefficient source of light. By the mid-1870s better pumps had become available, and Swan returned to his experiments. With the help of Charles Stearn, an expert on vacuum pumps, in 1878, Swan developed a method of processing that avoided the early bulb blackening. This received a British Patent in 1880. On 18 December 1878, a lamp using a slender carbon rod was shown at a meeting of the Newcastle Chemical Society, and Swan gave a working demonstration at their meeting on 17 January 1879. It was also shown to 700 who attended a meeting of the Literary and Philosophical Society of Newcastle upon Tyne on 3 February 1879. These lamps used a carbon rod from an arc lamp rather than a slender filament. Thus they had low resistance and required very large conductors to supply the necessary current, so they were not commercially practical, although they did furnish a demonstration of the possibilities of incandescent lighting with relatively high vacuum, a carbon conductor, and platinum lead-in wires. This bulb lasted about 40 hours. Swan then turned his attention to producing a better carbon filament and the means of attaching its ends. He devised a method of treating cotton to produce 'parchmentised thread' in the early 1880s and obtained British Patent 4933 that same year. From this year he began installing light bulbs in homes and landmarks in England. His house, Underhill, Low Fell, Gateshead, was the first in the world to be lit by a lightbulb. In the early 1880s he had started his company. 
In 1881, the Savoy Theatre in the City of Westminster, London was lit by Swan incandescent lightbulbs, which was the first theatre, and the first public building in the world, to be lit entirely by electricity. The first street in the world to be lit by an incandescent lightbulb was Mosley Street, Newcastle upon Tyne, United Kingdom. It was lit by Joseph Swan's incandescent lamp on 3 February 1879. Thomas Edison began serious research into developing a practical incandescent lamp in 1878. Edison filed his first patent application for "Improvement in Electric Lights" on 14 October 1878. After many experiments, first with carbon in the early 1880s and then with platinum and other metals, in the end Edison returned to a carbon filament. The first successful test was on 22 October 1879, and lasted 13.5 hours. Edison continued to improve this design and by 4 November 1879, filed for a US patent for an electric lamp using "a carbon filament or strip coiled and connected ... to platina contact wires." Although the patent described several ways of creating the carbon filament including using "cotton and linen thread, wood splints, papers coiled in various ways," Edison and his team later discovered that a carbonized bamboo filament could last more than 1200 hours. In 1880, the Oregon Railroad and Navigation Company steamer, Columbia, became the first application for Edison's incandescent electric lamps (it was also the first ship to use a dynamo). Albon Man, a New York lawyer, started Electro-Dynamic Light Company in 1878 to exploit his patents and those of William Sawyer. Weeks later the United States Electric Lighting Company was organized. This company did not make their first commercial installation of incandescent lamps until the fall of 1880, at the Mercantile Safe Deposit Company in New York City, about six months after the Edison incandescent lamps had been installed on the Columbia. Hiram S. Maxim was the chief engineer at the US Electric Lighting Co. After the great success in the United States, the incandescent light bulb patented by Edison also began to gain widespread popularity in Europe as well; among other places, the first Edison light bulbs in the Nordic countries were installed at the weaving hall of the Finlayson's textile factory in Tampere, Finland in March 1882. Lewis Latimer, employed at the time by Edison, developed an improved method of heat-treating carbon filaments which reduced breakage and allowed them to be molded into novel shapes, such as the characteristic "M" shape of Maxim filaments. On 17 January 1882, Latimer received a patent for the "Process of Manufacturing Carbons", an improved method for the production of light bulb filaments, which was purchased by the United States Electric Light Company. Latimer patented other improvements such as a better way of attaching filaments to their wire supports. In Britain, the Edison and Swan companies merged into the Edison and Swan United Electric Company (later known as Ediswan, and ultimately incorporated into Thorn Lighting Ltd). Edison was initially against this combination, but Edison was eventually forced to cooperate and the merger was made. Eventually, Edison acquired all of Swan's interest in the company. Swan sold his US patent rights to the Brush Electric Company in June 1882. The United States Patent Office gave a ruling 8 October 1883, that Edison's patents were based on the prior art of William Sawyer and were invalid. Litigation continued for a number of years. 
Eventually on 6 October 1889, a judge ruled that Edison's electric light improvement claim for "a filament of carbon of high resistance" was valid. The main difficulty with evacuating the lamps was moisture remaining inside the bulb, which split into hydrogen and oxygen when the lamp was lit, the resulting oxygen attacking the filament. In the 1880s, phosphoric anhydride was used in combination with expensive mercury vacuum pumps. However, about 1893, the Italian inventor Arturo Malignani (1865–1939), who lacked these pumps, discovered that phosphorus vapours did the job of chemically binding the remaining amounts of water and oxygen. In 1896 he patented a process of introducing red phosphorus as the so-called getter inside the bulb, which made it possible to produce economical bulbs lasting 800 hours; his patent was acquired by Edison in 1898. In 1897, German physicist and chemist Walther Nernst developed the Nernst lamp, a form of incandescent lamp that used a ceramic glower and did not require enclosure in a vacuum or inert gas. Twice as efficient as carbon filament lamps, Nernst lamps were briefly popular until overtaken by lamps using metal filaments. Metal filament, inert gas US patent 575,002, dated 1 December 1897, to Alexander Lodyguine (Lodygin, Russia), describes a filament made of rare metals, among them tungsten. Lodygin invented a process where rare metals such as tungsten can be chemically treated and heat-vaporized onto an electrically heated thread-like wire (platinum, carbon, gold) acting as a temporary base or skeletal form. Lodygin later sold the patent rights to GE. In 1902, Siemens developed a tantalum lamp filament that was more efficient than even graphitized carbon filaments since it could operate at higher temperature. Since tantalum metal has a lower resistivity than carbon, the tantalum lamp filament was quite long and required multiple internal supports. The metal filament gradually shortened in use; the filaments were installed with large slack loops. Lamps used for several hundred hours became quite fragile. Metal filaments had the property of breaking and re-welding, though this would usually decrease resistance and shorten the life of the filament. General Electric bought the rights to use tantalum filaments and produced them in the US until 1913. From 1898 to around 1905, osmium was also used as a filament in lamps made by Carl Auer von Welsbach. The metal was so expensive that used lamps could be returned for partial credit. It could not be made for 110 V or 220 V so several lamps were wired in series for use on standard voltage circuits. These were primarily sold in Europe. Tungsten filament On 13 December 1904, Hungarian Sándor Just and Croatian Franjo Hanaman were granted a Hungarian patent (No. 34541) for a tungsten filament lamp that lasted longer and gave brighter light than the carbon filament. Tungsten filament lamps were first marketed by the Hungarian company Tungsram in 1904. This type is often called Tungsram-bulbs in many European countries. Filling a bulb with an inert gas such as argon or nitrogen slows down the evaporation of the tungsten filament compared to operating it in a vacuum. This allows for greater temperatures and therefore greater efficacy with less reduction in filament life. In 1906, William D. Coolidge developed a method of making "ductile tungsten" from sintered tungsten which could be made into filaments while working for General Electric Company. By 1911 General Electric had begun selling incandescent light bulbs with ductile tungsten wire. 
In 1913, Irving Langmuir found that filling a lamp with inert gas (nitrogen at first, and later argon) instead of a vacuum resulted in twice the luminous efficacy and reduced bulb blackening.. He patented his device on April 18, 1916. In 1917, Burnie Lee Benbow was granted a patent for the coiled coil filament, in which a coiled filament is then itself wrapped into a coil by use of a mandrel. In 1921, Junichi Miura created the first double-coil bulb using a coiled coil tungsten filament while working for Hakunetsusha (a predecessor of Toshiba). At the time, machinery to mass-produce coiled coil filaments did not exist. Hakunetsusha developed a method to mass-produce coiled coil filaments by 1936. Between 1924 and the outbreak of the Second World War, the Phoebus cartel attempted to fix prices and sales quotas for bulb manufacturers outside of North America. In 1925, Marvin Pipkin, an American chemist, patented a process for frosting the inside of lamp bulbs without weakening them. In 1947, he patented a process for coating the inside of lamps with silica. In 1930, Hungarian Imre Bródy filled lamps with krypton gas rather than argon, and designed a process to obtain krypton from air. Production of krypton filled lamps based on his invention started at Ajka in 1937, in a factory co-designed by Polányi and Hungarian-born physicist Egon Orowan. By 1964, improvements in efficiency and production of incandescent lamps had reduced the cost of providing a given quantity of light by a factor of thirty, compared with the cost at introduction of Edison's lighting system. Consumption of incandescent light bulbs grew rapidly in the US. In 1885, an estimated 300,000 general lighting service lamps were sold, all with carbon filaments. When tungsten filaments were introduced, about 50 million lamp sockets existed in the US. In 1914, 88.5 million lamps were used, (only 15% with carbon filaments), and by 1945, annual sales of lamps were 795 million (more than 5 lamps per person per year). Efficacy and efficiency Less than 5% of the power consumed by a typical incandescent light bulb is converted into visible light, with most of the rest being emitted as invisible infrared radiation. Light bulbs are rated by their luminous efficacy, which is the ratio of the amount of visible light emitted (luminous flux) to the electrical power consumed. Luminous efficacy is measured in lumens per watt (lm/W). The luminous efficiency of a source is defined as the ratio of its luminous efficacy to the maximum possible luminous efficacy, which is 683 lm/W. An ideal white light source could produce about 250 lumens per watt, corresponding to a luminous efficiency of 37%. For a given quantity of light, an incandescent light bulb consumes more power and emits more heat than most other types of electric light. In buildings where air conditioning is used, incandescent lamps' heat output increases load on the air conditioning system. While heat from lights will reduce the need to run a building's heating system, the latter can usually produce the same amount of heat at lower cost than incandescent lights. The chart below lists the luminous efficacy and efficiency for several types of incandescent bulb. A longer chart in luminous efficacy compares a broader array of light sources. Color rendering The spectrum of light produced by an incandescent lamp closely approximates that of a black body radiator at the same temperature. 
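Since the lamp's spectrum is close to that of a black body, the efficacy figures quoted above can be estimated from first principles. The Python sketch below integrates Planck's law against a common Gaussian approximation of the photopic luminosity function for an assumed filament temperature of 2,800 K; both the temperature and the approximation are illustrative choices, not figures from the article, and the result should come out in the mid-teens of lm/W, the same order as the 16 lm/W quoted above for a 120 V bulb.

```python
# Rough estimate of the luminous efficacy of black-body radiation at an assumed
# filament temperature, using Planck's law and a Gaussian approximation of the
# photopic luminosity function V(lambda). Illustrative only.
import numpy as np

h = 6.626e-34      # Planck constant, J s
c = 2.998e8        # speed of light, m/s
kB = 1.381e-23     # Boltzmann constant, J/K

def planck(lam, T):
    """Spectral radiance of a black body at wavelength lam (m) and temperature T (K)."""
    return (2 * h * c**2 / lam**5) / np.expm1(h * c / (lam * kB * T))

def v_approx(lam):
    """Gaussian approximation of the photopic luminosity function (lam in m)."""
    lam_um = lam * 1e6
    return 1.019 * np.exp(-285.4 * (lam_um - 0.559) ** 2)

T = 2800.0                                    # assumed filament temperature, K
lam = np.linspace(100e-9, 20e-6, 20000)       # 0.1 to 20 micrometres covers most of the emission
radiant = np.trapz(planck(lam, T), lam)                            # total radiated power
luminous = 683.0 * np.trapz(planck(lam, T) * v_approx(lam), lam)   # visible part, in lumen terms

print(f"luminous efficacy of the radiation = {luminous / radiant:.1f} lm/W")
```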
The basis for light sources used as the standard for color perception is a tungsten incandescent lamp operating at a defined temperature. Light sources such as fluorescent lamps, high-intensity discharge lamps and LED lamps have higher luminous efficiency. These devices produce light by luminescence. Their light has bands of characteristic wavelengths, without the "tail" of invisible infrared emissions, instead of the continuous spectrum produced by a thermal source. By careful selection of fluorescent phosphor coatings or filters which modify the spectral distribution, the spectrum emitted can be tuned to mimic the appearance of incandescent sources, or other different color temperatures of white light. When used for tasks sensitive to color, such as motion picture lighting, these sources may require particular techniques to duplicate the appearance of incandescent lighting. Metamerism describes the effect of different light spectrum distributions on the perception of color. Cost of lighting The initial cost of an incandescent bulb is small compared to the cost of the energy it uses over its lifetime. Incandescent bulbs have a shorter life than most other lighting, an important factor if replacement is inconvenient or expensive. Some types of lamp, including incandescent and fluorescent, emit less light as they age; this may be an inconvenience, or may reduce effective lifetime due to lamp replacement before total failure. A comparison of incandescent lamp operating cost with other light sources must include illumination requirements, cost of the lamp and labor cost to replace lamps (taking into account effective lamp lifetime), cost of electricity used, and the effect of lamp operation on heating and air conditioning systems. When used for lighting in houses and commercial buildings, the energy lost to heat can significantly increase the energy required by a building's air conditioning system. During the heating season heat produced by the bulbs is not wasted, although in most cases it is more cost effective to obtain heat from the heating system. Regardless, over the course of a year a more efficient lighting system saves energy in nearly all climates. Measures to ban use Since incandescent light bulbs use more energy than alternatives such as CFLs and LED lamps, many governments have introduced measures to ban their use, by setting minimum efficacy standards higher than can be achieved by incandescent lamps. Measures to ban light bulbs have been implemented in the European Union, the United States, Russia, Brazil, Argentina, Canada and Australia, among others. The European Commission has calculated that the ban benefits the economy and saves 40 TWh of electricity every year, with a corresponding reduction in CO2 emissions. Objections to banning the use of incandescent light bulbs include the higher initial cost of alternatives and the lower quality of light of fluorescent lamps. Some people have concerns about the health effects of fluorescent lamps. Efforts to improve efficacy Some research has been carried out to improve the efficacy of commercial incandescent lamps. In 2007, General Electric announced a high efficiency incandescent (HEI) lamp project, which they claimed would ultimately be as much as four times more efficient than current incandescents, although their initial production goal was to be approximately twice as efficient. The HEI program was terminated in 2008 due to slow progress. 
US Department of Energy research at Sandia National Laboratories initially indicated the potential for dramatically improved efficiency from a photonic lattice filament. However, later work indicated that the initially promising results were in error. Prompted by legislation in various countries mandating increased bulb efficiency, hybrid incandescent bulbs have been introduced by Philips. The Halogena Energy Saver incandescents can produce about 23 lm/W, about 30 percent more efficient than traditional incandescents, by using a reflective capsule to reflect formerly wasted infrared radiation back to the filament, from which some is re-emitted as visible light. This concept was pioneered by Duro-Test in 1980 with a commercial product that produced 29.8 lm/W. More advanced reflectors based on interference filters or photonic crystals can theoretically result in higher efficiency, up to a limit of about 270 lm/W (40% of the maximum efficacy possible). Laboratory proof-of-concept experiments have produced as much as 45 lm/W, approaching the efficacy of compact fluorescent bulbs. Construction Incandescent light bulbs consist of an air-tight glass enclosure (the envelope, or bulb) with a filament of tungsten wire inside the bulb, through which an electric current is passed. Contact wires and a base with two (or more) conductors provide electrical connections to the filament. Incandescent light bulbs usually contain a stem or glass mount anchored to the bulb's base that allows the electrical contacts to run through the envelope without air or gas leaks. Small wires embedded in the stem in turn support the filament and its lead wires. An electric current heats the filament to typically 2,000 to 3,300 K, well below tungsten's melting point of 3,695 K (3,422 °C). Filament temperatures depend on the filament type, shape, size, and amount of current drawn. The heated filament emits light that approximates a continuous spectrum. The useful part of the emitted energy is visible light, but most energy is given off as heat in the near-infrared wavelengths. Bulbs Most light bulbs have either clear or coated glass. Coated glass bulbs have kaolin clay blown in and electrostatically deposited on the interior of the bulb. The powder layer diffuses the light from the filament. Pigments may be added to the clay to adjust the color of the light emitted. Kaolin diffused bulbs are used extensively in interior lighting because of their comparatively gentle light. Other kinds of colored bulbs are also made, including the various colors used for "party bulbs", Christmas tree lights and other decorative lighting. These are created by coloring the glass with a dopant, which is often a metal like cobalt (blue) or chromium (green). Neodymium-containing glass is sometimes used to provide a more natural-appearing light. The glass bulb of a general service lamp can become quite hot in operation. Lamps intended for high power operation or used for heating purposes will have envelopes made of hard glass or fused quartz. If a light bulb envelope leaks, the hot tungsten filament reacts with air, yielding an aerosol of brown tungsten nitride, brown tungsten dioxide, violet-blue tungsten pentoxide, and yellow tungsten trioxide that then gets deposited on the nearby surfaces or the bulb interior. Gas fill Most modern bulbs are filled with an inert gas to reduce evaporation of the filament and prevent its oxidation. The gas is at a pressure somewhat below atmospheric. 
The gas reduces evaporation of the filament, but the fill must be chosen carefully to avoid introducing significant heat losses. For these properties, chemical inertness and high atomic or molecular weight are desirable. The presence of gas molecules knocks the liberated tungsten atoms back to the filament, reducing its evaporation and allowing it to be operated at higher temperature without reducing its life (or, for operating at the same temperature, prolongs the filament life). On the other hand, the presence of the gas leads to heat loss from the filament—and therefore efficiency loss due to reduced incandescence—by heat conduction and heat convection. Early lamps used only a vacuum to protect the filament from oxygen. The vacuum increases evaporation of the filament but eliminates two modes of heat loss. Some small modern lamps use vacuum as well. The most commonly used fills are:
- Vacuum, used in small lamps. Provides best thermal insulation of the filament but does not protect against its evaporation. Used also in larger lamps where the outer bulb surface temperature has to be limited.
- Argon (93%) and nitrogen (7%), where argon is used for its inertness, low thermal conductivity and low cost, and the nitrogen is added to increase the breakdown voltage and prevent arcing between parts of the filament.
- Nitrogen, used in some higher-power lamps, e.g. projection lamps, and where higher breakdown voltage is needed due to the proximity of filament parts or lead-in wires.
- Krypton, which is more advantageous than argon due to its higher atomic weight and lower thermal conductivity (which also allows use of smaller bulbs), but its use is hindered by much higher cost, confining it mostly to smaller-size bulbs.
- Krypton mixed with xenon, where xenon improves the gas properties further due to its higher atomic weight. Its use is however limited by its very high cost, and the improvements it brings are modest in comparison to that cost.
- Hydrogen, in special flashing lamps where rapid filament cooling is required; its high thermal conductivity is exploited here.
- Halogen, a small amount mixed with inert gas. This is used in halogen lamps, which are a distinct type of incandescent lamp.
The gas fill must be free of traces of water, which greatly accelerates bulb blackening (see below). The gas layer close to the filament (called the Langmuir layer) is stagnant, with heat transfer occurring only by conduction. Only at some distance does convection occur to carry heat to the bulb's envelope. The orientation of the filament influences efficiency. Gas flow parallel to the filament, e.g., a vertically oriented bulb with vertical (or axial) filament, reduces convective losses. The efficiency of the lamp increases with a larger filament diameter. Thin-filament, low-power bulbs benefit less from a fill gas, so are often only evacuated. Early light bulbs with carbon filaments also used carbon monoxide, nitrogen, or mercury vapor. However, carbon filaments operate at lower temperatures than tungsten ones, so the effect of the fill gas was not significant as the heat losses offset any benefits. Manufacturing Early bulbs were laboriously assembled by hand. After automatic machinery was developed, the cost of bulbs fell. Until 1910, when Libbey's Westlake machine went into production, bulbs were generally produced by a team of three workers (two gatherers and a master gaffer) blowing the bulbs into wooden or cast-iron molds, coated with a paste. 
Around 150 bulbs per hour were produced by the hand-blowing process in the 1880s at Corning Glass Works. The Westlake machine, developed by Libbey Glass, was based on an adaptation of the Owens-Libbey bottle-blowing machine. Corning Glass Works soon began developing competing automated bulb-blowing machines, the first of which to be used in production was the E-Machine. Ribbon machine Corning continued developing automated bulb-production machines, installing the Ribbon Machine in 1926 in its Wellsboro, Pennsylvania, factory. The Ribbon Machine surpassed any previous attempts to automate bulb production and was used to produce incandescent bulbs into the 21st century. The inventor, William Woods, along with his colleague at Corning Glass Works, David E. Gray, had created a machine that by 1939 was turning out 1,000 bulbs per minute. The Ribbon Machine works by passing a continuous ribbon of glass along a conveyor belt, heated in a furnace, and then blown by precisely aligned air nozzles through holes in the conveyor belt into molds. Thus the glass bulbs or envelopes are created. A typical machine of this sort can produce anywhere from 50,000 to 120,000 bulbs per hour, depending on the size of the bulb. By the 1970s, 15 ribbon machines installed in factories around the world produced the entire supply of incandescent bulbs. The filament and its supports are assembled on a glass stem, which is then fused to the bulb. The air is pumped out of the bulb, and the evacuation tube in the stem press is sealed by a flame. The bulb is then inserted into the lamp base, and the whole assembly tested. The 2016 closing of Osram-Sylvania's Wellsboro, Pennsylvania plant meant that one of the last remaining ribbon machines in the United States was shut down. Filament Carbon has the highest melting point of any element, and in carbon arc lamps it had been demonstrated to produce incandescence fairly close to that of sunlight. However, carbon has a tendency to sublimate before reaching its melting point depending on pressure, which led to rapid blackening of vacuumed bulbs. The first commercially successful light bulb filaments were made from carbonized paper or bamboo. Carbon filaments have a negative temperature coefficient of resistance—as they get hotter, their electrical resistance decreases. This made the lamp sensitive to fluctuations in the power supply, since a small increase of voltage would cause the filament to heat up, reducing its resistance and causing it to draw even more power and heat even further. Carbon filaments were "flashed" by heating in a hydrocarbon vapor (usually gasoline), to improve their strength and uniformity. Metallized or "graphitized" filaments were first heated to high temperature to transform them into graphite, which further strengthened and smoothed the filament. These filaments have a positive temperature coefficient, like a metallic conductor, which stabilized the lamps operating properties against minor variations in supply voltage. Metal filaments were tried in 1897 and started to displace carbon starting around 1904. Tungsten has the highest available melting point, but brittleness was a big obstacle. By 1910, a process was developed by William D. Coolidge at General Electric for production of a ductile form of tungsten. The process required pressing tungsten powder into bars, then several steps of sintering, swaging, and then wire drawing. 
It was found that very pure tungsten formed filaments that sagged in use, and that a very small "doping" treatment with potassium, silicon, and aluminium oxides at the level of a few hundred parts per million (so-called AKS tungsten) greatly improved the life and durability of the tungsten filaments. The predominant mechanism for failure in tungsten filaments even now is grain boundary sliding accommodated by diffusional creep. During operation, the tungsten wire is stressed under the load of its own weight and because of the diffusion that can occur at high temperatures, grains begin to rotate and slide. This stress, because of variations in the filament, causes the filament to sag nonuniformly, which ultimately introduces further torque on the filament. It is this sagging that inevitably results in a rupture of the filament, rendering the incandescent lightbulb useless. Coiled coil filament To improve the efficiency of the lamp, the filament usually consists of multiple coils of coiled fine wire, also known as a coiled coil. Light bulbs using coiled coil filaments are sometimes referred to as 'double-coil bulbs'. For a 60-watt 120-volt lamp, the uncoiled length of the tungsten filament is usually , and the filament diameter is . The advantage of the coiled coil is that evaporation of the tungsten filament is at the rate of a tungsten cylinder having a diameter equal to that of the coiled coil. The coiled-coil filament evaporates more slowly than a straight filament of the same surface area and light-emitting power. As a result, the filament can then run hotter, which results in a more efficient light source while lasting longer than a straight filament at the same temperature. Manufacturers designate different forms of lamp filament with an alphanumeric code. Electrical filaments are also used in hot cathodes of fluorescent lamps and vacuum tubes as a source of electrons or in vacuum tubes to heat an electron-emitting electrode. When used as a source of electrons, they may have a special coating that increases electron production. Reducing filament evaporation During ordinary operation, the tungsten of the filament evaporates; hotter, more-efficient filaments evaporate faster. Because of this, the lifetime of a filament lamp is a trade-off between efficiency and longevity. The trade-off is typically set to provide a lifetime of 1,000 to 2,000 hours for lamps used for general illumination. Theatrical, photographic, and projection lamps may have a useful life of only a few hours, trading life expectancy for high output in a compact form. Long-life general service lamps have lower efficiency, but prior to the development of compact fluorescent and LED lamps they were useful in applications where the bulb was difficult to change. Irving Langmuir found that an inert gas, instead of vacuum, would retard evaporation. General service incandescent light bulbs over about 25 watts in rating are now filled with a mixture of mostly argon and some nitrogen, or sometimes krypton. While inert gas reduces filament evaporation, it also conducts heat from the filament, thereby cooling the filament and reducing efficiency. At constant pressure and temperature, the thermal conductivity of a gas depends upon the molecular weight of the gas and the cross sectional area of the gas molecules. Higher molecular weight gases have lower thermal conductivity, because both the molecular weight and cross sectional area are higher. 
Xenon gas improves efficiency because of its high molecular weight, but is also more expensive, so its use is limited to smaller lamps. Filament notching is due to uneven evaporation of the filament. Small variations in resistivity along the filament cause "hot spots" to form at points of higher resistivity; a variation of diameter of only 1% will cause a 25% reduction in service life. Since filament resistance is highly temperature-dependent, spots with higher temperature will have higher resistance, causing them to dissipate more energy, making them hotter – a positive feedback loop. These hot spots evaporate faster than the rest of the filament, permanently increasing the resistance at that point. The process ends in the familiar tiny gap in an otherwise healthy-looking filament. Lamps operated on direct current develop random stairstep irregularities on the filament surface which may cut lifespan in half compared to AC operation; different alloys of tungsten and rhenium can be used to counteract the effect. Since a filament breaking in a gas-filled bulb can form an electric arc, which may spread between the terminals and draw very heavy current, intentionally thin lead-in wires or more elaborate protection devices are therefore often used as fuses built into the light bulb. More nitrogen is used in higher-voltage lamps to reduce the possibility of arcing. Bulb blackening In a conventional lamp, the evaporated tungsten eventually condenses on the inner surface of the glass envelope, darkening it. For bulbs that contain a vacuum, the darkening is uniform across the entire surface of the envelope. When a filling of inert gas is used, the evaporated tungsten is carried in the thermal convection currents of the gas, and is deposited preferentially on the uppermost part of the envelope, blackening just that portion of the envelope. An incandescent lamp that gives 93% or less of its initial light output at 75% of its rated life is regarded as unsatisfactory, when tested according to IEC Publication 60064. Light loss is due to filament evaporation and bulb blackening. Study of the problem of bulb blackening led to the discovery of thermionic emission, the invention of the vacuum tube, and evaporation deposition used to make mirrors and other optical coatings. A very small amount of water vapor inside a light bulb can significantly increase lamp darkening. Water vapor dissociates into hydrogen and oxygen at the hot filament. The oxygen attacks the tungsten metal, and the resulting tungsten oxide particles travel to cooler parts of the lamp. Hydrogen from water vapor reduces the oxide, reforming water vapor and continuing this water cycle. The equivalent of a drop of water distributed over 500,000 lamps will significantly increase darkening. Small amounts of substances such as zirconium are placed within the lamp as a getter to react with any oxygen that may bake out of the lamp components during operation. Some old, high-powered lamps used in theater, projection, searchlight, and lighthouse service with heavy, sturdy filaments contained loose tungsten powder within the envelope. From time to time, the operator would remove the bulb and shake it, allowing the tungsten powder to scrub off most of the tungsten that had condensed on the interior of the envelope, removing the blackening and brightening the lamp again. 
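The hot-spot feedback behind the filament notching described above can be given a rough sense of scale with a simple radiative-balance estimate. The sketch below (Python) is illustrative only: it assumes the segment loses heat purely by radiation from its surface and treats resistivity as constant with temperature (so it understates the runaway, since hotter tungsten is also more resistive); the 2700 K operating temperature is an assumed round figure, not taken from this article.

```python
# Rough estimate of how much hotter a locally thinned filament segment runs.
# Assumptions (not from the article): heat is lost only by radiation from the
# segment's surface, and resistivity is held constant with temperature.
#
# For a segment of diameter d carrying a fixed current I:
#   power dissipated per unit length  ~ I**2 * rho / area ~ 1/d**2
#   radiating surface per unit length ~ d
#   balance: T**4 ~ (1/d**2) / d = d**-3   =>   T ~ d**(-3/4)

def relative_temperature(d_rel: float) -> float:
    """Equilibrium temperature of a thinned segment relative to a normal one."""
    return d_rel ** -0.75

NOMINAL_T = 2700.0  # kelvin; an assumed typical operating temperature
for thinning in (0.01, 0.02, 0.05):  # 1%, 2%, 5% reduction in diameter
    t_rel = relative_temperature(1.0 - thinning)
    print(f"{thinning:.0%} thinner spot runs ~{t_rel - 1:.2%} hotter "
          f"(about {NOMINAL_T * (t_rel - 1):.0f} K at {NOMINAL_T:.0f} K)")
```

Even a fraction of a percent of extra temperature matters, because the evaporation rate of tungsten rises very steeply with temperature, so the thin spot keeps thinning until it fails.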
Halogen lamps The halogen lamp reduces uneven evaporation of the filament and eliminates darkening of the envelope by filling the lamp with a halogen gas at low pressure, along with an inert gas. The halogen cycle increases the lifetime of the bulb and prevents its darkening by redepositing tungsten from the inside of the bulb back onto the filament. The halogen lamp can operate its filament at a higher temperature than a standard gas filled lamp of similar power without loss of operating life. Such bulbs are much smaller than normal incandescent bulbs, and are widely used where intense illumination is needed in a limited space. Fiber-optic lamps for optical microscopy is one typical application. Incandescent arc lamps A variation of the incandescent lamp did not use a hot wire filament, but instead used an arc struck on a spherical bead electrode to produce heat. The electrode then became incandescent, with the arc contributing little to the light produced. Such lamps were used for projection or illumination for scientific instruments such as microscopes. These arc lamps ran on relatively low voltages and incorporated tungsten filaments to start ionization within the envelope. They provided the intense concentrated light of an arc lamp but were easier to operate. Developed around 1915, these lamps were displaced by mercury and xenon arc lamps. Electrical characteristics Power Incandescent lamps are nearly pure resistive loads with a power factor of 1. Unlike discharge lamps or LED lamps, the power consumed is equal to the apparent power in the circuit. Incandescent light bulbs are usually marketed according to the electrical power consumed. This depends mainly on the operating resistance of the filament. For two bulbs of the same voltage, and type, the higher-powered bulb gives more light. The table shows the approximate typical output, in lumens, of standard 120 volt incandescent light bulbs at various powers. Light output of similar 230 V bulbs is slightly less. The lower current (higher voltage) filament is thinner and has to be operated at a slightly lower temperature for the same life expectancy, which reduces energy efficiency. The lumen values for "soft white" bulbs will generally be slightly lower than for clear bulbs at the same power. Current and resistance The resistance of the filament is temperature dependent. The cold resistance of tungsten-filament lamps is about the resistance when operating. For example, a 100-watt, 120-volt lamp has a resistance of 144 ohms when lit, but the cold resistance is much lower (about 9.5 ohms). Since incandescent lamps are resistive loads, simple phase-control TRIAC dimmers can be used to control brightness. Electrical contacts may carry a "T" rating symbol indicating that they are designed to control circuits with the high inrush current characteristic of tungsten lamps. For a 100-watt, 120-volt general-service lamp, the current stabilizes in about 0.10 seconds, and the lamp reaches 90% of its full brightness after about 0.13 seconds. Physical characteristics Safety The filament in a tungsten light bulb is not easy to break when the bulb is cold, but filaments are more vulnerable when they are hot because the incandescent metal is less rigid. An impact on the outside of the bulb may cause the filament to break or experience a surge in electric current that causes part of it to melt or vaporize. 
In most modern incandescent bulbs, part of the wire inside the bulb acts like a fuse: if a broken filament produces an electrical short inside the bulb, the fusible section of wire will melt and cut the current off to prevent damage to the supply lines. A hot glass bulb may fracture on contact with cold objects. When the glass envelope breaks, the bulb implodes, exposing the filament to ambient air. The air then usually destroys the hot filament through oxidation. Bulb shapes Bulb shape and size designations are given in national standards. Some designations are one or more letters followed by one or more numbers, e.g. A55 or PAR38, where the letters identify the shape and the numbers some characteristic size. National standards such as ANSI C79.1-2002, IS 14897:2000 and JIS C 7710:1988 cover a common terminology for bulb shapes. Common shape codes General Service/General Lighting Service (GLS) Light emitted in (nearly) all directions. Available either clear or frosted. Types: General (A), elliptical (E), mushroom (M), sign (S), tubular (T) 120 V sizes: A17, 19 and 21 230 V sizes: A55 and 60 High Wattage General Service Lamps greater than 200 watts. Types: Pear-shaped (PS) Decorative lamps used in chandeliers, etc. Smaller candle-sized bulbs may use a smaller socket. Types: candle (B), twisted candle, bent-tip candle (CA & BA), flame (F), globe (G), lantern chimney (H), fancy round (P) 230 V sizes: P45, G95 Reflector (R) Reflective coating inside the bulb directs light forward. Flood types (FL) spread light. Spot types (SP) concentrate the light. Reflector (R) bulbs put approximately double the amount of light (foot-candles) on the front central area as General Service (A) of same wattage. Types: Standard reflector (R), bulged reflector (BR), elliptical reflector (ER), crown-silvered 120 V sizes: R16, 20, 25 and 30 230 V sizes: R50, 63, 80 and 95 Parabolic aluminized reflector (PAR) Parabolic aluminized reflector (PAR) bulbs control light more precisely. They produce about four times the concentrated light intensity of general service (A), and are used in recessed and track lighting. Weatherproof casings are available for outdoor spot and flood fixtures. 120 V sizes: PAR 16, 20, 30, 38, 56 and 64 230 V sizes: PAR 16, 20, 30, 38, 56 and 64 Available in numerous spot and flood beam spreads. Like all light bulbs, the number represents the diameter of the bulb in of an inch. Therefore, a PAR 16 is in diameter, a PAR 20 is in diameter, PAR 30 is and a PAR 38 is in diameter. Multifaceted reflector (MR) Multifaceted reflector bulbs are usually smaller in size and run at a lower voltage, often 12 V. HIR/IRC "HIR" is a GE designation for a lamp with an infrared reflective coating. Since less heat escapes, the filament burns hotter and more efficiently. The Osram designation for a similar coating is "IRC". Lamp bases Large lamps may have a screw base or a bayonet base, with one or more contacts on the base. The shell may serve as an electrical contact or only as a mechanical support. Bayonet base lamps are frequently used in automotive lamps to resist loosening by vibration. Some tubular lamps have an electrical contact at either end. Miniature lamps may have a wedge base and wire contacts, and some automotive and special purpose lamps have screw terminals for connection to wires. Very small lamps may have the filament support wires extended through the base of the lamp for connections. A bipin base is often used for halogen or reflector lamps. 
In the late 19th century, manufacturers introduced a multitude of incompatible lamp bases. General Electric's "Mazda" standard base sizes were soon adopted across the US. Lamp bases may be secured to the bulb with a cement, or by mechanical crimping to indentations molded into the glass bulb. Lamps intended for use in optical systems have bases with alignment features so that the filament is positioned accurately within the optical system. A screw-base lamp may have a random orientation of the filament when the lamp is installed in the socket. Contacts in the lightbulb socket allow the electric current to pass through the base to the filament. The socket provides electrical connections and mechanical support, and allows changing the lamp when it burns out. Light output and lifetime Incandescent lamps are very sensitive to changes in the supply voltage. These characteristics are of great practical and economic importance. For a supply voltage V near the rated voltage of the lamp: Light output is approximately proportional to V 3.4 Power consumption is approximately proportional to V 1.6 Lifetime is approximately proportional to V −16 Color temperature is approximately proportional to V 0.42 A 5% reduction in voltage will double the life of the bulb, but reduce its light output by about 16%. Long-life bulbs take advantage of this trade-off in applications such as traffic signal lamps. Since electric energy they use costs more than the cost of the bulb, general service lamps emphasize efficiency over long operating life. The objective is to minimize the cost of light, not the cost of lamps. Early bulbs had a life of up to 2500 hours, but in 1924 the Phoebus cartel agreed to limit life to 1000 hours. When this was exposed in 1953, General Electric and other leading American manufacturers were banned from limiting the life. The relationships above are valid for only a few percent change of voltage around standard rated conditions, but they indicate that a lamp operated at low voltage could last much longer than at rated voltage, albeit with greatly reduced light output. The "Centennial Light" is a light bulb that is accepted by the Guinness Book of World Records as having been burning almost continuously at a fire station in Livermore, California, since 1901. However, the bulb emits the equivalent light of a four watt bulb. A similar story can be told of a 40-watt bulb in Texas that has been illuminated since 21 September 1908. It once resided in an opera house where notable celebrities stopped to take in its glow, and was moved to an area museum in 1977. Photoflood lamps used for photographic lighting favor light output over life, with some lasting only two hours. The upper temperature limit for the filament is the melting point of the metal. Tungsten is the metal with the highest melting point, . A 50-hour-life projection bulb, for instance, is designed to operate only below that melting point. Such a lamp may achieve up to 22 lumens per watt, compared with 17.5 for a 750-hour general service lamp. Lamps of the same power rating but designed for different voltages have different luminous efficacy. For example, a 100-watt, 1000 hour, 120-volt lamp will produce about 17.1 lumens per watt. A similar lamp designed for 230 V would produce only around 12.8 lumens per watt, and one designed for 30 volts (train lighting) would produce as much as 19.8 lumens per watt. Lower voltage lamps have a thicker filament, for the same power rating. 
They can run hotter for the same lifetime before the filament evaporates. The wires used to support the filament make it mechanically stronger, but remove heat, creating another tradeoff between efficiency and long life. Many general-service 120-volt lamps use no additional support wires, but lamps designed for "rough service" or "vibration service" may have as many as five. Low-voltage lamps have filaments made of heavier wire and do not require additional support wires. Very low voltages are inefficient since the lead wires would conduct too much heat away from the filament, so the practical lower limit for incandescent lamps is 1.5 volts. Very long filaments for high voltages are fragile, and lamp bases become more difficult to insulate, so lamps for illumination are not made with rated voltages over 300 volts. Some infrared heating elements are made for higher voltages, but these use tubular bulbs with widely separated terminals.
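The voltage-scaling exponents and the hot/cold resistance figures quoted above lend themselves to a quick numerical check. The sketch below (Python) simply evaluates those quoted numbers for a 5% undervoltage and for the 100-watt, 120-volt example; nothing beyond the article's figures is assumed.

```python
# Evaluate the quoted voltage-scaling relations for a 5% reduction in supply
# voltage, then the inrush current implied by the quoted cold (9.5 ohm) and
# operating (144 ohm) resistance of a 100 W, 120 V lamp.

V_REL = 0.95  # supply voltage relative to rated voltage (a 5% reduction)
print(f"light output      : {V_REL ** 3.4:.2f} x  (about 16% less)")
print(f"power consumption : {V_REL ** 1.6:.2f} x")
print(f"lifetime          : {V_REL ** -16:.1f} x  (roughly doubled)")
print(f"color temperature : {V_REL ** 0.42:.3f} x")

V, R_COLD, R_HOT = 120.0, 9.5, 144.0
print(f"inrush current    : {V / R_COLD:.1f} A cold vs "
      f"{V / R_HOT:.2f} A once hot ({R_HOT / R_COLD:.0f}:1 ratio)")
```

The output reproduces the statements above: a 5% undervoltage roughly doubles lamp life while cutting light output by about 16%, and the cold filament initially draws roughly fifteen times its steady-state current.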
Technology
Electricity generation and distribution
null
47200
https://en.wikipedia.org/wiki/4%20Vesta
4 Vesta
Vesta (minor-planet designation: 4 Vesta) is one of the largest objects in the asteroid belt, with a mean diameter of . It was discovered by the German astronomer Heinrich Wilhelm Matthias Olbers on 29 March 1807 and is named after Vesta, the virgin goddess of home and hearth from Roman mythology. Vesta is thought to be the second-largest asteroid, both by mass and by volume, after the dwarf planet Ceres. Measurements give it a nominal volume only slightly larger than that of Pallas (about 5% greater), but it is 25% to 30% more massive. It constitutes an estimated 9% of the mass of the asteroid belt. Vesta is the only known remaining rocky protoplanet (with a differentiated interior) of the kind that formed the terrestrial planets. Numerous fragments of Vesta were ejected by collisions one and two billion years ago that left two enormous craters occupying much of Vesta's southern hemisphere. Debris from these events has fallen to Earth as howardite–eucrite–diogenite (HED) meteorites, which have been a rich source of information about Vesta. Vesta is the brightest asteroid visible from Earth. It is regularly as bright as magnitude 5.1, at which times it is faintly visible to the naked eye. Its maximum distance from the Sun is slightly greater than the minimum distance of Ceres from the Sun, although its orbit lies entirely within that of Ceres. NASA's Dawn spacecraft entered orbit around Vesta on 16 July 2011 for a one-year exploration and left the orbit of Vesta on 5 September 2012 en route to its final destination, Ceres. Researchers continue to examine data collected by Dawn for additional insights into the formation and history of Vesta. History Discovery Heinrich Olbers discovered Pallas in 1802, the year after the discovery of Ceres. He proposed that the two objects were the remnants of a destroyed planet. He sent a letter with his proposal to the British astronomer William Herschel, suggesting that a search near the locations where the orbits of Ceres and Pallas intersected might reveal more fragments. These orbital intersections were located in the constellations of Cetus and Virgo. Olbers commenced his search in 1802, and on 29 March 1807 he discovered Vesta in the constellation Virgo—a coincidence, because Ceres, Pallas, and Vesta are not fragments of a larger body. Because the asteroid Juno had been discovered in 1804, this made Vesta the fourth object to be identified in the region that is now known as the asteroid belt. The discovery was announced in a letter addressed to German astronomer Johann H. Schröter dated 31 March. Because Olbers already had credit for discovering a planet (Pallas; at the time, the asteroids were considered to be planets), he gave the honor of naming his new discovery to German mathematician Carl Friedrich Gauss, whose orbital calculations had enabled astronomers to confirm the existence of Ceres, the first asteroid, and who had computed the orbit of the new planet in the remarkably short time of 10 hours. Gauss decided on the Roman virgin goddess of home and hearth, Vesta. Name and symbol Vesta was the fourth asteroid to be discovered, hence the number 4 in its formal designation. The name Vesta, or national variants thereof, is in international use with two exceptions: Greece and China. In Greek, the name adopted was the Hellenic equivalent of Vesta, Hestia in English, that name is used for (Greeks use the name "Hestia" for both, with the minor-planet numbers used for disambiguation). 
In Chinese, Vesta is called the 'hearth-god(dess) star', , naming the asteroid for Vesta's role, similar to the Chinese names of Uranus, Neptune, and Pluto. Upon its discovery, Vesta was, like Ceres, Pallas, and Juno before it, classified as a planet and given a planetary symbol. The symbol represented the altar of Vesta with its sacred fire and was designed by Gauss. In Gauss's conception, now obsolete, this was drawn . His form is in the pipeline for Unicode 17.0 as U+1F777 . The asteroid symbols were gradually retired from astronomical use after 1852, but the symbols for the first four asteroids were resurrected for astrology in the 1970s. The abbreviated modern astrological variant of the Vesta symbol is . After the discovery of Vesta, no further objects were discovered for 38 years, and during this time the Solar System was thought to have eleven planets. However, in 1845, new asteroids started being discovered at a rapid pace, and by 1851 there were fifteen, each with its own symbol, in addition to the eight major planets (Neptune had been discovered in 1846). It soon became clear that it would be impractical to continue inventing new planetary symbols indefinitely, and some of the existing ones proved difficult to draw quickly. That year, the problem was addressed by Benjamin Apthorp Gould, who suggested numbering asteroids in their order of discovery, and placing this number in a disk (circle) as the generic symbol of an asteroid. Thus, the fourth asteroid, Vesta, acquired the generic symbol . This was soon coupled with the name into an official number–name designation, as the number of minor planets increased. By 1858, the circle had been simplified to parentheses, which were easier to typeset. Other punctuation, such as and was also briefly used, but had more or less completely died out by 1949. Early measurements Photometric observations of Vesta were made at the Harvard College Observatory in 1880–1882 and at the Observatoire de Toulouse in 1909. These and other observations allowed the rotation rate of Vesta to be determined by the 1950s. However, the early estimates of the rotation rate came into question because the light curve included variations in both shape and albedo. Early estimates of the diameter of Vesta ranged from in 1825, to . E.C. Pickering produced an estimated diameter of in 1879, which is close to the modern value for the mean diameter, but the subsequent estimates ranged from a low of up to a high of during the next century. The measured estimates were based on photometry. In 1989, speckle interferometry was used to measure a dimension that varied between during the rotational period. In 1991, an occultation of the star SAO 93228 by Vesta was observed from multiple locations in the eastern United States and Canada. Based on observations from 14 different sites, the best fit to the data was an elliptical profile with dimensions of about . Dawn confirmed this measurement. These measurements will help determine the thermal history, size of the core, role of water in asteroid evolution and what meteorites found on Earth come from these bodies, with the ultimate goal of understanding the conditions and processes present at the solar system's earliest epoch and the role of water content and size in planetary evolution. Vesta became the first asteroid to have its mass determined. Every 18 years, the asteroid 197 Arete approaches within of Vesta. In 1966, based upon observations of Vesta's gravitational perturbations of Arete, Hans G. 
Hertz estimated the mass of Vesta at (solar masses). More refined estimates followed, and in 2001 the perturbations of 17 Thetis were used to calculate the mass of Vesta to be . Dawn determined it to be . Orbit Vesta orbits the Sun between Mars and Jupiter, within the asteroid belt, with a period of 3.6 Earth years, specifically in the inner asteroid belt, interior to the Kirkwood gap at 2.50 AU. Its orbit is moderately inclined (i = 7.1°, compared to 7° for Mercury and 17° for Pluto) and moderately eccentric (e = 0.09, about the same as for Mars). True orbital resonances between asteroids are considered unlikely. Because of their small masses relative to their large separations, such relationships should be very rare. Nevertheless, Vesta is able to capture other asteroids into temporary 1:1 resonant orbital relationships (for periods up to 2 million years or more) and about forty such objects have been identified. Decameter-sized objects detected in the vicinity of Vesta by Dawn may be such quasi-satellites rather than proper satellites. Rotation Vesta's rotation is relatively fast for an asteroid (5.342 h) and prograde, with the north pole pointing in the direction of right ascension 20 h 32 min, declination +48° (in the constellation Cygnus) with an uncertainty of about 10°. This gives an axial tilt of 29°. Coordinate systems Two longitudinal coordinate systems are used for Vesta, with prime meridians separated by 150°. The IAU established a coordinate system in 1997 based on Hubble photos, with the prime meridian running through the center of Olbers Regio, a dark feature 200 km across. When Dawn arrived at Vesta, mission scientists found that the location of the pole assumed by the IAU was off by 10°, so that the IAU coordinate system drifted across the surface of Vesta at 0.06° per year, and also that Olbers Regio was not discernible from up close, and so was not adequate to define the prime meridian with the precision they needed. They corrected the pole, but also established a new prime meridian 4° from the center of Claudia, a sharply defined crater 700 meters across, which they say results in a more logical set of mapping quadrangles. All NASA publications, including images and maps of Vesta, use the Claudian meridian, which is unacceptable to the IAU. The IAU Working Group on Cartographic Coordinates and Rotational Elements recommended a coordinate system, correcting the pole but rotating the Claudian longitude by 150° to coincide with Olbers Regio. It was accepted by the IAU, although it disrupts the maps prepared by the Dawn team, which had been positioned so they would not bisect any major surface features. Physical characteristics Vesta is the second most massive body in the asteroid belt, although it is only 28% as massive as Ceres, the most massive body. Vesta is however the most massive body that formed in the asteroid belt, as Ceres is believed to have formed between Jupiter and Saturn. Vesta's density is lower than those of the four terrestrial planets but is higher than those of most asteroids, as well as all of the moons in the Solar System except Io. Vesta's surface area is about the same as the land area of Pakistan, Venezuela, Tanzania, or Nigeria; slightly under . It has a differentiated interior. Vesta is only slightly larger () than 2 Pallas () in mean diameter, but is about 25% more massive. 
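The size and mass comparison above implies a correspondingly higher bulk density for Vesta, which follows directly from the quoted ratios. The short sketch below (Python) only rearranges figures given earlier: a volume about 5% greater than that of Pallas, and a mass 25% to 30% greater.

```python
# Bulk-density ratio implied by the quoted comparisons between Vesta and
# Pallas: density ratio = (mass ratio) / (volume ratio).

VOLUME_RATIO = 1.05                  # Vesta / Pallas, as quoted above
for mass_ratio in (1.25, 1.30):      # the quoted 25%-30% range
    print(f"mass ratio {mass_ratio:.2f} -> Vesta is about "
          f"{mass_ratio / VOLUME_RATIO:.2f} x as dense as Pallas")
```

The quoted figures therefore imply Vesta is roughly 20% denser than Pallas, in line with its differentiated interior.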
Vesta's shape is close to a gravitationally relaxed oblate spheroid, but the large concavity and protrusion at the southern pole (see 'Surface features' below) combined with a mass less than precluded Vesta from automatically being considered a dwarf planet under International Astronomical Union (IAU) Resolution XXVI 5. A 2012 analysis of Vesta's shape and gravity field using data gathered by the Dawn spacecraft has shown that Vesta is currently not in hydrostatic equilibrium. Temperatures on the surface have been estimated to lie between about with the Sun overhead, dropping to about at the winter pole. Typical daytime and nighttime temperatures are and , respectively. This estimate is for 6 May 1996, very close to perihelion, although details vary somewhat with the seasons. Surface features Before the arrival of the Dawn spacecraft, some Vestan surface features had already been resolved using the Hubble Space Telescope and ground-based telescopes (e.g., the Keck Observatory). The arrival of Dawn in July 2011 revealed the complex surface of Vesta in detail. Rheasilvia and Veneneia The most prominent of these surface features are two enormous impact basins, the -wide Rheasilvia, centered near the south pole, and the wide Veneneia. The Rheasilvia impact basin is younger and overlies the Veneneia. The Dawn science team named the younger, more prominent crater Rheasilvia, after the mother of Romulus and Remus and a mythical vestal virgin. Its width is 95% of the mean diameter of Vesta. The crater is about deep. A central peak rises above the lowest measured part of the crater floor, and the highest measured part of the crater rim is above the crater floor low point. It is estimated that the impact responsible excavated about 1% of the volume of Vesta, and it is likely that the Vesta family and V-type asteroids are the products of this collision. If this is the case, then the fact that fragments have survived bombardment until the present indicates that the crater is at most only about 1 billion years old. It would also be the site of origin of the HED meteorites. All the known V-type asteroids taken together account for only about 6% of the ejected volume, with the rest presumably either in small fragments, ejected by approaching the 3:1 Kirkwood gap, or perturbed away by the Yarkovsky effect or radiation pressure. Spectroscopic analyses of the Hubble images have shown that this crater has penetrated deep through several distinct layers of the crust, and possibly into the mantle, as indicated by spectral signatures of olivine. The large peak at the center of Rheasilvia is high and wide, and is possibly a result of a planetary-scale impact. Other craters Several old, degraded craters approach Rheasilvia and Veneneia in size, although none are quite so large. They include Feralia Planitia, which is across. More-recent, sharper craters range up to Varronilla and Postumia. Dust fills some craters, creating so-called dust ponds, a phenomenon in which pockets of dust collect on bodies that lack a significant atmosphere. These are smooth deposits of dust that accumulate in depressions such as craters, contrasting with the rocky terrain around them. On Vesta, both type 1 (formed from impact melt) and type 2 (electrostatically formed) dust ponds have been identified in the equatorial region (0°–30° N/S); ten craters have been identified with such formations.
"Snowman craters" The "snowman craters" are a group of three adjacent craters in Vesta's northern hemisphere. Their official names, from largest to smallest (west to east), are Marcia, Calpurnia, and Minucia. Marcia is the youngest and cross-cuts Calpurnia. Minucia is the oldest. Troughs The majority of the equatorial region of Vesta is sculpted by a series of parallel troughs designated Divalia Fossae; its longest trough is wide and long. Despite the fact that Vesta is a one-seventh the size of the Moon, Divalia Fossae dwarfs the Grand Canyon. A second series, inclined to the equator, is found further north. This northern trough system is named Saturnalia Fossae, with its largest trough being roughly 40 km wide and over 370 km long. These troughs are thought to be large-scale graben resulting from the impacts that created Rheasilvia and Veneneia craters, respectively. They are some of the longest chasms in the Solar System, nearly as long as Ithaca Chasma on Tethys. The troughs may be graben that formed after another asteroid collided with Vesta, a process that can happen only in a body that, like Vesta, is differentiated. Vesta's differentiation is one of the reasons why scientists consider it a protoplanet. Alternatively, it is proposed that the troughs may be radial sculptures created by secondary cratering from Rheasilvia. Surface composition Compositional information from the visible and infrared spectrometer (VIR), gamma-ray and neutron detector (GRaND), and framing camera (FC), all indicate that the majority of the surface composition of Vesta is consistent with the composition of the howardite, eucrite, and diogenite meteorites. The Rheasilvia region is richest in diogenite, consistent with the Rheasilvia-forming impact excavating material from deeper within Vesta. The presence of olivine within the Rheasilvia region would also be consistent with excavation of mantle material. However, olivine has only been detected in localized regions of the northern hemisphere, not within Rheasilvia. The origin of this olivine is currently unclear. Though olivine was expected by astronomers to have originated from Vesta's mantle prior to the arrival of the Dawn orbiter, the lack of olivine within the Rheasilvia and Veneneia impact basins complicates this view. Both impact basins excavated Vestian material down to 60–100 km, far deeper than the expected thickness of ~30–40 km for Vesta's crust. Vesta's crust may be far thicker than expected or the violent impact events that created Rheasilvia and Veneneia may have mixed material enough to obscure olivine from observations. Alternatively, Dawn observations of olivine could instead be due to delivery by olivine-rich impactors, unrelated to Vesta's internal structure. Features associated with volatiles Pitted terrain has been observed in four craters on Vesta: Marcia, Cornelia, Numisia and Licinia. The formation of the pitted terrain is proposed to be degassing of impact-heated volatile-bearing material. Along with the pitted terrain, curvilinear gullies are found in Marcia and Cornelia craters. The curvilinear gullies end in lobate deposits, which are sometimes covered by pitted terrain, and are proposed to form by the transient flow of liquid water after buried deposits of ice were melted by the heat of the impacts. Hydrated materials have also been detected, many of which are associated with areas of dark material. 
Consequently, dark material is thought to be largely composed of carbonaceous chondrite, which was deposited on the surface by impacts. Carbonaceous chondrites are comparatively rich in mineralogically bound OH. Geology A large collection of potential samples from Vesta is accessible to scientists, in the form of over 1200 HED meteorites (Vestan achondrites), giving insight into Vesta's geologic history and structure. NASA Infrared Telescope Facility (NASA IRTF) studies of asteroid suggest that it originated from deeper within Vesta than the HED meteorites. Vesta is thought to consist of a metallic iron–nickel core 214–226 km in diameter, an overlying rocky olivine mantle, with a surface crust. From the first appearance of calcium–aluminium-rich inclusions (the first solid matter in the Solar System, forming about 4.567 billion years ago), a likely time line is as follows: Vesta is the only known intact asteroid that has been resurfaced in this manner. Because of this, some scientists refer to Vesta as a protoplanet. However, the presence of iron meteorites and achondritic meteorite classes without identified parent bodies indicates that there once were other differentiated planetesimals with igneous histories, which have since been shattered by impacts. On the basis of the sizes of V-type asteroids (thought to be pieces of Vesta's crust ejected during large impacts), and the depth of Rheasilvia crater (see below), the crust is thought to be roughly thick. Findings from the Dawn spacecraft have found evidence that the troughs that wrap around Vesta could be graben formed by impact-induced faulting (see Troughs section above), meaning that Vesta has more complex geology than other asteroids. Vesta's differentiated interior implies that it was in hydrostatic equilibrium and thus a dwarf planet in the past, but it is not today. The impacts that created the Rheasilvia and Veneneia craters occurred when Vesta was no longer warm and plastic enough to return to an equilibrium shape, distorting its once rounded shape and prohibiting it from being classified as a dwarf planet today. Regolith Vesta's surface is covered by regolith distinct from that found on the Moon or asteroids such as Itokawa. This is because space weathering acts differently. Vesta's surface shows no significant trace of nanophase iron because the impact speeds on Vesta are too low to make rock melting and vaporization an appreciable process. Instead, regolith evolution is dominated by brecciation and subsequent mixing of bright and dark components. The dark component is probably due to the infall of carbonaceous material, whereas the bright component is the original Vesta basaltic soil. Fragments Some small Solar System bodies are suspected to be fragments of Vesta caused by impacts. The Vestian asteroids and HED meteorites are examples. The V-type asteroid 1929 Kollaa has been determined to have a composition akin to cumulate eucrite meteorites, indicating its origin deep within Vesta's crust. Vesta is currently one of only eight identified Solar System bodies of which we have physical samples, coming from a number of meteorites suspected to be Vestan fragments. It is estimated that 1 out of 16 meteorites originated from Vesta. The other identified Solar System samples are from Earth itself, meteorites from Mars, meteorites from the Moon, and samples returned from the Moon, the comet Wild 2, and the asteroids 25143 Itokawa, 162173 Ryugu, and 101955 Bennu. 
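To put the quoted 214–226 km core diameter in perspective, its share of Vesta's total volume can be estimated. The sketch below (Python) assumes a mean diameter of roughly 525 km for Vesta; that figure is not stated in this section and is used only for illustration.

```python
# Approximate fraction of Vesta's volume occupied by the proposed
# iron-nickel core. The core diameters are from the article; Vesta's
# ~525 km mean diameter is an assumed outside figure.

MEAN_DIAMETER_KM = 525.0  # assumption, not quoted in this section
for core_km in (214.0, 226.0):
    fraction = (core_km / MEAN_DIAMETER_KM) ** 3
    print(f"core diameter {core_km:.0f} km -> about {fraction:.1%} of Vesta's volume")
```

Even on the larger estimate the core occupies less than a tenth of the body's volume, though its mass fraction is higher because iron–nickel is considerably denser than the overlying mantle and crust.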
Exploration In 1981, a proposal for an asteroid mission was submitted to the European Space Agency (ESA). Named the Asteroidal Gravity Optical and Radar Analysis (AGORA), this spacecraft was to launch some time in 1990–1994 and perform two flybys of large asteroids. The preferred target for this mission was Vesta. AGORA would reach the asteroid belt either by a gravitational slingshot trajectory past Mars or by means of a small ion engine. However, the proposal was refused by the ESA. A joint NASA–ESA asteroid mission was then drawn up for a Multiple Asteroid Orbiter with Solar Electric Propulsion (MAOSEP), with one of the mission profiles including an orbit of Vesta. NASA indicated they were not interested in an asteroid mission. Instead, the ESA set up a technological study of a spacecraft with an ion drive. Other missions to the asteroid belt were proposed in the 1980s by France, Germany, Italy and the United States, but none were approved. Exploration of Vesta by fly-by and impacting penetrator was the second main target of the first plan of the multi-aimed Soviet Vesta mission, developed in cooperation with European countries for realisation in 1991–1994 but canceled due to the dissolution of the Soviet Union. In the early 1990s, NASA initiated the Discovery Program, which was intended to be a series of low-cost scientific missions. In 1996, the program's study team recommended a mission to explore the asteroid belt using a spacecraft with an ion engine as a high priority. Funding for this program remained problematic for several years, but by 2004 the Dawn vehicle had passed its critical design review and construction proceeded. It launched on 27 September 2007 as the first space mission to Vesta. On 3 May 2011, Dawn acquired its first targeting image 1.2 million kilometers from Vesta. On 16 July 2011, NASA confirmed that it received telemetry from Dawn indicating that the spacecraft successfully entered Vesta's orbit. It was scheduled to orbit Vesta for one year, until July 2012. Dawn arrival coincided with late summer in the southern hemisphere of Vesta, with the large crater at Vesta's south pole (Rheasilvia) in sunlight. Because a season on Vesta lasts eleven months, the northern hemisphere, including anticipated compression fractures opposite the crater, would become visible to Dawn cameras before it left orbit. Dawn left orbit around Vesta on 4 September 2012 to travel to Ceres. NASA/DLR released imagery and summary information from a survey orbit, two high-altitude orbits (60–70 m/pixel) and a low-altitude mapping orbit (20 m/pixel), including digital terrain models, videos and atlases. Scientists also used Dawn to calculate Vesta's precise mass and gravity field. The subsequent determination of the J2 component yielded a core diameter estimate of about 220 km assuming a crustal density similar to that of the HED. Dawn data can be accessed by the public at the UCLA website. Observations from Earth orbit Observations from Dawn Vesta comes into view as the Dawn spacecraft approaches and enters orbit: True-color images Detailed images retrieved during the high-altitude (60–70 m/pixel) and low-altitude (~20 m/pixel) mapping orbits are available on the Dawn Mission website of JPL/NASA. Visibility Its size and unusually bright surface make Vesta the brightest asteroid, and it is occasionally visible to the naked eye from dark skies (without light pollution). In May and June 2007, Vesta reached a peak magnitude of +5.4, the brightest since 1989. 
At that time, opposition and perihelion were only a few weeks apart. It was brighter still at its 22 June 2018 opposition, reaching a magnitude of +5.3. Less favorable oppositions, such as that of late autumn 2008 in the Northern Hemisphere, still had Vesta at a magnitude of +6.5 to +7.3. Even when in conjunction with the Sun, Vesta has a magnitude of around +8.5; thus from a pollution-free sky it can be observed with binoculars even at elongations much smaller than near opposition. 2010–2011 In 2010, Vesta reached opposition in the constellation of Leo on the night of 17–18 February, at about magnitude 6.1, a brightness that made it visible with binoculars but generally not to the naked eye. Under perfect dark-sky conditions, where all light pollution is absent, it might be visible to an experienced observer without the use of a telescope or binoculars. Vesta came to opposition again on 5 August 2011, in the constellation of Capricornus, at about magnitude 5.6. 2012–2013 Vesta was at opposition again on 9 December 2012. According to Sky and Telescope magazine, Vesta came within about 6 degrees of 1 Ceres during the winter of 2012 and the spring of 2013. Vesta orbits the Sun in 3.63 years and Ceres in 4.6 years, so every 17.4 years Vesta overtakes Ceres (the previous overtaking was in April 1996). On 1 December 2012, Vesta had a magnitude of 6.6, but it had faded to magnitude 8.4 by 1 May 2013. 2014 Ceres and Vesta came within one degree of each other in the night sky in July 2014.
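The roughly 17-year interval between such Vesta–Ceres pairings follows from the two orbital periods through the usual synodic-period relation, 1/S = 1/P_inner - 1/P_outer. The sketch below (Python) evaluates it with the periods quoted above; the slightly lower result reflects the rounding of those quoted periods.

```python
# Synodic period of Vesta with respect to Ceres: how often the faster inner
# orbit laps the slower outer one. Periods are the article's rounded figures.

P_VESTA = 3.63  # years
P_CERES = 4.60  # years

synodic = 1.0 / (1.0 / P_VESTA - 1.0 / P_CERES)
print(f"Vesta overtakes Ceres roughly every {synodic:.1f} years")  # ~17.2
```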
Physical sciences
Solar System
Astronomy
47262
https://en.wikipedia.org/wiki/2%20Pallas
2 Pallas
Pallas (minor-planet designation: 2 Pallas) is the third-largest asteroid in the Solar System by volume and mass. It is the second asteroid to have been discovered, after Ceres, and is likely a remnant protoplanet. Like Ceres, it is believed to have a mineral composition similar to carbonaceous chondrite meteorites, though significantly less hydrated than Ceres. It is 79% the mass of Vesta and 22% the mass of Ceres, constituting an estimated 7% of the mass of the asteroid belt. Its estimated volume is equivalent to a sphere in diameter, 90–95% the volume of Vesta. During the planetary formation era of the Solar System, objects grew in size through an accretion process to approximately the size of Pallas. Most of these protoplanets were incorporated into the growth of larger bodies, which became the planets, whereas others were ejected by the planets or destroyed in collisions with each other. Pallas, Vesta and Ceres appear to be the only intact bodies from this early stage of planetary formation to survive within the orbit of Neptune. When Pallas was discovered by the German astronomer Heinrich Wilhelm Matthias Olbers on 28 March 1802, it was considered to be a planet, as were other asteroids in the early 19th century. The discovery of many more asteroids after 1845 eventually led to the separate listing of "minor" planets from "major" planets, and the realization in the 1950s that such small bodies did not form in the same way as (other) planets led to the gradual abandonment of the term "minor planet" in favor of "asteroid" (or, for larger bodies such as Pallas, "planetoid"). With an orbital inclination of 34.8°, Pallas's orbit is unusually highly inclined to the plane of the asteroid belt, making Pallas relatively inaccessible to spacecraft, and its orbital eccentricity is nearly as large as that of Pluto. The high inclination of the orbit of Pallas results in the possibility of close conjunctions to stars that other solar objects always pass at great angular distance. This resulted in Pallas passing Sirius on 9 October 2022, only 8.5 arcminutes southwards, while no planet can get closer than 30 degrees to Sirius. History Discovery On the night of 5 April 1779, Charles Messier recorded Pallas on a star chart he used to track the path of a comet, now known as C/1779 A1 (Bode), that he observed in the spring of 1779, but apparently assumed it was nothing more than a star. In 1801, the astronomer Giuseppe Piazzi discovered an object which he initially believed to be a comet. Shortly thereafter he announced his observations of this object, noting that the slow, uniform motion was uncharacteristic of a comet, suggesting it was a different type of object. This was lost from sight for several months, but was recovered later that year by the Baron von Zach and Heinrich W. M. Olbers after a preliminary orbit was computed by Carl Friedrich Gauss. This object came to be named Ceres, and was the first asteroid to be discovered. A few months later, Olbers was again attempting to locate Ceres when he noticed another moving object in the vicinity. This was the asteroid Pallas, coincidentally passing near Ceres at the time. The discovery of this object created interest in the astronomy community. Before this point it had been speculated by astronomers that there should be a planet in the gap between Mars and Jupiter. Now, unexpectedly, a second such body had been found. When Pallas was discovered, some estimates of its size were as high as 3,380 km in diameter. 
Even as recently as 1979, Pallas was estimated to be 673 km in diameter, 26% greater than the currently accepted value. The orbit of Pallas was determined by Gauss, who found the period of 4.6 years was similar to the period for Ceres. Pallas has a relatively high orbital inclination to the plane of the ecliptic. Later observations In 1917, the Japanese astronomer Kiyotsugu Hirayama began to study asteroid motions. By plotting the mean orbital motion, inclination, and eccentricity of a set of asteroids, he discovered several distinct groupings. In a later paper he reported a group of three asteroids associated with Pallas, which became named the Pallas family, after the largest member of the group. Since 1994 more than 10 members of this family have been identified, with semi-major axes between 2.50 and 2.82 AU and inclinations of 33–38°. The validity of the family was confirmed in 2002 by a comparison of their spectra. Pallas has been observed occulting stars several times, including the best-observed of all asteroid occultation events, by 140 observers on 29 May 1983. These measurements resulted in the first accurate calculation of its diameter. After an occultation on 29 May 1979, the discovery of a possible tiny satellite with a diameter of about 1 km was reported, which was never confirmed. Radio signals from spacecraft in orbit around Mars and/or on its surface have been used to estimate the mass of Pallas from the tiny perturbations induced by it onto the motion of Mars. The Dawn team was granted viewing time on the Hubble Space Telescope in September 2007 for a once-in-twenty-year opportunity to view Pallas at closest approach, to obtain comparative data for Ceres and Vesta. Name and symbol Pallas is an epithet of the Greek goddess Athena (). In some versions of the myth, Athena killed Pallas, daughter of Triton, then adopted her friend's name out of mourning. The adjectival form of the name is Palladian. The d is part of the oblique stem of the Greek name, which appears before a vowel but disappears before the nominative ending -s. The oblique form is seen in the Italian and Russian names for the asteroid, and (). The stony-iron pallasite meteorites are not Palladian, being named instead after the German naturalist Peter Simon Pallas. The chemical element palladium, on the other hand, was named after the asteroid, which had been discovered just before the element. The old astronomical symbol of Pallas, still used in astrology, is a spear or lance, , one of the symbols of the goddess. The blade was most often a lozenge (), but various graphic variants were published, including an acute/elliptic leaf shape, a cordate leaf shape (: ), and a triangle (); the last made it effectively the alchemical symbol for sulfur, . The generic asteroid symbol of a disk with its discovery number, , was introduced in 1852 and quickly became the norm. The iconic lozenge symbol was resurrected for astrological use in 1973. Orbit and rotation Pallas has unusual dynamic parameters for such a large body. Its orbit is highly inclined and moderately eccentric, despite being at the same distance from the Sun as the central part of the asteroid belt. Furthermore, Pallas has a very high axial tilt of 84°, with its north pole pointing towards ecliptic coordinates (β, λ) = (30°, −16°) with a 5° uncertainty in the Ecliptic J2000.0 reference frame. 
This means that every Palladian summer and winter, large parts of the surface are in constant sunlight or constant darkness for a time on the order of an Earth year, with areas near the poles experiencing continuous sunlight for as long as two years. Near resonances Pallas is in a near-1:1 orbital resonance with Ceres, which is probably coincidental. Pallas also has a near-18:7 resonance (91,000-year period) and an approximate 5:2 resonance (83-year period) with Jupiter. Transits of planets from Pallas From Pallas, the planets Mercury, Venus, Mars, and Earth can occasionally appear to transit, or pass in front of, the Sun. Earth last did so in 1968 and 1998, and will next transit in 2224. Mercury did in October 2009. The last and next by Venus are in 1677 and 2123, and for Mars they are in 1597 and 2759. Physical characteristics Both Vesta and Pallas have assumed the title of second-largest asteroid from time to time. At in diameter, Pallas is slightly smaller than Vesta (). The mass of Pallas is that of Vesta, that of Ceres, and a quarter of one percent that of the Moon. Pallas is farther from Earth and has a much lower albedo than Vesta, and hence is dimmer as seen from Earth. Indeed, the much smaller asteroid 7 Iris marginally exceeds Pallas in mean opposition magnitude. Pallas's mean opposition magnitude is +8.0, which is well within the range of 10×50 binoculars, but, unlike Ceres and Vesta, it will require more-powerful optical aid to view at small elongations, when its magnitude can drop as low as +10.6. During rare perihelic oppositions, Pallas can reach a magnitude of +6.4, right on the edge of naked-eye visibility. During late February 2014 Pallas shone with magnitude 6.96. Pallas is a B-type asteroid. Based on spectroscopic observations, the primary component of the material on Pallas's surface is a silicate containing little iron and water. Minerals of this type include olivine and pyroxene, which are found in CM chondrules. The surface composition of Pallas is very similar to the Renazzo carbonaceous chondrite (CR) meteorites, which are even lower in hydrous minerals than the CM type. The Renazzo meteorite was discovered in Italy in 1824 and is one of the most primitive meteorites known. Pallas's visible and near-infrared spectrum is almost flat, being slightly brighter in towards the blue. There is only one clear absorption band in the 3-micron part, which suggests an anhydrous component mixed with hydrated CM-like silicates. Pallas's surface is most likely composed of a silicate material; its spectrum and calculated density () correspond to CM chondrite meteorites (), suggesting a mineral composition similar to that of Ceres, but significantly less hydrated. To within observational limits, Pallas appears to be saturated with craters. Its high inclination and eccentricity means that average impacts are much more energetic than on Vesta or Ceres (with on average twice their velocity), meaning that smaller (and thus more common) impactors can create equivalently sized craters. Indeed, Pallas appears to have many more large craters than either Vesta or Ceres, with craters larger than 40 km covering at least 9% of its surface. Pallas's shape departs significantly from the dimensions of an equilibrium body at its current rotational period, indicating that it is not a dwarf planet. 
It's possible that a suspected large impact basin at the south pole, which ejected of the volume of Pallas (twice the volume of the Rheasilvia basin on Vesta), may have increased its inclination and slowed its rotation; the shape of Pallas without such a basin would be close to an equilibrium shape for a 6.2-hour rotational period. A smaller crater near the equator is associated with the Palladian family of asteroids. Pallas probably has a quite homogeneous interior. The close match between Pallas and CM chondrites suggests that they formed in the same era and that the interior of Pallas never reached the temperature (≈820 K) needed to dehydrate silicates, which would be necessary to differentiate a dry silicate core beneath a hydrated mantle. Thus Pallas should be rather homogeneous in composition, though some upward flow of water could have occurred since. Such a migration of water to the surface would have left salt deposits, potentially explaining Pallas's relatively high albedo. Indeed, one bright spot is reminiscent of those found on Ceres. Although other explanations for the bright spot are possible (e.g. a recent ejecta blanket), if the near-Earth asteroid 3200 Phaethon is an ejected piece of Pallas, as some have theorized, then a Palladian surface enriched in salts would explain the sodium abundance in the Geminid meteor shower caused by Phaethon. Surface features Besides one bright spot in the southern hemisphere, the only surface features identified on Pallas are craters. As of 2020, 36 craters have been identified, 34 of which are larger than 40 km in diameter. Provisional names have been provided for some of them. The craters are named after ancient weapons. Satellites A small moon about 1 kilometer in diameter was suggested based on occultation data from 29 May 1978. In 1980, speckle interferometry suggested a much larger satellite, whose existence was refuted a few years later with occultation data. Exploration Pallas itself has never been visited by spacecraft. Proposals have been made in the past though none have come to fruition. A flyby of the Dawn probe's visits to 4 Vesta and 1 Ceres was discussed but was not possible due to the high orbital inclination of Pallas. The proposed Athena SmallSat mission would have been launched in 2022 as a secondary payload of the Psyche mission and travel on separate trajectory to a flyby encounter with 2 Pallas, though was not funded due to being outcompeted by other mission concepts such as the TransOrbital Trailblazer Lunar Orbiter. The authors of the proposal cited Pallas as the "largest unexplored" main-belt protoplanet. Gallery
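The near-commensurabilities described in the orbit section above can be sanity-checked directly from the orbital periods. The following is a minimal sketch; the periods used (Pallas ≈ 4.62 yr, Ceres ≈ 4.60 yr, Jupiter ≈ 11.86 yr) are approximate outside values assumed for illustration rather than figures quoted in this article.

```python
# Rough check of the near-commensurabilities of Pallas's orbit.
# The orbital periods below are approximate values assumed for illustration.
P_PALLAS = 4.62    # years
P_CERES = 4.60     # years
P_JUPITER = 11.86  # years

# Near-1:1 with Ceres: the period ratio is very close to 1.
print(f"Pallas/Ceres period ratio:   {P_PALLAS / P_CERES:.3f}")

# Near-resonances with Jupiter: compare the period ratio with 18:7 and 5:2.
ratio = P_JUPITER / P_PALLAS
print(f"Jupiter/Pallas period ratio: {ratio:.3f}")
print(f"18/7 = {18/7:.3f}, 5/2 = {5/2:.3f}")
```

With these assumed periods the Jupiter/Pallas ratio comes out near 2.57, sitting between 5:2 (2.500) and 18:7 (2.571) and closer to the latter, which is consistent with one being described as a near resonance and the other as only approximate.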
https://en.wikipedia.org/wiki/243%20Ida
243 Ida
243 Ida is an asteroid in the Koronis family of the asteroid belt. It was discovered on 29 September 1884 by Austrian astronomer Johann Palisa at Vienna Observatory and named after a nymph from Greek mythology. Later telescopic observations categorized Ida as an S-type asteroid, the most numerous type in the inner asteroid belt. On 28 August 1993, Ida was visited by the uncrewed Galileo spacecraft while en route to Jupiter. It was the second asteroid visited by a spacecraft and the first found to have a natural satellite. Ida's orbit lies between the planets Mars and Jupiter, like all main-belt asteroids. Its orbital period is 4.84 years, and its rotation period is 4.63 hours. Ida has an average diameter of . It is irregularly shaped and elongated, apparently composed of two large objects connected together. Its surface is one of the most heavily cratered in the Solar System, featuring a wide variety of crater sizes and ages. Ida's moon Dactyl was discovered by mission member Ann Harch in images returned from Galileo. It was named after the Dactyls, creatures which inhabited Mount Ida in Greek mythology. Dactyl is only in diameter, about 1/20 the size of Ida. Its orbit around Ida could not be determined with much accuracy, but the constraints of possible orbits allowed a rough determination of Ida's density and revealed that it is depleted of metallic minerals. Dactyl and Ida share many characteristics, suggesting a common origin. The images returned from Galileo and the subsequent measurement of Ida's mass provided new insights into the geology of S-type asteroids. Before the Galileo flyby, many different theories had been proposed to explain their mineral composition. Determining their composition permits a correlation between meteorites falling to the Earth and their origin in the asteroid belt. Data returned from the flyby pointed to S-type asteroids as the source for the ordinary chondrite meteorites, the most common type found on the Earth's surface. Discovery and observations Ida was discovered on 29 September 1884 by Austrian astronomer Johann Palisa at the Vienna Observatory. It was his 45th asteroid discovery. Ida was named by Moriz von Kuffner, a Viennese brewer and amateur astronomer. In Greek mythology, Ida was a nymph of Crete who raised the god Zeus. Ida was recognized as a member of the Koronis family by Kiyotsugu Hirayama, who proposed in 1918 that the group comprised the remnants of a destroyed precursor body. Ida's reflection spectrum was measured on 16 September 1980 by astronomers David J. Tholen and Edward F. Tedesco as part of the eight-color asteroid survey (ECAS). Its spectrum matched those of the asteroids in the S-type classification. Many observations of Ida were made in early 1993 by the US Naval Observatory in Flagstaff and the Oak Ridge Observatory. These improved the measurement of Ida's orbit around the Sun and reduced the uncertainty of its position during the Galileo flyby from . Exploration Galileo flyby Ida was visited in 1993 by the Jupiter-bound space probe Galileo. Its encounters of the asteroids Gaspra and Ida were secondary to the Jupiter mission. These were selected as targets in response to a new NASA policy directing mission planners to consider asteroid flybys for all spacecraft crossing the belt. No prior missions had attempted such a flyby. Galileo was launched into orbit by the Space Shuttle Atlantis mission STS-34 on 18 October 1989. Changing Galileo's trajectory to approach Ida required that it consume of propellant. 
Mission planners delayed the decision to attempt a flyby until they were certain that this would leave the spacecraft enough propellant to complete its Jupiter mission. Galileo's trajectory carried it into the asteroid belt twice on its way to Jupiter. During its second crossing, it flew by Ida on 28 August 1993 at a speed of relative to the asteroid. The onboard imager observed Ida from a distance of to its closest approach of . Ida was the second asteroid, after Gaspra, to be imaged by a spacecraft. About 95% of Ida's surface came into view of the probe during the flyby. Transmission of many Ida images was delayed due to a permanent failure in the spacecraft's high-gain antenna. The first five images were received in September 1993. These comprised a high-resolution mosaic of the asteroid at a resolution of 31–38 m/pixel. The remaining images were sent in February 1994, when the spacecraft's proximity to the Earth allowed higher speed transmissions. Discoveries The data returned from the Galileo flybys of Gaspra and Ida, and the later NEAR Shoemaker asteroid mission, permitted the first study of asteroid geology. Ida's relatively large surface exhibited a diverse range of geological features. The discovery of Ida's moon Dactyl, the first confirmed satellite of an asteroid, provided additional insights into Ida's composition. Ida is classified as an S-type asteroid based on ground-based spectroscopic measurements. The composition of S-types was uncertain before the Galileo flybys, but was interpreted to be either of two minerals found in meteorites that had fallen to the Earth: ordinary chondrite (OC) and stony-iron. Estimates of Ida's density are constrained to less than 3.2 g/cm3 by the long-term stability of Dactyl's orbit. This all but rules out a stony-iron composition; were Ida made of 5 g/cm3 iron- and nickel-rich material, it would have to contain more than 40% empty space. The Galileo images also led to the discovery that space weathering was taking place on Ida, a process which causes older regions to become more red in color over time. The same process affects both Ida and its moon, although Dactyl shows a lesser change. The weathering of Ida's surface revealed another detail about its composition: the reflection spectra of freshly exposed parts of the surface resembled that of OC meteorites, but the older regions matched the spectra of S-type asteroids.Both of these discoveries—the space weathering effects and the low density—led to a new understanding about the relationship between S-type asteroids and OC meteorites. S-types are the most numerous kind of asteroid in the inner part of the asteroid belt. OC meteorites are, likewise, the most common type of meteorite found on the Earth's surface. The reflection spectra measured by remote observations of S-type asteroids, however, did not match that of OC meteorites. The Galileo flyby of Ida found that some S-types, particularly the Koronis family, could be the source of these meteorites. Physical characteristics Ida's mass is between 3.65 and 4.99 × 1016 kg. Its gravitational field produces an acceleration of about 0.3 to 1.1 cm/s2 over its surface. This field is so weak that an astronaut standing on its surface could leap from one end of Ida to the other, and an object moving in excess of could escape the asteroid entirely. Ida is a distinctly elongated asteroid, with an irregular surface. Ida is 2.35 times as long as it is wide, and a "waist" separates it into two geologically dissimilar halves. 
This constricted shape is consistent with Ida being made of two large, solid components, with loose debris filling the gap between them. However, no such debris was seen in high-resolution images captured by Galileo. Although there are a few steep slopes tilting up to about 50° on Ida, the slope generally does not exceed 35°. Ida's irregular shape is responsible for the asteroid's very uneven gravitational field. The surface acceleration is lowest at the extremities because of their high rotational speed. It is also low near the "waist" because the mass of the asteroid is concentrated in the two halves, away from this location. Surface features Ida's surface appears heavily cratered and mostly gray, although minor color variations mark newly formed or uncovered areas. Besides craters, other features are evident, such as grooves, ridges, and protrusions. Ida is covered by a thick layer of regolith, loose debris that obscures the solid rock beneath. The largest, boulder-sized, debris fragments are called ejecta blocks, several of which have been observed on the surface. Regolith The surface of Ida is covered in a blanket of pulverized rock, called regolith, about thick. This material is produced in impact events and redistributed across Ida's surface by geological processes. Galileo observed evidence of recent downslope regolith movement. Ida's regolith is composed of the silicate minerals olivine and pyroxene. Its appearance changes over time through a process called space weathering. Because of this process, older regolith appears more red in color compared to freshly exposed material. About 20 large (40–150 m across) ejecta blocks have been identified, embedded in Ida's regolith. Ejecta blocks constitute the largest pieces of the regolith. Because ejecta blocks are expected to break down quickly by impact events, those present on the surface must have been either formed recently or uncovered by an impact event. Most of them are located within the craters Lascaux and Mammoth, but they may not have been produced there. This area attracts debris due to Ida's irregular gravitational field. Some blocks may have been ejected from the young crater Azzurra on the opposite side of the asteroid. Structures Several major structures mark Ida's surface. The asteroid appears to be split into two halves, here referred to as region 1 and region 2, connected by a "waist". This feature may have been filled in by debris, or blasted out of the asteroid by impacts. Region 1 of Ida contains two major structures. One is a prominent ridge named Townsend Dorsum that stretches 150 degrees around Ida's surface. The other structure is a large indentation named Vienna Regio. Ida's region 2 features several sets of grooves, most of which are wide or less and up to long. They are located near, but are not connected with, the craters Mammoth, Lascaux, and Kartchner. Some grooves are related to major impact events, for example a set opposite Vienna Regio. Craters Ida is one of the most densely cratered bodies yet explored in the Solar System, and impacts have been the primary process shaping its surface. Cratering has reached the saturation point, meaning that new impacts erase evidence of old ones, leaving the total crater count roughly the same. It is covered with craters of all sizes and stages of degradation, and ranging in age from fresh to as old as Ida itself. The oldest may have been formed during the breakup of the Koronis family parent body. The largest crater, Lascaux, is almost across. 
Region 2 contains nearly all of the craters larger than in diameter, but Region 1 has no large craters at all. Some craters are arranged in chains. Ida's major craters are named after caves and lava tubes on Earth. The crater Azzurra, for example, is named after a submerged cave on the island of Capri, also known as the Blue Grotto. Azzurra seems to be the most recent major impact on Ida. The ejecta from this collision is distributed discontinuously over Ida and is responsible for the large-scale color and albedo variations across its surface. An exception to the crater morphology is the fresh, asymmetric Fingal, which has a sharp boundary between the floor and wall on one side. Another significant crater is Afon, which marks Ida's prime meridian. The craters are simple in structure: bowl-shaped with no flat bottoms and no central peaks. They are distributed evenly around Ida, except for a protrusion north of crater Choukoutien which is smoother and less cratered. The ejecta excavated by impacts is deposited differently on Ida than on planets because of its rapid rotation, low gravity and irregular shape. Ejecta blankets settle asymmetrically around their craters, but fast-moving ejecta that escapes from the asteroid is permanently lost. Composition Ida was classified as an S-type asteroid based on the similarity of its reflectance spectra with similar asteroids. S-types may share their composition with stony-iron or ordinary chondrite (OC) meteorites. The composition of the interior has not been directly analyzed, but is assumed to be similar to OC material based on observed surface color changes and Ida's bulk density of 2.27–3.10 g/cm3. OC meteorites contain varying amounts of the silicates olivine and pyroxene, iron, and feldspar. Olivine and pyroxene were detected on Ida by Galileo. The mineral content appears to be homogeneous throughout its extent. Galileo found minimal variations on the surface, and the asteroid's spin indicates a consistent density. Assuming that its composition is similar to OC meteorites, which range in density from 3.48 to 3.64 g/cm3, Ida would have a porosity of 11–42%. Ida's interior probably contains some amount of impact-fractured rock, called megaregolith. The megaregolith layer of Ida extends between hundreds of meters below the surface to a few kilometers. Some rock in Ida's core may have been fractured below the large craters Mammoth, Lascaux, and Undara. Orbit and rotation Ida is a member of the Koronis family of asteroid-belt asteroids. Ida orbits the Sun at an average distance of , between the orbits of Mars and Jupiter. Ida takes 4.84089 years to complete one orbit. Ida rotates in the retrograde direction with a rotation period of 4.63 hours (roughly 5 hours). The calculated maximum moment of inertia of a uniformly dense object the same shape as Ida coincides with the spin axis of the asteroid. This suggests that there are no major variations of density within the asteroid. Ida's axis of rotation precesses with a period of 77 thousand years, due to the gravity of the Sun acting upon the nonspherical shape of the asteroid. Origin Ida originated in the breakup of the roughly diameter Koronis parent body. The progenitor asteroid had partially differentiated, with heavier metals migrating to the core. Ida carried away insignificant amounts of this core material. It is uncertain how long ago the disruption event occurred. According to an analysis of Ida's cratering processes, its surface is more than a billion years old. 
However, this is inconsistent with the estimated age of the Ida–Dactyl system of less than 100 million years; it is unlikely that Dactyl, due to its small size, could have escaped being destroyed in a major collision for longer. The difference in age estimates may be explained by an increased rate of cratering from the debris of the Koronis parent body's destruction. Dactyl Ida has a moon named Dactyl, official designation (243) Ida I Dactyl. It was discovered in images taken by the Galileo spacecraft during its flyby in 1993. These images provided the first direct confirmation of an asteroid moon. At the time, it was separated from Ida by a distance of , moving in a prograde orbit. Dactyl is heavily cratered, like Ida, and consists of similar materials. Its origin is uncertain, but evidence from the flyby suggests that it originated as a fragment of the Koronis parent body. Discovery Dactyl was found on 17 February 1994 by Galileo mission member Ann Harch, while examining delayed image downloads from the spacecraft. Galileo recorded 47 images of Dactyl over an observation period of 5.5 hours in August 1993. The spacecraft was from Ida and from Dactyl when the first image of the moon was captured, 14 minutes before Galileo made its closest approach. Dactyl was initially designated 1993 (243) 1. It was named by the International Astronomical Union in 1994, for the mythological dactyls who inhabited Mount Ida on the island of Crete. Physical characteristics Dactyl is an "egg-shaped" but "remarkably spherical" object measuring . It is oriented with its longest axis pointing towards Ida. Like Ida, Dactyl's surface exhibits saturation cratering. It is marked by more than a dozen craters with a diameter greater than , indicating that the moon has suffered many collisions during its history. At least six craters form a linear chain, suggesting that it was caused by locally produced debris, possibly ejected from Ida. Dactyl's craters may contain central peaks, unlike those found on Ida. These features, and Dactyl's spheroidal shape, imply that the moon is gravitationally controlled despite its small size. Like Ida, its average temperature is about . Dactyl shares many characteristics with Ida. Their albedos and reflection spectra are very similar. The small differences indicate that the space weathering process is less active on Dactyl. Its small size would make the formation of significant amounts of regolith impossible. This contrasts with Ida, which is covered by a deep layer of regolith. The two largest imaged craters on Dactyl were named Acmon and Celmis , after two of the mythological dactyls. Acmon is the largest crater in the above image, and Celmis is near the bottom of the image, mostly obscured in shadow. The craters are 300 and 200 meters in diameter, respectively. Orbit Dactyl's orbit around Ida is not precisely known. Galileo was in the plane of Dactyl's orbit when most of the images were taken, which made determining its exact orbit difficult. Dactyl orbits in the prograde direction and is inclined about 8° to Ida's equator. Based on computer simulations, Dactyl's pericenter must be more than about from Ida for it to remain in a stable orbit. The range of orbits generated by the simulations was narrowed down by the necessity of having the orbits pass through points at which Galileo observed Dactyl to be at 16:52:05 UT on 28 August 1993, about from Ida at longitude 85°. On 26 April 1994, the Hubble Space Telescope observed Ida for eight hours and was unable to spot Dactyl. 
It would have been able to observe it if it were more than about from Ida. If in a circular orbit at the distance at which it was seen, Dactyl's orbital period would be about 20 hours. Its orbital speed is roughly , "about the speed of a fast run or a slowly thrown baseball". Age and origin Dactyl may have originated at the same time as Ida, from the disruption of the Koronis parent body. However, it may have formed more recently, perhaps as ejecta from a large impact on Ida. It is extremely unlikely that it was captured by Ida. Dactyl may have suffered a major impact around 100 million years ago, which reduced its size.
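Some of the physical characteristics quoted for Ida above can be reproduced, to order of magnitude, with a point-mass approximation. The sketch below assumes a mass of 4.2 × 10^16 kg (the middle of the range given in the Physical characteristics section) and a volumetric mean radius of roughly 15.7 km; the radius is an outside figure, not a value quoted in the text above, and on the real elongated, rotating body the local effective gravity is lower toward the extremities.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2

# Assumed values for illustration (mass is the mid-range quoted above;
# the mean radius is an outside figure, not stated in this article).
M_IDA = 4.2e16  # kg
R_MEAN = 15.7e3 # m, rough volumetric mean radius

# Point-mass surface gravity and escape speed at the mean radius.
g = G * M_IDA / R_MEAN**2
v_esc = math.sqrt(2 * G * M_IDA / R_MEAN)
print(f"g ≈ {g * 100:.2f} cm/s^2")  # about 1.1 cm/s^2, near the top of the quoted range
print(f"v_esc ≈ {v_esc:.0f} m/s")   # about 19 m/s

# Porosity implied by comparing bulk density with assumed grain density:
# porosity = 1 - rho_bulk / rho_grain.
for rho_bulk, rho_grain in [(3.10, 3.48), (2.27, 3.64)]:
    print(f"porosity ≈ {1 - rho_bulk / rho_grain:.0%}")
```

The point-mass gravity lands near the upper end of the 0.3–1.1 cm/s² range quoted above, and the porosity estimates (roughly 11% to 38%) approximate the quoted 11–42% range; the residual differences presumably reflect Ida's irregular shape, its rotation, and the exact density pairs adopted.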
https://en.wikipedia.org/wiki/Asteroid%20belt
Asteroid belt
The asteroid belt is a torus-shaped region in the Solar System, centered on the Sun and roughly spanning the space between the orbits of the planets Jupiter and Mars. It contains a great many solid, irregularly shaped bodies called asteroids or minor planets. The identified objects are of many sizes, but much smaller than planets, and, on average, are about one million kilometers (or six hundred thousand miles) apart. This asteroid belt is also called the main asteroid belt or main belt to distinguish it from other asteroid populations in the Solar System. The asteroid belt is the smallest and innermost known circumstellar disc in the Solar System. Classes of small Solar System bodies in other regions are the near-Earth objects, the centaurs, the Kuiper belt objects, the scattered disc objects, the sednoids, and the Oort cloud objects. About 60% of the main belt mass is contained in the four largest asteroids: Ceres, Vesta, Pallas, and Hygiea. The total mass of the asteroid belt is estimated to be 3% that of the Moon. Ceres, the only object in the asteroid belt large enough to be a dwarf planet, is about 950 km in diameter, whereas Vesta, Pallas, and Hygiea have mean diameters less than 600 km. The remaining mineralogically classified bodies range in size down to a few metres. The asteroid material is so thinly distributed that numerous uncrewed spacecraft have traversed it without incident. Nonetheless, collisions between large asteroids occur and can produce an asteroid family, whose members have similar orbital characteristics and compositions. Individual asteroids within the belt are categorized by their spectra, with most falling into three basic groups: carbonaceous (C-type), silicate (S-type), and metal-rich (M-type). The asteroid belt formed from the primordial solar nebula as a group of planetesimals, the smaller precursors of the protoplanets. However, between Mars and Jupiter gravitational perturbations from Jupiter disrupted their accretion into a planet, imparting excess kinetic energy which shattered colliding planetesimals and most of the incipient protoplanets. As a result, 99.9% of the asteroid belt's original mass was lost in the first 100 million years of the Solar System's history. Some fragments eventually found their way into the inner Solar System, leading to meteorite impacts with the inner planets. Asteroid orbits continue to be appreciably perturbed whenever their period of revolution about the Sun forms an orbital resonance with Jupiter. At these orbital distances, a Kirkwood gap occurs as they are swept into other orbits. History of observation In 1596, Johannes Kepler wrote, "Between Mars and Jupiter, I place a planet," in his Mysterium Cosmographicum, stating his prediction that a planet would be found there. While analyzing Tycho Brahe's data, Kepler thought that too large a gap existed between the orbits of Mars and Jupiter to fit his own model of where planetary orbits should be found. In an anonymous footnote to his 1766 translation of Charles Bonnet's Contemplation de la Nature, the astronomer Johann Daniel Titius of Wittenberg noted an apparent pattern in the layout of the planets, now known as the Titius-Bode Law. 
If one began a numerical sequence at 0, then included 3, 6, 12, 24, 48, etc., doubling each time, and added four to each number and divided by 10, this produced a remarkably close approximation to the radii of the orbits of the known planets as measured in astronomical units, provided one allowed for a "missing planet" (equivalent to 24 in the sequence) between the orbits of Mars (12) and Jupiter (48). In his footnote, Titius declared, "But should the Lord Architect have left that space empty? Not at all." When William Herschel discovered Uranus in 1781, the planet's orbit closely matched the law, leading some astronomers to conclude that a planet had to be between the orbits of Mars and Jupiter. On January 1, 1801, Giuseppe Piazzi, chairman of astronomy at the University of Palermo, Sicily, found a tiny moving object in an orbit with exactly the radius predicted by this pattern. He dubbed it "Ceres", after the Roman goddess of the harvest and patron of Sicily. Piazzi initially believed it to be a comet, but its lack of a coma suggested it was a planet. Thus, the aforementioned pattern predicted the semimajor axes of all eight planets of the time (Mercury, Venus, Earth, Mars, Ceres, Jupiter, Saturn, and Uranus). Concurrent with the discovery of Ceres, an informal group of 24 astronomers dubbed the "celestial police" was formed under the invitation of Franz Xaver von Zach with the express purpose of finding additional planets; they focused their search for them in the region between Mars and Jupiter where the Titius–Bode law predicted there should be a planet. About 15 months later, Heinrich Olbers, a member of the celestial police, discovered a second object in the same region, Pallas. Unlike the other known planets, Ceres and Pallas remained points of light even under the highest telescope magnifications instead of resolving into discs. Apart from their rapid movement, they appeared indistinguishable from stars. Accordingly, in 1802, William Herschel suggested they be placed into a separate category, named "asteroids", after the Greek asteroeides, meaning "star-like". Upon completing a series of observations of Ceres and Pallas, he concluded, Neither the appellation of planets nor that of comets can with any propriety of language be given to these two stars ... They resemble small stars so much as hardly to be distinguished from them. From this, their asteroidal appearance, if I take my name, and call them Asteroids; reserving for myself, however, the liberty of changing that name, if another, more expressive of their nature, should occur. By 1807, further investigation revealed two new objects in the region: Juno and Vesta. The burning of Lilienthal in the Napoleonic wars, where the main body of work had been done, brought this first period of discovery to a close. Despite Herschel's coinage, for several decades it remained common practice to refer to these objects as planets and to prefix their names with numbers representing their sequence of discovery: 1 Ceres, 2 Pallas, 3 Juno, 4 Vesta. In 1845, though, the astronomer Karl Ludwig Hencke detected a fifth object (5 Astraea) and, shortly thereafter, new objects were found at an accelerating rate. Counting them among the planets became increasingly cumbersome. Eventually, they were dropped from the planet list (as first suggested by Alexander von Humboldt in the early 1850s) and Herschel's coinage, "asteroids", gradually came into common use. 
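The sequence Titius described can be reproduced in a few lines. The following is a minimal sketch; the planet names and the comparison are added for illustration only.

```python
# The Titius-Bode sequence described above: start with 0, 3, 6, 12, 24, ...,
# doubling each time, then add 4 to each term and divide by 10 to get
# orbital radii in astronomical units.
def titius_bode(n_terms):
    seq = [0, 3]
    while len(seq) < n_terms:
        seq.append(seq[-1] * 2)
    return [(x + 4) / 10 for x in seq]

bodies = ["Mercury", "Venus", "Earth", "Mars", "(gap/Ceres)",
          "Jupiter", "Saturn", "Uranus"]
for name, a in zip(bodies, titius_bode(len(bodies))):
    print(f"{name:12s} {a:5.1f} AU")
```

The fifth term, 2.8 AU, is the "missing planet" slot filled by Ceres at about 2.77 AU; the next term beyond Uranus, 38.8 AU, is nowhere near Neptune's roughly 30 AU orbit, which is the failure of the law discussed below.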
The discovery of Neptune in 1846 led to the discrediting of the Titius–Bode law in the eyes of scientists because its orbit was nowhere near the predicted position. To date, no scientific explanation for the law has been given, and astronomers' consensus regards it as a coincidence. [[File:951 Gaspra.jpg|right|thumb|951 Gaspra, the first asteroid imaged by a spacecraft, as viewed during Galileo'''s 1991 flyby; colors are exaggerated]] The expression "asteroid belt" came into use in the early 1850s, although pinpointing who coined the term is difficult. The first English use seems to be in the 1850 translation (by Elise Otté) of Alexander von Humboldt's Cosmos: "[...] and the regular appearance, about the 13th of November and the 11th of August, of shooting stars, which probably form part of a belt of asteroids intersecting the Earth's orbit and moving with planetary velocity". Another early appearance occurred in Robert James Mann's A Guide to the Knowledge of the Heavens: "The orbits of the asteroids are placed in a wide belt of space, extending between the extremes of [...]". The American astronomer Benjamin Peirce seems to have adopted that terminology and to have been one of its promoters. Over 100 asteroids had been located by mid-1868, and in 1891, the introduction of astrophotography by Max Wolf accelerated the rate of discovery. A total of 1,000 asteroids had been found by 1921, 10,000 by 1981, and 100,000 by 2000. Modern asteroid survey systems now use automated means to locate new minor planets in ever-increasing numbers. On 22 January 2014, European Space Agency (ESA) scientists reported the detection, for the first definitive time, of water vapor on Ceres, the largest object in the asteroid belt. The detection was made by using the far-infrared abilities of the Herschel Space Observatory. The finding was unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids". Origin Formation In 1802, shortly after discovering Pallas, Olbers suggested to Herschel and Carl Gauss that Ceres and Pallas were fragments of a much larger planet that once occupied the Mars–Jupiter region, with this planet having suffered an internal explosion or a cometary impact many million years before, while Odesan astronomer K. N. Savchenko suggested that Ceres, Pallas, Juno, and Vesta were escaped moons rather than fragments of the exploded planet. The large amount of energy required to destroy a planet, combined with the belt's low combined mass, which is only about 4% of the mass of Earth's Moon, does not support these hypotheses. Further, the significant chemical differences between the asteroids become difficult to explain if they come from the same planet. A modern hypothesis for the asteroid belt's creation relates to how, in general for the Solar System, planetary formation is thought to have occurred via a process comparable to the long-standing nebular hypothesis; a cloud of interstellar dust and gas collapsed under the influence of gravity to form a rotating disc of material that then conglomerated to form the Sun and planets. During the first few million years of the Solar System's history, an accretion process of sticky collisions caused the clumping of small particles, which gradually increased in size. Once the clumps reached sufficient mass, they could draw in other bodies through gravitational attraction and become planetesimals. 
This gravitational accretion led to the formation of the planets. Planetesimals within the region that would become the asteroid belt were strongly perturbed by Jupiter's gravity. Orbital resonances occurred where the orbital period of an object in the belt formed an integer fraction of the orbital period of Jupiter, perturbing the object into a different orbit; the region lying between the orbits of Mars and Jupiter contains many such orbital resonances. As Jupiter migrated inward following its formation, these resonances would have swept across the asteroid belt, dynamically exciting the region's population and increasing their velocities relative to each other. In regions where the average velocity of the collisions was too high, the shattering of planetesimals tended to dominate over accretion, preventing the formation of a planet. Instead, they continued to orbit the Sun as before, occasionally colliding. During the early history of the Solar System, the asteroids melted to some degree, allowing elements within them to be differentiated by mass. Some of the progenitor bodies may even have undergone periods of explosive volcanism and formed magma oceans. Because of the relatively small size of the bodies, though, the period of melting was necessarily brief compared to the much larger planets, and had generally ended about 4.5 billion years ago, in the first tens of millions of years of formation. In August 2007, a study of zircon crystals in an Antarctic meteorite believed to have originated from Vesta suggested that it, and by extension the rest of the asteroid belt, had formed rather quickly, within 10 million years of the Solar System's origin. Evolution The asteroids are not pristine samples of the primordial Solar System. They have undergone considerable evolution since their formation, including internal heating (in the first few tens of millions of years), surface melting from impacts, space weathering from radiation, and bombardment by micrometeorites. Although some scientists refer to the asteroids as residual planetesimals, other scientists consider them distinct. The current asteroid belt is believed to contain only a small fraction of the mass of the primordial belt. Computer simulations suggest that the original asteroid belt may have contained mass equivalent to the Earth's. Primarily because of gravitational perturbations, most of the material was ejected from the belt within about 1 million years of formation, leaving behind less than 0.1% of the original mass. Since its formation, the size distribution of the asteroid belt has remained relatively stable; no significant increase or decrease in the typical dimensions of the main-belt asteroids has occurred. The 4:1 orbital resonance with Jupiter, at a radius 2.06 astronomical units (AUs), can be considered the inner boundary of the asteroid belt. Perturbations by Jupiter send bodies straying there into unstable orbits. Most bodies formed within the radius of this gap were swept up by Mars (which has an aphelion at 1.67 AU) or ejected by its gravitational perturbations in the early history of the Solar System. The Hungaria asteroids lie closer to the Sun than the 4:1 resonance, but are protected from disruption by their high inclination. When the asteroid belt was first formed, the temperatures at a distance of 2.7 AU from the Sun formed a "snow line" below the freezing point of water. Planetesimals formed beyond this radius were able to accumulate ice. 
In 2006, a population of comets had been discovered within the asteroid belt beyond the snow line, which may have provided a source of water for Earth's oceans. According to some models, outgassing of water during the Earth's formative period was insufficient to form the oceans, requiring an external source such as a cometary bombardment. The outer asteroid belt appears to include a few objects that may have arrived there during the last few hundred years, the list includes also known as 362P. Characteristics Contrary to popular imagery, the asteroid belt is mostly empty. The asteroids are spread over such a large volume that reaching an asteroid without aiming carefully would be improbable. Nonetheless, hundreds of thousands of asteroids are currently known, and the total number ranges in the millions or more, depending on the lower size cutoff. Over 200 asteroids are known to be larger than 100 km, and a survey in the infrared wavelengths has shown that the asteroid belt has between 700,000 and 1.7 million asteroids with a diameter of 1 km or more. The number of asteroids in the main belt steadily increases with decreasing size. Although the size distribution generally follows a power law, there are 'bumps' in the curve at about and , where more asteroids than expected from such a curve are found. Most asteroids larger than approximately in diameter are primordial, having survived from the accretion epoch, whereas most smaller asteroids are products of fragmentation of primordial asteroids. The primordial population of the main belt was probably 200 times what it is today. The absolute magnitudes of most of the known asteroids are between 11 and 19, with the median at about 16. On average the distance between the asteroids is about , although this varies among asteroid families and smaller undetected asteroids might be even closer. The total mass of the asteroid belt is estimated to be kg, which is 3% of the mass of the Moon. The four largest objects, Ceres, Vesta, Pallas, and Hygiea, contain an estimated 62% of the belt's total mass, with 39% accounted for by Ceres alone.For recent estimates of the masses of Ceres, Vesta, Pallas and Hygiea, see the references in the infoboxes of their respective articles. Composition The present day belt consists primarily of three categories of asteroids: C-type carbonaceous asteroids, S-type silicate asteroids, and a hybrid group of X-type asteroids. The hybrid group have featureless spectra, but they can be divided into three groups based on reflectivity, yielding the M-type metallic, P-type primitive, and E-type enstatite asteroids. Additional types have been found that do not fit within these primary classes. There is a compositional trend of asteroid types by increasing distance from the Sun, in the order of S, C, P, and the spectrally-featureless D-types. Carbonaceous asteroids, as their name suggests, are carbon-rich. They dominate the asteroid belt's outer regions, and are rare in the inner belt. Together they comprise over 75% of the visible asteroids. They are redder in hue than the other asteroids and have a low albedo. Their surface compositions are similar to carbonaceous chondrite meteorites. Chemically, their spectra match the primordial composition of the early Solar System, with hydrogen, helium, and volatiles removed. S-type (silicate-rich) asteroids are more common toward the inner region of the belt, within 2.5 AU of the Sun. 
The spectra of their surfaces reveal the presence of silicates and some metal, but no significant carbonaceous compounds. This indicates that their materials have been significantly modified from their primordial composition, probably through melting and reformation. They have a relatively high albedo and form about 17% of the total asteroid population. M-type (metal-rich) asteroids are typically found in the middle of the main belt, and they make up much of the remainder of the total population. Their spectra resemble that of iron-nickel. Some are believed to have formed from the metallic cores of differentiated progenitor bodies that were disrupted through collision. However, some silicate compounds also can produce a similar appearance. For example, the large M-type asteroid 22 Kalliope does not appear to be primarily composed of metal. Within the asteroid belt, the number distribution of M-type asteroids peaks at a semimajor axis of about 2.7 AU. Whether all M-types are compositionally similar, or whether it is a label for several varieties which do not fit neatly into the main C and S classes is not yet clear. One mystery is the relative rarity of V-type (Vestoid) or basaltic asteroids in the asteroid belt. Theories of asteroid formation predict that objects the size of Vesta or larger should form crusts and mantles, which would be composed mainly of basaltic rock, resulting in more than half of all asteroids being composed either of basalt or of olivine. However, observations suggest that 99% of the predicted basaltic material is missing. Until 2001, most basaltic bodies discovered in the asteroid belt were believed to originate from the asteroid Vesta (hence their name V-type), but the discovery of the asteroid 1459 Magnya revealed a slightly different chemical composition from the other basaltic asteroids discovered until then, suggesting a different origin. This hypothesis was reinforced by the further discovery in 2007 of two asteroids in the outer belt, 7472 Kumakiri and , with a differing basaltic composition that could not have originated from Vesta. These two are the only V-type asteroids discovered in the outer belt to date. The temperature of the asteroid belt varies with the distance from the Sun. For dust particles within the belt, typical temperatures range from 200 K (−73 °C) at 2.2 AU down to 165 K (−108 °C) at 3.2 AU. However, due to rotation, the surface temperature of an asteroid can vary considerably as the sides are alternately exposed to solar radiation then to the stellar background. Main-belt comets Several otherwise unremarkable bodies in the outer belt show cometary activity. Because their orbits cannot be explained through the capture of classical comets, many of the outer asteroids are thought to be icy, with the ice occasionally exposed to sublimation through small impacts. Main-belt comets may have been a major source of the Earth's oceans because the deuterium-hydrogen ratio is too low for classical comets to have been the principal source. Orbits Most asteroids within the asteroid belt have orbital eccentricities of less than 0.4, and an inclination of less than 30°. The orbital distribution of the asteroids reaches a maximum at an eccentricity around 0.07 and an inclination below 4°. Thus, although a typical asteroid has a relatively circular orbit and lies near the plane of the ecliptic, some asteroid orbits can be highly eccentric or travel well outside the ecliptic plane. 
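The dust temperatures quoted in the Composition section above are roughly consistent with an equilibrium temperature that falls off as the inverse square root of distance from the Sun. The following is a minimal check that assumes only that simple scaling.

```python
# Equilibrium temperature of an absorbing body scales as T ∝ d**(-1/2).
# Starting from the quoted 200 K at 2.2 AU, predict the value at 3.2 AU.
T_inner, d_inner = 200.0, 2.2   # K, AU (values quoted above)
d_outer = 3.2                   # AU

T_outer = T_inner * (d_inner / d_outer) ** 0.5
print(f"Predicted T at {d_outer} AU: {T_outer:.0f} K")  # ≈ 166 K vs. the quoted 165 K
```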
Sometimes, the term "main belt" is used to refer only to the more compact "core" region where the greatest concentration of bodies is found. This lies between the strong 4:1 and 2:1 Kirkwood gaps at 2.06 and 3.27 AU, and at orbital eccentricities less than roughly 0.33, along with orbital inclinations below about 20°. , this "core" region contained 93% of all discovered and numbered minor planets within the Solar System. The JPL Small-Body Database lists over 1 million known main-belt asteroids. Kirkwood gaps The semimajor axis of an asteroid is used to describe the dimensions of its orbit around the Sun, and its value determines the minor planet's orbital period. In 1866, Daniel Kirkwood announced the discovery of gaps in the distances of these bodies' orbits from the Sun. They were located in positions where their period of revolution about the Sun was an integer fraction of Jupiter's orbital period. Kirkwood proposed that the gravitational perturbations of the planet led to the removal of asteroids from these orbits. When the mean orbital period of an asteroid is an integer fraction of the orbital period of Jupiter, a mean-motion resonance with the gas giant is created that is sufficient to perturb an asteroid to new orbital elements. Primordial asteroids entered these gaps because of the migration of Jupiter's orbit. Subsequently, asteroids primarily migrate into these gap orbits due to the Yarkovsky effect, but may also enter because of perturbations or collisions. After entering, an asteroid is gradually nudged into a different, random orbit with a larger or smaller semimajor axis. Collisions The high population of the asteroid belt makes for an active environment, where collisions between asteroids occur frequently (on deep time scales). Impact events between main-belt bodies with a mean radius of 10 km are expected to occur about once every 10 million years. A collision may fragment an asteroid into numerous smaller pieces (leading to the formation of a new asteroid family). Conversely, collisions that occur at low relative speeds may also join two asteroids. After more than 4 billion years of such processes, the members of the asteroid belt now bear little resemblance to the original population. Evidence suggests that most main belt asteroids between 200 m and 10 km in diameter are rubble piles formed by collisions. These bodies consist of a multitude of irregular objects that are mostly bound together by self-gravity, resulting in significant amounts of internal porosity. Along with the asteroid bodies, the asteroid belt also contains bands of dust with particle radii of up to a few hundred micrometres. This fine material is produced, at least in part, from collisions between asteroids, and by the impact of micrometeorites upon the asteroids. Due to the Poynting–Robertson effect, the pressure of solar radiation causes this dust to slowly spiral inward toward the Sun. The combination of this fine asteroid dust, as well as ejected cometary material, produces the zodiacal light. This faint auroral glow can be viewed at night extending from the direction of the Sun along the plane of the ecliptic. Asteroid particles that produce visible zodiacal light average about 40 μm in radius. The typical lifetimes of main-belt zodiacal cloud particles are about 700,000 years. Thus, to maintain the bands of dust, new particles must be steadily produced within the asteroid belt. It was once thought that collisions of asteroids form a major component of the zodiacal light. 
However, computer simulations by Nesvorný and colleagues attributed 85 percent of the zodiacal-light dust to fragmentations of Jupiter-family comets, rather than to comets and collisions between asteroids in the asteroid belt. At most 10 percent of the dust is attributed to the asteroid belt. Meteorites Some of the debris from collisions can form meteoroids that enter the Earth's atmosphere. Of the 50,000 meteorites found on Earth to date, 99.8 percent are believed to have originated in the asteroid belt. Families and groups In 1918, the Japanese astronomer Kiyotsugu Hirayama noticed that the orbits of some of the asteroids had similar parameters, forming families or groups. Approximately one-third of the asteroids in the asteroid belt are members of an asteroid family. These share similar orbital elements, such as semi-major axis, eccentricity, and orbital inclination as well as similar spectral features, which indicate a common origin in the breakup of a larger body. Graphical displays of these element pairs, for members of the asteroid belt, show concentrations indicating the presence of an asteroid family. There are about 20 to 30 associations that are likely asteroid families. Additional groupings have been found that are less certain. Asteroid families can be confirmed when the members display similar spectral features. Smaller associations of asteroids are called groups or clusters. Some of the most prominent families in the asteroid belt (in order of increasing semi-major axes) are the Flora, Eunomia, Koronis, Eos, and Themis families. The Flora family, one of the largest with more than 800 known members, may have formed from a collision less than 1 billion years ago. The largest asteroid to be a true member of a family is 4 Vesta. (This is in contrast to an interloper, in the case of Ceres with the Gefion family.) The Vesta family is believed to have formed as the result of a crater-forming impact on Vesta. Likewise, the HED meteorites may also have originated from Vesta as a result of this collision. Three prominent bands of dust have been found within the asteroid belt. These have similar orbital inclinations as the Eos, Koronis, and Themis asteroid families, and so are possibly associated with those groupings. The main belt evolution after the Late Heavy Bombardment was likely affected by the passages of large Centaurs and trans-Neptunian objects (TNOs). Centaurs and TNOs that reach the inner Solar System can modify the orbits of main belt asteroids, though only if their mass is of the order of for single encounters or, one order less in case of multiple close encounters. However, Centaurs and TNOs are unlikely to have significantly dispersed young asteroid families in the main belt, although they can have perturbed some old asteroid families. Current main belt asteroids that originated as Centaurs or trans-Neptunian objects may lie in the outer belt with short lifetime of less than 4 million years, most likely orbiting between 2.8 and 3.2 AU at larger eccentricities than typical of main belt asteroids. Periphery Skirting the inner edge of the belt (ranging between 1.78 and 2.0 AU, with a mean semi-major axis of 1.9 AU) is the Hungaria family of minor planets. They are named after the main member, 434 Hungaria; the group contains at least 52 named asteroids. The Hungaria group is separated from the main body by the 4:1 Kirkwood gap and their orbits have a high inclination. 
Some members belong to the Mars-crossing category of asteroids, and gravitational perturbations by Mars are likely a factor in reducing the total population of this group. Another high-inclination group in the inner part of the asteroid belt is the Phocaea family. These are composed primarily of S-type asteroids, whereas the neighboring Hungaria family includes some E-types. The Phocaea family orbit between 2.25 and 2.5 AU from the Sun. Skirting the outer edge of the asteroid belt is the Cybele group, orbiting between 3.3 and 3.5 AU. These have a 7:4 orbital resonance with Jupiter. The Hilda family orbit between 3.5 and 4.2 AU with relatively circular orbits and a stable 3:2 orbital resonance with Jupiter. There are few asteroids beyond 4.2 AU, until Jupiter's orbit. At the latter the two families of Trojan asteroids can be found, which, at least for objects larger than 1 km, are approximately as numerous as the asteroids of the asteroid belt. New families Some asteroid families have formed recently, in astronomical terms. The Karin family apparently formed about 5.7 million years ago from a collision with a progenitor asteroid 33 km in radius. The Veritas family formed about 8.3 million years ago; evidence includes interplanetary dust recovered from ocean sediment. More recently, the Datura cluster appears to have formed about 530,000 years ago from a collision with a main-belt asteroid. The age estimate is based on the probability of the members having their current orbits, rather than from any physical evidence. However, this cluster may have been a source for some zodiacal dust material. Other recent cluster formations, such as the Iannini cluster ( million years ago), may have provided additional sources of this asteroid dust. Exploration The first spacecraft to traverse the asteroid belt was Pioneer 10, which entered the region on 16 July 1972. At the time there was some concern that the debris in the belt would pose a hazard to the spacecraft, but it has since been safely traversed by multiple spacecraft without incident. Pioneer 11, Voyagers 1 and 2 and Ulysses passed through the belt without imaging any asteroids. Cassini measured plasma and fine dust grains while traversing the belt in 2000. On its way to Jupiter, Juno traversed the asteroid belt without collecting science data. Due to the low density of materials within the belt, the odds of a probe running into an asteroid are estimated at less than 1 in 1 billion. Most main belt asteroids imaged to date have come from brief flyby opportunities by probes headed for other targets. Only the Dawn mission has studied main belt asteroids for a protracted period in orbit. The Galileo spacecraft imaged 951 Gaspra in 1991 and 243 Ida in 1993, then NEAR imaged 253 Mathilde in 1997 and landed on near–Earth asteroid 433 Eros in February 2001. Cassini imaged 2685 Masursky in 2000, Stardust imaged 5535 Annefrank in 2002, New Horizons imaged 132524 APL in 2006, and Rosetta imaged 2867 Šteins in September 2008 and 21 Lutetia in July 2010. Dawn orbited Vesta between July 2011 and September 2012 and has orbited Ceres since March 2015. The Lucy space probe made a flyby of 152830 Dinkinesh in 2023, on its way to the Jupiter Trojans. ESA's JUICE mission will pass through the asteroid belt twice, with a proposed flyby of the asteroid 223 Rosa in 2029. The Psyche'' spacecraft is a NASA mission to the large M-type asteroid 16 Psyche.
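As a worked check of the Kirkwood-gap discussion earlier in this article, the semimajor axis of a p:q mean-motion resonance with Jupiter follows directly from Kepler's third law. The sketch below assumes Jupiter's semimajor axis to be about 5.20 AU, a value not quoted in the text above.

```python
# Location of a p:q mean-motion resonance with Jupiter from Kepler's third law:
# P_ast / P_Jup = q / p  and  P**2 ∝ a**3  =>  a = a_Jup * (q / p) ** (2 / 3)
A_JUPITER = 5.20  # AU, assumed value for illustration

for p, q in [(4, 1), (3, 1), (5, 2), (7, 3), (2, 1)]:
    a = A_JUPITER * (q / p) ** (2 / 3)
    print(f"{p}:{q} resonance at a ≈ {a:.2f} AU")
```

The computed locations (about 2.06, 2.50, 2.82, 2.96 and 3.28 AU) match the 4:1 and 2:1 boundaries of 2.06 and 3.27 AU quoted earlier to within rounding, which depends on the exact value adopted for Jupiter's semimajor axis.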
https://en.wikipedia.org/wiki/Sponge
Sponge
Sponges or sea sponges are primarily marine invertebrates of the metazoan phylum Porifera ( ; meaning 'pore bearer'), a basal animal clade and a sister taxon of the diploblasts. They are sessile filter feeders that are bound to the seabed, and are one of the most ancient members of macrobenthos, with many historical species being important reef-building organisms. Sponges are multicellular organisms consisting of jelly-like mesohyl sandwiched between two thin layers of cells, and usually have tube-like bodies full of pores and channels that allow water to circulate through them. They have unspecialized cells that can transform into other types and that often migrate between the main cell layers and the mesohyl in the process. They do not have complex nervous, digestive or circulatory systems. Instead, most rely on maintaining a constant water flow through their bodies to obtain food and oxygen and to remove wastes, usually via flagella movements of the so-called "collar cells". Believed to be some of the most basal animals alive today, sponges were possibly the first outgroup to branch off the evolutionary tree from the last common ancestor of all animals, with fossil evidence of primitive sponges such as Otavia from as early as the Tonian period (around 800 Mya). The branch of zoology that studies sponges is known as spongiology. Etymology The term sponge derives from the Ancient Greek word . The scientific name Porifera is a neuter plural of the Modern Latin term porifer, which comes from the roots porus meaning "pore, opening", and -fer meaning "bearing or carrying". Overview Sponges are similar to other animals in that they are multicellular, heterotrophic, lack cell walls and produce sperm cells. Unlike other animals, they lack true tissues and organs. Some of them are radially symmetrical, but most are asymmetrical. The shapes of their bodies are adapted for maximal efficiency of water flow through the central cavity, where the water deposits nutrients and then leaves through a hole called the osculum. The single-celled choanoflagellates resemble the choanocyte cells of sponges which are used to drive their water flow systems and capture most of their food. This along with phylogenetic studies of ribosomal molecules have been used as morphological evidence to suggest sponges are the sister group to the rest of animals. A great majority are marine (salt-water) species, ranging in habitat from tidal zones to depths exceeding , though there are freshwater species. All adult sponges are sessile, meaning that they attach to an underwater surface and remain fixed in place (i.e., do not travel). While in their larval stage of life, they are motile. Many sponges have internal skeletons of spicules (skeletal-like fragments of calcium carbonate or silicon dioxide), and/or spongin (a modified type of collagen protein). An internal gelatinous matrix called mesohyl functions as an endoskeleton, and it is the only skeleton in soft sponges that encrust such hard surfaces as rocks. More commonly, the mesohyl is stiffened by mineral spicules, by spongin fibers, or both. 90% of all known sponge species that have the widest range of habitats including all freshwater ones are demosponges that use spongin; many species have silica spicules, whereas some species have calcium carbonate exoskeletons. Calcareous sponges have calcium carbonate spicules and, in some species, calcium carbonate exoskeletons, are restricted to relatively shallow marine waters where production of calcium carbonate is easiest. 
The fragile glass sponges, with "scaffolding" of silica spicules, are restricted to polar regions and the ocean depths where predators are rare. Fossils of all of these types have been found in rocks dated from . In addition Archaeocyathids, whose fossils are common in rocks from , are now regarded as a type of sponge. Although most of the approximately 5,000–10,000 known species of sponges feed on bacteria and other microscopic food in the water, some host photosynthesizing microorganisms as endosymbionts, and these alliances often produce more food and oxygen than they consume. A few species of sponges that live in food-poor environments have evolved as carnivores that prey mainly on small crustaceans. Most sponges reproduce sexually, but they can also reproduce asexually. Sexually reproducing species release sperm cells into the water to fertilize ova released or retained by its mate or "mother"; the fertilized eggs develop into larvae which swim off in search of places to settle. Sponges are known for regenerating from fragments that are broken off, although this only works if the fragments include the right types of cells. Some species reproduce by budding. When environmental conditions become less hospitable to the sponges, for example as temperatures drop, many freshwater species and a few marine ones produce gemmules, "survival pods" of unspecialized cells that remain dormant until conditions improve; they then either form completely new sponges or recolonize the skeletons of their parents. The few species of demosponge that have entirely soft fibrous skeletons with no hard elements have been used by humans over thousands of years for several purposes, including as padding and as cleaning tools. By the 1950s, though, these had been overfished so heavily that the industry almost collapsed, and most sponge-like materials are now synthetic. Sponges and their microscopic endosymbionts are now being researched as possible sources of medicines for treating a wide range of diseases. Dolphins have been observed using sponges as tools while foraging. Distinguishing features Sponges constitute the phylum Porifera, and have been defined as sessile metazoans (multicelled immobile animals) that have water intake and outlet openings connected by chambers lined with choanocytes, cells with whip-like flagella. However, a few carnivorous sponges have lost these water flow systems and the choanocytes. All known living sponges can remold their bodies, as most types of their cells can move within their bodies and a few can change from one type to another. Even if a few sponges are able to produce mucus – which acts as a microbial barrier in all other animals – no sponge with the ability to secrete a functional mucus layer has been recorded. Without such a mucus layer their living tissue is covered by a layer of microbial symbionts, which can contribute up to 40–50% of the sponge wet mass. This inability to prevent microbes from penetrating their porous tissue could be a major reason why they have never evolved a more complex anatomy. Like cnidarians (jellyfish, etc.) and ctenophores (comb jellies), and unlike all other known metazoans, sponges' bodies consist of a non-living jelly-like mass (mesohyl) sandwiched between two main layers of cells. Cnidarians and ctenophores have simple nervous systems, and their cell layers are bound by internal connections and by being mounted on a basement membrane (thin fibrous mat, also known as "basal lamina"). 
Sponges do not have a nervous system similar to that of vertebrates but may have one that is quite different. Their middle jelly-like layers have large and varied populations of cells, and some types of cells in their outer layers may move into the middle layer and change their functions. Basic structure Cell types A sponge's body is hollow and is held in shape by the mesohyl, a jelly-like substance made mainly of collagen and reinforced by a dense network of fibers also made of collagen. 18 distinct cell types have been identified. The inner surface is covered with choanocytes, cells with cylindrical or conical collars surrounding one flagellum per choanocyte. The wave-like motion of the whip-like flagella drives water through the sponge's body. All sponges have ostia, channels leading to the interior through the mesohyl, and in most sponges these are controlled by tube-like porocytes that form closable inlet valves. Pinacocytes, plate-like cells, form a single-layered external skin over all other parts of the mesohyl that are not covered by choanocytes, and the pinacocytes also digest food particles that are too large to enter the ostia, while those at the base of the animal are responsible for anchoring it. Other types of cells live and move within the mesohyl: Lophocytes are amoeba-like cells that move slowly through the mesohyl and secrete collagen fibres. Collencytes are another type of collagen-producing cell. Rhabdiferous cells secrete polysaccharides that also form part of the mesohyl. Oocytes and spermatocytes are reproductive cells. Sclerocytes secrete the mineralized spicules ("little spines") that form the skeletons of many sponges and in some species provide some defense against predators. In addition to or instead of sclerocytes, demosponges have spongocytes that secrete a form of collagen that polymerizes into spongin, a thick fibrous material that stiffens the mesohyl. Myocytes ("muscle cells") conduct signals and cause parts of the animal to contract. "Grey cells" act as sponges' equivalent of an immune system. Archaeocytes (or amoebocytes) are amoeba-like cells that are totipotent, in other words, each is capable of transformation into any other type of cell. They also have important roles in feeding and in clearing debris that block the ostia. Many larval sponges possess neuron-less eyes that are based on cryptochromes. They mediate phototaxic behavior. Glass sponges present a distinctive variation on this basic plan. Their spicules, which are made of silica, form a scaffolding-like framework between whose rods the living tissue is suspended like a cobweb that contains most of the cell types. This tissue is a syncytium that in some ways behaves like many cells that share a single external membrane, and in others like a single cell with multiple nuclei. Water flow and body structures Most sponges work rather like chimneys: they take in water at the bottom and eject it from the osculum at the top. Since ambient currents are faster at the top, the suction effect that they produce by Bernoulli's principle does some of the work for free. Sponges can control the water flow by various combinations of wholly or partially closing the osculum and ostia (the intake pores) and varying the beat of the flagella, and may shut it down if there is a lot of sand or silt in the water. Although the layers of pinacocytes and choanocytes resemble the epithelia of more complex animals, they are not bound tightly by cell-to-cell connections or a basal lamina (thin fibrous sheet underneath). 
The flexibility of these layers and re-modeling of the mesohyl by lophocytes allow the animals to adjust their shapes throughout their lives to take maximum advantage of local water currents. The simplest body structure in sponges is a tube or vase shape known as "asconoid", but this severely limits the size of the animal. The body structure is characterized by a stalk-like spongocoel surrounded by a single layer of choanocytes. If it is simply scaled up, the ratio of its volume to surface area increases, because surface increases as the square of length or width while volume increases proportionally to the cube. The amount of tissue that needs food and oxygen is determined by the volume, but the pumping capacity that supplies food and oxygen depends on the area covered by choanocytes. Asconoid sponges seldom exceed in diameter. Some sponges overcome this limitation by adopting the "syconoid" structure, in which the body wall is pleated. The inner pockets of the pleats are lined with choanocytes, which connect to the outer pockets of the pleats by ostia. This increase in the number of choanocytes and hence in pumping capacity enables syconoid sponges to grow up to a few centimeters in diameter. The "leuconoid" pattern boosts pumping capacity further by filling the interior almost completely with mesohyl that contains a network of chambers lined with choanocytes and connected to each other and to the water intakes and outlet by tubes. Leuconid sponges grow to over in diameter, and the fact that growth in any direction increases the number of choanocyte chambers enables them to take a wider range of forms, for example, "encrusting" sponges whose shapes follow those of the surfaces to which they attach. All freshwater and most shallow-water marine sponges have leuconid bodies. The networks of water passages in glass sponges are similar to the leuconid structure. In all three types of structure, the cross-section area of the choanocyte-lined regions is much greater than that of the intake and outlet channels. This makes the flow slower near the choanocytes and thus makes it easier for them to trap food particles. For example, in Leuconia, a small leuconoid sponge about tall and in diameter, water enters each of more than 80,000 intake canals at 6 cm per minute. However, because Leuconia has more than 2 million flagellated chambers whose combined diameter is much greater than that of the canals, water flow through chambers slows to 3.6 cm per hour, making it easy for choanocytes to capture food. All the water is expelled through a single osculum at about 8.5 cm per second, fast enough to carry waste products some distance away. Skeleton In zoology, a skeleton is any fairly rigid structure of an animal, irrespective of whether it has joints and irrespective of whether it is biomineralized. The mesohyl functions as an endoskeleton in most sponges, and is the only skeleton in soft sponges that encrust hard surfaces such as rocks. More commonly the mesohyl is stiffened by mineral spicules, by spongin fibers or both. Spicules, which are present in most but not all species, may be made of silica or calcium carbonate, and vary in shape from simple rods to three-dimensional "stars" with up to six rays. Spicules are produced by sclerocyte cells, and may be separate, connected by joints, or fused. Some sponges also secrete exoskeletons that lie completely outside their organic components. 
For example, sclerosponges ("hard sponges") have massive calcium carbonate exoskeletons over which the organic matter forms a thin layer with choanocyte chambers in pits in the mineral. These exoskeletons are secreted by the pinacocytes that form the animals' skins. Vital functions Movement Although adult sponges are fundamentally sessile animals, some marine and freshwater species can move across the sea bed at speeds of per day, as a result of amoeba-like movements of pinacocytes and other cells. A few species can contract their whole bodies, and many can close their oscula and ostia. Juveniles drift or swim freely, while adults are stationary. Respiration, feeding and excretion Sponges do not have distinct circulatory, respiratory, digestive, and excretory systems – instead, the water flow system supports all these functions. They filter food particles out of the water flowing through them. Particles larger than 50 micrometers cannot enter the ostia and pinacocytes consume them by phagocytosis (engulfing and intracellular digestion). Particles from 0.5 μm to 50 μm are trapped in the ostia, which taper from the outer to inner ends. These particles are consumed by pinacocytes or by archaeocytes which partially extrude themselves through the walls of the ostia. Bacteria-sized particles, below 0.5 micrometers, pass through the ostia and are caught and consumed by choanocytes. Since the smallest particles are by far the most common, choanocytes typically capture 80% of a sponge's food supply. Archaeocytes transport food packaged in vesicles from cells that directly digest food to those that do not. At least one species of sponge has internal fibers that function as tracks for use by nutrient-carrying archaeocytes, and these tracks also move inert objects. It used to be claimed that glass sponges could live on nutrients dissolved in sea water and were very averse to silt. However, a study in 2007 found no evidence of this and concluded that they extract bacteria and other micro-organisms from water very efficiently (about 79%) and process suspended sediment grains to extract such prey. Collar bodies digest food and distribute it wrapped in vesicles that are transported by dynein "motor" molecules along bundles of microtubules that run throughout the syncytium. Sponges' cells absorb oxygen by diffusion from water into cells as water flows through body, into which carbon dioxide and other soluble waste products such as ammonia also diffuse. Archeocytes remove mineral particles that threaten to block the ostia, transport them through the mesohyl and generally dump them into the outgoing water current, although some species incorporate them into their skeletons. Carnivorous sponges In waters where the supply of food particles is very poor, some species prey on crustaceans and other small animals. So far only 137 species have been discovered. Most belong to the family Cladorhizidae, but a few members of the Guitarridae and Esperiopsidae are also carnivores. In most cases, little is known about how they actually capture prey, although some species are thought to use either sticky threads or hooked spicules. Most carnivorous sponges live in deep waters, up to , and the development of deep-ocean exploration techniques is expected to lead to the discovery of several more. However, one species has been found in Mediterranean caves at depths of , alongside the more usual filter-feeding sponges. 
The cave-dwelling predators capture crustaceans under long by entangling them with fine threads, digest them by enveloping them with further threads over the course of a few days, and then return to their normal shape; there is no evidence that they use venom. Most known carnivorous sponges have completely lost the water flow system and choanocytes. However, the genus Chondrocladia uses a highly modified water flow system to inflate balloon-like structures that are used for capturing prey. Endosymbionts Freshwater sponges often host green algae as endosymbionts within archaeocytes and other cells and benefit from nutrients produced by the algae. Many marine species host other photosynthesizing organisms, most commonly cyanobacteria but in some cases dinoflagellates. Symbiotic cyanobacteria may form a third of the total mass of living tissue in some sponges, and some sponges gain 48% to 80% of their energy supply from these micro-organisms. In 2008, a University of Stuttgart team reported that spicules made of silica conduct light into the mesohyl, where the photosynthesizing endosymbionts live. Sponges that host photosynthesizing organisms are most common in waters with relatively poor supplies of food particles and often have leafy shapes that maximize the amount of sunlight they collect. A recently discovered carnivorous sponge that lives near hydrothermal vents hosts methane-eating bacteria and digests some of them. "Immune" system Sponges do not have the complex immune systems of most other animals. However, they reject grafts from other species but accept them from other members of their own species. In a few marine species, gray cells play the leading role in rejection of foreign material. When invaded, they produce a chemical that stops movement of other cells in the affected area, thus preventing the intruder from using the sponge's internal transport systems. If the intrusion persists, the grey cells concentrate in the area and release toxins that kill all cells in the area. The "immune" system can stay in this activated state for up to three weeks. Reproduction Asexual Sponges have three asexual methods of reproduction: after fragmentation, by budding, and by producing gemmules. Fragments of sponges may be detached by currents or waves. They use the mobility of their pinacocytes and choanocytes and reshaping of the mesohyl to re-attach themselves to a suitable surface and then rebuild themselves as small but functional sponges over the course of several days. The same capabilities enable sponges that have been squeezed through a fine cloth to regenerate. A sponge fragment can only regenerate if it contains both collencytes to produce mesohyl and archeocytes to produce all the other cell types. A very few species reproduce by budding. Gemmules are "survival pods" which a few marine sponges and many freshwater species produce by the thousands when dying and which some, mainly freshwater species, regularly produce in autumn. Spongocytes make gemmules by wrapping shells of spongin, often reinforced with spicules, round clusters of archeocytes that are full of nutrients. Freshwater gemmules may also include photosynthesizing symbionts. The gemmules then become dormant, and in this state can survive cold, drying out, lack of oxygen and extreme variations in salinity. Freshwater gemmules often do not revive until the temperature drops, stays cold for a few months and then reaches a near-"normal" level. 
When a gemmule germinates, the archeocytes round the outside of the cluster transform into pinacocytes, a membrane over a pore in the shell bursts, the cluster of cells slowly emerges, and most of the remaining archeocytes transform into other cell types needed to make a functioning sponge. Gemmules from the same species but different individuals can join forces to form one sponge. Some gemmules are retained within the parent sponge, and in spring it can be difficult to tell whether an old sponge has revived or been "recolonized" by its own gemmules. Sexual Most sponges are hermaphrodites (function as both sexes simultaneously), although sponges have no gonads (reproductive organs). Sperm are produced by choanocytes or entire choanocyte chambers that sink into the mesohyl and form spermatic cysts, while eggs are formed by transformation of archeocytes, or of choanocytes in some species. Each egg generally acquires a yolk by consuming "nurse cells". During spawning, sperm burst out of their cysts and are expelled via the osculum. If they contact another sponge of the same species, the water flow carries them to choanocytes that engulf them but, instead of digesting them, metamorphose to an ameboid form and carry the sperm through the mesohyl to eggs, which in most cases engulf the carrier and its cargo. A few species release fertilized eggs into the water, but most retain the eggs until they hatch. By retaining the eggs, the parents can transfer symbiotic microorganisms directly to their offspring through vertical transmission, while species that release their eggs into the water have to acquire symbionts horizontally (a combination of both is probably most common, where larvae with vertically transmitted symbionts also acquire others horizontally). There are four types of larvae, but all are lecithotrophic (non-feeding) balls of cells with an outer layer of cells whose flagella or cilia enable the larvae to move. After swimming for a few days the larvae sink and crawl until they find a place to settle. Most of the cells transform into archeocytes and then into the types appropriate for their locations in a miniature adult sponge. Glass sponge embryos start by dividing into separate cells, but once 32 cells have formed they rapidly transform into larvae that externally are ovoid with a band of cilia round the middle that they use for movement, but internally have the typical glass sponge structure of spicules with a cobweb-like main syncytium draped around and between them and choanosyncytia with multiple collar bodies in the center. The larvae then leave their parents' bodies. Meiosis The cytological progression of poriferan oogenesis and spermatogenesis (gametogenesis) is very similar to that of other metazoans. Most of the genes from the classic set of meiotic genes, including genes for DNA recombination and double-strand break repair, that are conserved in eukaryotes are expressed in the sponges (e.g. Geodia hentscheli and Geodia phlegraei). Since Porifera are considered to be the earliest divergent animals, these findings indicate that the basic toolkit of meiosis, including capabilities for recombination and DNA repair, was present early in eukaryote evolution. Life cycle Sponges in temperate regions live for at most a few years, but some tropical species and perhaps some deep-ocean ones may live for 200 years or more. Some calcified demosponges grow by only per year and, if that rate is constant, specimens wide must be about 5,000 years old. 
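The age estimate just above is simple arithmetic: if the growth rate is assumed constant, a specimen's age is its size divided by its annual growth increment. A minimal sketch of that calculation, using numbers chosen purely for illustration (the measured growth rate and specimen width are not reproduced in the text above):

```python
def sponge_age_years(width_mm: float, growth_mm_per_year: float) -> float:
    """Estimate a sponge's age assuming a constant growth rate in width."""
    if growth_mm_per_year <= 0:
        raise ValueError("growth rate must be positive")
    return width_mm / growth_mm_per_year

# Illustrative values only: a 1,000 mm wide specimen growing 0.2 mm per year
# would be roughly 5,000 years old.
print(sponge_age_years(1000.0, 0.2))  # -> 5000.0
```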
Some sponges start sexual reproduction when only a few weeks old, while others wait until they are several years old. Coordination of activities Adult sponges lack neurons or any other kind of nervous tissue. However, most species have the ability to perform movements that are coordinated all over their bodies, mainly contractions of the pinacocytes, squeezing the water channels and thus expelling excess sediment and other substances that may cause blockages. Some species can contract the osculum independently of the rest of the body. Sponges may also contract in order to reduce the area that is vulnerable to attack by predators. In cases where two sponges are fused, for example if there is a large but still unseparated bud, these contraction waves slowly become coordinated in both of the "Siamese twins". The coordinating mechanism is unknown, but may involve chemicals similar to neurotransmitters. However, glass sponges rapidly transmit electrical impulses through all parts of the syncytium, and use this to halt the motion of their flagella if the incoming water contains toxins or excessive sediment. Myocytes are thought to be responsible for closing the osculum and for transmitting signals between different parts of the body. Sponges contain genes very similar to those that contain the "recipe" for the post-synaptic density, an important signal-receiving structure in the neurons of all other animals. However, in sponges these genes are only activated in "flask cells" that appear only in larvae and may provide some sensory capability while the larvae are swimming. This raises questions about whether flask cells represent the predecessors of true neurons or are evidence that sponges' ancestors had true neurons but lost them as they adapted to a sessile lifestyle. Ecology Habitats Sponges are worldwide in their distribution, living in a wide range of ocean habitats, from the polar regions to the tropics. Most live in quiet, clear waters, because sediment stirred up by waves or currents would block their pores, making it difficult for them to feed and breathe. The greatest numbers of sponges are usually found on firm surfaces such as rocks, but some sponges can attach themselves to soft sediment by means of a root-like base. Sponges are more abundant but less diverse in temperate waters than in tropical waters, possibly because organisms that prey on sponges are more abundant in tropical waters. Glass sponges are the most common in polar waters and in the depths of temperate and tropical seas, as their very porous construction enables them to extract food from these resource-poor waters with the minimum of effort. Demosponges and calcareous sponges are abundant and diverse in shallower non-polar waters. The different classes of sponge live in different ranges of habitat:

Class | Water type | Depth | Type of surface
Calcarea | Marine | Less than | Hard
Glass sponges | Marine | Deep | Soft or firm sediment
Demosponges | Marine, brackish; and about 150 freshwater species | Inter-tidal to abyssal; a carnivorous demosponge has been found at | Any

As primary producers Sponges with photosynthesizing endosymbionts produce up to three times more oxygen than they consume, as well as more organic matter than they consume. Such contributions to their habitats' resources are significant along Australia's Great Barrier Reef but relatively minor in the Caribbean. 
Defenses Many sponges shed spicules, forming a dense carpet several meters deep that keeps away echinoderms which would otherwise prey on the sponges. They also produce toxins that prevent other sessile organisms such as bryozoans or sea squirts from growing on or near them, making sponges very effective competitors for living space. One of many examples is ageliferin. A few species, such as the Caribbean fire sponge Tedania ignis, cause a severe rash in humans who handle them. Turtles and some fish feed mainly on sponges. It is often said that sponges produce chemical defenses against such predators. However, experiments have been unable to establish a relationship between the toxicity of chemicals produced by sponges and how they taste to fish, which would diminish the usefulness of chemical defenses as deterrents. Predation by fish may even help to spread sponges by detaching fragments. However, some studies have shown that fish prefer non-chemically defended sponges, and another study found that high levels of coral predation did predict the presence of chemically defended species. Glass sponges produce no toxic chemicals, and live in very deep water where predators are rare. Predation Spongeflies, also known as spongillaflies (Neuroptera, Sisyridae), are specialist predators of freshwater sponges. The female lays her eggs on vegetation overhanging water. The larvae hatch and drop into the water where they seek out sponges to feed on. They use their elongated mouthparts to pierce the sponge and suck the fluids within. The larvae of some species cling to the surface of the sponge while others take refuge in the sponge's internal cavities. The fully grown larvae leave the water and spin a cocoon in which to pupate. Bioerosion The Caribbean chicken-liver sponge Chondrilla nucula secretes toxins that kill coral polyps, allowing the sponges to grow over the coral skeletons. Others, especially in the family Clionaidae, use corrosive substances secreted by their archeocytes to tunnel into rocks, corals and the shells of dead mollusks. Sponges may remove up to per year from reefs, creating visible notches just below low-tide level. Diseases Caribbean sponges of the genus Aplysina suffer from Aplysina red band syndrome. This causes Aplysina to develop one or more rust-colored bands, sometimes with adjacent bands of necrotic tissue. These lesions may completely encircle branches of the sponge. The disease appears to be contagious and affects approximately ten percent of A. cauliformis on Bahamian reefs. The rust-colored bands are caused by a cyanobacterium, but it is unknown whether this organism actually causes the disease. Collaboration with other organisms In addition to hosting photosynthesizing endosymbionts, sponges are noted for their wide range of collaborations with other organisms. The relatively large encrusting sponge Lissodendoryx colombiensis is most common on rocky surfaces, but has extended its range into seagrass meadows by letting itself be surrounded or overgrown by seagrass sponges, which are distasteful to the local starfish and therefore protect Lissodendoryx against them; in return, the seagrass sponges get higher positions away from the sea-floor sediment. Shrimps of the genus Synalpheus form colonies in sponges, and each shrimp species inhabits a different sponge species, making Synalpheus one of the most diverse crustacean genera. 
Specifically, Synalpheus regalis utilizes the sponge not only as a food source, but also as a defense against other shrimp and predators. As many as 16,000 individuals inhabit a single loggerhead sponge, feeding off the larger particles that collect on the sponge as it filters the ocean to feed itself. Other crustaceans, such as hermit crabs, commonly have a specific species of sponge, Pseudospongosorites, growing on them; both the sponge and crab occupy gastropod shells until they outgrow the shell, at which point the crab uses the sponge's body as protection instead of the shell until it finds a suitable replacement shell. Sponge loop Most sponges are detritivores that filter organic debris particles and microscopic life forms from ocean water. In particular, sponges occupy an important role as detritivores in coral reef food webs by recycling detritus to higher trophic levels. The hypothesis has been made that coral reef sponges facilitate the transfer of coral-derived organic matter to their associated detritivores via the production of sponge detritus. Several sponge species are able to convert coral-derived DOM into sponge detritus, and transfer organic matter produced by corals further up the reef food web. Corals release organic matter as both dissolved and particulate mucus, as well as cellular material such as expelled Symbiodinium. Organic matter could be transferred from corals to sponges by all these pathways, but DOM likely makes up the largest fraction, as the majority (56 to 80%) of coral mucus dissolves in the water column, and coral loss of fixed carbon due to expulsion of Symbiodinium is typically negligible (0.01%) compared with mucus release (up to ~40%). Coral-derived organic matter could also be indirectly transferred to sponges via bacteria, which can also consume coral mucus. Sponge holobiont Besides one-to-one symbiotic relationships, it is possible for a host to become symbiotic with a microbial consortium, resulting in a diverse sponge microbiome. Sponges are able to host a wide range of microbial communities that can also be very specific. The microbial communities that form a symbiotic relationship with the sponge can amount to as much as 35% of the biomass of its host. This specific symbiotic relationship, in which a microbial consortium pairs with a host, is called a holobiotic relationship. The sponge, as well as the microbial community associated with it, produces a large range of secondary metabolites that help protect it against predators through mechanisms such as chemical defense. Some of these relationships include endosymbionts within bacteriocyte cells, and cyanobacteria or microalgae found below the pinacoderm cell layer where they are able to receive the highest amount of light, used for phototrophy. They can host over 50 different microbial phyla and candidate phyla, including Alphaproteobacteria, Actinomycetota, Chloroflexota, Nitrospirota, "Cyanobacteria", the taxa Gamma-, the candidate phylum Poribacteria, and Thaumarchaea. Systematics Taxonomy Carl Linnaeus, who classified most kinds of sessile animals as belonging to the order Zoophyta in the class Vermes, mistakenly identified the genus Spongia as plants in the order Algae. For a long time thereafter, sponges were assigned to subkingdom Parazoa ("beside the animals"), separated from the Eumetazoa which formed the rest of the kingdom Animalia. 
The phylum Porifera is further divided into classes mainly according to the composition of their skeletons: Hexactinellida (glass sponges) have silicate spicules, the largest of which have six rays and may be individual or fused. The main components of their bodies are syncytia in which large numbers of cells share a single external membrane. Calcarea have skeletons made of calcite, a form of calcium carbonate, which may form separate spicules or large masses. All the cells have a single nucleus and membrane. Most Demospongiae have silicate spicules or spongin fibers or both within their soft tissues. However, a few also have massive external skeletons made of aragonite, another form of calcium carbonate. All the cells have a single nucleus and membrane. Archaeocyatha are known only as fossils from the Cambrian period. In the 1970s, sponges with massive calcium carbonate skeletons were assigned to a separate class, Sclerospongiae, otherwise known as "coralline sponges". However, in the 1980s, it was found that these were all members of either the Calcarea or the Demospongiae. So far scientific publications have identified about 9,000 poriferan species, of which: about 400 are glass sponges; about 500 are calcareous species; and the rest are demosponges. However, some types of habitat, vertical rock and cave walls and galleries in rock and coral boulders, have been investigated very little, even in shallow seas. Classes Sponges were traditionally distributed in three classes: calcareous sponges (Calcarea), glass sponges (Hexactinellida) and demosponges (Demospongiae). However, studies have now shown that the Homoscleromorpha, a group thought to belong to the Demospongiae, is genetically well separated from the other sponge classes. Therefore, they have recently been recognized as the fourth class of sponges. Sponges are divided into classes mainly according to the composition of their skeletons, which are compared below, arranged in order of their evolutionary divergence from top to bottom:

Class | Type of cells | Spicules | Spongin fibers | Massive exoskeleton | Body form
Hexactinellida | Mostly syncytia in all species | Silica; may be individual or fused | Never | Never | Leuconoid
Demospongiae | Single nucleus, single external membrane | Silica | In many species | In some species; made of aragonite if present | Leuconoid
Calcarea | Single nucleus, single external membrane | Calcite; may be individual or large masses | Never | Common; made of calcite if present | Asconoid, syconoid, leuconoid or solenoid
Homoscleromorpha | Single nucleus, single external membrane | Silica | In many species | Never | Sylleibid or leuconoid

Phylogeny The phylogeny of sponges has been debated heavily since the advent of phylogenetics. Sponges were originally thought to be the most basal animal phylum, but there is now considerable evidence that Ctenophora may hold that title instead. Additionally, the monophyly of the phylum is now under question. Several studies have concluded that all other animals emerged from within the sponges, and usually recover that the calcareous sponges and Homoscleromorpha are closer to other animals than to demosponges. The internal relationships of Porifera have proven to be less uncertain. A close relationship of Homoscleromorpha and Calcarea has been recovered in nearly all studies, whether or not they support sponge or eumetazoan monophyly. 
The position of glass sponges is also fairly certain, with a majority of studies recovering them as the sister of the demosponges. Thus, the uncertainty at the base of the animal family tree is probably best represented by the below cladogram. Evolutionary history Fossil record Although molecular clocks and biomarkers suggest sponges existed well before the Cambrian explosion of life, silica spicules like those of demosponges are absent from the fossil record until the Cambrian. An unsubstantiated 2002 report exists of spicules in rocks dated around . Well-preserved fossil sponges from about in the Ediacaran period have been found in the Doushantuo Formation. These fossils, which include: spicules; pinacocytes; porocytes; archeocytes; sclerocytes; and the internal cavity, have been classified as demosponges. The Ediacaran record of sponges also contains two other genera: the stem-hexactinellid Helicolocellus from the Dengying Formation and the possible stem-archaeocyathan Arimasia from the Nama Group. These genera are both from the “Nama assemblage” of Ediacaran biota, although whether this is due to a genuine lack beforehand or preservational bias is uncertain. Fossils of glass sponges have been found from around in rocks in Australia, China, and Mongolia. Early Cambrian sponges from Mexico belonging to the genus Kiwetinokia show evidence of fusion of several smaller spicules to form a single large spicule. Calcium carbonate spicules of calcareous sponges have been found in Early Cambrian rocks from about in Australia. Other probable demosponges have been found in the Early Cambrian Chengjiang fauna, from . Fossils found in the Canadian Northwest Territories dating to may be sponges; if this finding is confirmed, it suggests the first animals appeared before the Neoproterozoic oxygenation event. Freshwater sponges appear to be much younger, as the earliest known fossils date from the Mid-Eocene period about . Although about 90% of modern sponges are demosponges, fossilized remains of this type are less common than those of other types because their skeletons are composed of relatively soft spongin that does not fossilize well. The earliest sponge symbionts are known from the early Silurian. A chemical tracer is 24-isopropyl cholestane, which is a stable derivative of 24-isopropyl cholesterol, which is said to be produced by demosponges but not by eumetazoans ("true animals", i.e. cnidarians and bilaterians). Since choanoflagellates are thought to be animals' closest single-celled relatives, a team of scientists examined the biochemistry and genes of one choanoflagellate species. They concluded that this species could not produce 24-isopropyl cholesterol but that investigation of a wider range of choanoflagellates would be necessary in order to prove that the fossil 24-isopropyl cholestane could only have been produced by demosponges. Although a previous publication reported traces of the chemical 24-isopropyl cholestane in ancient rocks dating to , recent research using a much more accurately dated rock series has revealed that these biomarkers only appear before the end of the Marinoan glaciation approximately , and that "Biomarker analysis has yet to reveal any convincing evidence for ancient sponges pre-dating the first globally extensive Neoproterozoic glacial episode (the Sturtian, ~ in Oman)". 
While it has been argued that this 'sponge biomarker' could have originated from marine algae, recent research suggests that the algae's ability to produce this biomarker evolved only in the Carboniferous; as such, the biomarker remains strongly supportive of the presence of demosponges in the Cryogenian. Archaeocyathids, which some classify as a type of coralline sponge, are very common fossils in rocks from the Early Cambrian about , but apparently died out by the end of the Cambrian . It has been suggested that they were produced by: sponges; cnidarians; algae; foraminiferans; a completely separate phylum of animals, Archaeocyatha; or even a completely separate kingdom of life, labeled Archaeata or Inferibionta. Since the 1990s, archaeocyathids have been regarded as a distinctive group of sponges. It is difficult to fit chancelloriids into classifications of sponges or more complex animals. An analysis in 1996 concluded that they were closely related to sponges on the grounds that the detailed structure of chancellorid sclerites ("armor plates") is similar to that of fibers of spongin, a collagen protein, in modern keratose (horny) demosponges such as Darwinella. However, another analysis in 2002 concluded that chancelloriids are not sponges and may be intermediate between sponges and more complex animals, among other reasons because their skins were thicker and more tightly connected than those of sponges. In 2008, a detailed analysis of chancelloriids' sclerites concluded that they were very similar to those of halkieriids, mobile bilaterian animals that looked like slugs in chain mail and whose fossils are found in rocks from the very Early Cambrian to the Mid Cambrian. If this is correct, it would create a dilemma, as it is extremely unlikely that totally unrelated organisms could have developed such similar sclerites independently, but the huge difference in the structures of their bodies makes it hard to see how they could be closely related. Relationships to other animal groups In the 1990s, sponges were widely regarded as a monophyletic group, all of them having descended from a common ancestor that was itself a sponge, and as the "sister-group" to all other metazoans (multi-celled animals), which themselves form a monophyletic group. On the other hand, some 1990s analyses also revived the idea that animals' nearest evolutionary relatives are choanoflagellates, single-celled organisms very similar to sponges' choanocytes – which would imply that most Metazoa evolved from very sponge-like ancestors and therefore that sponges may not be monophyletic, as the same sponge-like ancestors may have given rise both to modern sponges and to non-sponge members of Metazoa. Analyses since 2001 have concluded that Eumetazoa (more complex than sponges) are more closely related to particular groups of sponges than to other sponge groups. Such conclusions imply that sponges are not monophyletic, because the last common ancestor of all sponges would also be a direct ancestor of the Eumetazoa, which are not sponges. A study in 2001 based on comparisons of ribosome DNA concluded that the most fundamental division within sponges was between glass sponges and the rest, and that Eumetazoa are more closely related to calcareous sponges (those with calcium carbonate spicules) than to other types of sponge. 
In 2007, one analysis based on comparisons of RNA and another based mainly on comparison of spicules concluded that demosponges and glass sponges are more closely related to each other than either is to the calcareous sponges, which in turn are more closely related to Eumetazoa. Other anatomical and biochemical evidence links the Eumetazoa with Homoscleromorpha, a sub-group of demosponges. A comparison in 2007 of nuclear DNA, excluding glass sponges and comb jellies, concluded that: Homoscleromorpha are most closely related to Eumetazoa; calcareous sponges are the next closest; the other demosponges are evolutionary "aunts" of these groups; and the chancelloriids, bag-like animals whose fossils are found in Cambrian rocks, may be sponges. The sperm of Homoscleromorpha share features with the sperm of Eumetazoa that the sperm of other sponges lack. In both Homoscleromorpha and Eumetazoa, layers of cells are bound together by attachment to a carpet-like basal membrane composed mainly of "type IV" collagen, a form of collagen not found in other sponges – although the spongin fibers that reinforce the mesohyl of all demosponges are similar to "type IV" collagen. The analyses described above concluded that sponges are closest to the ancestors of all Metazoa, that is, of all multi-celled animals including both sponges and more complex groups. However, another comparison in 2008 of 150 genes in each of 21 genera, ranging from fungi to humans but including only two species of sponge, suggested that comb jellies (Ctenophora) are the most basal lineage of the Metazoa included in the sample. If this is correct, either modern comb jellies developed their complex structures independently of other Metazoa, or sponges' ancestors were more complex and all known sponges are drastically simplified forms. The study recommended further analyses using a wider range of sponges and other simple Metazoa such as Placozoa. However, reanalysis of the data showed that the computer algorithms used for analysis were misled by the presence of specific ctenophore genes that were markedly different from those of other species, leaving sponges as either the sister group to all other animals, or an ancestral paraphyletic grade. 'Family trees' constructed using a combination of all available data – morphological, developmental and molecular – concluded that the sponges are in fact a monophyletic group, and with the cnidarians form the sister group to the bilaterians. A very large and internally consistent alignment of 1,719 proteins at the metazoan scale, published in 2017, showed that (i) sponges – represented by Homoscleromorpha, Calcarea, Hexactinellida, and Demospongiae – are monophyletic, (ii) sponges are sister-group to all other multicellular animals, (iii) ctenophores emerge as the second-earliest branching animal lineage, and (iv) placozoans emerge as the third animal lineage, followed by cnidarians sister-group to bilaterians. In March 2021, scientists from Dublin found additional evidence that sponges are the sister group to all other animals, while in May 2023, Schultz et al. found patterns of irreversible change in genome synteny that provide strong evidence that ctenophores are the sister group to all other animals instead. Notable spongiologists Céline Allewaert Patricia Bergquist James Scott Bowerbank Maurice Burton Henry John Carter Max Walker de Laubenfels Arthur Dendy Édouard Placide Duchassaing de Fontbressin Randolph Kirkpatrick Robert J. 
Lendlmayer von Lendenfeld Edward Alfred Minchin Giovanni Domenico Nardo Eduard Oscar Schmidt Émile Topsent Use By dolphins A report in 1997 described use of sponges as a tool by bottlenose dolphins in Shark Bay in Western Australia. A dolphin will attach a marine sponge to its rostrum, which is presumably then used to protect it when searching for food in the sandy sea bottom. The behavior, known as sponging, has only been observed in this bay and is almost exclusively shown by females. A study in 2005 concluded that mothers teach the behavior to their daughters and that all the sponge users are closely related, suggesting that it is a fairly recent innovation. By humans Skeleton The calcium carbonate or silica spicules of most sponge genera make them too rough for most uses, but two genera, Hippospongia and Spongia, have soft, entirely fibrous skeletons. Early Europeans used soft sponges for many purposes, including padding for helmets, portable drinking utensils and municipal water filters. Until the invention of synthetic sponges, they were used as cleaning tools, applicators for paints and ceramic glazes and discreet contraceptives. However, by the mid-20th century, overfishing brought both the animals and the industry close to extinction. Many objects with sponge-like textures are now made of substances not derived from poriferans. Synthetic sponges include personal and household cleaning tools, breast implants, and contraceptive sponges. Typical materials used are cellulose foam, polyurethane foam, and less frequently, silicone foam. The luffa "sponge", also spelled loofah, which is commonly sold for use in the kitchen or the shower, is not derived from an animal but mainly from the fibrous "skeleton" of the sponge gourd (Luffa aegyptiaca, Cucurbitaceae). Antibiotic compounds Sponges have medicinal potential due to the presence in sponges themselves or their microbial symbionts of chemicals that may be used to control viruses, bacteria, tumors and fungi. Other biologically active compounds Lacking any protective shell or means of escape, sponges have evolved to synthesize a variety of unusual compounds. One such class is the oxidized fatty acid derivatives called oxylipins. Members of this family have been found to have anti-cancer, anti-bacterial and anti-fungal properties. One example isolated from the Okinawan Plakortis sponges, plakoridine A, has shown potential as a cytotoxin to murine lymphoma cells.
Biology and health sciences
Porifera
null
47300
https://en.wikipedia.org/wiki/Movable%20type
Movable type
Movable type (US English; moveable type in British English) is the system and technology of printing and typography that uses movable components to reproduce the elements of a document (usually individual alphanumeric characters or punctuation marks), usually on the medium of paper. Overview The world's first movable type printing technology for paper books was made of porcelain materials and was invented around 1040 AD in China during the Northern Song dynasty by the inventor Bi Sheng (990–1051). The earliest printed paper money with movable metal type to print the identifying code of the money was made in 1161 during the Song dynasty. In 1193, a book from the Song dynasty documented how to use copper movable type. The oldest extant book printed with movable metal type, Jikji, was printed in Korea in 1377 during the Goryeo dynasty. The spread of both movable-type systems was, to some degree, limited primarily to East Asia. The creation of the printing press in Europe may have been influenced by various sporadic reports of movable type technology brought back to Europe by business people and missionaries returning from China. Some of these medieval European accounts are still preserved in the library archives of the Vatican and Oxford University, among many others. Around 1450, German goldsmith Johannes Gutenberg invented the metal movable-type printing press, along with innovations in casting the type based on a matrix and hand mould. The small number of alphabetic characters needed for European languages was an important factor. Gutenberg was the first to create his type pieces from an alloy of lead, tin, and antimony, and these materials remained standard for 550 years. For alphabetic scripts, movable-type page setting was quicker than woodblock printing. The metal type pieces were more durable and the lettering was more uniform, leading to typography and fonts. The high quality and relatively low price of the Gutenberg Bible (1455) established the superiority of movable type in Europe, and the use of printing presses spread rapidly. The printing press may be regarded as one of the key factors fostering the Renaissance and, due to its effectiveness, its use spread around the globe. The 19th-century invention of hot metal typesetting and its successors caused movable type to decline in the 20th century. Precursors to movable type Letter punch and coins The technique of imprinting multiple copies of symbols or glyphs with a master type punch made of hard metal first developed around 3000 BC in ancient Sumer. These metal punch types can be seen as precursors of the letter punches adapted in later millennia to printing with movable metal type. Cylinder seals were used in Mesopotamia to create an impression on a surface by rolling the seal on wet clay. Seals and stamps Seals and stamps may have been precursors to movable type. The uneven spacing of the impressions on brick stamps found in the Mesopotamian cities of Uruk and Larsa, dating from the 2nd millennium BC, has been conjectured by some archaeologists as evidence that the stamps were made using movable type. The enigmatic Minoan Phaistos Disc of –1600 BC has been considered by one scholar as an early example of a body of text being reproduced with reusable characters: it may have been produced by pressing pre-formed hieroglyphic "seals" into the soft clay. A few authors even view the disc as technically meeting all definitional criteria to represent an early incidence of movable-type printing. 
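As noted above, the small character set of alphabetic scripts was an important factor in the European adoption of movable type. A rough back-of-the-envelope comparison, using character counts that are illustrative assumptions rather than figures from this article, shows why a compositor's case for an alphabetic script is so much smaller than one for a logographic script:

```python
# Illustrative comparison of how many distinct type pieces ("sorts") a printer
# must cast before composing pages. All counts are assumptions for illustration.
latin_case = {
    "lowercase letters": 26,
    "uppercase letters": 26,
    "digits": 10,
    "punctuation and spaces": 30,
    "accents and ligatures": 40,
}
logographic_case = {
    "distinct characters in common use": 4000,
}

print("Alphabetic case:", sum(latin_case.values()), "distinct sorts")
print("Logographic case:", sum(logographic_case.values()), "distinct sorts")
```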
Woodblock printing Bones, shells, bamboo slips, metal tablets, stone tablets, silk, as well as other materials were previously used for writing. However, following the invention of paper during the Chinese Han dynasty, writing materials became more portable and economical. Yet, copying books by hand was still labour-consuming. Not until the Xiping Era (172–178 AD), towards the end of the Eastern Han dynasty, did sealing print and monotype appear. These were used to print designs on fabrics and to print texts. By about the 8th century during the Tang dynasty, woodblock printing was invented and worked as follows. First, the neat hand-copied script was stuck on a relatively thick and smooth board, with the front of the paper sticking to the board, the paper being so thin it was transparent, the characters showing in reverse distinctly so that every stroke could be easily recognized. Then, carvers cut away the parts of the board that were not part of the character, so that the characters were cut in relief, completely differently from those cut intaglio. When printing, the bulging characters would have some ink spread on them and be covered by paper. With workers' hands moving on the back of paper gently, characters would be printed on the paper. By the Song dynasty, woodblock printing came to its heyday. Although woodblock printing played an influential role in spreading culture, there were some significant drawbacks. Carving the printing plate required considerable time, labour, and materials. It also was not convenient to store these plates and was difficult to correct mistakes. History Ceramic movable type Bi Sheng () (990–1051) developed the first known movable-type system for printing in China around 1040 AD during the Northern Song dynasty, using ceramic materials. As described by the Chinese scholar Shen Kuo (沈括) (1031–1095): After his death, the ceramic movable-type passed onto his descendants. In 1193, Zhou Bida, an officer of the Southern Song dynasty, made a set of clay movable-type method according to the method described by Shen Kuo in his Dream Pool Essays, and printed his book
Technology
Printing
null
11782509
https://en.wikipedia.org/wiki/Isostatic%20depression
Isostatic depression
Isostatic depression is the sinking of large parts of the Earth's crust into the asthenosphere caused by a heavy weight placed on the Earth's surface, often glacial ice during continental glaciation. Isostatic depression and isostatic rebound occur at rates of centimeters per year. Greenland is an example of an isostatically depressed region. Glacial isostatic depression Isostatic depression is a phase of glacial isostasy, along with isostatic rebound. Glacial isostasy is the Earth's response to changing surface loads of ice and water during the expansion and contraction of large ice sheets. The Earth's asthenosphere acts viscoelastically, flowing when exposed to loads and non-hydrostatic stress, such as ice sheets, for an extended period of time. The Earth's crust is depressed by an amount equal to the ice thickness multiplied by the ratio of ice density to mantle density. This large ice load results in elastic deformation of the entire lithospheric mantle over the span of 10,000–100,000 years, with the load eventually supported by the lithosphere after the limit of local isostatic depression has been attained. Historically, isostatic depression has been used to estimate global ice volume by relating the magnitude of depression to the density of ice and upper mantle material. Glacial megalakes can form in regional depressions under the influence of glacial load. Isostatic depression in Greenland Greenland is isostatically depressed by the Greenland ice sheet. However, due to deglaciation induced by climate change, the regions near the shrinking ice sheet have begun to uplift, a process known as post-glacial rebound. Modeling these glacial isostatic adjustments has been an active area of research, as the entire topography of Greenland is affected by these movements. These movements are unique in that they can be observed on a human time scale, unlike most other geological processes. Models have been created to assess what future equilibrium states of the Greenland ice sheet will look like.
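The relationship described above, in which the depression equals the ice thickness times the ratio of ice to mantle density in the limit of full local isostatic compensation, can be sketched numerically. The density values below are typical textbook figures assumed for illustration, not values given in this article:

```python
def isostatic_depression_m(ice_thickness_m: float,
                           rho_ice: float = 917.0,     # kg/m^3, assumed typical value
                           rho_mantle: float = 3300.0  # kg/m^3, assumed typical value
                           ) -> float:
    """Crustal depression under an ice load at full local isostatic
    compensation: ice thickness times the ice/mantle density ratio."""
    return ice_thickness_m * (rho_ice / rho_mantle)

# A hypothetical 3,000 m thick ice sheet would depress the crust by roughly 830 m.
print(round(isostatic_depression_m(3000.0)))
```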
Physical sciences
Landforms: General
Earth science
6995526
https://en.wikipedia.org/wiki/Hydraulic%20motor
Hydraulic motor
A hydraulic motor is a mechanical actuator that converts hydraulic pressure and flow into torque and angular displacement (rotation). The hydraulic motor is the rotary counterpart of the hydraulic cylinder as a linear actuator. Most broadly, the category of devices called hydraulic motors has sometimes included those that run on hydropower (namely, water engines and water motors) but in today's terminology the name usually refers more specifically to motors that use hydraulic fluid as part of closed hydraulic circuits in modern hydraulic machinery. Conceptually, a hydraulic motor should be interchangeable with a hydraulic pump because it performs the opposite function – similar to the way a DC electric motor is theoretically interchangeable with a DC electrical generator. However, many hydraulic pumps cannot be used as hydraulic motors because they cannot be backdriven. Also, a hydraulic motor is usually designed for working pressure at both sides of the motor, whereas most hydraulic pumps rely on low pressure provided from the reservoir at the input side and would leak fluid when abused as a motor. History of hydraulic motors One of the first rotary hydraulic motors to be developed was that constructed by William Armstrong for his Swing Bridge over the River Tyne. Two motors were provided, for reliability. Each one was a three-cylinder single-acting oscillating engine. Armstrong developed a wide range of hydraulic motors, linear and rotary, that were used for a wide range of industrial and civil engineering tasks, particularly for docks and moving bridges. The first simple fixed-stroke hydraulic motors had the disadvantage that they used the same volume of water whatever the load and so were wasteful at part-power. Unlike steam engines, as water is incompressible, they could not be throttled or their valve cut-off controlled. To overcome this, motors with variable stroke were developed. Adjusting the stroke, rather than controlling admission valves, now controlled the engine power and water consumption. One of the first of these was Arthur Rigg's patent engine of 1886. This used a double eccentric mechanism, as used on variable stroke power presses, to control the stroke length of a three cylinder radial engine. Later, the swashplate engine with an adjustable swashplate angle would become a popular way to make variable stroke hydraulic motors. Hydraulic motor types Vane motors A vane motor consists of a housing with an eccentric bore, in which runs a rotor with vanes in it that slide in and out. The force differential created by the unbalanced force of the pressurized fluid on the vanes causes the rotor to spin in one direction. A critical element in vane motor design is how the vane tips are machined at the contact point between vane tip and motor housing. Several types of "lip" designs are used, and the main objective is to provide a tight seal between the inside of the motor housing and the vane, and at the same time to minimize wear and metal-to-metal contact. Gear motors A gear motor (external gear) consists of two gears, the driven gear (attached to the output shaft by way of a key, etc.) and the idler gear. High pressure oil is ported into one side of the gears, where it flows around the periphery of the gears, between the gear tips and the wall housings in which it resides, to the outlet port. The gears mesh, not allowing the oil from the outlet side to flow back to the inlet side. 
For lubrication, the gear motor uses a small amount of oil from the pressurized side of the gears, bleeds this through the (typically) hydrodynamic bearings, and vents the same oil either to the low pressure side of the gears, or through a dedicated drain port on the motor housing, which is usually connected to a line that vents the motor's case pressure to the system's reservoir. An especially positive attribute of the gear motor is that catastrophic breakdown is less common than in most other types of hydraulic motors. This is because the gears gradually wear down the housing and/or main bushings, reducing the volumetric efficiency of the motor gradually until it is all but useless. This often happens long before wear causes the unit to seize or break down. Gear motors can be supplied as single or double-directional based on their usage, and they are preferred in either aluminum or cast iron bodies, depending on application conditions. They offer design options that can handle radial loads. Additionally, alternative configurations include pressure relief valve, anti-cavitation valve, and speed sensor to meet specific application needs. Gerotor motors The gerotor motor is in essence a rotor with n − 1 teeth, rotating off center in a rotor/stator with n teeth. Pressurized fluid is guided into the assembly using a (usually) axially placed plate-type distributor valve. Several different designs exist, such as the Geroller (internal or external rollers) and Nichols motors. Typically, the Gerotor motors are low-to-medium speed and medium-to-high torque. Axial plunger motors For high quality rotating drive systems, plunger motors are generally used. Whereas the speed of hydraulic pumps range from 1200 to 1800 rpm, the machinery to be driven by the motor often requires a much lower speed. This means that when an axial plunger motor (swept volume maximum 2 litres) is used, a gearbox is usually needed. For a continuously adjustable swept volume, axial piston motors are used. Like piston (reciprocating) type pumps, the most common design of the piston type of motor is the axial. This type of motor is the most commonly used in hydraulic systems. These motors are, like their pump counterparts, available in both variable and fixed displacement designs. Typical usable (within acceptable efficiency) rotational speeds range from below 50 rpm to above 14000 rpm. Efficiencies and minimum/maximum rotational speeds are highly dependent on the design of the rotating group, and many different types are in use. Radial piston motors Radial piston motors are available in two basic types: Pistons pushing inward, and pistons pushing outward. Pistons pushing inward The crankshaft type (e.g. Staffa or SAI hydraulic motors) with a single cam and the pistons pushing inwards is basically an old design but is one which has extremely high starting torque characteristics. They are available in displacements from 40 cc/rev up to about 50 litres/rev but can sometimes be limited in power output. Crankshaft type radial piston motors are capable of running at "creep" speeds and some can run seamlessly up to 1500 rpm whilst offering virtually constant output torque characteristics. This makes them still the most versatile design. The single-cam-type radial piston motor exists in many different designs itself. Usually the difference lies in the way the fluid is distributed to the different pistons or cylinders, and also the design of the cylinders themselves. 
Some motors have pistons attached to the cam using rods (much like in an internal combustion engine), while others employ floating "shoes", and even spherical contact telescopic cylinders like the Parker Denison Calzoni type. Each design has its own set of pros and cons, such as freewheeling ability, high volumetric efficiency, high reliability and so on.
Pistons pushing outward
Multi-lobe cam ring types (e.g. Black Bruin, Rexroth, Hägglunds Drives, Poclain, Rotary Power or Eaton Hydre-MAC type) have a cam ring with multiple lobes and the piston rollers push outward against the cam ring. This produces a very smooth output with high starting torque, but these motors are often limited in the upper speed range. This type of motor is available in a very wide range of displacements, from about 1 litre/rev to 250 litres/rev. These motors are particularly well suited to low-speed applications and can develop very high power.
Braking
Hydraulic motors usually have a drain connection for the internal leakage, which means that when the power unit is turned off, the hydraulic motor in the drive system will move slowly if an external load is acting on it. Thus, for applications such as a crane or winch with a suspended load, there is always a need for a brake or a locking device.
Uses
Hydraulic pumps, motors, and cylinders can be combined into hydraulic drive systems. One or more hydraulic pumps, coupled to one or more hydraulic motors, constitute a hydraulic transmission. Hydraulic motors are now used for many applications, such as winches and crane drives, wheel motors for military vehicles, self-driven cranes, excavators, conveyor and feeder drives, cooling fan drives, mixer and agitator drives, roll mills, drum drives for digesters, trommels and kilns, shredders, drilling rigs, trench cutters, high-powered lawn trimmers, and plastic injection machines. Hydraulic motors are also used in heat transfer applications.
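A pump coupled to a motor behaves much like a gearbox: in an ideal hydrostatic transmission the speed ratio is set by the ratio of the two displacements, and torque scales the other way. A minimal sketch with made-up displacement values, losses ignored:

def hydrostatic_ratios(pump_cc_per_rev, motor_cc_per_rev):
    # Ideal hydrostatic transmission: the motor turns at
    # (pump displacement / motor displacement) times pump speed,
    # while torque is multiplied by the inverse factor (losses ignored).
    speed_ratio = pump_cc_per_rev / motor_cc_per_rev
    torque_ratio = motor_cc_per_rev / pump_cc_per_rev
    return speed_ratio, torque_ratio

# Made-up example: a 45 cc/rev pump driving a 450 cc/rev wheel motor.
print(hydrostatic_ratios(45, 450))   # (0.1, 10.0) -> one tenth the speed, ten times the torque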
Technology
Hydraulics and pneumatics
null
8649736
https://en.wikipedia.org/wiki/Varicella%20vaccine
Varicella vaccine
Varicella vaccine, also known as chickenpox vaccine, is a vaccine that protects against chickenpox. One dose of vaccine prevents 95% of moderate disease and 100% of severe disease. Two doses of vaccine are more effective than one. If given to those who are not immune within five days of exposure to chickenpox it prevents most cases of the disease. Vaccinating a large portion of the population also protects those who are not vaccinated. It is given by injection just under the skin. Another vaccine, known as zoster vaccine, is used to prevent diseases caused by the same virus – the varicella zoster virus. The World Health Organization (WHO) recommends routine vaccination only if a country can keep more than 80% of people vaccinated. If only 20% to 80% of people are vaccinated it is possible that more people will get the disease at an older age and outcomes overall may worsen. Either one or two doses of the vaccine are recommended. In the United States two doses are recommended starting at twelve to fifteen months of age. , twenty-three countries recommend all non-medically exempt children receive the vaccine, nine recommend it only for high-risk groups, three additional countries recommend use in only parts of the country, while other countries make no recommendation. Not all countries provide the vaccine due to its cost. In the United Kingdom, Varilrix, a live viral vaccine is approved from the age of 12 months, but only recommended for certain at risk groups. Minor side effects may include pain at the site of injection, fever, and rash. Severe side effects are rare and occur mostly in those with poor immune function. Its use in people with HIV/AIDS should be done with care. It is not recommended during pregnancy; however, the few times it has been given during pregnancy no problems resulted. The vaccine is available either by itself or along with the MMR vaccine, in a version known as the MMRV vaccine. It is made from weakened virus. A live attenuated varicella vaccine, the Oka strain, was developed by Michiaki Takahashi and his colleagues in Japan in the early 1970s. American vaccinologist Maurice Hilleman's team developed a chickenpox vaccine in the United States in 1981, based on the "Oka strain" of the varicella virus. The chickenpox vaccine first became commercially available in 1984. It is on the WHO Model List of Essential Medicines. Medical uses Varicella vaccine is 70% to 90% effective for preventing varicella and more than 95% effective for preventing severe varicella. Follow-up evaluations have taken place in the United States of children immunized that revealed protection for at least 11 years. Studies were conducted in Japan which indicated protection for at least 20 years. People who do not develop enough protection when they get the vaccine may develop a mild case of the disease when in close contact with a person with chickenpox. In these cases, people show very little sign of illness. This has been the case of children who get the vaccine in their early childhood and later have contact with children with chickenpox. Some of these children may develop mild chickenpox also known as breakthrough disease. Another vaccine, known as zoster vaccine, is simply a larger-than-normal dose of the same vaccine used against chickenpox and is used in older adults to reduce the risk of shingles (also called herpes zoster) and postherpetic neuralgia, which are caused by the same virus. The recombinant zoster (shingles) vaccine is recommended for adults aged 50 years and older. 
Duration of immunity The long-term duration of protection from varicella vaccine is unknown, but there are now persons vaccinated twenty years ago with no evidence of waning immunity, while others have become vulnerable in as few as six years. Assessments of the duration of immunity are complicated in an environment where natural disease is still common, which typically leads to an overestimation of effectiveness. Some vaccinated children have been found to lose their protective antibodies in as little as five to eight years. However, according to the World Health Organization (WHO): "After observation of study populations for periods of up to 20 years in Japan and 10 years in the United States, more than 90% of immunocompetent persons who were vaccinated as children were still protected from varicella." However, since only one out of five Japanese children were vaccinated, the annual exposure of these vaccinees to children with natural chickenpox boosted the vaccinees' immune system. In the United States, where universal varicella vaccination has been practiced, the majority of children no longer receive exogenous (outside) boosting, thus, their cell-mediated immunity to VZV (varicella zoster virus) wanes – necessitating booster chickenpox vaccinations. As time goes on, boosters may be necessary. Persons exposed to the virus after vaccination tend to experience milder cases of chickenpox if they develop the disease. Chickenpox Prior to the widespread introduction of the vaccine in the United States in 1995 (1986 in Japan and 1988 in Korea), there were around 4,000,000 cases per year in the United States, mostly in children, with typically 10,500–13,000 hospital admissions (range, 8,000–18,000), and 100–150 deaths each year. Most of the deaths were among young children. During 2003, and the first half of 2004, the CDC reported eight deaths from varicella, six of whom were children or adolescents. These deaths and hospital admissions have substantially declined in the US due to vaccination, though the rate of shingles infection has increased as adults are less exposed to infected children (which would otherwise help protect against shingles). Ten years after the vaccine was recommended in the US, the CDC reported as much as a 90% drop in chickenpox cases, a varicella-related hospital admission decline of 71% and a 97% drop in chickenpox deaths among those under 20. Vaccines are less effective among high-risk patients, as well as being more dangerous because they contain attenuated live viruses. In a study performed on children with an impaired immune system, 30% had lost the antibody after five years, and 8% had already caught wild chickenpox in those five years. Herpes zoster Herpes zoster (shingles) most often occurs in the elderly and is only rarely seen in children. The incidence of herpes zoster in vaccinated adults is 0.9/1000 person-years, and is 0.33/1000 person-years in vaccinated children; this is lower than the overall incidence of 3.2–4.2/1000 person-years. The risk of developing shingles is reduced for children who receive the varicella vaccine, but not eliminated. The CDC stated in 2014: "Chickenpox vaccines contain weakened live VZV, which may cause latent (dormant) infection. The vaccine-strain VZV can reactivate later in life and cause shingles. However, the risk of getting shingles from vaccine-strain VZV after chickenpox vaccination is much lower than getting shingles after natural infection with wild-type VZV." 
The risk of shingles is significantly lower among children who have received varicella vaccination, including those who are immunocompromised. The risk of shingles is approximately 80% lower among healthy vaccinated children compared to unvaccinated children who had wild-type varicella. A population with high varicella vaccination also has lower incidence of shingles in unvaccinated children, due to herd immunity. Schedule The WHO recommends one or two doses with the initial dose given at 12 to 18 months of age. The second dose, if given, should occur at least one to three months later. The second dose, if given, provides the additional benefit of improved protection against all varicella. This vaccine is a shot given subcutaneously (under the skin). It is recommended for all children under 13 and for everyone 13 or older who has never had chickenpox. In the United States, two doses are recommended by the CDC. For a routine vaccination, the first dose is administered at 12 to 15 months of age and the second dose at age 4–6 years. However, the second dose can be given as early as 3 months after the first dose. If an individual misses the timing for the routine vaccination, the individual is eligible to receive a catch-up vaccination. For a catch-up vaccination, individuals between 7 and 12 years old should receive a two-dose series 3 months apart (a minimum interval of 4 weeks). For individuals 13–18 years old, the catch-up vaccination should be given 4 to 8 weeks apart (a minimum interval of 4 weeks). The varicella vaccine did not become widely available in the United States until 1995. In the UK, the vaccine is only available on the National Health Service for those who are in close contact with someone who is particularly vulnerable to chickenpox. As there is an increased risk of shingles in adults due to possible lack of contact with chickenpox-infected children providing a natural boosting to immunity, and the fact that chickenpox is usually a mild illness, the NHS cites concerns about unvaccinated children catching chickenpox as adults when it is more dangerous. However, the vaccine is approved for 12 months and up and is available privately, with a second dose to be given a year after the first. Contraindications The varicella vaccine is not recommended for seriously ill people, pregnant women, people who have tuberculosis, people who have experienced a serious allergic reaction to the varicella vaccine in the past, people who are allergic to gelatin, people allergic to neomycin, people receiving high doses of steroids, people receiving treatment for cancer with x-rays or chemotherapy, as well as people who have received blood products or transfusions during the past five months. Additionally, the varicella vaccine is not recommended for people who are taking salicylates (e.g. aspirin). After receiving the varicella vaccine, the use of salicylates should be avoided for at least six weeks. The varicella vaccine is also not recommended for individuals who have received a live vaccine in the last four weeks, because live vaccines that are administered too soon within one another may not be as effective. It may be usable in people with HIV infections who have a good blood count and are receiving appropriate treatment. Specific antiviral medication, such as acyclovir, famciclovir, or valacyclovir, are not recommended 24 hours before and 14 days after vaccination. Side effects Serious side effects are very rare. 
From 1998 to 2013, only one vaccine-related death was reported: an English child with pre-existent leukemia. On some occasions, severe reactions such as meningitis and pneumonia have been reported (mainly in inadvertently vaccinated immunocompromised children) as well as anaphylaxis. The possible mild side effects include redness, stiffness, and soreness at the injection site, as well as fever. A few people may develop a mild rash, which usually appears around the injection site. There is a short-term risk of developing herpes zoster (shingles) following vaccination. However, this risk is less than the risk due to a natural infection resulting in chickenpox. Most of the cases reported have been mild and have not been associated with serious complications. Approximately 5% of children who receive the vaccine develop a fever or rash. Adverse reaction reports for the period 1995 to 2005 found no deaths attributed to the vaccine despite approximately 55.7 million doses being delivered. Cases of vaccine-related chickenpox have been reported in patients with a weakened immune system, but no deaths. The literature contains several reports of adverse reactions following varicella vaccination, including vaccine-strain zoster in children and adults. History The varicella-zoster vaccine is made from the Oka/Merck strain of live attenuated varicella virus. The Oka virus was initially obtained from a child with natural varicella, introduced into human embryonic lung cell cultures, adapted to and propagated in embryonic guinea pig cell cultures, and finally propagated in a human diploid cell line originally derived from fetal tissues (WI-38). Takahashi and his colleagues used the Oka strain to develop a live attenuated varicella vaccine in Japan in the early 1970s. This strain was further developed by pharmaceutical companies such as Merck & Co. and GlaxoSmithKline. American vaccinologist Maurice Hilleman's team at Merck then used the Oka strain to prepare a chickenpox vaccine in 1981. Japan was among the first countries to vaccinate for chickenpox. The vaccine developed by Hilleman was first licensed in the United States in 1995. Routine vaccination against varicella zoster virus is also performed in the United States, and the incidence of chickenpox has been dramatically reduced there (from four million cases per year in the pre-vaccine era to approximately 390,000 cases per year ). , standalone varicella vaccines are available in all 27 European Union member countries, and 16 countries also offer a combined measles, mumps, rubella, and varicella vaccine (MMRV). Twelve European countries (Austria, Andorra, Cyprus, Czech Republic, Finland, Germany, Greece, Hungary, Italy, Latvia, Luxembourg and Spain) have universal varicella vaccination (UVV) policies, though only six of these countries have made it available at no cost via government funding. EU member states that have not implemented UVV cite reasons such as "a perceived low disease burden and low public health priority," the cost and cost-effectiveness, the possible risk of herpes zoster when vaccinating older adults, and rare fevers leading to seizures after the first dose of the MMRV vaccine. "Countries that implemented UVV experienced decreases in varicella incidence, hospitalizations, and complications, showing overall beneficial impact." 
Varicella vaccination is recommended in Canada for all healthy children aged 1 to 12, as well as susceptible adolescents and adults 50 years of age and younger; "may be considered for people with select immunodeficiency disorders; and "should be prioritized" for susceptible individuals, including "non-pregnant women of childbearing age, household contacts of immunocompromised individuals, members of a household expecting a newborn, health care workers, adults who may be exposed occupationally to varicella (for example, people who work with young children), immigrants and refugees from tropical regions, people receiving chronic salicylate therapy (for example, acetylsalicylic acid [ASA])," and others. Australia has adopted recommendations for routine immunization of children and susceptible adults against chickenpox. Other countries, such as the United Kingdom, have targeted recommendations for the vaccine, e.g., for susceptible healthcare workers at risk of varicella exposure. In the UK, varicella antibodies are measured as part of the routine of prenatal care, and by 2005 all National Health Service personnel had determined their immunity and been immunized if they were non-immune and had direct patient contact. Population-based immunization against varicella is not otherwise practised in the UK. Since 2013, the MMRV vaccine has been offered for free to all Brazilian citizens. Society and culture Catholic Church The use of fetal tissue in vaccine development is the practice of researching, developing, and producing vaccines through growing viruses in cultured (laboratory-grown) cells that were originally derived from human fetal tissue. Since the cell strains in use originate from abortions, there has been some opposition to the practice and the resulting vaccines on religious and moral grounds. The Roman Catholic Church is opposed to abortion. Nevertheless, the Pontifical Academy for Life stated in 2017 that "clinically recommended vaccinations can be used with a clear conscience and that the use of such vaccines does not signify some sort of cooperation with voluntary abortion". On 21 December 2020, the Vatican's doctrinal office, the Congregation for the Doctrine of the Faith, further clarified that it is "morally " for Catholics to receive vaccines derived from fetal cell lines or in which such lines were used in testing or development, because "passive material cooperation in the procured abortion from which these cell lines originate is, on the part of those making use of the resulting vaccines, remote" and "does not and should not in any way imply that there is a moral endorsement of the use of cell lines proceeding from aborted fetuses".
Biology and health sciences
Vaccines
Health
2508
https://en.wikipedia.org/wiki/Artillery
Artillery
Artillery are ranged weapons that launch munitions far beyond the range and power of infantry firearms. Early artillery development focused on the ability to breach defensive walls and fortifications during sieges, and led to heavy, fairly immobile siege engines. As technology improved, lighter, more mobile field artillery cannons developed for battlefield use. This development continues today; modern self-propelled artillery vehicles are highly mobile weapons of great versatility generally providing the largest share of an army's total firepower. Originally, the word "artillery" referred to any group of soldiers primarily armed with some form of manufactured weapon or armour. Since the introduction of gunpowder and cannon, "artillery" has largely meant cannon, and in contemporary usage, usually refers to shell-firing guns, howitzers, and mortars (collectively called barrel artillery, cannon artillery or gun artillery) and rocket artillery. In common speech, the word "artillery" is often used to refer to individual devices, along with their accessories and fittings, although these assemblages are more properly called "equipment". However, there is no generally recognized generic term for a gun, howitzer, mortar, and so forth: the United States uses "artillery piece", but most English-speaking armies use "gun" and "mortar". The projectiles fired are typically either "shot" (if solid) or "shell" (if not solid). Historically, variants of solid shot including canister, chain shot and grapeshot were also used. "Shell" is a widely used generic term for a projectile, which is a component of munitions. By association, artillery may also refer to the arm of service that customarily operates such engines. In some armies, the artillery arm has operated field, coastal, anti-aircraft, and anti-tank artillery; in others these have been separate arms, and with some nations coastal has been a naval or marine responsibility. In the 20th century, target acquisition devices (such as radar) and techniques (such as sound ranging and flash spotting) emerged, primarily for artillery. These are usually utilized by one or more of the artillery arms. The widespread adoption of indirect fire in the early 20th century introduced the need for specialist data for field artillery, notably survey and meteorological, and in some armies, provision of these are the responsibility of the artillery arm. The majority of combat deaths in the Napoleonic Wars, World War I, and World War II were caused by artillery. In 1944, Joseph Stalin said in a speech that artillery was "the god of war". Artillery piece Although not called by that name, siege engines performing the role recognizable as artillery have been employed in warfare since antiquity. The first known catapult was developed in Syracuse in 399 BC. Until the introduction of gunpowder into western warfare, artillery was dependent upon mechanical energy, which not only severely limited the kinetic energy of the projectiles, but also required the construction of very large engines to accumulate sufficient energy. A 1st-century BC Roman catapult launching stones achieved a kinetic energy of 16 kilojoules, compared to a mid-19th-century 12-pounder gun, which fired a round, with a kinetic energy of 240 kilojoules, or a 20th-century US battleship that fired a projectile from its main battery with an energy level surpassing 350 megajoules. From the Middle Ages through most of the modern era, artillery pieces on land were moved by horse-drawn gun carriages. 
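The energy comparison above is just the kinetic energy relation KE = ½mv² applied to each projectile. As a back-of-the-envelope check (the 5.4 kg shot mass is assumed from the nominal 12 lb rating, and all losses are ignored), the quoted 240 kilojoules implies a muzzle velocity on the order of 300 m/s:

def kinetic_energy_j(mass_kg, velocity_m_s):
    # KE = 1/2 * m * v^2
    return 0.5 * mass_kg * velocity_m_s ** 2

def velocity_for_energy(mass_kg, energy_j):
    # Inverse relation: v = sqrt(2 * E / m)
    return (2 * energy_j / mass_kg) ** 0.5

# Assumed 5.4 kg (nominal 12 lb) shot; the quoted 240 kJ then implies:
print(round(velocity_for_energy(5.4, 240_000)))   # ~298 m/s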
In the contemporary era, artillery pieces and their crew relied on wheeled or tracked vehicles as transportation. These land versions of artillery were dwarfed by railway guns; the largest of these large-calibre guns ever conceived – Project Babylon of the Supergun affair – was theoretically capable of putting a satellite into orbit. Artillery used by naval forces has also changed significantly, with missiles generally replacing guns in surface warfare. Over the course of military history, projectiles were manufactured from a wide variety of materials, into a wide variety of shapes, using many different methods in which to target structural/defensive works and inflict enemy casualties. The engineering applications for ordnance delivery have likewise changed significantly over time, encompassing some of the most complex and advanced technologies in use today. In some armies, the weapon of artillery is the projectile, not the equipment that fires it. The process of delivering fire onto the target is called gunnery. The actions involved in operating an artillery piece are collectively called "serving the gun" by the "detachment" or gun crew, constituting either direct or indirect artillery fire. The manner in which gunnery crews (or formations) are employed is called artillery support. At different periods in history, this may refer to weapons designed to be fired from ground-, sea-, and even air-based weapons platforms. Crew Some armed forces use the term "gunners" for the soldiers and sailors with the primary function of using artillery. The gunners and their guns are usually grouped in teams called either "crews" or "detachments". Several such crews and teams with other functions are combined into a unit of artillery, usually called a battery, although sometimes called a company. In gun detachments, each role is numbered, starting with "1" the Detachment Commander, and the highest number being the Coverer, the second-in-command. "Gunner" is also the lowest rank, and junior non-commissioned officers are "Bombardiers" in some artillery arms. Batteries are roughly equivalent to a company in the infantry, and are combined into larger military organizations for administrative and operational purposes, either battalions or regiments, depending on the army. These may be grouped into brigades; the Russian army also groups some brigades into artillery divisions, and the People's Liberation Army has artillery corps. The term "artillery" also designates a combat arm of most military services when used organizationally to describe units and formations of the national armed forces that operate the weapons. Tactics During military operations, field artillery has the role of providing support to other arms in combat or of attacking targets, particularly in-depth. Broadly, these effects fall into two categories, aiming either to suppress or neutralize the enemy, or to cause casualties, damage, and destruction. This is mostly achieved by delivering high-explosive munitions to suppress, or inflict casualties on the enemy from casing fragments and other debris and from blast, or by destroying enemy positions, equipment, and vehicles. Non-lethal munitions, notably smoke, can also suppress or neutralize the enemy by obscuring their view. Fire may be directed by an artillery observer or another observer, including crewed and uncrewed aircraft, or called onto map coordinates. 
Military doctrine has had a significant influence on the core engineering design considerations of artillery ordnance through its history, in seeking to achieve a balance between the delivered volume of fire with ordnance mobility. However, during the modern period, the consideration of protecting the gunners also arose due to the late-19th-century introduction of the new generation of infantry weapons using conoidal bullet, better known as the Minié ball, with a range almost as long as that of field artillery. The gunners' increasing proximity to and participation in direct combat against other combat arms and attacks by aircraft made the introduction of a gun shield necessary. The problems of how to employ a fixed or horse-towed gun in mobile warfare necessitated the development of new methods of transporting the artillery into combat. Two distinct forms of artillery were developed: the towed gun, used primarily to attack or defend a fixed-line; and the self-propelled gun, intended to accompany a mobile force and to provide continuous fire support and/or suppression. These influences have guided the development of artillery ordnance, systems, organizations, and operations until the present, with artillery systems capable of providing support at ranges from as little as 100 m to the intercontinental ranges of ballistic missiles. The only combat in which artillery is unable to take part is close-quarters combat, with the possible exception of artillery reconnaissance teams. Etymology The word as used in the current context originated in the Middle Ages. One suggestion is that it comes from French atelier, meaning the place where manual work is done. Another suggestion is that it originates from the 13th century and the Old French artillier, designating craftsmen and manufacturers of all materials and warfare equipments (spears, swords, armor, war machines); and, for the next 250 years, the sense of the word "artillery" covered all forms of military weapons. Hence, the naming of the Honourable Artillery Company, which was essentially an infantry unit until the 19th century. Another suggestion is that it comes from the Italian arte de tirare (art of shooting), coined by one of the first theorists on the use of artillery, Niccolò Tartaglia. The term was used by Girolamo Ruscelli (died 1566) in his Precepts of Modern Militia published posthumously in 1572. History Mechanical systems used for throwing ammunition in ancient warfare, also known as "engines of war", like the catapult, onager, trebuchet, and ballista, are also referred to by military historians as artillery. Medieval During medieval times, more types of artillery were developed, most notably the counterweight trebuchet. Traction trebuchets, using manpower to launch projectiles, have been used in ancient China since the 4th century as anti-personnel weapons. The much more powerful counterweight trebuchet was invented in the eastern Mediterranean region in the 12th century, with the earliest definite attestation in 1187. Invention of gunpowder Early Chinese artillery had vase-like shapes. This includes the "long range awe inspiring" cannon dated from 1350 and found in the 14th century Ming dynasty treatise Huolongjing. With the development of better metallurgy techniques, later cannons abandoned the vase shape of early Chinese artillery. This change can be seen in the bronze "thousand ball thunder cannon", an early example of field artillery. 
These small, crude weapons diffused into the Middle East (the madfaa) and reached Europe in the 13th century, in a very limited manner. In Asia, Mongols adopted the Chinese artillery and used it effectively in the great conquest. By the late 14th century, Chinese rebels used organized artillery and cavalry to push Mongols out. As small smooth-bore barrels, these were initially cast in iron or bronze around a core, with the first drilled bore ordnance recorded in operation near Seville in 1247. They fired lead, iron, or stone balls, sometimes large arrows and on occasions simply handfuls of whatever scrap came to hand. During the Hundred Years' War, these weapons became more common, initially as the bombard and later the cannon. Cannons were always muzzle-loaders. While there were many early attempts at breech-loading designs, a lack of engineering knowledge rendered these even more dangerous to use than muzzle-loaders. Expansion of use In 1415, the Portuguese invaded the Mediterranean port town of Ceuta. While it is difficult to confirm the use of firearms in the siege of the city, it is known the Portuguese defended it thereafter with firearms, namely bombardas, colebratas, and falconetes. In 1419, Sultan Abu Sa'id led an army to reconquer the fallen city, and Marinids brought cannons and used them in the assault on Ceuta. Finally, hand-held firearms and riflemen appear in Morocco, in 1437, in an expedition against the people of Tangiers. It is clear these weapons had developed into several different forms, from small guns to large artillery pieces. The artillery revolution in Europe caught on during the Hundred Years' War and changed the way that battles were fought. In the preceding decades, the English had even used a gunpowder-like weapon in military campaigns against the Scottish. However, at this time, the cannons used in battle were very small and not particularly powerful. Cannons were only useful for the defense of a castle, as demonstrated at Breteuil in 1356, when the besieged English used a cannon to destroy an attacking French assault tower. By the end of the 14th century, cannons were only powerful enough to knock in roofs, and could not penetrate castle walls. However, a major change occurred between 1420 and 1430, when artillery became much more powerful and could now batter strongholds and fortresses quite efficiently. The English, French, and Burgundians all advanced in military technology, and as a result the traditional advantage that went to the defense in a siege was lost. Cannons during this period were elongated, and the recipe for gunpowder was improved to make it three times as powerful as before. These changes led to the increased power in the artillery weapons of the time. Joan of Arc encountered gunpowder weaponry several times. When she led the French against the English at the Battle of Tourelles, in 1430, she faced heavy gunpowder fortifications, and yet her troops prevailed in that battle. In addition, she led assaults against the English-held towns of Jargeau, Meung, and Beaugency, all with the support of large artillery units. When she led the assault on Paris, Joan faced stiff artillery fire, especially from the suburb of St. Denis, which ultimately led to her defeat in this battle. In April 1430, she went to battle against the Burgundians, whose support was purchased by the English. 
At this time, the Burgundians had the strongest and largest gunpowder arsenal among the European powers, and yet the French, under Joan of Arc's leadership, were able to beat back the Burgundians and defend themselves. As a result, most of the battles of the Hundred Years' War that Joan of Arc participated in were fought with gunpowder artillery. The army of Mehmet the Conqueror, which conquered Constantinople in 1453, included both artillery and foot soldiers armed with gunpowder weapons. The Ottomans brought to the siege sixty-nine guns in fifteen separate batteries and trained them at the walls of the city. The barrage of Ottoman cannon fire lasted forty days, and they are estimated to have fired 19,320 times. Artillery also played a decisive role in the Battle of St. Jakob an der Birs of 1444. Early cannon were not always reliable; King James II of Scotland was killed by the accidental explosion of one of his own cannon, imported from Flanders, at the siege of Roxburgh Castle in 1460. The able use of artillery supported to a large measure the expansion and defense of the Portuguese Empire, as it was a necessary tool that allowed the Portuguese to face overwhelming odds both on land and sea from Morocco to Asia. In great sieges and in sea battles, the Portuguese demonstrated a level of proficiency in the use of artillery after the beginning of the 16th century unequalled by contemporary European neighbours, in part due to the experience gained in intense fighting in Morocco, which served as a proving ground for artillery and its practical application, and made Portugal a forerunner in gunnery for decades. During the reign of King Manuel (1495–1521) at least 2017 cannon were sent to Morocco for garrison defense, with more than 3000 cannon estimated to have been required during that 26-year period. An especially noticeable division between siege guns and anti-personnel guns enhanced the use and effectiveness of Portuguese firearms above contemporary powers, making cannon the most essential element in the Portuguese arsenal. The three major classes of Portuguese artillery were anti-personnel guns with a high borelength (including: rebrodequim, berço, falconete, falcão, sacre, áspide, cão, serpentina and passavolante); bastion guns which could batter fortifications (camelete, leão, pelicano, basilisco, águia, camelo, roqueira, urso); and howitzers that fired large stone cannonballs in an elevated arch, weighted up to 4000 pounds and could fire incendiary devices, such as a hollow iron ball filled with pitch and fuse, designed to be fired at close range and burst on contact. The most popular in Portuguese arsenals was the berço, a 5 cm, one pounder bronze breech-loading cannon that weighted 150 kg with an effective range of 600 meters. A tactical innovation the Portuguese introduced in fort defense was the use of combinations of projectiles against massed assaults. Although canister shot had been developed in the early 15th century, the Portuguese were the first to employ it extensively, and Portuguese engineers invented a canister round which consisted of a thin lead case filled with iron pellets, that broke up at the muzzle and scattered its contents in a narrow pattern. An innovation which Portugal adopted in advance of other European powers was fuse-delayed action shells, and were commonly used in 1505. Although dangerous, their effectiveness meant a sixth of all rounds used by the Portuguese in Morocco were of the fused-shell variety. 
The new Ming Dynasty established the "Divine Engine Battalion" (神机营), which specialized in various types of artillery. Light cannons and cannons with multiple volleys were developed. In a campaign to suppress a local minority rebellion near today's Burmese border, "the Ming army used a 3-line method of arquebuses/muskets to destroy an elephant formation". When the Portuguese and Spanish arrived at Southeast Asia, they found that the local kingdoms were already using cannons. Portuguese and Spanish invaders were unpleasantly surprised and even outgunned on occasion. Duarte Barbosa ca. 1514 said that the inhabitants of Java were great masters in casting artillery and very good artillerymen. They made many one-pounder cannons (cetbang or rentaka), long muskets, spingarde (arquebus), schioppi (hand cannon), Greek fire, guns (cannons), and other fire-works. In all aspects the Javanese were considered excellent in casting artillery, and in the knowledge of using it. In 1513, the Javanese fleet led by Pati Unus sailed to attack Portuguese Malacca "with much artillery made in Java, for the Javanese are skilled in founding and casting, and in all works in iron, over and above what they have in India". By the early 16th century, the Javanese had already started locally-producing large guns, which were dubbed "sacred cannon[s]" or "holy cannon[s]" and have survived up to the present day - though in limited numbers. These cannons varied between 180 and 260 pounders, weighing anywhere between 3–8 tons, measuring between 3–6 m. Between 1593 and 1597, about 200,000 Korean and Chinese troops which fought against Japan in Korea actively used heavy artillery in both siege and field combat. Korean forces mounted artillery in ships as naval guns, providing an advantage against Japanese navy which used Kunikuzushi (国崩し – Japanese breech-loading swivel gun) and Ōzutsu (大筒 – large size Tanegashima) as their largest firearms. Smoothbores Bombards were of value mainly in sieges. A famous Turkish example used at the siege of Constantinople in 1453 weighed 19 tons, took 200 men and sixty oxen to emplace, and could fire just seven times a day. The Fall of Constantinople was perhaps "the first event of supreme importance whose result was determined by the use of artillery" when the huge bronze cannons of Mehmed II breached the city's walls, ending the Byzantine Empire, according to Sir Charles Oman. Bombards developed in Europe were massive smoothbore weapons distinguished by their lack of a field carriage, immobility once emplaced, highly individual design, and noted unreliability (in 1460 James II, King of Scots, was killed when one exploded at the siege of Roxburgh). Their large size precluded the barrels being cast and they were constructed out of metal staves or rods bound together with hoops like a barrel, giving their name to the gun barrel. The use of the word "cannon" marks the introduction in the 15th century of a dedicated field carriage with axle, trail and animal-drawn limber—this produced mobile field pieces that could move and support an army in action, rather than being found only in the siege and static defenses. The reduction in the size of the barrel was due to improvements in both iron technology and gunpowder manufacture, while the development of trunnions—projections at the side of the cannon as an integral part of the cast—allowed the barrel to be fixed to a more movable base, and also made raising or lowering the barrel much easier. 
The first land-based mobile weapon is usually credited to Jan Žižka, who deployed his oxen-hauled cannon during the Hussite Wars of Bohemia (1418–1424). However, cannons were still large and cumbersome. With the rise of musketry in the 16th century, cannon were largely (though not entirely) displaced from the battlefield—the cannon were too slow and cumbersome to be used and too easily lost to a rapid enemy advance. The combining of shot and powder into a single unit, a cartridge, occurred in the 1620s with a simple fabric bag, and was quickly adopted by all nations. It speeded loading and made it safer, but unexpelled bag fragments were an additional fouling in the gun barrel and a new tool—a worm—was introduced to remove them. Gustavus Adolphus is identified as the general who made cannon an effective force on the battlefield—pushing the development of much lighter and smaller weapons and deploying them in far greater numbers than previously. The outcome of battles was still determined by the clash of infantry. Shells, explosive-filled fused projectiles, were in use by the 15th century. The development of specialized pieces—shipboard artillery, howitzers and mortars—was also begun in this period. More esoteric designs, like the multi-barrel ribauldequin (known as "organ guns"), were also produced. The 1650 book by Kazimierz Siemienowicz Artis Magnae Artilleriae pars prima was one of the most important contemporary publications on the subject of artillery. For over two centuries this work was used in Europe as a basic artillery manual. One of the most significant effects of artillery during this period was however somewhat more indirect—by easily reducing to rubble any medieval-type fortification or city wall (some which had stood since Roman times), it abolished millennia of siege-warfare strategies and styles of fortification building. This led, among other things, to a frenzy of new bastion-style fortifications to be built all over Europe and in its colonies, but also had a strong integrating effect on emerging nation-states, as kings were able to use their newfound artillery superiority to force any local dukes or lords to submit to their will, setting the stage for the absolutist kingdoms to come. Modern rocket artillery can trace its heritage back to the Mysorean rockets of Mysore. Their first recorded use was in 1780 during the battles of the Second, Third and Fourth Mysore Wars. The wars fought between the British East India Company and the Kingdom of Mysore in India made use of the rockets as a weapon. In the Battle of Pollilur, the Siege of Seringapatam (1792) and in Battle of Seringapatam in 1799, these rockets were used with considerable effect against the British. After the wars, several Mysore rockets were sent to England, but experiments with heavier payloads were unsuccessful. In 1804 William Congreve, considering the Mysorian rockets to have too short a range (less than 1,000 yards) developed rockets in numerous sizes with ranges up to 3,000 yards and eventually utilizing iron casing as the Congreve rocket which were used effectively during the Napoleonic Wars and the War of 1812. Napoleonic With the Napoleonic Wars, artillery experienced changes in both physical design and operation. Rather than being overseen by "mechanics", artillery was viewed as its own service branch with the capability of dominating the battlefield. 
The success of the French artillery companies was at least in part due to the presence of specially trained artillery officers leading and coordinating during the chaos of battle. Napoleon, himself a former artillery officer, perfected the tactic of massed artillery batteries unleashed upon a critical point in his enemies' line as a prelude to a decisive infantry and cavalry assault. Physically, cannons continued to become smaller and lighter. During the Seven Years War, King Frederick II of Prussia used these advances to deploy horse artillery that could move throughout the battlefield. Frederick also introduced the reversible iron ramrod, which was much more resistant to breakage than older wooden designs. The reversibility aspect also helped increase the rate of fire, since a soldier would no longer have to worry about what end of the ramrod they were using. Jean-Baptiste de Gribeauval, a French artillery engineer, introduced the standardization of cannon design in the mid-18th century. He developed a 6-inch (150 mm) field howitzer whose gun barrel, carriage assembly and ammunition specifications were made uniform for all French cannons. The standardized interchangeable parts of these cannons down to the nuts, bolts and screws made their mass production and repair much easier. While the Gribeauval system made for more efficient production and assembly, the carriages used were heavy and the gunners were forced to march on foot (instead of riding on the limber and gun as in the British system). Each cannon was named for the weight of its projectiles, giving us variants such as 4, 8, and 12, indicating the weight in pounds. The projectiles themselves included solid balls or canister containing lead bullets or other material. These canister shots acted as massive shotguns, peppering the target with hundreds of projectiles at close range. The solid balls, known as round shot, was most effective when fired at shoulder-height across a flat, open area. The ball would tear through the ranks of the enemy or bounce along the ground breaking legs and ankles. Modern The development of modern artillery occurred in the mid to late 19th century as a result of the convergence of various improvements in the underlying technology. Advances in metallurgy allowed for the construction of breech-loading rifled guns that could fire at a much greater muzzle velocity. After the British artillery was shown up in the Crimean War as having barely changed since the Napoleonic Wars, the industrialist William Armstrong was awarded a contract by the government to design a new piece of artillery. Production started in 1855 at the Elswick Ordnance Company and the Royal Arsenal at Woolwich, and the outcome was the revolutionary Armstrong Gun, which marked the birth of modern artillery. Three of its features particularly stand out. First, the piece was rifled, which allowed for a much more accurate and powerful action. Although rifling had been tried on small arms since the 15th century, the necessary machinery to accurately rifle artillery was not available until the mid-19th century. Martin von Wahrendorff, and Joseph Whitworth independently produced rifled cannon in the 1840s, but it was Armstrong's gun that was first to see widespread use during the Crimean War. The cast iron shell of the Armstrong gun was similar in shape to a Minié ball and had a thin lead coating which made it fractionally larger than the gun's bore and which engaged with the gun's rifling grooves to impart spin to the shell. 
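Because guns of this period were named for the weight of their solid iron shot, the "pounder" rating also fixes the approximate ball diameter, and hence the bore. A small illustration, assuming a density of about 7,870 kg/m³ for the iron (an assumed figure; real bores were also made slightly oversize to allow windage):

import math

IRON_DENSITY = 7870.0   # kg/m^3 (assumed; cast iron is slightly less dense)

def round_shot_diameter_m(shot_mass_kg, density=IRON_DENSITY):
    # Solid sphere of given mass: d = (6 * m / (pi * rho)) ** (1/3)
    volume_m3 = shot_mass_kg / density
    return (6.0 * volume_m3 / math.pi) ** (1.0 / 3.0)

# A 12 lb (~5.44 kg) ball comes out at roughly 0.11 m (about 4.3 in);
# the actual bore was a little larger to allow windage.
print(round(round_shot_diameter_m(5.44), 3))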
This spin, together with the elimination of windage as a result of the tight fit, enabled the gun to achieve greater range and accuracy than existing smooth-bore muzzle-loaders with a smaller powder charge. His gun was also a breech-loader. Although attempts at breech-loading mechanisms had been made since medieval times, the essential engineering problem was that the mechanism could not withstand the explosive charge. It was only with the advances in metallurgy and precision engineering capabilities during the Industrial Revolution that Armstrong was able to construct a viable solution. The gun combined all the properties that make up an effective artillery piece. The gun was mounted on a carriage in such a way as to return the gun to firing position after the recoil. What made the gun really revolutionary lay in the technique of the construction of the gun barrel that allowed it to withstand much more powerful explosive forces. The "built-up" method involved assembling the barrel with wrought-iron (later mild steel was used) tubes of successively smaller diameter. The tube would then be heated to allow it to expand and fit over the previous tube. When it cooled the gun would contract although not back to its original size, which allowed an even pressure along the walls of the gun which was directed inward against the outward forces that the gun's firing exerted on the barrel. Another innovative feature, more usually associated with 20th-century guns, was what Armstrong called its "grip", which was essentially a squeeze bore; the 6 inches of the bore at the muzzle end was of slightly smaller diameter, which centered the shell before it left the barrel and at the same time slightly swaged down its lead coating, reducing its diameter and slightly improving its ballistic qualities. Armstrong's system was adopted in 1858, initially for "special service in the field" and initially he produced only smaller artillery pieces, 6-pounder (2.5 in/64 mm) mountain or light field guns, 9-pounder (3 in/76 mm) guns for horse artillery, and 12-pounder (3 inches /76 mm) field guns. The first cannon to contain all 'modern' features is generally considered to be the French 75 of 1897. The gun used cased ammunition, was breech-loading, had modern sights, and a self-contained firing mechanism. It was the first field gun to include a hydro-pneumatic recoil mechanism, which kept the gun's trail and wheels perfectly still during the firing sequence. Since it did not need to be re-aimed after each shot, the crew could fire as soon as the barrel returned to its resting position. In typical use, the French 75 could deliver fifteen rounds per minute on its target, either shrapnel or melinite high-explosive, up to about 5 miles (8,500 m) away. Its firing rate could even reach close to 30 rounds per minute, albeit only for a very short time and with a highly experienced crew. These were rates that contemporary bolt action rifles could not match. Indirect fire Indirect fire, the firing of a projectile without relying on direct line of sight between the gun and the target, possibly dates back to the 16th century. Early battlefield use of indirect fire may have occurred at Paltzig in July 1759, when the Russian artillery fired over the tops of trees, and at the Battle of Waterloo, where a battery of the Royal Horse Artillery fired shrapnel indirectly against advancing French troops. 
In 1882, Russian Lieutenant Colonel KG Guk published Indirect Fire for Field Artillery, which provided a practical method of using aiming points for indirect fire by describing, "all the essentials of aiming points, crest clearance, and corrections to fire by an observer". A few years later, the Richtfläche (lining-plane) sight was invented in Germany and provided a means of indirect laying in azimuth, complementing the clinometers for indirect laying in elevation which already existed. Despite conservative opposition within the German army, indirect fire was adopted as doctrine by the 1890s. In the early 1900s, Goertz in Germany developed an optical sight for azimuth laying. It quickly replaced the lining-plane; in English, it became the 'Dial Sight' (UK) or 'Panoramic Telescope' (US). The British halfheartedly experimented with indirect fire techniques since the 1890s, but with the onset of the Boer War, they were the first to apply the theory in practice in 1899, although they had to improvise without a lining-plane sight. In the next 15 years leading up to World War I, the techniques of indirect fire became available for all types of artillery. Indirect fire was the defining characteristic of 20th-century artillery and led to undreamt of changes in the amount of artillery, its tactics, organisation, and techniques, most of which occurred during World War I. An implication of indirect fire and improving guns was increasing range between gun and target, this increased the time of flight and the vertex of the trajectory. The result was decreasing accuracy (the increasing distance between the target and the mean point of impact of the shells aimed at it) caused by the increasing effects of non-standard conditions. Indirect firing data was based on standard conditions including a specific muzzle velocity, zero wind, air temperature and density, and propellant temperature. In practice, this standard combination of conditions almost never existed, they varied throughout the day and day to day, and the greater the time of flight, the greater the inaccuracy. An added complication was the need for survey to accurately fix the coordinates of the gun position and provide accurate orientation for the guns. Of course, targets had to be accurately located, but by 1916, air photo interpretation techniques enabled this, and ground survey techniques could sometimes be used. In 1914, the methods of correcting firing data for the actual conditions were often convoluted, and the availability of data about actual conditions was rudimentary or non-existent, the assumption was that fire would always be ranged (adjusted). British heavy artillery worked energetically to progressively solve all these problems from late 1914 onwards, and by early 1918, had effective processes in place for both field and heavy artillery. These processes enabled 'map-shooting', later called 'predicted fire'; it meant that effective fire could be delivered against an accurately located target without ranging. Nevertheless, the mean point of impact was still some tens of yards from the target-centre aiming point. It was not precision fire, but it was good enough for concentrations and barrages. These processes remain in use into the 21st century with refinements to calculations enabled by computers and improved data capture about non-standard conditions. The British Major General Henry Hugh Tudor pioneered armour and artillery cooperation at the breakthrough Battle of Cambrai. 
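As a deliberately simplified picture of what "firing data" involves, the drag-free (vacuum) trajectory relates range, muzzle velocity, elevation and time of flight. Real gunnery, as described above, must then correct for drag, meteorology and the other non-standard conditions, so the sketch below is a toy model with assumed figures, not a fire-control method taken from the text.

import math

G = 9.81  # m/s^2

def low_angle_elevation_rad(range_m, muzzle_velocity_m_s):
    # Drag-free trajectory over level ground: R = v^2 * sin(2*theta) / g,
    # so theta = 0.5 * asin(g * R / v^2) for the low-angle solution.
    arg = G * range_m / muzzle_velocity_m_s ** 2
    if arg > 1.0:
        raise ValueError("target beyond maximum vacuum range v^2/g")
    return 0.5 * math.asin(arg)

def time_of_flight_s(elevation_rad, muzzle_velocity_m_s):
    # t = 2 * v * sin(theta) / g for the same idealised trajectory.
    return 2.0 * muzzle_velocity_m_s * math.sin(elevation_rad) / G

# Assumed figures only: 8,500 m range with a 530 m/s muzzle velocity.
theta = low_angle_elevation_rad(8500, 530)
print(round(math.degrees(theta), 1))            # ~8.6 degrees of elevation
print(round(time_of_flight_s(theta, 530), 1))   # ~16 s time of flight

A real shell fights drag, so the true elevation and time of flight are larger; that gap is exactly why the corrections for non-standard conditions described above matter.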
The improvements in providing and using data for non-standard conditions (propellant temperature, muzzle velocity, wind, air temperature, and barometric pressure) were developed by the major combatants throughout the war and enabled effective predicted fire. The effectiveness of this was demonstrated by the British in 1917 (at Cambrai) and by Germany the following year (Operation Michael). Major General J.B.A. Bailey, British Army (retired) wrote: An estimated 75,000 French soldiers were casualties of friendly artillery fire in the four years of World War I. Precision-guidance Modern artillery is most obviously distinguished by its long range, firing an explosive shell or rocket and a mobile carriage for firing and transport. However, its most important characteristic is the use of indirect fire, whereby the firing equipment is aimed without seeing the target through its sights. Indirect fire emerged at the beginning of the 20th century and was greatly enhanced by the development of predicted fire methods in World War I. However, indirect fire was area fire; it was and is not suitable for destroying point targets; its primary purpose is area suppression. Nevertheless, by the late 1970s precision-guided munitions started to appear, notably the US 155 mm Copperhead and its Soviet 152 mm Krasnopol equivalent that had success in Indian service. These relied on laser designation to 'illuminate' the target that the shell homed onto. However, in the early 21st century, the Global Positioning System (GPS) enabled relatively cheap and accurate guidance for shells and missiles, notably the US 155 mm Excalibur and the 227 mm GMLRS rocket. The introduction of these led to a new issue, the need for very accurate three dimensional target coordinates—the mensuration process. Weapons covered by the term 'modern artillery' include "cannon" artillery (such as howitzer, mortar, and field gun) and rocket artillery. Certain smaller-caliber mortars are more properly designated small arms rather than artillery, albeit indirect-fire small arms. This term also came to include coastal artillery which traditionally defended coastal areas against seaborne attack and controlled the passage of ships. With the advent of powered flight at the start of the 20th century, artillery also included ground-based anti-aircraft batteries. The term "artillery" has traditionally not been used for projectiles with internal guidance systems, preferring the term "missilery", though some modern artillery units employ surface-to-surface missiles. Advances in terminal guidance systems for small munitions has allowed large-caliber guided projectiles to be developed, blurring this distinction. See Long Range Precision Fires (LRPF), Joint terminal attack controller Ammunition One of the most important roles of logistics is the supply of munitions as a primary type of artillery consumable, their storage (ammunition dump, arsenal, magazine ) and the provision of fuzes, detonators and warheads at the point where artillery troops will assemble the charge, projectile, bomb or shell. A round of artillery ammunition comprises four components: Fuze Projectile Propellant Primer Fuzes Fuzes are the devices that initiate an artillery projectile, either to detonate its High Explosive (HE) filling or eject its cargo (illuminating flare or smoke canisters being examples). The official military spelling is "fuze". 
Broadly there are four main types: impact (including graze and delay) mechanical time including airburst proximity sensor including airburst programmable electronic detonation including airburst Most artillery fuzes are nose fuzes. However, base fuzes have been used with armor-piercing shells and for squash head (High-Explosive Squash Head (HESH) or High Explosive, Plastic (HEP) anti-tank shells). At least one nuclear shell and its non-nuclear spotting version also used a multi-deck mechanical time fuze fitted into its base. Impact fuzes were, and in some armies remain, the standard fuze for HE projectiles. Their default action is normally 'superquick', some have had a 'graze' action which allows them to penetrate light cover and others have 'delay'. Delay fuzes allow the shell to penetrate the ground before exploding. Armor or Concrete-Piercing (AP or CP) fuzes are specially hardened. During World War I and later, ricochet fire with delay or graze fuzed HE shells, fired with a flat angle of descent, was used to achieve airburst. HE shells can be fitted with other fuzes. Airburst fuzes usually have a combined airburst and impact function. However, until the introduction of proximity fuzes, the airburst function was mostly used with cargo munitions—for example, shrapnel, illumination, and smoke. The larger calibers of anti-aircraft artillery are almost always used airburst. Airburst fuzes have to have the fuze length (running time) set on them. This is done just before firing using either a wrench or a fuze setter pre-set to the required fuze length. Early airburst fuzes used igniferous timers which lasted into the second half of the 20th century. Mechanical time fuzes appeared in the early part of the century. These required a means of powering them. The Thiel mechanism used a spring and escapement (i.e. 'clockwork'), Junghans used centrifugal force and gears, and Dixi used centrifugal force and balls. From about 1980, electronic time fuzes started replacing mechanical ones for use with cargo munitions. Proximity fuzes have been of two types: photo-electric or radar. The former was not very successful and seems only to have been used with British anti-aircraft artillery 'unrotated projectiles' (rockets) in World War II. Radar proximity fuzes were a big improvement over the mechanical (time) fuzes which they replaced. Mechanical time fuzes required an accurate calculation of their running time, which was affected by non-standard conditions. With HE (requiring a burst 20 to above the ground), if this was very slightly wrong the rounds would either hit the ground or burst too high. Accurate running time was less important with cargo munitions that burst much higher. The first radar proximity fuzes (perhaps originally codenamed 'VT' and later called Variable Time (VT)) were invented by the British and developed by the US and initially used against aircraft in World War II. Their ground use was delayed for fear of the enemy recovering 'blinds' (artillery shells which failed to detonate) and copying the fuze. The first proximity fuzes were designed to detonate about above the ground. These air-bursts are much more lethal against personnel than ground bursts because they deliver a greater proportion of useful fragments and deliver them into terrain where a prone soldier would be protected from ground bursts. However, proximity fuzes can suffer premature detonation because of the moisture in heavy rain clouds. This led to 'Controlled Variable Time' (CVT) after World War II. 
These fuzes have a mechanical timer that switched on the radar about 5 seconds before expected impact; they also detonated on impact. The proximity fuze emerged on the battlefields of Europe in late December 1944. These fuzes became known as the U.S. Artillery's "Christmas present", and were much appreciated when they arrived during the Battle of the Bulge. They were also used to great effect in anti-aircraft projectiles in the Pacific against kamikaze attacks, as well as in Britain against V-1 flying bombs. Electronic multi-function fuzes started to appear around 1980. Using solid-state electronics, they were relatively cheap and reliable, and became the standard fitted fuze in operational ammunition stocks in some western armies. The early versions were often limited to proximity airburst, albeit with height-of-burst options, and impact. Some offered a go/no-go functional test through the fuze setter. Later versions introduced induction fuze setting and testing instead of physically placing a fuze setter on the fuze. The latest, such as Junghans' DM84U, provide options for superquick, delay, a choice of proximity heights of burst, time, and a choice of foliage penetration depths. Projectiles The projectile is the munition or "bullet" fired downrange. This may be an explosive device. Projectiles have traditionally been classified as "shot" or "shell", the former being solid and the latter having some form of "payload". Shells can be divided into three configurations: bursting, base ejection or nose ejection. The latter is sometimes called the shrapnel configuration. The most modern is base ejection, which was introduced in World War I. Base and nose ejection are almost always used with airburst fuzes. Bursting shells use various types of fuze, depending on the nature of the payload and the tactical need at the time. Payloads have included: Bursting: high-explosive, white phosphorus, coloured marker, chemical, nuclear devices; high-explosive anti-tank and canister may be considered special types of bursting shell. Nose ejection: shrapnel, star, incendiary and flechette (a more modern version of shrapnel). Base ejection: Dual-Purpose Improved Conventional Munition bomblets, which arm themselves and function after a set number of rotations after having been ejected from the projectile (this produces unexploded sub-munitions, or "duds", which remain dangerous), scatterable mines, illuminating, coloured flare, smoke, incendiary, propaganda, chaff (foil to jam radars) and modern exotics such as electronic payloads and sensor-fuzed munitions. Stabilization Rifled: Artillery projectiles have traditionally been spin-stabilised, meaning that they spin in flight so that gyroscopic forces prevent them from tumbling. Spin is induced by gun barrels having rifling, which engages a soft metal band around the projectile, called a "driving band" (UK) or "rotating band" (U.S.). The driving band is usually made of copper, but synthetic materials have been used. Smoothbore/fin-stabilized: In modern artillery, smoothbore barrels have been used mostly by mortars. These projectiles use fins in the airflow at their rear to maintain correct orientation. 
The primary benefits over rifled barrels are reduced barrel wear, the longer ranges that can be achieved (due to the reduced loss of energy to friction and to gas escaping around the projectile via the rifling), and a larger explosive payload for a given caliber, because less metal is needed to form the projectile body when the barrel applies no rifling forces to the shell. Rifled/fin-stabilized: A combination of the above can be used, where the barrel is rifled but the projectile also has deployable fins for stabilization, guidance or gliding. Propellant Most forms of artillery require a propellant to propel the projectile to the target. Propellant is always a low explosive, which means it deflagrates rather than detonating like high explosives. The shell is accelerated to a high velocity in a very short time by the rapid generation of gas from the burning propellant. This high pressure is achieved by burning the propellant in a contained area, either the chamber of a gun barrel or the combustion chamber of a rocket motor. Until the late 19th century, the only available propellant was black powder. It had many disadvantages as a propellant; it has relatively low power, requiring large amounts of powder to fire projectiles, and it created thick clouds of white smoke that would obscure the targets, betray the positions of guns, and make aiming impossible. In 1846, nitrocellulose (also known as guncotton) was discovered, and the high explosive nitroglycerin was discovered at nearly the same time. Nitrocellulose was significantly more powerful than black powder, and was smokeless. Early guncotton was unstable, however, and burned very fast and hot, leading to greatly increased barrel wear. Widespread introduction of smokeless powder would wait until the advent of the double-base powders, which combine nitrocellulose and nitroglycerin to produce a powerful, smokeless, stable propellant. Many other formulations were developed in the following decades, generally trying to find the optimum characteristics of a good artillery propellant – low temperature, high energy, non-corrosive, highly stable, cheap, and easy to manufacture in large quantities. Modern gun propellants are broadly divided into three classes: single-base propellants that are mainly or entirely nitrocellulose based, double-base propellants consisting of a combination of nitrocellulose and nitroglycerin, and triple-base propellants composed of a combination of nitrocellulose, nitroglycerin and nitroguanidine. Artillery shells fired from a barrel can be assisted to greater range in three ways: Rocket-assisted projectiles enhance and sustain the projectile's velocity by providing additional 'push' from a small rocket motor that is part of the projectile's base. Base bleed uses a small pyrotechnic charge at the base of the projectile to introduce sufficient combustion products into the low-pressure region behind the base of the projectile that is responsible for a large proportion of the drag. Ramjet-assisted projectiles are similar to rocket-assisted ones but use a ramjet instead of a rocket motor; it is anticipated that a ramjet-assisted 120 mm mortar shell could reach a considerably greater range than a conventional round. Propelling charges for barrel artillery can be provided either as cartridge bags or in metal cartridge cases. Generally, anti-aircraft artillery and smaller-caliber (up to 3" or 76.2 mm) guns use metal cartridge cases that include the round and propellant, similar to a modern rifle cartridge. 
This simplifies loading and is necessary for very high rates of fire. Bagged propellant allows the amount of powder to be raised or lowered, depending on the range to the target. It also makes handling of larger shells easier. Cases and bags require totally different types of breech. A metal case holds an integral primer to initiate the propellant and provides the gas seal to prevent the gases leaking out of the breech; this is called obturation. With bagged charges, the breech itself provides obturation and holds the primer. In either case, the primer is usually percussion, but electrical primers are also used, and laser ignition is emerging. Modern 155 mm guns have a primer magazine fitted to their breech. Artillery ammunition has four classifications according to use: Service: ammunition used in live-fire training or for wartime use in a combat zone; also known as "warshot" ammunition. Practice: ammunition with a non- or minimally explosive projectile that mimics the characteristics (range, accuracy) of live rounds for use under training conditions. Practice artillery ammunition often utilizes a colored-smoke-generating bursting charge for marking purposes in place of the normal high-explosive charge. Dummy: ammunition with an inert warhead, inert primer, and no propellant; used for training or display. Blank: ammunition with a live primer, a greatly reduced propellant charge (typically black powder), and no projectile; used for training, demonstration or ceremonial use. Field artillery system Because modern field artillery mostly uses indirect fire, the guns have to be part of a system that enables them to attack targets invisible to them, in accordance with the combined arms plan. The main functions in the field artillery system are: communications; command: authority to allocate resources; target acquisition: detect, identify and deduce the location of targets; control: authority to decide which targets to attack and allot fire units to the attack; computation of firing data – to deliver fire from a fire unit onto its target; fire units: guns, launchers or mortars grouped together; specialist services: produce data to support the production of accurate firing data; and logistic services: to provide combat supplies, particularly ammunition, and equipment support. All the calculations to produce a quadrant elevation (or range) and azimuth were done manually, using instruments, tabulated data, data of the moment, and approximations, until battlefield computers started appearing in the 1960s and 1970s. While some early calculators copied the manual method (typically substituting polynomials for tabulated data), computers use a different approach. They simulate a shell's trajectory by 'flying' it in short steps and applying data about the conditions affecting the trajectory at each step. This simulation is repeated until it produces a quadrant elevation and azimuth that lands the shell within the required 'closing' distance of the target coordinates (a simplified sketch of this iterative approach is given below). NATO has a standard ballistic model for computer calculations and has expanded its scope into the NATO Armaments Ballistic Kernel (NABK) within the SG2 Shareable (Fire Control) Software Suite (S4). Logistics Supply of artillery ammunition has always been a major component of military logistics. Up until World War I some armies made artillery responsible for all forward ammunition supply, because the load of small arms ammunition was trivial compared to that of artillery. 
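The iterative "fly and correct" computation described above can be illustrated with a short, self-contained sketch. The following Python fragment is illustrative only: it flies a point-mass shell in small time steps under gravity and a crude drag term, then bisects on quadrant elevation until the simulated impact falls within a chosen 'closing' distance of the target range. The muzzle velocity, drag factor and closing distance are invented for the example and do not represent any real gun, nor the NABK model.

```python
# Minimal sketch of computing a quadrant elevation by trajectory simulation.
# All numerical values are illustrative assumptions.
import math

G = 9.81        # gravity, m/s^2
DRAG = 1e-5     # illustrative drag factor, 1/m (deceleration = DRAG * speed^2)
STEP = 0.01     # integration time step, s

def simulate_range(muzzle_velocity, elevation_rad):
    """'Fly' the shell in short steps; return the ground range at impact (m)."""
    x, y = 0.0, 0.0
    vx = muzzle_velocity * math.cos(elevation_rad)
    vy = muzzle_velocity * math.sin(elevation_rad)
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        vx -= DRAG * speed * vx * STEP            # drag opposes the velocity vector
        vy -= (G + DRAG * speed * vy) * STEP      # gravity plus drag
        x += vx * STEP
        y += vy * STEP
    return x

def solve_elevation(muzzle_velocity, target_range, closing=10.0):
    """Bisect on the low-angle quadrant elevation until the simulated impact
    lies within `closing` metres of the target range."""
    low, high = math.radians(1.0), math.radians(45.0)
    mid = (low + high) / 2.0
    for _ in range(60):
        mid = (low + high) / 2.0
        r = simulate_range(muzzle_velocity, mid)
        if abs(r - target_range) <= closing:
            break
        if r < target_range:
            low = mid      # need more elevation for more range (below 45 degrees)
        else:
            high = mid
    return mid

if __name__ == "__main__":
    qe = solve_elevation(muzzle_velocity=680.0, target_range=12000.0)
    print(f"quadrant elevation ~ {math.degrees(qe):.2f} degrees")
```

A real fire-control kernel adds meteorological data, propellant temperature, projectile-specific aerodynamic coefficients and the azimuth solution, but the repeated simulate-and-correct loop is the same basic idea.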
Different armies use different approaches to ammunition supply, which can vary with the nature of operations. Differences include where the logistic service transfers artillery ammunition to artillery, the amount of ammunition carried in units and extent to which stocks are held at unit or battery level. A key difference is whether supply is 'push' or 'pull'. In the former the 'pipeline' keeps pushing ammunition into formations or units at a defined rate. In the latter units fire as tactically necessary and replenish to maintain or reach their authorised holding (which can vary), so the logistic system has to be able to cope with surge and slack. Classification Artillery types can be categorised in several ways, for example by type or size of weapon or ordnance, by role or by organizational arrangements. Types of ordnance The types of cannon artillery are generally distinguished by the velocity at which they fire projectiles. Types of artillery: Cannon: The oldest type of artillery with direct firing trajectory. Bombard: A type of a large calibre, muzzle-loading artillery piece, a cannon or mortar used during sieges to shoot round stone projectiles at the walls of enemy fortifications. Falconet was a type of light cannon developed in the late 15th century that fired a smaller shot than the similar falcon. Swivel gun is a type of small cannon mounted on a swiveling stand or fork which allows a very wide arc of movement. Camel mounted swivel guns called zamburak were used by the Gunpowder Empires as self-propelled artillery. Volley gun is a gun with multiple single-shot barrels that volley fired simultaneously or sequentially in quick succession. Although capable of unleashing intense firepower, volley guns differ from modern machine guns in that they lack autoloading and automatic fire mechanisms Siege artillery: Large-caliber artillery that have limited mobility with indirect firing trajectory, which was used to bombard targets at long distances. Large-calibre artillery. Field artillery: Mobile weapons used to support armies in the field. Subcategories include: Infantry support guns: Directly support infantry units. Mountain guns: Lightweight guns that can be disassembled and transported through difficult terrain. Field guns: Capable of long-range direct fires. Howitzers: Capable of high-angle fire, they are most often employed for indirect-fire. Gun-howitzers: Capable of high or low-angle fire with a longer barrel. Mortars: Typically muzzle-loaded, short-barreled, high-trajectory weapons designed primarily for an indirect-fire role. Gun-mortars: Typically breech-loaded, capable of high or low-angle fire with a longer barrel. Tank guns: Large-caliber guns mounted on tanks to provide mobile direct fire. Anti-tank artillery: Guns, usually mobile, designed primarily for direct fire to destroy armored fighting vehicles with heavy armor. Anti-tank gun: Guns designed for direct fire to destroy tanks and other armored fighting vehicles. Anti-aircraft artillery: Guns, usually mobile, designed for attacking aircraft by land and/or at sea. Some guns were suitable for the dual roles of anti-aircraft and anti-tank warfare. Rocket artillery: Launches rockets or missiles, instead of shot or shell. Railway gun: Large-caliber weapons that are mounted on, transported by and fired from specially-designed railway wagons. Naval artillery: Guns mounted on warships to be used either against other naval vessels or to bombard coastal targets in support of ground forces. 
The crowning achievement of naval artillery was the battleship, but the advent of air power and missiles has rendered this type of artillery largely obsolete. Naval guns are typically longer-barreled, low-trajectory, high-velocity weapons designed primarily for a direct-fire role. Coastal artillery: Fixed-position weapons dedicated to defense of a particular location, usually a coast (for example, the Atlantic Wall in World War II) or harbor. Not needing to be mobile, coastal artillery used to be much larger than equivalent field artillery pieces, giving it longer range and more destructive power. Modern coastal artillery (for example, Russia's "Bereg" system) is often self-propelled (allowing it to avoid counter-battery fire) and fully integrated, meaning that each battery has all of the support systems that it requires (maintenance, targeting radar, etc.) organic to its unit. Aircraft artillery: Large-caliber guns mounted on attack aircraft; these are typically found on slow-flying gunships. Nuclear artillery: Artillery which fires nuclear shells. Modern field artillery can also be split into two other subcategories: towed and self-propelled. As the name suggests, towed artillery has a prime mover, usually an artillery tractor or truck, to move the piece, crew, and ammunition around. Towed artillery is in some cases equipped with an APU for small displacements. Self-propelled artillery is permanently mounted on a carriage or vehicle with room for the crew and ammunition and is thus capable of moving quickly from one firing position to another, both to support the fluid nature of modern combat and to avoid counter-battery fire. It includes mortar carrier vehicles, many of which allow the mortar to be removed from the vehicle and used dismounted, potentially in terrain in which the vehicle cannot navigate, or in order to avoid detection. Organizational types At the beginning of the modern artillery period, the late 19th century, many armies had three main types of artillery; in some cases these were sub-branches within the artillery branch, in others separate branches or corps. There were also other types, excluding the armament fitted to warships: Horse artillery, first formed as regular units in the late 18th century, with the role of supporting cavalry; they were distinguished by the entire crew being mounted. Field or "foot" artillery, the main artillery arm of the field army, using either guns, howitzers, or mortars. In World War II this branch again started using rockets and later surface-to-surface missiles. Fortress or garrison artillery, which operated a nation's fixed defences using guns, howitzers or mortars, either on land or coastal frontiers. Some had deployable elements to provide heavy artillery to the field army. In some nations coast defence artillery was a naval responsibility. Mountain artillery; a few nations treated mountain artillery as a separate branch, in others it was a speciality in another artillery branch. They used light guns or howitzers, usually designed for pack animal transport and easily broken down into small, easily handled loads. Naval artillery: some nations carried pack artillery on some warships; these were used and manhandled by naval (or marine) landing parties. 
At times, part of a ship's armament would be unshipped and mated to makeshift carriages and limbers for actions ashore, for example during the Second Boer War, during the First World War the guns from the stricken SMS Königsberg formed the main artillery strength of the German forces in East Africa. After World War I many nations merged these different artillery branches, in some cases keeping some as sub-branches. Naval artillery disappeared apart from that belonging to marines. However, two new branches of artillery emerged during that war and its aftermath, both used specialised guns (and a few rockets) and used direct not indirect fire, in the 1950s and 1960s both started to make extensive use of missiles: Anti-tank artillery, also under various organisational arrangements but typically either field artillery or a specialist branch and additional elements integral to infantry, etc., units. However, in most armies field and anti-aircraft artillery also had at least a secondary anti-tank role. After World War II anti-tank in Western armies became mostly the responsibility of infantry and armoured branches and ceased to be an artillery matter, with some exceptions. Anti-aircraft artillery, under various organisational arrangements including being part of artillery, a separate corps, even a separate service or being split between army for the field and air force for home defence. In some cases infantry and the new armoured corps also operated their own integral light anti-aircraft artillery. Home defence anti-aircraft artillery often used fixed as well as mobile mountings. Some anti-aircraft guns could also be used as field or anti-tank artillery, providing they had suitable sights. However, the general switch by artillery to indirect fire before and during World War I led to a reaction in some armies. The result was accompanying or infantry guns. These were usually small, short range guns, that could be easily man-handled and used mostly for direct fire but some could use indirect fire. Some were operated by the artillery branch but under command of the supported unit. In World War II they were joined by self-propelled assault guns, although other armies adopted infantry or close support tanks in armoured branch units for the same purpose, subsequently tanks generally took on the accompanying role. Equipment types The three main types of artillery "gun" are field guns, howitzers, and mortars. During the 20th century, guns and howitzers have steadily merged in artillery use, making a distinction between the terms somewhat meaningless. By the end of the 20th century, true guns with calibers larger than about 60 mm have become very rare in artillery use, the main users being tanks, ships, and a few residual anti-aircraft and coastal guns. The term "cannon" is a United States generic term that includes guns, howitzers, and mortars; it is not used in other English speaking armies. The traditional definitions differentiated between guns and howitzers in terms of maximum elevation (well less than 45° as opposed to close to or greater than 45°), number of charges (one or more than one charge), and having higher or lower muzzle velocity, sometimes indicated by barrel length. These three criteria give eight possible combinations, of which guns and howitzers are but two. However, modern "howitzers" have higher velocities and longer barrels than the equivalent "guns" of the first half of the 20th century. 
True guns are characterized by long range, a maximum elevation significantly less than 45°, a high muzzle velocity and hence a relatively long barrel, and a single charge. The latter often led to fixed ammunition, where the projectile is locked to the cartridge case. There is no generally accepted minimum muzzle velocity or barrel length associated with a gun. Howitzers can fire at maximum elevations at least close to 45°; elevations up to about 70° are normal for modern howitzers. Howitzers also have a choice of charges, meaning that the same elevation angle of fire will achieve a different range depending on the charge used. They have lower muzzle velocities and shorter barrels than equivalent guns. All this means they can deliver fire with a steep angle of descent. Because of their multi-charge capability, their ammunition is mostly separate loading (the projectile and propellant are loaded separately). That leaves six combinations of the three criteria, some of which have been termed gun-howitzers. The term was first used in the 1930s when howitzers with a relatively high maximum muzzle velocity were introduced, but it never became widely accepted, most armies electing to widen the definition of "gun" or "howitzer". By the 1960s, most equipment had maximum elevations up to about 70°, was multi-charge, and had quite high maximum muzzle velocities and relatively long barrels. Mortars are simpler. The modern mortar originated in World War I, and there were several patterns. After that war, most mortars settled on the Stokes pattern, characterized by a short barrel, smooth bore, low muzzle velocity, an elevation angle of firing generally greater than 45°, and a very simple and light mounting using a "baseplate" on the ground. The projectile with its integral propelling charge was dropped down the barrel from the muzzle to hit a fixed firing pin. Since that time, a few mortars have become rifled and adopted breech loading. There are other recognized typifying characteristics for artillery. One such characteristic is the type of obturation used to seal the chamber and prevent gases escaping through the breech. This may use a metal cartridge case that also holds the propelling charge, a configuration called "QF" or "quick-firing" by some nations. The alternative does not use a metal cartridge case, the propellant being merely bagged or in combustible cases, with the breech itself providing all the sealing. This is called "BL" or "breech loading" by some nations. A second characteristic is the form of propulsion. Modern equipment can be either towed or self-propelled (SP). A towed gun fires from the ground, and any inherent protection is limited to a gun shield. Towing by horse teams lasted throughout World War II in some armies, but others were fully mechanized with wheeled or tracked gun towing vehicles by the outbreak of that war. The size of a towing vehicle depends on the weight of the equipment and the amount of ammunition it has to carry. A variation of towed is portee, where the vehicle carries the gun, which is dismounted for firing. Mortars are often carried this way. A mortar is sometimes carried in an armored vehicle and can either fire from it or be dismounted to fire from the ground. Since the early 1960s it has been possible to carry lighter towed guns and most mortars by helicopter. Even before that, they were parachuted or landed by glider, from the time of the first airborne trials in the USSR in the 1930s. 
In SP equipment, the gun is an integral part of the vehicle that carries it. SPs first appeared during World War I, but did not really develop until World War II. They are mostly tracked vehicles, but wheeled SPs started to appear in the 1970s. Some SPs have no armor and carry few or no other weapons and little ammunition. Armored SPs usually carry a useful ammunition load. Early armored SPs were mostly of a "casemate" configuration, in essence an open-topped armored box offering only limited traverse. However, most modern armored SPs have a fully enclosed armored turret, usually giving full traverse for the gun. Many SPs cannot fire without deploying stabilizers or spades, sometimes hydraulic. A few SPs are designed so that the recoil forces of the gun are transferred directly onto the ground through a baseplate. A few towed guns have been given limited self-propulsion by means of an auxiliary engine. Two other forms of tactical propulsion were used in the first half of the 20th century: railways, and transporting the equipment by road as two or three separate loads, with disassembly and re-assembly at the beginning and end of the journey. Railway artillery took two forms: railway mountings for heavy and super-heavy guns and howitzers, and armored trains as "fighting vehicles" armed with light artillery in a direct fire role. Disassembled transport was also used with heavy and super-heavy weapons and lasted into the 1950s. Caliber categories A third form of artillery typing is to classify it as "light", "medium", "heavy" and various other terms. It appears to have been introduced in World War I, which spawned a very wide array of artillery in all sorts of sizes, so a simple categorical system was needed. Some armies defined these categories by bands of calibers. Different bands were used for different types of weapons—field guns, mortars, anti-aircraft guns and coastal guns. Modern operations Artillery is used in a variety of roles depending on its type and caliber. The general role of artillery is to provide fire support—"the application of fire, coordinated with the manoeuvre of forces to destroy, neutralize or suppress the enemy". This NATO definition makes artillery a supporting arm, although not all NATO armies agree with this logic. The italicised terms are NATO's. Unlike rockets, guns (or howitzers, as some armies still call them) and mortars are suitable for delivering close supporting fire. However, they are all suitable for providing deep supporting fire, although the limited range of many mortars tends to exclude them from the role. Their control arrangements and limited range also mean that mortars are most suited to direct supporting fire. Guns are used either for this or for general supporting fire, while rockets are mostly used for the latter. However, lighter rockets may be used for direct fire support. These rules of thumb apply to NATO armies. Modern mortars, because of their lighter weight and simpler, more transportable design, are usually an integral part of infantry and, in some armies, armour units. This means they generally do not have to concentrate their fire, so their shorter range is not a disadvantage. Some armies also consider infantry-operated mortars to be more responsive than artillery, but this is a function of the control arrangements and not the case in all armies. 
However, mortars have always been used by artillery units and remain with them in many armies, including a few in NATO. In NATO armies artillery is usually assigned a tactical mission that establishes its relationship and responsibilities to the formation or units it is assigned to. It seems that not all NATO nations use the terms and outside NATO others are probably used. The standard terms are: direct support, general support, general support reinforcing and reinforcing. These tactical missions are in the context of the command authority: operational command, operational control, tactical command or tactical control. In NATO direct support generally means that the directly supporting artillery unit provides observers and liaison to the manoeuvre troops being supported, typically an artillery battalion or equivalent is assigned to a brigade and its batteries to the brigade's battalions. However, some armies achieve this by placing the assigned artillery units under command of the directly supported formation. Nevertheless, the batteries' fire can be concentrated onto a single target, as can the fire of units in range and with the other tactical missions. Application of fire There are several dimensions to this subject. The first is the notion that fire may be against an opportunity target or may be arranged. If it is the latter it may be either on-call or scheduled. Arranged targets may be part of a fire plan. Fire may be either observed or unobserved, if the former it may be adjusted, if the latter then it has to be predicted. Observation of adjusted fire may be directly by a forward observer or indirectly via some other target acquisition system. NATO also recognises several different types of fire support for tactical purposes: Counterbattery fire: delivered for the purpose of destroying or neutralizing the enemy's fire support system. Counterpreparation fire: intensive prearranged fire delivered when the imminence of the enemy attack is discovered. Covering fire: used to protect troops when they are within range of enemy small arms. Defensive fire: delivered by supporting units to assist and protect a unit engaged in a defensive action. Final Protective Fire: an immediately available prearranged barrier of fire designed to impede enemy movement across defensive lines or areas. Harassing fire: a random number of shells are fired at random intervals, without any pattern to it that the enemy can predict. This process is designed to hinder enemy forces' movement, and, by the constantly imposed stress, threat of losses and inability of enemy forces to relax or sleep, lowers their morale. Interdiction fire: placed on an area or point to prevent the enemy from using the area or point. Preparation fire: delivered before an attack to weaken the enemy position. These purposes have existed for most of the 20th century, although their definitions have evolved and will continue to do so, lack of suppression in counterbattery is an omission. Broadly they can be defined as either: Deep supporting fire: directed at objectives not in the immediate vicinity of own force, for neutralizing or destroying enemy reserves and weapons, and interfering with enemy command, supply, communications and observation; or Close supporting fire: placed on enemy troops, weapons or positions which, because of their proximity present the most immediate and serious threat to the supported unit. 
Two other NATO terms also need definition: Neutralization fire: delivered to render a target temporarily ineffective or unusable; and Suppression fire: fire that degrades the performance of a target below the level needed to fulfill its mission. Suppression is usually only effective for the duration of the fire. The tactical purposes also include various "mission verbs", a rapidly expanding subject with the modern concept of "effects based operations". Targeting is the process of selecting targets and matching the appropriate response to them, taking account of operational requirements and capabilities. It requires consideration of the type of fire support required and the extent of coordination with the supported arm. It involves decisions about: what effects are required, for example, neutralization or suppression; the proximity of, and risks to, own troops or non-combatants; what types of munitions, including their fuzing, are to be used and in what quantities; when the targets should be attacked and possibly for how long; what methods should be used, for example, converged or distributed, whether adjustment is permissible or surprise essential, and the need for special procedures such as precision or danger close; and how many fire units are needed and which ones they should be from those that are available (in range, with the required munitions type and quantity, not allotted to another target, and having the most suitable line of fire if there is a risk to own troops or non-combatants). The targeting process is the key aspect of tactical fire control. Depending on the circumstances and national procedures, it may all be undertaken in one place or may be distributed. In armies practicing control from the front, most of the process may be undertaken by a forward observer or other target acquirer. This is particularly the case for a smaller target requiring only a few fire units. The extent to which the process is formal or informal and makes use of computer-based systems, documented norms, or experience and judgement also varies widely between armies and with circumstances. Surprise may be essential or irrelevant. It depends on what effects are required and whether or not the target is likely to move or quickly improve its protective posture. During World War II, UK researchers concluded that for impact-fuzed munitions the relative risks were as follows: men standing – 1; men lying – 1/3; men firing from trenches – 1/15 to 1/50; men crouching in trenches – 1/25 to 1/100. Airburst munitions significantly increase the relative risk for lying men, etc. Historically most casualties occur in the first 10–15 seconds of fire, i.e. the time needed to react and improve protective posture; however, this is less relevant if airburst is used. There are several ways of making best use of this brief window of maximum vulnerability: ordering the guns to fire together, either by executive order or by a "fire at" time. The disadvantage is that if the fire is concentrated from many dispersed fire units, then there will be different times of flight and the first rounds will be spread in time. To some extent a large concentration offsets the problem, because it may mean that only one round is required from each gun and most of these could arrive in the 15-second window. burst fire, a rate of fire delivering three rounds from each gun within 10 or 15 seconds; this reduces the number of guns and hence fire units needed, which means they may be less dispersed and have less variation in their times of flight. 
Smaller caliber guns, such as 105 mm, have always been able to deliver three rounds in 15 seconds, larger calibers firing fixed rounds could also do it but it was not until the 1970s that a multi-charge 155 mm howitzer, FH-70 first gained the capability. multiple round simultaneous impact (MRSI), where a single weapon or multiple individual weapons fire multiple rounds at differing trajectories so that all rounds arrive on target at the same time. time on target, fire units fire at the time less their time of flight, this works well with prearranged scheduled fire but is less satisfactory for opportunity targets because it means delaying the delivery of fire by selecting a 'safe' time that all or most fire units can achieve. It can be used with both the previous two methods. Counter-battery fire Modern counter-battery fire developed in World War I, with the objective of defeating the enemy's artillery. Typically such fire was used to suppress enemy batteries when they were or were about to interfere with the activities of friendly forces (such as to prevent enemy defensive artillery fire against an impending attack) or to systematically destroy enemy guns. In World War I the latter required air observation. The first indirect counter-battery fire was in May 1900 by an observer in a balloon. Enemy artillery can be detected in two ways, either by direct observation of the guns from the air or by ground observers (including specialist reconnaissance), or from their firing signatures. This includes radars tracking the shells in flight to determine their place of origin, sound ranging detecting guns firing and resecting their position from pairs of microphones or cross-observation of gun flashes using observation by human observers or opto-electronic devices, although the widespread adoption of 'flashless' propellant limited the effectiveness of the latter. Once hostile batteries have been detected they may be engaged immediately by friendly artillery or later at an optimum time, depending on the tactical situation and the counter-battery policy. Air strike is another option. In some situations the task is to locate all active enemy batteries for attack using a counter-battery fire at the appropriate moment in accordance with a plan developed by artillery intelligence staff. In other situations counter-battery fire may occur whenever a battery is located with sufficient accuracy. Modern counter-battery target acquisition uses unmanned aircraft, counter-battery radar, ground reconnaissance and sound-ranging. Counter-battery fire may be adjusted by some of the systems, for example the operator of an unmanned aircraft can 'follow' a battery if it moves. Defensive measures by batteries include frequently changing position or constructing defensive earthworks, the tunnels used by North Korea being an extreme example. Counter-measures include air defence against aircraft and attacking counter-battery radars physically and electronically. Field artillery team 'Field Artillery Team' is a US term and the following description and terminology applies to the US, other armies are broadly similar but differ in significant details. Modern field artillery (post–World War I) has three distinct parts: the Forward Observer (FO), the Fire Direction Center (FDC) and the actual guns themselves. 
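The sound-ranging technique mentioned above locates a hostile battery by comparing the arrival times of its muzzle report at several widely spaced microphones. The following Python sketch is a simplified illustration, assuming a flat two-dimensional layout, a constant speed of sound and a brute-force grid search; the microphone baseline and the "hidden" gun position are invented for the demonstration and do not describe any real sound-ranging equipment.

```python
# Minimal sketch of sound-ranging: resect a gun position from differences in
# the arrival time of its report at separated microphones.  Illustrative only.
import itertools
import math

SPEED_OF_SOUND = 340.0  # m/s, assumed constant

# Assumed microphone baseline, (x, y) positions in metres.
MICS = [(0.0, 0.0), (1500.0, 0.0), (3000.0, 0.0), (4500.0, 0.0)]

def arrival_times(source, mics, speed=SPEED_OF_SOUND):
    """Time for the muzzle report to reach each microphone."""
    sx, sy = source
    return [math.hypot(sx - mx, sy - my) / speed for mx, my in mics]

def locate(observed, mics, speed=SPEED_OF_SOUND, step=100.0):
    """Grid-search the position whose predicted time *differences* best match
    the observed ones (only differences matter; the firing instant is unknown)."""
    obs_diff = [t - observed[0] for t in observed]
    best, best_err = None, float("inf")
    for gx, gy in itertools.product(
            [i * step for i in range(-20, 70)],    # x: -2 km .. 6.9 km
            [i * step for i in range(10, 150)]):   # y:  1 km .. 14.9 km
        pred = arrival_times((gx, gy), mics, speed)
        pred_diff = [t - pred[0] for t in pred]
        err = sum((p - o) ** 2 for p, o in zip(pred_diff, obs_diff))
        if err < best_err:
            best, best_err = (gx, gy), err
    return best

if __name__ == "__main__":
    true_gun = (2800.0, 9200.0)              # hidden battery, for the demo only
    times = arrival_times(true_gun, MICS)    # what the microphones would record
    print("estimated position:", locate(times, MICS))
```

Operational systems solve the same geometry far more carefully, correcting for wind and temperature and separating overlapping reports, but the underlying resection from time differences is as shown.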
The forward observer observes the target using tools such as binoculars, laser rangefinders and designators, and calls back fire missions on his radio, or relays the data through a portable computer via an encrypted digital radio connection protected from jamming by computerized frequency hopping. A lesser-known part of the team is the FAS, or Field Artillery Survey team, which sets up the "gun line" for the cannons. Today most artillery battalions use an "aiming circle", which allows for faster setup and more mobility. FAS teams are still used for checks and balances, and if a gun battery has issues with the aiming circle an FAS team will set up the gun line for it. The FO can communicate directly with the battery FDC, of which there is one per battery of 4–8 guns. Otherwise the several FOs communicate with a higher FDC, such as at battalion level, and the higher FDC prioritizes the targets and allocates fires to individual batteries as needed to engage the targets spotted by the FOs or to perform preplanned fires. The battery FDC computes firing data—ammunition to be used, powder charge, fuse settings, the direction to the target, the quadrant elevation to be fired at to reach the target, what gun will fire any rounds needed for adjusting on the target, and the number of rounds to be fired on the target by each gun once the target has been accurately located—and passes it to the guns. Traditionally this data is relayed via radio or wire communications as a warning order to the guns, followed by orders specifying the type of ammunition and fuse setting, direction, and the elevation needed to reach the target, and the method of adjustment or orders for fire for effect (FFE). However, in more advanced artillery units, this data is relayed through a digital radio link. Other parts of the field artillery team include meteorological analysis to determine the temperature, humidity and pressure of the air, and wind direction and speed at different altitudes. Radar is also used both to determine the location of enemy artillery and mortar batteries and to determine the precise strike points of rounds fired by the battery; comparing those points with the expected ones allows a registration to be computed, so that future rounds can be fired with much greater accuracy. Time on target A technique called time on target (TOT) was developed by the British Army in North Africa at the end of 1941 and early in 1942, particularly for counter-battery fire and other concentrations; it proved very popular. It relied on BBC time signals to enable officers to synchronize their watches to the second, because this avoided the need to use military radio networks and the possibility of losing surprise, as well as the need for field telephone networks in the desert. With this technique the time of flight from each fire unit (battery or troop) to the target is taken from the range or firing tables, or from the computer, and each engaging fire unit subtracts its time of flight from the TOT to determine the time to fire. An executive order to fire is given to all guns in the fire unit at the correct moment to fire. When each fire unit fires its rounds at its individual firing time, all the opening rounds will reach the target area almost simultaneously (a worked example of this arithmetic is sketched below). This is especially effective when combined with techniques that allow fires for effect to be delivered without preliminary adjusting fires. Multiple round simultaneous impact Multiple round simultaneous impact (MRSI) is a modern version of the earlier time on target concept. 
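The time-on-target arithmetic described above is simple enough to show directly. The short Python sketch below, with invented unit names and times of flight, computes when each fire unit must fire for its rounds to arrive at a common time on target; MRSI, described in the next paragraph, applies the same principle within a single gun by pairing slower high-trajectory rounds with faster low-trajectory ones.

```python
# Minimal sketch of time-on-target timing.  All names and values are invented.
TIME_ON_TARGET = 600.0  # seconds after H-hour at which all rounds should land

# Assumed times of flight (s) from each fire unit to the common target.
TIMES_OF_FLIGHT = {
    "A Battery": 34.2,
    "B Battery": 41.7,
    "C Battery": 28.9,
}

for unit, tof in sorted(TIMES_OF_FLIGHT.items()):
    fire_time = TIME_ON_TARGET - tof   # fire earlier by the full time of flight
    print(f"{unit}: fire at H+{fire_time:.1f} s; rounds land at H+{TIME_ON_TARGET:.0f} s")
```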
MRSI is when a single gun fires multiple shells so that all arrive at the same target simultaneously. This is possible because there is more than one trajectory for a round to fly to any given target: typically one is below 45 degrees from horizontal and the other is above it, and by using different-sized propellant charges with each shell, it is possible to utilize more than two trajectories. Because the higher trajectories cause the shells to arc higher into the air, they take longer to reach the target. If shells are fired on higher trajectories for initial volleys (starting with the shell with the most propellant and working down) and later volleys are fired on the lower trajectories, then with the correct timing the shells will all arrive at the same target simultaneously. This is useful because many more shells can land on the target with no warning. With traditional methods of firing, the target area may have time (however long it takes to reload and re-fire the guns) to take cover between volleys. However, guns capable of burst fire can deliver multiple rounds in a few seconds if they use the same firing data for each, and if guns in more than one location are firing on one target they can use time-on-target procedures so that all their shells arrive at the same time and target. MRSI has a few prerequisites. The first is guns with a high rate of fire. The second is the ability to use different-sized propellant charges. Third is a fire control computer that has the ability to compute MRSI volleys and the capability to produce the firing data, send it to each gun, and then present it to the gun commander in the correct order. The number of rounds that can be delivered in MRSI depends primarily on the range to the target and the rate of fire. To allow the most shells to reach the target, the target has to be in range of the lowest propellant charge. Examples of guns with a rate of fire that makes them suitable for MRSI include the UK's AS-90, South Africa's Denel G6-52 (which can land six rounds simultaneously on sufficiently distant targets), Germany's Panzerhaubitze 2000 (which can land five rounds simultaneously on sufficiently distant targets), Slovakia's 155 mm SpGH ZUZANA 2, and South Korea's K9 Thunder. The Archer project (developed by BAE Systems Bofors in Sweden) is a 155 mm howitzer on a wheeled chassis which is claimed to be able to deliver up to six shells on target simultaneously from the same gun. The 120 mm twin-barrel AMOS mortar system, jointly developed by Hägglunds (Sweden) and Patria (Finland), is capable of 7 + 7 shell MRSI. The United States Crusader program (now cancelled) was slated to have MRSI capability. It is unclear how many fire control computers have the necessary capabilities. Two-round MRSI firings were a popular artillery demonstration in the 1960s, when well-trained detachments could show off their skills for spectators. Air burst The destructiveness of artillery bombardments can be enhanced when some or all of the shells are set for airburst, meaning that they explode in the air above the target instead of upon impact. This can be accomplished either through time fuzes or proximity fuzes. Time fuzes use a precise timer to detonate the shell after a preset delay. This technique is tricky, and slight variations in the functioning of the fuze can cause it to explode too high and be ineffective, or to strike the ground instead of exploding above it. Since December 1944 (the Battle of the Bulge), proximity-fuzed artillery shells have been available that take the guesswork out of this process. 
These employ a miniature, low powered radar transmitter in the fuze to detect the ground and explode them at a predetermined height above it. The return of the weak radar signal completes an electrical circuit in the fuze which explodes the shell. The proximity fuze itself was developed by the British to increase the effectiveness of anti-aircraft warfare. This is a very effective tactic against infantry and light vehicles, because it scatters the fragmentation of the shell over a larger area and prevents it from being blocked by terrain or entrenchments that do not include some form of robust overhead cover. Combined with TOT or MRSI tactics that give no warning of the incoming rounds, these rounds are especially devastating because many enemy soldiers are likely to be caught in the open; even more so if the attack is launched against an assembly area or troops moving in the open rather than a unit in an entrenched tactical position. Use in monuments Numerous war memorials around the world incorporate an artillery piece that was used in the war or battle commemorated.
Technology
Artillery and siege
null
2524
https://en.wikipedia.org/wiki/Airbus%20A300
Airbus A300
The Airbus A300 is Airbus' first production aircraft and the world's first twin-engine, double-aisle (wide-body) airliner. It was developed by Airbus Industrie GIE, now merged into Airbus SE, and manufactured from 1971 to 2007. In September 1967, the governments of France, West Germany and the United Kingdom signed an initial memorandum of understanding to collaborate on the development of an innovative large airliner. The French and West Germans reached a firm agreement on 29 May 1969, after the British withdrew from the project on 10 April 1969. A new collaborative aerospace company, Airbus Industrie GIE, was formally created on 18 December 1970 to develop and produce it. The A300 prototype first flew on 28 October 1972. The first twin-engine widebody airliner, the A300 typically seats 247 passengers in two classes over a range of 5,375 to 7,500 km (2,900 to 4,050 nmi). Initial variants are powered by General Electric CF6-50 or Pratt & Whitney JT9D turbofans and have a three-crew flight deck. The improved A300-600 has a two-crew cockpit and updated CF6-80C2 or PW4000 engines; it made its first flight on 8 July 1983 and entered service later that year. The A300 is the basis of the smaller A310 (first flown in 1982) and was adapted into a freighter version. Its cross section was retained for the larger four-engined A340 (1991) and the larger twin-engined A330 (1992). It is also the basis for the oversize Beluga transport (1994). Unlike most later Airbus products, it has a conventional yoke and does not use a fly-by-wire control system. Launch customer Air France introduced the type on 23 May 1974. After limited demand initially, sales took off as the type was proven in early service, beginning three decades of steady orders. It has a similar capacity to the Boeing 767-300, introduced in 1986, but lacked the range of the 767-300ER. During the 1990s, the A300 became popular with cargo aircraft operators, both as passenger airliner conversions and as original builds. Production ceased in July 2007 after 561 deliveries. There are 197 A300 family aircraft still in commercial service. Development Origins During the 1960s, European aircraft manufacturers such as Hawker Siddeley and the British Aircraft Corporation, based in the UK, and Sud Aviation of France had ambitions to build a new 200-seat airliner for the growing civil aviation market. While studies were performed and considered, such as a stretched twin-engine variant of the Hawker Siddeley Trident and an expanded development of the British Aircraft Corporation (BAC) One-Eleven, designated the BAC Two-Eleven, it was recognized that if each of the European manufacturers were to launch similar aircraft into the market at the same time, none would achieve the sales volume needed to make them viable. In 1965, a British government study, known as the Plowden Report, had found British aircraft production costs to be between 10% and 20% higher than American counterparts due to shorter production runs, which was in part due to the fractured European market. To overcome this factor, the report recommended the pursuit of multinational collaborative projects between the region's leading aircraft manufacturers. European manufacturers were keen to explore prospective programmes; the proposed 260-seat wide-body HBN 100 between Hawker Siddeley, Nord Aviation, and Breguet Aviation being one such example. 
National governments were also keen to support such efforts amid a belief that American manufacturers could dominate the European Economic Community; in particular, Germany had ambitions for a multinational airliner project to invigorate its aircraft industry, which had declined considerably following the Second World War. During the mid-1960s, both Air France and American Airlines had expressed interest in a short-haul twin-engine wide-body aircraft, indicating a market demand for such an aircraft to be produced. In July 1967, during a high-profile meeting between French, German, and British ministers, an agreement was made for greater cooperation between European nations in the field of aviation technology, and "for the joint development and production of an airbus". The word airbus at this point was a generic aviation term for a larger commercial aircraft, and was considered acceptable in multiple languages, including French. Shortly after the July 1967 meeting, French engineer Roger Béteille was appointed as the technical director of what would become the A300 programme, while Henri Ziegler, chief operating officer of Sud Aviation, was appointed as the general manager of the organisation and German politician Franz Josef Strauss became the chairman of the supervisory board. Béteille drew up an initial work share plan for the project, under which French firms would produce the aircraft's cockpit, the control systems, and the lower-centre portion of the fuselage, Hawker Siddeley would manufacture the wings, and German companies would produce the forward, rear and upper parts of the centre fuselage sections. Additional work shares included the wings' moving elements, to be produced in the Netherlands, and the horizontal tailplane, to be produced in Spain. An early design goal that Béteille stressed was the incorporation of a high level of technology, which would serve as a decisive advantage over prospective competitors. For this reason, the A300 would feature the first use of composite materials on any passenger aircraft, the leading and trailing edges of the tail fin being composed of glass-fibre-reinforced plastic. Béteille opted for English as the working language for the developing aircraft, and against using metric instrumentation and measurements, as most airlines already had US-built aircraft. These decisions were partially influenced by feedback from various airlines, such as Air France and Lufthansa, as an emphasis had been placed on determining the specifics of what kind of aircraft potential operators were seeking. According to Airbus, this cultural approach to market research had been crucial to the company's long-term success. Workshare and redefinition On 26 September 1967, the French, West German and British governments signed a Memorandum of Understanding to start the development of the 300-seat Airbus A300. At this point, the A300 was only the second major joint aircraft programme in Europe, the first being the Anglo-French Concorde. Under the terms of the memorandum, the French and British were each to receive a 37.5 per cent work share on the project, while the West Germans would receive a 25 per cent share. Sud Aviation was recognized as the lead contractor for the A300, with Hawker Siddeley being selected as the British partner company. 
At the time, the news of the announcement had been clouded by the British Government's support for the Airbus, which coincided with its refusal to back BAC's proposed competitor, the BAC 2–11, despite a preference for the latter expressed by British European Airways (BEA). Another parameter was the requirement for a new engine to be developed by Rolls-Royce to power the proposed airliner: a derivative of the in-development Rolls-Royce RB211, the triple-spool RB207. The programme cost was US$4.6 billion in 1993 dollars. In December 1968, the French and British partner companies (Sud Aviation and Hawker Siddeley) proposed a revised configuration, the 250-seat Airbus A250. It had been feared that the original 300-seat proposal was too large for the market, so it had been scaled down to produce the A250. The dimensional changes involved in the shrink reduced both the length and the diameter of the fuselage, reducing the overall weight. For increased flexibility, the cabin floor was raised so that standard LD3 freight containers could be accommodated side-by-side, allowing more cargo to be carried. Refinements made by Hawker Siddeley to the wing's design provided greater lift and overall performance; this gave the aircraft the ability to climb faster and attain a level cruising altitude sooner than any other passenger aircraft. It was later renamed the A300B. Perhaps the most significant change of the A300B was that it would not require new engines to be developed, being of a suitable size to be powered by Rolls-Royce's RB211, or alternatively by the American Pratt & Whitney JT9D and General Electric CF6 powerplants; this switch was recognized as considerably reducing the project's development costs. To attract potential customers in the US market, it was decided that General Electric CF6-50 engines would power the A300 in place of the British RB207; these engines would be produced in co-operation with the French firm Snecma. By this time, Rolls-Royce had been concentrating its efforts on developing its RB211 turbofan engine instead, and progress on the RB207's development had been slow for some time, partly because the firm had suffered funding limitations; both had been factors in the engine switch decision. On 10 April 1969, a few months after the decision to drop the RB207 had been announced, the British government announced that it would withdraw from the Airbus venture. In response, West Germany proposed to France that it would be willing to contribute up to 50% of the project's costs if France was prepared to do the same. Additionally, the managing director of Hawker Siddeley, Sir Arnold Alexander Hall, decided that his company would remain in the project as a favoured sub-contractor, developing and manufacturing the wings for the A300, which would prove to be an important contributor to the performance of subsequent versions. Hawker Siddeley spent £35 million of its own funds, along with a further £35 million loan from the West German government, on the machine tooling to design and produce the wings. Programme launch On 29 May 1969, during the Paris Air Show, French transport minister Jean Chamant and German economics minister Karl Schiller signed an agreement officially launching the Airbus A300, the world's first twin-engine widebody airliner. 
The intention of the project was to produce an aircraft that was smaller, lighter, and more economical than its three-engine American rivals, the McDonnell Douglas DC-10 and the Lockheed L-1011 TriStar. In order to meet Air France's demands for an aircraft larger than the 250-seat A300B, it was decided to stretch the fuselage to create a new variant, designated the A300B2, which would be offered alongside the original 250-seat A300B, henceforth referred to as the A300B1. On 3 September 1970, Air France signed a letter of intent for six A300s, marking the first order to be won for the new airliner. In the aftermath of the Paris Air Show agreement, it was decided that, in order to provide effective management of responsibilities, a Groupement d'intérêt économique would be established, allowing the various partners to work together on the project while remaining separate business entities. On 18 December 1970, Airbus Industrie was formally established following an agreement between Aérospatiale (the newly merged Sud Aviation and Nord Aviation) of France and the antecedents to Deutsche Aerospace of Germany, each receiving a 50 per cent stake in the newly formed company. In 1971, the consortium was joined by a third full partner, the Spanish firm CASA, which received a 4.2 per cent stake, with the other two members reducing their stakes to 47.9 per cent each. In 1979, Britain joined the Airbus consortium via British Aerospace, into which Hawker Siddeley had merged; it acquired a 20 per cent stake in Airbus Industrie, with France and Germany each reducing their stakes to 37.9 per cent. Prototype and flight testing Airbus Industrie was initially headquartered in Paris, where design, development, flight testing, sales, marketing, and customer support activities were centred; the headquarters was relocated to Toulouse in January 1974. The final assembly line for the A300 was located adjacent to Toulouse Blagnac International Airport. The manufacturing process necessitated transporting each aircraft section produced by the partner companies scattered across Europe to this one location. A combination of ferries and roads was used to bring together the sections of the first A300; however, this was time-consuming and not viewed as ideal by Felix Kracht, Airbus Industrie's production director. Kracht's solution was to have the various A300 sections brought to Toulouse by a fleet of Boeing 377-derived Aero Spacelines Super Guppy aircraft, by which means none of the manufacturing sites was more than two hours away. Having the sections airlifted in this manner made the A300 the first airliner to use just-in-time manufacturing techniques, and allowed each company to manufacture its sections as fully equipped, ready-to-fly assemblies. In September 1969, construction of the first prototype A300 began. On 28 September 1972, this first prototype was unveiled to the public; it conducted its maiden flight from Toulouse–Blagnac International Airport on 28 October that year. This maiden flight, which was performed a month ahead of schedule, lasted one hour and 25 minutes; the captain was Max Fischl and the first officer was Bernard Ziegler, son of Henri Ziegler. In 1972, the unit cost was US$17.5 million. On 5 February 1973, the second prototype performed its maiden flight. The flight test programme, which involved a total of four aircraft, was relatively problem-free, accumulating 1,580 flight hours throughout. 
In September 1973, as part of promotional efforts for the A300, the new aircraft was taken on a six-week tour around North America and South America to demonstrate it to airline executives, pilots, and would-be customers. Among the consequences of this expedition was that it allegedly brought the A300 to the attention of Frank Borman, the CEO of Eastern Airlines, one of the "big four" U.S. airlines.

Entry into service
On 15 March 1974, type certificates were granted for the A300 by both German and French authorities, clearing the way for its entry into revenue service. On 23 May 1974, Federal Aviation Administration (FAA) certification was received. The first production model, the A300B2, entered service in 1974, followed by the A300B4 one year later. Initially, the success of the consortium was poor, in part due to the economic consequences of the 1973 oil crisis, but by 1979 there were 81 A300 passenger liners in service with 14 airlines, alongside 133 firm orders and 88 options. Ten years after the official launch of the A300, the company had achieved a 26 per cent market share in terms of dollar value, enabling Airbus to proceed with the development of its second aircraft, the Airbus A310.

Design
The Airbus A300 is a wide-body medium-to-long range airliner; it has the distinction of being the first twin-engine wide-body aircraft in the world. In 1977, the A300 became the first Extended Range Twin Operations (ETOPS)-compliant aircraft, due to its high performance and safety standards. Another world first for the A300 was the use of composite materials on a commercial aircraft, which were used on both secondary and later primary airframe structures, decreasing overall weight and improving cost-effectiveness. Other firsts included the pioneering use of centre-of-gravity control, achieved by transferring fuel between various locations across the aircraft, and electrically signalled secondary flight controls.

The A300 is powered by a pair of underwing turbofan engines, either General Electric CF6 or Pratt & Whitney JT9D engines; the sole use of underwing engine pods allowed any suitable turbofan engine to be more readily used. The lack of a third tail-mounted engine, as per the trijet configuration used by some competing airliners, allowed the wings to be located further forwards and reduced the size of the vertical stabiliser and elevator, which had the effect of increasing the aircraft's flight performance and fuel efficiency.

Airbus partners had employed the latest technology, some of which had been derived from Concorde, on the A300. According to Airbus, new technologies adopted for the airliner were selected principally for increased safety, operational capability, and profitability. Upon entry into service in 1974, the A300 was a very advanced aircraft, which went on to influence later airliner designs. The technological highlights include advanced wings by de Havilland (later BAE Systems) with supercritical airfoil sections for economical performance and advanced aerodynamically efficient flight control surfaces. The circular fuselage section allows eight-abreast passenger seating and is wide enough for two LD3 cargo containers side by side. Structures are made from metal billets, reducing weight. It is the first airliner to be fitted with wind shear protection. Its advanced autopilots are capable of flying the aircraft from climb-out to landing, and it has an electrically controlled braking system.
Later A300s incorporated other advanced features such as the Forward-Facing Crew Cockpit (FFCC), which enabled a two-pilot flight crew to fly the aircraft without the need for a flight engineer, whose functions were automated; this two-man cockpit concept was a world first for a wide-body aircraft. Glass cockpit flight instrumentation, which used cathode-ray tube (CRT) monitors to display flight, navigation, and warning information, along with fully digital dual autopilots and digital flight control computers for controlling the spoilers, flaps, and leading-edge slats, was also adopted on later-built models. Additional composites, such as carbon-fibre-reinforced polymer (CFRP), were also used in an increasing proportion of the aircraft's components, including the spoilers, rudder, air brakes, and landing gear doors. Another feature of later aircraft was the addition of wingtip fences, which improved aerodynamic performance and thus reduced cruise fuel consumption by about 1.5% for the A300-600.

In addition to passenger duties, the A300 became widely used by air freight operators; according to Airbus, it is the best-selling freight aircraft of all time. A number of variants of the A300 were built to meet customer demands, often for diverse roles such as aerial refueling tankers, freighter models (new-build and conversions), combi aircraft, military airlifters, and VIP transports. Perhaps the most visually unique of the variants is the A300-600ST Beluga, an oversized cargo-carrying model operated by Airbus to carry aircraft sections between its manufacturing facilities. The A300 was the basis for, and retained a high level of commonality with, the second airliner produced by Airbus, the smaller Airbus A310.

Operational history
On 23 May 1974, the first A300 to enter service performed the first commercial flight of the type, flying from Paris to London for Air France. Immediately after the launch, sales of the A300 were weak for some years, with most orders going to airlines that had an obligation to favor the domestically made product – notably Air France and Lufthansa, the first two airlines to place orders for the type. Following the appointment of Bernard Lathière as Henri Ziegler's replacement, an aggressive sales approach was adopted. Indian Airlines was the world's first domestic airline to purchase the A300, ordering three aircraft with three options. However, between December 1975 and May 1977, there were no sales for the type. During this period a number of "whitetail" A300s – completed but unsold aircraft – were stored at Toulouse, and production fell to half an aircraft per month amid calls to pause production completely.

During the flight testing of the A300B2, Airbus held a series of talks with Korean Air on the topic of developing a longer-range version of the A300, which would become the A300B4. In September 1974, Korean Air placed an order for four A300B4s with options for two further aircraft; this sale was viewed as significant, as Korean Air was the first non-European international airline to order Airbus aircraft. Airbus had viewed South-East Asia as a vital market that was ready to be opened up and believed Korean Air to be the "key". Airlines operating the A300 on short-haul routes were forced to reduce frequencies to try to fill the aircraft. As a result, they lost passengers to airlines operating more frequent narrow-body flights.
Eventually, Airbus had to build its own narrowbody aircraft (the A320) to compete with the Boeing 737 and McDonnell Douglas DC-9/MD-80. The saviour of the A300 was the advent of ETOPS, a revised FAA rule which allows twin-engine jets to fly long-distance routes that were previously off-limits to them. This enabled Airbus to develop the aircraft as a medium/long-range airliner.

In 1977, US carrier Eastern Air Lines leased four A300s as an in-service trial. CEO Frank Borman was impressed that the A300 consumed 30% less fuel than Eastern's fleet of L-1011s, even less than had been expected. The A300 would replace the aging DC-9s and 727-100s but in smaller numbers, being a twinjet sized between the TriStars and 727-200s and capable of operating from short-runway airports, with sufficient range to fly from New York City to Miami. Borman proceeded to order 23 A300s, becoming the first U.S. customer for the type. This order is often cited as the point at which Airbus came to be seen as a serious competitor to the large American aircraft manufacturers Boeing and McDonnell Douglas. Aviation author John Bowen alleged that various concessions, such as loan guarantees from European governments and compensation payments, were also a factor in the decision. Although the A300 was originally too large for Eastern's existing routes, Airbus provided a fixed subsidy for a 57% load factor, which decreased for every percent above that figure. The Eastern Air Lines breakthrough was shortly followed by an order from Pan Am. From then on, the A300 family sold well, eventually reaching a total of 561 delivered aircraft. In December 1977, Aerocondor Colombia became the first Airbus operator in Latin America, leasing one Airbus A300B4-2C, named Ciudad de Barranquilla.

During the late 1970s, Airbus adopted a so-called "Silk Road" strategy, targeting airlines in the Far East. As a result, the aircraft found particular favor with Asian airlines, being bought by Japan Air System, Korean Air, China Eastern Airlines, Thai Airways International, Singapore Airlines, Malaysia Airlines, Philippine Airlines, Garuda Indonesia, China Airlines, Pakistan International Airlines, Indian Airlines, Trans Australia Airlines and many others. As Asia did not have restrictions similar to the FAA's 60-minute rule for twin-engine airliners which existed at the time, Asian airlines used A300s for routes across the Bay of Bengal and South China Sea. In 1977, the A300B4 became the first ETOPS-compliant aircraft, qualifying for extended twin-engine operations over water and providing operators with more versatility in routing. In 1982, Garuda Indonesian Airways became the first airline to fly the A300B4-200FFCC, whose new Forward-Facing Crew Cockpit concept made it the world's first wide-body aircraft operated by a two-man cockpit crew. By 1981, Airbus was growing rapidly, with over 400 aircraft sold to over forty airlines.

In 1989, Chinese operator China Eastern Airlines received its first A300; by 2006, the airline operated around 18 A300s, making it the largest operator of both the A300 and the A310 at that time. On 31 May 2014, China Eastern officially retired the last A300-600 in its fleet, having begun drawing down the type in 2010.
From 1997 to 2014, a single A300, designated A300 Zero-G, was operated by the European Space Agency (ESA), centre national d'études spatiales (CNES) and the German Aerospace Center (DLR) as a reduced-gravity aircraft for conducting research into microgravity; the A300 is the largest aircraft to ever have been used in this capacity. A typical flight would last for two and a half hours, enabling up to 30 parabolas to be performed per flight. By the 1990s, the A300 was being heavily promoted as a cargo freighter. The largest freight operator of the A300 is FedEx Express, which has 70 A300 aircraft in service as of September 2022. UPS Airlines also operates 52 freighter versions of the A300. The final version was the A300-600R and is rated for 180-minute ETOPS. The A300 has enjoyed renewed interest in the secondhand market for conversion to freighters; large numbers were being converted during the late 1990s. The freighter versions – either new-build A300-600s or converted ex-passenger A300-600s, A300B2s and B4s – account for most of the world's freighter fleet after the Boeing 747 freighter. The A300 provided Airbus the experience of manufacturing and selling airliners competitively. The basic fuselage of the A300 was later stretched (A330 and A340), shortened (A310), or modified into derivatives (A300-600ST Beluga Super Transporter). In 2006, unit cost of an −600F was $105 million. In March 2006, Airbus announced the impending closure of the A300/A310 final assembly line, making them the first Airbus aircraft to be discontinued. The final production A300, an A300F freighter, performed its initial flight on 18 April 2007, and was delivered to FedEx Express on 12 July 2007. Airbus has announced a support package to keep A300s flying commercially. Airbus offers the A330-200F freighter as a replacement for the A300 cargo variants. The life of UPS's fleet of 52 A300s, delivered from 2000 to 2006, will be extended to 2035 by a flight deck upgrade based around Honeywell Primus Epic avionics; new displays and flight management system (FMS), improved weather radar, a central maintenance system, and a new version of the current enhanced ground proximity warning system. With a light usage of only two to three cycles per day, it will not reach the maximum number of cycles by then. The first modification will be made at Airbus Toulouse in 2019 and certified in 2020. As of July 2017, there are 211 A300s in service with 22 operators, with the largest operator being FedEx Express with 68 A300-600F aircraft. Variants A300B1 The A300B1 was the first variant to take flight. It had a maximum takeoff weight (MTOW) of , was long and was powered by two General Electric CF6-50A engines. Only two prototypes of the variant were built before it was adapted into the A300B2, the first production variant of the airliner. The second prototype was leased to Trans European Airways in 1974. A300B2 A300B2-100 Responding to a need for more seats from Air France, Airbus decided that the first production variant should be larger than the original prototype A300B1. The CF6-50A powered A300B2-100 was longer than the A300B1 and had an increased MTOW of , allowing for 30 additional seats and bringing the typical passenger count up to 281, with capacity for 20 LD3 containers. Two prototypes were built and the variant made its maiden flight on 28 June 1973, became certified on 15 March 1974 and entered service with Air France on 23 May 1974. 
A300B2-200
For the A300B2-200, originally designated as the A300B2K, Krueger flaps were introduced at the leading-edge root, the slat angles were reduced from 20 degrees to 16 degrees, and other lift-related changes were made in order to introduce a high-lift system. This was done to improve performance when operating at high-altitude airports, where the air is less dense and lift generation is reduced. The variant had an increased MTOW and was powered by CF6-50C engines; it was certified on 23 June 1976 and entered service with South African Airways in November 1976. CF6-50C1 and CF6-50C2 models were also later fitted depending on customer requirements; these became certified on 22 February 1978 and 21 February 1980 respectively.

A300B2-320
The A300B2-320 introduced the Pratt & Whitney JT9D powerplant and was powered by JT9D-59A engines. It retained the MTOW of the B2-200, was certified on 4 January 1980, and entered service with Scandinavian Airlines on 18 February 1980; only four were produced.

A300B4
A300B4-100
The initial A300B4 variant, later named the A300B4-100, included a centre fuel tank for increased fuel capacity and had an increased MTOW. It also featured Krueger flaps and had a high-lift system similar to what was later fitted to the A300B2-200. The variant made its maiden flight on 26 December 1974, was certified on 26 March 1975, and entered service with Germanair in May 1975.

A300B4-200
The A300B4-200 had an increased MTOW and featured an additional optional fuel tank in the rear cargo hold, which would reduce the cargo capacity by two LD3 containers. The variant was certified on 26 April 1979.

A300B4-200FFCC
The A300B4-200FFCC was an A300B4-200 operated without a flight engineer but retaining analog flight instruments. It was introduced by Garuda Indonesian Airways in 1982.

A300-600
The A300-600, officially designated as the A300B4-600, was slightly longer than the A300B2 and A300B4 variants and had increased interior space from using a rear fuselage similar to that of the Airbus A310; this allowed it to have two additional rows of seats. It was initially powered by Pratt & Whitney JT9D-7R4H1 engines, but was later fitted with General Electric CF6-80C2 engines, with Pratt & Whitney PW4156 or PW4158 engines being introduced in 1986. Other changes included an improved wing featuring a recambered trailing edge, the incorporation of simpler single-slotted Fowler flaps, the deletion of slat fences, and the removal of the outboard ailerons after they were deemed unnecessary on the A310. The variant made its first flight on 8 July 1983, was certified on 9 March 1984, and entered service in June 1984 with Saudi Arabian Airlines. A total of 313 A300-600s (all versions) have been sold. The A300-600 uses the A310 cockpit, featuring digital technology and electronic displays, eliminating the need for a flight engineer. The FAA issues a single type rating which allows operation of both the A310 and the A300-600.
A300-600: (Official designation: A300B4-600) The baseline model of the −600 series.
A300-620C: (Official designation: A300C4-620) A convertible-freighter version. Four were delivered between 1984 and 1985.
A300-600F: (Official designation: A300F4-600) The freighter version of the baseline −600.
A300-600R: (Official designation: A300B4-600R) The increased-range −600, achieved by an additional trim fuel tank in the tail. First delivery in 1988 to American Airlines; all A300s built since 1989 (freighters included) are −600Rs.
Japan Air System (later merged into Japan Airlines) took delivery of the last new-built passenger A300, an A300-622R, in November 2002.
A300-600RC: (Official designation: A300C4-600R) The convertible-freighter version of the −600R. Two were delivered in 1999.
A300-600RF: (Official designation: A300F4-600R) The freighter version of the −600R. All A300s delivered between November 2002 and 12 July 2007 (the last ever A300 delivery) were A300-600RFs.

A300B10 (A310)
Airbus saw demand for an aircraft smaller than the A300. On 7 July 1978, the A310 (initially the A300B10) was launched with orders from Swissair and Lufthansa. On 3 April 1982, the first prototype conducted its maiden flight, and it received its type certification on 11 March 1983. Keeping the same eight-abreast cross-section, the A310 is shorter than the initial A300 variants and has a smaller wing. The A310 introduced a two-crew glass cockpit, later adopted for the A300-600 with a common type rating. It was powered by the same GE CF6-80 or Pratt & Whitney JT9D, and later PW4000, turbofans. It can seat 220 passengers in two classes, or 240 in all-economy. It has overwing exits between the two main front and rear door pairs. In April 1983, the aircraft entered revenue service with Swissair and competed with the Boeing 767–200, introduced six months before. Its longer range and ETOPS regulations allowed it to be operated on transatlantic flights. Until the last delivery in June 1998, 255 aircraft were produced, as it was succeeded by the larger Airbus A330-200. It has cargo aircraft versions, and was derived into the Airbus A310 MRTT military tanker/transport.

A300-600ST
Commonly referred to as the Airbus Beluga or "Airbus Super Transporter", these five airframes are used by Airbus to ferry parts between the company's disparate manufacturing facilities, thus enabling workshare distribution. They replaced the four Aero Spacelines Super Guppys previously used by Airbus. ICAO code: A3ST

Operators
There are 197 A300 family aircraft in commercial service. The five largest operators were FedEx Express (70), UPS Airlines (52), European Air Transport Leipzig (23), Iran Air (11), and Mahan Air (11).

Deliveries
Data through the end of December 2007.

Accidents and incidents
As of June 2021, the A300 has been involved in 77 occurrences, including 24 hull-loss accidents causing 1,133 fatalities, as well as criminal occurrences and hijackings that caused fatalities.

Accidents with fatalities
21 September 1987: An Egyptair Airbus A300B4-203 touched down past the runway threshold during a training flight. The right main gear hit the runway lights and the aircraft collided with an antenna and fences. No passengers were on board the plane, but five crew members were killed. The aircraft was written off. This was the first fatal accident of an Airbus A300.
28 September 1992: PIA Flight 268, an A300B4, crashed on approach near Kathmandu, Nepal. All 12 crew and 155 passengers died.
26 April 1994: China Airlines Flight 140 crashed at the end of the runway at Nagoya, Japan, killing all 15 crew and 249 of 256 passengers on board.
26 September 1997: Garuda Indonesia Flight 152, on approach to Polonia International Airport in Medan, crashed into a ravine in Buah Nabar due to ATC error and haze covering the area, which limited visibility. All 234 passengers and crew aboard were killed in Indonesia's deadliest crash.
16 February 1998: China Airlines Flight 676 crashed into a residential area close to CKS International Airport near Taipei, Taiwan. All 196 people on board were killed, including Taiwan's central bank president. Six people on the ground were also killed.
2 February 2000: During take-off, a Lockheed C-130 Hercules owned by the Iranian Air Force lost control and veered off the runway, striking an Airbus A300B2-203 owned by Iran Air and killing eight people.
12 November 2001: American Airlines Flight 587 crashed into Belle Harbor—a neighbourhood in Queens, New York, United States—shortly after takeoff from John F. Kennedy International Airport. The vertical stabiliser ripped off the aircraft after the rudder was mishandled during wake turbulence. All 260 people on board were killed, along with 5 people on the ground. It is the second-deadliest incident involving an A300 to date and the second-deadliest aircraft incident in the United States.
14 April 2010: AeroUnion Flight 302, an A300B4-203F, crashed on a road short of the runway while attempting to land at Monterrey Airport in Mexico. Seven people (five crew members and two on the ground) were killed.
14 August 2013: UPS Flight 1354, an Airbus A300F4-622R, crashed outside the perimeter fence on approach to Birmingham–Shuttlesworth International Airport in Birmingham, Alabama, United States. Both crew members died.

Non-fatal hull losses
18 December 1983: Malaysian Airline System Flight 684, an Airbus A300B4 leased from Scandinavian Airlines System (SAS), registration OY-KAA, crashed short of the runway at Kuala Lumpur in bad weather while attempting to land on a flight from Singapore. All 247 people aboard escaped unharmed, but the aircraft was destroyed in the resulting fire.
24 April 1993: An Air Inter Airbus A300B2-1C was written off after colliding with a light pole while being pushed back at Montpellier.
15 November 1993: An Indian Airlines Airbus A300, registered as VT-EDV, crash-landed near Hyderabad Airport. There were no deaths, but the aircraft was written off.
10 August 1994: Korean Air Flight 2033, an Airbus A300 from Seoul to Jeju, approached faster than usual to avoid potential windshear. Fifty feet above the runway, the co-pilot, who was not flying the aircraft, decided that there was insufficient runway left to land and tried to perform a go-around against the captain's wishes. The aircraft touched down 1,773 meters beyond the runway threshold, could not be stopped on the remaining 1,227 meters of runway, and overran at a speed of 104 knots. After striking the airport wall and a guard post at 30 knots, the aircraft burst into flames and was incinerated. The cabin crew was credited with safely evacuating all passengers, although only half of the aircraft's emergency exits were usable.
17 October 2001: Pakistan International Airlines flight PK231, registration AP-BCJ, from Islamabad via Peshawar to Dubai veered off the side of the runway after the right-hand main landing gear collapsed as it touched down. The aircraft skidded and eventually came to rest in sand 50 meters from the runway. The aircraft sustained damage to its right wing structure and its no. 2 engine, which partly broke off the wing. All 205 passengers and crew survived.
1 March 2004: Pakistan International Airlines Flight 2002 burst two tyres while taking off from King Abdulaziz International Airport. Fragments of the tyres were ingested by the engines, causing the engines to catch fire, and the takeoff was aborted.
The fire caused substantial damage to the engine and the left wing, and the aircraft was written off. All 261 passengers and 12 crew survived.
16 November 2012: An Air Contractors Airbus A300B4-203(F), EI-EAC, operating flight QY6321 on behalf of EAT Leipzig from Leipzig (Germany) to Bratislava (Slovakia), suffered a nose wheel collapse during roll-out after landing at Bratislava's M. R. Štefánik Airport. All three crew members survived unharmed; the aircraft was written off. As of December 2017, the aircraft was still parked in a remote area of the airport between runways 13 and 22.
12 October 2015: An Airbus A300B4-200F freighter operated by an Egyptian cargo carrier, Tristar, crashed in Mogadishu, Somalia. All the passengers and crew members survived the crash.
1 October 2016: An Airbus A300-B4, registration PR-STN, on a cargo flight between São Paulo–Guarulhos and Recife suffered a runway excursion after landing when the aft gear collapsed upon touchdown.

Violent incidents
27 June 1976: Air France Flight 139, originating in Tel Aviv, Israel, and carrying 248 passengers and a crew of 12, took off from Athens, Greece, headed for Paris, France. The flight was hijacked by terrorists and was eventually flown to Entebbe Airport in Uganda. At the airport, Israeli commandos rescued 102 of the 106 hostages.
26 October 1986: Thai Airways Flight 620, an Airbus A300B4-601, originating in Bangkok, suffered an explosion mid-flight. The aircraft descended rapidly and was able to land safely at Osaka. The aircraft was later repaired and there were no fatalities. The cause was a hand grenade brought onto the plane by a Japanese gangster of the Yamaguchi-gumi. 62 of the 247 people on board were injured.
3 July 1988: Iran Air Flight 655 was shot down by USS Vincennes in the Persian Gulf after being mistaken for an attacking Iranian F-14 Tomcat, killing all 290 passengers and crew.
15 February 1991: Two Kuwait Airways A300C4-620s and two Boeing 767s that had been seized during Iraq's occupation of Kuwait were destroyed in coalition bombing of Mosul Airport.
24 December 1994: Air France Flight 8969 was hijacked at Houari Boumedienne Airport in Algiers by four terrorists who belonged to the Armed Islamic Group. The terrorists apparently intended to crash the plane over the Eiffel Tower on Boxing Day. After a failed attempt to leave Marseille following a confrontational firefight between the terrorists and the GIGN French Special Forces, all four terrorists were killed. (Snipers on the roof of the terminal shot dead two of the terrorists; the other two died as a result of gunshots in the cabin after approximately 20 minutes.) Three hostages, including a Vietnamese diplomat, were executed in Algiers; 229 hostages survived, many of them wounded by shrapnel. The almost 15-year-old aircraft was written off.
24 December 1999: Indian Airlines Flight IC 814 from Kathmandu, Nepal, to New Delhi was hijacked. After refuelling and offloading a few passengers, the flight was diverted to Kandahar, Afghanistan. A Nepalese man was murdered while the plane was in flight.
22 November 2003: European Air Transport OO-DLL, operating on behalf of DHL Aviation, was hit by an SA-14 'Gremlin' missile after takeoff from Baghdad International Airport. The aeroplane lost hydraulic pressure and thus its flight controls. After extending the landing gear to create more drag, the crew piloted the plane using differences in engine thrust and landed the plane with minimal further damage.
The plane was repaired and offered for sale, but in April 2011 it still remained parked at Baghdad International Airport.
25 August 2011: An A300B4-620, 5A-IAY, of Afriqiyah Airways and an A300B4-622, 5A-DLZ, of Libyan Arab Airlines were both destroyed in fighting between pro- and anti-Gaddafi forces at Tripoli International Airport.

Aircraft on display
Fifteen A300s are currently preserved:
F-BUAD Airbus A300 ZERO-G, since August 2015 preserved at Cologne Bonn Airport, Germany.
F-WUAB, the first prototype of the Airbus A300, is partially preserved, with a fuselage section, the right-hand wing, and an engine on display at the Deutsches Museum.
ex-HL7219 Korean Air Airbus A300B4, preserved at Korean Air Jeongseok Airfield.
ex-N11984 Continental Airlines Airbus A300B4, preserved in South Korea as a Night Flight Restaurant.
ex TC-ACD and TC-ACE Air ACT, preserved as a coffee house at Uçak Cafe in Burhaniye, Turkey.
ex TC-MNJ MNG Airlines, preserved as the Köfte Airlines restaurant at Tekirdağ, Turkey.
ex TC-FLA Fly Air, preserved as the Airbus Cafe & Restaurant at Kayseri, Turkey.
ex TC-ACC Air ACT, preserved as the Uçak Kütüphane library and education centre at Çankırı, Turkey.
ex EP-MHA Mahan Air, preserved as an instructional airframe at the Botia Mahan Aviation College at Kerman, Iran.
ex TC-FLM Fly Air, preserved as a restaurant at Istanbul, Turkey.
ex B-18585 China Airlines, preserved as the Flight of Happiness restaurant at Taoyuan, Taiwan.
ex-PK-JID Sempati Air Airbus A300B4, repainted in the first A300B1 prototype's colours, including the original F-WUAB registration, became an exhibit in 2014 at the Aeroscopia museum in Blagnac, near Toulouse, France.
ex TC-MCE MNG Airlines, preserved as a restaurant at the Danialand theme park at Agadir, Morocco.
ex HL7240 Korean Air, preserved as an instructional airframe (gate guard) at the Korea Aerospace University at Goyang, South Korea.
ex HS-TAM Thai Airways A300-600R, preserved in a field near Doi Saket, Chiang Mai.

Specifications
Aircraft model designations
Agent Orange
Agent Orange is a chemical herbicide and defoliant, one of the tactical use Rainbow Herbicides. It was used by the U.S. military as part of its herbicidal warfare program, Operation Ranch Hand, during the Vietnam War from 1961 to 1971. The U.S. was strongly influenced by the British who used Agent Orange during the Malayan Emergency. It is a mixture of equal parts of two herbicides, 2,4,5-T and 2,4-D. Agent Orange was produced in the United States beginning in the late 1940s and was used in industrial agriculture, and was also sprayed along railroads and power lines to control undergrowth in forests. During the Vietnam War, the U.S. military procured over , consisting of a fifty-fifty mixture of 2,4-D and dioxin-contaminated 2,4,5-T. Nine chemical companies produced it: Dow Chemical Company, Monsanto Company, Diamond Shamrock Corporation, Hercules Inc., Thompson Hayward Chemical Co., United States Rubber Company (Uniroyal), Thompson Chemical Co., Hoffman-Taff Chemicals, Inc., and Agriselect. The government of Vietnam says that up to four million people in Vietnam were exposed to the defoliant, and as many as three million people have suffered illness because of Agent Orange, while the Vietnamese Red Cross estimates that up to one million people were disabled or have health problems as a result of exposure to Agent Orange. The United States government has described these figures as unreliable, while documenting cases of leukemia, Hodgkin's lymphoma, and various kinds of cancer in exposed U.S. military veterans, however without having conclusively found either a causal relationship or a plausible biological carcinogenic mechanism. An epidemiological study done by the Centers for Disease Control and Prevention showed that there was an increase in the rate of birth defects of the children of military personnel who were exposed to Agent Orange. Agent Orange has also caused enormous environmental damage in Vietnam. Over or of forest were defoliated. Defoliants eroded tree cover and seedling forest stock, making reforestation difficult in numerous areas. Animal species diversity is sharply reduced in contrast with unsprayed areas. The environmental destruction caused by this defoliation has been described by Swedish Prime Minister Olof Palme, lawyers, historians and other academics as an ecocide. The use of Agent Orange in Vietnam resulted in numerous legal actions. The United Nations ratified United Nations General Assembly Resolution 31/72 and the Environmental Modification Convention. Lawsuits filed on behalf of both U.S. and Vietnamese veterans sought compensation for damages. Agent Orange was first used by British Commonwealth forces in Malaya during the Malayan Emergency. It was also used by the U.S. military in Laos and Cambodia during the Vietnam War because forests near the border with Vietnam were used by the Viet Cong. Chemical composition The active ingredient of Agent Orange was an equal mixture of two phenoxy herbicides – 2,4-dichlorophenoxyacetic acid (2,4-D) and 2,4,5-trichlorophenoxyacetic acid (2,4,5-T) – in iso-octyl ester form, which contained traces of the dioxin 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD). TCDD was a trace (typically 2–3 ppm, ranging from 50 ppb to 50 ppm) - but significant - contaminant of Agent Orange. Toxicology TCDD is the most toxic of the dioxins and is classified as a human carcinogen by the U.S. Environmental Protection Agency (EPA). The fat-soluble nature of TCDD causes it to enter the body readily through physical contact or ingestion. 
Dioxins accumulate easily in the food chain. Dioxin enters the body by attaching to a protein called the aryl hydrocarbon receptor (AhR), a transcription factor. When TCDD binds to AhR, the protein moves to the cell nucleus, where it influences gene expression. According to U.S. government reports, if not bound chemically to a biological surface such as soil, leaves or grass, Agent Orange dries quickly after spraying and breaks down within hours to days when exposed to sunlight and is no longer harmful. Development Several herbicides were developed as part of efforts by the United States and the United Kingdom to create herbicidal weapons for use during World War II. These included 2,4-D, 2,4,5-T, MCPA (2-methyl-4-chlorophenoxyacetic acid, 1414B and 1414A, recoded LN-8 and LN-32), and isopropyl phenylcarbamate (1313, recoded LN-33). In 1943, the United States Department of the Army contracted botanist (and later bioethicist) Arthur Galston, who discovered the defoliants later used in Agent Orange, and his employer University of Illinois Urbana-Champaign to study the effects of 2,4-D and 2,4,5-T on cereal grains (including rice) and broadleaf crops. While a graduate and post-graduate student at the University of Illinois, Galston's research and dissertation focused on finding a chemical means to make soybeans flower and fruit earlier. He discovered both that 2,3,5-triiodobenzoic acid (TIBA) would speed up the flowering of soybeans and that in higher concentrations it would defoliate the soybeans. From these studies arose the concept of using aerial applications of herbicides to destroy enemy crops to disrupt their food supply. In early 1945, the U.S. Army ran tests of various 2,4-D and 2,4,5-T mixtures at the Bushnell Army Airfield in Florida. As a result, the U.S. began a full-scale production of 2,4-D and 2,4,5-T and would have used it against Japan in 1946 during Operation Downfall if the war had continued. In the years after the war, the U.S. tested 1,100 compounds, and field trials of the more promising ones were done at British stations in India and Australia, in order to establish their effects in tropical conditions, as well as at the U.S. testing ground in Florida. Between 1950 and 1952, trials were conducted in Tanganyika, at Kikore and Stunyansa, to test arboricides and defoliants under tropical conditions. The chemicals involved were 2,4-D, 2,4,5-T, and endothall (3,6-endoxohexahydrophthalic acid). During 1952–53, the unit supervised the aerial spraying of 2,4,5-T in Kenya to assess the value of defoliants in the eradication of tsetse fly. Early use In Malaya, the local unit of Imperial Chemical Industries researched defoliants as weed killers for rubber plantations. Roadside ambushes by the Malayan National Liberation Army were a danger to the British Commonwealth forces during the Malayan Emergency, several trials were made to defoliate vegetation that might hide ambush sites, but hand removal was found cheaper. A detailed account of how the British experimented with the spraying of herbicides was written by two scientists, E. K. Woodford of Agricultural Research Council's Unit of Experimental Agronomy and H. G. H. Kearns of the University of Bristol. After the Malayan Emergency ended in 1960, the U.S. considered the British precedent in deciding that the use of defoliants was a legal tactic of warfare. Secretary of State Dean Rusk advised President John F. Kennedy that the British had established a precedent for warfare with herbicides in Malaya. 
Use in the Vietnam War In mid-1961, President Ngo Dinh Diem of South Vietnam asked the United States to help defoliate the lush jungle that was providing cover to his enemies. In August of that year, the Republic of Vietnam Air Force conducted herbicide operations with American help. Diem's request launched a policy debate in the White House and the State and Defense Departments. Many U.S. officials supported herbicide operations, pointing out that the British had already used herbicides and defoliants in Malaya during the 1950s. In November 1961, Kennedy authorized the start of Operation Ranch Hand, the codename for the United States Air Force's herbicide program in Vietnam. The herbicide operations were formally directed by the government of South Vietnam. During the Vietnam War, between 1962 and 1971, the United States military sprayed nearly of various chemicals – the "rainbow herbicides" and defoliants – in Vietnam, eastern Laos, and parts of Cambodia as part of Operation Ranch Hand, reaching its peak from 1967 to 1969. For comparison purposes, an olympic size pool holds approximately . As the British did in Malaya, the goal of the U.S. was to defoliate rural/forested land, depriving guerrillas of food and concealment and clearing sensitive areas such as around base perimeters and possible ambush sites along roads and canals. Samuel P. Huntington argued that the program was also a part of a policy of forced draft urbanization, which aimed to destroy the ability of peasants to support themselves in the countryside, forcing them to flee to the U.S.-dominated cities, depriving the guerrillas of their rural support base. Agent Orange was usually sprayed from helicopters or from low-flying C-123 Provider aircraft, fitted with sprayers and "MC-1 Hourglass" pump systems and chemical tanks. Spray runs were also conducted from trucks, boats, and backpack sprayers. Altogether, over of Agent Orange were applied. The first batch of herbicides was unloaded at Tan Son Nhut Air Base in South Vietnam, on January 9, 1962. U.S. Air Force records show at least 6,542 spraying missions took place over the course of Operation Ranch Hand. By 1971, 12 percent of the total area of South Vietnam had been sprayed with defoliating chemicals, at an average concentration of 13 times the recommended U.S. Department of Agriculture application rate for domestic use. In South Vietnam alone, an estimated of agricultural land was ultimately destroyed. In some areas, TCDD concentrations in soil and water were hundreds of times greater than the levels considered safe by the EPA. The campaign destroyed of upland and mangrove forests and thousands of square kilometres of crops. Overall, more than 20% of South Vietnam's forests were sprayed at least once over the nine-year period. 3.2% of South Vietnam's cultivated land was sprayed at least once between 1965 and 1971. 90% of herbicide use was directed at defoliation. The U.S. military began targeting food crops in October 1962, primarily using Agent Blue; the American public was not made aware of the crop destruction programs until 1965 (and it was then believed that crop spraying had begun that spring). In 1965, 42% of all herbicide spraying was dedicated to food crops. In 1965, members of the U.S. Congress were told, "crop destruction is understood to be the more important purpose ... but the emphasis is usually given to the jungle defoliation in public mention of the program." The first official acknowledgment of the programs came from the State Department in March 1966. 
When crops were destroyed, the Viet Cong would compensate for the loss of food by confiscating more food from local villages. Some military personnel reported being told they were destroying crops used to feed guerrillas, only to later discover that most of the destroyed food was actually produced to support the local civilian population. For example, according to Wil Verwey, 85% of the crop lands in Quang Ngai province were scheduled to be destroyed in 1970 alone. He estimated this would have caused famine and left hundreds of thousands of people without food or malnourished in the province. According to a report by the American Association for the Advancement of Science, the herbicide campaign had disrupted the food supply of more than 600,000 people by 1970.

Many experts at the time, including plant physiologist and bioethicist Arthur Galston, opposed herbicidal warfare because of concerns about the side effects on humans and the environment of indiscriminately spraying the chemical over a wide area. As early as 1966, resolutions were introduced to the United Nations charging that the U.S. was violating the 1925 Geneva Protocol, which regulated the use of chemical and biological weapons in international conflicts. The U.S. defeated most of the resolutions, arguing that Agent Orange was not a chemical or a biological weapon, as it was considered a herbicide and a defoliant and was used in an effort to destroy plant crops and to deprive the enemy of concealment, not to target human beings. The U.S. delegation argued that a weapon, by definition, is any device used to injure, defeat, or destroy living beings, structures, or systems, and Agent Orange did not qualify under that definition. It also argued that if the U.S. were to be charged for using Agent Orange, then the United Kingdom and its Commonwealth nations should be charged since they also used it widely during the Malayan Emergency in the 1950s. In 1969, the United Kingdom commented on the draft Resolution 2603 (XXIV): "The evidence seems to us to be notably inadequate for the assertion that the use in war of chemical substances specifically toxic to plants is prohibited by international law." The environmental destruction caused by this defoliation has been described by Swedish Prime Minister Olof Palme, lawyers, historians and other academics as an ecocide.

A study carried out by the Bionetic Research Laboratories between 1965 and 1968 found malformations in test animals caused by 2,4,5-T, a component of Agent Orange. The study was later brought to the attention of the White House in October 1969. Other studies reported similar results, and the Department of Defense began to reduce the herbicide operation. On April 15, 1970, it was announced that the use of Agent Orange was suspended. In the summer of 1970, two brigades of the Americal Division continued to use Agent Orange for crop destruction in violation of the suspension. An investigation led to disciplinary action against the brigade and division commanders because they had falsified reports to hide its use. Defoliation and crop destruction were completely stopped by June 30, 1971.

Health effects
There are various types of cancer associated with Agent Orange, including chronic B-cell leukemia, Hodgkin's lymphoma, multiple myeloma, non-Hodgkin's lymphoma, prostate cancer, respiratory cancer, lung cancer, and soft tissue sarcomas.
Vietnamese people
The government of Vietnam states that 4 million of its citizens were exposed to Agent Orange, and as many as 3 million have suffered illnesses because of it; these figures include the children of people who were exposed. The Red Cross of Vietnam estimates that up to 1 million people are disabled or have health problems due to Agent Orange contamination. The United States government has challenged these figures as being unreliable. According to a study by Dr. Nguyen Viet Nhan, children in the areas where Agent Orange was used have been affected and have multiple health problems, including cleft palate, mental disabilities, hernias, and extra fingers and toes. In the 1970s, high levels of dioxin were found in the breast milk of South Vietnamese women, and in the blood of U.S. military personnel who had served in Vietnam. The most affected zones are the mountainous area along Truong Son (Long Mountains) and the border between Vietnam and Cambodia. The affected residents are living in substandard conditions with many genetic diseases.

In 2006, Anh Duc Ngo and colleagues of the University of Texas Health Science Center published a meta-analysis that exposed a large amount of heterogeneity (different findings) between studies, a finding consistent with a lack of consensus on the issue. Despite this, statistical analysis of the studies they examined indicated that the increase in birth defects/relative risk (RR) from exposure to Agent Orange/dioxin "appears" to be on the order of 3 in Vietnamese-funded studies, but 1.29 in the rest of the world. There is data near the threshold of statistical significance suggesting Agent Orange contributes to still-births, cleft palate, and neural tube defects, with spina bifida being the most statistically significant defect. The large discrepancy in RR between Vietnamese studies and those in the rest of the world has been ascribed to bias in the Vietnamese studies.

Twenty-eight of the former U.S. military bases in Vietnam where the herbicides were stored and loaded onto airplanes may still have high levels of dioxins in the soil, posing a health threat to the surrounding communities. Extensive testing for dioxin contamination has been conducted at the former U.S. airbases in Da Nang, Phù Cát District and Biên Hòa. Some of the soil and sediment on the bases have extremely high levels of dioxin requiring remediation. The Da Nang Air Base has dioxin contamination up to 350 times higher than international recommendations for action. The contaminated soil and sediment continue to affect the citizens of Vietnam, poisoning their food chain and causing illnesses, serious skin diseases and a variety of cancers in the lungs, larynx, and prostate.

Vietnam veterans
While in Vietnam, U.S. and Free World Military Assistance Forces soldiers were told not to worry about Agent Orange and were persuaded the chemical was harmless. After returning home, Vietnam veterans from all countries that served began to suspect their ill health, or the instances of their wives having miscarriages or children born with birth defects, might be related to Agent Orange and the other toxic herbicides to which they had been exposed in Vietnam.

U.S. veterans
Veterans began to file claims in 1977 to the Department of Veterans Affairs for disability payments for health care for conditions they believed were associated with exposure to Agent Orange, or more specifically, dioxin, but their claims were denied unless they could prove the condition began when they were in the service or within one year of their discharge. In order to qualify for compensation, U.S. veterans must have served on or near the perimeters of military bases in Thailand during the Vietnam Era, where herbicides were tested and stored outside of Vietnam, veterans who were crew members on C-123 planes flown after the Vietnam War, or were associated with Department of Defense (DoD) projects to test, dispose of, or store herbicides in the U.S. By April 1993, the Department of Veterans Affairs had compensated only 486 victims, although it had received disability claims from 39,419 soldiers who had been exposed to Agent Orange while serving in Vietnam. In a November 2004 Zogby International poll of 987 people, 79% of respondents thought the U.S. chemical companies which produced Agent Orange defoliant should compensate U.S. soldiers who were affected by the toxic chemical used during the war in Vietnam and 51% said they supported compensation for Vietnamese Agent Orange victims. Australian and New Zealand veterans Several official investigations in Australia failed to prove otherwise even though extant American investigations had already established that defoliants were sprayed at U.S. airbases including Bien Hoa Air Base where Australian and New Zealand forces first served before being given their own Tactical area of responsibility (TAOR.) Even then, Australian and New Zealand non-military and military contributions saw personnel from both countries spread over Vietnam such as the hospitals at Bong Son and Qui Nhon, on secondments at various bases, and as flight crew and ground crew for flights into and out of Da Nang Air Base - all areas that were well-documented as having been sprayed. It wasn't until a group of Australian veterans produced official military records, maps, and mission data as proof that the TAOR controlled by Australian and New Zealand forces in Vietnam had been sprayed with the chemicals in the presence of personnel that the Australian government was forced to change their stance. Only in 1994 did the Australian government finally admit that it was true that defoliants had been used in areas of Vietnam where Australian forces operated and the effects of these may have been detrimental to some Vietnam veterans and their children. It was only in 2015 that the official Australian War Memorial accepted rewriting the official history of Australia's involvement in the Vietnam War to acknowledge that Australian soldiers were exposed to defoliants used in Vietnam. New Zealand was even slower to correct their error, with the government going as far as to deny the legitimacy of the Australian reports in a report called the "McLeod Report" published by Veterans Affairs NZ in 2001 thus infuriating New Zealand veterans and those associated with their cause. 
In 2006 progress was made in the form of a Memorandum of Understanding signed between the New Zealand government, representatives of New Zealand Vietnam veterans, and the Royal New Zealand Returned and Services' Association (RSA) for monetary compensation for New Zealand Vietnam veterans who have conditions as evidence of association with exposure to Agent Orange, as determined by the United States National Academy of Sciences. In 2008 the New Zealand government finally admitted that New Zealanders had in fact been exposed to Agent Orange while serving in Vietnam and the experience was responsible for detrimental health conditions in veterans and their children. Amendments to the memorandum made in 2021 meant that more veterans were eligible for an ex gratia payment of NZ$40,000. National Academy of Medicine (Institute of Medicine) Starting in the early 1990s, the federal government directed the Institute of Medicine (IOM), now known as the National Academy of Medicine, to issue reports every 2 years on the health effects of Agent Orange and similar herbicides. First published in 1994 and titled Veterans and Agent Orange, the IOM reports assess the risk of both cancer and non-cancer health effects. Each health effect is categorized by evidence of association based on available research data. The last update was published in 2016, entitled Veterans and Agent Orange: Update 2014. The report shows sufficient evidence of an association with soft tissue sarcoma; non-Hodgkin lymphoma (NHL); Hodgkin disease; Chronic lymphocytic leukemia (CLL); including hairy cell leukemia and other chronic B-cell leukemias. Limited or suggested evidence of an association was linked with respiratory cancers (lung, bronchus, trachea, larynx); prostate cancer; multiple myeloma; and bladder cancer. Numerous other cancers were determined to have inadequate or insufficient evidence of links to Agent Orange. The National Academy of Medicine has repeatedly concluded that any evidence suggestive of an association between Agent Orange and prostate cancer is, "limited because chance, bias, and confounding could not be ruled out with confidence." At the request of the Veterans Administration, the Institute Of Medicine evaluated whether service in these C-123 aircraft could have plausibly exposed soldiers and been detrimental to their health. Their report Post-Vietnam Dioxin Exposure in Agent Orange-Contaminated C-123 Aircraft confirmed it. U.S. Public Health Service Publications by the United States Public Health Service have shown that Vietnam veterans, overall, have increased rates of cancer, and nerve, digestive, skin, and respiratory disorders. The Centers for Disease Control and Prevention notes that in particular, there are higher rates of acute/chronic leukemia, Hodgkin's lymphoma and non-Hodgkin's lymphoma, throat cancer, prostate cancer, lung cancer, colon cancer, Ischemic heart disease, soft tissue sarcoma, and liver cancer. With the exception of liver cancer, these are the same conditions the U.S. Veterans Administration has determined may be associated with exposure to Agent Orange/dioxin and are on the list of conditions eligible for compensation and treatment. Military personnel who were involved in storage, mixture and transportation (including aircraft mechanics), and actual use of the chemicals were probably among those who received the heaviest exposures. Military members who served on Okinawa also claim to have been exposed to the chemical, but there is no verifiable evidence to corroborate these claims. 
Some studies have suggested that veterans exposed to Agent Orange may be more at risk of developing prostate cancer and potentially more than twice as likely to develop higher-grade, more lethal prostate cancers. However, a critical analysis of these studies and 35 others consistently found that there was no significant increase in prostate cancer incidence or mortality in those exposed to Agent Orange or 2,3,7,8-tetracholorodibenzo-p-dioxin. U.S. Veterans of Laos and Cambodia During the Vietnam War, the United States fought the North Vietnamese, and their allies, in Laos and Cambodia, including heavy bombing campaigns. They also sprayed large quantities of Agent Orange in each of those countries. According to one estimate, the U.S. dropped in Laos and in Cambodia. Because Laos and Cambodia were both officially neutral during the Vietnam War, the U.S. attempted to keep secret its military operations in those countries, from the American population and has largely avoided compensating American veterans and CIA personnel stationed in Cambodia and Laos who suffered permanent injuries as a result of exposure to Agent Orange there. One noteworthy exception, according to the U.S. Department of Labor, is a claim filed with the CIA by an employee of "a self-insured contractor to the CIA that was no longer in business." The CIA advised the Department of Labor that it "had no objections" to paying the claim and Labor accepted the claim for payment: Ecological impact About 17.8% or of the total forested area of Vietnam was sprayed during the war, which disrupted the ecological equilibrium. The persistent nature of dioxins, erosion caused by loss of tree cover, and loss of seedling forest stock meant that reforestation was difficult (or impossible) in many areas. Many defoliated forest areas were quickly invaded by aggressive pioneer species (such as bamboo and cogon grass), making forest regeneration difficult and unlikely. Animal species diversity was also impacted; in one study a Harvard biologist found 24 species of birds and 5 species of mammals in a sprayed forest, while in two adjacent sections of unsprayed forest there were, respectively, 145 and 170 species of birds and 30 and 55 species of mammals. Dioxins from Agent Orange have persisted in the Vietnamese environment since the war, settling in the soil and sediment and entering the food chain through animals and fish which feed in the contaminated areas. The movement of dioxins through the food web has resulted in bioconcentration and biomagnification. The areas most heavily contaminated with dioxins are former U.S. air bases. Sociopolitical impact American policy during the Vietnam War was to destroy crops, accepting the sociopolitical impact that that would have. The RAND Corporation's Memorandum 5446-ISA/ARPA states: "the fact that the VC [the Vietcong] obtain most of their food from the neutral rural population dictates the destruction of civilian crops ... if they are to be hampered by the crop destruction program, it will be necessary to destroy large portions of the rural economy – probably 50% or more". Crops were deliberately sprayed with Agent Orange and areas were bulldozed clear of vegetation forcing many rural civilians to cities. Legal and diplomatic proceedings International The extensive environmental damage that resulted from usage of the herbicide prompted the United Nations to pass Resolution 31/72 and ratify the Environmental Modification Convention. 
Many states do not regard this as a complete ban on the use of herbicides and defoliants in warfare, but it does require case-by-case consideration. Article 2(4) of Protocol III of the Convention on Certain Conventional Weapons contains the "Jungle Exception", which prohibits states from attacking forests or jungles "except if such natural elements are used to cover, conceal or camouflage combatants or military objectives or are military objectives themselves". This exception removes any protection for military and civilian personnel from napalm attacks or from defoliants such as Agent Orange, and it has been argued that it was clearly designed to cover situations like U.S. tactics in Vietnam. Class action lawsuit Since at least 1978, several lawsuits have been filed against the companies which produced Agent Orange, among them Dow Chemical, Monsanto, and Diamond Shamrock. In 1978, army veteran Paul Reutershan sued Dow Chemical for $10 million after he was diagnosed with terminal cancer that he believed was a result of Agent Orange exposure. After Reutershan died in December 1978, his attorneys added additional plaintiffs and refiled the lawsuit as a class action. That lawsuit would eventually represent thousands of veterans, and was considered one of the largest and most complex lawsuits ever brought in the US at that time. Attorney Hy Mayerson was an early pioneer in Agent Orange litigation, working with environmental attorney Victor Yannacone in 1980 on the first class-action suits against wartime manufacturers of Agent Orange. After meeting Dr. Ronald A. Codario, one of the first civilian doctors to see affected patients, Mayerson was so impressed that a physician would take such an interest in Vietnam veterans that he forwarded more than a thousand pages of information on Agent Orange and the effects of dioxin on animals and humans to Codario's office the day after the doctor first contacted him. The corporate defendants sought to escape culpability by blaming everything on the U.S. government. In 1980, Mayerson, with Sgt. Charles E. Hartz as his principal client, filed the first U.S. Agent Orange class-action lawsuit in Pennsylvania, for the injuries military personnel in Vietnam suffered through exposure to toxic dioxins in the defoliant. Attorney Mayerson co-wrote the brief that certified the Agent Orange Product Liability action as a class action, the largest such action ever filed up to that time. Hartz's deposition, one of the first of its kind taken in America and the first for an Agent Orange trial, was recorded to preserve his testimony for trial, as it was understood that Hartz would not live to see the trial because of a brain tumor that had begun to develop while he was a member of Tiger Force, special forces, and LRRPs in Vietnam. The firm also located and supplied critical research to the veterans' lead expert, Dr. Codario, including about 100 articles from toxicology journals dating back more than a decade, as well as data about where herbicides had been sprayed, what the effects of dioxin had been on animals and humans, and every accident in factories where herbicides were produced or dioxin was a contaminant of some chemical reaction. The chemical companies involved denied that there was a link between Agent Orange and the veterans' medical problems. However, on May 7, 1984, seven chemical companies settled the class-action suit out of court just hours before jury selection was to begin.
The companies agreed to pay $180 million as compensation if the veterans dropped all claims against them. Slightly over 45% of the sum was ordered to be paid by Monsanto alone. Many veterans who were victims of Agent Orange exposure were outraged that the case had been settled instead of going to court and felt they had been betrayed by the lawyers. "Fairness Hearings" were held in five major American cities, where veterans and their families discussed their reactions to the settlement and condemned the actions of the lawyers and courts, demanding the case be heard before a jury of their peers. Federal Judge Jack B. Weinstein refused the appeals, claiming the settlement was "fair and just". By 1989, the veterans' fears were confirmed when it was decided how the money from the settlement would be paid out. A totally disabled Vietnam veteran would receive a maximum of $12,000 spread out over the course of 10 years. Furthermore, by accepting the settlement payments, disabled veterans would become ineligible for many state benefits that provided far more monetary support than the settlement, such as food stamps, public assistance, and government pensions. A widow of a Vietnam veteran who died of Agent Orange exposure would receive $3,700. In 2004, Monsanto spokesperson Jill Montgomery said Monsanto should not be liable at all for injuries or deaths caused by Agent Orange, saying: "We are sympathetic with people who believe they have been injured and understand their concern to find the cause, but reliable scientific evidence indicates that Agent Orange is not the cause of serious long-term health effects." On 22 August 2024, the Court of Appeal of Paris dismissed an appeal filed by Tran To Nga against 14 US corporations that supplied Agent Orange for the US army during the war in Vietnam. The lawyers said that Nga would take her case to France's highest appeals court. Only military veterans from the United States and its allies in the war have won compensation so far. Some of the agrochemical companies in the U.S. have compensated U.S. veterans, but not Vietnamese victims. New Jersey Agent Orange Commission In 1980, New Jersey created the New Jersey Agent Orange Commission, the first state commission created to study the effects of Agent Orange. The commission's research project in association with Rutgers University was called "The Pointman Project". It was disbanded by Governor Christine Todd Whitman in 1996. During the first phase of the project, commission researchers devised ways to determine trace dioxin levels in blood. Prior to this, such levels could only be measured in adipose (fat) tissue. The project studied dioxin (TCDD) levels in blood as well as in adipose tissue in a small group of Vietnam veterans who had been exposed to Agent Orange and compared them to those of a matched control group; the levels were found to be higher in the exposed group. The second phase of the project continued to examine and compare dioxin levels in various groups of Vietnam veterans, including soldiers, Marines, and brown-water Navy personnel. U.S. Congress In 1991, Congress enacted the Agent Orange Act, giving the Department of Veterans Affairs the authority to declare certain conditions "presumptive" consequences of exposure to Agent Orange/dioxin, making veterans who served in Vietnam eligible to receive treatment and compensation for these conditions.
The same law required the National Academy of Sciences to periodically review the science on dioxin and herbicides used in Vietnam to inform the Secretary of Veterans Affairs about the strength of the scientific evidence showing association between exposure to Agent Orange/dioxin and certain conditions. The authority for the National Academy of Sciences reviews, and for the addition of any new diseases to the presumptive list by the VA, expired in 2015 under the sunset clause of the Agent Orange Act of 1991. Through this process, the list of 'presumptive' conditions has grown since 1991, and currently the U.S. Department of Veterans Affairs has listed prostate cancer, respiratory cancers, multiple myeloma, type II diabetes mellitus, Hodgkin's disease, non-Hodgkin's lymphoma, soft tissue sarcoma, chloracne, porphyria cutanea tarda, peripheral neuropathy, chronic lymphocytic leukemia, and spina bifida in children of veterans exposed to Agent Orange as conditions associated with exposure to the herbicide. The list now also includes B-cell leukemias (such as hairy cell leukemia), Parkinson's disease, and ischemic heart disease, the last three having been added on August 31, 2010. Several highly placed individuals in government have voiced concerns about whether some of the diseases on the list should in fact have been included. In 2011, an appraisal of the 20-year-long Air Force Health Study, which began in 1982, indicated that the results of the AFHS, as they pertain to Agent Orange, do not provide evidence of disease in the Operation Ranch Hand veterans caused by "their elevated levels of exposure to Agent Orange". The VA initially denied the applications of post-Vietnam C-123 aircrew veterans because, as veterans without "boots on the ground" service in Vietnam, they were not covered under the VA's interpretation of "exposed". In June 2015, the Secretary of Veterans Affairs issued an interim final rule providing presumptive service connection for post-Vietnam C-123 aircrews, maintenance staff and aeromedical evacuation crews. The VA now provides medical care and disability compensation for the recognized list of Agent Orange illnesses. U.S.–Vietnamese government negotiations In 2002, Vietnam and the U.S. held a joint conference on Human Health and Environmental Impacts of Agent Orange. Following the conference, the U.S. National Institute of Environmental Health Sciences (NIEHS) began scientific exchanges between the U.S. and Vietnam, and began discussions for a joint research project on the human health impacts of Agent Orange. These negotiations broke down in 2005, when neither side could agree on the research protocol, and the research project was canceled. More progress has been made on the environmental front. In 2005, the first U.S.-Vietnam workshop on remediation of dioxin was held. Starting in 2005, the EPA began to work with the Vietnamese government to measure the level of dioxin at the Da Nang Air Base. Also in 2005, the Joint Advisory Committee on Agent Orange, made up of representatives of Vietnamese and U.S. government agencies, was established. The committee has been meeting yearly to explore areas of scientific cooperation, technical assistance and environmental remediation of dioxin. A breakthrough in the diplomatic stalemate on this issue occurred as a result of United States President George W. Bush's state visit to Vietnam in November 2006.
In the joint statement, President Bush and President Triet agreed that "further joint efforts to address the environmental contamination near former dioxin storage sites would make a valuable contribution to the continued development of their bilateral relationship." On May 25, 2007, President Bush signed into law the U.S. Troop Readiness, Veterans' Care, Katrina Recovery, and Iraq Accountability Appropriations Act, 2007, a funding bill for the wars in Iraq and Afghanistan that included an earmark of $3 million specifically for programs for the remediation of dioxin 'hotspots' on former U.S. military bases, and for public health programs for the surrounding communities; some authors consider this to be completely inadequate, pointing out that the Da Nang Airbase alone will cost $14 million to clean up, and that three others are estimated to require $60 million for cleanup. The appropriation was renewed in fiscal year 2009 and again in FY 2010. An additional $12 million was appropriated in fiscal year 2010 in the Supplemental Appropriations Act, and a total of $18.5 million was appropriated for fiscal year 2011. Secretary of State Hillary Clinton stated during a visit to Hanoi in October 2010 that the U.S. government would begin work on the clean-up of dioxin contamination at the Da Nang Airbase. In June 2011, a ceremony was held at Da Nang airport to mark the start of U.S.-funded decontamination of dioxin hotspots in Vietnam. Thirty-two million dollars has so far been allocated by the U.S. Congress to fund the program. A $43 million project began in the summer of 2012, as Vietnam and the U.S. forged closer ties to boost trade and counter China's rising influence in the disputed South China Sea. Vietnamese victims class action lawsuit in U.S. courts On January 31, 2004, a victims' rights group, the Vietnam Association for Victims of Agent Orange/dioxin (VAVA), filed a lawsuit in the United States District Court for the Eastern District of New York in Brooklyn against several U.S. companies, alleging liability for personal injury caused by the development and production of the chemical, and claimed that the use of Agent Orange violated the 1907 Hague Convention on Land Warfare, the 1925 Geneva Protocol, and the 1949 Geneva Conventions. Dow Chemical and Monsanto were the two largest producers of Agent Orange for the U.S. military and were named in the suit, along with dozens of other companies (Diamond Shamrock, Uniroyal, Thompson Chemicals, Hercules, etc.). On March 10, 2005, Judge Jack B. Weinstein of the Eastern District – who had presided over the 1984 U.S. veterans class-action lawsuit – dismissed the lawsuit, ruling there was no legal basis for the plaintiffs' claims. He concluded that Agent Orange was not considered a poison under international humanitarian law at the time of its use by the U.S.; that the U.S. was not prohibited from using it as a herbicide; and that the companies which produced the substance were not liable for the method of its use by the government. In the dismissal statement, Weinstein wrote: "The prohibition extended only to gases deployed for their asphyxiating or toxic effects on man, not to herbicides designed to affect plants that may have unintended harmful side-effects on people."
Author and activist George Jackson had written previously that: If the Americans were guilty of war crimes for using Agent Orange in Vietnam, then the British would be also guilty of war crimes as well since they were the first nation to deploy the use of herbicides and defoliants in warfare and used them on a large scale throughout the Malayan Emergency. Not only was there no outcry by other states in response to the United Kingdom's use, but the U.S. viewed it as establishing a precedent for the use of herbicides and defoliants in jungle warfare. The U.S. government was also not a party in the lawsuit because of sovereign immunity, and the court ruled that the chemical companies, as contractors of the U.S. government, shared the same immunity. The case was appealed and heard by the Second Circuit Court of Appeals in Manhattan on June 18, 2007. Three judges on the court upheld Weinstein's ruling to dismiss the case. They ruled that, though the herbicides contained a dioxin (a known poison), they were not intended to be used as a poison on humans. Therefore, they were not considered a chemical weapon and thus not a violation of international law. A further review of the case by the entire panel of judges of the Court of Appeals also confirmed this decision. The lawyers for the Vietnamese filed a petition asking the U.S. Supreme Court to hear the case. On March 2, 2009, the Supreme Court denied certiorari and declined to reconsider the ruling of the Court of Appeals. Help for those affected in Vietnam To assist those who have been affected by Agent Orange/dioxin, the Vietnamese have established "peace villages", which each host between 50 and 100 victims, giving them medical and psychological help. As of 2006, there were 11 such villages, thus granting some social protection to fewer than a thousand victims. U.S. veterans of the war in Vietnam and individuals who are aware of and sympathetic to the impacts of Agent Orange have supported these programs in Vietnam. An international group of veterans from the U.S. and its allies during the Vietnam War working with their former enemy—veterans from the Vietnam Veterans Association—established the Vietnam Friendship Village outside of Hanoi. The center provides medical care, rehabilitation and vocational training for children and veterans from Vietnam who have been affected by Agent Orange. In 1998, the Vietnam Red Cross established the Vietnam Agent Orange Victims Fund to provide direct assistance to families throughout Vietnam that have been affected. In 2003, the Vietnam Association of Victims of Agent Orange (VAVA) was formed. In addition to filing the lawsuit against the chemical companies, VAVA provides medical care, rehabilitation services and financial assistance to those injured by Agent Orange. The Vietnamese government provides small monthly stipends to more than 200,000 Vietnamese believed affected by the herbicides; this totaled $40.8 million in 2008. The Vietnam Red Cross has raised more than $22 million to assist the ill or disabled, and several U.S. foundations, United Nations agencies, European governments and nongovernmental organizations have given a total of about $23 million for site cleanup, reforestation, health care and other services to those in need. Vuong Mo of the Vietnam News Agency described one of the centers: May is 13, but she knows nothing, is unable to talk fluently, nor walk with ease due to her bandy legs. Her father is dead and she has four elder brothers, all mentally retarded ...
The students are all disabled, retarded and of different ages. Teaching them is a hard job. They are of the 3rd grade but many of them find it hard to do the reading. Only a few of them can. Their pronunciation is distorted due to their twisted lips and their memory is quite short. They easily forget what they've learned ... In the Village, it is quite hard to tell the kids' exact ages. Some in their twenties have physical statures as small as a 7- or 8-year-old's. They find it difficult to feed themselves, much less have mental ability or physical capacity for work. No one can hold back the tears when seeing the heads turning round unconsciously, the bandy arms managing to push the spoon of food into the mouths with awful difficulty ... Yet they still keep smiling, singing in their great innocence, at the presence of some visitors, craving for something beautiful. On June 16, 2010, members of the U.S.-Vietnam Dialogue Group on Agent Orange/Dioxin unveiled a comprehensive 10-year Declaration and Plan of Action to address the toxic legacy of Agent Orange and other herbicides in Vietnam. The Plan of Action was released as an Aspen Institute publication and calls upon the U.S. and Vietnamese governments to join with other governments, foundations, businesses, and nonprofits in a partnership to clean up dioxin "hot spots" in Vietnam and to expand humanitarian services for people with disabilities there. On September 16, 2010, Senator Patrick Leahy acknowledged the work of the Dialogue Group by releasing a statement on the floor of the United States Senate. The statement urged the U.S. government to take the Plan of Action's recommendations into account in developing a multi-year plan of activities to address the Agent Orange/dioxin legacy. Use outside of Vietnam Australia In 2008, Australian researcher Jean Williams claimed that cancer rates in Innisfail, Queensland, were 10 times higher than the state average because of secret testing of Agent Orange by Australian military scientists during the Vietnam War. Williams, who had won the Order of Australia medal for her research on the effects of chemicals on U.S. war veterans, based her allegations on Australian government reports found in the Australian War Memorial's archives. A former soldier, Ted Bosworth, backed up the claims, saying that he had been involved in the secret testing. Neither Williams nor Bosworth has produced verifiable evidence to support their claims. The Queensland health department determined that cancer rates in Innisfail were no higher than those in other parts of the state. Canada The U.S. military, with the permission of the Canadian government, tested herbicides, including Agent Orange, in the forests near Canadian Forces Base Gagetown in New Brunswick. In 2007, the government of Canada offered a one-time ex gratia payment of $20,000 as compensation for Agent Orange exposure at CFB Gagetown. On July 12, 2005, Merchant Law Group, on behalf of over 1,100 Canadian veterans and civilians who were living in and around CFB Gagetown, filed a lawsuit with the Federal Court of Canada to pursue class action litigation concerning Agent Orange and Agent Purple. On August 4, 2009, the court rejected the case, citing a lack of evidence. In 2007, the Canadian government announced that a research and fact-finding program initiated in 2005 had found the base was safe.
A legislative commission in the State of Maine found in 2024 that the Canadian investigation was "incorrect, biased, and based upon, in some cases, incomplete data and poor study design—at times exacerbated by the rapid period in which these reports were required to be conducted and issued." On February 17, 2011, the Toronto Star revealed that Agent Orange had been employed to clear extensive plots of Crown land in Northern Ontario. The Toronto Star reported that "records from the 1950s, 1960s and 1970s show forestry workers, often students and junior rangers, spent weeks at a time as human markers holding red, helium-filled balloons on fishing lines while low-flying planes sprayed toxic herbicides including an infamous chemical mixture known as Agent Orange on the brush and the boys below." In response to the Toronto Star article, the Ontario provincial government launched a probe into the use of Agent Orange. Guam An analysis of chemicals present in the island's soil, together with resolutions passed by Guam's legislature, suggests that Agent Orange was among the herbicides routinely used on and around Andersen Air Force Base and Naval Air Station Agana. Despite the evidence, the Department of Defense continues to deny that Agent Orange was stored or used on Guam. Several Guam veterans have collected evidence to support their disability claims for direct exposure to dioxin-containing herbicides such as 2,4,5-T, seeking the same illness associations and disability coverage that have become standard for those harmed by the dioxin contaminant of the Agent Orange used in Vietnam. South Korea Agent Orange was used in South Korea in the late 1960s. In 1999, about 20,000 South Koreans filed two separate lawsuits against U.S. companies, seeking more than $5 billion in damages. After losing a decision in 2002, they filed an appeal. In January 2006, the South Korean Appeals Court ordered Dow Chemical and Monsanto to pay $62 million in compensation to about 6,800 people. The ruling acknowledged that "the defendants failed to ensure safety as the defoliants manufactured by the defendants had higher levels of dioxins than standard", and, quoting the U.S. National Academy of Sciences report, declared that there was a "causal relationship" between Agent Orange and a range of diseases, including several cancers. The judges failed to acknowledge "the relationship between the chemical and peripheral neuropathy, the disease most widespread among Agent Orange victims". In 2011, the local U.S. television station KPHO-TV in Phoenix, Arizona, alleged that in 1978 the United States Army had buried 250 55-gallon drums of Agent Orange at Camp Carroll, the U.S. Army base located in Gyeongsangbuk-do, South Korea. Currently, veterans who provide evidence meeting VA requirements for service in Vietnam and who can medically establish that anytime after this 'presumptive exposure' they developed any medical problems on the list of presumptive diseases may receive compensation from the VA. Certain veterans who served in South Korea and are able to prove they were assigned to certain specified units near the Korean Demilitarized Zone during a specific time frame are afforded a similar presumption.
New Zealand The use of Agent Orange has been controversial in New Zealand, because of the exposure of New Zealand troops in Vietnam and because a herbicide used in Agent Orange was produced at the Ivon Watkins-Dow chemical plant in Paritutu, New Plymouth, and has been alleged at various times to have been exported for use in the Vietnam War and to other users. What is established is that from 1962 until 1987, 2,4,5-T herbicide was manufactured at the Ivon Watkins-Dow plant for domestic use in New Zealand, where it was widely used as a weed killer in agriculture. This was the basis of a 2005 New Zealand media story that claimed the herbicide had been exported to U.S. military bases in South East Asia. However, the claim was not proven, a fact which the media did not subsequently report. There have been continuing claims, as yet unproven, that the suburb of Paritutu has also been polluted. However, the agriscience company Corteva (which split from DowDuPont in 2019) agreed to clean up the Paritutu site in September 2022. Philippines Herbicide persistence studies of Agents Orange and White were conducted in the Philippines. Johnston Atoll The U.S. Air Force operation to remove Herbicide Orange from Vietnam in 1972 was named Operation Pacer IVY, while the operation to destroy the Agent Orange stored at Johnston Atoll in 1977 was named Operation Pacer HO. Operation Pacer IVY collected Agent Orange in South Vietnam and removed it in 1972 aboard the ship MV Transpacific for storage on Johnston Atoll. The EPA reports that stocks of Herbicide Orange were stored at Johnston Island in the Pacific and at Gulfport, Mississippi. Research and studies were initiated to find a safe method to destroy the materials, and it was discovered that they could be incinerated safely under special conditions of temperature and dwell time. However, these herbicides were expensive, and the Air Force wanted to resell its surplus instead of dumping it at sea. Among the many methods tested was the possibility of salvaging the herbicides by reprocessing them and filtering out the TCDD contaminant with carbonized (charcoaled) coconut fibers. This concept was tested in 1976, and a pilot plant was constructed at Gulfport. From July to September 1977, during Operation Pacer HO, the entire stock of Agent Orange from both Herbicide Orange storage sites at Gulfport and Johnston Atoll was incinerated in four separate burns in the vicinity of Johnston Island aboard the Dutch-owned waste incineration ship . As of 2004, some records of the storage and disposition of Agent Orange at Johnston Atoll have been associated with the historical records of Operation Red Hat. Okinawa, Japan There have been dozens of reports in the press about the use and/or storage of military-formulated herbicides on Okinawa, based upon statements by former U.S. service members who had been stationed on the island, photographs, government records, and unearthed storage barrels. The U.S. Department of Defense has denied these allegations with statements by military officials and spokespersons, as well as a January 2013 report authored by Dr. Alvin Young that was released in April 2013. In particular, the 2013 report rebuts articles written by journalist Jon Mitchell as well as a statement from "An Ecological Assessment of Johnston Atoll", a 2003 publication produced by the United States Army Chemical Materials Agency that states, "in 1972, the U.S.
Air Force also brought about 25,000 200L drums of the chemical, Herbicide Orange (HO) to Johnston Island that originated from Vietnam and was stored on Okinawa." The 2013 report states: "The authors of the [2003] report were not DoD employees, nor were they likely familiar with the issues surrounding Herbicide Orange or its actual history of transport to the Island." It also detailed the transport phases and routes of Agent Orange from Vietnam to Johnston Atoll, none of which included Okinawa. Further official confirmation of restricted (dioxin-containing) herbicide storage on Okinawa appeared in a 1971 Fort Detrick report titled "Historical, Logistical, Political and Technical Aspects of the Herbicide/Defoliant Program", which mentions that the environmental statement should consider "Herbicide stockpiles elsewhere in PACOM (Pacific Command) U.S. Government restricted materials Thailand and Okinawa (Kadena AFB)." The 2013 DoD report says that the environmental statement urged by the 1971 report was published in 1974 as "The Department of Air Force Final Environmental Statement", and that the latter did not find that Agent Orange was held in either Thailand or Okinawa. Thailand Agent Orange was tested by the United States in Thailand during the Vietnam War. In 1999, buried drums were uncovered and confirmed to be Agent Orange. Workers who uncovered the drums fell ill while upgrading the airport near Hua Hin District, 100 km south of Bangkok. Vietnam-era veterans whose service involved duty on or near the perimeters of military bases in Thailand anytime between February 28, 1961, and May 7, 1975, may have been exposed to herbicides and may qualify for VA benefits. A declassified Department of Defense report written in 1973 suggests that there was significant use of herbicides on the fenced-in perimeters of military bases in Thailand to remove foliage that provided cover for enemy forces. In 2013, the VA determined that herbicides used on the Thailand base perimeters may have been tactical and procured from Vietnam, or a strong, commercial type resembling tactical herbicides. United States The University of Hawaiʻi has acknowledged extensive testing of Agent Orange and Agent Orange mixtures in Hawaii on behalf of the United States Department of Defense, on Hawaii Island in 1966 and on Kaua'i Island in 1967–1968; testing and storage in other U.S. locations have been documented by the United States Department of Veterans Affairs. In 1971, the C-123 aircraft used for spraying Agent Orange were returned to the United States and assigned to various East Coast USAF Reserve squadrons, and then employed in traditional airlift missions between 1972 and 1982. In 1994, testing by the Air Force identified some former spray aircraft as "heavily contaminated" with dioxin residue. Inquiries by aircrew veterans in 2011 brought a decision by the U.S. Department of Veterans Affairs opining that not enough dioxin residue remained to injure these post-Vietnam War veterans. On 26 January 2012, the U.S. Centers for Disease Control and Prevention's Agency for Toxic Substances and Disease Registry challenged this with its finding that former spray aircraft were indeed contaminated and the aircrews exposed to harmful levels of dioxin. In response to veterans' concerns, the VA in February 2014 referred the C-123 issue to the Institute of Medicine for a special study, with results released on January 9, 2015. In 1978, the EPA suspended spraying of Agent Orange in national forests.
Agent Orange was sprayed on thousands of acres of brush in the Tennessee Valley for 15 years before scientists discovered the herbicide was dangerous. Monroe County, Tennessee, is one of the locations known to have been sprayed, according to the Tennessee Valley Authority. Forty-four remote acres were sprayed with Agent Orange along power lines throughout the National Forest. In 1983, New Jersey declared a state of emergency at a production site on the Passaic River. The dioxin pollution in the Passaic River dates back to the Vietnam era, when Diamond Alkali manufactured Agent Orange at a factory along the river. The tidal river carried dioxin upstream and down, contaminating a stretch of riverbed in one of New Jersey's most populous areas. A December 2006 Department of Defense report listed Agent Orange testing, storage, and disposal sites at 32 locations throughout the United States, Canada, Thailand, Puerto Rico, Korea, and the Pacific Ocean. The Veterans Administration has also acknowledged that Agent Orange was used domestically by U.S. forces at test sites throughout the United States. Eglin Air Force Base in Florida was one of the primary testing sites throughout the 1960s. Cleanup programs In February 2012, Monsanto agreed to settle a case covering dioxin contamination around a plant in Nitro, West Virginia, that had manufactured Agent Orange. Monsanto agreed to pay up to $9 million for cleanup of affected homes, $84 million for medical monitoring of people affected, and the community's legal fees. On 9 August 2012, the United States and Vietnam began a cooperative cleanup of the toxic chemical on part of Da Nang International Airport, marking the first time the U.S. government had been involved in cleaning up Agent Orange in Vietnam. Danang was the primary storage site of the chemical. Two other cleanup sites the United States and Vietnam are considering are Biên Hòa, a dioxin hotspot in the southern province of Đồng Nai, and Phù Cát airport in the central province of Bình Định, according to U.S. Ambassador to Vietnam David Shear. According to the Vietnamese newspaper Nhân Dân, the U.S. government provided $41 million to the project. As of 2017, some of the contaminated soil had been cleaned. The Naval Construction Battalion Center at Gulfport, Mississippi, was the largest storage site in the United States for Agent Orange. The site was still being cleaned up in 2013. In 2016, the EPA laid out its plan for cleaning up a stretch of the Passaic River in New Jersey, with an estimated cost of $1.4 billion. The contaminants reached Newark Bay and other waterways, according to the EPA, which has designated the area a Superfund site. Since destruction of the dioxin requires very high temperatures, the destruction process is energy-intensive.
Technology
Pest and disease control
null
2551
https://en.wikipedia.org/wiki/Astronomical%20year%20numbering
Astronomical year numbering
Astronomical year numbering is based on AD/CE year numbering, but follows normal decimal integer numbering more strictly. Thus, it has a year 0; the years before that are designated with negative numbers and the years after that are designated with positive numbers. Astronomers use the Julian calendar for years before 1582, including the year 0, and the Gregorian calendar for years after 1582, as exemplified by Jacques Cassini (1740), Simon Newcomb (1898) and Fred Espenak (2007). The prefix AD and the suffixes CE, BC or BCE (Common Era, Before Christ or Before Common Era) are dropped. The year 1 BC/BCE is numbered 0, the year 2 BC is numbered −1, and in general the year n BC/BCE is numbered "−(n − 1)" (a negative number equal to 1 − n). The numbers of AD/CE years are not changed and are written with either no sign or a positive sign; thus in general n AD/CE is simply n or +n. For normal calculation a number zero is often needed, here most notably when calculating the number of years in a period that spans the epoch; the end years need only be subtracted from each other. The system is so named due to its use in astronomy. Few other disciplines outside history deal with the time before year 1, some exceptions being dendrochronology, archaeology and geology, the latter two of which use 'years before the present'. Although the absolute numerical values of astronomical and historical years only differ by one before year 1, this difference is critical when calculating astronomical events like eclipses or planetary conjunctions to determine when historical events which mention them occurred. Usage of the year zero In his Rudolphine Tables (1627), Johannes Kepler used a prototype of year zero which he labeled Christi (Christ's) between years labeled Ante Christum (Before Christ) and Post Christum (After Christ) on the mean motion tables for the Sun, Moon, Saturn, Jupiter, Mars, Venus and Mercury. In 1702, the French astronomer Philippe de la Hire used a year he labeled at the end of years labeled ante Christum (BC), and immediately before years labeled post Christum (AD) on the mean motion pages in his Tabulæ Astronomicæ, thus adding the designation 0 to Kepler's Christi. Finally, in 1740 the French astronomer Jacques Cassini , who is traditionally credited with the invention of year zero, completed the transition in his Tables astronomiques, simply labeling this year 0, which he placed at the end of Julian years labeled avant Jesus-Christ (before Jesus Christ or BC), and immediately before Julian years labeled après Jesus-Christ (after Jesus Christ or AD). Cassini gave the following reasons for using a year 0: Fred Espenak of NASA lists 50 phases of the Moon within year 0, showing that it is a full year, not an instant in time. Jean Meeus gives the following explanation: Signed years without the year zero Although he used the usual French terms "avant J.-C." (before Jesus Christ) and "après J.-C." (after Jesus Christ) to label years elsewhere in his book, the Byzantine historian Venance Grumel (1890–1967) used negative years (identified by a minus sign, −) to label BC years and unsigned positive years to label AD years in a table. He may have done so to save space and he put no year 0 between them. Version 1.0 of the XML Schema language, often used to describe data interchanged between computers in XML, includes built-in primitive datatypes date and dateTime. 
Although these are defined in terms of ISO 8601 which uses the proleptic Gregorian calendar and therefore should include a year 0, the XML Schema specification states that there is no year zero. Version 1.1 of the defining recommendation realigned the specification with ISO 8601 by including a year zero, despite the problems arising from the lack of backward compatibility.
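The arithmetic behind astronomical year numbering is simple enough to show in a few lines of code. The sketch below is a minimal illustration, not part of the source: it converts between labelled historical years and astronomical year numbers using the rule that n BC/BCE corresponds to −(n − 1) and n AD/CE corresponds to +n. The function names are illustrative only.

```python
def historical_to_astronomical(year: int, era: str) -> int:
    """Convert a historical year labelled 'BC'/'BCE' or 'AD'/'CE' to an astronomical year number."""
    if year < 1:
        raise ValueError("historical year numbering starts at 1")
    if era.upper() in ("BC", "BCE"):
        return -(year - 1)   # e.g. 1 BC -> 0, 2 BC -> -1, 46 BC -> -45
    return year              # AD/CE years are unchanged

def astronomical_to_historical(year: int) -> str:
    """Convert an astronomical year number back to a labelled historical year."""
    if year <= 0:
        return f"{1 - year} BC"   # e.g. 0 -> '1 BC', -45 -> '46 BC'
    return f"AD {year}"

# The year-zero convention makes interval arithmetic a plain subtraction:
# the span from 46 BC to AD 10 is 10 - (-45) = 55 years.
assert historical_to_astronomical(46, "BC") == -45
assert astronomical_to_historical(0) == "1 BC"
```

Because the end years need only be subtracted from each other, the same code works across the epoch without the off-by-one correction that the BC/AD scheme would otherwise require.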
Technology
Timekeeping
null
2594
https://en.wikipedia.org/wiki/Ant
Ant
Ants are eusocial insects of the family Formicidae and, along with the related wasps and bees, belong to the order Hymenoptera. Ants evolved from vespoid wasp ancestors in the Cretaceous period. More than 13,800 of an estimated total of 22,000 species have been classified. They are easily identified by their geniculate (elbowed) antennae and the distinctive node-like structure that forms their slender waists. Ants form colonies that range in size from a few dozen individuals often living in small natural cavities to highly organised colonies that may occupy large territories with sizeable nest that consist of millions of individuals or into the hundreds of millions in super colonies. Typical colonies consist of various castes of sterile, wingless females, most of which are workers (ergates), as well as soldiers (dinergates) and other specialised groups. Nearly all ant colonies also have some fertile males called "drones" and one or more fertile females called "queens" (gynes). The colonies are described as superorganisms because the ants appear to operate as a unified entity, collectively working together to support the colony. Ants have colonised almost every landmass on Earth. The only places lacking indigenous ants are Antarctica and a few remote or inhospitable islands. Ants thrive in moist tropical ecosystems and may exceed the combined biomass of wild birds and mammals. Their success in so many environments has been attributed to their social organisation and their ability to modify habitats, tap resources, and defend themselves. Their long co-evolution with other species has led to mimetic, commensal, parasitic, and mutualistic relationships. Ant societies have division of labour, communication between individuals, and an ability to solve complex problems. These parallels with human societies have long been an inspiration and subject of study. Many human cultures make use of ants in cuisine, medication, and rites. Some species are valued in their role as biological pest control agents. Their ability to exploit resources may bring ants into conflict with humans, however, as they can damage crops and invade buildings. Some species, such as the red imported fire ant (Solenopsis invicta) of South America, are regarded as invasive species in other parts of the world, establishing themselves in areas where they have been introduced accidentally. Etymology The word ant and the archaic word emmet are derived from , of Middle English, which come from of Old English; these are all related to Low Saxon , and varieties (Old Saxon ) and to German (Old High German ). All of these words come from West Germanic *, and the original meaning of the word was "the biter" (from Proto-Germanic , "off, away" + "cut"). The family name Formicidae is derived from the Latin ("ant") from which the words in other Romance languages, such as the Portuguese , Italian , Spanish , Romanian , and French are derived. It has been hypothesised that a Proto-Indo-European word *morwi- was the root for Sanskrit vamrah, Greek μύρμηξ mýrmēx, Old Church Slavonic mraviji, Old Irish moirb, Old Norse maurr, Dutch mier, Swedish myra, Danish myre, Middle Dutch miere, and Crimean Gothic miera. Taxonomy and evolution The family Formicidae belongs to the order Hymenoptera, which also includes sawflies, bees, and wasps. Ants evolved from a lineage within the stinging wasps, and a 2013 study suggests that they are a sister group of the Apoidea. However, since Apoidea is a superfamily, ants must be upgraded to the same rank. 
A more detailed basic taxonomy was proposed in 2020. Three species of the extinct mid-Cretaceous genera Camelomecia and Camelosphecia were placed outside of the Formicidae, in a separate clade within the general superfamily Formicoidea, which, together with Apoidea, forms the higher-ranking group Formicapoidina. Fernández et al. (2021) suggest that the common ancestors of ants and apoids within the Formicapoidina probably existed as early as in the end of the Jurassic period, before divergence in the Cretaceous. In 1966, E. O. Wilson and his colleagues identified the fossil remains of an ant (Sphecomyrma) that lived in the Cretaceous period. The specimen, trapped in amber dating back to around 92 million years ago, has features found in some wasps, but not found in modern ants. The oldest fossils of ants date to the mid-Cretaceous, around 100 million years ago, which belong to extinct stem-groups such as the Haidomyrmecinae, Sphecomyrminae and Zigrasimeciinae, with modern ant subfamilies appearing towards the end of the Cretaceous around 80–70 million years ago. Ants diversified extensively during the Angiosperm Terrestrial Revolution and assumed ecological dominance around 60 million years ago. Some groups, such as the Leptanillinae and Martialinae, are suggested to have diversified from early primitive ants that were likely to have been predators underneath the surface of the soil. During the Cretaceous period, a few species of primitive ants ranged widely on the Laurasian supercontinent (the Northern Hemisphere). Their representation in the fossil record is poor, in comparison to the populations of other insects, representing only about 1% of fossil evidence of insects in the era. Ants became dominant after adaptive radiation at the beginning of the Paleogene period. By the Oligocene and Miocene, ants had come to represent 20–40% of all insects found in major fossil deposits. Of the species that lived in the Eocene epoch, around one in 10 genera survive to the present. Genera surviving today comprise 56% of the genera in Baltic amber fossils (early Oligocene), and 92% of the genera in Dominican amber fossils (apparently early Miocene). Termites live in colonies and are sometimes called "white ants", but termites are only distantly related to ants. They are the sub-order Isoptera, and together with cockroaches, they form the order Blattodea. Blattodeans are related to mantids, crickets, and other winged insects that do not undergo complete metamorphosis. Like ants, termites are eusocial, with sterile workers, but they differ greatly in the genetics of reproduction. The similarity of their social structure to that of ants is attributed to convergent evolution. Velvet ants look like large ants, but are wingless female wasps. Distribution and diversity Ants have a cosmopolitan distribution. They are found on all continents except Antarctica, and only a few large islands, such as Greenland, Iceland, parts of Polynesia and the Hawaiian Islands lack native ant species. Ants occupy a wide range of ecological niches and exploit many different food resources as direct or indirect herbivores, predators and scavengers. Most ant species are omnivorous generalists, but a few are specialist feeders. There is considerable variation in ant abundance across habitats, peaking in the moist tropics to nearly six times that found in less suitable habitats. Their ecological dominance has been examined primarily using estimates of their biomass: myrmecologist E. O. 
Wilson had estimated in 2009 that at any one time the total number of ants was between one and ten quadrillion (short scale) (i.e., between 10^15 and 10^16), and using this estimate he had suggested that the total biomass of all the ants in the world was approximately equal to the total biomass of the entire human race. More careful estimates made in 2022, which take into account regional variations, put the global ant contribution at 12 megatons of dry carbon, which is about 20% of the total human contribution, but greater than that of the wild birds and mammals combined. This study also puts a conservative estimate of the number of ants at about 20 × 10^15 (20 quadrillion). Ants range in size from , the largest species being the fossil Titanomyrma giganteum, the queen of which was long with a wingspan of . Ants vary in colour; most ants are yellow to red or brown to black, but a few species are green and some tropical species have a metallic lustre. More than 13,800 species are currently known (with upper estimates of the potential existence of about 22,000; see the article List of ant genera), with the greatest diversity in the tropics. Taxonomic studies continue to resolve the classification and systematics of ants. Online databases of ant species, including AntWeb and the Hymenoptera Name Server, help to keep track of the known and newly described species. The relative ease with which ants may be sampled and studied in ecosystems has made them useful as indicator species in biodiversity studies. Morphology Ants are distinct in their morphology from other insects in having geniculate (elbowed) antennae, metapleural glands, and a strong constriction of their second abdominal segment into a node-like petiole. The head, mesosoma, and metasoma are the three distinct body segments (formally tagmata). The petiole forms a narrow waist between their mesosoma (thorax plus the first abdominal segment, which is fused to it) and gaster (abdomen less the abdominal segments in the petiole). The petiole may be formed by one or two nodes (the second alone, or the second and third abdominal segments). Tergosternal fusion, when the tergite and sternite of a segment fuse together, can occur partly or fully on the second, third and fourth abdominal segments and is used in identification. Fourth abdominal tergosternal fusion was formerly used as a character that defined the poneromorph subfamilies, Ponerinae and relatives within their clade, but this is no longer considered a synapomorphic character. Like other arthropods, ants have an exoskeleton, an external covering that provides a protective casing around the body and a point of attachment for muscles, in contrast to the internal skeletons of humans and other vertebrates. Insects do not have lungs; oxygen and other gases, such as carbon dioxide, pass through their exoskeleton via tiny valves called spiracles. Insects also lack closed blood vessels; instead, they have a long, thin, perforated tube along the top of the body (called the "dorsal aorta") that functions like a heart, and pumps haemolymph toward the head, thus driving the circulation of the internal fluids. The nervous system consists of a ventral nerve cord that runs the length of the body, with several ganglia and branches along the way reaching into the extremities of the appendages. Head An ant's head contains many sensory organs. Like most insects, ants have compound eyes made from numerous tiny lenses attached together.
Ant eyes are good for acute movement detection, but do not offer a high resolution image. They also have three small ocelli (simple eyes) on the top of the head that detect light levels and polarization. Compared to vertebrates, ants tend to have blurrier eyesight, particularly in smaller species, and a few subterranean taxa are completely blind. However, some ants, such as Australia's bulldog ant, have excellent vision and are capable of discriminating the distance and size of objects moving nearly a meter away. Based on experiments conducted to test their ability to differentiate between selected wavelengths of light, some ant species such as Camponotus blandus, Solenopsis invicta, and Formica cunicularia are thought to possess a degree of colour vision. Two antennae ("feelers") are attached to the head; these organs detect chemicals, air currents, and vibrations; they also are used to transmit and receive signals through touch. The head has two strong jaws, the mandibles, used to carry food, manipulate objects, construct nests, and for defence. In some species, a small pocket (infrabuccal chamber) inside the mouth stores food, so it may be passed to other ants or their larvae. Mesosoma Both the legs and wings of the ant are attached to the mesosoma ("thorax"). The legs terminate in a hooked claw which allows them to hook on and climb surfaces. Only reproductive ants (queens and males) have wings. Queens shed their wings after the nuptial flight, leaving visible stubs, a distinguishing feature of queens. In a few species, wingless queens (ergatoids) and males occur. Metasoma The metasoma (the "abdomen") of the ant houses important internal organs, including those of the reproductive, respiratory (tracheae), and excretory systems. Workers of many species have their egg-laying structures modified into stings that are used for subduing prey and defending their nests. Polymorphism In the colonies of a few ant species, there are physical castes—workers in distinct size-classes, called minor (micrergates), median, and major ergates (macrergates). Often, the larger ants have disproportionately larger heads, and correspondingly stronger mandibles. Although formally known as dinergates, such individuals are sometimes called "soldier" ants because their stronger mandibles make them more effective in fighting, although they still are workers and their "duties" typically do not vary greatly from the minor or median workers. In a few species, the median workers are absent, creating a sharp divide between the minors and majors. Weaver ants, for example, have a distinct bimodal size distribution. Some other species show continuous variation in the size of workers. The smallest and largest workers in Carebara diversa show nearly a 500-fold difference in their dry weights. Workers cannot mate; however, because of the haplodiploid sex-determination system in ants, workers of a number of species can lay unfertilised eggs that become fully fertile, haploid males. The role of workers may change with their age and in some species, such as honeypot ants, young workers are fed until their gasters are distended, and act as living food storage vessels. These food storage workers are called repletes. For instance, these replete workers develop in the North American honeypot ant Myrmecocystus mexicanus. Usually the largest workers in the colony develop into repletes; and, if repletes are removed from the colony, other workers become repletes, demonstrating the flexibility of this particular polymorphism. 
This polymorphism in morphology and behaviour of workers initially was thought to be determined by environmental factors such as nutrition and hormones that led to different developmental paths; however, genetic differences between worker castes have been noted in Acromyrmex sp. These polymorphisms are caused by relatively small genetic changes; differences in a single gene of Solenopsis invicta can decide whether the colony will have single or multiple queens. The Australian jack jumper ant (Myrmecia pilosula) has only a single pair of chromosomes (with the males having just one chromosome as they are haploid), the lowest number known for any animal, making it an interesting subject for studies in the genetics and developmental biology of social insects. Genome size Genome size is a fundamental characteristic of an organism. Ants have been found to have tiny genomes, with the evolution of genome size suggested to occur through loss and accumulation of non-coding regions, mainly transposable elements, and occasionally by whole genome duplication. This may be related to colonisation processes, but further studies are needed to verify this. Life cycle The life of an ant starts from an egg; if the egg is fertilised, the progeny will be female diploid, if not, it will be male haploid. Ants develop by complete metamorphosis with the larva stages passing through a pupal stage before emerging as an adult. The larva is largely immobile and is fed and cared for by workers. Food is given to the larvae by trophallaxis, a process in which an ant regurgitates liquid food held in its crop. This is also how adults share food, stored in the "social stomach". Larvae, especially in the later stages, may also be provided solid food, such as trophic eggs, pieces of prey, and seeds brought by workers. The larvae grow through a series of four or five moults and enter the pupal stage. The pupa has the appendages free and not fused to the body as in a butterfly pupa. The differentiation into queens and workers (which are both female), and different castes of workers, is influenced in some species by the nutrition the larvae obtain. Genetic influences and the control of gene expression by the developmental environment are complex and the determination of caste continues to be a subject of research. Winged male ants, called drones (termed "aner" in old literature), emerge from pupae along with the usually winged breeding females. Some species, such as army ants, have wingless queens. Larvae and pupae need to be kept at fairly constant temperatures to ensure proper development, and so often are moved around among the various brood chambers within the colony. A new ergate spends the first few days of its adult life caring for the queen and young. She then graduates to digging and other nest work, and later to defending the nest and foraging. These changes are sometimes fairly sudden, and define what are called temporal castes. Such age-based task-specialization or polyethism has been suggested as having evolved due to the high casualties involved in foraging and defence, making it an acceptable risk only for ants who are older and likely to die sooner from natural causes. In the Brazilian ant Forelius pusillus, the nest entrance is closed from the outside to protect the colony from predatory ant species at sunset each day. About one to eight workers seal the nest entrance from the outside and they have no chance of returning to the nest and are in effect sacrificed. 
Whether these seemingly suicidal workers are older workers has not been determined. Ant colonies can be long-lived. The queens can live for up to 30 years, and workers live from 1 to 3 years. Males, however, are more transitory, being quite short-lived and surviving for only a few weeks. Ant queens are estimated to live 100 times as long as solitary insects of a similar size. Ants are active all year long in the tropics; however, in cooler regions, they survive the winter in a state of dormancy known as hibernation. The forms of inactivity are varied and some temperate species have larvae going into the inactive state (diapause), while in others, the adults alone pass the winter in a state of reduced activity. Reproduction A wide range of reproductive strategies have been noted in ant species. Females of many species are known to be capable of reproducing asexually through thelytokous parthenogenesis. Secretions from the male accessory glands in some species can plug the female genital opening and prevent females from re-mating. Most ant species have a system in which only the queen and breeding females have the ability to mate. Contrary to popular belief, some ant nests have multiple queens, while others may exist without queens. Workers with the ability to reproduce are called "gamergates" and colonies that lack queens are then called gamergate colonies; colonies with queens are said to be queen-right. Drones can also mate with existing queens by entering a foreign colony, such as in army ants. When the drone is initially attacked by the workers, it releases a mating pheromone. If recognized as a mate, it will be carried to the queen to mate. Males may also patrol the nest and fight others by grabbing them with their mandibles, piercing their exoskeleton and then marking them with a pheromone. The marked male is interpreted as an invader by worker ants and is killed. Most ants are univoltine, producing a new generation each year. During the species-specific breeding period, winged females and winged males, known to entomologists as alates, leave the colony in what is called a nuptial flight. The nuptial flight usually takes place in the late spring or early summer when the weather is hot and humid. Heat makes flying easier and freshly fallen rain makes the ground softer for mated queens to dig nests. Males typically take flight before the females. Males then use visual cues to find a common mating ground, for example, a landmark such as a pine tree to which other males in the area converge. Males secrete a mating pheromone that females follow. Males will mount females in the air, but the actual mating process usually takes place on the ground. Females of some species mate with just one male but in others they may mate with as many as ten or more different males, storing the sperm in their spermathecae. The genus Cardiocondyla have species with both winged and wingless males, where the latter will only mate with females living in the same nest. Some species in the genus have lost winged males completely, and only produce wingless males. In C. elegans, workers may transport newly emerged queens to other conspecific nests where the wingless males from unrelated colonies can mate with them, a behavioural adaptation that may reduce the chances of inbreeding. Mated females then seek a suitable place to begin a colony. There, they break off their wings using their tibial spurs and begin to lay and care for eggs. 
The females can selectively fertilise future eggs with the sperm stored to produce diploid workers or lay unfertilized haploid eggs to produce drones. The first workers to hatch, known as nanitics, are weaker and smaller than later workers, but they begin to serve the colony immediately. They enlarge the nest, forage for food, and care for the other eggs. Species that have multiple queens may have a queen leaving the nest along with some workers to found a colony at a new site, a process akin to swarming in honeybees. Nests, colonies, and supercolonies The typical ant species has a colony occupying a single nest, housing one or more queens, where the brood is raised. There are, however, more than 150 species of ants in 49 genera that are known to have colonies consisting of multiple spatially separated nests. These polydomous (as opposed to monodomous) colonies have food and workers moving between the nests. Colony membership is recognized by worker ants, which determine whether another individual belongs to their own colony or not. A signature cocktail of body surface chemicals (also known as cuticular hydrocarbons or CHCs) forms the so-called colony odor which other members can recognize. Some ant species appear to be less discriminating, and in the Argentine ant Linepithema humile, workers carried from a colony anywhere in the southern US and Mexico are acceptable within other colonies in the same region. Similarly, workers from colonies that have become established in Europe are accepted by any other colonies within Europe but not by the colonies in the Americas. The interpretation of these observations has been debated; some have termed these large populations supercolonies, while others have described the populations as unicolonial. Behaviour and ecology Communication Ants communicate with each other using pheromones, sounds, and touch. Since most ants live on the ground, they use the soil surface to leave pheromone trails that may be followed by other ants. In species that forage in groups, a forager that finds food marks a trail on the way back to the colony; this trail is followed by other ants, which then reinforce the trail when they head back to the colony with food. When the food source is exhausted, no new trails are marked by returning ants and the scent slowly dissipates. This behaviour helps ants deal with changes in their environment. For instance, when an established path to a food source is blocked by an obstacle, the foragers leave the path to explore new routes. If an ant is successful, it leaves a new trail marking the shortest route on its return. Successful trails are followed by more ants, reinforcing better routes and gradually identifying the best path. Ants use pheromones for more than just making trails. A crushed ant emits an alarm pheromone that sends nearby ants into an attack frenzy and attracts more ants from farther away. Several ant species even use "propaganda pheromones" to confuse enemy ants and make them fight among themselves. Pheromones are produced by a wide range of structures including Dufour's glands, poison glands and glands on the hindgut, pygidium, rectum, sternum, and hind tibia. Pheromones also are exchanged, mixed with food, and passed by trophallaxis, transferring information within the colony. This allows other ants to detect what task group (e.g., foraging or nest maintenance) other colony members belong to.
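The trail-laying and reinforcement mechanism described above is the basis of ant colony optimization algorithms. The following minimal sketch (Python; the route lengths, evaporation rate, deposit amount, and number of foragers are invented illustrative values, not measurements) shows how probabilistic trail-following combined with evaporation lets a colony converge on the shorter of two routes:

import random

LENGTHS = [1.0, 2.0]      # a short and a long route to the same food source (illustrative)
pheromone = [1.0, 1.0]    # both trails start equally attractive
EVAPORATION = 0.1         # fraction of scent that dissipates per round
DEPOSIT = 1.0             # scent laid per returning ant, spread over the route length

for _ in range(200):      # 200 rounds of foraging trips
    # each of ten foragers picks a route with probability proportional to its trail strength
    choices = [0 if random.uniform(0, sum(pheromone)) < pheromone[0] else 1
               for _ in range(10)]
    # unreinforced scent slowly dissipates
    pheromone = [p * (1 - EVAPORATION) for p in pheromone]
    # returning ants reinforce the route they used; per round, a shorter route
    # is reinforced more strongly (approximated here as deposit divided by length)
    for route in choices:
        pheromone[route] += DEPOSIT / LENGTHS[route]

print(pheromone)   # the shorter route ends up carrying nearly all of the pheromone

Run repeatedly, the first trail to gain a small advantage attracts more foragers and is reinforced further, which is the positive-feedback loop the text describes.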
In ant species with queen castes, when the dominant queen stops producing a specific pheromone, workers begin to raise new queens in the colony. Some ants produce sounds by stridulation, using the gaster segments and their mandibles. Sounds may be used to communicate with colony members or with other species. Defence Ants attack and defend themselves by biting and, in many species, by stinging, often injecting or spraying chemicals. Bullet ants (Paraponera), located in Central and South America, are considered to have the most painful sting of any insect, although it is usually not fatal to humans. This sting is given the highest rating on the Schmidt sting pain index. The sting of jack jumper ants can be lethal for humans, and an antivenom has been developed for it. Fire ants, Solenopsis spp., are unique in having a venom sac containing piperidine alkaloids. Their stings are painful and can be dangerous to hypersensitive people. Formicine ants secrete a poison from their glands, made mainly of formic acid. Trap-jaw ants of the genus Odontomachus are equipped with mandibles called trap-jaws, which snap shut faster than any other predatory appendages within the animal kingdom. One study of Odontomachus bauri recorded peak speeds of between , with the jaws closing within 130 microseconds on average. The ants were also observed to use their jaws as a catapult to eject intruders or fling themselves backward to escape a threat. Before striking, the ant opens its mandibles extremely widely and locks them in this position by an internal mechanism. Energy is stored in a thick band of muscle and explosively released when triggered by the stimulation of sensory organs resembling hairs on the inside of the mandibles. The mandibles also permit slow and fine movements for other tasks. Trap-jaws also are seen in other ponerines such as Anochetus, as well as some genera in the tribe Attini, such as Daceton, Orectognathus, and Strumigenys, which are viewed as examples of convergent evolution. A Malaysian species of ant in the Camponotus cylindricus group has enlarged mandibular glands that extend into its gaster. If combat takes a turn for the worse, a worker may perform a final act of suicidal altruism by rupturing the membrane of its gaster, causing the content of its mandibular glands to burst from the anterior region of its head, spraying a poisonous, corrosive secretion containing acetophenones and other chemicals that immobilise small insect attackers. The worker subsequently dies. In addition to defence against predators, ants need to protect their colonies from pathogens. Secretions from the metapleural gland, which is unique to ants, include a complex range of chemicals, several of which have antibiotic properties. Some worker ants maintain the hygiene of the colony, and their activities include undertaking or necrophoresis, the disposal of dead nest-mates. Oleic acid has been identified as the compound released from dead ants that triggers necrophoric behaviour in Atta mexicana, while workers of Linepithema humile react to the absence of characteristic chemicals (dolichodial and iridomyrmecin) present on the cuticle of their living nestmates to trigger similar behaviour. In Megaponera analis, injured ants are treated by nestmates with secretions from their metapleural glands, which protect them from infection. Camponotus ants do not have a metapleural gland, and workers of Camponotus maculatus and C. floridanus have been found to amputate the affected legs of nestmates when the femur is injured.
A femur injury carries a greater risk of infection unlike a tibia injury. Nests may be protected from physical threats such as flooding and overheating by elaborate nest architecture. Workers of Cataulacus muticus, an arboreal species that lives in plant hollows, respond to flooding by drinking water inside the nest, and excreting it outside. Camponotus anderseni, which nests in the cavities of wood in mangrove habitats, deals with submergence under water by switching to anaerobic respiration. Learning Many animals can learn behaviours by imitation, but ants may be the only group apart from mammals where interactive teaching has been observed. A knowledgeable forager of Temnothorax albipennis can lead a naïve nest-mate to newly discovered food by the process of tandem running. The follower obtains knowledge through its leading tutor. The leader is acutely sensitive to the progress of the follower and slows down when the follower lags and speeds up when the follower gets too close. Controlled experiments with colonies of Cerapachys biroi suggest that an individual may choose nest roles based on her previous experience. An entire generation of identical workers was divided into two groups whose outcome in food foraging was controlled. One group was continually rewarded with prey, while it was made certain that the other failed. As a result, members of the successful group intensified their foraging attempts while the unsuccessful group ventured out fewer and fewer times. A month later, the successful foragers continued in their role while the others had moved to specialise in brood care. Nest construction Complex nests are built by many ant species, but other species are nomadic and do not build permanent structures. Ants may form subterranean nests or build them on trees. These nests may be found in the ground, under stones or logs, inside logs, hollow stems, or even acorns. The materials used for construction include soil and plant matter, and ants carefully select their nest sites; Temnothorax albipennis will avoid sites with dead ants, as these may indicate the presence of pests or disease. They are quick to abandon established nests at the first sign of threats. The army ants of South America, such as the Eciton burchellii species, and the driver ants of Africa do not build permanent nests, but instead, alternate between nomadism and stages where the workers form a temporary nest (bivouac) from their own bodies, by holding each other together. Weaver ant (Oecophylla spp.) workers build nests in trees by attaching leaves together, first pulling them together with bridges of workers and then inducing their larvae to produce silk as they are moved along the leaf edges. Similar forms of nest construction are seen in some species of Polyrhachis. Formica polyctena, among other ant species, constructs nests that maintain a relatively constant interior temperature that aids in the development of larvae. The ants maintain the nest temperature by choosing the location, nest materials, controlling ventilation and maintaining the heat from solar radiation, worker activity and metabolism, and in some moist nests, microbial activity in the nest materials. Some ant species, such as those that use natural cavities, can be opportunistic and make use of the controlled micro-climate provided inside human dwellings and other artificial structures to house their colonies and nest structures. 
Cultivation of food Most ants are generalist predators, scavengers, and indirect herbivores, but a few have evolved specialised ways of obtaining nutrition. It is believed that many ant species that engage in indirect herbivory rely on specialized symbiosis with their gut microbes to upgrade the nutritional value of the food they collect and allow them to survive in nitrogen poor regions, such as rainforest canopies. Leafcutter ants (Atta and Acromyrmex) feed exclusively on a fungus that grows only within their colonies. They continually collect leaves which are taken to the colony, cut into tiny pieces and placed in fungal gardens. Ergates specialise in related tasks according to their sizes. The largest ants cut stalks, smaller workers chew the leaves and the smallest tend the fungus. Leafcutter ants are sensitive enough to recognise the reaction of the fungus to different plant material, apparently detecting chemical signals from the fungus. If a particular type of leaf is found to be toxic to the fungus, the colony will no longer collect it. The ants feed on structures produced by the fungi called gongylidia. Symbiotic bacteria on the exterior surface of the ants produce antibiotics that kill bacteria introduced into the nest that may harm the fungi. Navigation Foraging ants travel distances of up to from their nest and scent trails allow them to find their way back even in the dark. In hot and arid regions, day-foraging ants face death by desiccation, so the ability to find the shortest route back to the nest reduces that risk. Diurnal desert ants of the genus Cataglyphis such as the Sahara desert ant navigate by keeping track of direction as well as distance travelled. Distances travelled are measured using an internal pedometer that keeps count of the steps taken and also by evaluating the movement of objects in their visual field (optical flow). Directions are measured using the position of the sun. They integrate this information to find the shortest route back to their nest. Like all ants, they can also make use of visual landmarks when available as well as olfactory and tactile cues to navigate. Some species of ant are able to use the Earth's magnetic field for navigation. The compound eyes of ants have specialised cells that detect polarised light from the Sun, which is used to determine direction. These polarization detectors are sensitive in the ultraviolet region of the light spectrum. In some army ant species, a group of foragers who become separated from the main column may sometimes turn back on themselves and form a circular ant mill. The workers may then run around continuously until they die of exhaustion. Locomotion The female worker ants do not have wings and reproductive females lose their wings after their mating flights in order to begin their colonies. Therefore, unlike their wasp ancestors, most ants travel by walking. Some species are capable of leaping. For example, Jerdon's jumping ant (Harpegnathos saltator) is able to jump by synchronising the action of its mid and hind pairs of legs. There are several species of gliding ant including Cephalotes atratus; this may be a common trait among arboreal ants with small colonies. Ants with this ability are able to control their horizontal movement so as to catch tree trunks when they fall from atop the forest canopy. Other species of ants can form chains to bridge gaps over water, underground, or through spaces in vegetation. Some species also form floating rafts that help them survive floods. 
These rafts may also have a role in allowing ants to colonise islands. Polyrhachis sokolova, a species of ant found in Australian mangrove swamps, can swim and live in underwater nests. Since they lack gills, they go to trapped pockets of air in the submerged nests to breathe. Cooperation and competition Not all ants have the same kind of societies. The Australian bulldog ants are among the biggest and most basal of ants. Like virtually all ants, they are eusocial, but their social behaviour is poorly developed compared to other species. Each individual hunts alone, using her large eyes instead of chemical senses to find prey. Some species attack and take over neighbouring ant colonies. Extreme specialists among these slave-raiding ants, such as the Amazon ants, are incapable of feeding themselves and need captured workers to survive. Captured workers of enslaved Temnothorax species have evolved a counter-strategy, destroying just the female pupae of the slave-making Temnothorax americanus, but sparing the males (who do not take part in slave-raiding as adults). Ants identify kin and nestmates through their scent, which comes from hydrocarbon-laced secretions that coat their exoskeletons. If an ant is separated from its original colony, it will eventually lose the colony scent. Any ant that enters a colony without a matching scent will be attacked. Parasitic ant species enter the colonies of host ants and establish themselves as social parasites; species such as Strumigenys xenos are entirely parasitic and do not have workers, but instead, rely on the food gathered by their Strumigenys perplexa hosts. This form of parasitism is seen across many ant genera, but the parasitic ant is usually a species that is closely related to its host. A variety of methods are employed to enter the nest of the host ant. A parasitic queen may enter the host nest before the first brood has hatched, establishing herself prior to development of a colony scent. Other species use pheromones to confuse the host ants or to trick them into carrying the parasitic queen into the nest. Some simply fight their way into the nest. A conflict between the sexes of a species is seen in some species of ants with these reproducers apparently competing to produce offspring that are as closely related to them as possible. The most extreme form involves the production of clonal offspring. An extreme of sexual conflict is seen in Wasmannia auropunctata, where the queens produce diploid daughters by thelytokous parthenogenesis and males produce clones by a process whereby a diploid egg loses its maternal contribution to produce haploid males who are clones of the father. Relationships with other organisms Ants form symbiotic associations with a range of species, including other ant species, other insects, plants, and fungi. They also are preyed on by many animals and even certain fungi. Some arthropod species spend part of their lives within ant nests, either preying on ants, their larvae, and eggs, consuming the food stores of the ants, or avoiding predators. These inquilines may bear a close resemblance to ants. The nature of this ant mimicry (myrmecomorphy) varies, with some cases involving Batesian mimicry, where the mimic reduces the risk of predation. Others show Wasmannian mimicry, a form of mimicry seen only in inquilines. Aphids and other hemipteran insects secrete a sweet liquid called honeydew, when they feed on plant sap. The sugars in honeydew are a high-energy food source, which many ant species collect. 
In some cases, the aphids secrete the honeydew in response to ants tapping them with their antennae. The ants in turn keep predators away from the aphids and will move them from one feeding location to another. When migrating to a new area, many colonies will take the aphids with them, to ensure a continued supply of honeydew. Ants also tend mealybugs to harvest their honeydew. Mealybugs may become a serious pest of pineapples if ants are present to protect mealybugs from their natural enemies. Myrmecophilous (ant-loving) caterpillars of the butterfly family Lycaenidae (e.g., blues, coppers, or hairstreaks) are herded by the ants, led to feeding areas in the daytime, and brought inside the ants' nest at night. The caterpillars have a gland which secretes honeydew when the ants massage them. The chemicals in the secretions of Narathura japonica alter the behavior of attendant Pristomyrmex punctatus workers, making them less aggressive and stationary. The relationship, formerly characterized as "mutualistic", is now considered as possibly a case of the ants being parasitically manipulated by the caterpillars. Some caterpillars produce vibrations and sounds that are perceived by the ants. A similar adaptation can be seen in Grizzled skipper butterflies that emit vibrations by expanding their wings in order to communicate with ants, which are natural predators of these butterflies. Other caterpillars have evolved from ant-loving to ant-eating: these myrmecophagous caterpillars secrete a pheromone that makes the ants act as if the caterpillar is one of their own larvae. The caterpillar is then taken into the ant nest where it feeds on the ant larvae. A number of specialized bacteria have been found as endosymbionts in ant guts. Some of the dominant bacteria belong to the order Hyphomicrobiales whose members are known for being nitrogen-fixing symbionts in legumes but the species found in ant lack the ability to fix nitrogen. Fungus-growing ants that make up the tribe Attini, including leafcutter ants, cultivate certain species of fungus in the genera Leucoagaricus or Leucocoprinus of the family Agaricaceae. In this ant-fungus mutualism, both species depend on each other for survival. The ant Allomerus decemarticulatus has evolved a three-way association with the host plant, Hirtella physophora (Chrysobalanaceae), and a sticky fungus which is used to trap their insect prey. Lemon ants make devil's gardens by killing surrounding plants with their stings and leaving a pure patch of lemon ant trees, (Duroia hirsuta). This modification of the forest provides the ants with more nesting sites inside the stems of the Duroia trees. Although some ants obtain nectar from flowers, pollination by ants is somewhat rare, one example being of the pollination of the orchid Leporella fimbriata which induces male Myrmecia urens to pseudocopulate with the flowers, transferring pollen in the process. One theory that has been proposed for the rarity of pollination is that the secretions of the metapleural gland inactivate and reduce the viability of pollen. Some plants, mostly angiosperms but also some ferns, have special nectar exuding structures, extrafloral nectaries, that provide food for ants, which in turn protect the plant from more damaging herbivorous insects. Species such as the bullhorn acacia (Acacia cornigera) in Central America have hollow thorns that house colonies of stinging ants (Pseudomyrmex ferruginea) who defend the tree against insects, browsing mammals, and epiphytic vines. 
Isotopic labelling studies suggest that plants also obtain nitrogen from the ants. In return, the ants obtain food from protein- and lipid-rich Beltian bodies. In Fiji Philidris nagasau (Dolichoderinae) are known to selectively grow species of epiphytic Squamellaria (Rubiaceae) which produce large domatia inside which the ant colonies nest. The ants plant the seeds and the domatia of young seedling are immediately occupied and the ant faeces in them contribute to rapid growth. Similar dispersal associations are found with other dolichoderines in the region as well. Another example of this type of ectosymbiosis comes from the Macaranga tree, which has stems adapted to house colonies of Crematogaster ants. Many plant species have seeds that are adapted for dispersal by ants. Seed dispersal by ants or myrmecochory is widespread, and new estimates suggest that nearly 9% of all plant species may have such ant associations. Often, seed-dispersing ants perform directed dispersal, depositing the seeds in locations that increase the likelihood of seed survival to reproduction. Some plants in arid, fire-prone systems are particularly dependent on ants for their survival and dispersal as the seeds are transported to safety below the ground. Many ant-dispersed seeds have special external structures, elaiosomes, that are sought after by ants as food. Ants can substantially alter rate of decomposition and nutrient cycling in their nest. By myrmecochory and modification of soil conditions they substantially alter vegetation and nutrient cycling in surrounding ecosystem. A convergence, possibly a form of mimicry, is seen in the eggs of stick insects. They have an edible elaiosome-like structure and are taken into the ant nest where the young hatch. Most ants are predatory and some prey on and obtain food from other social insects including other ants. Some species specialise in preying on termites (Megaponera and Termitopone) while a few Cerapachyinae prey on other ants. Some termites, including Nasutitermes corniger, form associations with certain ant species to keep away predatory ant species. The tropical wasp Mischocyttarus drewseni coats the pedicel of its nest with an ant-repellent chemical. It is suggested that many tropical wasps may build their nests in trees and cover them to protect themselves from ants. Other wasps, such as A. multipicta, defend against ants by blasting them off the nest with bursts of wing buzzing. Stingless bees (Trigona and Melipona) use chemical defences against ants. Flies in the Old World genus Bengalia (Calliphoridae) prey on ants and are kleptoparasites, snatching prey or brood from the mandibles of adult ants. Wingless and legless females of the Malaysian phorid fly (Vestigipoda myrmolarvoidea) live in the nests of ants of the genus Aenictus and are cared for by the ants. Fungi in the genera Cordyceps and Ophiocordyceps infect ants. Ants react to their infection by climbing up plants and sinking their mandibles into plant tissue. The fungus kills the ants, grows on their remains, and produces a fruiting body. It appears that the fungus alters the behaviour of the ant to help disperse its spores in a microhabitat that best suits the fungus. Strepsipteran parasites also manipulate their ant host to climb grass stems, to help the parasite find mates. A nematode (Myrmeconema neotropicum) that infects canopy ants (Cephalotes atratus) causes the black-coloured gasters of workers to turn red. 
The parasite also alters the behaviour of the ant, causing them to carry their gasters high. The conspicuous red gasters are mistaken by birds for ripe fruits, such as Hyeronima alchorneoides, and eaten. The droppings of the bird are collected by other ants and fed to their young, leading to further spread of the nematode. A study of Temnothorax nylanderi colonies in Germany found that workers parasitized by the tapeworm Anomotaenia brevis (ants are intermediate hosts, the definitive hosts are woodpeckers) lived much longer than unparasitized workers and had a reduced mortality rate, comparable to that of the queens of the same species, which live for as long as two decades. South American poison dart frogs in the genus Dendrobates feed mainly on ants, and the toxins in their skin may come from the ants. Army ants forage in a wide roving column, attacking any animals in that path that are unable to escape. In Central and South America, Eciton burchellii is the swarming ant most commonly attended by "ant-following" birds such as antbirds and woodcreepers. This behaviour was once considered mutualistic, but later studies found the birds to be parasitic. Direct kleptoparasitism (birds stealing food from the ants' grasp) is rare and has been noted in Inca doves which pick seeds at nest entrances as they are being transported by species of Pogonomyrmex. Birds that follow ants eat many prey insects and thus decrease the foraging success of ants. Birds indulge in a peculiar behaviour called anting that, as yet, is not fully understood. Here birds rest on ant nests, or pick and drop ants onto their wings and feathers; this may be a means to remove ectoparasites from the birds. Anteaters, aardvarks, pangolins, echidnas and numbats have special adaptations for living on a diet of ants. These adaptations include long, sticky tongues to capture ants and strong claws to break into ant nests. Brown bears (Ursus arctos) have been found to feed on ants. About 12%, 16%, and 4% of their faecal volume in spring, summer and autumn, respectively, is composed of ants. Relationship with humans Ants perform many ecological roles that are beneficial to humans, including the suppression of pest populations and aeration of the soil. The use of weaver ants in citrus cultivation in southern China is considered one of the oldest known applications of biological control. On the other hand, ants may become nuisances when they invade buildings or cause economic losses. In some parts of the world (mainly Africa and South America), large ants, especially army ants, are used as surgical sutures. The wound is pressed together and ants are applied along it. The ant seizes the edges of the wound in its mandibles and locks in place. The body is then cut off and the head and mandibles remain in place to close the wound. The large heads of the dinergates (soldiers) of the leafcutting ant Atta cephalotes are also used by native surgeons in closing wounds. Some ants have toxic venom and are of medical importance. The species include Paraponera clavata (tocandira) and Dinoponera spp. (false tocandiras) of South America and the Myrmecia ants of Australia. In South Africa, ants are used to help harvest the seeds of rooibos (Aspalathus linearis), a plant used to make a herbal tea. The plant disperses its seeds widely, making manual collection difficult. Black ants collect and store these and other seeds in their nest, where humans can gather them en masse. Up to half a pound (200 g) of seeds may be collected from one ant-heap. 
Although most ants survive attempts by humans to eradicate them, a few are highly endangered. These tend to be island species that have evolved specialized traits and risk being displaced by introduced ant species. Examples include the critically endangered Sri Lankan relict ant (Aneuretus simoni) and Adetomyrma venatrix of Madagascar. As food Ants and their larvae are eaten in different parts of the world. The eggs of two species of ants are used in Mexican escamoles. They are considered a form of insect caviar and can sell for as much as US$50 per kg going up to US$200 per kg (as of 2006) because they are seasonal and hard to find. In the Colombian department of Santander, hormigas culonas (roughly interpreted as "large-bottomed ants") Atta laevigata are toasted alive and eaten. In areas of India, and throughout Burma and Thailand, a paste of the green weaver ant (Oecophylla smaragdina) is served as a condiment with curry. Weaver ant eggs and larvae, as well as the ants, may be used in a Thai salad, yam (), in a dish called yam khai mot daeng () or red ant egg salad, a dish that comes from the Issan or north-eastern region of Thailand. Saville-Kent, in the Naturalist in Australia wrote "Beauty, in the case of the green ant, is more than skin-deep. Their attractive, almost sweetmeat-like translucency possibly invited the first essays at their consumption by the human species". Mashed up in water, after the manner of lemon squash, "these ants form a pleasant acid drink which is held in high favor by the natives of North Queensland, and is even appreciated by many European palates". Ants or their pupae are used as starters for yogurt making in parts of Bulgaria and Turkey. In his First Summer in the Sierra, John Muir notes that the Digger Indians of California ate the tickling, acid gasters of the large jet-black carpenter ants. The Mexican Indians eat the repletes, or living honey-pots, of the honey ant (Myrmecocystus). As pests Some ant species are considered as pests, primarily those that occur in human habitations, where their presence is often problematic. For example, the presence of ants would be undesirable in sterile places such as hospitals or kitchens. Some species or genera commonly categorized as pests include the Argentine ant, immigrant pavement ant, yellow crazy ant, banded sugar ant, pharaoh ant, red wood ant, black carpenter ant, odorous house ant, red imported fire ant, and European fire ant. Some ants will raid stored food, some will seek water sources, others may damage indoor structures, some may damage agricultural crops directly or by aiding sucking pests. Some will sting or bite. The adaptive nature of ant colonies make it nearly impossible to eliminate entire colonies and most pest management practices aim to control local populations and tend to be temporary solutions. Ant populations are managed by a combination of approaches that make use of chemical, biological, and physical methods. Chemical methods include the use of insecticidal bait which is gathered by ants as food and brought back to the nest where the poison is inadvertently spread to other colony members through trophallaxis. Management is based on the species and techniques may vary according to the location and circumstance. In science and technology Observed by humans since the dawn of history, the behaviour of ants has been documented and the subject of early writings and fables passed from one century to another. 
Those using scientific methods, myrmecologists, study ants in the laboratory and in their natural conditions. Their complex and variable social structures have made ants ideal model organisms. Ultraviolet vision was first discovered in ants by Sir John Lubbock in 1881. Studies on ants have tested hypotheses in ecology and sociobiology, and have been particularly important in examining the predictions of theories of kin selection and evolutionarily stable strategies. Ant colonies may be studied by rearing or temporarily maintaining them in formicaria, specially constructed glass framed enclosures. Individuals may be tracked for study by marking them with dots of colours. The successful techniques used by ant colonies have been studied in computer science and robotics to produce distributed and fault-tolerant systems for solving problems, for example Ant colony optimization and Ant robotics. This area of biomimetics has led to studies of ant locomotion, search engines that make use of "foraging trails", fault-tolerant storage, and networking algorithms. As pets From the late 1950s through the late 1970s, ant farms were popular educational children's toys in the United States. Some later commercial versions use transparent gel instead of soil, allowing greater visibility at the cost of stressing the ants with unnatural light. In culture Anthropomorphised ants have often been used in fables, children's stories, and religious texts to represent industriousness and cooperative effort, such as in the Aesop fable The Ant and the Grasshopper. In the Quran, Sulayman is said to have heard and understood an ant warning other ants to return home to avoid being accidentally crushed by Sulayman and his marching army., In parts of Africa, ants are considered to be the messengers of the deities. Some Native American mythology, such as the Hopi mythology, considers ants as the first animals. Ant bites are often said to have curative properties. The sting of some species of Pseudomyrmex is claimed to give fever relief. Ant bites are used in the initiation ceremonies of some Amazon Indian cultures as a test of endurance. In Greek mythology, the goddess Athena turned the maiden Myrmex into an ant when the latter claimed to have invented the plough, when in fact it was Athena's own invention. Ant society has always fascinated humans and has been written about both humorously and seriously. Mark Twain wrote about ants in his 1880 book A Tramp Abroad. Some modern authors have used the example of the ants to comment on the relationship between society and the individual. Examples are Robert Frost in his poem "Departmental" and T. H. White in his fantasy novel The Once and Future King. The plot in French entomologist and writer Bernard Werber's Les Fourmis science-fiction trilogy is divided between the worlds of ants and humans; ants and their behaviour are described using contemporary scientific knowledge. H. G. Wells wrote about intelligent ants destroying human settlements in Brazil and threatening human civilization in his 1905 science-fiction short story, The Empire of the Ants. A similar German story involving army ants, Leiningen Versus the Ants, was written in 1937 and recreated in movie form as The Naked Jungle in 1954. In more recent times, animated cartoons and 3-D animated films featuring ants have been produced including Antz, A Bug's Life, The Ant Bully, The Ant and the Aardvark, Ferdy the Ant and Atom Ant. Renowned myrmecologist E. O. 
Wilson wrote a short story, "Trailhead" in 2010 for The New Yorker magazine, which describes the life and death of an ant-queen and the rise and fall of her colony, from an ants' point of view. Ants also are quite popular inspiration for many science-fiction insectoids, such as the Formics of Ender's Game, the Bugs of Starship Troopers, the giant ants in the films Them! and Empire of the Ants, Marvel Comics' super hero Ant-Man, and ants mutated into super-intelligence in Phase IV. In computer strategy games, ant-based species often benefit from increased production rates due to their single-minded focus, such as the Klackons in the Master of Orion series of games or the ChCht in Deadlock II. These characters are often credited with a hive mind, a common misconception about ant colonies. In the early 1990s, the video game SimAnt, which simulated an ant colony, won the 1992 Codie award for "Best Simulation Program".
2637
https://en.wikipedia.org/wiki/Atomic%20absorption%20spectroscopy
Atomic absorption spectroscopy
Atomic absorption spectroscopy (AAS) is a spectroanalytical procedure for the quantitative measurement of chemical elements. AAS is based on the absorption of light by free atoms in the gaseous state that have been atomized from a sample. An alternative technique is atomic emission spectroscopy (AES). In analytical chemistry the technique is used for determining the concentration of a particular element (the analyte) in a sample to be analyzed. AAS can be used to determine over 70 different elements in solution, or directly in solid samples via electrothermal vaporization, and is used in pharmacology, biophysics, archaeology and toxicology research. Atomic absorption spectroscopy was first used as an analytical technique, and the underlying principles were established in the second half of the 19th century by Robert Wilhelm Bunsen and Gustav Robert Kirchhoff, both professors at the University of Heidelberg, Germany. The modern form of AAS was largely developed during the 1950s by a team of Australian chemists. They were led by Sir Alan Walsh at the Commonwealth Scientific and Industrial Research Organisation (CSIRO), Division of Chemical Physics, in Melbourne, Australia. Atomic absorption spectrometry has many uses in different areas of chemistry, such as clinical analysis of metals in biological fluids and tissues such as whole blood, plasma, urine, saliva, brain tissue, liver, hair, and muscle tissue. Atomic absorption spectrometry can be used in qualitative and quantitative analysis. Principles The technique makes use of the atomic absorption spectrum of a sample in order to assess the concentration of specific analytes within it. It requires standards with known analyte content to establish the relation between the measured absorbance and the analyte concentration and relies therefore on the Beer–Lambert law. Analyzing Samples with Atomic Absorption Spectroscopy (AAS) Atomic absorption spectroscopy measures the concentration of specific elements in a sample by analyzing their unique "fingerprint" in the form of an atomic absorption spectrum. The analysis proceeds in a series of steps. Step 1: Sample Preparation: The sample is typically dissolved in a suitable solvent (acids, water) to create a liquid solution that can be introduced into the atomizer in a reproducible way. For solid samples like ores or minerals, additional steps like grinding and digestion may be required to break down the matrix and liberate the analytes. Step 2: Atomization: The prepared solution is nebulized into a fine mist and introduced into a high-temperature flame (air-acetylene or nitrous oxide-acetylene mix). The intense heat of the flame evaporates the solvent and dissociates the analyte compounds, producing free analyte atoms, most of which remain in their electronic ground state. Step 3: Absorption: Simultaneously, a hollow cathode lamp containing the same element as the analyte emits light at a specific wavelength that corresponds to the energy difference between the ground and excited states of the analyte atoms. As the emitted light passes through the atomized sample, some photons are absorbed by ground-state analyte atoms, promoting them to an excited state. This absorption decreases the intensity of the light at that specific wavelength. Step 4: Measurement and Analysis: The light intensity before and after passing through the sample is measured by a detector. The attenuation, expressed as absorbance, is directly proportional to the concentration of the analyte in the sample, following the Beer–Lambert law A = εcl, where A is the measured absorbance, ε is the molar absorptivity (a constant specific to the element and wavelength), c is the concentration of the analyte, and l is the path length of the light through the sample. Step 5: Calibration and Quantification: To determine the actual concentration of the analyte, the instrument is calibrated using standard solutions containing known concentrations of the element. By comparing the measured absorbance of the sample to the calibration curve, the concentration of the analyte in the original sample can be calculated.
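As a rough illustration of Steps 4 and 5 above, the sketch below (Python; the standard concentrations and transmitted intensities are invented placeholder values) converts measured intensities into absorbance, fits a linear calibration curve to the standards, and reads off the concentration of an unknown sample, relying on the linear relation predicted by the Beer–Lambert law in the working range:

import numpy as np

def absorbance(incident, transmitted):
    # A = -log10(I / I0): the quantity the Beer-Lambert law relates to concentration
    return -np.log10(transmitted / incident)

# calibration standards: known concentrations (mg/L) and their transmitted intensities
# (all numbers are placeholders chosen for illustration only)
std_conc = np.array([0.0, 1.0, 2.0, 4.0])
std_abs = absorbance(100.0, np.array([100.0, 79.4, 63.1, 39.8]))

# fit A = slope * c + intercept; the slope plays the role of epsilon * l
slope, intercept = np.polyfit(std_conc, std_abs, 1)

# unknown sample: measure its absorbance and invert the calibration line
sample_abs = absorbance(100.0, 50.1)
sample_conc = (sample_abs - intercept) / slope
print(f"sample concentration is roughly {sample_conc:.2f} mg/L")

In practice the slope is determined empirically from the standards rather than from tabulated ε and l, which is why every measurement series is bracketed by calibration solutions.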
The measured absorbance thus provides direct feedback on the concentration of the analyte in the sample, which allows AAS to analyze a variety of samples efficiently and to determine their elemental composition with high accuracy. In summary, AAS exploits the element-specific absorption of light to quantify the concentration of elements in a sample: the sample is prepared and atomized, the absorption of element-specific radiation is measured, and the Beer–Lambert law is applied to convert that absorbance into a concentration; the technique is applied to diverse materials across many scientific and industrial fields. Instrumentation In order to analyze a sample for its atomic constituents, it has to be atomized. The atomizers most commonly used nowadays are flames and electrothermal (graphite tube) atomizers. The atoms should then be irradiated by optical radiation, and the radiation source could be an element-specific line radiation source or a continuum radiation source. The radiation then passes through a monochromator in order to separate the element-specific radiation from any other radiation emitted by the radiation source, which is finally measured by a detector. Atomizers The atomizers used nowadays are spectroscopic flames and electrothermal atomizers. Other atomizers, such as glow-discharge atomization, hydride atomization, or cold-vapor atomization, might be used for special purposes. Flame atomizers The oldest and most commonly used atomizers in AAS are flames, principally the air-acetylene flame with a temperature of about 2300 °C and the nitrous oxide (N2O)-acetylene flame with a temperature of about 2700 °C. The latter flame, in addition, offers a more reducing environment, being ideally suited for analytes with high affinity to oxygen. Liquid or dissolved samples are typically used with flame atomizers. The sample solution is aspirated by a pneumatic analytical nebulizer and transformed into an aerosol, which is introduced into a spray chamber, where it is mixed with the flame gases and conditioned in such a way that only the finest aerosol droplets (< 10 μm) enter the flame. This conditioning process reduces interference, but only about 5% of the aerosolized solution reaches the flame because of it. On top of the spray chamber is a burner head that produces a flame that is laterally long (usually 5–10 cm) and only a few mm deep. The radiation beam passes through this flame at its longest axis, and the flame gas flow-rates may be adjusted to produce the highest concentration of free atoms. The burner height may also be adjusted, so that the radiation beam passes through the zone of highest atom cloud density in the flame, resulting in the highest sensitivity.
The processes in a flame include the stages of desolvation (drying) in which the solvent is evaporated and the dry sample nano-particles remain, vaporization (transfer to the gaseous phase) in which the solid particles are converted into gaseous molecule, atomization in which the molecules are dissociated into free atoms, and ionization where (depending on the ionization potential of the analyte atoms and the energy available in a particular flame) atoms may be in part converted to gaseous ions. Each of these stages includes the risk of interference in case the degree of phase transfer is different for the analyte in the calibration standard and in the sample. Ionization is generally undesirable, as it reduces the number of atoms that are available for measurement, i.e., the sensitivity. In flame AAS a steady-state signal is generated during the time period when the sample is aspirated. This technique is typically used for determinations in the mg L−1 range, and may be extended down to a few μg L−1 for some elements. Electrothermal atomizers Electrothermal AAS (ET AAS) using graphite tube atomizers was pioneered by Boris V. L’vov at the Saint Petersburg Polytechnical Institute, Russia, since the late 1950s, and investigated in parallel by Hans Massmann at the Institute of Spectrochemistry and Applied Spectroscopy (ISAS) in Dortmund, Germany. Although a wide variety of graphite tube designs have been used over the years, the dimensions nowadays are typically 20–25 mm in length and 5–6 mm inner diameter. With this technique liquid/dissolved, solid and gaseous samples may be analyzed directly. A measured volume (typically 10–50 μL) or a weighed mass (typically around 1 mg) of a solid sample are introduced into the graphite tube and subject to a temperature program. This typically consists of stages, such as drying – the solvent is evaporated; pyrolysis – the majority of the matrix constituents are removed; atomization – the analyte element is released to the gaseous phase; and cleaning – eventual residues in the graphite tube are removed at high temperature. The graphite tubes are heated via their ohmic resistance using a low-voltage high-current power supply; the temperature in the individual stages can be controlled very closely, and temperature ramps between the individual stages facilitate separation of sample components. Tubes may be heated transversely or longitudinally, where the former ones have the advantage of a more homogeneous temperature distribution over their length. The so-called stabilized temperature platform furnace (STPF) concept, proposed by Walter Slavin, based on research of Boris L’vov, makes ET AAS essentially free from interference. The major components of this concept are atomization of the sample from a graphite platform inserted into the graphite tube (L’vov platform) instead of from the tube wall in order to delay atomization until the gas phase in the atomizer has reached a stable temperature; use of a chemical modifier in order to stabilize the analyte to a pyrolysis temperature that is sufficient to remove the majority of the matrix components; and integration of the absorbance over the time of the transient absorption signal instead of using peak height absorbance for quantification. In ET AAS a transient signal is generated, the area of which is directly proportional to the mass of analyte (not its concentration) introduced into the graphite tube. This technique has the advantage that any kind of sample, solid, liquid or gaseous, can be analyzed directly. 
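A short sketch of the quantification step in ET AAS follows (Python; the absorbance trace and the sensitivity factor are invented placeholder values): the transient signal is integrated over time, and the resulting peak area, rather than the peak height, is taken as proportional to the analyte mass, in line with the STPF concept described above.

import numpy as np

# time axis (s) and a made-up transient absorbance trace from one furnace firing
t = np.linspace(0.0, 5.0, 501)
signal = 0.30 * np.exp(-((t - 2.0) / 0.4) ** 2)   # placeholder Gaussian-shaped peak

peak_height = signal.max()                        # depends strongly on atomization kinetics
# integrated absorbance (trapezoidal rule), in absorbance-seconds
peak_area = float(np.sum((signal[:-1] + signal[1:]) * np.diff(t)) / 2.0)

# with a previously established sensitivity (integrated absorbance per ng of analyte),
# the peak area converts directly to the mass introduced into the tube
SENSITIVITY = 0.05                                # placeholder value, A*s per ng
mass_ng = peak_area / SENSITIVITY

print(f"peak height {peak_height:.3f} A, peak area {peak_area:.3f} A*s, about {mass_ng:.1f} ng")

Using the area makes the result far less sensitive to small changes in atomization kinetics from firing to firing, which is one reason the STPF concept recommends it over peak height.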
The sensitivity of ET AAS is 2–3 orders of magnitude higher than that of flame AAS, so that determinations in the low μg L−1 range (for a typical sample volume of 20 μL) and ng g−1 range (for a typical sample mass of 1 mg) can be carried out. It shows a very high degree of freedom from interferences, so that ET AAS might be considered the most robust technique available nowadays for the determination of trace elements in complex matrices. Specialized atomization techniques While flame and electrothermal vaporizers are the most common atomization techniques, several other atomization methods are utilized for specialized use. Glow-discharge atomization A glow-discharge device (GD) serves as a versatile source, as it can simultaneously introduce and atomize the sample. The glow discharge occurs in a low-pressure argon gas atmosphere between 1 and 10 torr. In this atmosphere lies a pair of electrodes applying a DC voltage of 250 to 1000 V to break down the argon gas into positively charged ions and electrons. These ions, under the influence of the electric field, are accelerated into the cathode surface containing the sample, bombarding the sample and causing neutral sample atom ejection through the process known as sputtering. The atomic vapor produced by this discharge is composed of ions, ground-state atoms, and a fraction of excited atoms. When the excited atoms relax back into their ground state, a low-intensity glow is emitted, giving the technique its name. Samples for glow-discharge atomizers must be electrical conductors. Consequently, these atomizers are most commonly used in the analysis of metals and other conducting samples, although with proper modifications they can also be used to analyze liquid samples and nonconducting materials by mixing them with a conductor (e.g. graphite). Hydride atomization Hydride generation techniques are specialized methods for solutions of specific elements. The technique provides a means of introducing samples containing arsenic, antimony, selenium, bismuth, and lead into an atomizer in the gas phase. With these elements, hydride atomization enhances detection limits by a factor of 10 to 100 compared to alternative methods. Hydride generation occurs by adding an acidified aqueous solution of the sample to a 1% aqueous solution of sodium borohydride, all of which is contained in a glass vessel. The volatile hydride generated by the reaction is swept into the atomization chamber by an inert gas, where it undergoes decomposition. This process forms an atomized form of the analyte, which can then be measured by absorption or emission spectrometry. Cold-vapor atomization The cold-vapor technique is an atomization method limited to the determination of mercury, since mercury is the only metallic element with a large vapor pressure at ambient temperature. Because of this, it has an important use in determining organic mercury compounds in samples and their distribution in the environment. The method starts by converting mercury into Hg2+ by oxidation with nitric and sulfuric acids, followed by reduction of Hg2+ with tin(II) chloride. The mercury is then swept into a long-pass absorption tube by bubbling a stream of inert gas through the reaction mixture. The concentration is determined by measuring the absorbance of this gas at 253.7 nm. Detection limits for this technique are in the parts-per-billion range, making it an excellent atomization method for mercury detection.
Radiation sources We have to distinguish between line source AAS (LS AAS) and continuum source AAS (CS AAS). In classical LS AAS, as proposed by Alan Walsh, the high spectral resolution required for AAS measurements is provided by the radiation source itself, which emits the spectrum of the analyte in the form of lines that are narrower than the absorption lines. Continuum sources, such as deuterium lamps, are only used for background correction purposes. The advantage of this technique is that only a medium-resolution monochromator is necessary for measuring AAS; however, it has the disadvantage that usually a separate lamp is required for each element that has to be determined. In CS AAS, in contrast, a single lamp, emitting a continuum spectrum over the entire spectral range of interest, is used for all elements. Obviously, a high-resolution monochromator is required for this technique, as will be discussed later. Hollow cathode lamps Hollow cathode lamps (HCL) are the most common radiation source in LS AAS. Inside the sealed lamp, filled with argon or neon gas at low pressure, is a cylindrical metal cathode containing the element of interest and an anode. A high voltage is applied across the anode and cathode, resulting in an ionization of the fill gas. The gas ions are accelerated towards the cathode and, upon impact on the cathode, sputter cathode material that is excited in the glow discharge to emit the radiation of the sputtered material, i.e., the element of interest. In the majority of cases, single-element lamps are used, in which the cathode is pressed predominantly from compounds of the target element. Multi-element lamps are available with combinations of compounds of the target elements pressed into the cathode. Multi-element lamps give slightly lower sensitivity than single-element lamps, and the combinations of elements have to be selected carefully to avoid spectral interferences. Most multi-element lamps combine a handful of elements, typically two to eight. Atomic absorption spectrometers may have as few as 1–2 hollow cathode lamp positions, while automated multi-element spectrometers typically provide 8–12 lamp positions. Electrodeless discharge lamps Electrodeless discharge lamps (EDL) contain a small quantity of the analyte as a metal or a salt in a quartz bulb together with an inert gas, typically argon, at low pressure. The bulb is inserted into a coil that generates an electromagnetic radio-frequency field, resulting in a low-pressure inductively coupled discharge in the lamp. The emission from an EDL is higher than that from an HCL, and the line width is generally narrower, but EDLs need a separate power supply and might need a longer time to stabilize. Deuterium lamps Deuterium HCL or even hydrogen HCL and deuterium discharge lamps are used in LS AAS for background correction purposes. The radiation intensity emitted by these lamps decreases significantly with increasing wavelength, so that they can only be used in the wavelength range between 190 and about 320 nm. Continuum sources When a continuum radiation source is used for AAS, it is necessary to use a high-resolution monochromator, as will be discussed later. In addition, it is necessary that the lamp emits radiation of intensity at least an order of magnitude above that of a typical HCL over the entire wavelength range from 190 nm to 900 nm. A special high-pressure xenon short arc lamp, operating in a hot-spot mode, has been developed to fulfill these requirements.
Spectrometer As already pointed out above, there is a difference between medium-resolution spectrometers that are used for LS AAS and high-resolution spectrometers that are designed for CS AAS. The spectrometer includes the spectral sorting device (monochromator) and the detector. Spectrometers for LS AAS In LS AAS the high resolution that is required for the measurement of atomic absorption is provided by the narrow line emission of the radiation source, and the monochromator simply has to resolve the analytical line from other radiation emitted by the lamp. This can usually be accomplished with a band pass between 0.2 and 2 nm, i.e., a medium-resolution monochromator. Another feature to make LS AAS element-specific is modulation of the primary radiation and the use of a selective amplifier that is tuned to the same modulation frequency, as already postulated by Alan Walsh. This way any (unmodulated) radiation emitted for example by the atomizer can be excluded, which is imperative for LS AAS. Simple monochromators of the Littrow or (better) the Czerny-Turner design are typically used for LS AAS. Photomultiplier tubes are the most frequently used detectors in LS AAS, although solid state detectors might be preferred because of their better signal-to-noise ratio. Spectrometers for CS AAS When a continuum radiation source is used for AAS measurement it is indispensable to work with a high-resolution monochromator. The resolution has to be equal to or better than the half-width of an atomic absorption line (about 2 pm) in order to avoid losses of sensitivity and linearity of the calibration graph. The research with high-resolution (HR) CS AAS was pioneered by the groups of O’Haver and Harnly in the US, who also developed the (up until now) only simultaneous multi-element spectrometer for this technique. The breakthrough, however, came when the group of Becker-Ross in Berlin, Germany, built a spectrometer entirely designed for HR-CS AAS. The first commercial equipment for HR-CS AAS was introduced by Analytik Jena (Jena, Germany) at the beginning of the 21st century, based on the design proposed by Becker-Ross and Florek. These spectrometers use a compact double monochromator with a prism pre-monochromator and an echelle grating monochromator for high resolution. A linear charge-coupled device (CCD) array with 200 pixels is used as the detector. The second monochromator does not have an exit slit; hence the spectral environment at both sides of the analytical line becomes visible at high resolution. As typically only 3–5 pixels are used to measure the atomic absorption, the other pixels are available for correction purposes. One of these corrections is that for lamp flicker noise, which is independent of wavelength, resulting in measurements with very low noise level; other corrections are those for background absorption, as will be discussed later. Background absorption and background correction The relatively small number of atomic absorption lines (compared to atomic emission lines) and their narrow width (a few pm) make spectral overlap rare; there are only few examples known that an absorption line from one element will overlap with another. Molecular absorption, in contrast, is much broader, so that it is more likely that some molecular absorption band will overlap with an atomic line. This kind of absorption might be caused by un-dissociated molecules of concomitant elements of the sample or by flame gases. 
We have to distinguish between the spectra of di-atomic molecules, which exhibit a pronounced fine structure, and those of larger (usually tri-atomic) molecules that don't show such fine structure. Another source of background absorption, particularly in ET AAS, is scattering of the primary radiation at particles that are generated in the atomization stage, when the matrix could not be removed sufficiently in the pyrolysis stage. All these phenomena, molecular absorption and radiation scattering, can result in artificially high absorption and an improperly high (erroneous) calculation for the concentration or mass of the analyte in the sample. There are several techniques available to correct for background absorption, and they are significantly different for LS AAS and HR-CS AAS. Background correction techniques in LS AAS In LS AAS background absorption can only be corrected using instrumental techniques, and all of them are based on two sequential measurements: firstly, total absorption (atomic plus background), secondly, background absorption only. The difference of the two measurements gives the net atomic absorption. Because of this, and because of the use of additional devices in the spectrometer, the signal-to-noise ratio of background-corrected signals is always significantly inferior compared to uncorrected signals. It should also be pointed out that in LS AAS there is no way to correct for (the rare case of) a direct overlap of two atomic lines. In essence there are three techniques used for background correction in LS AAS: Deuterium background correction This is the oldest and still most commonly used technique, particularly for flame AAS. In this case, a separate source (a deuterium lamp) with broad emission is used to measure the background absorption over the entire width of the exit slit of the spectrometer. The use of a separate lamp makes this technique the least accurate one, as it cannot correct for any structured background. It also cannot be used at wavelengths above about 320 nm, as the emission intensity of the deuterium lamp becomes very weak. The use of deuterium HCL is preferable compared to an arc lamp due to the better fit of the image of the former lamp with that of the analyte HCL. Smith-Hieftje background correction This technique (named after their inventors) is based on the line-broadening and self-reversal of emission lines from HCL when high current is applied. Total absorption is measured with normal lamp current, i.e., with a narrow emission line, and background absorption after application of a high-current pulse with the profile of the self-reversed line, which has little emission at the original wavelength, but strong emission on both sides of the analytical line. The advantage of this technique is that only one radiation source is used; among the disadvantages are that the high-current pulses reduce lamp lifetime, and that the technique can only be used for relatively volatile elements, as only those exhibit sufficient self-reversal to avoid dramatic loss of sensitivity. Another problem is that background is not measured at the same wavelength as total absorption, making the technique unsuitable for correcting structured background. 
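Both deuterium and Smith-Hieftje correction follow the two-measurement scheme described above: total absorbance is measured first, background absorbance second, and the net atomic absorbance is their difference. A minimal Python sketch of that arithmetic, with purely illustrative intensity readings (I0 for the unattenuated lamp signal, I for the transmitted signal):

```python
import math

def absorbance(i0, i):
    """Beer-Lambert absorbance A = log10(I0 / I)."""
    return math.log10(i0 / i)

# Hypothetical intensity readings (arbitrary units) for the two sequential measurements.
total = absorbance(i0=1000.0, i=560.0)       # analyte line: atomic + background attenuation
background = absorbance(i0=1000.0, i=890.0)  # background only (D2 lamp, or self-reversed HCL pulse)

net = total - background                     # net atomic absorbance
print(f"total={total:.3f}  background={background:.3f}  net={net:.3f}")
```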
Zeeman-effect background correction An alternating magnetic field is applied at the atomizer (graphite furnace) to split the absorption line into three components: the π component, which remains at the same position as the original absorption line, and two σ components, which are moved to higher and lower wavelengths, respectively. Total absorption is measured without the magnetic field and background absorption with the magnetic field on. The π component has to be removed in this case, e.g. using a polarizer; the σ components do not overlap with the emission profile of the lamp, so that only the background absorption is measured. The advantage of this technique is that total and background absorption are measured with the same emission profile of the same lamp, so that any kind of background, including background with fine structure, can be corrected accurately, unless the molecule responsible for the background is also affected by the magnetic field. The disadvantages are the increased complexity of the spectrometer and of the power supply needed to run the powerful magnet that splits the absorption line, and the fact that using a chopper as a polarizer reduces the signal-to-noise ratio. Background correction techniques in HR-CS AAS In HR-CS AAS background correction is carried out mathematically in the software, using information from detector pixels that are not used for measuring atomic absorption; hence, in contrast to LS AAS, no additional components are required for background correction. Background correction using correction pixels It has already been mentioned that in HR-CS AAS lamp flicker noise is eliminated using correction pixels. In fact, any increase or decrease in radiation intensity that is observed to the same extent at all pixels chosen for correction is eliminated by the correction algorithm. This obviously also includes a reduction of the measured intensity due to radiation scattering or molecular absorption, which is corrected in the same way. As the measurement of total and background absorption, and the correction for the latter, are strictly simultaneous (in contrast to LS AAS), even the fastest changes of background absorption, as they may be observed in ET AAS, do not cause any problem. In addition, as the same algorithm is used for background correction and elimination of lamp noise, the background-corrected signals show a much better signal-to-noise ratio compared to the uncorrected signals, which is also in contrast to LS AAS. Background correction using a least-squares algorithm The above technique obviously cannot correct for a background with fine structure, as in this case the absorbance will be different at each of the correction pixels. In this case HR-CS AAS offers the possibility of measuring correction spectra of the molecule(s) that is (are) responsible for the background and storing them in the computer. These spectra are then multiplied by a factor to match the intensity of the sample spectrum and subtracted pixel by pixel and spectrum by spectrum from the sample spectrum using a least-squares algorithm. This might sound complex, but first the number of di-atomic molecules that can exist at the temperatures of the atomizers used in AAS is relatively small, and second, the correction is performed by the computer within a few seconds. The same algorithm can actually also be used to correct for direct line overlap of two atomic absorption lines, making HR-CS AAS the only AAS technique that can correct for this kind of spectral interference.
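A minimal sketch of the least-squares step just described, assuming a hypothetical 200-pixel absorbance spectrum and a single stored reference spectrum of an interfering diatomic molecule; the reference is scaled by a fitted factor and subtracted pixel by pixel. The data, names, and the exclusion of the analyte pixels from the fit are illustrative choices, not the actual commercial implementation:

```python
import numpy as np

def subtract_molecular_background(sample, reference, analyte_pixels):
    """Scale a stored molecular reference spectrum to the sample by least squares
    (using only pixels outside the analyte line) and subtract it pixel by pixel."""
    fit = ~analyte_pixels
    k, *_ = np.linalg.lstsq(reference[fit].reshape(-1, 1), sample[fit], rcond=None)
    return sample - k[0] * reference

# Illustrative 200-pixel spectra: a broad molecular band plus a narrow atomic line.
pixels = np.arange(200)
reference = np.exp(-((pixels - 90) / 60.0) ** 2)          # stored molecular spectrum
atomic_line = 0.40 * np.exp(-((pixels - 100) / 1.5) ** 2)  # analyte contribution
sample = 0.8 * reference + atomic_line                     # measured sample spectrum

analyte = np.abs(pixels - 100) < 5                         # pixels carrying the atomic line
corrected = subtract_molecular_background(sample, reference, analyte)
print(f"absorbance at line center before: {sample[100]:.2f}, after: {corrected[100]:.2f}")
```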
Physical sciences
Spectroscopy
Chemistry
2704
https://en.wikipedia.org/wiki/Optical%20aberration
Optical aberration
In optics, aberration is a property of optical systems, such as lenses, that causes light to be spread out over some region of space rather than focused to a point. Aberrations cause the image formed by a lens to be blurred or distorted, with the nature of the distortion depending on the type of aberration. Aberration can be defined as a departure of the performance of an optical system from the predictions of paraxial optics. In an imaging system, it occurs when light from one point of an object does not converge into (or does not diverge from) a single point after transmission through the system. Aberrations occur because the simple paraxial theory is not a completely accurate model of the effect of an optical system on light, rather than due to flaws in the optical elements. An image-forming optical system with aberration will produce an image which is not sharp. Makers of optical instruments need to correct optical systems to compensate for aberration. Aberrations are particularly impactful in telescopes, where they can significantly degrade the quality of observed celestial objects. Understanding and correcting these optical imperfections are crucial for astronomers to achieve clear and accurate observations. Aberration can be analyzed with the techniques of geometrical optics. The articles on reflection, refraction and caustics discuss the general features of reflected and refracted rays. Overview With an ideal lens, light from any given point on an object would pass through the lens and come together at a single point in the image plane (or, more generally, the image surface). Real lenses, even when they are perfectly made, do not however focus light exactly to a single point. These deviations from the idealized lens performance are called aberrations of the lens. Aberrations fall into two classes: monochromatic and chromatic. Monochromatic aberrations are caused by the geometry of the lens or mirror and occur both when light is reflected and when it is refracted. They appear even when using monochromatic light, hence the name. Chromatic aberrations are caused by dispersion, the variation of a lens's refractive index with wavelength. Because of dispersion, different wavelengths of light come to focus at different points. Chromatic aberration does not appear when monochromatic light is used. Monochromatic aberrations The most common monochromatic aberrations are: Defocus Spherical aberration Coma Astigmatism Field curvature Image distortion Although defocus is technically the lowest-order of the optical aberrations, it is usually not considered as a lens aberration, since it can be corrected by moving the lens (or the image plane) to bring the image plane to the optical focus of the lens. In addition to these aberrations, piston and tilt are effects which shift the position of the focal point. Piston and tilt are not true optical aberrations, since when an otherwise perfect wavefront is altered by piston and tilt, it will still form a perfect, aberration-free image, only shifted to a different position. Chromatic aberrations Chromatic aberration occurs when different wavelengths are not focussed to the same point. Types of chromatic aberration are: Axial (or "longitudinal") chromatic aberration Lateral (or "transverse") chromatic aberration Theory of monochromatic aberration In a perfect optical system in the classical theory of optics, rays of light proceeding from any object point unite in an image point; and therefore the object space is reproduced in an image space. 
The introduction of simple auxiliary terms, due to Gauss, named the focal lengths and focal planes, permits the determination of the image of any object for any system. The Gaussian theory, however, is only true so long as the angles made by all rays with the optical axis (the symmetrical axis of the system) are infinitely small, i.e., with infinitesimal objects, images and lenses; in practice these conditions may not be realized, and the images projected by uncorrected systems are, in general, ill-defined and often blurred if the aperture or field of view exceeds certain limits. The investigations of James Clerk Maxwell and Ernst Abbe showed that the properties of these reproductions, i.e., the relative position and magnitude of the images, are not special properties of optical systems, but necessary consequences of the supposition (per Abbe) of the reproduction of all points of a space in image points, and are independent of the manner in which the reproduction is effected. These authors showed, however, that no optical system can justify these suppositions, since they are contradictory to the fundamental laws of reflection and refraction. Consequently, the Gaussian theory only supplies a convenient method of approximating reality; realistic optical systems fall short of this unattainable ideal. Currently, all that can be accomplished is the projection of a single plane onto another plane; but even in this, aberrations always occurs and it may be unlikely that these will ever be entirely corrected. Aberration of axial points (spherical aberration in the restricted sense) Let S (fig. 1) be any optical system, rays proceeding from an axis point O under an angle u1 will unite in the axis point O'1; and those under an angle u2 in the axis point O'2. If there is refraction at a collective spherical surface, or through a thin positive lens, O'2 will lie in front of O'1 so long as the angle u2 is greater than u1 (under correction); and conversely with a dispersive surface or lenses (over correction). The caustic, in the first case, resembles the sign > (greater than); in the second < (less than). If the angle u1 is very small, O'1 is the Gaussian image; and O'1 O'2 is termed the longitudinal aberration, and O'1R the lateral aberration of the pencils with aperture u2. If the pencil with the angle u2 is that of the maximum aberration of all the pencils transmitted, then in a plane perpendicular to the axis at O'1 there is a circular disk of confusion of radius O'1R, and in a parallel plane at O'2 another one of radius O'2R2; between these two is situated the disk of least confusion. The largest opening of the pencils, which take part in the reproduction of O, i.e., the angle u, is generally determined by the margin of one of the lenses or by a hole in a thin plate placed between, before, or behind the lenses of the system. This hole is termed the stop or diaphragm; Abbe used the term aperture stop for both the hole and the limiting margin of the lens. The component S1 of the system, situated between the aperture stop and the object O, projects an image of the diaphragm, termed by Abbe the entrance pupil; the exit pupil is the image formed by the component S2, which is placed behind the aperture stop. All rays which issue from O and pass through the aperture stop also pass through the entrance and exit pupils, since these are images of the aperture stop. 
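To make longitudinal aberration concrete, the following sketch (an addition, not part of the original text) traces rays arriving parallel to the axis through a single convex refracting spherical surface using Snell's law and compares each axial crossing point with the paraxial (Gaussian) image point; the difference grows with ray height, i.e. with aperture. Radius and refractive indices are illustrative values:

```python
import math

def axis_crossing(h, R, n1, n2):
    """Exact axial crossing distance (measured from the vertex) of a ray that arrives
    parallel to the axis at height h and refracts once at a convex spherical surface
    of radius R separating media with indices n1 -> n2."""
    sin_i = h / R                            # angle of incidence at the surface
    sin_r = n1 * sin_i / n2                  # Snell's law
    i, r = math.asin(sin_i), math.asin(sin_r)
    z_hit = R - math.sqrt(R * R - h * h)     # sag: where the ray meets the surface
    return z_hit + h / math.tan(i - r)       # ray descends to the axis with slope tan(i - r)

R, n1, n2 = 50.0, 1.0, 1.5                   # mm, air -> glass (illustrative)
paraxial = n2 * R / (n2 - n1)                # Gaussian image distance for parallel rays

for h in (2.0, 5.0, 10.0):
    crossing = axis_crossing(h, R, n1, n2)
    print(f"h = {h:5.1f} mm  crossing = {crossing:7.2f} mm  "
          f"longitudinal aberration = {paraxial - crossing:5.2f} mm")
```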
Since the maximum aperture of the pencils issuing from O is the angle u subtended by the entrance pupil at this point, the magnitude of the aberration will be determined by the position and diameter of the entrance pupil. If the system be entirely behind the aperture stop, then this is itself the entrance pupil (front stop); if entirely in front, it is the exit pupil (back stop). If the object point be infinitely distant, all rays received by the first member of the system are parallel, and their intersections, after traversing the system, vary according to their perpendicular height of incidence, i.e. their distance from the axis. This distance replaces the angle u in the preceding considerations; and the aperture, i.e., the radius of the entrance pupil, is its maximum value. Aberration of elements, i.e. smallest objects at right angles to the axis If rays issuing from O (fig. 1) are concurrent, it does not follow that points in a portion of a plane perpendicular at O to the axis will be also concurrent, even if the part of the plane be very small. As the diameter of the lens increases (i.e., with increasing aperture), the neighboring point N will be reproduced, but attended by aberrations comparable in magnitude to ON. These aberrations are avoided if, according to Abbe, the sine condition, sin u'1/sin u1=sin u'2/sin u2, holds for all rays reproducing the point O. If the object point O is infinitely distant, u1 and u2 are to be replaced by h1 and h2, the perpendicular heights of incidence; the sine condition then becomes sin u'1/h1=sin u'2/h2. A system fulfilling this condition and free from spherical aberration is called aplanatic (Greek a-, privative, plann, a wandering). This word was first used by Robert Blair to characterize a superior achromatism, and, subsequently, by many writers to denote freedom from spherical aberration as well. Since the aberration increases with the distance of the ray from the center of the lens, the aberration increases as the lens diameter increases (or, correspondingly, with the diameter of the aperture), and hence can be minimized by reducing the aperture, at the cost of also reducing the amount of light reaching the image plane. Aberration of lateral object points (points beyond the axis) with narrow pencils — astigmatism A point O (fig. 2) at a finite distance from the axis (or with an infinitely distant object, a point which subtends a finite angle at the system) is, in general, even then not sharply reproduced if the pencil of rays issuing from it and traversing the system is made infinitely narrow by reducing the aperture stop; such a pencil consists of the rays which can pass from the object point through the now infinitely small entrance pupil. It is seen (ignoring exceptional cases) that the pencil does not meet the refracting or reflecting surface at right angles; therefore it is astigmatic (Gr. a-, privative, stigmia, a point). Naming the central ray passing through the entrance pupil the axis of the pencil or principal ray, it can be said: the rays of the pencil intersect, not in one point, but in two focal lines, which can be assumed to be at right angles to the principal ray; of these, one lies in the plane containing the principal ray and the axis of the system, i.e. in the first principal section or meridional section, and the other at right angles to it, i.e. in the second principal section or sagittal section. 
We receive, therefore, in no single intercepting plane behind the system, as, for example, a focusing screen, an image of the object point; on the other hand, in each of two planes lines O' and O" are separately formed (in neighboring planes ellipses are formed), and in a plane between O' and O" a circle of least confusion. The interval O'O", termed the astigmatic difference, increases, in general, with the angle W made by the principal ray OP with the axis of the system, i.e. with the field of view. Two astigmatic image surfaces correspond to one object plane; and these are in contact at the axis point; on the one lie the focal lines of the first kind, on the other those of the second. Systems in which the two astigmatic surfaces coincide are termed anastigmatic or stigmatic. Sir Isaac Newton was probably the discoverer of astigmation; the position of the astigmatic image lines was determined by Thomas Young; and the theory was developed by Allvar Gullstrand. A bibliography by P. Culmann is given in Moritz von Rohr's Die Bilderzeugung in optischen Instrumenten. Aberration of lateral object points with broad pencils — coma By opening the stop wider, similar deviations arise for lateral points as have been already discussed for axial points; but in this case they are much more complicated. The course of the rays in the meridional section is no longer symmetrical to the principal ray of the pencil; and on an intercepting plane there appears, instead of a luminous point, a patch of light, not symmetrical about a point, and often exhibiting a resemblance to a comet having its tail directed towards or away from the axis. From this appearance it takes its name. The unsymmetrical form of the meridional pencil—formerly the only one considered—is coma in the narrower sense only; other errors of coma have been treated by Arthur König and Moritz von Rohr, and later by Allvar Gullstrand. Curvature of the field of the image If the above errors be eliminated, the two astigmatic surfaces united, and a sharp image obtained with a wide aperture—there remains the necessity to correct the curvature of the image surface, especially when the image is to be received upon a plane surface, e.g. in photography. In most cases the surface is concave towards the system. Distortion of the image Even if the image is sharp, it may be distorted compared to ideal pinhole projection. In pinhole projection, the magnification of an object is inversely proportional to its distance to the camera along the optical axis so that a camera pointing directly at a flat surface reproduces that flat surface. Distortion can be thought of as stretching the image non-uniformly, or, equivalently, as a variation in magnification across the field. While "distortion" can include arbitrary deformation of an image, the most pronounced modes of distortion produced by conventional imaging optics is "barrel distortion", in which the center of the image is magnified more than the perimeter (figure 3a). The reverse, in which the perimeter is magnified more than the center, is known as "pincushion distortion" (figure 3b). This effect is called lens distortion or image distortion, and there are algorithms to correct it. Systems free of distortion are called orthoscopic (orthos, right, skopein to look) or rectilinear (straight lines). This aberration is quite distinct from that of the sharpness of reproduction; in unsharp, reproduction, the question of distortion arises if only parts of the object can be recognized in the figure. 
If, in an unsharp image, a patch of light corresponds to an object point, the center of gravity of the patch may be regarded as the image point, this being the point where the plane receiving the image, e.g., a focusing screen, intersects the ray passing through the middle of the stop. This assumption is justified if a poor image on the focusing screen remains stationary when the aperture is diminished; in practice, this generally occurs. This ray, named by Abbe a principal ray (not to be confused with the principal rays of the Gaussian theory), passes through the center of the entrance pupil before the first refraction, and the center of the exit pupil after the last refraction. From this it follows that correctness of drawing depends solely upon the principal rays; and is independent of the sharpness or curvature of the image field. Referring to fig. 4, we have O'Q'/OQ = a' tan w'/a tan w = 1/N, where N is the scale or magnification of the image. For N to be constant for all values of w, a' tan w'/a tan w must also be constant. If the ratio a'/a be sufficiently constant, as is often the case, the above relation reduces to the condition of Airy, i.e. tan w'/ tan w= a constant. This simple relation (see Camb. Phil. Trans., 1830, 3, p. 1) is fulfilled in all systems which are symmetrical with respect to their diaphragm (briefly named symmetrical or holosymmetrical objectives), or which consist of two like, but different-sized, components, placed from the diaphragm in the ratio of their size, and presenting the same curvature to it (hemisymmetrical objectives); in these systems tan w' / tan w = 1. The constancy of a'/a necessary for this relation to hold was pointed out by R. H. Bow (Brit. Journ. Photog., 1861), and Thomas Sutton (Photographic
Physical sciences
Optics
Physics
2724
https://en.wikipedia.org/wiki/Autocorrelation
Autocorrelation
Autocorrelation, sometimes known as serial correlation in the discrete time case, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations of a random variable as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. It is often used in signal processing for analyzing functions or series of values, such as time domain signals. Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance. Unit root processes, trend-stationary processes, autoregressive processes, and moving average processes are specific forms of processes with autocorrelation. Autocorrelation of stochastic processes In statistics, the autocorrelation of a real or complex random process is the Pearson correlation between values of the process at different times, as a function of the two times or of the time lag. Let be a random process, and be any point in time ( may be an integer for a discrete-time process or a real number for a continuous-time process). Then is the value (or realization) produced by a given run of the process at time . Suppose that the process has mean and variance at time , for each . Then the definition of the autocorrelation function between times and is where is the expected value operator and the bar represents complex conjugation. Note that the expectation may not be well defined. Subtracting the mean before multiplication yields the auto-covariance function between times and : Note that this expression is not well defined for all-time series or processes, because the mean may not exist, or the variance may be zero (for a constant process) or infinite (for processes with distribution lacking well-behaved moments, such as certain types of power law). Definition for wide-sense stationary stochastic process If is a wide-sense stationary process then the mean and the variance are time-independent, and further the autocovariance function depends only on the lag between and : the autocovariance depends only on the time-distance between the pair of values but not on their position in time. This further implies that the autocovariance and autocorrelation can be expressed as a function of the time-lag, and that this would be an even function of the lag . This gives the more familiar forms for the autocorrelation function and the auto-covariance function: In particular, note that Normalization It is common practice in some disciplines (e.g. statistics and time series analysis) to normalize the autocovariance function to get a time-dependent Pearson correlation coefficient. However, in other disciplines (e.g. engineering) the normalization is usually dropped and the terms "autocorrelation" and "autocovariance" are used interchangeably. The definition of the autocorrelation coefficient of a stochastic process is If the function is well defined, its value must lie in the range , with 1 indicating perfect correlation and −1 indicating perfect anti-correlation. For a wide-sense stationary (WSS) process, the definition is . 
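As a numerical illustration of the normalized autocorrelation coefficient just defined (an addition, not from the original article), the sketch below estimates it from a simulated wide-sense stationary AR(1) process, for which the theoretical coefficient at lag k is phi**k:

```python
import numpy as np

def acf(x, max_lag):
    """Sample autocorrelation coefficients r(0)..r(max_lag): the autocovariance
    at each lag divided by the lag-0 autocovariance (the sample variance)."""
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[: n - k], x[k:]) / np.dot(x, x) for k in range(max_lag + 1)])

# Simulate a wide-sense stationary AR(1) process: x[t] = phi * x[t-1] + noise.
rng = np.random.default_rng(0)
phi, n = 0.7, 50_000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

print(np.round(acf(x, max_lag=4), 3))     # close to [1.0, 0.7, 0.49, 0.343, 0.24]
print(np.round(phi ** np.arange(5), 3))   # theoretical values phi**k
```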
The normalization is important both because the interpretation of the autocorrelation as a correlation provides a scale-free measure of the strength of statistical dependence, and because the normalization has an effect on the statistical properties of the estimated autocorrelations. Properties Symmetry property The fact that the autocorrelation function is an even function can be stated as respectively for a WSS process: Maximum at zero For a WSS process: Notice that is always real. Cauchy–Schwarz inequality The Cauchy–Schwarz inequality, inequality for stochastic processes: Autocorrelation of white noise The autocorrelation of a continuous-time white noise signal will have a strong peak (represented by a Dirac delta function) at and will be exactly for all other . Wiener–Khinchin theorem The Wiener–Khinchin theorem relates the autocorrelation function to the power spectral density via the Fourier transform: For real-valued functions, the symmetric autocorrelation function has a real symmetric transform, so the Wiener–Khinchin theorem can be re-expressed in terms of real cosines only: Autocorrelation of random vectors The (potentially time-dependent) autocorrelation matrix (also called second moment) of a (potentially time-dependent) random vector is an matrix containing as elements the autocorrelations of all pairs of elements of the random vector . The autocorrelation matrix is used in various digital signal processing algorithms. For a random vector containing random elements whose expected value and variance exist, the autocorrelation matrix is defined by where denotes the transposed matrix of dimensions . Written component-wise: If is a complex random vector, the autocorrelation matrix is instead defined by Here denotes Hermitian transpose. For example, if is a random vector, then is a matrix whose -th entry is . Properties of the autocorrelation matrix The autocorrelation matrix is a Hermitian matrix for complex random vectors and a symmetric matrix for real random vectors. The autocorrelation matrix is a positive semidefinite matrix, i.e. for a real random vector, and respectively in case of a complex random vector. All eigenvalues of the autocorrelation matrix are real and non-negative. The auto-covariance matrix is related to the autocorrelation matrix as follows:Respectively for complex random vectors: Autocorrelation of deterministic signals In signal processing, the above definition is often used without the normalization, that is, without subtracting the mean and dividing by the variance. When the autocorrelation function is normalized by mean and variance, it is sometimes referred to as the autocorrelation coefficient or autocovariance function. Autocorrelation of continuous-time signal Given a signal , the continuous autocorrelation is most often defined as the continuous cross-correlation integral of with itself, at lag . where represents the complex conjugate of . Note that the parameter in the integral is a dummy variable and is only necessary to calculate the integral. It has no specific meaning. Autocorrelation of discrete-time signal The discrete autocorrelation at lag for a discrete-time signal is The above definitions work for signals that are square integrable, or square summable, that is, of finite energy. Signals that "last forever" are treated instead as random processes, in which case different definitions are needed, based on expected values. 
For wide-sense-stationary random processes, the autocorrelations are defined as For processes that are not stationary, these will also be functions of , or . For processes that are also ergodic, the expectation can be replaced by the limit of a time average. The autocorrelation of an ergodic process is sometimes defined as or equated to These definitions have the advantage that they give sensible well-defined single-parameter results for periodic functions, even when those functions are not the output of stationary ergodic processes. Alternatively, signals that last forever can be treated by a short-time autocorrelation function analysis, using finite time integrals. (See short-time Fourier transform for a related process.) Definition for periodic signals If is a continuous periodic function of period , the integration from to is replaced by integration over any interval of length : which is equivalent to Properties In the following, we will describe properties of one-dimensional autocorrelations only, since most properties are easily transferred from the one-dimensional case to the multi-dimensional cases. These properties hold for wide-sense stationary processes. A fundamental property of the autocorrelation is symmetry, , which is easy to prove from the definition. In the continuous case, the autocorrelation is an even function when is a real function, and the autocorrelation is a Hermitian function when is a complex function. The continuous autocorrelation function reaches its peak at the origin, where it takes a real value, i.e. for any delay , . This is a consequence of the rearrangement inequality. The same result holds in the discrete case. The autocorrelation of a periodic function is, itself, periodic with the same period. The autocorrelation of the sum of two completely uncorrelated functions (the cross-correlation is zero for all ) is the sum of the autocorrelations of each function separately. Since autocorrelation is a specific type of cross-correlation, it maintains all the properties of cross-correlation. By using the symbol to represent convolution and is a function which manipulates the function and is defined as , the definition for may be written as: Multi-dimensional autocorrelation Multi-dimensional autocorrelation is defined similarly. For example, in three dimensions the autocorrelation of a square-summable discrete signal would be When mean values are subtracted from signals before computing an autocorrelation function, the resulting function is usually called an auto-covariance function. Efficient computation For data expressed as a discrete sequence, it is frequently necessary to compute the autocorrelation with high computational efficiency. A brute force method based on the signal processing definition can be used when the signal size is small. For example, to calculate the autocorrelation of the real signal sequence (i.e. , and for all other values of ) by hand, we first recognize that the definition just given is the same as the "usual" multiplication, but with right shifts, where each vertical addition gives the autocorrelation for particular lag values: Thus the required autocorrelation sequence is , where and the autocorrelation for other lag values being zero. In this calculation we do not perform the carry-over operation during addition as is usual in normal multiplication. Note that we can halve the number of operations required by exploiting the inherent symmetry of the autocorrelation. If the signal happens to be periodic, i.e. 
then we get a circular autocorrelation (similar to circular convolution) in which the left and right tails of the previous autocorrelation sequence overlap, giving a result which has the same period as the signal sequence. The procedure can be regarded as an application of the convolution property of the Z-transform of a discrete signal. While the brute force algorithm is of order n², several efficient algorithms exist which can compute the autocorrelation in order n log(n). For example, the Wiener–Khinchin theorem allows computing the autocorrelation from the raw data X(t) with two fast Fourier transforms (FFT): the power spectrum is obtained as FFT[X] · FFT[X]*, and the autocorrelation as the IFFT of that product, where IFFT denotes the inverse fast Fourier transform and the asterisk denotes the complex conjugate. Alternatively, a multiple correlation can be performed by using brute force calculation for low lag values, and then progressively binning the data with a logarithmic density to compute higher lag values, resulting in the same efficiency, but with lower memory requirements. Estimation For a discrete process with known mean and variance for which we observe observations , an estimate of the autocorrelation coefficient may be obtained as for any positive integer . When the true mean and variance are known, this estimate is unbiased. If the true mean and variance of the process are not known there are several possibilities: If and are replaced by the standard formulae for sample mean and sample variance, then this is a biased estimate. A periodogram-based estimate replaces in the above formula with . This estimate is always biased; however, it usually has a smaller mean squared error. Other possibilities derive from treating the two portions of data and separately and calculating separate sample means and/or sample variances for use in defining the estimate. The advantage of estimates of the last type is that the set of estimated autocorrelations, as a function of , then form a function which is a valid autocorrelation in the sense that it is possible to define a theoretical process having exactly that autocorrelation. Other estimates can suffer from the problem that, if they are used to calculate the variance of a linear combination of the 's, the variance calculated may turn out to be negative. Regression analysis In regression analysis using time series data, autocorrelation in a variable of interest is typically modeled either with an autoregressive model (AR), a moving average model (MA), their combination as an autoregressive-moving-average model (ARMA), or an extension of the latter called an autoregressive integrated moving average model (ARIMA). With multiple interrelated data series, vector autoregression (VAR) or its extensions are used. In ordinary least squares (OLS), the adequacy of a model specification can be checked in part by establishing whether there is autocorrelation of the regression residuals. Problematic autocorrelation of the errors, which themselves are unobserved, can generally be detected because it produces autocorrelation in the observable residuals. (Errors are also known as "error terms" in econometrics.) Autocorrelation of the errors violates the ordinary least squares assumption that the error terms are uncorrelated, meaning that the Gauss–Markov theorem does not apply, and that OLS estimators are no longer the Best Linear Unbiased Estimators (BLUE). While it does not bias the OLS coefficient estimates, the standard errors tend to be underestimated (and the t-scores overestimated) when the autocorrelations of the errors at low lags are positive.
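A short sketch of the detection idea described above: data are generated with AR(1) errors, an ordinary least squares line is fitted, and the autocorrelation of the unobserved errors shows up clearly in the observable residuals. All values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)

# The (unobserved) errors follow an AR(1) process, so they are autocorrelated.
e = np.empty(n)
e[0] = rng.normal()
for t in range(1, n):
    e[t] = 0.6 * e[t - 1] + rng.normal()
y = 2.0 + 1.5 * x + e

# Ordinary least squares fit with an intercept.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residuals = y - X @ beta

# The error autocorrelation is visible in the observable residuals.
def lag1_autocorrelation(u):
    u = u - u.mean()
    return np.dot(u[:-1], u[1:]) / np.dot(u, u)

print(f"lag-1 autocorrelation of residuals: {lag1_autocorrelation(residuals):.2f}")  # roughly 0.6
```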
The traditional test for the presence of first-order autocorrelation is the Durbin–Watson statistic or, if the explanatory variables include a lagged dependent variable, Durbin's h statistic. The Durbin-Watson can be linearly mapped however to the Pearson correlation between values and their lags. A more flexible test, covering autocorrelation of higher orders and applicable whether or not the regressors include lags of the dependent variable, is the Breusch–Godfrey test. This involves an auxiliary regression, wherein the residuals obtained from estimating the model of interest are regressed on (a) the original regressors and (b) k lags of the residuals, where 'k' is the order of the test. The simplest version of the test statistic from this auxiliary regression is TR2, where T is the sample size and R2 is the coefficient of determination. Under the null hypothesis of no autocorrelation, this statistic is asymptotically distributed as with k degrees of freedom. Responses to nonzero autocorrelation include generalized least squares and the Newey–West HAC estimator (Heteroskedasticity and Autocorrelation Consistent). In the estimation of a moving average model (MA), the autocorrelation function is used to determine the appropriate number of lagged error terms to be included. This is based on the fact that for an MA process of order q, we have , for , and , for . Applications Autocorrelation's ability to find repeating patterns in data yields many applications, including: Autocorrelation analysis is used heavily in fluorescence correlation spectroscopy to provide quantitative insight into molecular-level diffusion and chemical reactions. Another application of autocorrelation is the measurement of optical spectra and the measurement of very-short-duration light pulses produced by lasers, both using optical autocorrelators. Autocorrelation is used to analyze dynamic light scattering data, which notably enables determination of the particle size distributions of nanometer-sized particles or micelles suspended in a fluid. A laser shining into the mixture produces a speckle pattern that results from the motion of the particles. Autocorrelation of the signal can be analyzed in terms of the diffusion of the particles. From this, knowing the viscosity of the fluid, the sizes of the particles can be calculated. Utilized in the GPS system to correct for the propagation delay, or time shift, between the point of time at the transmission of the carrier signal at the satellites, and the point of time at the receiver on the ground. This is done by the receiver generating a replica signal of the 1,023-bit C/A (Coarse/Acquisition) code, and generating lines of code chips [-1,1] in packets of ten at a time, or 10,230 chips (1,023 × 10), shifting slightly as it goes along in order to accommodate for the doppler shift in the incoming satellite signal, until the receiver replica signal and the satellite signal codes match up. The small-angle X-ray scattering intensity of a nanostructured system is the Fourier transform of the spatial autocorrelation function of the electron density. In surface science and scanning probe microscopy, autocorrelation is used to establish a link between surface morphology and functional characteristics. In optics, normalized autocorrelations and cross-correlations give the degree of coherence of an electromagnetic field. In astronomy, autocorrelation can determine the frequency of pulsars. 
In music, autocorrelation (when applied at time scales smaller than a second) is used as a pitch detection algorithm for both instrument tuners and "Auto Tune" (used as a distortion effect or to fix intonation). When applied at time scales larger than a second, autocorrelation can identify the musical beat, for example to determine tempo. Autocorrelation in space rather than time, via the Patterson function, is used by X-ray diffractionists to help recover the "Fourier phase information" on atom positions not available through diffraction alone. In statistics, spatial autocorrelation between sample locations also helps one estimate mean value uncertainties when sampling a heterogeneous population. The SEQUEST algorithm for analyzing mass spectra makes use of autocorrelation in conjunction with cross-correlation to score the similarity of an observed spectrum to an idealized spectrum representing a peptide. In astrophysics, autocorrelation is used to study and characterize the spatial distribution of galaxies in the universe and in multi-wavelength observations of low mass X-ray binaries. In panel data, spatial autocorrelation refers to correlation of a variable with itself through space. In analysis of Markov chain Monte Carlo data, autocorrelation must be taken into account for correct error determination. In geosciences (specifically in geophysics) it can be used to compute an autocorrelation seismic attribute, out of a 3D seismic survey of the underground. In medical ultrasound imaging, autocorrelation is used to visualize blood flow. In intertemporal portfolio choice, the presence or absence of autocorrelation in an asset's rate of return can affect the optimal portion of the portfolio to hold in that asset. In numerical relays, autocorrelation has been used to accurately measure power system frequency. Serial dependence Serial dependence is closely linked to the notion of autocorrelation, but represents a distinct concept (see Correlation and dependence). In particular, it is possible to have serial dependence but no (linear) correlation. In some fields however, the two terms are used as synonyms. A time series of a random variable has serial dependence if the value at some time in the series is statistically dependent on the value at another time . A series is serially independent if there is no dependence between any pair. If a time series is stationary, then statistical dependence between the pair would imply that there is statistical dependence between all pairs of values at the same lag .
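Tying together the efficient FFT-based computation and the pitch-detection application mentioned above, here is an illustrative sketch (not a production tuner) that obtains the autocorrelation with two FFTs via the Wiener–Khinchin relation and picks its strongest peak in a plausible lag range as the period estimate:

```python
import numpy as np

def autocorrelation_fft(x):
    """Autocorrelation of a real signal via two FFTs (Wiener-Khinchin).
    Zero-padding to length 2n avoids the circular wrap-around."""
    n = len(x)
    x = x - x.mean()
    spectrum = np.fft.rfft(x, n=2 * n)
    power = spectrum * np.conj(spectrum)       # power spectral density
    r = np.fft.irfft(power)[:n]                # inverse transform, keep lags 0..n-1
    return r / r[0]                            # normalize so r[0] = 1

# Illustrative pitch detection: a 220 Hz tone sampled at 8 kHz, plus noise.
fs, f0 = 8000, 220
t = np.arange(4096) / fs
signal = np.sin(2 * np.pi * f0 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)

r = autocorrelation_fft(signal)
min_lag = fs // 1000                           # ignore lags corresponding to > 1000 Hz
peak_lag = min_lag + np.argmax(r[min_lag : fs // 50])   # search down to 50 Hz
print(f"estimated pitch ~ {fs / peak_lag:.1f} Hz")      # close to 220 Hz
```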
Mathematics
Statistics
null
2753
https://en.wikipedia.org/wiki/AutoCAD
AutoCAD
AutoCAD is a 2D and 3D computer-aided design (CAD) software application developed by Autodesk. It was first released in December 1982 for the CP/M and IBM PC platforms as a desktop app running on microcomputers with internal graphics controllers. Initially a DOS application, subsequent versions were later released for other platforms including Classic Mac OS (1992), Microsoft Windows (1993) and macOS (2010), iOS (2010), and Android (2011). AutoCAD is a general drafting and design application used in industry by architects, project managers, engineers, interior designers, graphic designers, city planners, and other professionals to prepare technical drawings. After discontinuing the sale of perpetual licenses in January 2016, commercial versions of AutoCAD are licensed through a term-based subscription or Autodesk Flex, a pay-as-you-go option introduced on September 24, 2021. Subscriptions to the desktop version of AutoCAD include access to the web and mobile applications. However, users can subscribe separately to the AutoCAD Web App online or AutoCAD Mobile through an in-app purchase. History Before AutoCAD was introduced, most CAD programs ran on mainframe computers or minicomputers, with each CAD operator (user) working at a separate graphics terminal. Origins AutoCAD was derived from a program that began in 1977, and then released in 1979 called Interact CAD, also referred to in early Autodesk documents as MicroCAD, which was written prior to Autodesk's (then Marinchip Software Partners) formation by Autodesk cofounder Michael Riddle. The first version by Autodesk was demonstrated at the 1982 Comdex and released that December. AutoCAD supported CP/M-80 computers. As Autodesk's flagship product, by March 1986 AutoCAD had become the most ubiquitous CAD program worldwide. The first UNIX version was Release 10 for Xenix in October 1989, while the first version for Windows was Release 12, released in February 1993. Features Compatibility with other software Many software applications such as Autodesk Civil 3D and ESRI ArcMap 10 permits export as AutoCAD drawing files. Third-party file converters exist for specific formats such as Bentley MX GENIO Extension, PISTE Extension (France), ISYBAU (Germany), OKSTRA and Microdrainage (UK); also, conversion of .pdf files is feasible, however, the accuracy of the results may be unpredictable or distorted. For example, jagged edges may appear. Several vendors provide online conversions for free such as Cometdocs. Language AutoCAD and AutoCAD LT are available for English, German, French, Italian, Spanish, Japanese, Korean, Chinese Simplified, Chinese Traditional, Brazilian Portuguese, Russian, Czech, Polish and Hungarian (also through additional language packs). The extent of localization varies from full translation of the product to documentation only. The AutoCAD command set is localized as a part of the software localization. Extensions AutoCAD supports a number of APIs for customization and automation. These include AutoLISP, Visual LISP, VBA, .NET, JavaScript, and ObjectARX. ObjectARX is a C++ class library, which was also the base for: products extending AutoCAD functionality to specific fields creating products such as AutoCAD Architecture, AutoCAD Electrical, AutoCAD Civil 3D third-party AutoCAD-based application There are a large number of AutoCAD plugins (add-on applications) available on the application store Autodesk Exchange Apps. AutoCAD's DXF, drawing exchange format, allows importing and exporting drawing information. 
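Because DXF is a plain-text format built from (group code, value) pairs, a tiny file can be written by hand; the sketch below emits a single LINE entity using classic group codes (0 = entity type, 8 = layer, 10/20 and 11/21 = start and end coordinates). This minimal, entities-only file is an illustration of the format rather than Autodesk's API, some readers may expect additional header sections, and real projects would normally use a DXF library (for example ezdxf) instead:

```python
def minimal_dxf_line(x1, y1, x2, y2, layer="0"):
    """Return a minimal ASCII DXF containing one LINE entity.
    DXF is a sequence of (group code, value) pairs, one per line."""
    pairs = [
        (0, "SECTION"), (2, "ENTITIES"),   # open the ENTITIES section
        (0, "LINE"), (8, layer),           # a LINE entity on the given layer
        (10, x1), (20, y1),                # start point (x, y)
        (11, x2), (21, y2),                # end point (x, y)
        (0, "ENDSEC"), (0, "EOF"),         # close the section, end of file
    ]
    return "\n".join(f"{code}\n{value}" for code, value in pairs) + "\n"

with open("line.dxf", "w") as f:
    f.write(minimal_dxf_line(0.0, 0.0, 100.0, 50.0))
```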
Vertical integration Autodesk has also developed a few vertical programs for discipline-specific enhancements such as: Advance Steel AutoCAD Architecture AutoCAD Electrical AutoCAD Map 3D AutoCAD Mechanical AutoCAD MEP AutoCAD Plant 3D Autodesk Civil 3D Since AutoCAD 2019 several verticals are included with AutoCAD subscription as Industry-Specific Toolset. For example, AutoCAD Architecture (formerly Architectural Desktop) permits architectural designers to draw 3D objects, such as walls, doors, and windows, with more intelligent data associated with them rather than simple objects, such as lines and circles. The data can be programmed to represent specific architectural products sold in the construction industry, or extracted into a data file for pricing, materials estimation, and other values related to the objects represented. Additional tools generate standard 2D drawings, such as elevations and sections, from a 3D architectural model. Similarly, Civil Design, Civil Design 3D, and Civil Design Professional support data-specific objects facilitating easy standard civil engineering calculations and representations. Softdesk Civil was developed as an AutoCAD add-on by a company in New Hampshire called Softdesk (originally DCA). Softdesk was acquired by Autodesk, and Civil became Land Development Desktop (LDD), later renamed Land Desktop. Civil 3D was later developed and Land Desktop was retired. Platforms File formats AutoCAD's native file formats are denoted either by a .dwg, .dwt, .dws, or .dxf filename extension. .dwg and, to a lesser extent, .dxf, have become de facto, if proprietary, standards for CAD data interoperability, particularly for 2D drawing exchange. The primary file format for 2D and 3D drawing files created with AutoCAD is .dwg. While other third-party CAD software applications can create .dwg files, AutoCAD uniquely creates RealDWG files. The drawing version code changes between AutoCAD releases. Using AutoCAD, any .dwg file may be saved to a derivative format. These derivative formats include: Drawing Template Files .dwt: New .dwg are created from a .dwt file. Although the default template file is acad.dwt for AutoCAD and acadlt.dwt for AutoCAD LT, custom .dwt files may be created to include foundational configurations such as drawing units and layers. Drawing Standards File .dws: Using the CAD Standards feature of AutoCAD, a Drawing Standards File may be associated to any .dwg or .dwt file to enforce graphical standards. Drawing Interchange Format .dxf: The .dxf format is an ASCII representation of a .dwg file, and is used to transfer data between various applications. Variants AutoCAD LT AutoCAD LT is the lower-cost version of AutoCAD, with reduced capabilities, first released in November 1993. Autodesk developed AutoCAD LT to have an entry-level CAD package to compete in the lower price level. Priced at $495, it became the first AutoCAD product priced below $1000. It was sold directly by Autodesk and in computer stores unlike the full version of AutoCAD, which must be purchased from official Autodesk dealers. AutoCAD LT 2015 introduced Desktop Subscription service from $360 per year; as of 2018, three subscription plans were available, from $50 a month to a 3-year, $1170 license. Since AutoCAD LT 2024, AutoCAD LT support LISP customization. 
While there are hundreds of small differences between the full AutoCAD package and AutoCAD LT, there are a few recognized major differences in the software's features: 3D capabilities: AutoCAD LT lacks the ability to create, visualize and render 3D models as well as 3D printing. Network licensing: AutoCAD LT cannot be used on multiple machines over a network. Customization: AutoCAD LT does not support customization with LISP, ARX, .NET and VBA (Feature introduced with release 2024) Management and automation capabilities with Sheet Set Manager and Action Recorder. CAD standards management tools. AutoCAD Mobile and AutoCAD Web AutoCAD Mobile and AutoCAD Web (formerly AutoCAD WS and AutoCAD 360) is an account-based mobile and web application enabling registered users to view, edit, and share AutoCAD files via mobile device and web using a limited AutoCAD feature set — and using cloud-stored drawing files. The program, which is an evolution and combination of previous products, uses a freemium business model with a free plan and two paid levels, including various amounts of storage, tools, and online access to drawings. 360 includes new features such as a "Smart Pen" mode and linking to third-party cloud-based storage such as Dropbox. Having evolved from Flash-based software, AutoCAD Web uses HTML5 browser technology available in newer browsers including Firefox and Google Chrome. AutoCAD WS began with a version for the iPhone and subsequently expanded to include versions for the iPod Touch, iPad, Android phones, and Android tablets. Autodesk released the iOS version in September 2010, following with the Android version on April 20, 2011. The program is available via download at no cost from the App Store (iOS), Google Play (Android) and Amazon Appstore (Android). In its initial iOS version, AutoCAD WS supported drawing of lines, circles, and other shapes; creation of text and comment boxes; and management of color, layer, and measurements — in both landscape and portrait modes. Version 1.3, released August 17, 2011, added support for unit typing, layer visibility, area measurement and file management. The Android variant includes the iOS feature set along with such unique features as the ability to insert text or captions by voice command as well as manually. Both Android and iOS versions allow the user to save files on-line — or off-line in the absence of an Internet connection. In 2011, Autodesk announced plans to migrate the majority of its software to "the cloud", starting with the AutoCAD WS mobile application. According to a 2013 interview with Ilai Rotbaein, an AutoCAD WS product manager for Autodesk, the name AutoCAD WS had no definitive meaning, and was interpreted variously as Autodesk Web Service, White Sheet or Work Space. In 2013, AutoCAD WS was renamed to AutoCAD 360. Later, it was renamed to AutoCAD Web App. Student versions AutoCAD is licensed, for free, to students, educators, and educational institutions, with a 12-month renewable license available. Licenses acquired before March 25, 2020, were a 36-month license, with its last renovation on March 24, 2020. The student version of AutoCAD is functionally identical to the full commercial version, with one exception: DWG files created or edited by a student version have an internal bit-flag set (the "educational flag"). When such a DWG file is printed by any version of AutoCAD (commercial or student) older than AutoCAD 2014 SP1 or AutoCAD 2019 and newer, the output includes a plot stamp/banner on all four sides. 
Objects created in the Student Version cannot be used commercially. Student Version objects "infect" a commercial-version DWG file if they are imported in versions older than AutoCAD 2015 or newer than AutoCAD 2018. Version history
Technology
Science and Engineering
null
2756
https://en.wikipedia.org/wiki/Asexual%20reproduction
Asexual reproduction
Asexual reproduction is a type of reproduction that does not involve the fusion of gametes or change in the number of chromosomes. The offspring that arise by asexual reproduction from either unicellular or multicellular organisms inherit the full set of genes of their single parent and thus the newly created individual is genetically and physically similar to the parent or an exact clone of the parent. Asexual reproduction is the primary form of reproduction for single-celled organisms such as archaea and bacteria. Many eukaryotic organisms including plants, animals, and fungi can also reproduce asexually. In vertebrates, the most common form of asexual reproduction is parthenogenesis, which is typically used as an alternative to sexual reproduction in times when reproductive opportunities are limited. Some monitor lizards, including Komodo dragons, can reproduce asexually. While all prokaryotes reproduce without the formation and fusion of gametes, mechanisms for lateral gene transfer such as conjugation, transformation and transduction can be likened to sexual reproduction in the sense of genetic recombination in meiosis. Types of asexual reproduction Fission Prokaryotes (Archaea and Bacteria) reproduce asexually through binary fission, in which the parent organism divides in two to produce two genetically identical daughter organisms. Eukaryotes (such as protists and unicellular fungi) may reproduce in a functionally similar manner by mitosis; most of these are also capable of sexual reproduction. Multiple fission at the cellular level occurs in many protists, e.g. sporozoans and algae. The nucleus of the parent cell divides several times by mitosis, producing several nuclei. The cytoplasm then separates, creating multiple daughter cells. In apicomplexans, multiple fission, or schizogony appears either as merogony, sporogony or gametogony. Merogony results in merozoites, which are multiple daughter cells, that originate within the same cell membrane, sporogony results in sporozoites, and gametogony results in microgametes. Budding Some cells divide by budding (for example baker's yeast), resulting in a "mother" and a "daughter" cell that is initially smaller than the parent. Budding is also known on a multicellular level; an animal example is the hydra, which reproduces by budding. The buds grow into fully matured individuals which eventually break away from the parent organism. Internal budding is a process of asexual reproduction, favoured by parasites such as Toxoplasma gondii. It involves an unusual process in which two (endodyogeny) or more (endopolygeny) daughter cells are produced inside a mother cell, which is then consumed by the offspring prior to their separation. Also, budding (external or internal) occurs in some worms like Taenia or Echinococcus; these worms produce cysts and then produce (invaginated or evaginated) protoscolex with budding. Vegetative propagation Vegetative propagation is a type of asexual reproduction found in plants where new individuals are formed without the production of seeds or spores and thus without syngamy or meiosis. Examples of vegetative reproduction include the formation of miniaturized plants called plantlets on specialized leaves, for example in kalanchoe (Bryophyllum daigremontianum) and many produce new plants from rhizomes or stolon (for example in strawberry). Some plants reproduce by forming bulbs or tubers, for example tulip bulbs and Dahlia tubers. 
In these examples, all the individuals are clones, and the clonal population may cover a large area. Spore formation Many multicellular organisms produce spores during their biological life cycle in a process called sporogenesis. Exceptions are animals and some protists, which undergo meiosis immediately followed by fertilization. Plants and many algae on the other hand undergo sporic meiosis where meiosis leads to the formation of haploid spores rather than gametes. These spores grow into multicellular individuals called gametophytes, without a fertilization event. These haploid individuals produce gametes through mitosis. Meiosis and gamete formation therefore occur in separate multicellular generations or "phases" of the life cycle, referred to as alternation of generations. Since sexual reproduction is often more narrowly defined as the fusion of gametes (fertilization), spore formation in plant sporophytes and algae might be considered a form of asexual reproduction (agamogenesis) despite being the result of meiosis and undergoing a reduction in ploidy. However, both events (spore formation and fertilization) are necessary to complete sexual reproduction in the plant life cycle. Fungi and some algae can also utilize true asexual spore formation, which involves mitosis giving rise to reproductive cells called mitospores that develop into a new organism after dispersal. This method of reproduction is found for example in conidial fungi and the red algae Polysiphonia, and involves sporogenesis without meiosis. Thus the chromosome number of the spore cell is the same as that of the parent producing the spores. However, mitotic sporogenesis is an exception and most spores, such as those of plants and many algae, are produced by meiosis. Fragmentation Fragmentation is a form of asexual reproduction where a new organism grows from a fragment of the parent. Each fragment develops into a mature, fully grown individual. Fragmentation is seen in many organisms. Animals that reproduce asexually include planarians, many annelid worms including polychaetes and some oligochaetes, turbellarians and sea stars. Many fungi and plants reproduce asexually. Some plants have specialized structures for reproduction via fragmentation, such as gemmae in mosses and liverworts. Most lichens, which are a symbiotic union of a fungus and photosynthetic algae or cyanobacteria, reproduce through fragmentation to ensure that new individuals contain both symbionts. These fragments can take the form of soredia, dust-like particles consisting of fungal hyphae wrapped around photobiont cells. Clonal Fragmentation in multicellular or colonial organisms is a form of asexual reproduction or cloning where an organism is split into fragments. Each of these fragments develop into mature, fully grown individuals that are clones of the original organism. In echinoderms, this method of reproduction is usually known as fissiparity. Due to many environmental and epigenetic differences, clones originating from the same ancestor might actually be genetically and epigenetically different. Agamogenesis Agamogenesis is any form of reproduction that does not involve a male gamete. Examples are parthenogenesis and apomixis. Parthenogenesis Parthenogenesis is a form of agamogenesis in which an unfertilized egg develops into a new individual. It has been documented in over 2,000 species. Parthenogenesis occurs in the wild in many invertebrates (e.g. 
water fleas, rotifers, aphids, stick insects, some ants, bees and parasitic wasps) and vertebrates (mostly reptiles, amphibians, and fish). It has also been documented in domestic birds and in genetically altered lab mice. Plants can engage in parthenogenesis as well through a process called apomixis. However this process is considered by many to not be an independent reproduction method, but instead a breakdown of the mechanisms behind sexual reproduction. Parthenogenetic organisms can be split into two main categories: facultative and obligate. Facultative parthenogenesis In facultative parthenogenesis, females can reproduce both sexually and asexually. Because of the many advantages of sexual reproduction, most facultative parthenotes only reproduce asexually when forced to. This typically occurs in instances when finding a mate becomes difficult. For example, female zebra sharks will reproduce asexually if they are unable to find a mate in their ocean habitats. Parthenogenesis was previously believed to rarely occur in vertebrates, and only be possible in very small animals. However, it has been discovered in many more species in recent years. Today, the largest species that has been documented reproducing parthenogenically is the Komodo dragon at 10 feet long and over 300 pounds. Heterogony is a form of facultative parthenogenesis where females alternate between sexual and asexual reproduction at regular intervals (see Alternation between sexual and asexual reproduction). Aphids are one group of organism that engages in this type of reproduction. They use asexual reproduction to reproduce quickly and create winged offspring that can colonize new plants and reproduce sexually in the fall to lay eggs for the next season. However, some aphid species are obligate parthenotes. Obligate parthenogenesis In obligate parthenogenesis, females only reproduce asexually. One example of this is the desert grassland whiptail lizard, a hybrid of two other species. Typically hybrids are infertile but through parthenogenesis this species has been able to develop stable populations. Gynogenesis is a form of obligate parthenogenesis where a sperm cell is used to initiate reproduction. However, the sperm's genes never get incorporated into the egg cell. The best known example of this is the Amazon molly. Because they are obligate parthenotes, there are no males in their species so they depend on males from a closely related species (the Sailfin molly) for sperm. Apomixis and nucellar embryony Apomixis in plants is the formation of a new sporophyte without fertilization. It is important in ferns and in flowering plants, but is very rare in other seed plants. In flowering plants, the term "apomixis" is now most often used for agamospermy, the formation of seeds without fertilization, but was once used to include vegetative reproduction. An example of an apomictic plant would be the triploid European dandelion. Apomixis mainly occurs in two forms: In gametophytic apomixis, the embryo arises from an unfertilized egg within a diploid embryo sac that was formed without completing meiosis. In nucellar embryony, the embryo is formed from the diploid nucellus tissue surrounding the embryo sac. Nucellar embryony occurs in some citrus seeds. Male apomixis can occur in rare cases, such as in the Saharan Cypress Cupressus dupreziana, where the genetic material of the embryo is derived entirely from pollen. Androgenesis Androgenesis occurs when a zygote is produced with only paternal nuclear genes. 
During standard sexual reproduction, one female and one male parent each produce haploid gametes (such as a sperm or egg cell, each containing only a single set of chromosomes), which recombine to create offspring with genetic material from both parents. However, in androgenesis, there is no recombination of maternal and paternal chromosomes, and only the paternal chromosomes are passed down to the offspring (the inverse of this is gynogenesis, where only the maternal chromosomes are inherited, which is more common than androgenesis). The offspring produced in androgenesis will still have maternally inherited mitochondria, as is the case with most sexually reproducing species. Androgenesis occurs in nature in many invertebrates (for example, clams, stick insects, some ants, bees, flies and parasitic wasps) and vertebrates (mainly amphibians and fish). Androgenesis has also been seen in genetically modified laboratory mice. One of two things can occur to produce offspring with exclusively paternal genetic material: the maternal nuclear genome can be eliminated from the zygote, or the female can produce an egg with no nucleus, resulting in an embryo developing with only the genome of the male gamete. Male apomixis Another type of androgenesis is male apomixis, or paternal apomixis, a reproductive process in which a plant develops from a sperm cell (male gamete) without the participation of a female cell (ovum). In this process, the zygote is formed solely with genetic material from the father, resulting in offspring genetically identical to the male organism. This has been noted in many plants such as Nicotiana, Capsicum frutescens, Cicer arietinum, Poa arachnifera, Solanum verrucosum, Phaeophyceae, Tripsacum dactyloides and Zea mays, and occurs as the regular reproductive method in Cupressus dupreziana. This contrasts with the more common apomixis, where development occurs without fertilization, but with genetic material only from the mother. There are also clonal species that reproduce through vegetative reproduction, like Lomatia tasmanica and Pando, where the genetic material is exclusively male. Other species where androgenesis has been observed naturally are the stick insects Bacillus rossius and Bacillus grandii, the little fire ant Wasmannia auropunctata, Vollenhovia emeryi, Paratrechina longicornis, occasionally Apis mellifera, the Hypseleotris carp gudgeons, the parasitoid Venturia canescens, and occasionally fruit flies (Drosophila melanogaster) carrying a specific mutant allele. It has also been induced in many crops and fish via irradiation of an egg cell to destroy the maternal nuclear genome. Obligate androgenesis Obligate androgenesis is the process in which males are capable of producing both eggs and sperm; however, the eggs make no genetic contribution and the offspring come only from the sperm, which allows these individuals to self-fertilize and produce clonal offspring without the need for females. They are also capable of interbreeding with sexual and other androgenetic lineages in a phenomenon known as "egg parasitism." This method of reproduction has been found in several species of the clam genus Corbicula, in plants such as Cupressus dupreziana, Lomatia tasmanica and Pando, and recently in the fish Squalius alburnoides. 
Alternation between sexual and asexual reproduction Some species can alternate between sexual and asexual strategies, an ability known as heterogamy, depending on many conditions. Alternation is observed in several rotifer species (cyclical parthenogenesis, e.g. in Brachionus species) and a few types of insects. One example of this is aphids, which can engage in heterogony. In this system, females are born pregnant and produce only female offspring. This cycle allows them to reproduce very quickly. However, most species reproduce sexually once a year. This switch is triggered by environmental changes in the fall and causes females to develop eggs instead of embryos. This dynamic reproductive cycle allows them to produce specialized offspring with polyphenism, a type of polymorphism where different phenotypes have evolved to carry out specific tasks. The cape bee Apis mellifera subsp. capensis can reproduce asexually through a process called thelytoky. The freshwater crustacean Daphnia reproduces by parthenogenesis in the spring to rapidly populate ponds, then switches to sexual reproduction as the intensity of competition and predation increases. Monogonont rotifers of the genus Brachionus reproduce via cyclical parthenogenesis: at low population densities females reproduce asexually, and at higher densities a chemical cue accumulates and induces the transition to sexual reproduction. Many protists and fungi alternate between sexual and asexual reproduction. A few species of amphibians, reptiles, and birds have a similar ability. The slime mold Dictyostelium undergoes binary fission (mitosis) as single-celled amoebae under favorable conditions. However, when conditions turn unfavorable, the cells aggregate and follow one of two different developmental pathways, depending on conditions. In the social pathway, they form a multi-cellular slug which then forms a fruiting body with asexually generated spores. In the sexual pathway, two cells fuse to form a giant cell that develops into a large cyst. When this macrocyst germinates, it releases hundreds of amoebic cells that are the product of meiotic recombination between the original two cells. The hyphae of the common mold (Rhizopus) are capable of producing both mitotic as well as meiotic spores. Many algae similarly switch between sexual and asexual reproduction. A number of plants use both sexual and asexual means to produce new plants; some species alter their primary mode of reproduction from sexual to asexual under varying environmental conditions. Inheritance in asexual species In the rotifer Brachionus calyciflorus, asexual reproduction (obligate parthenogenesis) can be inherited by a recessive allele, which leads to loss of sexual reproduction in homozygous offspring. Inheritance of asexual reproduction by a single recessive locus has also been found in the parasitoid wasp Lysiphlebus fabarum. Examples in animals Asexual reproduction is found in nearly half of the animal phyla. 
Parthenogenesis occurs in the hammerhead shark and the blacktip shark. In both cases, the sharks had reached sexual maturity in captivity in the absence of males, and in both cases the offspring were shown to be genetically identical to the mothers. The New Mexico whiptail is another example. Some reptiles use the ZW sex-determination system, which produces either males (with ZZ sex chromosomes) or females (with ZW or WW sex chromosomes). Until 2010, it was thought that the ZW chromosome system used by reptiles was incapable of producing viable WW offspring, but a (ZW) female boa constrictor was discovered to have produced viable female offspring with WW chromosomes. The female boa could have chosen any number of male partners (and had successfully in the past) but on this occasion she reproduced asexually, creating 22 female babies with WW sex-chromosomes. Polyembryony is a widespread form of asexual reproduction in animals, whereby the fertilized egg or a later stage of embryonic development splits to form genetically identical clones. Within animals, this phenomenon has been best studied in the parasitic Hymenoptera. In the nine-banded armadillos, this process is obligatory and usually gives rise to genetically identical quadruplets. In other mammals, monozygotic twinning has no apparent genetic basis, though its occurrence is common. There are at least 10 million identical human twins and triplets in the world today. Bdelloid rotifers reproduce exclusively asexually, and all individuals in the class Bdelloidea are females. Asexuality evolved in these animals millions of years ago and has persisted since. There is evidence to suggest that asexual reproduction has allowed the animals to evolve new proteins through the Meselson effect that have allowed them to survive better in periods of dehydration. Bdelloid rotifers are extraordinarily resistant to damage from ionizing radiation due to the same DNA-preserving adaptations used to survive dormancy. These adaptations include an extremely efficient mechanism for repairing DNA double-strand breaks. This repair mechanism was studied in two Bdelloidea species, Adineta vaga, and Philodina roseola. and appears to involve mitotic recombination between homologous DNA regions within each species. Molecular evidence strongly suggests that several species of the stick insect genus Timema have used only asexual (parthenogenetic) reproduction for millions of years, the longest period known for any insect. Similar findings suggest that the mite species Oppiella nova may have reproduced entirely asexually for millions of years. In the grass thrips genus Aptinothrips there have been several transitions to asexuality, likely due to different causes. Adaptive significance of asexual reproduction A complete lack of sexual reproduction is relatively rare among multicellular organisms, particularly animals. It is not entirely understood why the ability to reproduce sexually is so common among them. Current hypotheses suggest that asexual reproduction may have short term benefits when rapid population growth is important or in stable environments, while sexual reproduction offers a net advantage by allowing more rapid generation of genetic diversity, allowing adaptation to changing environments. Developmental constraints may underlie why few animals have relinquished sexual reproduction completely in their life-cycles. Almost all asexual modes of reproduction maintain meiosis either in a modified form or as an alternative pathway. 
Facultatively apomictic plants increase frequencies of sexuality relative to apomixis after abiotic stress. Another constraint on switching from sexual to asexual reproduction would be the concomitant loss of meiosis and the protective recombinational repair of DNA damage afforded as one function of meiosis.
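The short-term growth advantage mentioned above is often illustrated with the "twofold cost of males": an all-female clonal lineage turns every individual into an egg-layer, while a sexual population spends half its reproductive output on males. The sketch below is only a toy illustration of that intuition; the per-female fecundity, starting size and generation count are arbitrary assumptions, not figures from this article.

```python
# Toy comparison of clonal (asexual) vs. sexual population growth.
# Assumes discrete generations, equal fecundity per female and no other
# differences; all numbers are illustrative assumptions.

FECUNDITY = 2          # offspring per female per generation (assumed)
GENERATIONS = 10

def next_generation(population, asexual):
    """Return the population size after one generation."""
    females = population if asexual else population / 2  # sexual: half are males
    return females * FECUNDITY

asexual_pop = sexual_pop = 100.0
for gen in range(1, GENERATIONS + 1):
    asexual_pop = next_generation(asexual_pop, asexual=True)
    sexual_pop = next_generation(sexual_pop, asexual=False)
    print(f"gen {gen:2d}: asexual {asexual_pop:10.0f}   sexual {sexual_pop:6.0f}")
```

With these assumptions the clonal lineage doubles every generation while the sexual population only replaces itself, which is the short-term benefit the hypothesis refers to; it says nothing about the long-term advantages of genetic diversity discussed above.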
Biology and health sciences
Biological reproduction
null
2761
https://en.wikipedia.org/wiki/Alkene
Alkene
In organic chemistry, an alkene, or olefin, is a hydrocarbon containing a carbon–carbon double bond. The double bond may be internal or in the terminal position. Terminal alkenes are also known as α-olefins. The International Union of Pure and Applied Chemistry (IUPAC) recommends using the name "alkene" only for acyclic hydrocarbons with just one double bond; alkadiene, alkatriene, etc., or polyene for acyclic hydrocarbons with two or more double bonds; cycloalkene, cycloalkadiene, etc. for cyclic ones; and "olefin" for the general class – cyclic or acyclic, with one or more double bonds. Acyclic alkenes, with only one double bond and no other functional groups (also known as mono-enes) form a homologous series of hydrocarbons with the general formula with n being a >1 natural number (which is two hydrogens less than the corresponding alkane). When n is four or more, isomers are possible, distinguished by the position and conformation of the double bond. Alkenes are generally colorless non-polar compounds, somewhat similar to alkanes but more reactive. The first few members of the series are gases or liquids at room temperature. The simplest alkene, ethylene () (or "ethene" in the IUPAC nomenclature) is the organic compound produced on the largest scale industrially. Aromatic compounds are often drawn as cyclic alkenes, however their structure and properties are sufficiently distinct that they are not classified as alkenes or olefins. Hydrocarbons with two overlapping double bonds () are called allenes—the simplest such compound is itself called allene—and those with three or more overlapping bonds (, , etc.) are called cumulenes. Structural isomerism Alkenes having four or more carbon atoms can form diverse structural isomers. Most alkenes are also isomers of cycloalkanes. Acyclic alkene structural isomers with only one double bond follow: : ethylene only : propylene only : 3 isomers: 1-butene, 2-butene, and isobutylene : 5 isomers: 1-pentene, 2-pentene, 2-methyl-1-butene, 3-methyl-1-butene, 2-methyl-2-butene : 13 isomers: 1-hexene, 2-hexene, 3-hexene, 2-methyl-1-pentene, 3-methyl-1-pentene, 4-methyl-1-pentene, 2-methyl-2-pentene, 3-methyl-2-pentene, 4-methyl-2-pentene, 2,3-dimethyl-1-butene, 3,3-dimethyl-1-butene, 2,3-dimethyl-2-butene, 2-ethyl-1-butene Many of these molecules exhibit cis–trans isomerism. There may also be chiral carbon atoms particularly within the larger molecules (from ). The number of potential isomers increases rapidly with additional carbon atoms. Structure and bonding Bonding A carbon–carbon double bond consists of a sigma bond and a pi bond. This double bond is stronger than a single covalent bond (611 kJ/mol for C=C vs. 347 kJ/mol for C–C), but not twice as strong. Double bonds are shorter than single bonds with an average bond length of 1.33 Å (133 pm) vs 1.53 Å for a typical C-C single bond. Each carbon atom of the double bond uses its three sp2 hybrid orbitals to form sigma bonds to three atoms (the other carbon atom and two hydrogen atoms). The unhybridized 2p atomic orbitals, which lie perpendicular to the plane created by the axes of the three sp2 hybrid orbitals, combine to form the pi bond. This bond lies outside the main C–C axis, with half of the bond on one side of the molecule and a half on the other. With a strength of 65 kcal/mol, the pi bond is significantly weaker than the sigma bond. Rotation about the carbon–carbon double bond is restricted because it incurs an energetic cost to break the alignment of the p orbitals on the two carbon atoms. 
Consequently, cis or trans isomers interconvert so slowly that they can be freely handled at ambient conditions without isomerization. More complex alkenes may be named with the E–Z notation for molecules with three or four different substituents (side groups). For example, of the isomers of butene, the two methyl groups of (Z)-but-2-ene (a.k.a. cis-2-butene) appear on the same side of the double bond, and in (E)-but-2-ene (a.k.a. trans-2-butene) the methyl groups appear on opposite sides. These two isomers of butene have distinct properties. Shape As predicted by the VSEPR model of electron pair repulsion, the molecular geometry of alkenes includes bond angles about each carbon atom in a double bond of about 120°. The angle may vary because of steric strain introduced by nonbonded interactions between functional groups attached to the carbon atoms of the double bond. For example, the C–C–C bond angle in propylene is 123.9°. For bridged alkenes, Bredt's rule states that a double bond cannot occur at the bridgehead of a bridged ring system unless the rings are large enough. Following Fawcett and defining S as the total number of non-bridgehead atoms in the rings, bicyclic systems require S ≥ 7 for stability and tricyclic systems require S ≥ 11. Isomerism In organic chemistry, the prefixes cis- and trans- are used to describe the positions of functional groups attached to carbon atoms joined by a double bond. In Latin, cis and trans mean "on this side of" and "on the other side of" respectively. Therefore, if the functional groups are both on the same side of the carbon chain, the bond is said to have the cis- configuration; otherwise (i.e. if the functional groups are on opposite sides of the carbon chain), the bond is said to have the trans- configuration. For cis- and trans- configurations to exist, there must be a carbon chain continuing on each side of the double bond, or at least one functional group attached to each carbon must be the same for both. The E- and Z- configurations can be used instead in the more general case where all four functional groups attached to the carbon atoms of the double bond are different. E- and Z- are abbreviations of the German words entgegen (opposite) and zusammen (together), respectively. In E- and Z-isomerism, each functional group is assigned a priority based on the Cahn–Ingold–Prelog priority rules. If the two groups with higher priority are on the same side of the double bond, the bond is assigned the Z- configuration; otherwise (i.e. if the two groups with higher priority are on opposite sides of the double bond), the bond is assigned the E- configuration. Cis- and trans- configurations do not have a fixed relationship with E- and Z- configurations. Physical properties Many of the physical properties of alkenes and alkanes are similar: they are colorless, nonpolar, and combustible. The physical state depends on molecular mass: like the corresponding saturated hydrocarbons, the simplest alkenes (ethylene, propylene, and butene) are gases at room temperature. Linear alkenes of approximately five to sixteen carbon atoms are liquids, and higher alkenes are waxy solids. The melting point of the solids also increases with increasing molecular mass. Alkenes generally have stronger smells than their corresponding alkanes. Ethylene has a sweet and musty odor. Strained alkenes in particular, like norbornene and trans-cyclooctene, are known to have strong, unpleasant odors, a fact consistent with the stronger π complexes they form with metal ions including copper. 
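Returning to the E–Z assignment described under Isomerism above, the logic can be sketched in a deliberately simplified form: here each substituent is ranked only by the atomic number of its first atom, whereas the full Cahn–Ingold–Prelog rules explore further atoms when there is a tie. The atomic numbers and the butene example are ordinary chemical facts rather than data from this article, and the function names are invented for the sketch.

```python
# Simplified E/Z assignment: rank the two substituents on each sp2 carbon by
# the atomic number of the atom attached to it. Real CIP rules go much deeper.

ATOMIC_NUMBER = {"H": 1, "C": 6, "N": 7, "O": 8, "Cl": 17, "Br": 35}

def priority(substituent):
    """Atomic number of the first atom of the substituent (crude CIP proxy)."""
    first = substituent[:2] if substituent[:2] in ATOMIC_NUMBER else substituent[0]
    return ATOMIC_NUMBER[first]

def e_or_z(carbon1, carbon2, same_side):
    """carbon1/carbon2: (substituent_up, substituent_down) on each sp2 carbon.
    same_side=True means the two 'up' substituents lie on the same side."""
    hi1_up = priority(carbon1[0]) > priority(carbon1[1])
    hi2_up = priority(carbon2[0]) > priority(carbon2[1])
    # Z if the two higher-priority groups end up on the same side of the bond.
    return "Z" if (hi1_up == hi2_up) == same_side else "E"

# cis-2-butene: both methyl groups on the same side -> Z
print(e_or_z(("CH3", "H"), ("CH3", "H"), same_side=True))   # Z
# trans-2-butene: methyl groups on opposite sides -> E
print(e_or_z(("CH3", "H"), ("CH3", "H"), same_side=False))  # E
```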
Boiling and melting points Below is a list of the boiling and melting points of various alkenes with the corresponding alkane and alkyne analogues. Infrared spectroscopy In the IR spectrum, the stretching/compression of the C=C bond gives a peak at 1670–1600 cm−1. The band is weak in symmetrical alkenes. The bending of the C=C bond absorbs between 1000 and 650 cm−1. NMR spectroscopy In 1H NMR spectroscopy, hydrogens bonded to the sp2 carbons of the double bond give a δH of 4.5–6.5 ppm. The double bond also deshields the hydrogens attached to the carbons adjacent to the sp2 carbons, generating peaks at δH = 1.6–2 ppm. Cis/trans isomers are distinguishable because of their different J-coupling constants: cis vicinal hydrogens have coupling constants in the range of 6–14 Hz, whereas trans vicinal hydrogens have coupling constants of 11–18 Hz. In the 13C NMR spectra of alkenes, the double bond also deshields the carbons, shifting them downfield. C=C double bonds usually have chemical shifts of about 100–170 ppm. Combustion Like most other hydrocarbons, alkenes combust to give carbon dioxide and water. The combustion of alkenes releases less energy per mole than the combustion of the saturated hydrocarbons with the same number of carbons. This trend can be clearly seen in the list of standard enthalpies of combustion of hydrocarbons. Reactions Alkenes are relatively stable compounds, but are more reactive than alkanes. Most reactions of alkenes involve additions to the pi bond, forming new single bonds. Alkenes serve as a feedstock for the petrochemical industry because they can participate in a wide variety of reactions, prominently polymerization and alkylation. Except for ethylene, alkenes have two sites of reactivity: the carbon–carbon pi-bond and the allylic CH centers. The former dominates, but the allylic sites are important too. Addition to the unsaturated bonds Hydrogenation involves the addition of H2, resulting in an alkane. The equation for the hydrogenation of ethylene to ethane is: H2C=CH2 + H2 → H3C−CH3 Hydrogenation reactions usually require catalysts to increase their reaction rate. The total number of hydrogens that can be added to an unsaturated hydrocarbon depends on its degree of unsaturation. Similarly, halogenation involves the addition of a halogen molecule, such as Br2, resulting in a dihaloalkane. The equation for the bromination of ethylene to 1,2-dibromoethane is: H2C=CH2 + Br2 → H2CBr−CH2Br Unlike hydrogenation, these halogenation reactions do not require catalysts. The reaction occurs in two steps, with a halonium ion as an intermediate. The bromine test is used to check hydrocarbons for saturation and can serve as an indication of the degree of unsaturation. The bromine number is defined as the grams of bromine able to react with 100 g of product; as with hydrogenation, the amount of bromine consumed depends on the number of π bonds, so a higher bromine number indicates a higher degree of unsaturation. The π bonds of alkenes are also susceptible to hydration. The reaction usually involves a strong acid as catalyst, and the first step often involves the formation of a carbocation. The net result of the reaction is an alcohol. The reaction equation for the hydration of ethylene to ethanol is: H2C=CH2 + H2O → CH3CH2OH Hydrohalogenation involves the addition of H−X to unsaturated hydrocarbons. This reaction results in new C−H and C−X σ bonds. The formation of the intermediate carbocation is selective and follows Markovnikov's rule. 
The hydrohalogenation of an alkene results in a haloalkane. The reaction equation for HBr addition to ethylene, giving bromoethane, is: H2C=CH2 + HBr → CH3CH2Br Cycloaddition Alkenes add to dienes to give cyclohexenes. This conversion is an example of a Diels–Alder reaction. Such reactions proceed with retention of stereochemistry. The rates are sensitive to electron-withdrawing or electron-donating substituents. When irradiated by UV light, alkenes dimerize to give cyclobutanes. Another example is the Schenck ene reaction, in which singlet oxygen reacts with an allylic structure to give a transposed allyl peroxide. Oxidation Alkenes react with percarboxylic acids and even hydrogen peroxide to yield epoxides. For ethylene, the epoxidation is conducted on a very large scale industrially using oxygen in the presence of silver-based catalysts. Alkenes react with ozone, leading to the scission of the double bond. The process is called ozonolysis. Often the reaction procedure includes a mild reductant, such as dimethyl sulfide. When treated with a hot, concentrated, acidified solution of an oxidant such as potassium permanganate, alkenes are cleaved to form ketones and/or carboxylic acids. The stoichiometry of the reaction is sensitive to conditions. This reaction and the ozonolysis can be used to determine the position of a double bond in an unknown alkene. The oxidation can be stopped at the vicinal diol rather than full cleavage of the alkene by using osmium tetroxide or other oxidants: R'CH=CR2 + 1/2 O2 + H2O → R'CH(OH)−C(OH)R2 This reaction is called dihydroxylation. In the presence of light and an appropriate photosensitiser, such as methylene blue, alkenes can undergo reaction with reactive oxygen species generated by the photosensitiser, such as hydroxyl radicals, singlet oxygen or superoxide ion. Reactions of the excited sensitizer can involve electron or hydrogen transfer, usually with a reducing substrate (Type I reaction) or interaction with oxygen (Type II reaction). These various alternative processes and reactions can be controlled by choice of specific reaction conditions, leading to a wide range of products. A common example is the [4+2]-cycloaddition of singlet oxygen with a diene such as cyclopentadiene to yield an endoperoxide. Polymerization Terminal alkenes are precursors to polymers via processes termed polymerization. Some polymerizations are of great economic significance, as they generate the plastics polyethylene and polypropylene. Polymers from alkenes are usually referred to as polyolefins, although they contain no olefins once the double bonds have reacted. Polymerization can proceed via diverse mechanisms. Conjugated dienes such as buta-1,3-diene and isoprene (2-methylbuta-1,3-diene) also produce polymers, one example being natural rubber. Allylic substitution The presence of a C=C π bond in unsaturated hydrocarbons lowers the dissociation energy of the allylic C−H bonds. Thus, these groupings are susceptible to free radical substitution at these C−H sites as well as addition reactions at the C=C site. In the presence of radical initiators, allylic C−H bonds can be halogenated. The presence of two C=C bonds flanking one methylene, i.e., a doubly allylic position, results in particularly weak C−H bonds. The high reactivity of such sites is the basis for certain free radical reactions, manifested in the chemistry of drying oils. Metathesis Alkenes undergo olefin metathesis, which cleaves and interchanges the substituents of the alkene. A related reaction is ethenolysis. Metal complexation In transition metal alkene complexes, alkenes serve as ligands for metals. 
In this case, the π electron density is donated to the metal d orbitals. The stronger the donation is, the stronger the back bonding from the metal d orbital to π* anti-bonding orbital of the alkene. This effect lowers the bond order of the alkene and increases the C-C bond length. One example is the complex . These complexes are related to the mechanisms of metal-catalyzed reactions of unsaturated hydrocarbons. Reaction overview Synthesis Industrial methods Alkenes are produced by hydrocarbon cracking. Raw materials are mostly natural-gas condensate components (principally ethane and propane) in the US and Mideast and naphtha in Europe and Asia. Alkanes are broken apart at high temperatures, often in the presence of a zeolite catalyst, to produce a mixture of primarily aliphatic alkenes and lower molecular weight alkanes. The mixture is feedstock and temperature dependent, and separated by fractional distillation. This is mainly used for the manufacture of small alkenes (up to six carbons). Related to this is catalytic dehydrogenation, where an alkane loses hydrogen at high temperatures to produce a corresponding alkene. This is the reverse of the catalytic hydrogenation of alkenes. This process is also known as reforming. Both processes are endothermic and are driven towards the alkene at high temperatures by entropy. Catalytic synthesis of higher α-alkenes (of the type RCH=CH2) can also be achieved by a reaction of ethylene with the organometallic compound triethylaluminium in the presence of nickel, cobalt, or platinum. Elimination reactions One of the principal methods for alkene synthesis in the laboratory is the elimination reaction of alkyl halides, alcohols, and similar compounds. Most common is the β-elimination via the E2 or E1 mechanism. A commercially significant example is the production of vinyl chloride. The E2 mechanism provides a more reliable β-elimination method than E1 for most alkene syntheses. Most E2 eliminations start with an alkyl halide or alkyl sulfonate ester (such as a tosylate or triflate). When an alkyl halide is used, the reaction is called a dehydrohalogenation. For unsymmetrical products, the more substituted alkenes (those with fewer hydrogens attached to the C=C) tend to predominate (see Zaitsev's rule). Two common methods of elimination reactions are dehydrohalogenation of alkyl halides and dehydration of alcohols. A typical example is shown below; note that if possible, the H is anti to the leaving group, even though this leads to the less stable Z-isomer. Alkenes can be synthesized from alcohols via dehydration, in which case water is lost via the E1 mechanism. For example, the dehydration of ethanol produces ethylene: CH3CH2OH → H2C=CH2 + H2O An alcohol may also be converted to a better leaving group (e.g., xanthate), so as to allow a milder syn-elimination such as the Chugaev elimination and the Grieco elimination. Related reactions include eliminations by β-haloethers (the Boord olefin synthesis) and esters (ester pyrolysis). A thioketone and a phosphite ester combined (the Corey-Winter olefination) or diphosphorus tetraiodide will deoxygenate glycols to alkenes. Alkenes can be prepared indirectly from alkyl amines. The amine or ammonia is not a suitable leaving group, so the amine is first either alkylated (as in the Hofmann elimination) or oxidized to an amine oxide (the Cope reaction) to render a smooth elimination possible. 
The Cope reaction is a syn-elimination that occurs at or below 150 °C, for example: The Hofmann elimination is unusual in that the less substituted (non-Zaitsev) alkene is usually the major product. Alkenes are generated from α-halosulfones in the Ramberg–Bäcklund reaction, via a three-membered ring sulfone intermediate. Synthesis from carbonyl compounds Another important class of methods for alkene synthesis involves construction of a new carbon–carbon double bond by coupling or condensation of a carbonyl compound (such as an aldehyde or ketone) to a carbanion or its equivalent. Pre-eminent is the aldol condensation. Knoevenagel condensations are a related class of reactions that convert carbonyls into alkenes.Well-known methods are called olefinations. The Wittig reaction is illustrative, but other related methods are known, including the Horner–Wadsworth–Emmons reaction. The Wittig reaction involves reaction of an aldehyde or ketone with a Wittig reagent (or phosphorane) of the type Ph3P=CHR to produce an alkene and Ph3P=O. The Wittig reagent is itself prepared easily from triphenylphosphine and an alkyl halide. Related to the Wittig reaction is the Peterson olefination, which uses silicon-based reagents in place of the phosphorane. This reaction allows for the selection of E- or Z-products. If an E-product is desired, another alternative is the Julia olefination, which uses the carbanion generated from a phenyl sulfone. The Takai olefination based on an organochromium intermediate also delivers E-products. A titanium compound, Tebbe's reagent, is useful for the synthesis of methylene compounds; in this case, even esters and amides react. A pair of ketones or aldehydes can be deoxygenated to generate an alkene. Symmetrical alkenes can be prepared from a single aldehyde or ketone coupling with itself, using titanium metal reduction (the McMurry reaction). If different ketones are to be coupled, a more complicated method is required, such as the Barton–Kellogg reaction. A single ketone can also be converted to the corresponding alkene via its tosylhydrazone, using sodium methoxide (the Bamford–Stevens reaction) or an alkyllithium (the Shapiro reaction). Synthesis from alkenes The formation of longer alkenes via the step-wise polymerisation of smaller ones is appealing, as ethylene (the smallest alkene) is both inexpensive and readily available, with hundreds of millions of tonnes produced annually. The Ziegler–Natta process allows for the formation of very long chains, for instance those used for polyethylene. Where shorter chains are wanted, as they for the production of surfactants, then processes incorporating a olefin metathesis step, such as the Shell higher olefin process are important. Olefin metathesis is also used commercially for the interconversion of ethylene and 2-butene to propylene. Rhenium- and molybdenum-containing heterogeneous catalysis are used in this process: CH2=CH2 + CH3CH=CHCH3 → 2 CH2=CHCH3 Transition metal catalyzed hydrovinylation is another important alkene synthesis process starting from alkene itself. It involves the addition of a hydrogen and a vinyl group (or an alkenyl group) across a double bond. From alkynes Reduction of alkynes is a useful method for the stereoselective synthesis of disubstituted alkenes. 
If the cis-alkene is desired, hydrogenation in the presence of Lindlar's catalyst (a heterogeneous catalyst that consists of palladium deposited on calcium carbonate and treated with various forms of lead) is commonly used, though hydroboration followed by hydrolysis provides an alternative approach. Reduction of the alkyne by sodium metal in liquid ammonia gives the trans-alkene. For the preparation of multisubstituted alkenes, carbometalation of alkynes can give rise to a large variety of alkene derivatives. Rearrangements and related reactions Alkenes can be synthesized from other alkenes via rearrangement reactions. Besides olefin metathesis (described above), many pericyclic reactions can be used, such as the ene reaction and the Cope rearrangement. In the Diels–Alder reaction, a cyclohexene derivative is prepared from a diene and a reactive or electron-deficient alkene. Application Unsaturated hydrocarbons are widely used to produce plastics, medicines, and other useful materials. Natural occurrence Alkenes are prevalent in nature. Plants are the main natural source of alkenes in the form of terpenes. Many of the most vivid natural pigments are terpenes; e.g. lycopene (red in tomatoes), carotene (orange in carrots), and xanthophylls (yellow in egg yolk). The simplest of all alkenes, ethylene, is a signaling molecule that influences the ripening of plants. IUPAC Nomenclature Although the nomenclature is not followed widely, according to IUPAC, an alkene is an acyclic hydrocarbon with just one double bond between carbon atoms. Olefins comprise a larger collection of cyclic and acyclic alkenes as well as dienes and polyenes. To form the root of the IUPAC names for straight-chain alkenes, change the -an- infix of the parent to -en-. For example, CH3-CH3 is the alkane ethANe. The name of CH2=CH2 is therefore ethENe. For straight-chain alkenes with 4 or more carbon atoms, that name does not completely identify the compound. For those cases, and for branched acyclic alkenes, the following rules apply: Find the longest carbon chain in the molecule. If that chain does not contain the double bond, name the compound according to the alkane naming rules. Otherwise: Number the carbons in that chain starting from the end that is closest to the double bond. Define the location k of the double bond as being the number of its first carbon. Name the side groups (other than hydrogen) according to the appropriate rules. Define the position of each side group as the number of the chain carbon it is attached to. Write the position and name of each side group. Write the name of the alkane with the same chain, replacing the "-ane" suffix by "k-ene". The position of the double bond is often inserted before the name of the chain (e.g. "2-pentene"), rather than before the suffix ("pent-2-ene"). The positions need not be indicated if they are unique. Note that the double bond may imply a different chain numbering than that used for the corresponding alkane: CH3–C(CH3)2–CH2–CH2–CH3 is "2,2-dimethylpentane", whereas the same carbon skeleton with a terminal double bond, CH2=CH–CH2–C(CH3)2–CH3, is "4,4-dimethylpent-1-ene", because the chain must now be numbered from the end nearer the double bond. More complex rules apply for polyenes and cycloalkenes. Cis–trans isomerism If the double bond of an acyclic mono-ene is not the first bond of the chain, the name as constructed above still does not completely identify the compound, because of cis–trans isomerism. Then one must specify whether the two single C–C bonds adjacent to the double bond are on the same side of its plane, or on opposite sides. 
For monoalkenes, the configuration is often indicated by the prefixes cis- (from Latin "on this side of") or trans- ("across", "on the other side of") before the name, respectively, as in cis-2-pentene or trans-2-butene. More generally, cis–trans isomerism will exist if each of the two carbons in the double bond has two different atoms or groups attached to it. Accounting for these cases, the IUPAC recommends the more general E–Z notation, instead of the cis and trans prefixes. This notation considers the group with the highest CIP priority on each of the two carbons. If these two groups are on opposite sides of the double bond's plane, the configuration is labeled E (from the German entgegen meaning "opposite"); if they are on the same side, it is labeled Z (from German zusammen, "together"). This labeling may be taught with the mnemonic "Z means 'on ze zame zide'". Groups containing C=C double bonds IUPAC recognizes two names for hydrocarbon groups containing carbon–carbon double bonds, the vinyl group and the allyl group.
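To make the straight-chain naming rules above concrete, the sketch below builds names such as "pent-2-ene" from a chain length and a double-bond position, numbering from the end nearer the double bond. It handles only unbranched mono-enes, omits stereo-descriptors, and uses standard IUPAC stem prefixes; the function itself is an invented illustration, not part of any nomenclature software.

```python
# Name an unbranched alkene with a single double bond, e.g. name_alkene(5, 2)
# -> "pent-2-ene". Straight chains only; no cis/trans or E/Z labels.

PREFIXES = {2: "eth", 3: "prop", 4: "but", 5: "pent",
            6: "hex", 7: "hept", 8: "oct", 9: "non", 10: "dec"}

def name_alkene(carbons, double_bond_start):
    """double_bond_start: position of the double bond's first carbon counted
    from either end; the lower of the two possible locants is chosen."""
    if not 1 <= double_bond_start <= carbons - 1:
        raise ValueError("double bond position out of range")
    locant = min(double_bond_start, carbons - double_bond_start)
    # Ethene and propene need no locant: only one position is possible.
    if carbons <= 3:
        return f"{PREFIXES[carbons]}ene"
    return f"{PREFIXES[carbons]}-{locant}-ene"

print(name_alkene(2, 1))  # ethene
print(name_alkene(4, 3))  # but-1-ene (locant 1 chosen over 3)
print(name_alkene(5, 2))  # pent-2-ene
```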
Physical sciences
Hydrocarbons
null
2763
https://en.wikipedia.org/wiki/Alkyne
Alkyne
Acetylene Propyne 1-Butyne In organic chemistry, an alkyne is an unsaturated hydrocarbon containing at least one carbon—carbon triple bond. The simplest acyclic alkynes with only one triple bond and no other functional groups form a homologous series with the general chemical formula . Alkynes are traditionally known as acetylenes, although the name acetylene also refers specifically to , known formally as ethyne using IUPAC nomenclature. Like other hydrocarbons, alkynes are generally hydrophobic. Structure and bonding In acetylene, the H–C≡C bond angles are 180°. By virtue of this bond angle, alkynes are rod-like. Correspondingly, cyclic alkynes are rare. Benzyne cannot be isolated. The C≡C bond distance of 118 picometers (for C2H2) is much shorter than the C=C distance in alkenes (132 pm, for C2H4) or the C–C bond in alkanes (153 pm). The triple bond is very strong with a bond strength of 839 kJ/mol. The sigma bond contributes 369 kJ/mol, the first pi bond contributes 268 kJ/mol. and the second pi bond 202 kJ/mol. Bonding is usually discussed in the context of molecular orbital theory, which recognizes the triple bond as arising from overlap of s and p orbitals. In the language of valence bond theory, the carbon atoms in an alkyne bond are sp hybridized: they each have two unhybridized p orbitals and two sp hybrid orbitals. Overlap of an sp orbital from each atom forms one sp–sp sigma bond. Each p orbital on one atom overlaps one on the other atom, forming two pi bonds, giving a total of three bonds. The remaining sp orbital on each atom can form a sigma bond to another atom, for example to hydrogen atoms in the parent acetylene. The two sp orbitals project on opposite sides of the carbon atom. Terminal and internal alkynes Internal alkynes feature carbon substituents on each acetylenic carbon. Symmetrical examples include diphenylacetylene and 3-hexyne. They may also be asymmetrical, such as in 2-pentyne. Terminal alkynes have the formula , where at least one end of the alkyne is a hydrogen atom. An example is methylacetylene (propyne using IUPAC nomenclature). They are often prepared by alkylation of monosodium acetylide. Terminal alkynes, like acetylene itself, are mildly acidic, with pKa values of around 25. They are far more acidic than alkenes and alkanes, which have pKa values of around 40 and 50, respectively. The acidic hydrogen on terminal alkynes can be replaced by a variety of groups resulting in halo-, silyl-, and alkoxoalkynes. The carbanions generated by deprotonation of terminal alkynes are called acetylides. Internal alkynes are also considerably more acidic than alkenes and alkanes, though not nearly as acidic as terminal alkynes. The C–H bonds at the α position of alkynes (propargylic C–H bonds) can also be deprotonated using strong bases, with an estimated pKa of 35. This acidity can be used to isomerize internal alkynes to terminal alkynes using the alkyne zipper reaction. Naming alkynes In systematic chemical nomenclature, alkynes are named with the Greek prefix system without any additional letters. Examples include ethyne or octyne. In parent chains with four or more carbons, it is necessary to say where the triple bond is located. For octyne, one can either write 3-octyne or oct-3-yne when the bond starts at the third carbon. The lowest number possible is given to the triple bond. When no superior functional groups are present, the parent chain must include the triple bond even if it is not the longest possible carbon chain in the molecule. 
Ethyne is commonly called by its trivial name acetylene. In chemistry, the suffix -yne is used to denote the presence of a triple bond. In organic chemistry, the suffix often follows IUPAC nomenclature. However, inorganic compounds featuring unsaturation in the form of triple bonds may be denoted by substitutive nomenclature with the same methods used with alkynes (i.e. the name of the corresponding saturated compound is modified by replacing the "-ane" ending with "-yne"). "-diyne" is used when there are two triple bonds, and so on. The position of unsaturation is indicated by a numerical locant immediately preceding the "-yne" suffix, or 'locants' in the case of multiple triple bonds. Locants are chosen so that the numbers are low as possible. "-yne" is also used as a suffix to name substituent groups that are triply bound to the parent compound. Sometimes a number between hyphens is inserted before it to state which atoms the triple bond is between. This suffix arose as a collapsed form of the end of the word "acetylene". The final "-e" disappears if it is followed by another suffix that starts with a vowel. Structural isomerism Alkynes having four or more carbon atoms can form different structural isomers by having the triple bond in different positions or having some of the carbon atoms be substituents rather than part of the parent chain. Other non-alkyne structural isomers are also possible. : acetylene only : propyne only : 2 isomers: 1-butyne, and 2-butyne : 3 isomers: 1-pentyne, 2-pentyne, and 3-methyl-1-butyne : 7 isomers: 1-hexyne, 2-hexyne, 3-hexyne, 4-methyl-1-pentyne, 4-methyl-2-pentyne, 3-methyl-1-pentyne, 3,3-dimethyl-1-butyne Synthesis From calcium carbide Classically, acetylene was prepared by hydrolysis (protonation) of calcium carbide (Ca2+[:C≡C:]2–): Ca^{2+}[C#C]^2- + 2 HOH -> HC#CH + Ca^{2+}[(HO^{-})2] which was in turn synthesized by combining quicklime and coke in an electric arc furnace at 2200 °C: CaO + 3 C (amorphous) -> CaC2 + CO This was an industrially important process which provided access to hydrocarbons from coal resources for countries like Germany and China. However, the energy-intensive nature of this process is a major disadvantage and its share of the world's production of acetylene has steadily decreased relative to hydrocarbon cracking. Cracking Commercially, the dominant alkyne is acetylene itself, which is used as a fuel and a precursor to other compounds, e.g., acrylates. Hundreds of millions of kilograms are produced annually by partial oxidation of natural gas: 2 CH4 + 3/2 O2 -> HC#CH + 3 H2O Propyne, also industrially useful, is also prepared by thermal cracking of hydrocarbons. Alkylation and arylation of terminal alkynes Terminal alkynes (RC≡CH, including acetylene itself) can be deprotonated by bases like NaNH2, BuLi, or EtMgBr to give acetylide anions (RC≡C:–M+, M = Na, Li, MgBr) which can be alkylated by addition to carbonyl groups (Favorskii reaction), ring opening of epoxides, or SN2-type substitution of unhindered primary alkyl halides. In the presence of transition metal catalysts, classically a combination of Pd(PPh3)2Cl2 and CuI, terminal acetylenes (RC≡CH) can react with aryl iodides and bromides (ArI or ArBr) in the presence of a secondary or tertiary amine like Et3N to give arylacetylenes (RC≡CAr) in the Sonogashira reaction. The availability of these reliable reactions makes terminal alkynes useful building blocks for preparing internal alkynes. 
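Why bases like NaNH2 succeed in the deprotonations just described can be estimated from the approximate pKa values quoted in this article (terminal alkyne ≈ 25, ammonia ≈ 35); the value for water (≈ 15.7) is an added textbook assumption. The equilibrium constant for proton transfer is roughly 10 raised to the difference in pKa, as the sketch below shows.

```python
# Rough equilibrium constants for deprotonating a terminal alkyne (pKa ~25):
# K ≈ 10 ** (pKa of the base's conjugate acid - pKa of the alkyne).
# pKa values are approximate; water's pKa (~15.7) is not from the article.

PKA_ALKYNE = 25.0
CONJUGATE_ACID_PKA = {"amide (NH2-)": 35.0, "hydroxide (HO-)": 15.7}

for base, pka_conjugate_acid in CONJUGATE_ACID_PKA.items():
    exponent = pka_conjugate_acid - PKA_ALKYNE
    k_eq = 10 ** exponent
    outcome = "essentially complete" if k_eq > 1e3 else "unfavorable"
    print(f"{base:16s}  K ≈ 10^{exponent:+.1f}  ({outcome})")
```

The estimate reproduces the practical rule implied above: amide bases deprotonate terminal alkynes essentially completely, whereas hydroxide does not.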
Dehydrohalogenation and related reactions Alkynes are prepared from 1,1- and 1,2-dihaloalkanes by double dehydrohalogenation. The reaction provides a means to generate alkynes from alkenes, which are first halogenated and then dehydrohalogenated. For example, phenylacetylene can be generated from styrene by bromination followed by treatment of the resulting 1,2-dibromo-1-phenylethane with sodium amide in ammonia. Via the Fritsch–Buttenberg–Wiechell rearrangement, alkynes are prepared from vinyl bromides. Alkynes can be prepared from aldehydes using the Corey–Fuchs reaction and from aldehydes or ketones by the Seyferth–Gilbert homologation. Vinyl halides are susceptible to dehydrohalogenation. Reactions, including applications Featuring a reactive functional group, alkynes participate in many organic reactions. Such use was pioneered by Ralph Raphael, who in 1955 wrote the first book describing their versatility as intermediates in synthesis. In spite of their kinetic stability (persistence) due to their strong triple bonds, alkynes are a thermodynamically unstable functional group, as can be gleaned from the highly positive heats of formation of small alkynes. For example, acetylene has a heat of formation of +227.4 kJ/mol (+54.2 kcal/mol), indicating a much higher energy content compared to its constituent elements. The highly exothermic combustion of acetylene is exploited industrially in oxyacetylene torches used in welding. Other reactions involving alkynes are often highly thermodynamically favorable (exothermic/exergonic) for the same reason. Hydrogenation Being more unsaturated than alkenes, alkynes characteristically undergo reactions that show that they are "doubly unsaturated". Alkynes are capable of adding two equivalents of H2, whereas an alkene adds only one equivalent. Depending on catalysts and conditions, alkynes add one or two equivalents of hydrogen. Partial hydrogenation, stopping after the addition of only one equivalent to give the alkene, is usually more desirable since alkanes are less useful. The largest scale application of this technology is the conversion of acetylene to ethylene in refineries (the steam cracking of alkanes yields a few percent acetylene, which is selectively hydrogenated in the presence of a palladium/silver catalyst). For more complex alkynes, the Lindlar catalyst is widely recommended to avoid formation of the alkane, for example in the conversion of phenylacetylene to styrene; complete hydrogenation of the intermediate alkene would instead give the alkane (RCH=CR'H + H2 → RCH2CR'H2). The addition of one equivalent of H2 to internal alkynes gives cis-alkenes. Similarly, halogenation of alkynes gives the alkene dihalides or alkyl tetrahalides. Addition of halogens and related reagents Alkynes characteristically are capable of adding two equivalents of halogens and hydrogen halides: RC≡CR' + 2 Br2 → RCBr2CR'Br2 The addition of nonpolar element–hydrogen bonds across the C≡C bond is general for silanes, boranes, and related hydrides. The hydroboration of alkynes gives vinylic boranes, which oxidize to the corresponding aldehyde or ketone. In the thiol-yne reaction the substrate is a thiol. Addition of hydrogen halides has long been of interest. In the presence of mercuric chloride as a catalyst, acetylene and hydrogen chloride react to give vinyl chloride. While this method has been abandoned in the West, it remains the main production method in China. Hydration The hydration reaction of acetylene gives acetaldehyde. The reaction proceeds by formation of vinyl alcohol, which tautomerizes to form the aldehyde. 
This reaction was once a major industrial process but it has been displaced by the Wacker process. This reaction occurs in nature, the catalyst being acetylene hydratase. Hydration of phenylacetylene gives acetophenone: PhC≡CH + H2O → PhCOCH3 Metal catalysts similarly effect the hydration of 1,8-nonadiyne to 2,8-nonanedione: HC≡C(CH2)5C≡CH + 2 H2O → CH3CO(CH2)5COCH3 Isomerization to allenes Alkynes can be isomerized by strong base or transition metals to allenes. Due to their comparable thermodynamic stabilities, the equilibrium constant of alkyne/allene isomerization is generally within several orders of magnitude of unity. For example, propyne can be isomerized to give an equilibrium mixture with propadiene: HC≡C–CH3 ⇌ CH2=C=CH2 Cycloadditions and oxidation Alkynes undergo diverse cycloaddition reactions. The Diels–Alder reaction with 1,3-dienes gives 1,4-cyclohexadienes. This general reaction has been extensively developed. Electrophilic alkynes are especially effective dienophiles. The "cycloadduct" derived from the addition of alkynes to 2-pyrone eliminates carbon dioxide to give the aromatic compound. Other specialized cycloadditions include multicomponent reactions such as alkyne trimerisation to give aromatic compounds and the [2+2+1]-cycloaddition of an alkyne, alkene and carbon monoxide in the Pauson–Khand reaction. Non-carbon reagents also undergo cyclization, e.g. azide alkyne Huisgen cycloaddition to give triazoles. Cycloaddition processes involving alkynes are often catalyzed by metals, e.g. enyne metathesis and alkyne metathesis, which allows the scrambling of carbyne (RC) centers: RC≡CR + R'C≡CR' ⇌ 2 RC≡CR' Oxidative cleavage of alkynes proceeds via cycloaddition to metal oxides. Most famously, potassium permanganate converts alkynes to a pair of carboxylic acids. Reactions specific for terminal alkynes Terminal alkynes are readily converted to many derivatives, e.g. by coupling reactions and condensations. Butynediol is produced via the condensation of formaldehyde with acetylene: 2 CH2O + HC≡CH → HOCH2C≡CCH2OH In the Sonogashira reaction, terminal alkynes are coupled with aryl or vinyl halides. This reactivity exploits the fact that terminal alkynes are weak acids, whose typical pKa values around 25 place them between those of ammonia (35) and ethanol (16): RC≡CH + MX → RC≡CM + HX where MX = NaNH2, LiBu, or RMgX. The reactions of terminal alkynes with certain metal cations, e.g. Ag+ and Cu+, also give acetylides. Thus, a few drops of diamminesilver(I) hydroxide react with terminal alkynes, as signaled by the formation of a white precipitate of the silver acetylide. This reactivity is the basis of alkyne coupling reactions, including the Cadiot–Chodkiewicz coupling, Glaser coupling, and the Eglinton coupling shown below: 2 RC≡CH → RC≡C–C≡CR (Cu(OAc)2, pyridine) In the Favorskii reaction and in alkynylations in general, terminal alkynes add to carbonyl compounds to give the hydroxyalkyne. Metal complexes Alkynes form complexes with transition metals. Such complexes occur also in metal-catalyzed reactions of alkynes such as alkyne trimerization. Terminal alkynes, including acetylene itself, react with water to give aldehydes. The transformation typically requires metal catalysts to give this anti-Markovnikov addition result. Alkynes in nature and medicine According to Ferdinand Bohlmann, the first naturally occurring acetylenic compound, dehydromatricaria ester, was isolated from an Artemisia species in 1826. 
In the nearly two centuries that have followed, well over a thousand naturally occurring acetylenes have been discovered and reported. Polyynes, a subset of this class of natural products, have been isolated from a wide variety of plant species, cultures of higher fungi, bacteria, marine sponges, and corals. Some acids like tariric acid contain an alkyne group. Diynes and triynes, species with the linkage RC≡C–C≡CR′ and RC≡C–C≡C–C≡CR′ respectively, occur in certain plants (Ichthyothere, Chrysanthemum, Cicuta, Oenanthe and other members of the Asteraceae and Apiaceae families). Some examples are cicutoxin, oenanthotoxin, and falcarinol. These compounds are highly bioactive, e.g. as nematocides. 1-Phenylhepta-1,3,5-triyne is illustrative of a naturally occurring triyne. Alkynes occur in some pharmaceuticals, including the contraceptive noretynodrel. A carbon–carbon triple bond is also present in marketed drugs such as the antiretroviral Efavirenz and the antifungal Terbinafine. Molecules called ene-diynes feature a ring containing an alkene ("ene") between two alkyne groups ("diyne"). These compounds, e.g. calicheamicin, are some of the most aggressive antitumor drugs known, so much so that the ene-diyne subunit is sometimes referred to as a "warhead". Ene-diynes undergo rearrangement via the Bergman cyclization, generating highly reactive radical intermediates that attack DNA within the tumor.
Physical sciences
Hydrocarbons
null
2770
https://en.wikipedia.org/wiki/Anatomical%20Therapeutic%20Chemical%20Classification%20System
Anatomical Therapeutic Chemical Classification System
The Anatomical Therapeutic Chemical (ATC) Classification System is a drug classification system that classifies the active ingredients of drugs according to the organ or system on which they act and their therapeutic, pharmacological and chemical properties. Its purpose is to serve as a tool for monitoring drug use and for research aimed at improving the quality of medication use. It does not imply drug recommendation or efficacy. It is controlled by the World Health Organization Collaborating Centre for Drug Statistics Methodology (WHOCC), and was first published in 1976. Coding system This pharmaceutical coding system divides drugs into different groups according to the organ or system on which they act, their therapeutic intent or nature, and the drug's chemical characteristics. Different brands share the same code if they have the same active substance and indications. Each bottom-level ATC code stands for a pharmaceutically used substance, or a combination of substances, in a single indication (or use). This means that one drug can have more than one code; acetylsalicylic acid (aspirin), for example, has separate codes as a drug for local oral treatment, as a platelet inhibitor, and as an analgesic and antipyretic. Likewise, one code can represent more than one active ingredient: one code covers the combination of perindopril with amlodipine, two active ingredients that each have their own code when prescribed alone. The ATC classification system is a strict hierarchy, meaning that each code necessarily has one and only one parent code, except for the 14 codes at the topmost level which have no parents. The codes are semantic identifiers, meaning they depict information by themselves beyond serving as identifiers (namely, each code itself encodes the complete lineage of its parent codes). As of 7 May 2020, there are 6,331 codes in ATC. History The ATC system is based on the earlier Anatomical Classification System, which was intended as a tool for the pharmaceutical industry to classify pharmaceutical products (as opposed to their active ingredients). This system, confusingly also called ATC, was initiated in 1971 by the European Pharmaceutical Market Research Association (EphMRA) and is maintained by EphMRA and Intellus. Its codes are organised into four levels. The WHO's system, having five levels, is an extension and modification of the EphMRA's. It was first published in 1976. Classification In this system, drugs are classified into groups at five different levels: First level The first level of the code indicates the anatomical main group and consists of one letter. There are 14 main groups. Example: C Cardiovascular system Second level The second level of the code indicates the therapeutic subgroup and consists of two digits. Example: C03 Diuretics Third level The third level of the code indicates the therapeutic/pharmacological subgroup and consists of one letter. Example: C03C High-ceiling diuretics Fourth level The fourth level of the code indicates the chemical/therapeutic/pharmacological subgroup and consists of one letter. Example: C03CA Sulfonamides Fifth level The fifth level of the code indicates the chemical substance and consists of two digits. Example: C03CA01 furosemide Other ATC classification systems ATCvet The Anatomical Therapeutic Chemical Classification System for veterinary medicinal products (ATCvet) is used to classify veterinary drugs. ATCvet codes can be created by placing the letter Q in front of the ATC code of most human medications. 
Other ATC classification systems
ATCvet
The Anatomical Therapeutic Chemical Classification System for veterinary medicinal products (ATCvet) is used to classify veterinary drugs. ATCvet codes can be created by placing the letter Q in front of the ATC code of most human medications. For example, furosemide for veterinary use has the code QC03CA01. Some codes are used exclusively for veterinary drugs, such as QI Immunologicals, QJ51 Antibacterials for intramammary use or QN05AX90 amperozide.
Herbal ATC (HATC)
The Herbal ATC system (HATC) is an ATC classification of herbal substances; it differs from the regular ATC system by using 4 digits instead of 2 at the 5th level group. The herbal classification has not been adopted by WHO. The Uppsala Monitoring Centre is responsible for the Herbal ATC classification, and it is part of the WHODrug Global portfolio available by subscription.
Defined daily dose
The ATC system also includes defined daily doses (DDDs) for many drugs. This is a measurement of drug consumption based on the usual daily dose for a given drug. According to the definition, "[t]he DDD is the assumed average maintenance dose per day for a drug used for its main indication in adults."
Adaptations and updates
National issues of the ATC classification, such as the German Anatomisch-therapeutisch-chemische Klassifikation mit Tagesdosen, may include additional codes and DDDs not present in the WHO version.
ATC follows guidelines in creating new codes for newly approved drugs. An application is submitted to WHO for ATC classification and DDD assignment. A preliminary or temporary code is assigned and published on the website and in WHO Drug Information for comment or objection. New ATC/DDD codes are discussed at the semi-annual Working Group meeting. If accepted, a code becomes a final decision, is published semi-annually on the website and in WHO Drug Information, and is implemented in the annual print/online ATC/DDD Index on January 1. Changes to existing ATC codes and DDDs follow a similar process: they first become temporary codes and, if accepted, become final decisions as ATC/DDD alterations. ATC and DDD alterations only take effect in the coming annual update; the original codes must continue to be used until the end of the year. An updated version of the complete online/print ATC index with DDDs is published annually on January 1.
Biology and health sciences
General concepts_2
Health
2778
https://en.wikipedia.org/wiki/Parallel%20ATA
Parallel ATA
Parallel ATA (PATA), originally AT Attachment, also known as Integrated Drive Electronics (IDE), is a standard interface designed for IBM PC-compatible computers. It was first developed by Western Digital and Compaq in 1986 for compatible hard drives and CD or DVD drives. The connection is used for storage devices such as hard disk drives, floppy disk drives, optical disc drives, and tape drives in computers. The standard is maintained by the X3/INCITS committee. It uses the underlying AT Attachment (ATA) and AT Attachment Packet Interface (ATAPI) standards.
The Parallel ATA standard is the result of a long history of incremental technical development, which began with the original AT Attachment interface, developed for use in early PC AT equipment. The ATA interface itself evolved in several stages from Western Digital's original Integrated Drive Electronics (IDE) interface. As a result, many near-synonyms for ATA/ATAPI and its previous incarnations are still in common informal use, in particular Extended IDE (EIDE) and Ultra ATA (UATA). After the introduction of SATA in 2003, the original ATA was renamed to Parallel ATA, or PATA for short.
Parallel ATA cables have a maximum allowable length of 18 in (457 mm). Because of this limit, the technology normally appears as an internal computer storage interface. For many years, ATA provided the most common and the least expensive interface for this application. It has largely been replaced by SATA in newer systems.
History and terminology
The standard was originally conceived as the "AT Bus Attachment", officially called "AT Attachment" and abbreviated "ATA" because its primary feature was a direct connection to the 16-bit ISA bus introduced with the IBM PC/AT. The original ATA specifications published by the standards committees use the name "AT Attachment". The "AT" in the IBM PC/AT referred to "Advanced Technology", so ATA has also been referred to as "Advanced Technology Attachment". When the newer Serial ATA (SATA) was introduced in 2003, the original ATA was renamed to Parallel ATA, or PATA for short.
Physical ATA interfaces became a standard component in all PCs, initially on host bus adapters, sometimes on a sound card, but ultimately as two physical interfaces embedded in a Southbridge chip on a motherboard. Called the "primary" and "secondary" ATA interfaces, they were assigned to base addresses 0x1F0 and 0x170 on ISA bus systems. They were eventually replaced by SATA interfaces.
IDE and ATA-1
The first version of what is now called the ATA/ATAPI interface was developed by Western Digital under the name Integrated Drive Electronics (IDE). Together with Compaq (the initial customer), they worked with various disk drive manufacturers to develop and ship early products with the goal of remaining software compatible with the existing IBM PC hard drive interface. The first such drives appeared internally in Compaq PCs in 1986 and were first separately offered by Conner Peripherals as the CP342 in June 1987.
The term Integrated Drive Electronics refers to the drive controller being integrated into the drive, as opposed to a separate controller situated at the other side of the connection cable to the drive. On an IBM PC compatible, CP/M machine, or similar, this was typically a separate controller card installed in the machine. The interface cards used to connect a parallel ATA drive to, for example, an ISA slot, are not drive controllers: they are merely bridges between the host bus and the ATA interface.
Since the original ATA interface is essentially just a 16-bit ISA bus, the bridge was especially simple in the case of an ATA connector located on an ISA interface card. The integrated controller presented the drive to the host computer as an array of 512-byte blocks with a relatively simple command interface. This relieved the mainboard and interface cards in the host computer of the chores of stepping the disk head arm, moving the head arm in and out, and so on, as had to be done with earlier ST-506 and ESDI hard drives. All of these low-level details of the mechanical operation of the drive were now handled by the controller on the drive itself. This also eliminated the need to design a single controller that could handle many different types of drives, since the controller could be unique for the drive. The host need only ask for a particular sector, or block, to be read or written, and either accept the data from the drive or send the data to it.
The interface used by these drives was standardized in 1994 as ANSI standard X3.221-1994, AT Attachment Interface for Disk Drives. After later versions of the standard were developed, this became known as "ATA-1". A short-lived, seldom-used implementation of ATA was created for the IBM XT and similar machines that used the 8-bit version of the ISA bus. It has been referred to as "XT-IDE", "XTA" or "XT Attachment".
EIDE and ATA-2
In 1994, about the same time that the ATA-1 standard was adopted, Western Digital introduced drives under a newer name, Enhanced IDE (EIDE). These included most of the features of the forthcoming ATA-2 specification and several additional enhancements. Other manufacturers introduced their own variations of ATA-1, such as "Fast ATA" and "Fast ATA-2". The new version of the ANSI standard, AT Attachment Interface with Extensions ATA-2 (X3.279-1996), was approved in 1996. It included most of the features of the manufacturer-specific variants. ATA-2 was also the first to note that devices other than hard drives could be attached to the interface.
ATAPI
ATA was originally designed for, and worked only with, hard disk drives and devices that could emulate them. The introduction of ATAPI (ATA Packet Interface) by a group called the Small Form Factor committee (SFF) allowed ATA to be used for a variety of other devices that require functions beyond those necessary for hard disk drives. For example, any removable media device needs a "media eject" command, and a way for the host to determine whether the media is present, and these were not provided in the ATA protocol.
ATAPI is a protocol allowing the ATA interface to carry SCSI commands and responses; therefore, all ATAPI devices are actually "speaking SCSI" other than at the electrical interface. The SCSI commands and responses are embedded in "packets" (hence "ATA Packet Interface") for transmission on the ATA cable. This allows any device class for which a SCSI command set has been defined to be interfaced via ATA/ATAPI. ATAPI devices are also "speaking ATA", as the ATA physical interface and protocol are still being used to send the packets. On the other hand, ATA hard drives and solid state drives do not use ATAPI.
ATAPI devices include CD-ROM and DVD-ROM drives, tape drives, and large-capacity floppy drives such as the Zip drive and SuperDisk drive. Some early ATAPI devices were simply SCSI devices with an ATA/ATAPI to SCSI protocol converter added on. The SCSI commands and responses used by each class of ATAPI device (CD-ROM, tape, etc.)
are described in other documents or specifications specific to those device classes and are not within ATA/ATAPI or the T13 committee's purview. One commonly used set is defined in the MMC SCSI command set. ATAPI was adopted as part of ATA in INCITS 317-1998, AT Attachment with Packet Interface Extension (ATA/ATAPI-4).
UDMA and ATA-4
The ATA/ATAPI-4 standard also introduced several "Ultra DMA" transfer modes. These initially supported speeds from 16 to 33 MB/s. In later versions, faster Ultra DMA modes were added, requiring new 80-wire cables to reduce crosstalk. The latest versions of Parallel ATA support up to 133 MB/s.
Ultra ATA
Ultra ATA, abbreviated UATA, is a designation that has been primarily used by Western Digital for different speed enhancements to the ATA/ATAPI standards. For example, in 2000 Western Digital published a document describing "Ultra ATA/100", which brought performance improvements for the then-current ATA/ATAPI-5 standard by improving the maximum speed of the Parallel ATA interface from 66 to 100 MB/s. Most of Western Digital's changes, along with others, were included in the ATA/ATAPI-6 standard (2002).
x86 BIOS size limitations
Initially, the size of an ATA drive was stored in the system x86 BIOS using a type number (1 through 45) that predefined the C/H/S parameters and often also the landing zone, in which the drive heads are parked while not in use. Later, a "user definable" format called C/H/S, for cylinders, heads, sectors, was made available. These numbers were important for the earlier ST-506 interface, but were generally meaningless for ATA—the CHS parameters for later large ATA drives often specified impossibly high numbers of heads or sectors that did not actually describe the internal physical layout of the drive at all. From the start, and up to ATA-2, every user had to specify explicitly how large every attached drive was. From ATA-2 on, an "identify drive" command was implemented that can be sent and which will return all drive parameters.
Owing to a lack of foresight by motherboard manufacturers, the system BIOS was often hobbled by artificial C/H/S size limitations due to the manufacturer assuming certain values would never exceed a particular numerical maximum. The first of these BIOS limits occurred when ATA drives reached sizes in excess of 504 MiB, because some motherboard BIOSes would not allow C/H/S values above 1024 cylinders, 16 heads, and 63 sectors. Multiplied by 512 bytes per sector, this totals 528,482,304 bytes which, divided by 1,048,576 bytes per MiB, equals 504 MiB (528 MB). The second of these BIOS limitations occurred at 1024 cylinders, 256 heads, and 63 sectors, with a problem in MS-DOS further limiting the number of heads to 255. This totals 8,422,686,720 bytes (8,032.5 MiB), commonly referred to as the 8.4 gigabyte barrier. This is again a limit imposed by x86 BIOSes, and not a limit imposed by the ATA interface.
It was eventually determined that these size limitations could be overridden with a small program loaded at startup from a hard drive's boot sector. Some hard drive manufacturers, such as Western Digital, started including these override utilities with large hard drives to help overcome these problems. However, if the computer was booted in some other manner without loading the special utility, the invalid BIOS settings would be used and the drive could either be inaccessible or appear to the operating system to be damaged.
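The two barriers above follow directly from multiplying the geometry limits by the sector size. The following Python sketch simply reproduces that arithmetic (illustrative only).

```python
SECTOR_SIZE = 512  # bytes per sector

def chs_capacity_bytes(cylinders: int, heads: int, sectors: int) -> int:
    """Capacity implied by a CHS geometry, in bytes."""
    return cylinders * heads * sectors * SECTOR_SIZE

# 504 MiB barrier: BIOS limit of 1024 cylinders, 16 heads, 63 sectors
limit_1 = chs_capacity_bytes(1024, 16, 63)
print(limit_1, limit_1 / 2**20)   # 528482304 bytes, 504.0 MiB

# 8.4 GB barrier: 1024 cylinders, 63 sectors, heads capped at 255 by MS-DOS
limit_2 = chs_capacity_bytes(1024, 255, 63)
print(limit_2, limit_2 / 2**20)   # 8422686720 bytes, 8032.5 MiB
```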
Later, an extension to the x86 BIOS disk services called the "Enhanced Disk Drive" (EDD) was made available, which makes it possible to address drives as large as 2^64 sectors.
Interface size limitations
The first drive interface used a 22-bit addressing mode, which resulted in a maximum drive capacity of two gigabytes. Later, the first formalized ATA specification used a 28-bit addressing mode through LBA28, allowing for the addressing of 2^28 (268,435,456) sectors (blocks) of 512 bytes each, resulting in a maximum capacity of 128 GiB (137 GB). ATA-6 introduced 48-bit addressing, increasing the limit to 128 PiB (144 PB). As a consequence, any ATA drive of capacity larger than about 137 GB must be an ATA-6 or later drive. Connecting such a drive to a host with an ATA-5 or earlier interface will limit the usable capacity to the maximum of the interface.
Some operating systems, including Windows XP pre-SP1 and Windows 2000 pre-SP3, disable LBA48 by default, requiring the user to take extra steps to use the entire capacity of an ATA drive larger than about 137 gigabytes. Older operating systems, such as Windows 98, do not support 48-bit LBA at all. However, members of the third-party group MSFN have modified the Windows 98 disk drivers to add unofficial support for 48-bit LBA to Windows 95 OSR2, Windows 98, Windows 98 SE and Windows ME. Some 16-bit and 32-bit operating systems supporting LBA48 may still not support disks larger than 2 TiB due to using 32-bit arithmetic only, a limitation also applying to many boot sectors.
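The addressing limits above can be checked with a few lines of arithmetic. The following Python sketch (illustrative only) computes the capacities implied by 22-, 28-, and 48-bit sector addressing with 512-byte sectors.

```python
SECTOR_SIZE = 512  # bytes per sector

def lba_capacity_bytes(address_bits: int) -> int:
    """Maximum capacity addressable with the given number of LBA address bits."""
    return (2 ** address_bits) * SECTOR_SIZE

for bits in (22, 28, 48):
    capacity = lba_capacity_bytes(bits)
    print(f"{bits}-bit addressing: {capacity} bytes ({capacity / 10**9:.0f} GB)")

# 22-bit ->   2 GiB (about 2.1 GB)
# 28-bit -> 128 GiB (about 137 GB)
# 48-bit -> 128 PiB (about 144 PB)
```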
Primacy and obsolescence
Parallel ATA (then simply called ATA or IDE) became the primary storage device interface for PCs soon after its introduction. In some systems, a third and fourth motherboard interface was provided, allowing up to eight ATA devices to be attached to the motherboard. Often, these additional connectors were implemented by inexpensive RAID controllers. Soon after the introduction of Serial ATA (SATA) in 2003, use of Parallel ATA declined. Some PCs and laptops of the era have a SATA hard disk and an optical drive connected to PATA. As of 2007, some PC chipsets, for example the Intel ICH10, had removed support for PATA. Motherboard vendors still wishing to offer Parallel ATA with those chipsets must include an additional interface chip. In more recent computers, the Parallel ATA interface is rarely used even if present, as four or more Serial ATA connectors are usually provided on the motherboard and SATA devices of all types are common. With Western Digital's withdrawal from the PATA market, hard disk drives with the PATA interface were no longer in production after December 2013 for other than specialty applications.
Interface
Parallel ATA cables transfer data 16 bits at a time. The traditional cable uses 40-pin female insulation displacement connectors (IDC) attached to a 40- or 80-conductor ribbon cable. Each cable has two or three connectors, one of which plugs into a host adapter interfacing with the rest of the computer system. The remaining connector(s) plug into storage devices, most commonly hard disk drives or optical drives. Each connector has 39 physical pins arranged into two rows (2.54 mm, 0.1-inch pitch), with a gap or key at pin 20. Earlier connectors may not have that gap, with all 40 pins available. Thus, later cables with the gap filled in are incompatible with earlier connectors, although earlier cables are compatible with later connectors.
Round parallel ATA cables (as opposed to ribbon cables) were eventually made available for 'case modders' for cosmetic reasons, as well as for claims of improved computer cooling and easier handling; however, only ribbon cables are supported by the ATA specifications.
Pin 20
In the ATA standard, pin 20 is defined as a mechanical key and is not used. The pin's socket on the female connector is often blocked, requiring pin 20 to be omitted from the male cable or drive connector; it is thus impossible to plug it in the wrong way round. However, some flash memory drives can use pin 20 as VCC_in to power the drive without requiring a special power cable; this feature can only be used if the equipment supports this use of pin 20.
Pin 28
Pin 28 of the gray (slave/middle) connector of an 80-conductor cable is not attached to any conductor of the cable. It is attached normally on the black (master drive end) and blue (motherboard end) connectors. This enables cable select functionality.
Pin 34
Pin 34 is connected to ground inside the blue connector of an 80-conductor cable but not attached to any conductor of the cable, allowing for detection of such a cable. It is attached normally on the gray and black connectors.
44-pin variant
A 44-pin variant PATA connector is used for 2.5 inch drives inside laptops. The pins are closer together (2.0 mm pitch) and the connector is physically smaller than the 40-pin connector. The extra pins carry power.
80-conductor variant
ATA cables have had 40 conductors for most of the interface's history (44 conductors for the smaller form-factor version used for 2.5" drives—the extra four for power), but an 80-conductor version appeared with the introduction of the UDMA/66 mode. All of the additional conductors in the new cable are grounds, interleaved with the signal conductors to reduce the effects of capacitive coupling between neighboring signal conductors, reducing crosstalk. Capacitive coupling is more of a problem at higher transfer rates, and this change was necessary to enable the 66 megabytes per second (MB/s) transfer rate of UDMA4 to work reliably. The faster UDMA5 and UDMA6 modes also require 80-conductor cables.
Though the number of conductors doubled, the number of connector pins and the pinout remain the same as 40-conductor cables, and the external appearance of the connectors is identical. Internally, the connectors are different; the connectors for the 80-conductor cable connect a larger number of ground conductors to the ground pins, while the connectors for the 40-conductor cable connect ground conductors to ground pins one-to-one. 80-conductor cables usually come with three differently colored connectors (blue, black, and gray for controller, master drive, and slave drive respectively), as opposed to the uniformly colored connectors of 40-conductor cables (commonly all gray). The gray connector on 80-conductor cables has pin 28 CSEL not connected, making it the slave position for drives configured as cable select.
Multiple devices on a cable
If two devices are attached to a single cable, one must be designated as Device 0 (in the past, commonly designated master) and the other as Device 1 (in the past, commonly designated as slave). This distinction is necessary to allow both drives to share the cable without conflict. The Device 0 drive is the drive that usually appears "first" to the computer's BIOS and/or operating system.
In most personal computers the drives are often designated "C:" for Device 0 and "D:" for Device 1, referring to one active primary partition on each. The mode that a device must use is often set by a jumper setting on the device itself, which must be manually set to Device 0 (Master) or Device 1 (Slave). If there is a single device on a cable, it should be configured as Device 0. However, some drives of a certain era (Western Digital drives in particular) have a special setting called Single for this configuration. Also, depending on the hardware and software available, a single drive on a cable will often work reliably even though configured as the Device 1 drive (most often seen where an optical drive is the only device on the secondary ATA interface). The words primary and secondary typically refer to the two IDE cables, which can have two drives each (primary master, primary slave, secondary master, secondary slave).
There are many debates about how much a slow device can impact the performance of a faster device on the same cable. On early ATA host adapters, both devices' data transfers can be constrained to the speed of the slower device, if two devices of different speed capabilities are on the same cable. For all modern ATA host adapters, this is not true, as modern ATA host adapters support independent device timing. This allows each device on the cable to transfer data at its own best speed. Even with earlier adapters without independent timing, this effect applies only to the data transfer phase of a read or write operation. This is caused by the omission of both overlapped and queued feature sets from most parallel ATA products.
Only one device on a cable can perform a read or write operation at one time; therefore, a fast device on the same cable as a slow device under heavy use will find it has to wait for the slow device to complete its task first. However, most modern devices will report write operations as complete once the data is stored in their onboard cache memory, before the data is written to the (slow) magnetic storage. This allows commands to be sent to the other device on the cable, reducing the impact of the "one operation at a time" limit. The impact of this on a system's performance depends on the application. For example, when copying data from an optical drive to a hard drive (such as during software installation), this effect probably will not matter. Such jobs are necessarily limited by the speed of the optical drive no matter where it is. But if the hard drive in question is also expected to provide good throughput for other tasks at the same time, it probably should not be on the same cable as the optical drive.
Cable select
A drive mode called cable select was described as optional in ATA-1 and has come into fairly widespread use with ATA-5 and later. A drive set to "cable select" automatically configures itself as Device 0 or Device 1, according to its position on the cable. Cable select is controlled by pin 28. The host adapter grounds this pin; if a device sees that the pin is grounded, it becomes the Device 0 (master) device; if it sees that pin 28 is open, the device becomes the Device 1 (slave) device. This setting is usually chosen by a jumper setting on the drive called "cable select", usually marked CS, which is separate from the Device 0/1 setting. If two drives are configured as Device 0 and Device 1 manually, this configuration does not need to correspond to their position on the cable.
Pin 28 is only used to let the drives know their position on the cable; it is not used by the host when communicating with the drives. In other words, the manual master/slave setting using jumpers on the drives takes precedence and allows them to be freely placed on either connector of the ribbon cable. With the 40-conductor cable, it was very common to implement cable select by simply cutting the pin 28 wire between the two device connectors, putting the slave Device 1 device at the end of the cable and the master Device 0 on the middle connector. This arrangement was eventually standardized in later versions. However, it had one drawback: if there is just one device (the master) on a two-drive cable, using the middle connector, this results in an unused stub of cable, which is undesirable for physical convenience and electrical reasons. The stub causes signal reflections, particularly at higher transfer rates.
Starting with the 80-conductor cable defined for use in ATAPI-5/UDMA4, the master Device 0 device goes at the far-from-the-host end of the cable on the black connector, the slave Device 1 goes on the grey middle connector, and the blue connector goes to the host (e.g. motherboard IDE connector, or IDE card). So, if there is only one (Device 0) device on a two-drive cable, using the black connector, there is no cable stub to cause reflections (the unused connector is now in the middle of the ribbon). Also, cable select is now implemented in the grey middle device connector, usually simply by omitting the pin 28 contact from the connector body.
Serialized, overlapped, and queued operations
The parallel ATA protocols up through ATA-3 require that once a command has been given on an ATA interface, it must complete before any subsequent command may be given. Operations on the devices must be serialized, with only one operation in progress at a time, with respect to the ATA host interface. A useful mental model is that the host ATA interface is busy with the first request for its entire duration, and therefore cannot be told about another request until the first one is complete. The function of serializing requests to the interface is usually performed by a device driver in the host operating system.
The ATA-4 and subsequent versions of the specification have included an "overlapped feature set" and a "queued feature set" as optional features, both being given the name "Tagged Command Queuing" (TCQ), a reference to a set of features from SCSI which the ATA version attempts to emulate. However, support for these is extremely rare in actual parallel ATA products and device drivers because these feature sets were implemented in such a way as to maintain software compatibility with the interface's heritage as, originally, an extension of the ISA bus. This implementation resulted in excessive CPU utilization, which largely negated the advantages of command queuing. By contrast, overlapped and queued operations have been common in other storage buses; in particular, SCSI's version of tagged command queuing had no need to be compatible with APIs designed for ISA, allowing it to attain high performance with low overhead on buses which supported first-party DMA, like PCI. This has long been seen as a major advantage of SCSI.
The Serial ATA standard has supported native command queueing (NCQ) since its first release, but it is an optional feature for both host adapters and target devices.
Many obsolete PC motherboards do not support NCQ, but modern SATA hard disk drives and SATA solid-state drives usually support NCQ. This is not the case for removable (CD/DVD) drives, because the ATAPI command set used to control them prohibits queued operations.
HDD passwords and security
ATA devices may support an optional security feature which is defined in an ATA specification, and is thus not specific to any brand or device. The security feature can be enabled and disabled by sending special ATA commands to the drive. If a device is locked, it will refuse all access until it is unlocked.
A device can have two passwords: a User Password and a Master Password; either or both may be set. There is a Master Password identifier feature which, if supported and used, can identify the current Master Password (without disclosing it). The Master Password, if set, can be used by the administrator to reset the User Password if the end user has forgotten it. On some laptops and business computers, the BIOS can control the ATA passwords.
A device can be locked in two modes: High security mode or Maximum security mode. Bit 8 in word 128 of the IDENTIFY response shows which mode the disk is in: 0 = High, 1 = Maximum. In High security mode, the device can be unlocked with either the User or Master password, using the "SECURITY UNLOCK DEVICE" ATA command. There is an attempt limit, normally set to 5, after which the disk must be power cycled or hard-reset before unlocking can be attempted again. Also in High security mode, the SECURITY ERASE UNIT command can be used with either the User or Master password.
In Maximum security mode, the device can be unlocked only with the User password. If the User password is not available, the only remaining way to get at least the bare hardware back to a usable state is to issue the SECURITY ERASE PREPARE command, immediately followed by SECURITY ERASE UNIT. In Maximum security mode, the SECURITY ERASE UNIT command requires the Master password and will completely erase all data on the disk. Word 89 in the IDENTIFY response indicates how long the operation will take.
While the ATA lock is intended to be impossible to defeat without a valid password, there are purported workarounds to unlock a device. For NVMe drives, the security features, including lock passwords, were defined in the OPAL standard. For sanitizing entire disks, the built-in Secure Erase command is effective when implemented correctly. There have been a few reported instances of failures to erase some or all data. On some laptops and business computers, the BIOS can utilize Secure Erase to erase all data on the disk.
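The mode bit described above can be read directly from the IDENTIFY data. The following Python fragment is a minimal, hypothetical sketch that only applies the bit-8 rule stated in this section; it is not a complete IDENTIFY parser, and the example word values are invented.

```python
def ata_security_level(identify_word_128: int) -> str:
    """Return the ATA security mode encoded in bit 8 of IDENTIFY word 128.

    Per the rule above: bit 8 = 0 means High security mode,
    bit 8 = 1 means Maximum security mode.
    Other bits of word 128 carry further security flags and are ignored here.
    """
    return "Maximum" if (identify_word_128 >> 8) & 1 else "High"

# Invented example values for illustration only:
print(ata_security_level(0x0101))  # bit 8 set   -> "Maximum"
print(ata_security_level(0x0001))  # bit 8 clear -> "High"
```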
External parallel ATA devices
Due to the short cable length specification and shielding issues, it is extremely uncommon to find external PATA devices that directly use PATA for connection to a computer. A device connected externally needs additional cable length to form a U-shaped bend so that the external device may be placed alongside, or on top of, the computer case, and the standard cable length is too short to permit this. For ease of reach from motherboard to device, the connectors tend to be positioned towards the front edge of motherboards, for connection to devices protruding from the front of the computer case. This front-edge position makes extension out the back to an external device even more difficult. Ribbon cables are poorly shielded, and the standard relies upon the cabling being installed inside a shielded computer case to meet RF emissions limits.
External hard disk drives or optical disk drives that have an internal PATA interface use some other interface technology to bridge the distance between the external device and the computer. USB is the most common external interface, followed by FireWire. A bridge chip inside the external devices converts from the USB interface to PATA, and typically only supports a single external device without cable select or master/slave.
Specifications
The following table shows the names of the versions of the ATA standards and the transfer modes and rates supported by each. Note that the transfer rate for each mode (for example, 66.7 MB/s for UDMA4, commonly called "Ultra-DMA 66", defined by ATA-5) gives its maximum theoretical transfer rate on the cable. This is simply two bytes multiplied by the effective clock rate, and presumes that every clock cycle is used to transfer end-user data. In practice, of course, protocol overhead reduces this value.
Congestion on the host bus to which the ATA adapter is attached may also limit the maximum burst transfer rate. For example, the maximum data transfer rate for the conventional PCI bus is 133 MB/s, and this is shared among all active devices on the bus.
In addition, no ATA hard drives existed in 2005 that were capable of measured sustained transfer rates of above 80 MB/s. Furthermore, sustained transfer rate tests do not give realistic throughput expectations for most workloads: they use I/O loads specifically designed to encounter almost no delays from seek time or rotational latency. Hard drive performance under most workloads is limited first and second by those two factors; the transfer rate on the bus is a distant third in importance. Therefore, transfer speed limits above 66 MB/s really affect performance only when the hard drive can satisfy all I/O requests by reading from its internal cache—a very unusual situation, especially considering that such data is usually already buffered by the operating system.
More recent mechanical hard disk drives can transfer data at up to 524 MB/s, which is far beyond the capabilities of the PATA/133 specification. High-performance solid state drives can transfer data at up to 7000–7500 MB/s.
Only the Ultra DMA modes use CRC to detect errors in data transfer between the controller and drive. This is a 16-bit CRC, and it is used for data blocks only. Transmission of command and status blocks does not use the fast signaling methods that would necessitate CRC. For comparison, in Serial ATA, 32-bit CRC is used for both commands and data.
Features introduced with each ATA revision
Speed of defined transfer modes
Related standards, features, and proposals
ATAPI Removable Media Device (ARMD)
ATAPI devices with removable media, other than CD and DVD drives, are classified as ARMD (ATAPI Removable Media Device) and can appear as either a super-floppy (non-partitioned media) or a hard drive (partitioned media) to the operating system. These can be supported as bootable devices by a BIOS complying with the ATAPI Removable Media Device BIOS Specification, originally developed by Compaq Computer Corporation and Phoenix Technologies. It specifies provisions in the BIOS of a personal computer to allow the computer to be bootstrapped from devices such as Zip drives, Jaz drives, SuperDisk (LS-120) drives, and similar devices. These devices have removable media like floppy disk drives, but capacities more commensurate with hard drives, and programming requirements unlike either.
Due to limitations in the floppy controller interface, most of these devices were ATAPI devices, connected to one of the host computer's ATA interfaces, similarly to a hard drive or CD-ROM device. However, existing BIOS standards did not support these devices. An ARMD-compliant BIOS allows these devices to be booted from and used under the operating system without requiring device-specific code in the OS.
A BIOS implementing ARMD allows the user to include ARMD devices in the boot search order. Usually an ARMD device is configured earlier in the boot order than the hard drive. Similarly to a floppy drive, if bootable media is present in the ARMD drive, the BIOS will boot from it; if not, the BIOS will continue in the search order, usually with the hard drive last.
There are two variants of ARMD, ARMD-FDD and ARMD-HDD. Originally, ARMD caused the devices to appear as a sort of very large floppy drive, either the primary floppy drive device 00h or the secondary device 01h. Some operating systems required code changes to support floppy disks with capacities far larger than any standard floppy disk drive. Also, standard floppy disk drive emulation proved to be unsuitable for certain high-capacity floppy disk drives such as Iomega Zip drives. Later, the ARMD-HDD ("ARMD hard disk device") variant was developed to address these issues. Under ARMD-HDD, an ARMD device appears to the BIOS and the operating system as a hard drive.
ATA over Ethernet
In August 2004, Sam Hopkins and Brantley Coile of Coraid specified a lightweight ATA over Ethernet protocol to carry ATA commands over Ethernet instead of directly connecting drives to a PATA host adapter. This permitted the established block protocol to be reused in storage area network (SAN) applications.
Compact Flash
Compact Flash in its IDE mode is essentially a miniaturized ATA interface, intended for use on devices that use flash memory storage. No interfacing chips or circuitry are required, other than to directly adapt the smaller CF socket onto the larger ATA connector (although most CF cards only support IDE mode up to PIO4, making them much slower in IDE mode than the speed they are capable of in CF mode). The ATA connector specification does not include pins for supplying power to a CF device, so power is inserted into the connector from a separate source. The exception to this is when the CF device is connected to a 44-pin ATA bus designed for 2.5-inch hard disk drives, commonly found in notebook computers, as this bus implementation must provide power to a standard hard disk drive.
CF devices can be designated as devices 0 or 1 on an ATA interface, though since most CF devices offer only a single socket, it is not necessary to offer this selection to end users. Although CF can be hot-pluggable with additional design methods, by default, when wired directly to an ATA interface, it is not intended to be hot-pluggable.
Technology
Computer hardware
null
2787
https://en.wikipedia.org/wiki/Astrobiology
Astrobiology
Astrobiology (also xenology or exobiology) is a scientific field within the life and environmental sciences that studies the origins, early evolution, distribution, and future of life in the universe by investigating its deterministic conditions and contingent events. As a discipline, astrobiology is founded on the premise that life may exist beyond Earth. Research in astrobiology comprises three main areas: the study of habitable environments in the Solar System and beyond, the search for planetary biosignatures of past or present extraterrestrial life, and the study of the origin and early evolution of life on Earth.
The field of astrobiology has its origins in the 20th century with the advent of space exploration and the discovery of exoplanets. Early astrobiology research focused on the search for extraterrestrial life and the study of the potential for life to exist on other planets. In the 1960s and 1970s, NASA began its astrobiology pursuits within the Viking program, which was the first US mission to land on Mars and search for signs of life. This mission, along with other early space exploration missions, laid the foundation for the development of astrobiology as a discipline.
Regarding habitable environments, astrobiology investigates potential locations beyond Earth that could support life, such as Mars, Europa, and exoplanets, through research into the extremophiles populating austere environments on Earth, like volcanic and deep sea environments. Research within this topic is conducted utilising the methodology of the geosciences, especially geobiology, for astrobiological applications. The search for biosignatures involves the identification of signs of past or present life in the form of organic compounds, isotopic ratios, or microbial fossils. Research within this topic is conducted utilising the methodology of planetary and environmental science, especially atmospheric science, for astrobiological applications, and is often conducted through remote sensing and in situ missions. Astrobiology also concerns the study of the origin and early evolution of life on Earth to try to understand the conditions that are necessary for life to form on other planets. This research seeks to understand how life emerged from non-living matter and how it evolved to become the diverse array of organisms we see today.
Astrobiology is a rapidly developing field with a strong interdisciplinary aspect that holds many challenges and opportunities for scientists. Astrobiology programs and research centres are present in many universities and research institutions around the world, and space agencies like NASA and ESA have dedicated departments and programs for astrobiology research.
Overview
The term astrobiology was first proposed by the Russian astronomer Gavriil Tikhov in 1953. It is etymologically derived from the Greek ἄστρον (astron), "star"; βίος (bios), "life"; and -λογία (-logia), "study". A close synonym is exobiology, from the Greek ἔξω, "external"; βίος, "life"; and -λογία, "study", coined by American molecular biologist Joshua Lederberg; exobiology is considered to have a narrower scope, limited to the search for life external to Earth.
Another associated term is xenobiology, from the Greek ξένος, "foreign"; βίος, "life"; and -λογία, "study", coined by American science fiction writer Robert Heinlein in his work The Star Beast; xenobiology is now used in a more specialised sense, referring to 'biology based on foreign chemistry', whether of extraterrestrial or terrestrial (typically synthetic) origin.
While the potential for extraterrestrial life, especially intelligent life, has been explored throughout human history within philosophy and narrative, the question of whether such life exists is a verifiable hypothesis and thus a valid line of scientific inquiry; planetary scientist David Grinspoon calls it a field of natural philosophy, grounding speculation on the unknown in known scientific theory.
The modern field of astrobiology can be traced back to the 1950s and 1960s with the advent of space exploration, when scientists began to seriously consider the possibility of life on other planets. In 1957, the Soviet Union launched Sputnik 1, the first artificial satellite, which marked the beginning of the Space Age. This event led to an increase in the study of the potential for life on other planets, as scientists began to consider the possibilities opened up by the new technology of space exploration. In 1959, NASA funded its first exobiology project, and in 1960, NASA founded the Exobiology Program, now one of four main elements of NASA's current Astrobiology Program. In 1971, NASA funded Project Cyclops, part of the search for extraterrestrial intelligence, to search radio frequencies of the electromagnetic spectrum for interstellar communications transmitted by extraterrestrial life outside the Solar System. In the 1960s-1970s, NASA established the Viking program, which was the first US mission to land on Mars and search for metabolic signs of present life; the results were inconclusive.
In the 1980s and 1990s, the field began to expand and diversify as new discoveries and technologies emerged. The discovery of microbial life in extreme environments on Earth, such as deep-sea hydrothermal vents, helped to clarify the feasibility of potential life existing in harsh conditions. The development of new techniques for the detection of biosignatures, such as the use of stable isotopes, also played a significant role in the evolution of the field.
The contemporary landscape of astrobiology emerged in the early 21st century, focused on utilising Earth and environmental science for applications within comparable space environments. Missions included the ESA's Beagle 2, which failed minutes after landing on Mars, NASA's Phoenix lander, which probed the environment for past and present planetary habitability of microbial life on Mars and researched the history of water, and NASA's Curiosity rover, currently probing the environment for past and present planetary habitability of microbial life on Mars.
Theoretical foundations
Planetary habitability
Astrobiological research makes a number of simplifying assumptions when studying the necessary components for planetary habitability.
Carbon and organic compounds: Carbon is the fourth most abundant element in the universe, and the energy required to make or break a bond is at just the appropriate level for building molecules which are not only stable, but also reactive. The fact that carbon atoms bond readily to other carbon atoms allows for the building of extremely long and complex molecules.
As such, astrobiological research presumes that the vast majority of life forms in the Milky Way galaxy are based on carbon chemistries, as are all life forms on Earth. However, theoretical astrobiology entertains the potential for other organic molecular bases for life, so astrobiological research often focuses on identifying environments that have the potential to support life based on the presence of organic compounds.
Liquid water: Liquid water is a common molecule that provides an excellent environment for the formation of complicated carbon-based molecules, and is generally considered necessary for life as we know it to exist. Thus, astrobiological research presumes that extraterrestrial life similarly depends upon access to liquid water, and often focuses on identifying environments that have the potential to support liquid water. Some researchers posit environments of water-ammonia mixtures as possible solvents for hypothetical types of biochemistry.
Environmental stability: Because organisms adaptively evolve to the conditions of the environments in which they reside, environmental stability is considered necessary for life to exist. This presupposes the necessity of stable temperature, pressure, and radiation levels; as a result, astrobiological research focuses on planets orbiting Sun-like stars and red dwarf stars. This is because very large stars have relatively short lifetimes, meaning that life might not have time to emerge on planets orbiting them; very small stars provide so little heat and warmth that only planets in very close orbits around them would not be frozen solid, and in such close orbits these planets would be tidally locked to the star; whereas the long lifetimes of red dwarfs could allow the development of habitable environments on planets with thick atmospheres. This is significant as red dwarfs are extremely common.
Physical sciences
Astronomy basics
Astronomy
2792
https://en.wikipedia.org/wiki/Anthropic%20principle
Anthropic principle
The anthropic principle, also known as the observation selection effect, is the proposition that the range of possible observations that could be made about the universe is limited by the fact that observations are only possible in the type of universe that is capable of developing intelligent life. Proponents of the anthropic principle argue that it explains why the universe has the age and the fundamental physical constants necessary to accommodate intelligent life. If either had been significantly different, no one would have been around to make observations. Anthropic reasoning has been used to address the question of why certain measured physical constants take the values that they do, rather than some other arbitrary values, and to explain a perception that the universe appears to be finely tuned for the existence of life.
There are many different formulations of the anthropic principle. Philosopher Nick Bostrom counts thirty, but the underlying principles can be divided into "weak" and "strong" forms, depending on the types of cosmological claims they entail.
Definition and basis
The principle was formulated as a response to a series of observations that the laws of nature and parameters of the universe have values that are consistent with conditions for life as it is known, rather than values that would not be consistent with life on Earth. The anthropic principle states that this is an a posteriori necessity, because if life were impossible, no living entity would be there to observe it, and thus it would not be known. That is, it must be possible to observe some universe, and hence, the laws and constants of any such universe must accommodate that possibility.
The term anthropic in "anthropic principle" has been argued to be a misnomer. While it singles out the currently observable kind of carbon-based life, none of the finely tuned phenomena require human life or some kind of carbon chauvinism. Any form of life or any form of heavy atom, stone, star, or galaxy would do; nothing specifically human or anthropic is involved.
The anthropic principle has given rise to some confusion and controversy, partly because the phrase has been applied to several distinct ideas. All versions of the principle have been accused of discouraging the search for a deeper physical understanding of the universe. Critics of the weak anthropic principle point out that its lack of falsifiability entails that it is non-scientific and therefore inherently not useful. Stronger variants of the anthropic principle, which are not tautologies, can still make claims considered controversial by some; these would be contingent upon empirical verification.
Anthropic observations
In 1961, Robert Dicke noted that the age of the universe, as seen by living observers, cannot be random. Instead, biological factors constrain the universe to be more or less in a "golden age", neither too young nor too old. If the universe were one tenth as old as its present age, there would not have been sufficient time to build up appreciable levels of metallicity (levels of elements besides hydrogen and helium), especially carbon, by nucleosynthesis; small rocky planets would not yet have existed. If the universe were 10 times older than it actually is, most stars would be too old to remain on the main sequence and would have turned into white dwarfs, aside from the dimmest red dwarfs, and stable planetary systems would have already come to an end.
Thus, Dicke explained the coincidence between large dimensionless numbers constructed from the constants of physics and the age of the universe, a coincidence that inspired Dirac's varying-G theory. Dicke later reasoned that the density of matter in the universe must be almost exactly the critical density needed to prevent the Big Crunch (the "Dicke coincidences" argument). The most recent measurements may suggest that the observed density of baryonic matter, and some theoretical predictions of the amount of dark matter, account for about 30% of this critical density, with the rest contributed by a cosmological constant. Steven Weinberg gave an anthropic explanation for this fact: he noted that the cosmological constant has a remarkably low value, some 120 orders of magnitude smaller than the value particle physics predicts (this has been described as the "worst prediction in physics"). However, if the cosmological constant were only several orders of magnitude larger than its observed value, the universe would suffer catastrophic inflation, which would preclude the formation of stars, and hence life.
The observed values of the dimensionless physical constants (such as the fine-structure constant) governing the four fundamental interactions are balanced as if fine-tuned to permit the formation of commonly found matter and subsequently the emergence of life. A slight increase in the strong interaction (up to 50% for some authors) would bind the dineutron and the diproton and convert all hydrogen in the early universe to helium; likewise, an increase in the weak interaction also would convert all hydrogen to helium. Water, as well as sufficiently long-lived stable stars, both essential for the emergence of life as it is known, would not exist. More generally, small changes in the relative strengths of the four fundamental interactions can greatly affect the universe's age, structure, and capacity for life.
Origin
The phrase "anthropic principle" first appeared in Brandon Carter's contribution to a 1973 Kraków symposium honouring Copernicus's 500th birthday. Carter, a theoretical astrophysicist, articulated the Anthropic Principle in reaction to the Copernican Principle, which states that humans do not occupy a privileged position in the Universe. Carter said: "Although our situation is not necessarily central, it is inevitably privileged to some extent." Specifically, Carter disagreed with using the Copernican principle to justify the Perfect Cosmological Principle, which states that all large regions and times in the universe must be statistically identical. The latter principle underlies the steady-state theory, which had recently been falsified by the 1965 discovery of the cosmic microwave background radiation. This discovery was unequivocal evidence that the universe has changed radically over time (for example, via the Big Bang).
Carter defined two forms of the anthropic principle, a "weak" one which referred only to anthropic selection of privileged spacetime locations in the universe, and a more controversial "strong" form that addressed the values of the fundamental constants of physics. Roger Penrose explained the weak form as follows: One reason this is plausible is that there are many other places and times in which humans could have evolved. But when applying the strong principle, there is only one universe, with one set of fundamental parameters, so what exactly is the point being made?
Carter offers two possibilities: First, humans can use their own existence to make "predictions" about the parameters. But second, "as a last resort", humans can convert these predictions into explanations by assuming that there is more than one universe, in fact a large and possibly infinite collection of universes, something that is now called the multiverse ("world ensemble" was Carter's term), in which the parameters (and perhaps the laws of physics) vary across universes. The strong principle then becomes an example of a selection effect, exactly analogous to the weak principle. Postulating a multiverse is certainly a radical step, but taking it could provide at least a partial answer to a question seemingly out of the reach of normal science: "Why do the fundamental laws of physics take the particular form we observe and not another?"
Since Carter's 1973 paper, the term anthropic principle has been extended to cover a number of ideas that differ in important ways from his. Particular confusion was caused by the 1986 book The Anthropic Cosmological Principle by John D. Barrow and Frank Tipler, which distinguished between a "weak" and "strong" anthropic principle in a way very different from Carter's, as discussed in the next section.
Carter was not the first to invoke some form of the anthropic principle. In fact, the evolutionary biologist Alfred Russel Wallace anticipated the anthropic principle as long ago as 1904: "Such a vast and complex universe as that which we know exists around us, may have been absolutely required [...] in order to produce a world that should be precisely adapted in every detail for the orderly development of life culminating in man." In 1957, Robert Dicke wrote: "The age of the Universe 'now' is not random but conditioned by biological factors [...] [changes in the values of the fundamental constants of physics] would preclude the existence of man to consider the problem."
Ludwig Boltzmann may have been one of the first in modern science to use anthropic reasoning. Prior to knowledge of the Big Bang, Boltzmann's thermodynamic concepts painted a picture of a universe that had inexplicably low entropy. Boltzmann suggested several explanations, one of which relied on fluctuations that could produce pockets of low entropy, or Boltzmann universes. While most of the universe is featureless in this model, to Boltzmann, it is unremarkable that humanity happens to inhabit a Boltzmann universe, as that is the only place where intelligent life could be.
Variants
Weak anthropic principle (WAP) (Carter): "... our location in the universe is necessarily privileged to the extent of being compatible with our existence as observers." For Carter, "location" refers to our location in time as well as space.
Strong anthropic principle (SAP) (Carter): "[T]he universe (and hence the fundamental parameters on which it depends) must be such as to admit the creation of observers within it at some stage. To paraphrase Descartes, cogito ergo mundus talis est." The Latin tag ("I think, therefore the world is such [as it is]") makes it clear that "must" indicates a deduction from the fact of our existence; the statement is thus a truism.
In their 1986 book, The Anthropic Cosmological Principle, John Barrow and Frank Tipler depart from Carter and define the WAP and SAP as follows:
Weak anthropic principle (WAP) (Barrow and Tipler): "The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirements that the universe be old enough for it to have already done so." Unlike Carter, they restrict the principle to carbon-based life, rather than just "observers". A more important difference is that they apply the WAP to the fundamental physical constants, such as the fine-structure constant, the number of spacetime dimensions, and the cosmological constant—topics that fall under Carter's SAP.
Strong anthropic principle (SAP) (Barrow and Tipler): "The Universe must have those properties which allow life to develop within it at some stage in its history." This looks very similar to Carter's SAP, but unlike the case with Carter's SAP, the "must" is an imperative, as shown by the following three possible elaborations of the SAP, each proposed by Barrow and Tipler:
"There exists one possible Universe 'designed' with the goal of generating and sustaining 'observers'." This can be seen as simply the classic design argument restated in the garb of contemporary cosmology. It implies that the purpose of the universe is to give rise to intelligent life, with the laws of nature and their fundamental physical constants set to ensure that life emerges and evolves.
"Observers are necessary to bring the Universe into being." Barrow and Tipler believe that this is a valid conclusion from quantum mechanics, as John Archibald Wheeler has suggested, especially via his idea that information is the fundamental reality (see It from bit) and his Participatory anthropic principle (PAP), which is an interpretation of quantum mechanics associated with the ideas of John von Neumann and Eugene Wigner.
"An ensemble of other different universes is necessary for the existence of our Universe." By contrast, Carter merely says that an ensemble of universes is necessary for the SAP to count as an explanation.
The philosophers John Leslie and Nick Bostrom reject the Barrow and Tipler SAP as a fundamental misreading of Carter. For Bostrom, Carter's anthropic principle just warns us to make allowance for anthropic bias—that is, the bias created by anthropic selection effects (which Bostrom calls "observation" selection effects)—the necessity for observers to exist in order to get a result. He writes:
Strong self-sampling assumption (SSSA) (Bostrom): "Each observer-moment should reason as if it were randomly selected from the class of all observer-moments in its reference class." Analysing an observer's experience into a sequence of "observer-moments" helps avoid certain paradoxes; but the main ambiguity is the selection of the appropriate "reference class": for Carter's WAP this might correspond to all real or potential observer-moments in our universe; for the SAP, to all in the multiverse. Bostrom's mathematical development shows that choosing either too broad or too narrow a reference class leads to counter-intuitive results, but he is not able to prescribe an ideal choice.
According to Jürgen Schmidhuber, the anthropic principle essentially just says that the conditional probability of finding yourself in a universe compatible with your existence is always 1.
It does not allow for any additional nontrivial predictions such as "gravity won't change tomorrow". To gain more predictive power, additional assumptions on the prior distribution of alternative universes are necessary. Playwright and novelist Michael Frayn describes a form of the strong anthropic principle in his 2006 book The Human Touch, which explores what he characterises as "the central oddity of the Universe": Character of anthropic reasoning Carter chose to focus on a tautological aspect of his ideas, which has resulted in much confusion. In fact, anthropic reasoning interests scientists because of something that is only implicit in the above formal definitions, namely that humans should give serious consideration to there being other universes with different values of the "fundamental parameters"—that is, the dimensionless physical constants and initial conditions for the Big Bang. Carter and others have argued that life would not be possible in most such universes. In other words, the universe humans live in is fine tuned to permit life. Collins & Hawking (1973) characterized Carter's then-unpublished big idea as the postulate that "there is not one universe but a whole infinite ensemble of universes with all possible initial conditions". If this is granted, the anthropic principle provides a plausible explanation for the fine tuning of our universe: the "typical" universe is not fine-tuned, but given enough universes, a small fraction will be capable of supporting intelligent life. Ours must be one of these, and so the observed fine tuning should be no cause for wonder. Although philosophers have discussed related concepts for centuries, in the early 1970s the only genuine physical theory yielding a multiverse of sorts was the many-worlds interpretation of quantum mechanics. This would allow variation in initial conditions, but not in the truly fundamental constants. Since that time a number of mechanisms for producing a multiverse have been suggested: see the review by Max Tegmark. An important development in the 1980s was the combination of inflation theory with the hypothesis that some parameters are determined by symmetry breaking in the early universe, which allows parameters previously thought of as "fundamental constants" to vary over very large distances, thus eroding the distinction between Carter's weak and strong principles. At the beginning of the 21st century, the string landscape emerged as a mechanism for varying essentially all the constants, including the number of spatial dimensions. The anthropic idea that fundamental parameters are selected from a multitude of different possibilities (each actual in some universe or other) contrasts with the traditional hope of physicists for a theory of everything having no free parameters. As Albert Einstein said: "What really interests me is whether God had any choice in the creation of the world." In 2002, some proponents of the leading candidate for a "theory of everything", string theory, proclaimed "the end of the anthropic principle" since there would be no free parameters to select. In 2003, however, Leonard Susskind stated: "... it seems plausible that the landscape is unimaginably large and diverse. This is the behavior that gives credence to the anthropic principle." The modern form of a design argument is put forth by intelligent design. 
Proponents of intelligent design often cite the fine-tuning observations that (in part) preceded the formulation of the anthropic principle by Carter as a proof of an intelligent designer. Opponents of intelligent design are not limited to those who hypothesize that other universes exist; they may also argue, anti-anthropically, that the universe is less fine-tuned than often claimed, or that accepting fine tuning as a brute fact is less astonishing than the idea of an intelligent creator. Furthermore, even accepting fine tuning, Sober (2005) and Ikeda and Jefferys, argue that the anthropic principle as conventionally stated actually undermines intelligent design. Paul Davies's book The Goldilocks Enigma (2006) reviews the current state of the fine-tuning debate in detail, and concludes by enumerating the following responses to that debate: The absurd universe: Our universe just happens to be the way it is. The unique universe: There is a deep underlying unity in physics that necessitates the Universe being the way it is. A Theory of Everything will explain why the various features of the Universe must have exactly the values that have been recorded. The multiverse: Multiple universes exist, having all possible combinations of characteristics, and humans inevitably find themselves within a universe that allows us to exist. Intelligent design: A creator designed the Universe with the purpose of supporting complexity and the emergence of intelligence. The life principle: There is an underlying principle that constrains the Universe to evolve towards life and mind. The self-explaining universe: A closed explanatory or causal loop: "perhaps only universes with a capacity for consciousness can exist". This is Wheeler's participatory anthropic principle (PAP). The fake universe: Humans live inside a virtual reality simulation. Omitted here is Lee Smolin's model of cosmological natural selection, also known as fecund universes, which proposes that universes have "offspring" that are more plentiful if they resemble our universe. Also see Gardner (2005). Clearly each of these hypotheses resolve some aspects of the puzzle, while leaving others unanswered. Followers of Carter would admit only option 3 as an anthropic explanation, whereas 3 through 6 are covered by different versions of Barrow and Tipler's SAP (which would also include 7 if it is considered a variant of 4, as in Tipler 1994). The anthropic principle, at least as Carter conceived it, can be applied on scales much smaller than the whole universe. For example, Carter (1983) inverted the usual line of reasoning and pointed out that when interpreting the evolutionary record, one must take into account cosmological and astrophysical considerations. With this in mind, Carter concluded that given the best estimates of the age of the universe, the evolutionary chain culminating in Homo sapiens probably admits only one or two low probability links. Observational evidence No possible observational evidence bears on Carter's WAP, as it is merely advice to the scientist and asserts nothing debatable. The obvious test of Barrow's SAP, which says that the universe is "required" to support life, is to find evidence of life in universes other than ours. Any other universe is, by most definitions, unobservable (otherwise it would be included in our portion of this universe). Thus, in principle Barrow's SAP cannot be falsified by observing a universe in which an observer cannot exist. 
Philosopher John Leslie states that the Carter SAP (with multiverse) predicts the following: Physical theory will evolve so as to strengthen the hypothesis that early phase transitions occur probabilistically rather than deterministically, in which case there will be no deep physical reason for the values of fundamental constants; Various theories for generating multiple universes will prove robust; Evidence that the universe is fine tuned will continue to accumulate; No life with a non-carbon chemistry will be discovered; Mathematical studies of galaxy formation will confirm that it is sensitive to the rate of expansion of the universe. Hogan has emphasised that it would be very strange if all fundamental constants were strictly determined, since this would leave us with no ready explanation for apparent fine tuning. In fact, humans might have to resort to something akin to Barrow and Tipler's SAP: there would be no option for such a universe not to support life. Probabilistic predictions of parameter values can be made given: a particular multiverse with a "measure", i.e. a well defined "density of universes" (so, for parameter X, one can calculate the prior probability P(X0) dX that X is in the range (X0, X0 + dX)), and an estimate of the number of observers in each universe, N(X) (e.g., this might be taken as proportional to the number of stars in the universe). The probability of observing value X is then proportional to N(X) P(X) dX. A generic feature of an analysis of this nature is that the expected values of the fundamental physical constants should not be "over-tuned", i.e. if there is some perfectly tuned predicted value (e.g. zero), the observed value need be no closer to that predicted value than what is required to make life possible. The small but finite value of the cosmological constant can be regarded as a successful prediction in this sense. One thing that would not count as evidence for the anthropic principle is evidence that the Earth or the Solar System occupied a privileged position in the universe, in violation of the Copernican principle (for possible counterevidence to this principle, see Copernican principle), unless there was some reason to think that that position was a necessary condition for our existence as observers. Applications of the principle The nucleosynthesis of carbon-12 Fred Hoyle may have invoked anthropic reasoning to predict an astrophysical phenomenon. He is said to have reasoned, from the prevalence on Earth of life forms whose chemistry was based on carbon-12 nuclei, that there must be an undiscovered resonance in the carbon-12 nucleus facilitating its synthesis in stellar interiors via the triple-alpha process. He then calculated the energy of this undiscovered resonance to be 7.6 million electronvolts. Willie Fowler's research group soon found this resonance, and its measured energy was close to Hoyle's prediction. However, in 2010 Helge Kragh argued that Hoyle did not use anthropic reasoning in making his prediction, since he made his prediction in 1953 and anthropic reasoning did not come into prominence until 1980. He called this an "anthropic myth", saying that Hoyle and others made an after-the-fact connection between carbon and life decades after the discovery of the resonance. Cosmic inflation Don Page criticized the entire theory of cosmic inflation as follows. 
He emphasized that initial conditions that made possible a thermodynamic arrow of time in a universe with a Big Bang origin, must include the assumption that at the initial singularity, the entropy of the universe was low and therefore extremely improbable. Paul Davies rebutted this criticism by invoking an inflationary version of the anthropic principle. While Davies accepted the premise that the initial state of the visible universe (which filled a microscopic amount of space before inflating) had to possess a very low entropy value—due to random quantum fluctuations—to account for the observed thermodynamic arrow of time, he deemed this fact an advantage for the theory. That the tiny patch of space from which our observable universe grew had to be extremely orderly, to allow the post-inflation universe to have an arrow of time, makes it unnecessary to adopt any "ad hoc" hypotheses about the initial entropy state, hypotheses other Big Bang theories require. String theory String theory predicts a large number of possible universes, called the "backgrounds" or "vacua". The set of these vacua is often called the "multiverse" or "anthropic landscape" or "string landscape". Leonard Susskind has argued that the existence of a large number of vacua puts anthropic reasoning on firm ground: only universes whose properties are such as to allow observers to exist are observed, while a possibly much larger set of universes lacking such properties go unnoticed. Steven Weinberg believes the anthropic principle may be appropriated by cosmologists committed to nontheism, and refers to that principle as a "turning point" in modern science because applying it to the string landscape "may explain how the constants of nature that we observe can take values suitable for life without being fine-tuned by a benevolent creator". Others—most notably David Gross but also Luboš Motl, Peter Woit, and Lee Smolin—argue that this is not predictive. Max Tegmark, Mario Livio, and Martin Rees argue that only some aspects of a physical theory need be observable and/or testable for the theory to be accepted, and that many well-accepted theories are far from completely testable at present. Jürgen Schmidhuber (2000–2002) points out that Ray Solomonoff's theory of universal inductive inference and its extensions already provide a framework for maximizing our confidence in any theory, given a limited sequence of physical observations, and some prior distribution on the set of possible explanations of the universe. Zhi-Wei Wang and Samuel L. Braunstein proved that life's existence in the universe depends on various fundamental constants. It suggests that without a complete understanding of these constants, one might incorrectly perceive the universe as being intelligently designed for life. This perspective challenges the view that our universe is unique in its ability to support life. Dimensions of spacetime There are two kinds of dimensions: spatial (bidirectional) and temporal (unidirectional). Let the number of spatial dimensions be N and the number of temporal dimensions be T. That and , setting aside the compactified dimensions invoked by string theory and undetectable to date, can be explained by appealing to the physical consequences of letting N differ from 3 and T differ from 1. The argument is often of an anthropic character and possibly the first of its kind, albeit before the complete concept came into vogue. 
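The orbital-stability part of this argument, elaborated below, can be previewed with a small numerical experiment. The sketch that follows is an added illustration, not drawn from the sources cited in this article: it integrates a slightly perturbed circular orbit under a central force falling off as 1/r^(N-1), the natural generalization of the inverse-square law to N spatial dimensions, using arbitrary units with GM = 1 and an arbitrary perturbation size.

```python
# Illustration (assumed units GM = 1; step size and kick are arbitrary choices):
# in N spatial dimensions a point mass is attracted by a central force |F| ~ 1/r**(N-1).
# A slightly perturbed circular orbit stays bounded for N = 3 but not for N = 4.
import math

def simulate(N, r0=1.0, kick=0.05, dt=1e-4, steps=500_000):
    """Integrate a planar orbit whose acceleration is -(x, y)/r**N, i.e. |F| = 1/r**(N-1)."""
    x, y = r0, 0.0
    # Tangential speed of a circular orbit: v**2/r = 1/r**(N-1)  =>  v = r**(-(N-2)/2)
    vx, vy = kick, r0 ** (-(N - 2) / 2)   # small radial kick added to the circular orbit
    r_min = r_max = r0
    for _ in range(steps):
        r = math.hypot(x, y)
        ax, ay = -x / r**N, -y / r**N     # central attraction
        vx += ax * dt
        vy += ay * dt
        x += vx * dt                      # semi-implicit Euler step
        y += vy * dt
        r_min, r_max = min(r_min, r), max(r_max, r)
    return r_min, r_max

for N in (3, 4):
    lo, hi = simulate(N)
    print(f"N = {N}: orbital radius stayed within [{lo:.2f}, {hi:.2f}]")
```

For N = 3 the perturbed orbit remains a bounded, nearly circular ellipse, while for N = 4 the radial range keeps widening the longer the integration is run, in line with Ehrenfest's conclusion discussed below.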
The implicit notion that the dimensionality of the universe is special is first attributed to Gottfried Wilhelm Leibniz, who in the Discourse on Metaphysics suggested that the world is "the one which is at the same time the simplest in hypothesis and the richest in phenomena". Immanuel Kant argued that 3-dimensional space was a consequence of the inverse square law of universal gravitation. While Kant's argument is historically important, John D. Barrow said that it "gets the punch-line back to front: it is the three-dimensionality of space that explains why we see inverse-square force laws in Nature, not vice-versa" (Barrow 2002:204). In 1920, Paul Ehrenfest showed that if there is only a single time dimension and more than three spatial dimensions, the orbit of a planet about its Sun cannot remain stable. The same is true of a star's orbit around the center of its galaxy. Ehrenfest also showed that if there are an even number of spatial dimensions, then the different parts of a wave impulse will travel at different speeds. If there are spatial dimensions, where k is a positive whole number, then wave impulses become distorted. In 1922, Hermann Weyl claimed that Maxwell's theory of electromagnetism can be expressed in terms of an action only for a four-dimensional manifold. Finally, Tangherlini showed in 1963 that when there are more than three spatial dimensions, electron orbitals around nuclei cannot be stable; electrons would either fall into the nucleus or disperse. Max Tegmark expands on the preceding argument in the following anthropic manner. If T differs from 1, the behavior of physical systems could not be predicted reliably from knowledge of the relevant partial differential equations. In such a universe, intelligent life capable of manipulating technology could not emerge. Moreover, if , Tegmark maintains that protons and electrons would be unstable and could decay into particles having greater mass than themselves. (This is not a problem if the particles have a sufficiently low temperature.) Lastly, if , gravitation of any kind becomes problematic, and the universe would probably be too simple to contain observers. For example, when , nerves cannot cross without intersecting. Hence anthropic and other arguments rule out all cases except and , which describes the world around us. On the other hand, in view of creating black holes from an ideal monatomic gas under its self-gravity, Wei-Xiang Feng showed that -dimensional spacetime is the marginal dimensionality. Moreover, it is the unique dimensionality that can afford a "stable" gas sphere with a "positive" cosmological constant. However, a self-gravitating gas cannot be stably bound if the mass sphere is larger than ~1021 solar masses, due to the small positivity of the cosmological constant observed. In 2019, James Scargill argued that complex life may be possible with two spatial dimensions. According to Scargill, a purely scalar theory of gravity may enable a local gravitational force, and 2D networks may be sufficient for complex neural networks. Metaphysical interpretations Some of the metaphysical disputes and speculations include, for example, attempts to back Pierre Teilhard de Chardin's earlier interpretation of the universe as being Christ centered (compare Omega Point), expressing a creatio evolutiva instead the elder notion of creatio continua. From a strictly secular, humanist perspective, it allows as well to put human beings back in the center, an anthropogenic shift in cosmology. Karl W. 
Giberson has laconically stated that William Sims Bainbridge disagreed with de Chardin's optimism about a future Omega point at the end of history, arguing that logically, humans are trapped at the Omicron point, in the middle of the Greek alphabet rather than advancing to the end, because the universe does not need to have any characteristics that would support our further technical progress, if the anthropic principle merely requires it to be suitable for our evolution to this point. The anthropic cosmological principle A thorough extant study of the anthropic principle is the book The anthropic cosmological principle by John D. Barrow, a cosmologist, and Frank J. Tipler, a cosmologist and mathematical physicist. This book sets out in detail the many known anthropic coincidences and constraints, including many found by its authors. While the book is primarily a work of theoretical astrophysics, it also touches on quantum physics, chemistry, and earth science. An entire chapter argues that Homo sapiens is, with high probability, the only intelligent species in the Milky Way. The book begins with an extensive review of many topics in the history of ideas the authors deem relevant to the anthropic principle, because the authors believe that principle has important antecedents in the notions of teleology and intelligent design. They discuss the writings of Fichte, Hegel, Bergson, and Alfred North Whitehead, and the Omega Point cosmology of Teilhard de Chardin. Barrow and Tipler carefully distinguish teleological reasoning from eutaxiological reasoning; the former asserts that order must have a consequent purpose; the latter asserts more modestly that order must have a planned cause. They attribute this important but nearly always overlooked distinction to an obscure 1883 book by L. E. Hicks. Seeing little sense in a principle requiring intelligent life to emerge while remaining indifferent to the possibility of its eventual extinction, Barrow and Tipler propose the final anthropic principle (FAP): Intelligent information-processing must come into existence in the universe, and, once it comes into existence, it will never die out. Barrow and Tipler submit that the FAP is both a valid physical statement and "closely connected with moral values". FAP places strong constraints on the structure of the universe, constraints developed further in Tipler's The Physics of Immortality. One such constraint is that the universe must end in a Big Crunch, which seems unlikely in view of the tentative conclusions drawn since 1998 about dark energy, based on observations of very distant supernovas. In his review of Barrow and Tipler, Martin Gardner ridiculed the FAP by quoting the last two sentences of their book as defining a completely ridiculous anthropic principle (CRAP): Reception and controversies Carter has frequently expressed regret for his own choice of the word "anthropic", because it conveys the misleading impression that the principle involves humans in particular, to the exclusion of non-human intelligence more broadly. Others have criticised the word "principle" as being too grandiose to describe straightforward applications of selection effects. A common criticism of Carter's SAP is that it is an easy deus ex machina that discourages searches for physical explanations. To quote Penrose again: "It tends to be invoked by theorists whenever they do not have a good enough theory to explain the observed facts." 
Carter's SAP and Barrow and Tipler's WAP have been dismissed as truisms or trivial tautologies—that is, statements true solely by virtue of their logical form and not because a substantive claim is made and supported by observation of reality. As such, they are criticized as an elaborate way of saying, "If things were different, they would be different", which is a valid statement, but does not make a claim of some factual alternative over another. Critics of the Barrow and Tipler SAP claim that it is neither testable nor falsifiable, and thus is not a scientific statement but rather a philosophical one. The same criticism has been leveled against the hypothesis of a multiverse, although some argue that it does make falsifiable predictions. A modified version of this criticism is that humanity understands so little about the emergence of life, especially intelligent life, that it is effectively impossible to calculate the number of observers in each universe. Also, the prior distribution of universes as a function of the fundamental constants is easily modified to get any desired result. Many criticisms focus on versions of the strong anthropic principle, such as Barrow and Tipler's anthropic cosmological principle, which are teleological notions that tend to describe the existence of life as a necessary prerequisite for the observable constants of physics. Similarly, Stephen Jay Gould, Michael Shermer, and others claim that the stronger versions of the anthropic principle seem to reverse known causes and effects. Gould compared the claim that the universe is fine-tuned for the benefit of our kind of life to saying that sausages were made long and narrow so that they could fit into modern hotdog buns, or saying that ships had been invented to house barnacles. These critics cite the vast physical, fossil, genetic, and other biological evidence consistent with life having been fine-tuned through natural selection to adapt to the physical and geophysical environment in which life exists. Life appears to have adapted to the universe, and not vice versa. Some applications of the anthropic principle have been criticized as an argument by lack of imagination, for tacitly assuming that carbon compounds and water are the only possible chemistry of life (sometimes called "carbon chauvinism"; see also alternative biochemistry). The range of fundamental physical constants consistent with the evolution of carbon-based life may also be wider than those who advocate a fine-tuned universe have argued. For instance, Harnik et al. propose a Weakless Universe in which the weak nuclear force is eliminated. They show that this has no significant effect on the other fundamental interactions, provided some adjustments are made in how those interactions work. However, if some of the fine-tuned details of our universe were violated, that would rule out complex structures of any kind—stars, planets, galaxies, etc. Lee Smolin has offered a theory designed to improve on the lack of imagination that has been ascribed to anthropic principles. He puts forth his fecund universes theory, which assumes universes have "offspring" through the creation of black holes whose offspring universes have values of physical constants that depend on those of the mother universe. The philosophers of cosmology John Earman, Ernan McMullin, and Jesús Mosterín contend that "in its weak version, the anthropic principle is a mere tautology, which does not allow us to explain anything or to predict anything that we did not already know. 
In its strong version, it is a gratuitous speculation". A further criticism by Mosterín concerns the flawed "anthropic" inference from the assumption of an infinity of worlds to the existence of one like ours:
Physical sciences
Physical cosmology
Astronomy
2819
https://en.wikipedia.org/wiki/Aerodynamics
Aerodynamics
Aerodynamics (from Greek aero (air) + dynamics) is the study of the motion of air, particularly when affected by a solid object, such as an airplane wing. It involves topics covered in the field of fluid dynamics and its subfield of gas dynamics, and is an important domain of study in aeronautics. The term aerodynamics is often used synonymously with gas dynamics, the difference being that "gas dynamics" applies to the study of the motion of all gases, and is not limited to air. The formal study of aerodynamics began in the modern sense in the eighteenth century, although observations of fundamental concepts such as aerodynamic drag were recorded much earlier. Most of the early efforts in aerodynamics were directed toward achieving heavier-than-air flight, which was first demonstrated by Otto Lilienthal in 1891. Since then, the use of aerodynamics through mathematical analysis, empirical approximations, wind tunnel experimentation, and computer simulations has formed a rational basis for the development of heavier-than-air flight and a number of other technologies. Recent work in aerodynamics has focused on issues related to compressible flow, turbulence, and boundary layers and has become increasingly computational in nature. History Modern aerodynamics only dates back to the seventeenth century, but aerodynamic forces have been harnessed by humans for thousands of years in sailboats and windmills, and images and stories of flight appear throughout recorded history, such as the Ancient Greek legend of Icarus and Daedalus. Fundamental concepts of continuum, drag, and pressure gradients appear in the work of Aristotle and Archimedes. In 1726, Sir Isaac Newton became the first person to develop a theory of air resistance, making him one of the first aerodynamicists. Dutch-Swiss mathematician Daniel Bernoulli followed in 1738 with Hydrodynamica in which he described a fundamental relationship between pressure, density, and flow velocity for incompressible flow known today as Bernoulli's principle, which provides one method for calculating aerodynamic lift. In 1757, Leonhard Euler published the more general Euler equations which could be applied to both compressible and incompressible flows. The Euler equations were extended to incorporate the effects of viscosity in the first half of the 1800s, resulting in the Navier–Stokes equations. The Navier–Stokes equations are the most general governing equations of fluid flow but are difficult to solve for the flow around all but the simplest of shapes. In 1799, Sir George Cayley became the first person to identify the four aerodynamic forces of flight (weight, lift, drag, and thrust), as well as the relationships between them, and in doing so outlined the path toward achieving heavier-than-air flight for the next century. In 1871, Francis Herbert Wenham constructed the first wind tunnel, allowing precise measurements of aerodynamic forces. Drag theories were developed by Jean le Rond d'Alembert, Gustav Kirchhoff, and Lord Rayleigh. In 1889, Charles Renard, a French aeronautical engineer, became the first person to reasonably predict the power needed for sustained flight. Otto Lilienthal, the first person to become highly successful with glider flights, was also the first to propose thin, curved airfoils that would produce high lift and low drag. Building on these developments as well as research carried out in their own wind tunnel, the Wright brothers flew the first powered airplane on December 17, 1903. During the time of the first flights, Frederick W. 
Lanchester, Martin Kutta, and Nikolai Zhukovsky independently created theories that connected circulation of a fluid flow to lift. Kutta and Zhukovsky went on to develop a two-dimensional wing theory. Expanding upon the work of Lanchester, Ludwig Prandtl is credited with developing the mathematics behind thin-airfoil and lifting-line theories as well as work with boundary layers. As aircraft speed increased designers began to encounter challenges associated with air compressibility at speeds near the speed of sound. The differences in airflow under such conditions lead to problems in aircraft control, increased drag due to shock waves, and the threat of structural failure due to aeroelastic flutter. The ratio of the flow speed to the speed of sound was named the Mach number after Ernst Mach who was one of the first to investigate the properties of the supersonic flow. Macquorn Rankine and Pierre Henri Hugoniot independently developed the theory for flow properties before and after a shock wave, while Jakob Ackeret led the initial work of calculating the lift and drag of supersonic airfoils. Theodore von Kármán and Hugh Latimer Dryden introduced the term transonic to describe flow speeds between the critical Mach number and Mach 1 where drag increases rapidly. This rapid increase in drag led aerodynamicists and aviators to disagree on whether supersonic flight was achievable until the sound barrier was broken in 1947 using the Bell X-1 aircraft. By the time the sound barrier was broken, aerodynamicists' understanding of the subsonic and low supersonic flow had matured. The Cold War prompted the design of an ever-evolving line of high-performance aircraft. Computational fluid dynamics began as an effort to solve for flow properties around complex objects and has rapidly grown to the point where entire aircraft can be designed using computer software, with wind-tunnel tests followed by flight tests to confirm the computer predictions. Understanding of supersonic and hypersonic aerodynamics has matured since the 1960s, and the goals of aerodynamicists have shifted from the behaviour of fluid flow to the engineering of a vehicle such that it interacts predictably with the fluid flow. Designing aircraft for supersonic and hypersonic conditions, as well as the desire to improve the aerodynamic efficiency of current aircraft and propulsion systems, continues to motivate new research in aerodynamics, while work continues to be done on important problems in basic aerodynamic theory related to flow turbulence and the existence and uniqueness of analytical solutions to the Navier–Stokes equations. Fundamental concepts Understanding the motion of air around an object (often called a flow field) enables the calculation of forces and moments acting on the object. In many aerodynamics problems, the forces of interest are the fundamental forces of flight: lift, drag, thrust, and weight. Of these, lift and drag are aerodynamic forces, i.e. forces due to air flow over a solid body. Calculation of these quantities is often founded upon the assumption that the flow field behaves as a continuum. Continuum flow fields are characterized by properties such as flow velocity, pressure, density, and temperature, which may be functions of position and time. These properties may be directly or indirectly measured in aerodynamics experiments or calculated starting with the equations for conservation of mass, momentum, and energy in air flows. 
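In practice, the lift and drag mentioned above are commonly reported through dimensionless coefficients scaled by the dynamic pressure of the oncoming flow. The short sketch below illustrates this standard bookkeeping; the density, speed, reference area, and coefficient values are made-up numbers chosen only for illustration, not data from this article.

```python
# Minimal sketch: converting flow quantities into aerodynamic forces via the standard
# dynamic-pressure/coefficient form  L = q*S*CL,  D = q*S*CD,  with  q = 0.5*rho*V**2.
# All numerical values below are illustrative assumptions.

rho = 1.225           # air density near sea level, kg/m^3
V = 70.0              # flight speed, m/s
S = 16.2              # wing reference area, m^2
CL, CD = 0.60, 0.035  # assumed lift and drag coefficients

q = 0.5 * rho * V**2  # dynamic pressure, Pa
lift = q * S * CL     # N
drag = q * S * CD     # N
print(f"q = {q:.0f} Pa, lift = {lift/1000:.1f} kN, drag = {drag/1000:.2f} kN")
```

With these assumed values the lift-to-drag ratio is simply CL/CD, about 17, which is why such coefficients are a convenient summary of a flow field's integrated effect on a body.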
Density, flow velocity, and an additional property, viscosity, are used to classify flow fields. Flow classification Flow velocity is used to classify flows according to speed regime. Subsonic flows are flow fields in which the air speed field is always below the local speed of sound. Transonic flows include both regions of subsonic flow and regions in which the local flow speed is greater than the local speed of sound. Supersonic flows are defined to be flows in which the flow speed is greater than the speed of sound everywhere. A fourth classification, hypersonic flow, refers to flows where the flow speed is much greater than the speed of sound. Aerodynamicists disagree on the precise definition of hypersonic flow. Compressible flow accounts for varying density within the flow. Subsonic flows are often idealized as incompressible, i.e. the density is assumed to be constant. Transonic and supersonic flows are compressible, and calculations that neglect the changes of density in these flow fields will yield inaccurate results. Viscosity is associated with the frictional forces in a flow. In some flow fields, viscous effects are very small, and approximate solutions may safely neglect viscous effects. These approximations are called inviscid flows. Flows for which viscosity is not neglected are called viscous flows. Finally, aerodynamic problems may also be classified by the flow environment. External aerodynamics is the study of flow around solid objects of various shapes (e.g. around an airplane wing), while internal aerodynamics is the study of flow through passages inside solid objects (e.g. through a jet engine). Continuum assumption Unlike liquids and solids, gases are composed of discrete molecules which occupy only a small fraction of the volume filled by the gas. On a molecular level, flow fields are made up of the collisions of many individual gas molecules between themselves and with solid surfaces. However, in most aerodynamics applications, the discrete molecular nature of gases is ignored, and the flow field is assumed to behave as a continuum. This assumption allows fluid properties such as density and flow velocity to be defined everywhere within the flow. The validity of the continuum assumption is dependent on the density of the gas and the application in question. For the continuum assumption to be valid, the mean free path length must be much smaller than the length scale of the application in question. For example, many aerodynamics applications deal with aircraft flying in atmospheric conditions, where the mean free path length is on the order of micrometers and where the body is orders of magnitude larger. In these cases, the length scale of the aircraft ranges from a few meters to a few tens of meters, which is much larger than the mean free path length. For such applications, the continuum assumption is reasonable. The continuum assumption is less valid for extremely low-density flows, such as those encountered by vehicles at very high altitudes (e.g. 300,000 ft/90 km) or satellites in Low Earth orbit. In those cases, statistical mechanics is a more accurate method of solving the problem than is continuum aerodynamics. The Knudsen number can be used to guide the choice between statistical mechanics and the continuous formulation of aerodynamics. Conservation laws The assumption of a fluid continuum allows problems in aerodynamics to be solved using fluid dynamics conservation laws. 
Three conservation principles are used: Conservation of mass Conservation of mass requires that mass is neither created nor destroyed within a flow; the mathematical formulation of this principle is known as the mass continuity equation. Conservation of momentum The mathematical formulation of this principle can be considered an application of Newton's Second Law. Momentum within a flow is only changed by external forces, which may include both surface forces, such as viscous (frictional) forces, and body forces, such as weight. The momentum conservation principle may be expressed as either a vector equation or separated into a set of three scalar equations (x,y,z components). Conservation of energy The energy conservation equation states that energy is neither created nor destroyed within a flow, and that any addition or subtraction of energy to a volume in the flow is caused by heat transfer, or by work into and out of the region of interest. Together, these equations are known as the Navier–Stokes equations, although some authors define the term to only include the momentum equation(s). The Navier–Stokes equations have no known analytical solution and are solved in modern aerodynamics using computational techniques. Because computational methods using high speed computers were not historically available and the high computational cost of solving these complex equations now that they are available, simplifications of the Navier–Stokes equations have been and continue to be employed. The Euler equations are a set of similar conservation equations which neglect viscosity and may be used in cases where the effect of viscosity is expected to be small. Further simplifications lead to Laplace's equation and potential flow theory. Additionally, Bernoulli's equation is a solution in one dimension to both the momentum and energy conservation equations. The ideal gas law or another such equation of state is often used in conjunction with these equations to form a determined system that allows the solution for the unknown variables. Branches of aerodynamics Aerodynamic problems are classified by the flow environment or properties of the flow, including flow speed, compressibility, and viscosity. External aerodynamics is the study of flow around solid objects of various shapes. Evaluating the lift and drag on an airplane or the shock waves that form in front of the nose of a rocket are examples of external aerodynamics. Internal aerodynamics is the study of flow through passages in solid objects. For instance, internal aerodynamics encompasses the study of the airflow through a jet engine or through an air conditioning pipe. Aerodynamic problems can also be classified according to whether the flow speed is below, near or above the speed of sound. A problem is called subsonic if all the speeds in the problem are less than the speed of sound, transonic if speeds both below and above the speed of sound are present (normally when the characteristic speed is approximately the speed of sound), supersonic when the characteristic flow speed is greater than the speed of sound, and hypersonic when the flow speed is much greater than the speed of sound. Aerodynamicists disagree over the precise definition of hypersonic flow; a rough definition considers flows with Mach numbers above 5 to be hypersonic. The influence of viscosity on the flow dictates a third classification. Some problems may encounter only very small viscous effects, in which case viscosity can be considered to be negligible. 
The approximations to these problems are called inviscid flows. Flows for which viscosity cannot be neglected are called viscous flows. Incompressible aerodynamics An incompressible flow is a flow in which density is constant in both time and space. Although all real fluids are compressible, a flow is often approximated as incompressible if the effect of the density changes cause only small changes to the calculated results. This is more likely to be true when the flow speeds are significantly lower than the speed of sound. Effects of compressibility are more significant at speeds close to or above the speed of sound. The Mach number is used to evaluate whether the incompressibility can be assumed, otherwise the effects of compressibility must be included. Subsonic flow Subsonic (or low-speed) aerodynamics describes fluid motion in flows which are much lower than the speed of sound everywhere in the flow. There are several branches of subsonic flow but one special case arises when the flow is inviscid, incompressible and irrotational. This case is called potential flow and allows the differential equations that describe the flow to be a simplified version of the equations of fluid dynamics, thus making available to the aerodynamicist a range of quick and easy solutions. In solving a subsonic problem, one decision to be made by the aerodynamicist is whether to incorporate the effects of compressibility. Compressibility is a description of the amount of change of density in the flow. When the effects of compressibility on the solution are small, the assumption that density is constant may be made. The problem is then an incompressible low-speed aerodynamics problem. When the density is allowed to vary, the flow is called compressible. In air, compressibility effects are usually ignored when the Mach number in the flow does not exceed 0.3 (about 335 feet (102 m) per second or 228 miles (366 km) per hour at 60 °F (16 °C)). Above Mach 0.3, the problem flow should be described using compressible aerodynamics. Compressible aerodynamics According to the theory of aerodynamics, a flow is considered to be compressible if the density changes along a streamline. This means that – unlike incompressible flow – changes in density are considered. In general, this is the case where the Mach number in part or all of the flow exceeds 0.3. The Mach 0.3 value is rather arbitrary, but it is used because gas flows with a Mach number below that value demonstrate changes in density of less than 5%. Furthermore, that maximum 5% density change occurs at the stagnation point (the point on the object where flow speed is zero), while the density changes around the rest of the object will be significantly lower. Transonic, supersonic, and hypersonic flows are all compressible flows. Transonic flow The term Transonic refers to a range of flow velocities just below and above the local speed of sound (generally taken as Mach 0.8–1.2). It is defined as the range of speeds between the critical Mach number, when some parts of the airflow over an aircraft become supersonic, and a higher speed, typically near Mach 1.2, when all of the airflow is supersonic. Between these speeds, some of the airflow is supersonic, while some of the airflow is not supersonic. Supersonic flow Supersonic aerodynamic problems are those involving flow speeds greater than the speed of sound. Calculating the lift on the Concorde during cruise can be an example of a supersonic aerodynamic problem. 
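The regime boundaries described above (Mach 0.3 as the usual incompressibility threshold, roughly Mach 0.8 to 1.2 for transonic flow, and about Mach 5 for hypersonic flow) lend themselves to a simple calculation of the Mach number from the local speed of sound. The sketch below is an added illustration using the ideal-gas expression for the speed of sound; the flight conditions are invented examples and the thresholds are the conventional rule-of-thumb values quoted in this article.

```python
# Sketch of the regime bookkeeping described above: compute the Mach number from the
# ideal-gas speed of sound a = sqrt(gamma*R*T) and apply the conventional thresholds.
# gamma and R are standard values for air; the (speed, temperature) pairs are made up.
import math

GAMMA, R_AIR = 1.4, 287.05   # ratio of specific heats; specific gas constant, J/(kg*K)

def classify(speed_m_s, temperature_K):
    a = math.sqrt(GAMMA * R_AIR * temperature_K)   # local speed of sound, m/s
    M = speed_m_s / a
    if M < 0.3:
        regime = "subsonic (compressibility usually negligible)"
    elif M < 0.8:
        regime = "subsonic (compressibility significant)"
    elif M <= 1.2:
        regime = "transonic"
    elif M < 5.0:
        regime = "supersonic"
    else:
        regime = "hypersonic"
    return M, regime

for v, T in [(60.0, 288.15), (250.0, 288.15), (330.0, 288.15), (600.0, 216.65), (2000.0, 216.65)]:
    M, regime = classify(v, T)
    print(f"{v:6.0f} m/s at {T:.2f} K -> Mach {M:4.2f}: {regime}")
```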
Supersonic flow behaves very differently from subsonic flow. Fluids react to differences in pressure; pressure changes are how a fluid is "told" to respond to its environment. Therefore, since sound is, in fact, an infinitesimal pressure difference propagating through a fluid, the speed of sound in that fluid can be considered the fastest speed that "information" can travel in the flow. This difference most obviously manifests itself in the case of a fluid striking an object. In front of that object, the fluid builds up a stagnation pressure as impact with the object brings the moving fluid to rest. In fluid traveling at subsonic speed, this pressure disturbance can propagate upstream, changing the flow pattern ahead of the object and giving the impression that the fluid "knows" the object is there by seemingly adjusting its movement and is flowing around it. In a supersonic flow, however, the pressure disturbance cannot propagate upstream. Thus, when the fluid finally reaches the object it strikes it and the fluid is forced to change its properties – temperature, density, pressure, and Mach number—in an extremely violent and irreversible fashion called a shock wave. The presence of shock waves, along with the compressibility effects of high-flow velocity (see Reynolds number) fluids, is the central difference between the supersonic and subsonic aerodynamics regimes. Hypersonic flow In aerodynamics, hypersonic speeds are speeds that are highly supersonic. In the 1970s, the term generally came to refer to speeds of Mach 5 (5 times the speed of sound) and above. The hypersonic regime is a subset of the supersonic regime. Hypersonic flow is characterized by high temperature flow behind a shock wave, viscous interaction, and chemical dissociation of gas. Associated terminology The incompressible and compressible flow regimes produce many associated phenomena, such as boundary layers and turbulence. Boundary layers The concept of a boundary layer is important in many problems in aerodynamics. The viscosity and fluid friction in the air is approximated as being significant only in this thin layer. This assumption makes the description of such aerodynamics much more tractable mathematically. Turbulence In aerodynamics, turbulence is characterized by chaotic property changes in the flow. These include low momentum diffusion, high momentum convection, and rapid variation of pressure and flow velocity in space and time. Flow that is not turbulent is called laminar flow. Aerodynamics in other fields Engineering design Aerodynamics is a significant element of vehicle design, including road cars and trucks where the main goal is to reduce the vehicle drag coefficient, and racing cars, where in addition to reducing drag the goal is also to increase the overall level of downforce. Aerodynamics is also important in the prediction of forces and moments acting on sailing vessels. It is used in the design of mechanical components such as hard drive heads. Structural engineers resort to aerodynamics, and particularly aeroelasticity, when calculating wind loads in the design of large buildings, bridges, and wind turbines. The aerodynamics of internal passages is important in heating/ventilation, gas piping, and in automotive engines where detailed flow patterns strongly affect the performance of the engine. Environmental design Urban aerodynamics are studied by town planners and designers seeking to improve amenity in outdoor spaces, or in creating urban microclimates to reduce the effects of urban pollution. 
The field of environmental aerodynamics describes ways in which atmospheric circulation and flight mechanics affect ecosystems. Aerodynamic equations are used in numerical weather prediction. Ball-control in sports Sports in which aerodynamics are of crucial importance include soccer, table tennis, cricket, baseball, and golf, in which most players can control the trajectory of the ball using the "Magnus effect".
Physical sciences
Fluid mechanics
null
2822
https://en.wikipedia.org/wiki/Ash
Ash
Ash is the solid remnants of fires. Specifically, ash refers to all non-aqueous, non-gaseous residues that remain after something burns. In analytical chemistry, to analyse the mineral and metal content of chemical samples, ash is the non-gaseous, non-liquid residue after complete combustion. Ashes as the end product of incomplete combustion are mostly mineral, but usually still contain an amount of combustible organic or other oxidizable residues. The best-known type of ash is wood ash, as a product of wood combustion in campfires, fireplaces, etc. The darker the wood ashes, the higher the content of remaining charcoal from incomplete combustion. Ashes are of different types. Some ashes contain natural compounds that make soil fertile. Others have chemical compounds that can be toxic but may break up in soil from chemical changes and microorganism activity. Like soap, ash is also a disinfecting agent (alkaline). The World Health Organization recommends ash or sand as an alternative for handwashing when soap is not available. Before industrialization, ash soaked in water was the primary means of obtaining potash. Natural occurrence Ash occurs naturally from any fire that burns vegetation, and may disperse in the soil to fertilise it, or clump under it for long enough to carbonise into coal. Composition The composition of the ash varies depending on the product burned and its origin. The "ash content" or "mineral content" of a product is derived from its incineration at temperatures ranging from to . Wood and plant matter The composition of ash derived from wood and other plant matter varies based on plant species, parts of the plants (such as bark, trunk, or young branches with foliage), type of soil, and time of year. The composition of these ashes also differs greatly depending on mode of combustion. Wood ashes, in addition to residual carbonaceous materials (unconsumed embers, activated carbons impregnated with carbonaceous particles, tars, various gases, etc.), contain between 20% and 50% calcium in the form of calcium oxide and are generally rich in potassium carbonate. Ashes derived from grasses, and the Gramineae family in particular, are rich in silica. The color of the ash comes from small proportions of inorganic minerals such as iron oxides and manganese. The oxidized metal elements that constitute wood ash are mostly considered alkaline. For example, ash collected from wood boilers is composed of 17–33% calcium in the form of calcium oxide (CaO), 2–6% potassium in the form of potassium oxide (K2O), 2.5–4.6% magnesium in the form of magnesium oxide (MgO), 1–6% phosphorus in the form of phosphorus pentoxide (P2O5), and 3% in total of oxides such as iron oxide, manganese oxide, and sodium oxide. The pH of the ash is between 10 and 13, mostly due to the fact that the oxides of calcium, potassium, and sodium are strong bases. Acidic components such as carbon dioxide, phosphoric acid, silicic acid, and sulfuric acid are rarely present and, in the presence of the previously mentioned bases, are generally found in the form of salts, respectively carbonates, phosphates, silicates and sulphates. Strictly speaking, calcium and potassium salts produce the aforementioned calcium oxide (also known as quicklime) and potassium during the combustion of organic matter. But, in practice, quicklime is only obtained via lime-kiln, and potash (from potassium carbonate) or baking soda (from sodium carbonate) is extracted from the ashes. 
Other substances such as sulfur, chlorine, iron or sodium only appear in small quantities. Still others, such as aluminum, zinc, and boron, are rarely found in wood (depending on the trace elements drawn from the soil by the incinerated plants). Mineral content in ash depends on the species of tree burned, even in the same soil conditions. More chloride is found in conifer trees than in broadleaf trees, with seven times as much found in spruces as in oak trees. There is twice as much phosphoric acid in the European aspen as in oaks and twice as much magnesium in elm trees as in the Scotch pine. Ash composition also varies by which part of the tree was burnt. Silicon and calcium salts are more abundant in bark than in wood, while potassium salts are primarily found in wood. Compositional variation also occurs based on the season in which the tree died. Specific types Cremation ashes Cremation ashes, also called cremated remains or "cremains," are the bodily remains left from cremation. They often take the form of a grey powder resembling coarse sand. While often referred to as ashes, the remains primarily consist of powdered bone fragments due to the cremation process, which eliminates the body's organic materials. People often store these ashes in containers like urns, although they are also sometimes buried or scattered in specific locations. Food ashes In food processing, mineral and ash content is used to characterize the presence of organic and inorganic components in food for monitoring quality, nutritional quantification and labeling, analyzing microbiological stability, and more. This process can be used to measure minerals like calcium, sodium, potassium, and phosphorus as well as metal content such as lead, mercury, cadmium, and aluminum. Joss paper ash Analysis of the contents of ash samples shows that joss paper burning can emit many pollutants detrimental to air quality. There is a significant amount of heavy metals in the dust fume and bottom ash, e.g., aluminium, iron, manganese, copper, lead, zinc and cadmium. According to a 2022 study, "Burning of joss paper accounted for up to 42% of the atmospheric rBC [refractory black carbon] mass, higher than traffic (14-17%), crop residue (10-17%), coal (18-20%) during the Hanyi festival in northwest China"; the study adds that "the overall air quality can be worsened due to the practice of uncontrolled burning of joss paper during the festival, which is not just confined to the people who do the burning," and that "burning joss paper during worship activities is common in China and most Asian countries with similar traditions." Slash-and-burn ash Wildfire ash High levels of heavy metals, including lead, arsenic, cadmium, and copper, were found in the ash debris following the 2007 Californian wildfires. A national clean-up campaign was organised ... In the devastating California Camp Fire (2018) that killed 85 people, lead levels increased by around 50 times in the hours following the fire at a site nearby (Chico). Zinc concentration also increased significantly in Modesto, 150 miles away. Metals such as manganese and calcium were found in numerous California fires as well. Others Ashes from Stubble burning Open burning of waste Cigarette or cigar ash Incinerator bottom ash, a form of ash produced in incinerators Products of coal combustion Bottom ash Fly ash Volcanic ash, ash that consists of fragmented glass, rock, and minerals that appears during an eruption. 
Wood ash Other properties Aging process Global distillation Uses Fertilizer Ashes have been used since the Neolithic period as fertilizer because they are rich in minerals, especially potash and essential nutrients. They are the main fertilizer in slash-and-burn agriculture, which eventually evolved into controlled burn and forest clearing practices. People in ancient history already possessed extensive knowledge of the nutrients produced by different ashes. For clay soil in particular, using ash without modification or using , ash whose minerals have been washed with water, was necessary. Laundry Because ashes contain potash, they can be used to make biodegradable laundry detergent. The demand for organic products has led to renewed interest in laundry using ash derived from wood. The French word for laundry is from the Latin word , which means a substance made from ash and used to wash laundry. This usage also developed into a small, traditional architectural structure to the west of the Rhône mainstem: the , a masonry structure built with stone or cob, that looks like a cabinet and that holds dirty laundry and fireplace ash; when the is full, the laundry and ash are moved to a laundry container and boiled in water. Laundry using ash derived from wood has the benefit of being free, easy to produce, sustainable, and as efficient as standard laundry washing methods. Health effects Effect on precipitation "Particles of dust or smoke in the atmosphere are essential for precipitation. These particles, called 'condensation nuclei,' provide a surface for water vapor to condense upon. This helps water droplets gather together and become large enough to fall to the earth." Effect on climate change
Physical sciences
Salts and ions: General
Chemistry
2823
https://en.wikipedia.org/wiki/Antiderivative
Antiderivative
In calculus, an antiderivative, inverse derivative, primitive function, primitive integral or indefinite integral of a continuous function is a differentiable function whose derivative is equal to the original function . This can be stated symbolically as . The process of solving for antiderivatives is called antidifferentiation (or indefinite integration), and its opposite operation is called differentiation, which is the process of finding a derivative. Antiderivatives are often denoted by capital Roman letters such as and . Antiderivatives are related to definite integrals through the second fundamental theorem of calculus: the definite integral of a function over a closed interval where the function is Riemann integrable is equal to the difference between the values of an antiderivative evaluated at the endpoints of the interval. In physics, antiderivatives arise in the context of rectilinear motion (e.g., in explaining the relationship between position, velocity and acceleration). The discrete equivalent of the notion of antiderivative is antidifference. Examples The function is an antiderivative of , since the derivative of is . Since the derivative of a constant is zero, will have an infinite number of antiderivatives, such as , etc. Thus, all the antiderivatives of can be obtained by changing the value of in , where is an arbitrary constant known as the constant of integration. The graphs of antiderivatives of a given function are vertical translations of each other, with each graph's vertical location depending upon the value . More generally, the power function has antiderivative if , and if . In physics, the integration of acceleration yields velocity plus a constant. The constant is the initial velocity term that would be lost upon taking the derivative of velocity, because the derivative of a constant term is zero. This same pattern applies to further integrations and derivatives of motion (position, velocity, acceleration, and so on). Thus, integration produces the relations of acceleration, velocity and displacement: Uses and properties Antiderivatives can be used to compute definite integrals, using the fundamental theorem of calculus: if is an antiderivative of the continuous function over the interval , then: Because of this, each of the infinitely many antiderivatives of a given function may be called the "indefinite integral" of f and written using the integral symbol with no bounds: If is an antiderivative of , and the function is defined on some interval, then every other antiderivative of differs from by a constant: there exists a number such that for all . is called the constant of integration. If the domain of is a disjoint union of two or more (open) intervals, then a different constant of integration may be chosen for each of the intervals. For instance is the most general antiderivative of on its natural domain Every continuous function has an antiderivative, and one antiderivative is given by the definite integral of with variable upper boundary: for any in the domain of . Varying the lower boundary produces other antiderivatives, but not necessarily all possible antiderivatives. This is another formulation of the fundamental theorem of calculus. There are many elementary functions whose antiderivatives, even though they exist, cannot be expressed in terms of elementary functions. 
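Both situations, an antiderivative expressible in elementary terms and one that is not, can be checked directly in a computer algebra system. The short sketch below uses the SymPy library (an illustrative choice, not something required by the theory); note that such systems conventionally omit the arbitrary constant of integration.

```python
# Sketch using the SymPy computer algebra system (an illustrative choice):
# an elementary antiderivative, a definite integral evaluated via the fundamental
# theorem of calculus, and an integral with no elementary antiderivative.
import sympy as sp

x = sp.symbols('x')

F = sp.integrate(x**2, x)               # -> x**3/3   (SymPy omits the constant C)
value = sp.integrate(x**2, (x, 0, 2))   # -> 8/3, i.e. F(2) - F(0)
G = sp.integrate(sp.exp(-x**2), x)      # -> sqrt(pi)*erf(x)/2, not an elementary function

print(F, value, G, sep="\n")
```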
Elementary functions are polynomials, exponential functions, logarithms, trigonometric functions, inverse trigonometric functions and their combinations under composition and linear combination. Examples of these nonelementary integrals are the error function the Fresnel function the sine integral the logarithmic integral function and sophomore's dream For a more detailed discussion, see also Differential Galois theory. Techniques of integration Finding antiderivatives of elementary functions is often considerably harder than finding their derivatives (indeed, there is no pre-defined method for computing indefinite integrals). For some elementary functions, it is impossible to find an antiderivative in terms of other elementary functions. To learn more, see elementary functions and nonelementary integral. There exist many properties and techniques for finding antiderivatives. These include, among others: The linearity of integration (which breaks complicated integrals into simpler ones) Integration by substitution, often combined with trigonometric identities or the natural logarithm The inverse chain rule method (a special case of integration by substitution) Integration by parts (to integrate products of functions) Inverse function integration (a formula that expresses the antiderivative of the inverse of an invertible and continuous function , in terms of and the antiderivative of ). The method of partial fractions in integration (which allows us to integrate all rational functions—fractions of two polynomials) The Risch algorithm Additional techniques for multiple integrations (see for instance double integrals, polar coordinates, the Jacobian and the Stokes' theorem) Numerical integration (a technique for approximating a definite integral when no elementary antiderivative exists, as in the case of ) Algebraic manipulation of integrand (so that other integration techniques, such as integration by substitution, may be used) Cauchy formula for repeated integration (to calculate the -times antiderivative of a function) Computer algebra systems can be used to automate some or all of the work involved in the symbolic techniques above, which is particularly useful when the algebraic manipulations involved are very complex or lengthy. Integrals which have already been derived can be looked up in a table of integrals. Of non-continuous functions Non-continuous functions can have antiderivatives. While there are still open questions in this area, it is known that: Some highly pathological functions with large sets of discontinuities may nevertheless have antiderivatives. In some cases, the antiderivatives of such pathological functions may be found by Riemann integration, while in other cases these functions are not Riemann integrable. Assuming that the domains of the functions are open intervals: A necessary, but not sufficient, condition for a function to have an antiderivative is that have the intermediate value property. That is, if is a subinterval of the domain of and is any real number between and , then there exists a between and such that . This is a consequence of Darboux's theorem. The set of discontinuities of must be a meagre set. This set must also be an F-sigma set (since the set of discontinuities of any function must be of this type). Moreover, for any meagre F-sigma set, one can construct some function having an antiderivative, which has the given set as its set of discontinuities. 
If has an antiderivative, is bounded on closed finite subintervals of the domain and has a set of discontinuities of Lebesgue measure 0, then an antiderivative may be found by integration in the sense of Lebesgue. In fact, using more powerful integrals like the Henstock–Kurzweil integral, every function for which an antiderivative exists is integrable, and its general integral coincides with its antiderivative. If has an antiderivative on a closed interval , then for any choice of partition if one chooses sample points as specified by the mean value theorem, then the corresponding Riemann sum telescopes to the value . However, if is unbounded, or if is bounded but the set of discontinuities of has positive Lebesgue measure, a different choice of sample points may give a significantly different value for the Riemann sum, no matter how fine the partition. See Example 4 below. Some examples Basic formulae If , then .
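Even when an antiderivative cannot be written in elementary terms, as with the error function mentioned above, it can still be evaluated pointwise by numerical integration of its defining definite integral. A brief sketch assuming SciPy and the standard library's math module; the helper function name below is introduced here for illustration and is not a library routine.

```python
import math
from scipy import integrate

def antiderivative_of_gaussian(x, lower=0.0):
    """Numerically evaluate F(x) = integral of exp(-t**2) from `lower` to x."""
    value, _abs_error = integrate.quad(lambda t: math.exp(-t * t), lower, x)
    return value

# erf(x) = (2/sqrt(pi)) * integral_0^x exp(-t**2) dt, so the two should agree.
for x in (0.5, 1.0, 2.0):
    numeric = 2.0 / math.sqrt(math.pi) * antiderivative_of_gaussian(x)
    print(x, numeric, math.erf(x))
```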
Mathematics
Integral calculus
null
2838
https://en.wikipedia.org/wiki/Acrylic%20paint
Acrylic paint
Acrylic paint is a fast-drying paint made of pigment suspended in acrylic polymer emulsion and plasticizers, silicone oils, defoamers, stabilizers, or metal soaps. Most acrylic paints are water-based, but become water-resistant when dry. Depending on how much the paint is diluted with water, or modified with acrylic gels, mediums, or pastes, the finished acrylic painting can resemble a watercolor, a gouache, or an oil painting, or it may have its own unique characteristics not attainable with other media. Water-based acrylic paints are used as latex house paints, as latex is the technical term for a suspension of polymer microparticles in water. Interior latex house paints tend to be a combination of binder (sometimes acrylic, vinyl, PVA, and others), filler, pigment, and water. Exterior latex house paints may also be a co-polymer blend, but the best exterior water-based paints are 100% acrylic, because of acrylic's elasticity and other factors. Vinyl, however, costs half of what 100% acrylic resins cost, and polyvinyl acetate (PVA) is even cheaper, so paint companies make many different combinations of them to match the market. History Otto Röhm invented acrylic resin, which was quickly transformed into acrylic paint. As early as 1934, the first usable acrylic resin dispersion was developed by the German chemical company BASF and patented by Rohm and Haas. The synthetic paint was first used in the 1940s, combining some of the properties of oil and watercolor. Between 1946 and 1949, Leonard Bocour and Sam Golden invented a solution acrylic paint under the brand Magna paint. These were mineral spirit-based paints. Water-based acrylic paints were subsequently sold as latex house paints. Soon after the water-based acrylic binders were introduced as house paints, artists and companies alike began to explore the potential of the new binders. Diego Rivera, David Alfaro Siqueiros, and José Clemente Orozco were among the first artists to experiment with acrylic paint, impressed by its durability; their interest helped spur the commercial production of acrylic artists' colors in Mexico. According to The Times newspaper, Lancelot Ribeiro pioneered the use of acrylic paints in the UK because of his "increasing impatience" by the 1960s over the time it took for oil paints to dry, as well as oil paint's "lack of brilliance in its colour potential." He took to the new synthetic plastic bases that commercial paints were beginning to use and soon got help from manufacturers like ICI, Courtaulds, and Geigy. The companies supplied him with samples of their latest paints in quantities that he was still using three decades later, according to the paper. Initially, the firms thought the PVA compounds would not be needed in commercially viable quantities. But they quickly recognised the potential demand and "so Ribeiro became the godfather of generations of artists using acrylics as an alternative to oils." In 1956, José L. Gutiérrez produced Politec Acrylic Artists' Colors in Mexico, and Henry Levison of Cincinnati-based Permanent Pigments Co. produced Liquitex colors. These two product lines were the first acrylic emulsion artists' paints, with modern high-viscosity paints becoming available in the early 1960s. Meanwhile, on the other side of the globe, 1958 saw the inception of Vynol Paints Pty Ltd (now Derivan) in Australia, which started producing a water-based artist acrylic called Vynol Colour, followed by Matisse Acrylics in the 1960s. 
Following that development, Golden came up with a waterborne acrylic paint called "Aquatec". In 1963, George Rowney (part of Daler-Rowney since 1983) was the first manufacturer to introduce artists' acrylic paints in Europe, under the brand name "Cryla". Painting with acrylics Acrylic painters can modify the appearance, hardness, flexibility, texture, and other characteristics of the paint surface by using acrylic medium or simply by adding water. Watercolor and oil painters also use various mediums, but the range of acrylic mediums is much greater. Acrylics have the ability to bond to many different surfaces, and mediums can be used to modify their binding characteristics. Acrylics can be used on paper, canvas, and a range of other materials; however, their use on engineered woods such as medium-density fiberboard can be problematic because of the porous nature of those surfaces. In these cases, it is recommended that the surface first be sealed with an appropriate sealer. The process of sealing acrylic painting is called varnishing. Artists use removable varnishes over isolation coat to protect paintings from dust, UV, scratches, etc. This process is similar to varnishing an oil painting. Acrylics can be applied in thin layers or washes to create effects that resemble watercolors and other water-based mediums. They can also be used to build thick layers of paint — gel and molding paste are sometimes used to create paintings with relief features. Acrylic paints are also used in hobbies such as trains, cars, houses, DIY projects, and human models. People who make such models use acrylic paint to build facial features on dolls or raised details on other types of models. Wet acrylic paint is easily removed from paintbrushes and skin with water, whereas oil paints require the use of a hydrocarbon. Acrylics are the most common paints used in grattage, a surrealist technique that began to be used with the advent of this type of paint. Acrylics are used for this purpose because they easily scrape or peel from a surface. Painting techniques Acrylic artists' paints may be thinned with water or acrylic medium and used as washes in the manner of watercolor paints, but unlike watercolor the washes are not rehydratable once dry. For this reason, acrylics do not lend themselves to the color lifting techniques of gum arabic-based watercolor paints. Instead, the paint is applied in layers, sometimes diluting with water or acrylic medium to allow layers underneath to partially show through. Using an acrylic medium gives the paint more of a rich and glossy appearance, whereas using water makes the paint look more like watercolor and have a matte finish. Acrylic paints with gloss or matte finishes are common, although a satin (semi-matte) sheen is most common. Some brands exhibit a range of finishes (e.g. heavy-body paints from Golden, Liquitex, Winsor & Newton and Daler-Rowney); Politec acrylics are fully matte. As with oils, pigment amounts and particle size or shape can affect the paint sheen. Matting agents can also be added during manufacture to dull the finish. If desired, the artist can mix different media with their paints and use topcoats or varnishes to alter or unify sheen. When dry, acrylic paint is generally non-removable from a solid surface if it adheres to the surface. Water or mild solvents do not re-solubilize it, although isopropyl alcohol can lift some fresh paint films off. Toluene and acetone can remove paint films, but they do not lift paint stains very well and are not selective. 
The use of a solvent to remove paint may result in removal of all of the paint layers (acrylic gesso, et cetera). Oils and warm, soapy water can remove acrylic paint from skin. Acrylic paint can be removed from nonporous plastic surfaces such as miniatures or models using cleaning products such as Dettol (containing chloroxylenol 4.8% v/w). An acrylic sizing should be used to prime canvas in preparation for painting with acrylic paints, to prevent Support Induced Discoloration (SID). Acrylic paint contains surfactants that can pull up discoloration from a raw canvas, especially in transparent glazed or translucent gelled areas. Gesso alone will not stop SID; a sizing must be applied before using a gesso. The viscosity of acrylic can be successfully reduced by using suitable extenders that maintain the integrity of the paint film. There are retarders to slow drying and extend workability time, and flow releases to increase color-blending ability. Properties Grades Commercial acrylic paints come in two grades by manufacturers: Artist acrylics (professional acrylics) are created and designed to resist chemical reactions from exposure to water, ultraviolet light, and oxygen. Professional-grade acrylics have the most pigment, which allows for more medium manipulation and limits the color shift when mixed with other colors or after drying. Student acrylics have working characteristics similar to artist acrylics, but with lower pigment concentrations, less-expensive formulas, and fewer available colors. More expensive pigments are generally replicated by hues. Colors are designed to be mixed even though color strength is lower. Hues may not have exactly the same mixing characteristics as full-strength colors. Varieties Heavy body acrylics are typically found in the Artist and Student Grade paints. "Heavy Body" refers to the viscosity or thickness of the paint. They are the best choice for impasto or heavier paint applications and will hold a brush or knife stroke and even a medium stiff peak. Gel Mediums ("pigment-less paints") are also available in various viscosities and used to thicken or thin paints, as well as extend paints and add transparency. Examples of Heavy Body Acrylics are Matisse Structure Acrylic Colors, Lukas Pastos Acrylics, Liquitex Heavy Body Acrylics and Golden Heavy Body Acrylics. Medium viscosity acrylics – Fluid acrylics, Soft body acrylics, or High Flow acrylics – have a lower viscosity but generally the same pigmentation as the Heavy Body acrylics. Available in either Artist quality or Craft quality, the cost and quality vary accordingly. These paints are good for watercolor techniques, airbrush application, or when smooth coverage is desired. Fluid acrylics can be mixed with any medium to thicken them for impasto work, or to thin them for glazing applications. Examples of fluid acrylics include Lukascryl Liquid, Lukascryl Studio, Liquitex Soft Body and Golden Fluid acrylics. Open acrylics were created to address the one major difference between oil and acrylic paints: the shortened time it takes acrylic paints to dry. Designed by Golden Artist Colors, Inc. with a hydrophilic acrylic resin, these paints can take anywhere from a few hours to a few days, or even weeks, to dry completely, depending on paint thickness, support characteristics, temperature, and humidity. Iridescent, pearl and interference acrylic colors combine conventional pigments with powdered mica (aluminium silicate) or powdered bronze to achieve complex visual effects. 
Colors have shimmering or reflective characteristics, depending on the coarseness or fineness of the powder. Iridescent colors are used in fine arts and crafts. Acrylic gouache is like traditional gouache because it dries to a matte, opaque finish. However, unlike traditional gouache, the acrylic binder makes it water-resistant once it dries. Like craft paint, it will adhere to a variety of surfaces, not only canvas and paper. This paint is typically used by water-colorists, cartoonists, or illustrators, and for decorative or folk art applications. Examples of acrylic gouache are Lascaux Gouache and Turner Acryl Gouache. Craft acrylics can be used on surfaces besides canvas, such as wood, metal, fabrics, and ceramics. They are used in decorative painting techniques and faux finishes to decorate objects of ordinary life. Although colors can be mixed, pigments are often not specified. Each color line is formulated instead to achieve a wide range of premixed colors. Craft paints usually employ vinyl or PVA resins to increase adhesion and lower cost. Interactive acrylics are all-purpose acrylic artists' colors which have the characteristic fast-drying nature of artists' acrylics, but are formulated to allow artists to delay drying when they need more working time, or re-wet their work when they want to do more wet blending. Exterior acrylics are paints that can withstand outdoor conditions. Like craft acrylics, they adhere to many surfaces. They are more resistant to both water and ultraviolet light. This makes them the acrylic of choice for architectural murals, outdoor signs, and many faux-finishing techniques. Acrylic glass paint is water-based and semi-permanent, making it a suitable paint for temporary displays on glass windows. Acrylic enamel paint creates a smooth, hard shell. It can be oven-baked or air dried. It can be permanent if kept away from harsh conditions such as dishwashing. Differences between acrylic and oil paint The vehicle and binder of oil paints is linseed oil (or another drying oil), whereas acrylic paint has water as the vehicle for an emulsion (suspension) of acrylic polymer, which serves as the binder. Thus, oil paint is said to be "oil-based", whereas acrylic paint is "water-based" (or sometimes "water-borne"). The main practical difference between most acrylics and oil paints is the inherent drying time. Oils allow for more time to blend colors and apply even glazes over underpaintings. This slow-drying aspect of oil can be seen as an advantage for certain techniques, but it impedes an artist trying to work quickly. The fast evaporation of water from regular acrylic paint films can be slowed with the use of acrylic retarders. Retarders are generally glycol or glycerin-based additives. The addition of a retarder slows the evaporation rate of the water. Oil paints may require the use of solvents such as mineral spirits or turpentine to thin the paint and clean up. These solvents generally have some level of toxicity and can be found objectionable. Relatively recently, water-miscible oil paints have been developed for artists' use. Oil paint films can gradually yellow and lose their flexibility over time creating cracks in the paint film; the "fat over lean" rule must be observed to ensure its durability. Oil paint has a higher pigment load than acrylic paint. As linseed oil contains a smaller molecule than acrylic paint, oil paint is able to absorb substantially more pigment. 
Oil provides a refractive index that is less clear than acrylic dispersions, which imparts a unique "look and feel" to the resultant paint film. Not all the pigments of oil paints are available in acrylics and vice versa, as each medium has different chemical sensitivities. Some historical pigments are alkali sensitive, and therefore cannot be made in an acrylic emulsion; others are just too difficult to formulate. Approximate "hue" color formulations, that do not contain the historical pigments, are typically offered as substitutes. Because of acrylic paint's more flexible nature and more consistent drying time between layers, an artist does not have to follow the same rules of oil painting, where more medium must be applied to each layer to avoid cracking. It usually takes 10–20 minutes for one to two layers of acrylic paint to dry, depending on the brand, quality, and humidity levels of the surrounding environment. Some professional grades of acrylic paint can take 20–30 minutes or even more than an hour. Although canvas needs to be properly primed before painting with oils to prevent the paint medium from eventually rotting the canvas, acrylic can be safely applied straight to the canvas. The rapid drying of acrylic paint tends to discourage blending of color and use of wet-in-wet technique as in oil painting. Even though acrylic retarders can slow drying time to several hours, it remains a relatively fast-drying medium and adding too much acrylic retarder can prevent the paint from ever drying properly. Meanwhile, acrylic paint is very elastic, which prevents cracking from occurring. Acrylic paint's binder is acrylic polymer emulsion – as this binder dries, the paint remains flexible. Another difference between oil and acrylic paints is the versatility offered by acrylic paints. Acrylics are very useful in mixed media, allowing the use of pastel (oil and chalk), charcoal and pen (among others) on top of the dried acrylic painted surface. Mixing other bodies into the acrylic is possible—sand, rice, and even pasta may be incorporated in the artwork. Mixing artist or student grade acrylic paint with household acrylic emulsions is possible, allowing the use of premixed tints straight from the tube or tin, and thereby presenting the painter with a vast color range at their disposal. This versatility is also illustrated by the variety of additional artistic uses for acrylics. Specialized acrylics have been manufactured and used for linoblock printing (acrylic block printing ink has been produced by Derivan since the early 1980s), face painting, airbrushing, watercolor-like techniques, and fabric screen printing. Another difference between oil and acrylic paint is the cleanup. Acrylic paint can be cleaned out of a brush with any soap, while oil paint needs a specific type to be sure to get all the oil out of the brushes. Also, it is easier to let a palette with oil paint dry and then scrape the paint off, whereas one can easily clean wet acrylic paint with water. Difference between acrylic and watercolor paint The biggest difference is that acrylic paint is opaque, whereas watercolor paint is translucent in nature. Watercolors take about 5 to 15 minutes to dry while acrylics take about 10 to 20 minutes. In order to change the tone or shade of a watercolor pigment, one changes the percentage of water mixed in to the color. For brighter colors, one adds more water. For darker colors, one adds less water. In order to create lighter or darker colors with acrylic paints, one adds white or black. 
Another difference is that watercolors must be painted onto a porous surface, primarily watercolor paper. Acrylic paints can be used on many different surfaces. Both acrylic and watercolor are easy to clean up with water. Acrylic paint should be cleaned with soap and water immediately following use. Watercolor paint can be cleaned with just water.
Technology
Artist's and drafting tools
null
2839
https://en.wikipedia.org/wiki/Angular%20momentum
Angular momentum
Angular momentum (sometimes called moment of momentum or rotational momentum) is the rotational analog of linear momentum. It is an important physical quantity because it is a conserved quantity – the total angular momentum of a closed system remains constant. Angular momentum has both a direction and a magnitude, and both are conserved. Bicycles and motorcycles, flying discs, rifled bullets, and gyroscopes owe their useful properties to conservation of angular momentum. Conservation of angular momentum is also why hurricanes form spirals and neutron stars have high rotational rates. In general, conservation limits the possible motion of a system, but it does not uniquely determine it. The three-dimensional angular momentum for a point particle is classically represented as a pseudovector , the cross product of the particle's position vector (relative to some origin) and its momentum vector; the latter is in Newtonian mechanics. Unlike linear momentum, angular momentum depends on where this origin is chosen, since the particle's position is measured from it. Angular momentum is an extensive quantity; that is, the total angular momentum of any composite system is the sum of the angular momenta of its constituent parts. For a continuous rigid body or a fluid, the total angular momentum is the volume integral of angular momentum density (angular momentum per unit volume in the limit as volume shrinks to zero) over the entire body. Similar to conservation of linear momentum, where it is conserved if there is no external force, angular momentum is conserved if there is no external torque. Torque can be defined as the rate of change of angular momentum, analogous to force. The net external torque on any system is always equal to the total torque on the system; the sum of all internal torques of any system is always 0 (this is the rotational analogue of Newton's third law of motion). Therefore, for a closed system (where there is no net external torque), the total torque on the system must be 0, which means that the total angular momentum of the system is constant. The change in angular momentum for a particular interaction is called angular impulse, sometimes twirl. Angular impulse is the angular analog of (linear) impulse. Examples The trivial case of the angular momentum of a body in an orbit is given by where is the mass of the orbiting object, is the orbit's frequency and is the orbit's radius. The angular momentum of a uniform rigid sphere rotating around its axis, instead, is given by where is the sphere's mass, is the frequency of rotation and is the sphere's radius. Thus, for example, the orbital angular momentum of the Earth with respect to the Sun is about 2.66 × 10⁴⁰ kg⋅m²⋅s⁻¹, while its rotational angular momentum is about 7.05 × 10³³ kg⋅m²⋅s⁻¹. In the case of a uniform rigid sphere rotating around its axis, if, instead of its mass, its density is known, the angular momentum is given by where is the sphere's density, is the frequency of rotation and is the sphere's radius. In the simplest case of a spinning disk, the angular momentum is given by where is the disk's mass, is the frequency of rotation and is the disk's radius. If instead the disk rotates about its diameter (e.g. 
coin toss), its angular momentum is given by Definition in classical mechanics Just as for angular velocity, there are two special types of angular momentum of an object: the spin angular momentum is the angular momentum about the object's centre of mass, while the orbital angular momentum is the angular momentum about a chosen center of rotation. The Earth has an orbital angular momentum by nature of revolving around the Sun, and a spin angular momentum by nature of its daily rotation around the polar axis. The total angular momentum is the sum of the spin and orbital angular momenta. In the case of the Earth the primary conserved quantity is the total angular momentum of the solar system because angular momentum is exchanged to a small but important extent among the planets and the Sun. The orbital angular momentum vector of a point particle is always parallel and directly proportional to its orbital angular velocity vector ω, where the constant of proportionality depends on both the mass of the particle and its distance from origin. The spin angular momentum vector of a rigid body is proportional but not always parallel to the spin angular velocity vector Ω, making the constant of proportionality a second-rank tensor rather than a scalar. Orbital angular momentum in two dimensions Angular momentum is a vector quantity (more precisely, a pseudovector) that represents the product of a body's rotational inertia and rotational velocity (in radians/sec) about a particular axis. However, if the particle's trajectory lies in a single plane, it is sufficient to discard the vector nature of angular momentum, and treat it as a scalar (more precisely, a pseudoscalar). Angular momentum can be considered a rotational analog of linear momentum. Thus, where linear momentum is proportional to mass and linear speed angular momentum is proportional to moment of inertia and angular speed measured in radians per second. Unlike mass, which depends only on amount of matter, moment of inertia depends also on the position of the axis of rotation and the distribution of the matter. Unlike linear velocity, which does not depend upon the choice of origin, orbital angular velocity is always measured with respect to a fixed origin. Therefore, strictly speaking, should be referred to as the angular momentum relative to that center. In the case of circular motion of a single particle, we can use and to expand angular momentum as reducing to: the product of the radius of rotation and the linear momentum of the particle , where is the linear (tangential) speed. This simple analysis can also apply to non-circular motion if one uses the component of the motion perpendicular to the radius vector: where is the perpendicular component of the motion. Expanding, rearranging, and reducing, angular momentum can also be expressed, where is the length of the moment arm, a line dropped perpendicularly from the origin onto the path of the particle. It is this definition, , to which the term moment of momentum refers. Scalar angular momentum from Lagrangian mechanics Another approach is to define angular momentum as the conjugate momentum (also called canonical momentum) of the angular coordinate expressed in the Lagrangian of the mechanical system. Consider a mechanical system with a mass constrained to move in a circle of radius in the absence of any external force field. 
The kinetic energy of the system is And the potential energy is Then the Lagrangian is The generalized momentum "canonically conjugate to" the coordinate is defined by Orbital angular momentum in three dimensions To completely define orbital angular momentum in three dimensions, it is required to know the rate at which the position vector sweeps out angle, the direction perpendicular to the instantaneous plane of angular displacement, and the mass involved, as well as how this mass is distributed in space. By retaining this vector nature of angular momentum, the general nature of the equations is also retained, and can describe any sort of three-dimensional motion about the center of rotation – circular, linear, or otherwise. In vector notation, the orbital angular momentum of a point particle in motion about the origin can be expressed as: where is the moment of inertia for a point mass, is the orbital angular velocity of the particle about the origin, is the position vector of the particle relative to the origin, and , is the linear velocity of the particle relative to the origin, and is the mass of the particle. This can be expanded, reduced, and by the rules of vector algebra, rearranged: which is the cross product of the position vector and the linear momentum of the particle. By the definition of the cross product, the vector is perpendicular to both and . It is directed perpendicular to the plane of angular displacement, as indicated by the right-hand rule – so that the angular velocity is seen as counter-clockwise from the head of the vector. Conversely, the vector defines the plane in which and lie. By defining a unit vector perpendicular to the plane of angular displacement, a scalar angular speed results, where and where is the perpendicular component of the motion, as above. The two-dimensional scalar equations of the previous section can thus be given direction: and for circular motion, where all of the motion is perpendicular to the radius . In the spherical coordinate system the angular momentum vector expresses as Analogy to linear momentum Angular momentum can be described as the rotational analog of linear momentum. Like linear momentum it involves elements of mass and displacement. Unlike linear momentum it also involves elements of position and shape. Many problems in physics involve matter in motion about some certain point in space, be it in actual rotation about it, or simply moving past it, where it is desired to know what effect the moving matter has on the point—can it exert energy upon it or perform work about it? Energy, the ability to do work, can be stored in matter by setting it in motion—a combination of its inertia and its displacement. Inertia is measured by its mass, and displacement by its velocity. Their product, is the matter's momentum. Referring this momentum to a central point introduces a complication: the momentum is not applied to the point directly. For instance, a particle of matter at the outer edge of a wheel is, in effect, at the end of a lever of the same length as the wheel's radius, its momentum turning the lever about the center point. This imaginary lever is known as the moment arm. It has the effect of multiplying the momentum's effort in proportion to its length, an effect known as a moment. Hence, the particle's momentum referred to a particular point, is the angular momentum, sometimes called, as here, the moment of momentum of the particle versus that particular center point. 
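The vector definition L = m(r × v) and its scalar form involving the moment arm can be illustrated numerically. A minimal sketch assuming NumPy, with arbitrary illustrative values for the mass, position and velocity:

```python
import numpy as np

m = 2.0                                  # mass (kg), illustrative value
r = np.array([3.0, 1.0, 0.0])            # position relative to the origin (m)
v = np.array([-1.0, 2.0, 0.0])           # velocity (m/s)

L = m * np.cross(r, v)                   # angular momentum vector, L = r x p
print(L)                                 # [ 0.  0. 14.], perpendicular to the plane of r and v

# Scalar form: |L| = m * v * r_perp, where r_perp is the moment arm, the
# perpendicular distance from the origin to the particle's line of motion.
speed = np.linalg.norm(v)
r_perp = np.linalg.norm(r - (r @ v) / speed**2 * v)
print(np.isclose(np.linalg.norm(L), m * speed * r_perp))   # True
```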
The equation combines a moment (a mass turning moment arm ) with a linear (straight-line equivalent) speed . Linear speed referred to the central point is simply the product of the distance and the angular speed versus the point: another moment. Hence, angular momentum contains a double moment: Simplifying slightly, the quantity is the particle's moment of inertia, sometimes called the second moment of mass. It is a measure of rotational inertia. The above analogy of the translational momentum and rotational momentum can be expressed in vector form: for linear motion for rotation The direction of momentum is related to the direction of the velocity for linear movement. The direction of angular momentum is related to the angular velocity of the rotation. Because moment of inertia is a crucial part of the spin angular momentum, the latter necessarily includes all of the complications of the former, which is calculated by multiplying elementary bits of the mass by the squares of their distances from the center of rotation. Therefore, the total moment of inertia, and the angular momentum, is a complex function of the configuration of the matter about the center of rotation and the orientation of the rotation for the various bits. For a rigid body, for instance a wheel or an asteroid, the orientation of rotation is simply the position of the rotation axis versus the matter of the body. It may or may not pass through the center of mass, or it may lie completely outside of the body. For the same body, angular momentum may take a different value for every possible axis about which rotation may take place. It reaches a minimum when the axis passes through the center of mass. For a collection of objects revolving about a center, for instance all of the bodies of the Solar System, the orientations may be somewhat organized, as is the Solar System, with most of the bodies' axes lying close to the system's axis. Their orientations may also be completely random. In brief, the more mass and the farther it is from the center of rotation (the longer the moment arm), the greater the moment of inertia, and therefore the greater the angular momentum for a given angular velocity. In many cases the moment of inertia, and hence the angular momentum, can be simplified by, where is the radius of gyration, the distance from the axis at which the entire mass may be considered as concentrated. Similarly, for a point mass the moment of inertia is defined as, where is the radius of the point mass from the center of rotation, and for any collection of particles as the sum, Angular momentum's dependence on position and shape is reflected in its units versus linear momentum: kg⋅m²/s or N⋅m⋅s for angular momentum versus kg⋅m/s or N⋅s for linear momentum. When calculating angular momentum as the product of the moment of inertia times the angular velocity, the angular velocity must be expressed in radians per second, where the radian assumes the dimensionless value of unity. (When performing dimensional analysis, it may be productive to use orientational analysis which treats radians as a base unit, but this is not done in the International System of Units). The units of angular momentum can be interpreted as torque⋅time. An object with angular momentum of can be reduced to zero angular velocity by an angular impulse of . 
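The Earth values quoted earlier can be reproduced to a good approximation from these relations. A rough sketch in Python, treating the orbit as circular and the Earth as a uniform solid sphere, with rounded constants assumed for illustration:

```python
import math

# Rounded constants (assumed values for illustration)
M_earth = 5.97e24         # kg
R_earth = 6.371e6         # m, mean radius
a_orbit = 1.496e11        # m, mean Earth-Sun distance
T_orbit = 365.25 * 86400  # s, orbital period
T_spin  = 86164.0         # s, sidereal day

# Orbital angular momentum about the Sun: treat Earth as a point mass on a
# circular orbit, L = I * omega with I = M * a**2.
omega_orbit = 2 * math.pi / T_orbit
L_orbit = M_earth * a_orbit**2 * omega_orbit
print(f"orbital: {L_orbit:.2e} kg m^2/s")   # ~2.7e40, close to the quoted 2.66e40

# Spin angular momentum about Earth's own axis: uniform solid sphere,
# I = (2/5) M R**2.
omega_spin = 2 * math.pi / T_spin
L_spin = 0.4 * M_earth * R_earth**2 * omega_spin
print(f"spin:    {L_spin:.2e} kg m^2/s")    # ~7.1e33, close to the quoted 7.05e33
```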
The plane perpendicular to the axis of angular momentum and passing through the center of mass is sometimes called the invariable plane, because the direction of the axis remains fixed if only the interactions of the bodies within the system, free from outside influences, are considered. One such plane is the invariable plane of the Solar System. Angular momentum and torque Newton's second law of motion can be expressed mathematically, or force = mass × acceleration. The rotational equivalent for point particles may be derived as follows: which means that the torque (i.e. the time derivative of the angular momentum) is Because the moment of inertia is , it follows that , and which, reduces to This is the rotational analog of Newton's second law. Note that the torque is not necessarily proportional or parallel to the angular acceleration (as one might expect). The reason for this is that the moment of inertia of a particle can change with time, something that cannot occur for ordinary mass. Conservation of angular momentum General considerations A rotational analog of Newton's third law of motion might be written, "In a closed system, no torque can be exerted on any matter without the exertion on some other matter of an equal and opposite torque about the same axis." Hence, angular momentum can be exchanged between objects in a closed system, but total angular momentum before and after an exchange remains constant (is conserved). Seen another way, a rotational analogue of Newton's first law of motion might be written, "A rigid body continues in a state of uniform rotation unless acted upon by an external influence." Thus with no external influence to act upon it, the original angular momentum of the system remains constant. The conservation of angular momentum is used in analyzing central force motion. If the net force on some body is directed always toward some point, the center, then there is no torque on the body with respect to the center, as all of the force is directed along the radius vector, and none is perpendicular to the radius. Mathematically, torque because in this case and are parallel vectors. Therefore, the angular momentum of the body about the center is constant. This is the case with gravitational attraction in the orbits of planets and satellites, where the gravitational force is always directed toward the primary body and orbiting bodies conserve angular momentum by exchanging distance and velocity as they move about the primary. Central force motion is also used in the analysis of the Bohr model of the atom. For a planet, angular momentum is distributed between the spin of the planet and its revolution in its orbit, and these are often exchanged by various mechanisms. The conservation of angular momentum in the Earth–Moon system results in the transfer of angular momentum from Earth to Moon, due to tidal torque the Moon exerts on the Earth. This in turn results in the slowing down of the rotation rate of Earth, at about 65.7 nanoseconds per day, and in gradual increase of the radius of Moon's orbit, at about 3.82 centimeters per year. The conservation of angular momentum explains the angular acceleration of an ice skater as they bring their arms and legs close to the vertical axis of rotation. By bringing part of the mass of their body closer to the axis, they decrease their body's moment of inertia. 
Because angular momentum is the product of moment of inertia and angular velocity, if the angular momentum remains constant (is conserved), then the angular velocity (rotational speed) of the skater must increase. The same phenomenon results in extremely fast spin of compact stars (like white dwarfs, neutron stars and black holes) when they are formed out of much larger and slower rotating stars. Conservation is not always a full explanation for the dynamics of a system but is a key constraint. For example, a spinning top is subject to gravitational torque making it lean over and change the angular momentum about the nutation axis, but neglecting friction at the point of spinning contact, it has a conserved angular momentum about its spinning axis, and another about its precession axis. Also, in any planetary system, the planets, star(s), comets, and asteroids can all move in numerous complicated ways, but only so that the angular momentum of the system is conserved. Noether's theorem states that every conservation law is associated with a symmetry (invariant) of the underlying physics. The symmetry associated with conservation of angular momentum is rotational invariance. The fact that the physics of a system is unchanged if it is rotated by any angle about an axis implies that angular momentum is conserved. Relation to Newton's second law of motion While angular momentum total conservation can be understood separately from Newton's laws of motion as stemming from Noether's theorem in systems symmetric under rotations, it can also be understood simply as an efficient method of calculation of results that can also be otherwise arrived at directly from Newton's second law, together with laws governing the forces of nature (such as Newton's third law, Maxwell's equations and Lorentz force). Indeed, given initial conditions of position and velocity for every point, and the forces at such a condition, one may use Newton's second law to calculate the second derivative of position, and solving for this gives full information on the development of the physical system with time. Note, however, that this is no longer true in quantum mechanics, due to the existence of particle spin, which is angular momentum that cannot be described by the cumulative effect of point-like motions in space. As an example, consider decreasing of the moment of inertia, e.g. when a figure skater is pulling in their hands, speeding up the circular motion. In terms of angular momentum conservation, we have, for angular momentum L, moment of inertia I and angular velocity ω: Using this, we see that the change requires an energy of: so that a decrease in the moment of inertia requires investing energy. This can be compared to the work done as calculated using Newton's laws. Each point in the rotating body is accelerating, at each point of time, with radial acceleration of: Let us observe a point of mass m, whose position vector relative to the center of motion is perpendicular to the z-axis at a given point of time, and is at a distance z. The centripetal force on this point, keeping the circular motion, is: Thus the work required for moving this point to a distance dz farther from the center of motion is: For a non-pointlike body one must integrate over this, with m replaced by the mass density per unit z. This gives: which is exactly the energy required for keeping the angular momentum conserved. Note, that the above calculation can also be performed per mass, using kinematics only. 
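A short calculation makes the skater example concrete. The moments of inertia and spin rate below are assumed illustrative values, not measurements; the point is that with the angular momentum L = Iω held fixed, halving the moment of inertia doubles the angular velocity and doubles the rotational kinetic energy L²/(2I), the extra energy being supplied by the work of pulling the arms inward.

```python
# Illustrative figure-skater numbers (assumed, not measured).
I_arms_out = 3.0      # kg m^2, moment of inertia with arms extended
I_arms_in  = 1.5      # kg m^2, moment of inertia with arms pulled in
omega_out  = 2.0      # rad/s, initial spin rate

L = I_arms_out * omega_out        # angular momentum, conserved (no external torque)
omega_in = L / I_arms_in          # new angular velocity after pulling the arms in

KE_out = 0.5 * I_arms_out * omega_out**2   # = L**2 / (2 * I_arms_out)
KE_in  = 0.5 * I_arms_in * omega_in**2     # = L**2 / (2 * I_arms_in)

print(omega_in)            # 4.0 rad/s: the spin rate doubles
print(KE_in - KE_out)      # 6.0 J: work the skater must do against the centripetal pull
```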
Thus the phenomena of figure skater accelerating tangential velocity while pulling their hands in, can be understood as follows in layman's language: The skater's palms are not moving in a straight line, so they are constantly accelerating inwards, but do not gain additional speed because the accelerating is always done when their motion inwards is zero. However, this is different when pulling the palms closer to the body: The acceleration due to rotation now increases the speed; but because of the rotation, the increase in speed does not translate to a significant speed inwards, but to an increase of the rotation speed. Stationary-action principle In classical mechanics it can be shown that the rotational invariance of action functionals implies conservation of angular momentum. The action is defined in classical physics as a functional of positions, often represented by the use of square brackets, and the final and initial times. It assumes the following form in cartesian coordinates:where the repeated indices indicate summation over the index. If the action is invariant of an infinitesimal transformation, it can be mathematically stated as: . Under the transformation, , the action becomes: where we can employ the expansion of the terms up-to first order in : giving the following change in action: Since all rotations can be expressed as matrix exponential of skew-symmetric matrices, i.e. as where is a skew-symmetric matrix and is angle of rotation, we can express the change of coordinates due to the rotation , up-to first order of infinitesimal angle of rotation, as: Combining the equation of motion and rotational invariance of action, we get from the above equations that:Since this is true for any matrix that satisfies it results in the conservation of the following quantity: as . This corresponds to the conservation of angular momentum throughout the motion. Lagrangian formalism In Lagrangian mechanics, angular momentum for rotation around a given axis, is the conjugate momentum of the generalized coordinate of the angle around the same axis. For example, , the angular momentum around the z axis, is: where is the Lagrangian and is the angle around the z axis. Note that , the time derivative of the angle, is the angular velocity . Ordinarily, the Lagrangian depends on the angular velocity through the kinetic energy: The latter can be written by separating the velocity to its radial and tangential part, with the tangential part at the x-y plane, around the z-axis, being equal to: where the subscript i stands for the i-th body, and m, vT and ωz stand for mass, tangential velocity around the z-axis and angular velocity around that axis, respectively. For a body that is not point-like, with density ρ, we have instead: where integration runs over the area of the body, and Iz is the moment of inertia around the z-axis. Thus, assuming the potential energy does not depend on ωz (this assumption may fail for electromagnetic systems), we have the angular momentum of the ith object: We have thus far rotated each object by a separate angle; we may also define an overall angle θz by which we rotate the whole system, thus rotating also each object around the z-axis, and have the overall angular momentum: From Euler–Lagrange equations it then follows that: Since the lagrangian is dependent upon the angles of the object only through the potential, we have: which is the torque on the ith object. 
Suppose the system is invariant to rotations, so that the potential is independent of an overall rotation by the angle θz (thus it may depend on the angles of objects only through their differences, in the form ). We therefore get for the total angular momentum: And thus the angular momentum around the z-axis is conserved. This analysis can be repeated separately for each axis, giving conservation of the angular momentum vector. However, the angles around the three axes cannot be treated simultaneously as generalized coordinates, since they are not independent; in particular, two angles per point suffice to determine its position. In the case of a rigid body, fully describing it requires, in addition to three translational degrees of freedom, the specification of three rotational degrees of freedom; however, these cannot be defined as rotations around the Cartesian axes (see Euler angles). This caveat is reflected in quantum mechanics in the non-trivial commutation relations of the different components of the angular momentum operator. Hamiltonian formalism Equivalently, in Hamiltonian mechanics the Hamiltonian can be described as a function of the angular momentum. As before, the part of the kinetic energy related to rotation around the z-axis for the ith object is: which is analogous to the energy dependence upon momentum along the z-axis, . Hamilton's equations relate the angle around the z-axis to its conjugate momentum, the angular momentum around the same axis: The first equation gives And so we get the same results as in the Lagrangian formalism. Note that, combining all axes together, we write the kinetic energy as: where pr is the momentum in the radial direction, and the moment of inertia is a 3-dimensional matrix; bold letters stand for 3-dimensional vectors. For point-like bodies we have: This form of the kinetic energy part of the Hamiltonian is useful in analyzing central potential problems, and is easily transformed to a quantum mechanical work frame (e.g. in the hydrogen atom problem). Angular momentum in orbital mechanics While in classical mechanics the language of angular momentum can be replaced by Newton's laws of motion, it is particularly useful for motion in a central potential such as planetary motion in the solar system. Thus, the orbit of a planet in the solar system is defined by its energy, angular momentum and angles of the orbit major axis relative to a coordinate frame. In astrodynamics and celestial mechanics, a quantity closely related to angular momentum, called specific angular momentum, is defined as the angular momentum per unit mass. Note that mass is often unimportant in orbital mechanics calculations, because motion of a body is determined by gravity. The primary body of the system is often so much larger than any bodies in motion about it that the gravitational effect of the smaller bodies on it can be neglected; it maintains, in effect, constant velocity. The motion of all bodies is affected by its gravity in the same way, regardless of mass, and therefore all move approximately the same way under the same conditions. Solid bodies Angular momentum is also an extremely useful concept for describing rotating rigid bodies such as a gyroscope or a rocky planet. For a continuous mass distribution with density function ρ(r), a differential volume element dV with position vector r within the mass has a mass element dm = ρ(r)dV. 
Therefore, the infinitesimal angular momentum of this element is: and integrating this differential over the volume of the entire mass gives its total angular momentum: In the derivation which follows, integrals similar to this can replace the sums for the case of continuous mass. Collection of particles For a collection of particles in motion about an arbitrary origin, it is informative to develop the equation of angular momentum by resolving their motion into components about their own center of mass and about the origin. Given, is the mass of particle , is the position vector of particle w.r.t. the origin, is the velocity of particle w.r.t. the origin, is the position vector of the center of mass w.r.t. the origin, is the velocity of the center of mass w.r.t. the origin, is the position vector of particle w.r.t. the center of mass, is the velocity of particle w.r.t. the center of mass, The total mass of the particles is simply their sum, The position vector of the center of mass is defined by, By inspection, and The total angular momentum of the collection of particles is the sum of the angular momentum of each particle, Expanding , Expanding , It can be shown that (see sidebar), and therefore the second and third terms vanish, The first term can be rearranged, and total angular momentum for the collection of particles is finally, The first term is the angular momentum of the center of mass relative to the origin. Similar to , below, it is the angular momentum of one particle of mass M at the center of mass moving with velocity V. The second term is the angular momentum of the particles moving relative to the center of mass, similar to , below. The result is general—the motion of the particles is not restricted to rotation or revolution about the origin or center of mass. The particles need not be individual masses, but can be elements of a continuous distribution, such as a solid body. Rearranging equation () by vector identities, multiplying both terms by "one", and grouping appropriately, gives the total angular momentum of the system of particles in terms of moment of inertia and angular velocity , Single particle case In the case of a single particle moving about the arbitrary origin, and equations () and () for total angular momentum reduce to, Case of a fixed center of mass For the case of the center of mass fixed in space with respect to the origin, and equations () and () for total angular momentum reduce to, Angular momentum in general relativity In modern (20th century) theoretical physics, angular momentum (not including any intrinsic angular momentum – see below) is described using a different formalism, instead of a classical pseudovector. In this formalism, angular momentum is the 2-form Noether charge associated with rotational invariance. As a result, angular momentum is generally not conserved locally for general curved spacetimes, unless they have rotational symmetry; whereas globally the notion of angular momentum itself only makes sense if the spacetime is asymptotically flat. If the spacetime is only axially symmetric like for the Kerr metric, the total angular momentum is not conserved but is conserved which is related to the invariance of rotating around the symmetry-axis, where note that where is the metric, is the rest mass, is the four-velocity, and is the four-position in spherical coordinates. 
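Returning to classical point particles, the decomposition derived above for a collection of particles, total angular momentum equals the angular momentum of the centre of mass about the origin plus the angular momentum of the particles about the centre of mass, can be checked numerically. A small sketch assuming NumPy, with randomly chosen masses, positions and velocities:

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.uniform(1.0, 5.0, size=6)          # particle masses
r = rng.normal(size=(6, 3))                # positions w.r.t. the origin
v = rng.normal(size=(6, 3))                # velocities w.r.t. the origin

M = m.sum()
R_cm = (m[:, None] * r).sum(axis=0) / M    # centre-of-mass position
V_cm = (m[:, None] * v).sum(axis=0) / M    # centre-of-mass velocity

# Total angular momentum about the origin.
L_total = np.sum(m[:, None] * np.cross(r, v), axis=0)

# Angular momentum of the centre of mass about the origin, plus the
# angular momentum of the particles about the centre of mass.
L_cm = M * np.cross(R_cm, V_cm)
L_rel = np.sum(m[:, None] * np.cross(r - R_cm, v - V_cm), axis=0)

print(np.allclose(L_total, L_cm + L_rel))   # True
```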
In classical mechanics, the angular momentum of a particle can be reinterpreted as a plane element: in which the exterior product (∧) replaces the cross product (×) (these products have similar characteristics but are nonequivalent). This has the advantage of a clearer geometric interpretation as a plane element, defined using the vectors x and p, and the expression is true in any number of dimensions. In Cartesian coordinates: or more compactly in index notation: The angular velocity can also be defined as an anti-symmetric second order tensor, with components ωij. The relation between the two anti-symmetric tensors is given by the moment of inertia which must now be a fourth order tensor: Again, this equation in L and ω as tensors is true in any number of dimensions. This equation also appears in the geometric algebra formalism, in which L and ω are bivectors, and the moment of inertia is a mapping between them. In relativistic mechanics, the relativistic angular momentum of a particle is expressed as an anti-symmetric tensor of second order: in terms of four-vectors, namely the four-position X and the four-momentum P, and absorbs the above L together with the moment of mass, i.e., the product of the relativistic mass of the particle and its centre of mass, which can be thought of as describing the motion of its centre of mass, since mass–energy is conserved. In each of the above cases, for a system of particles the total angular momentum is just the sum of the individual particle angular momenta, and the centre of mass is for the system. Angular momentum in quantum mechanics In quantum mechanics, angular momentum (like other quantities) is expressed as an operator, and its one-dimensional projections have quantized eigenvalues. Angular momentum is subject to the Heisenberg uncertainty principle, implying that at any time, only one projection (also called "component") can be measured with definite precision; the other two then remain uncertain. Because of this, the axis of rotation of a quantum particle is undefined. Quantum particles do possess a type of non-orbital angular momentum called "spin", but this angular momentum does not correspond to a spinning motion. In relativistic quantum mechanics the above relativistic definition becomes a tensorial operator. Spin, orbital, and total angular momentum The classical definition of angular momentum as can be carried over to quantum mechanics, by reinterpreting r as the quantum position operator and p as the quantum momentum operator. L is then an operator, specifically called the orbital angular momentum operator. The components of the angular momentum operator satisfy the commutation relations of the Lie algebra so(3). Indeed, these operators are precisely the infinitesimal action of the rotation group on the quantum Hilbert space. (
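The so(3) commutation relations satisfied by the angular momentum components can be verified explicitly in a finite-dimensional example. A sketch assuming NumPy, using the standard spin-1 matrices in units where ħ = 1 (the matrices are written out by hand here rather than taken from any library):

```python
import numpy as np

# Spin-1 angular momentum matrices in the standard z-basis, units of hbar = 1.
s = 1 / np.sqrt(2)
Jx = s * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = s * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]], dtype=complex)
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def commutator(A, B):
    return A @ B - B @ A

# so(3) commutation relations: [Jx, Jy] = i Jz and cyclic permutations.
print(np.allclose(commutator(Jx, Jy), 1j * Jz))   # True
print(np.allclose(commutator(Jy, Jz), 1j * Jx))   # True
print(np.allclose(commutator(Jz, Jx), 1j * Jy))   # True
```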
Physical sciences
Classical mechanics
null
2840
https://en.wikipedia.org/wiki/Plum%20pudding%20model
Plum pudding model
The plum pudding model was the first scientific model of the atom to describe an internal structure. It was first proposed by J. J. Thomson in 1904 following his discovery of the electron in 1897, and was rendered obsolete by Ernest Rutherford's discovery of the atomic nucleus in 1911. The model tried to account for two properties of atoms then known: that there are electrons, and that atoms have no net electric charge. Logically there had to be an equal amount of positive charge to balance out the negative charge of the electrons. As Thomson had no idea as to the source of this positive charge, he tentatively proposed that it was everywhere in the atom, and that the atom was spherical. This was the mathematically simplest hypothesis to fit the available evidence, or lack thereof. In such a sphere, the negatively charged electrons would distribute themselves in a more or less even manner throughout the volume, simultaneously repelling each other while being attracted to the positive sphere's center. Despite Thomson's efforts, his model couldn't account for emission spectra and valencies. Based on experimental studies of alpha particle scattering (in the gold foil experiment), Ernest Rutherford developed an alternative model for the atom featuring a compact nucleus where the positive charge is concentrated. Thomson's model is popularly referred to as the "plum pudding model" with the notion that the electrons are distributed uniformly like raisins in a plum pudding. Neither Thomson nor his colleagues ever used this analogy. It seems to have been coined by popular science writers to make the model easier to understand for the layman. The analogy is perhaps misleading because Thomson likened the positive sphere to a liquid rather than a solid since he thought the electrons moved around in it. Significance Thomson's model marks the moment when the development of atomic theory passed from chemists to physicists. While atomic theory was widely accepted by chemists by the end of the 19th century, physicists remained skeptical because the atomic model lacked any properties which concerned their field, such as electric charge, magnetic moment, volume, or absolute mass. Thomson himself was a physicist and his atomic model was a byproduct of his investigations of cathode rays, by which he discovered electrons. Thomson hypothesized that the quantity, arrangement, and motions of electrons in the atom could explain its physical and chemical properties, such as emission spectra, valencies, reactivity, and ionization. He was on the right track, though his approach was based on classical mechanics and he did not have the insight to incorporate quantized energy into it. Background Throughout the 19th century evidence from chemistry and statistical mechanics accumulated that matter was composed of atoms. The structure of the atom was discussed, and by the end of the century the leading model was the vortex theory of the atom, proposed by William Thomson (later Lord Kelvin) in 1867. By 1890, J.J. Thomson had his own version called the "nebular atom" hypothesis, in which atoms were composed of immaterial vortices and suggested similarities between the arrangement of vortices and periodic regularity found among the chemical elements. Thomson's discovery of the electron in 1897 changed his views. Thomson called them "corpuscles" (particles), but they were more commonly called "electrons", the name G. J. Stoney had coined for the "fundamental unit quantity of electricity" in 1891. 
However even late in 1899, few scientists believed in subatomic particles. Another emerging scientific theme of the 19th century was the discovery and study of radioactivity. Thomson discovered the electron by studying cathode rays, and in 1900 Henri Becquerel determined that the radiation from uranium, now called beta particles, had the same charge/mass ratio as cathode rays. These beta particles were believed to be electrons travelling at high speed. The particles were used by Thomson to probe atoms to find evidence for his atomic theory. The other form of radiation critical to this era of atomic models was alpha particles. Heavier and slower than beta particles, these were the key tool used by Rutherford to find evidence against Thomson's model. In addition to the emerging atomic theory, the electron, and radiation, the last element of history was the many studies of atomic spectra published in the late 19th century. Part of the attraction of the vortex model was its possible role in describing the spectral data as vibrational responses to electromagnetic radiation. Neither Thomson's model nor its successor, Rutherford's model, made progress towards understanding atomic spectra. That would have to wait until Niels Bohr built the first quantum-based atom model. Development Thomson's model was the first to assign a specific inner structure to an atom, though his earliest descriptions did not include mathematical formulas. From 1897 through 1913, Thomson proposed a series of increasingly detailed polyelectron models for the atom. His first versions were qualitative culminating in his 1906 paper and follow on summaries. Thomson's model changed over the course of its initial publication, finally becoming a model with much more mobility containing electrons revolving in the dense field of positive charge rather than a static structure. Thomson attempted unsuccessfully to reshape his model to account for some of the major spectral lines experimentally known for several elements. 1897 Corpuscles inside atoms In a paper titled Cathode Rays, Thomson demonstrated that cathode rays are not light but made of negatively charged particles which he called corpuscles. He observed that cathode rays can be deflected by electric and magnetic fields, which does not happen with light rays. In a few paragraphs near the end of this long paper Thomson discusses the possibility that atoms were made of these corpuscles, calling them primordial atoms. Thomson believed that the intense electric field around the cathode caused the surrounding gas molecules to split up into their component corpuscles, thereby generating cathode rays. Thomson thus showed evidence that atoms were divisible, though he did not attempt to describe their structure at this point. Thomson notes that he was not the first scientist to propose that atoms are divisible, making reference to William Prout who in 1815 found that the atomic weights of various elements were multiples of hydrogen's atomic weight and hypothesised that all atoms were made of hydrogen atoms fused together. Prout's hypothesis was dismissed by chemists when by the 1830s it was found that some elements seemed to have a non-integer atomic weight—e.g. chlorine has an atomic weight of about 35.45. But the idea continued to intrigue scientists. The discrepancies were eventually explained with the discovery of isotopes in 1912. 
A few months after Thomson's paper appeared, George FitzGerald suggested that the corpuscle identified by Thomson from cathode rays and proposed as a constituent of the atom was a "free electron", as described by the physicists Joseph Larmor and Hendrik Lorentz. While Thomson did not adopt the terminology, the connection convinced other scientists that cathode rays were particles, an important step in their eventual acceptance of an atomic model based on sub-atomic particles. In 1899 Thomson reiterated his atomic model in a paper that showed that negative electricity created by ultraviolet light landing on a metal (known now as the photoelectric effect) has the same mass-to-charge ratio as cathode rays; then he applied his previous method for determining the charge on ions to the negative electric particles created by ultraviolet light. He estimated that the electron's mass was 0.0014 times that of the hydrogen ion (roughly 1/700 as a fraction). In the conclusion of this paper he writes: 1904 Mechanical model of the atom Thomson provided his first detailed description of the atom in his 1904 paper On the Structure of the Atom. Thomson starts with a short description of his model ... the atoms of the elements consist of a number of negatively electrified corpuscles enclosed in a sphere of uniform positive electrification, ... Primarily focused on the electrons, Thomson adopted the positive sphere from Kelvin's atom model proposed a year earlier. He then gives a detailed mechanical analysis of such a system, distributing the electrons uniformly around a ring. The attraction of the positive electrification is balanced by the mutual repulsion of the electrons. His analysis focuses on stability, looking for cases where small changes in position are countered by restoring forces. After discussing his many formulae for stability he turned to analysing patterns in the number of electrons in various concentric rings of stable configurations. These regular patterns, Thomson argued, are analogous to the periodic law of chemistry behind the structure of the periodic table. This concept, that a model based on subatomic particles could account for chemical trends, encouraged interest in Thomson's model and influenced future work even if the details of Thomson's electron assignments turned out to be incorrect. Thomson at this point believed that all the mass of the atom was carried by the electrons. This would mean that even a small atom would have to contain thousands of electrons, and the positive electrification that encapsulated them was without mass. 1905 lecture on electron arrangements In a lecture delivered to the Royal Institution of Great Britain in 1905, Thomson explained that it was too computationally difficult for him to calculate the movements of large numbers of electrons in the positive sphere, so he proposed a practical experiment. This involved magnetised pins pushed into cork discs and set afloat in a basin of water. The pins were oriented such that they repelled each other. Above the centre of the basin was suspended an electromagnet that attracted the pins. The equilibrium arrangement the pins took informed Thomson on what arrangements the electrons in an atom might take. For instance, he observed that while five pins would arrange themselves in a stable pentagon around the centre, six pins could not form a stable hexagon. Instead, one pin would move to the centre and the other five would form a pentagon around the centre pin, and this arrangement was stable. 
As he added more pins, they would arrange themselves in concentric rings around the centre. The experiment functioned in two dimensions instead of three, but Thomson inferred that the electrons in the atom arranged themselves in concentric shells and could move within these shells, but did not move from one shell to another except when electrons were added or subtracted from the atom. 1906 Estimating electrons per atom Before 1906 Thomson considered the atomic weight to be due to the mass of the electrons (which he continued to call "corpuscles"). Based on his own estimates of the electron mass, an atom would need tens of thousands of electrons to account for the mass. In 1906 he used three different methods, X-ray scattering, beta ray absorption, and the optical properties of gases, to estimate that the "number of corpuscles is not greatly different from the atomic weight". This reduced the number of electrons to tens or at most a couple of hundred, and that in turn meant that the positive sphere in Thomson's model contained most of the mass of the atom. This meant that Thomson's mechanical stability work from 1904 and the comparison to the periodic table were no longer valid. Moreover, the alpha particle, so important to the next advance in atomic theory by Rutherford, would no longer be viewed as an atom containing thousands of electrons. In 1907, Thomson published The Corpuscular Theory of Matter which reviewed his ideas on the atom's structure and proposed further avenues of research. In Chapter 6, he further elaborates his experiment using magnetised pins in water, providing an expanded table. For instance, if 59 pins were placed in the pool, they would arrange themselves in concentric rings of the order 20-16-13-8-2 (from outermost to innermost). In Chapter 7, Thomson summarised his 1906 results on the number of electrons in an atom. He included one important correction: he replaced the beta-particle analysis with one based on the cathode ray experiments of August Becker, giving a result in better agreement with other approaches to the problem. Experiments by other scientists in this field had shown that atoms contain far fewer electrons than Thomson previously thought. Thomson now believed the number of electrons in an atom was a small multiple of its atomic weight: "the number of corpuscles in an atom of any element is proportional to the atomic weight of the element — it is a multiple, and not a large one, of the atomic weight of the element." This meant that almost all of the atom's mass had to be carried by the positive sphere, whatever it was made of. Thomson in this book estimated that a hydrogen atom is 1,700 times heavier than an electron (the current measurement is 1,837). Thomson noted that no scientist had yet found a positively charged particle smaller than a hydrogen ion. He also wrote that the positive charge of an atom is a multiple of a basic unit of positive charge, equal to the negative charge of an electron. Thomson refused to jump to the conclusion that the basic unit of positive charge has a mass equal to that of the hydrogen ion, arguing that scientists first had to know how many electrons an atom contains. For all he could tell, a hydrogen ion might still contain a few electrons—perhaps two electrons and three units of positive charge. 1910 Multiple scattering Thomson's difficulty with beta scattering in 1906 led him to renewed interest in the topic. He encouraged J. 
Arnold Crowther to experiment with beta scattering through thin foils and, in 1910, Thomson produced a new theory of beta scattering. The two innovations in this paper were the introduction of scattering from the positive sphere of the atom and the analysis showing that multiple or compound scattering was critical to the final results. This theory and Crowther's experimental results would be confronted by Rutherford's theory and by Geiger and Marsden's new experiments with alpha particles. Another innovation in Thomson's 1910 paper was that he modelled how an atom might deflect an incoming beta particle if the positive charge of the atom existed in discrete units of equal but arbitrary size, spread evenly throughout the atom, separated by empty space, with each unit having a positive charge equal to the electron's negative charge. Thomson therefore came close to deducing the existence of the proton, which was something Rutherford eventually did. In Rutherford's model of the atom, the protons are clustered in a very small nucleus, but in Thomson's alternative model, the positive units were spread throughout the atom. Thomson's 1910 scattering model In his 1910 paper "On the Scattering of rapidly moving Electrified Particles", Thomson presented equations that modelled how beta particles scatter in a collision with an atom. His work was based on beta scattering studies by James Crowther. Deflection by the positive sphere Thomson typically assumed the positive charge in the atom was uniformly distributed throughout its volume, encapsulating the electrons. In his 1910 paper, Thomson presented the following equation which isolated the effect of this positive sphere: $\bar{\theta}_1 = \frac{\pi}{4}\,\frac{k q_e q_g}{m v^2 R}$, where k is the Coulomb constant, qe is the charge of the beta particle, qg is the charge of the positive sphere, m is the mass of the beta particle, v is its speed, and R is the radius of the sphere. Because the atom is many thousands of times heavier than the beta particle, no correction for recoil is needed. Thomson did not explain how this equation was developed, but the historian John L. Heilbron provided an educated guess he called a "straight-line" approximation. Consider a beta particle passing through the positive sphere with its initial trajectory at a lateral distance b from the centre. The path is assumed to have a very small deflection and therefore is treated here as a straight line. Inside a sphere of uniformly distributed positive charge, the force exerted on the beta particle at a distance r from the centre would be directed along the radius with magnitude: $F = \frac{k q_e q_g r}{R^3}$. The component of force perpendicular to the trajectory, and thus deflecting the path of the particle, would be: $F_y = F\,\frac{b}{r} = \frac{k q_e q_g b}{R^3}$. The lateral change in momentum py is therefore $p_y = F_y\,\frac{L}{v} = \frac{k q_e q_g b L}{R^3 v}$, where L is the length of the chord travelled inside the sphere. The resulting angular deflection, $\theta$, is given by $\tan\theta = \frac{p_y}{p_x}$, where px is the average horizontal momentum, taken to be equal to the incoming momentum $m v$. Since we already know the deflection is very small, we can treat $\tan\theta$ as being equal to $\theta$. To find the average deflection angle $\bar{\theta}_1$, the angle for each value of b and the corresponding L are added across the face of the sphere, then divided by the cross-section area: $\bar{\theta}_1 = \frac{1}{\pi R^2}\int_0^R \frac{k q_e q_g b L}{m v^2 R^3}\,2\pi b\,\mathrm{d}b$, with $L = 2\sqrt{R^2 - b^2}$ per the Pythagorean theorem. Evaluating the integral gives $\bar{\theta}_1 = \frac{\pi}{4}\,\frac{k q_e q_g}{m v^2 R}$. This matches Thomson's formula in his 1910 paper. Deflection by the electrons Thomson modelled the collisions between a beta particle and the electrons of an atom by calculating the deflection of one collision then multiplying by a factor for the number of collisions as the particle crosses the atom. For the electrons within an arbitrary distance s of the beta particle's path, their mean distance will be $2s/3$. 
Therefore, the average deflection per electron will be $\theta_2 = \frac{2 k q_e^2}{m v^2 (2s/3)} = \frac{3 k q_e^2}{m v^2 s}$, where qe is the elementary charge, k is the Coulomb constant, and m and v are the mass and velocity of the beta particle. The factor for the number of collisions was known to be the square root of the number of possible electrons along the path. The number of electrons depends upon the density of electrons along the particle path times the path length L. The net deflection caused by all the electrons within this arbitrary cylinder of effect around the beta particle's path is $\theta_e = \theta_2\sqrt{N_0 \pi s^2 L} = \frac{3 k q_e^2}{m v^2}\sqrt{\pi N_0 L}$, where N0 is the number of electrons per unit volume and $\pi s^2 L$ is the volume of this cylinder. Since Thomson calculated the deflection would be very small, he treats L as a straight line. Therefore $L = 2\sqrt{R^2 - b^2}$, where b is the distance of this chord from the centre. The mean of $\sqrt{L}$ is given by the integral $\overline{\sqrt{L}} = \frac{1}{\pi R^2}\int_0^R \sqrt{2\sqrt{R^2 - b^2}}\;2\pi b\,\mathrm{d}b = \frac{4}{5}\sqrt{2R}$. We can now replace $\sqrt{L}$ in the equation for $\theta_e$ to obtain the mean deflection $\bar{\theta}_2$: $\bar{\theta}_2 = \frac{3 k q_e^2}{m v^2}\sqrt{\pi N_0}\cdot\frac{4}{5}\sqrt{2R} = \frac{12}{5}\,\frac{k q_e^2}{m v^2 R}\sqrt{\frac{3N}{2}}$, where N is the number of electrons in the atom, equal to $\frac{4}{3}\pi R^3 N_0$. Deflection by the positive charge in discrete units In his 1910 paper, Thomson proposed an alternative model in which the positive charge exists in discrete units separated by empty space, with those units being evenly distributed throughout the atom's volume. In this concept, the average scattering angle of the beta particle depends on σ, the ratio of the volume occupied by the positive charge to the volume of the whole atom; Thomson did not explain how he arrived at his equation for this case. Net deflection To find the combined effect of the positive charge and the electrons on the beta particle's path, Thomson combined their average deflections into a single expression. Demise of the plum pudding model Thomson probed the structure of atoms through beta particle scattering, whereas his former student Ernest Rutherford was interested in alpha particle scattering. Beta particles are electrons emitted by radioactive decay, whereas alpha particles are essentially helium atoms, also emitted in the process of decay. Alpha particles have considerably more momentum than beta particles and Rutherford found that matter scatters alpha particles in ways that Thomson's plum pudding model could not predict. Between 1908 and 1913, Ernest Rutherford, Hans Geiger, and Ernest Marsden collaborated on a series of experiments in which they bombarded thin metal foils with a beam of alpha particles and measured the intensity versus scattering angle of the particles. They found that the metal foil could scatter alpha particles by more than 90°. This should not have been possible according to the Thomson model: the scattering into large angles should have been negligible. The odds of a beta particle being scattered by more than 90° under such circumstances are astronomically small, and since alpha particles typically have much more momentum than beta particles, their deflection should be smaller still. The Thomson models simply could not produce electrostatic forces of sufficient strength to cause such large deflections. The charges in the Thomson model were too diffuse. This led Rutherford to discard the Thomson model in favour of a new model where the positive charge of the atom is concentrated in a tiny nucleus. Rutherford went on to make more compelling discoveries. In Thomson's model, the positive charge sphere was just an abstract component, but Rutherford found something concrete to attribute the positive charge to: particles he dubbed "protons". Whereas Thomson believed that the electron count was roughly correlated to the atomic weight, Rutherford showed that (in a neutral atom) it is exactly equal to the atomic number. 
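The straight-line reconstruction above lends itself to a quick numerical check. The following sketch is not from Thomson's paper or from this article; it assumes purely illustrative inputs (a roughly 5 MeV alpha particle and a gold-like central charge, Z = 79) and compares a numerical average of the small-angle deflection over the sphere's cross-section with the closed form (pi/4)·k·q_e·q_g/(m·v^2·R), evaluating both for an atom-sized and a nucleus-sized charge radius.

```python
import numpy as np

# Assumed illustrative inputs (not from the source): a ~5 MeV alpha particle
# scattering off a gold-like positive sphere (Z = 79).
k = 8.9875e9            # Coulomb constant, N m^2 C^-2
e = 1.602e-19           # elementary charge, C
q_beta = 2 * e          # charge of the projectile (alpha particle)
q_sphere = 79 * e       # total charge of the positive sphere
m = 6.64e-27            # alpha particle mass, kg
energy_j = 5e6 * e      # 5 MeV kinetic energy, in joules
v = np.sqrt(2 * energy_j / m)

def mean_deflection_numeric(R, n=200_001):
    """Average the straight-line deflection theta(b) over the sphere's cross-section."""
    b = np.linspace(0.0, R, n)
    L = 2.0 * np.sqrt(R**2 - b**2)                       # chord length inside the sphere
    theta = k * q_beta * q_sphere * b * L / (m * v**2 * R**3)
    integrand = theta * 2.0 * np.pi * b                  # weight each b by its annulus
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(b))
    return integral / (np.pi * R**2)

def mean_deflection_closed_form(R):
    """Closed form reconstructed above: (pi/4) k q_e q_g / (m v^2 R)."""
    return (np.pi / 4.0) * k * q_beta * q_sphere / (m * v**2 * R)

# An atom-sized sphere gives a tiny average deflection; only a nucleus-sized
# concentration of charge pushes the (small-angle) estimate toward large angles.
for R in (1e-10, 1e-14):
    print(f"R = {R:.0e} m: numeric {mean_deflection_numeric(R):.3e} rad, "
          f"formula {mean_deflection_closed_form(R):.3e} rad")
```

For the atom-sized radius the two agree at roughly 2 × 10^-4 radian, far too small to account for deflections beyond 90°; the nucleus-sized figure is only indicative, since the small-angle approximation no longer holds there, but it shows the scale of effect that a concentrated charge makes possible.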
Thomson hypothesised that the arrangement of the electrons in the atom somehow determined the spectral lines of a chemical element. He was on the right track, but it had nothing to do with how electrons circulated in a sphere of positive charge. Scientists eventually discovered that it had to do with how electrons absorb and release energy in discrete quantities, moving through energy levels which correspond to emission and absorption spectra. Thomson had not incorporated quantum theory, then a very new field of physics, into his atomic model. Niels Bohr and Erwin Schrödinger later incorporated quantum mechanics into the atomic model. Rutherford's nuclear model Rutherford's 1911 paper on alpha particle scattering showed that Thomson's scattering model could not explain the large angle scattering and it showed that multiple scattering was not necessary to explain the data. However, in the years immediately following its publication few scientists took note. The scattering model predictions were not considered definitive evidence against Thomson's plum pudding model. Thomson and Rutherford had pioneered scattering as a technique to probe atoms, but its reliability and value were unproven. Before Rutherford's paper the alpha particle was considered an atom, not a compact mass. It was not clear why it should be a good probe. Moreover, Rutherford's paper did not discuss the atomic electrons vital to practical problems like chemistry or atomic spectroscopy. Rutherford's nuclear model would only become widely accepted after the work of Niels Bohr. Mathematical Thomson problem The Thomson problem in mathematics seeks the optimal distribution of equal point charges on the surface of a sphere. Unlike the original Thomson atomic model, the sphere in this purely mathematical model does not have a charge, and this causes all the point charges to move to the surface of the sphere by their mutual repulsion. There is still no general solution to Thomson's original problem of how electrons arrange themselves within a sphere of positive charge. Origin of the nickname The first known writer to compare Thomson's model to a plum pudding was an anonymous reporter in an article for the British pharmaceutical magazine The Chemist and Druggist in August 1906. The analogy was never used by Thomson or his colleagues. It seems to have been a conceit of popular science writers to make the model easier to understand for the layman.
Physical sciences
Atomic physics
Physics
2844
https://en.wikipedia.org/wiki/History%20of%20atomic%20theory
History of atomic theory
Atomic theory is the scientific theory that matter is composed of particles called atoms. The definition of the word "atom" has changed over the years in response to scientific discoveries. Initially, it referred to a hypothetical concept of there being some fundamental particle of matter, too small to be seen by the naked eye, that could not be divided. Then the definition was refined to being the basic particles of the chemical elements, when chemists observed that elements seemed to combine with each other in ratios of small whole numbers. Then physicists discovered that these particles had an internal structure of their own and therefore perhaps did not deserve to be called "atoms", but renaming atoms would have been impractical by that point. Atomic theory is one of the most important scientific developments in history, crucial to all the physical sciences. At the start of The Feynman Lectures on Physics, physicist and Nobel laureate Richard Feynman offers the atomic hypothesis as the single most prolific scientific concept. Philosophical atomism The basic idea that matter is made up of tiny indivisible particles is an old idea that appeared in many ancient cultures. The word atom is derived from the ancient Greek word atomos, which means "uncuttable". This ancient idea was based in philosophical reasoning rather than scientific reasoning. Modern atomic theory is not based on these old concepts. In the early 19th century, the scientist John Dalton noticed that chemical substances seemed to combine with each other by discrete and consistent units of weight, and he decided to use the word atom to refer to these units. Groundwork Working in the late 17th century, Robert Boyle developed the concept of a chemical element as substance different from a compound. Near the end of the 18th century, a number of important developments in chemistry emerged without referring to the notion of an atomic theory. The first was Antoine Lavoisier who showed that compounds consist of elements in constant proportion, redefining an element as a substance which scientists could not decompose into simpler substances by experimentation. This brought an end to the ancient idea of the elements of matter being fire, earth, air, and water, which had no experimental support. Lavoisier showed that water can be decomposed into hydrogen and oxygen, which in turn he could not decompose into anything simpler, thereby proving these are elements. Lavoisier also defined the law of conservation of mass, which states that in a chemical reaction, matter does not appear nor disappear into thin air; the total mass remains the same even if the substances involved were transformed. Finally, there was the law of definite proportions, established by the French chemist Joseph Proust in 1797, which states that if a compound is broken down into its constituent chemical elements, then the masses of those constituents will always have the same proportions by weight, regardless of the quantity or source of the original compound. This definition distinguished compounds from mixtures. Dalton's law of multiple proportions John Dalton studied data gathered by himself and by other scientists. He noticed a pattern that later came to be known as the law of multiple proportions: in compounds which contain two particular elements, the amount of Element A per measure of Element B will differ across these compounds by ratios of small whole numbers. This suggested that each element combines with other elements in multiples of a basic quantity. 
In 1804, Dalton explained his atomic theory to his friend and fellow chemist Thomas Thomson, who published an explanation of Dalton's theory in his book A System of Chemistry in 1807. According to Thomson, Dalton's idea first occurred to him when experimenting with "olefiant gas" (ethylene) and "carburetted hydrogen gas" (methane). Dalton found that "carburetted hydrogen gas" contains twice as much hydrogen per measure of carbon as "olefiant gas", and concluded that a molecule of "olefiant gas" is one carbon atom and one hydrogen atom, and a molecule of "carburetted hydrogen gas" is one carbon atom and two hydrogen atoms. In reality, an ethylene molecule has two carbon atoms and four hydrogen atoms (C2H4), and a methane molecule has one carbon atom and four hydrogen atoms (CH4). In this particular case, Dalton was mistaken about the formulas of these compounds, and it wasn't his only mistake. But in other cases, he got their formulas right, as in the following examples: Example 1 — tin oxides: Dalton identified two types of tin oxide. One is a grey powder that Dalton referred to as "the protoxide of tin", which is 88.1% tin and 11.9% oxygen. The other is a white powder which Dalton referred to as "the deutoxide of tin", which is 78.7% tin and 21.3% oxygen. Adjusting these figures, in the grey powder there is about 13.5 g of oxygen for every 100 g of tin, and in the white powder there is about 27 g of oxygen for every 100 g of tin. 13.5 and 27 form a ratio of 1:2. These compounds are known today as tin(II) oxide (SnO) and tin(IV) oxide (SnO2). In Dalton's terminology, a "protoxide" is a molecule containing a single oxygen atom, and a "deutoxide" molecule has two. The modern equivalents of his terms would be monoxide and dioxide. Example 2 — iron oxides: Dalton identified two oxides of iron. There is one type of iron oxide that is a black powder which Dalton referred to as "the protoxide of iron", which is 78.1% iron and 21.9% oxygen. The other iron oxide is a red powder, which Dalton referred to as "the intermediate or red oxide of iron" which is 70.4% iron and 29.6% oxygen. Adjusting these figures, in the black powder there is about 28 g of oxygen for every 100 g of iron, and in the red powder there is about 42 g of oxygen for every 100 g of iron. 28 and 42 form a ratio of 2:3. These compounds are iron(II) oxide and iron(III) oxide and their formulas are Fe2O2 and Fe2O3 respectively (iron(II) oxide's formula is normally written as FeO, but here it is written as Fe2O2 to contrast it with the other oxide). Dalton described the "intermediate oxide" as being "2 atoms protoxide and 1 of oxygen", which adds up to two atoms of iron and three of oxygen. That averages to one and a half atoms of oxygen for every iron atom, putting it midway between a "protoxide" and a "deutoxide". Example 3 — nitrogen oxides: Dalton was aware of three oxides of nitrogen: "nitrous oxide", "nitrous gas", and "nitric acid". These compounds are known today as nitrous oxide, nitric oxide, and nitrogen dioxide respectively. "Nitrous oxide" is 63.3% nitrogen and 36.7% oxygen, which means it has 80 g of oxygen for every 140 g of nitrogen. "Nitrous gas" is 46.7% nitrogen and 53.3% oxygen, which means there is 160 g of oxygen for every 140 g of nitrogen. "Nitric acid" is 30.4% nitrogen and 69.6% oxygen, which means it has 320 g of oxygen for every 140 g of nitrogen. 80 g, 160 g, and 320 g form a ratio of 1:2:4. The formulas for these compounds are N2O, NO, and NO2. 
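The multiple-proportion arithmetic in these examples can be checked mechanically. The short Python sketch below is not part of the source text; it simply converts each oxide's mass percentages into grams of oxygen per fixed mass of the other element, reproducing the approximate 1:2 ratios for the tin and iron oxides and the 1:2:4 ratio for the nitrogen oxides.

```python
# A small check (not from the source) of the multiple-proportion arithmetic above:
# convert each oxide's mass percentages into grams of oxygen per fixed mass of
# the other element and compare the resulting amounts.

def oxygen_per(base_pct, oxygen_pct, base_grams=100.0):
    """Grams of oxygen combined with `base_grams` grams of the other element."""
    return base_grams * oxygen_pct / base_pct

oxides = {
    "tin protoxide (SnO)":    (88.1, 11.9),
    "tin deutoxide (SnO2)":   (78.7, 21.3),
    "iron protoxide (FeO)":   (78.1, 21.9),
    "iron red oxide (Fe2O3)": (70.4, 29.6),
}
for name, (metal_pct, o_pct) in oxides.items():
    print(f"{name}: {oxygen_per(metal_pct, o_pct):.1f} g oxygen per 100 g metal")

# Nitrogen oxides, per 140 g of nitrogen as in the text: close to 80, 160 and
# 320 g of oxygen, i.e. a 1:2:4 ratio of small whole numbers.
nitrogen_oxides = {"N2O": (63.3, 36.7), "NO": (46.7, 53.3), "NO2": (30.4, 69.6)}
for name, (n_pct, o_pct) in nitrogen_oxides.items():
    print(f"{name}: {oxygen_per(n_pct, o_pct, base_grams=140.0):.0f} g oxygen per 140 g nitrogen")
```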
Dalton defined an atom as being the "ultimate particle" of a chemical substance, and he used the term "compound atom" to refer to "ultimate particles" which contain two or more elements. This is inconsistent with the modern definition, wherein an atom is the basic particle of a chemical element and a molecule is an agglomeration of atoms. The term "compound atom" was confusing to some of Dalton's contemporaries as the word "atom" implies indivisibility, but he responded that if a carbon dioxide "atom" is divided, it ceases to be carbon dioxide. The carbon dioxide "atom" is indivisible in the sense that it cannot be divided into smaller carbon dioxide particles. Dalton made the following assumptions on how "elementary atoms" combined to form "compound atoms" (what we today refer to as molecules). When two elements can only form one compound, he assumed it was one atom of each, which he called a "binary compound". If two elements can form two compounds, the first compound is a binary compound and the second is a "ternary compound" consisting of one atom of the first element and two of the second. If two elements can form three compounds between them, then the third compound is a "quaternary" compound containing one atom of the first element and three of the second. Dalton thought that water was a "binary compound", i.e. one hydrogen atom and one oxygen atom. Dalton did not know that in their natural gaseous state, the ultimate particles of oxygen, nitrogen, and hydrogen exist in pairs (O2, N2, and H2). Nor was he aware of valencies. These properties of atoms were discovered later in the 19th century. Because atoms were too small to be directly weighed using the methods of the 19th century, Dalton instead expressed the weights of the myriad atoms as multiples of the hydrogen atom's weight, which Dalton knew was the lightest element. By his measurements, 7 grams of oxygen will combine with 1 gram of hydrogen to make 8 grams of water with nothing left over, and assuming a water molecule to be one oxygen atom and one hydrogen atom, he concluded that oxygen's atomic weight is 7. In reality it is 16. Aside from the crudity of early 19th century measurement tools, the main reason for this error was that Dalton didn't know that the water molecule in fact has two hydrogen atoms, not one. Had he known, he would have doubled his estimate to a more accurate 14. This error was corrected in 1811 by Amedeo Avogadro. Avogadro proposed that equal volumes of any two gases, at equal temperature and pressure, contain equal numbers of molecules (in other words, the mass of a gas's particles does not affect the volume that it occupies). Avogadro's hypothesis, now usually called Avogadro's law, provided a method for deducing the relative weights of the molecules of gaseous elements, for if the hypothesis is correct relative gas densities directly indicate the relative weights of the particles that compose the gases. This way of thinking led directly to a second hypothesis: the particles of certain elemental gases were pairs of atoms, and when reacting chemically these molecules often split in two. For instance, the fact that two liters of hydrogen will react with just one liter of oxygen to produce two liters of water vapor (at constant pressure and temperature) suggested that a single oxygen molecule splits in two in order to form two molecules of water. The formula of water is H2O, not HO. Avogadro measured oxygen's atomic weight to be 15.074. 
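Avogadro's volume-ratio argument can likewise be followed in a few lines. The sketch below is not from the source; it assumes an approximate relative density of oxygen gas to hydrogen gas of about 16 (at equal temperature and pressure) together with the observation quoted above that two volumes of hydrogen and one volume of oxygen yield two volumes of water vapour, and shows how those observations lead to an atomic weight of about 16 for oxygen.

```python
# Sketch (not from the source) of the Avogadro-style reasoning described above.
# Assumed input: oxygen gas is roughly 16 times denser than hydrogen gas at the
# same temperature and pressure.

density_o2_over_h2 = 15.9          # approximate relative density of the two gases

# Avogadro's law: equal volumes contain equal numbers of molecules, so the density
# ratio is also the molecular-weight ratio.  Taking H2 as 2 hydrogen-atom units:
o2_molecular_weight = density_o2_over_h2 * 2.0           # ~32 units

# Two volumes of H2 plus one of O2 give two volumes of water vapour, so each O2
# molecule is shared between two water molecules: water is H2O, and an oxygen
# atom weighs half an O2 molecule.
oxygen_atomic_weight = o2_molecular_weight / 2.0
print(round(oxygen_atomic_weight, 1))                     # ~16, versus Dalton's estimate of 7
```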
Opposition to atomic theory Dalton's atomic theory attracted widespread interest but not everyone accepted it at first. The law of multiple proportions was shown not to be a universal law when it came to organic substances, whose molecules can be quite large. For instance, in oleic acid there is 34 g of hydrogen for every 216 g of carbon, and in methane there is 72 g of hydrogen for every 216 g of carbon. 34 and 72 form a ratio of 17:36, which is not a ratio of small whole numbers. We know now that carbon-based substances can have very large molecules, larger than any that the other elements can form. Oleic acid's formula is C18H34O2 and methane's is CH4. The law of multiple proportions by itself was not complete proof, and atomic theory was not universally accepted until the end of the 19th century. One problem was the lack of uniform nomenclature. The word "atom" implied indivisibility, but Dalton defined an atom as being the ultimate particle of any chemical substance, not just the elements or even matter per se. This meant that "compound atoms" such as carbon dioxide could be divided, as opposed to "elementary atoms". Dalton disliked the word "molecule", regarding it as "diminutive". Amedeo Avogadro did the opposite: he exclusively used the word "molecule" in his writings, eschewing the word "atom", instead using the term "elementary molecule". Jöns Jacob Berzelius used the term "organic atoms" to refer to particles containing three or more elements, because he thought such particles existed only in organic compounds. Jean-Baptiste Dumas used the terms "physical atoms" and "chemical atoms"; a "physical atom" was a particle that cannot be divided by physical means such as temperature and pressure, and a "chemical atom" was a particle that could not be divided by chemical reactions. The modern definitions of atom and molecule—an atom being the basic particle of an element, and a molecule being an agglomeration of atoms—were established in the latter half of the 19th century. A key event was the Karlsruhe Congress in Germany in 1860. As the first international congress of chemists, its goal was to establish some standards in the community. A major proponent of the modern distinction between atoms and molecules was Stanislao Cannizzaro. Cannizzaro criticized past chemists such as Berzelius for not accepting that the particles of certain gaseous elements are actually pairs of atoms, which led to mistakes in their formulation of certain compounds. Berzelius believed that hydrogen gas and chlorine gas particles are solitary atoms. But he observed that when one liter of hydrogen reacts with one liter of chlorine, they form two liters of hydrogen chloride instead of one. Berzelius decided that Avogadro's law does not apply to compounds. Cannizzaro preached that if scientists just accepted the existence of single-element molecules, such discrepancies in their findings would be easily resolved. But Berzelius did not even have a word for that. Berzelius used the term "elementary atom" for a gas particle which contained just one element and "compound atom" for particles which contained two or more elements, but there was nothing to distinguish H2 from H since Berzelius did not believe in H2. So Cannizzaro called for a redefinition so that scientists could understand that a hydrogen molecule can split into two hydrogen atoms in the course of a chemical reaction. A second objection to atomic theory was philosophical. Scientists in the 19th century had no way of directly observing atoms. 
They inferred the existence of atoms through indirect observations, such as Dalton's law of multiple proportions. Some scientists adopted positions aligned with the philosophy of positivism, arguing that scientists should not attempt to deduce the deeper reality of the universe, but only systematize what patterns they could directly observe. This generation of anti-atomists can be grouped into two camps. The "equivalentists", like Marcellin Berthelot, believed the theory of equivalent weights was adequate for scientific purposes. This generalization of Proust's law of definite proportions summarized observations. For example, 1 gram of hydrogen will combine with 8 grams of oxygen to form 9 grams of water, therefore the "equivalent weight" of oxygen is 8 grams. The "energeticists", like Ernst Mach and Wilhelm Ostwald, were philosophically opposed to hypotheses about reality altogether. In their view, physical models should be grounded only in energy and thermodynamics. These positions were eventually quashed by two important advancements that happened later in the 19th century: the development of the periodic table and the discovery that molecules have an internal architecture that determines their properties. Isomerism Scientists discovered some substances have the exact same chemical content but different properties. For instance, in 1827, Friedrich Wöhler discovered that silver fulminate and silver cyanate are both 107 parts silver, 12 parts carbon, 14 parts nitrogen, and 16 parts oxygen (we now know their formulas as both AgCNO). In 1830 Jöns Jacob Berzelius introduced the term isomerism to describe the phenomenon. In 1860, Louis Pasteur hypothesized that the molecules of isomers might have the same set of atoms but in different arrangements. In 1874, Jacobus Henricus van 't Hoff proposed that the carbon atom bonds to other atoms in a tetrahedral arrangement. Working from this, he explained the structures of organic molecules in such a way that he could predict how many isomers a compound could have. Consider, for example, pentane (C5H12). In van 't Hoff's way of modelling molecules, there are three possible configurations for pentane, and scientists did go on to discover three and only three isomers of pentane. Isomerism was not something that could be fully explained by alternative theories to atomic theory, such as radical theory and the theory of types. Mendeleev's periodic table Dmitrii Mendeleev noticed that when he arranged the elements in a row according to their atomic weights, there was a certain periodicity to them. For instance, the second element, lithium, had similar properties to the ninth element, sodium, and the sixteenth element, potassium — a period of seven. Likewise, beryllium, magnesium, and calcium were similar and all were seven places apart from each other on Mendeleev's table. Using these patterns, Mendeleev predicted the existence and properties of new elements, which were later discovered in nature: scandium, gallium, and germanium. Moreover, the periodic table could predict how many atoms of other elements an atom could bond with — e.g., germanium and carbon are in the same group on the table and their atoms both combine with two oxygen atoms each (GeO2 and CO2). Mendeleev found these patterns validated atomic theory because they showed that the elements could be categorized by their atomic weight. 
Inserting a new element into the middle of a period would break the parallel between that period and the next, and would also violate Dalton's law of multiple proportions. The elements on the periodic table were originally arranged in order of increasing atomic weight. However, in a number of places chemists chose to swap the positions of certain adjacent elements so that they appeared in a group with other elements with similar properties. For instance, tellurium is placed before iodine even though tellurium is heavier (127.6 vs 126.9) so that iodine can be in the same column as the other halogens. The modern periodic table is based on atomic number, which is equivalent to the nuclear charge; this change had to wait for the discovery of the nucleus. In addition, the noble gases were entirely missing from the table because they had not yet been discovered when Mendeleev devised it. Statistical mechanics In 1738, Swiss physicist and mathematician Daniel Bernoulli postulated that the pressure of gases and heat were both caused by the underlying motion of particles. Using his model he could predict the ideal gas law at constant temperature and suggested that the temperature was proportional to the velocity of the particles. These results were largely ignored for a century. James Clerk Maxwell, a vocal proponent of atomism, revived the kinetic theory in 1860 and 1867. His key insight was that the velocity of particles in a gas would vary around an average value, introducing the concept of a distribution function. Ludwig Boltzmann and Rudolf Clausius expanded his work on gases and the laws of thermodynamics, especially the second law relating to entropy. In the 1870s, Josiah Willard Gibbs extended the laws of entropy and thermodynamics and coined the term "statistical mechanics." Boltzmann defended the atomistic hypothesis against major detractors of the time like Ernst Mach and energeticists like Wilhelm Ostwald, who considered that energy was the elementary quantity of reality. At the beginning of the 20th century, Albert Einstein independently reinvented Gibbs' laws, because they had only been printed in an obscure American journal. Einstein later commented that had he known of Gibbs' work, he would "not have published those papers at all, but confined myself to the treatment of some few points [that were distinct]." All of statistical mechanics and the laws of heat, gas, and entropy took the existence of atoms as a necessary postulate. Brownian motion In 1827, the British botanist Robert Brown observed that dust particles inside pollen grains floating in water constantly jiggled about for no apparent reason. In 1905, Einstein theorized that this Brownian motion was caused by the water molecules continuously knocking the grains about, and developed a mathematical model to describe it. This model was validated experimentally in 1908 by French physicist Jean Perrin, who used Einstein's equations to measure the size of atoms. Discovery of the electron Atoms were thought to be the smallest possible division of matter until 1897, when J. J. Thomson discovered the electron through his work on cathode rays. A Crookes tube is a sealed glass container in which two electrodes are separated by a vacuum. When a voltage is applied across the electrodes, cathode rays are generated, creating a glowing patch where they strike the glass at the opposite end of the tube. 
Through experimentation, Thomson discovered that the rays could be deflected by electric fields and magnetic fields, which meant that these rays were not a form of light but were composed of very light charged particles, and their charge was negative. Thomson called these particles "corpuscles". He measured their mass-to-charge ratio to be several orders of magnitude smaller than that of the hydrogen ion, the ion of the lightest atom. This ratio was the same regardless of what the electrodes were made of and what the trace gas in the tube was. In contrast to those corpuscles, positive ions created by electrolysis or X-ray radiation had mass-to-charge ratios that varied depending on the material of the electrodes and the type of gas in the reaction chamber, indicating they were different kinds of particles. In 1898, Thomson measured the charge on ions to be roughly 6 × 10−10 electrostatic units (2 × 10−19 coulombs). In 1899, he showed that negative electricity created by ultraviolet light landing on a metal (known now as the photoelectric effect) has the same mass-to-charge ratio as cathode rays; then he applied his previous method for determining the charge on ions to the negative electric particles created by ultraviolet light. By this combination he showed that the electron's mass was 0.0014 times that of the hydrogen ion. These "corpuscles" were so light yet carried so much charge that Thomson concluded they must be the basic particles of electricity, and for that reason other scientists decided that these "corpuscles" should instead be called electrons, following an 1894 suggestion by George Johnstone Stoney for naming the basic unit of electrical charge. In 1904, Thomson published a paper describing a new model of the atom. Electrons reside within atoms, and they transplant themselves from one atom to the next in a chain in the action of an electrical current. When electrons do not flow, their negative charge logically must be balanced out by some source of positive charge within the atom so as to render the atom electrically neutral. Having no clue as to the source of this positive charge, Thomson tentatively proposed that the positive charge was everywhere in the atom, the atom being shaped like a sphere—this was the mathematically simplest model to fit the available evidence (or lack of it). The balance of electrostatic forces would distribute the electrons throughout this sphere in a more or less even manner. Thomson further explained that ions are atoms that have a surplus or shortage of electrons. Thomson's model is popularly known as the plum pudding model, based on the idea that the electrons are distributed throughout the sphere of positive charge with the same density as raisins in a plum pudding. Neither Thomson nor his colleagues ever used this analogy. It seems to have been a conceit of popular science writers. The analogy suggests that the positive sphere is like a solid, but Thomson likened it to a liquid, as he proposed that the electrons moved around in it in patterns governed by the electrostatic forces. More to the point, the positive electrification in Thomson's model was an abstraction; he did not propose anything concrete like a particle. Thomson's model was incomplete: it could not predict any of the known properties of the atom such as emission spectra or valencies. In 1909, Robert A. Millikan and Harvey Fletcher performed the oil drop experiment in which they measured the charge of an electron to be about -1.6 × 10−19 coulombs, a value now defined as -1 e. 
Since the hydrogen ion and the electron were known to be indivisible and a hydrogen atom is neutral in charge, it followed that the positive charge in hydrogen was equal to this value, i.e. 1 e. Discovery of the nucleus Thomson's plum pudding model was challenged in 1911 by one of his former students, Ernest Rutherford, who presented a new model to explain new experimental data. The new model proposed a concentrated center of charge and mass that was later dubbed the atomic nucleus. Ernest Rutherford and his colleagues Hans Geiger and Ernest Marsden came to have doubts about the Thomson model after they encountered difficulties when they tried to build an instrument to measure the charge-to-mass ratio of alpha particles (these are positively-charged particles emitted by certain radioactive substances such as radium). The alpha particles were being scattered by the air in the detection chamber, which made the measurements unreliable. Thomson had encountered a similar problem in his work on cathode rays, which he solved by creating a near-perfect vacuum in his instruments. Rutherford didn't think he'd run into this same problem because alpha particles usually have much more momentum than electrons. According to Thomson's model of the atom, the positive charge in the atom is not concentrated enough to produce an electric field strong enough to deflect an alpha particle. Yet there was scattering, so Rutherford and his colleagues decided to investigate this scattering carefully. Between 1908 and 1913, Rutherford and his colleagues performed a series of experiments in which they bombarded thin foils of metal with a beam of alpha particles. They spotted alpha particles being deflected by angles greater than 90°. According to Thomson's model, all of the alpha particles should have passed through with negligible deflection. Rutherford deduced that the positive charge of the atom is not distributed throughout the atom's volume as Thomson believed, but is concentrated in a tiny nucleus at the center. This nucleus also carries most of the atom's mass. Only such an intense concentration of charge, anchored by its high mass, could produce an electric field strong enough to deflect the alpha particles as observed. Rutherford's model, being supported primarily by scattering data unfamiliar to many scientists, did not catch on until Niels Bohr joined Rutherford's lab and developed a new model for the electrons. Rutherford's model predicted that the scattering of alpha particles would be proportional to the square of the atomic charge. Geiger and Marsden based their analysis on setting the charge to half of the atomic weight of the foil's material (gold, aluminium, etc.). Amateur physicist Antonius van den Broek noted that there was a more precise relation between the charge and the element's numeric sequence in the order of atomic weights. The sequence number came to be called the atomic number, and it replaced atomic weight in organizing the periodic table. Bohr model Rutherford deduced the existence of the atomic nucleus through his experiments but he had nothing to say about how the electrons were arranged around it. In 1912, Niels Bohr joined Rutherford's lab and began his work on a quantum model of the atom. Max Planck in 1900 and Albert Einstein in 1905 had postulated that light energy is emitted or absorbed in discrete amounts known as quanta (singular, quantum). 
This led to a series of atomic models with some quantum aspects, such as that of Arthur Erich Haas in 1910 and the 1912 John William Nicholson atomic model with angular momentum quantized in units of h/2π. The dynamical structure of these models was still classical, but in 1913, Bohr abandoned the classical approach. He started his Bohr model of the atom with a quantum hypothesis: an electron could only orbit the nucleus in particular circular orbits with fixed angular momentum and energy, its distance from the nucleus (i.e., the radius of its orbit) being determined by its energy. Under this model an electron could not lose energy in a continuous manner; instead, it could only make instantaneous "quantum leaps" between the fixed energy levels. When this occurred, light was emitted or absorbed at a frequency proportional to the change in energy (hence the absorption and emission of light in discrete spectra). In a trilogy of papers Bohr described and applied his model to derive the Balmer series of lines in the atomic spectrum of hydrogen and the related spectrum of He+. He also used the model to describe the structure of the periodic table and aspects of chemical bonding. Together these results led to Bohr's model being widely accepted by the end of 1915. Bohr's model was not perfect. It could only predict the spectral lines of hydrogen, not those of multielectron atoms. Worse still, it could not even account for all features of the hydrogen spectrum: as spectrographic technology improved, it was discovered that applying a magnetic field caused spectral lines to multiply in a way that Bohr's model couldn't explain. In 1916, Arnold Sommerfeld added elliptical orbits to the Bohr model to explain the extra emission lines, but this made the model very difficult to use, and it still couldn't explain more complex atoms. Discovery of isotopes While experimenting with the products of radioactive decay, in 1913 radiochemist Frederick Soddy discovered that there appeared to be more than one variety of atom for some elements. The term isotope was coined by Margaret Todd as a suitable name for these varieties. That same year, J. J. Thomson conducted an experiment in which he channeled a stream of neon ions through magnetic and electric fields, striking a photographic plate at the other end. He observed two glowing patches on the plate, which suggested two different deflection trajectories. Thomson concluded this was because some of the neon ions had a different mass. The nature of this differing mass would later be explained by the discovery of neutrons in 1932: all atoms of the same element contain the same number of protons, while different isotopes have different numbers of neutrons. Discovery of the proton Back in 1815, William Prout observed that the atomic weights of the known elements were multiples of hydrogen's atomic weight, so he hypothesized that all atoms are agglomerations of hydrogen, a particle which he dubbed "the protyle". Prout's hypothesis was put into doubt when some elements were found to deviate from this pattern—e.g. chlorine atoms on average weigh 35.45 daltons—but when isotopes were discovered in 1913, Prout's observation gained renewed attention. In 1898, J. J. Thomson found that the positive charge of a hydrogen ion was equal to the negative charge of a single electron. 
In an April 1911 paper concerning his studies on alpha particle scattering, Ernest Rutherford estimated that the charge of an atomic nucleus, expressed as a multiplier of hydrogen's nuclear charge (qe), is roughly half the atom's atomic weight. In June 1911, Van den Broek noted that on the periodic table, each successive chemical element increased in atomic weight on average by 2, which in turn suggested that each successive element's nuclear charge increased by 1 qe. In 1913, van den Broek further proposed that the electric charge of an atom's nucleus, expressed as a multiplier of the elementary charge, is equal to the element's sequential position on the periodic table. Rutherford defined this position as being the element's atomic number. In 1913, Henry Moseley measured the X-ray emissions of all the elements on the periodic table and found that the frequency of the X-ray emissions was a mathematical function of the element's atomic number and the charge of a hydrogen nucleus . In 1917 Rutherford bombarded nitrogen gas with alpha particles and observed hydrogen ions being emitted from the gas. Rutherford concluded that the alpha particles struck the nuclei of the nitrogen atoms, causing hydrogen ions to split off. These observations led Rutherford to conclude that the hydrogen nucleus was a singular particle with a positive charge equal to that of the electron's negative charge. The name "proton" was suggested by Rutherford at an informal meeting of fellow physicists in Cardiff in 1920. The charge number of an atomic nucleus was found to be equal to the element's ordinal position on the periodic table. The nuclear charge number thus provided a simple and clear-cut way of distinguishing the chemical elements from each other, as opposed to Lavoisier's classic definition of a chemical element being a substance that cannot be broken down into simpler substances by chemical reactions. The charge number or proton number was thereafter referred to as the atomic number of the element. In 1923, the International Committee on Chemical Elements officially declared the atomic number to be the distinguishing quality of a chemical element. During the 1920s, some writers defined the atomic number as being the number of "excess protons" in a nucleus. Before the discovery of the neutron, scientists believed that the atomic nucleus contained a number of "nuclear electrons" which cancelled out the positive charge of some of its protons. This explained why the atomic weights of most atoms were higher than their atomic numbers. Helium, for instance, was thought to have four protons and two nuclear electrons in the nucleus, leaving two excess protons and a net nuclear charge of 2+. After the neutron was discovered, scientists realized the helium nucleus in fact contained two protons and two neutrons. Discovery of the neutron Physicists in the 1920s believed that the atomic nucleus contained protons plus a number of "nuclear electrons" that reduced the overall charge. These "nuclear electrons" were distinct from the electrons that orbited the nucleus. This incorrect hypothesis would have explained why the atomic numbers of the elements were less than their atomic weights, and why radioactive elements emit electrons (beta radiation) in the process of nuclear decay. Rutherford even hypothesized that a proton and an electron could bind tightly together into a "neutral doublet". 
Rutherford wrote that the existence of such "neutral doublets" moving freely through space would provide a more plausible explanation for how the heavier elements could have formed in the genesis of the Universe, given that it is hard for a lone proton to fuse with a large atomic nucleus because of the repulsive electric field. In 1928, Walter Bothe observed that beryllium emitted a highly penetrating, electrically neutral radiation when bombarded with alpha particles. It was later discovered that this radiation could knock hydrogen atoms out of paraffin wax. Initially it was thought to be high-energy gamma radiation, since gamma radiation had a similar effect on electrons in metals, but James Chadwick found that the ionization effect was too strong for it to be due to electromagnetic radiation, so long as energy and momentum were conserved in the interaction. In 1932, Chadwick exposed various elements, such as hydrogen and nitrogen, to the mysterious "beryllium radiation", and by measuring the energies of the recoiling charged particles, he deduced that the radiation was actually composed of electrically neutral particles which could not be massless like the gamma ray, but instead were required to have a mass similar to that of a proton. Chadwick called this new particle "the neutron" and believed it to be a proton and an electron fused together, because the neutron had about the same mass as a proton and an electron's mass is negligible by comparison. Neutrons are not in fact a fusion of a proton and an electron. Modern quantum mechanical models In 1924, Louis de Broglie proposed that all particles—particularly subatomic particles such as electrons—have an associated wave. Erwin Schrödinger, fascinated by this idea, developed an equation that describes an electron as a wave function instead of a point. This approach predicted many of the spectral phenomena that Bohr's model failed to explain, but it was difficult to visualize, and faced opposition. One of its critics, Max Born, proposed instead that Schrödinger's wave function did not describe the physical extent of an electron (like a charge distribution in classical electromagnetism), but rather gave the probability that an electron would, when measured, be found at a particular point. This reconciled the ideas of wave-like and particle-like electrons: the behavior of an electron, or of any other subatomic entity, has both wave-like and particle-like aspects, and whether one aspect or the other is observed depends upon the experiment. A consequence of describing particles as waveforms rather than points is that it is mathematically impossible to calculate with precision both the position and momentum of a particle at a given point in time. This became known as the uncertainty principle, a concept first introduced by Werner Heisenberg in 1927. Schrödinger's wave model for hydrogen replaced Bohr's model, with its neat, clearly defined circular orbits. The modern model of the atom describes the positions of electrons in an atom in terms of probabilities. An electron can potentially be found at any distance from the nucleus, but, depending on its energy level and angular momentum, exists more frequently in certain regions around the nucleus than others; this pattern is referred to as its atomic orbital. The orbitals come in a variety of shapes—sphere, dumbbell, torus, etc.—with the nucleus in the middle. The shapes of atomic orbitals are found by solving the Schrödinger equation. 
Analytic solutions of the Schrödinger equation are known for very few relatively simple model Hamiltonians, including the hydrogen atom and the hydrogen molecular ion. Beginning with the helium atom—which contains just two electrons—numerical methods are used to solve the Schrödinger equation. Qualitatively, the shapes of the atomic orbitals of multi-electron atoms resemble the states of the hydrogen atom. The Pauli exclusion principle requires that these electrons be distributed among the atomic orbitals such that no more than two electrons are assigned to any one orbital; this requirement profoundly affects the atomic properties and ultimately the bonding of atoms into molecules.
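As a concrete illustration of the one analytically solvable case mentioned above, the time-independent Schrödinger equation for the hydrogen atom can be written in its standard form (again, notation supplied here rather than drawn from the article) as

\[
\left[ -\frac{\hbar^2}{2\mu}\nabla^2 \;-\; \frac{e^2}{4\pi\varepsilon_0 r} \right] \psi(\mathbf{r}) \;=\; E\,\psi(\mathbf{r}),
\]

where \(\mu\) is the reduced mass of the electron–proton system, \(e\) is the elementary charge, and \(r\) is the electron–nucleus distance. Its bound-state solutions are the hydrogen orbitals labelled by the quantum numbers \(n\), \(\ell\) and \(m\), with energies \(E_n \approx -13.6\ \text{eV}/n^2\), reproducing the level structure of the earlier Bohr model.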
Physical sciences
Atomic physics
null
2869
https://en.wikipedia.org/wiki/Anxiolytic
Anxiolytic
An anxiolytic (also antipanic or anti-anxiety agent) is a medication or other intervention that reduces anxiety. This effect is in contrast to anxiogenic agents which increase anxiety. Anxiolytic medications are used for the treatment of anxiety disorders and their related psychological and physical symptoms. Nature of anxiety Anxiety is a naturally-occurring emotion and response. When anxiety levels exceed what a person can tolerate, anxiety disorders may occur. People with anxiety disorders can exhibit fear responses, such as defensive behaviors, high levels of alertness, and negative emotions. Those with anxiety disorders may have concurrent psychological disorders, such as depression. Anxiety disorders are classified using six possible clinical assessments. Different types of anxiety disorders will share some general symptoms while having their own distinctive symptoms. This explains why people with different types of anxiety disorders will respond differently to different classes of anti-anxiety medications. Etiology The etiology of anxiety disorders remains unknown. Several contributing factors have been suggested but have yet to be proven to cause anxiety disorders. These factors include childhood anxiety, induction by central stimulant drugs, metabolic diseases, or having a depressive disorder. Medications Anti-anxiety medication is any drug that can be taken or prescribed for the treatment of anxiety disorders, which may be mediated by neurotransmitters like norepinephrine, serotonin, dopamine, and gamma-aminobutyric acid (GABA) in the central nervous system. Anti-anxiety medication can be classified into six types according to their different mechanisms: antidepressants, benzodiazepines, azapirones, antiepileptics, antipsychotics, and beta blockers. Antidepressants include selective serotonin reuptake inhibitors (SSRIs), serotonin–norepinephrine reuptake inhibitors (SNRIs), tricyclic antidepressants (TCAs), and monoamine oxidase inhibitors (MAOIs). SSRIs are used in all types of anxiety disorders while SNRIs are used for generalized anxiety disorder (GAD). Both are considered first-line anti-anxiety medications. TCAs are a second-line treatment as they cause more significant adverse effects than the first-line treatments. Benzodiazepines are effective in emergent and short-term treatment of anxiety disorders due to their fast onset but carry the risk of dependence. Buspirone is indicated for GAD; it has a much slower onset but the advantage of causing less sedation and fewer withdrawal effects. History The first monoamine oxidase inhibitor (MAOI), iproniazid, was discovered accidentally during development of the new antitubercular drug isoniazid. The drug was found to induce euphoria and improve the patient's appetite and sleep quality. The first tricyclic antidepressant, imipramine, was originally developed and studied as an antihistamine, alongside other first-generation antihistamines of the time such as promethazine. TCAs can increase the level of norepinephrine and serotonin by inhibiting their reuptake transport proteins. The majority of TCAs exert a greater effect on norepinephrine, which leads to side effects like drowsiness and memory loss. In order to act more selectively on serotonin and avoid anticholinergic and antihistaminergic side effects, selective serotonin reuptake inhibitors (SSRIs) were researched and introduced to treat anxiety disorders. The first SSRI, fluoxetine (Prozac), was discovered in 1974 and approved by the FDA in 1987.
After that, other SSRIs like sertraline (Zoloft), paroxetine (Paxil), and escitalopram (Lexapro) have entered the market. The first serotonin norepinephrine reuptake inhibitor (SNRI), venlafaxine (Effexor), entered the market in 1993. SNRIs can target serotonin and norepinephrine transporters while avoiding significant effects on other adrenergic (α1, α2, and β), histamine (H1), muscarinic, dopamine, or postsynaptic serotonin receptors. Classifications There are six groups of anti-anxiety medications available that have been proven to be clinically significant in the treatment of anxiety disorders. The groups of medications are as follows. Antidepressants Medications that are indicated for both anxiety disorders and depression. Selective serotonin reuptake inhibitors (SSRIs) and serotonin–norepinephrine reuptake inhibitors (SNRIs) are new generations of antidepressants. They have a much lower adverse effect profile than older antidepressants like monoamine oxidase inhibitors (MAOIs) and tricyclic antidepressants (TCAs). Therefore, SSRIs and SNRIs are now the first-line agents in treating long-term anxiety disorders, given their applications and significance in all six types of disorders. Benzodiazepines Benzodiazepines are used for acute anxiety and can be added alongside current use of SSRIs to stabilize a treatment. Long-term use in treatment plans is not recommended. Different kinds of benzodiazepines vary in their pharmacological profiles, including their strength of effect and time taken for metabolism. The choice of benzodiazepine will depend on the corresponding profile. Benzodiazepines are used for emergent or short-term management. They are not recommended as the first-line anti-anxiety drugs, but they can be used in combination with SSRIs/SNRIs during the initial treatment stage. Indications include panic disorder, sleep disorders, seizures, acute behavioral disturbance, muscle spasm, and premedication and sedation for procedures. Azapirones Buspirone can be useful in GAD but is not particularly effective in treating phobias, panic disorder or social anxiety disorder. It is a safer option for long-term use as it does not cause dependence like benzodiazepines. Antiepileptics Antiepileptics are rarely prescribed as an off-label treatment for anxiety disorders and post-traumatic stress disorder. There have been some suggestions that they may help with anxiety symptoms but there is generally a lack of research on their use. One antiepileptic, pregabalin, has been found to be better at treating GAD than a placebo, with effects comparable to benzodiazepines. It has also been shown to be potentially effective in treating social anxiety disorder. Gabapentin has been prescribed off-label for anxiety despite a lack of research evidence supporting such use, although some studies have indicated that it may relieve anxiety symptoms. The potential anxiolytic effect of tiagabine has been observed in some pre-clinical trials, but its effectiveness has not yet been proved. Similarly, there is a lack of research on valproate for the treatment of anxiety disorders. Antipsychotics Olanzapine and risperidone are atypical antipsychotics which are also effective in GAD and PTSD treatment. However, there is a higher chance of experiencing adverse effects than with the other anti-anxiety medications. Beta-adrenoceptor antagonists Propranolol was originally used for high blood pressure and heart disease. It can also be used to treat anxiety with symptoms like tremor or increased heart rate.
They work on the nervous system to alleviate these physical symptoms and provide relief. Propranolol is also commonly used to manage nervousness before public speaking. Mechanism of action SSRIs and SNRIs Both selective serotonin reuptake inhibitors (SSRIs) and serotonin and norepinephrine reuptake inhibitors (SNRIs) are reuptake inhibitors of a class of nerve signal transduction chemicals called neurotransmitters. Serotonin and norepinephrine are neurotransmitters that are related to nervous control in mood regulation. The level of these neurotransmitters is regulated by the nerve through reuptake to avoid accumulation of the neurotransmitter at the endings of nerve fibers. By reuptaking the neurotransmitter, the level of neuronal activity goes back down and is ready to go back up upon excitation from a new nerve signal. However, the neurotransmitter levels of patients with anxiety disorders are usually low, or the patients' nerve fibers are insensitive to the neurotransmitters. SSRIs and SNRIs block the reuptake channel and increase the level of the neurotransmitter. The nerve fibers will inhibit further production of neurotransmitters upon the increase. However, the prolonged increase will eventually desensitize the nerve to the change in level. Therefore, both SSRIs and SNRIs take 4–6 weeks to exert their full effect. Benzodiazepine Benzodiazepines bind selectively to the GABA receptor, the receptor protein found in the nervous system that is in control of the nervous response. Benzodiazepines increase the entry of chloride ions into the cells by improving the binding between GABA and GABA receptors, which in turn improves the opening of the channel for chloride ion passage. The high level of chloride ions inside the nerve cells makes the nerve more difficult to depolarize and inhibits further nerve signal transduction. The excitability of the nerves is then reduced and the nervous system slows down. Therefore, the drug can alleviate symptoms of anxiety disorder and make the person less nervous. Clinical use Selective serotonin reuptake inhibitors Selective serotonin reuptake inhibitors (SSRIs) are a class of medications used in the treatment of depression, anxiety disorders, OCD and some personality disorders. SSRIs are the first-line anti-anxiety medications. Serotonin is one of the crucial neurotransmitters in mood enhancement, and increasing serotonin levels produces an anti-anxiety effect. SSRIs increase the serotonin level in the brain by inhibiting serotonin uptake pumps on serotonergic systems, without interactions with other receptors and ion channels. SSRIs are beneficial in both acute response and long-term maintenance treatment for both depression and anxiety disorder. SSRIs can increase anxiety initially due to negative feedback through the serotonergic autoreceptors; for this reason a concurrent benzodiazepine can be used until the anxiolytic effect of the SSRI occurs. The SSRIs paroxetine and escitalopram are USFDA approved to treat generalized anxiety disorder. Therapeutic use Adverse effect The common early side effects of SSRIs include nausea and loose stool, which can be resolved by discontinuing the treatment. Headache, dizziness, and insomnia are common early side effects as well. Sexual dysfunction, anorgasmia, erectile dysfunction, and reduced libido are common adverse side effects of SSRIs. Sometimes they may persist after the cessation of treatment.
Withdrawal symptoms like dizziness, headache and flu-like symptoms (fatigue/myalgia/loose stool) may occur if an SSRI is stopped suddenly. The brain is unable to upregulate the receptors to sufficient levels quickly enough, especially after discontinuation of drugs with a short half-life like paroxetine. Both fluoxetine and its active metabolite have a long half-life, so fluoxetine causes the fewest withdrawal symptoms. Serotonin–norepinephrine reuptake inhibitors Serotonin–norepinephrine reuptake inhibitors (SNRIs) include venlafaxine and duloxetine. Venlafaxine, in extended-release form, and duloxetine are indicated for the treatment of GAD. SNRIs are as effective as SSRIs in the treatment of anxiety disorders. Tricyclic antidepressants Tricyclic antidepressants (TCAs) have anxiolytic effects; however, side effects are often more troubling or severe and overdose is dangerous. They are considered effective, but have generally been replaced by antidepressants that cause different adverse effects. Examples include imipramine, doxepin, amitriptyline, nortriptyline and desipramine. Therapeutic use Contraindication TCAs may cause drug poisoning in patients with hypotension, cardiovascular diseases and arrhythmias. Tetracyclic antidepressants Mirtazapine has demonstrated anxiolytic effects comparable to SSRIs while rarely causing or exacerbating anxiety. Mirtazapine's anxiety reduction tends to occur significantly faster than with SSRIs. Monoamine oxidase inhibitors Monoamine oxidase inhibitors (MAOIs) are first-generation antidepressants effective for anxiety treatment, but their dietary restrictions, adverse effect profile and the availability of newer medications have limited their use. MAOIs include phenelzine, isocarboxazid and tranylcypromine. Pirlindole is a reversible MAOI that lacks dietary restrictions. Barbiturates Barbiturates are powerful anxiolytics but the risk of abuse and addiction is high. Many experts consider these drugs obsolete for treating anxiety but valuable for the short-term treatment of severe insomnia, though only after benzodiazepines or non-benzodiazepines have failed. Benzodiazepines Benzodiazepines are prescribed to quell panic attacks. Benzodiazepines are also prescribed in tandem with an antidepressant to cover the latent period of efficacy associated with many antidepressants used for anxiety disorder. There is risk of benzodiazepine withdrawal and rebound syndrome if BZDs are rapidly discontinued. Tolerance and dependence may occur. The risk of abuse in this class of medication is smaller than in that of barbiturates. Cognitive and behavioral adverse effects are possible. Benzodiazepines include: alprazolam (Xanax), bromazepam, chlordiazepoxide (Librium), clonazepam (Klonopin), diazepam (Valium), lorazepam (Ativan), oxazepam, temazepam, and triazolam. Therapeutic use Adverse effect Benzodiazepines lead to central nervous system depression, resulting in common adverse effects like drowsiness, oversedation, and light-headedness. Memory impairment can be a common adverse effect, especially in the elderly, along with hypersalivation, ataxia, slurred speech, and psychomotor effects. Sympatholytics Sympatholytics are a group of anti-hypertensives which inhibit activity of the sympathetic nervous system. Beta blockers reduce anxiety by decreasing heart rate and preventing shaking. Beta blockers include propranolol, oxprenolol, and metoprolol. The alpha-1 antagonist prazosin could be effective for PTSD. The alpha-2 agonists clonidine and guanfacine have demonstrated both anxiolytic and anxiogenic effects.
Miscellaneous Buspirone Buspirone (Buspar) is a 5-HT1A receptor agonist used to treat generalized anxiety disorder. If an individual has only recently stopped taking benzodiazepines, buspirone will be less effective. Pregabalin Pregabalin (Lyrica) produces an anxiolytic effect after one week of use comparable to lorazepam, alprazolam, and venlafaxine, with more consistent psychic and somatic anxiety reduction. Unlike BZDs, it does not disrupt sleep architecture nor does it cause cognitive or psychomotor impairment. Hydroxyzine Hydroxyzine (Atarax) is an antihistamine originally approved for clinical use by the FDA in 1956. Hydroxyzine has a calming effect which helps ameliorate anxiety. Hydroxyzine's efficacy is comparable to that of benzodiazepines in the treatment of generalized anxiety disorder. Phenibut Phenibut (Anvifen, Fenibut, Noofen) is an anxiolytic used in Russia. Phenibut is a GABAB receptor agonist, as well as an antagonist at α2δ subunit-containing voltage-dependent calcium channels (VDCCs), similarly to gabapentinoids like gabapentin and pregabalin. The medication is not approved by the FDA for use in the United States, but is sold online as a supplement. Temgicoluril Temgicoluril (Mebicar) is an anxiolytic produced in Latvia and used in Eastern Europe. Temgicoluril has an effect on the structure of limbic-reticular activity, particularly on the hypothalamus, as well as on all four basic neuromediator systems – γ-aminobutyric acid (GABA), choline, serotonin and adrenergic activity. Temgicoluril decreases noradrenaline, increases serotonin, and exerts no effect on dopamine. Fabomotizole Fabomotizole (Afobazole) is an anxiolytic drug launched in Russia in the early 2000s. Its mechanism of action is poorly defined, with GABAergic activity, promotion of NGF and BDNF release, MT1 receptor agonism, MT3 receptor antagonism, and sigma receptor agonism thought to have some involvement. Bromantane Bromantane is a stimulant drug with anxiolytic properties developed in Russia during the late 1980s. Bromantane acts mainly by facilitating the biosynthesis of dopamine, through indirect genomic upregulation of relevant enzymes (tyrosine hydroxylase (TH) and aromatic L-amino acid decarboxylase (AAAD)). Emoxypine Emoxypine is an antioxidant that is also a purported anxiolytic. Its chemical structure resembles that of pyridoxine, a form of vitamin B6. Menthyl isovalerate Menthyl isovalerate is a flavoring food additive marketed as a sedative and anxiolytic drug in Russia under the name Validol. Racetams Some racetam-based drugs such as aniracetam can have an antianxiety effect. Alpidem Alpidem is a nonbenzodiazepine anxiolytic with anxiolytic effectiveness similar to that of benzodiazepines but with reduced sedation and cognitive, memory, and motor impairment. It was marketed briefly in France but was withdrawn from the market due to liver toxicity. Etifoxine Etifoxine has similar anxiolytic effects as benzodiazepine drugs, but does not produce the same levels of sedation and ataxia. Further, etifoxine does not affect memory and vigilance, and does not induce rebound anxiety, drug dependence, or withdrawal symptoms. Alcohol Alcohol is sometimes used as an anxiolytic through self-medication. fMRI can measure the anxiolytic effects of alcohol in the human brain. Alternatives to medication Cognitive behavioral therapy (CBT) is an effective treatment for panic disorder, social anxiety disorder, generalized anxiety disorder, and obsessive–compulsive disorder, while exposure therapy is the recommended treatment for anxiety-related phobias.
Healthcare providers can guide those with anxiety disorder by referring them to self-help resources. Sometimes medication is combined with psychotherapy but research has not found a benefit of combined pharmacotherapy and psychotherapy versus monotherapy. If CBT is found ineffective, both the Canadian and American medical associations then suggest the use of medication.
Biology and health sciences
Psychiatric drugs
Health
2870
https://en.wikipedia.org/wiki/Antipsychotic
Antipsychotic
Antipsychotics, previously known as neuroleptics and major tranquilizers, are a class of psychotropic medication primarily used to manage psychosis (including delusions, hallucinations, paranoia or disordered thought), principally in schizophrenia but also in a range of other psychotic disorders. They are also the mainstay, together with mood stabilizers, in the treatment of bipolar disorder. Moreover, they are used as adjuncts in the treatment of treatment-resistant major depressive disorder. Use of antipsychotics is associated with reductions in brain tissue volumes, including white matter reduction, an effect which is dose-dependent and time-dependent. A recent controlled trial suggests that second generation antipsychotics combined with intensive psychosocial therapy may potentially prevent pallidal brain volume loss in first episode psychosis. The use of antipsychotics may result in many unwanted side effects such as involuntary movement disorders, gynecomastia, impotence, weight gain and metabolic syndrome. Long-term use can produce adverse effects such as tardive dyskinesia, tardive dystonia, and tardive akathisia. First-generation antipsychotics (e.g., chlorpromazine, haloperidol, etc.), known as typical antipsychotics, were first introduced in the 1950s, and others were developed until the early 1970s. Second-generation antipsychotics, known as atypical antipsychotics, arrived with the introduction of clozapine in the early 1970s followed by others (e.g., risperidone, olanzapine, etc.). Both generations of medication block receptors in the brain for dopamine, but atypicals block serotonin receptors as well. Third-generation antipsychotics were introduced in the 2000s and offer partial agonism, rather than blockade, of dopamine receptors. The term neuroleptic, derived from Greek roots meaning "nerve" and "take hold of"—thus "which takes the nerve"—refers to both common neurological effects and side effects. Medical uses Antipsychotics are most frequently used for the following conditions: Schizophrenia Schizoaffective disorder, most commonly in conjunction with either an antidepressant (in the case of the depressive subtype) or a mood stabilizer (in the case of the bipolar subtype). Antipsychotics possess mood stabilizing properties and thus they may be used as standalone medication to treat mood dysregulation. Bipolar disorder (acute mania and mixed episodes) may be treated with either typical or atypical antipsychotics, although atypical antipsychotics are usually preferred because they tend to have more favourable adverse effect profiles and, according to a recent meta-analysis, they tend to have a lower liability for causing conversion from mania to depression. Psychotic depression. In this indication it is a common practice for the psychiatrist to prescribe a combination of an atypical antipsychotic and an antidepressant as this practice is best supported by the evidence. Treatment-resistant depression as an adjunct to standard antidepressant therapy. Given the limited options available to treat the behavioral problems associated with dementia, other pharmacological and non-pharmacological interventions are usually attempted before using antipsychotics. A risk-to-benefit analysis is performed to weigh the risk of the adverse effects of antipsychotics versus: the potential benefit, the adverse effects of alternative interventions, and the risk of failing to intervene when a patient's behavior becomes unsafe.
The same can be said for insomnia, in which they are not recommended as first-line therapy. There are evidence-based indications for using antipsychotics in children (e.g., tic disorder, bipolar disorder, psychosis), but the use of antipsychotics outside of those contexts (e.g., to treat behavioral problems) warrants significant caution. Antipsychotics are used to treat tics associated with Tourette syndrome. Aripiprazole, an atypical antipsychotic, is used as an add-on medication to ameliorate sexual dysfunction caused by selective serotonin reuptake inhibitor (SSRI) antidepressants in women. Quetiapine is used to treat generalized anxiety disorder. Schizophrenia Antipsychotic drug treatment is a key component of schizophrenia treatment recommendations by the National Institute for Health and Care Excellence (NICE), the American Psychiatric Association, and the British Society for Psychopharmacology. The main aim of treatment with antipsychotics is to reduce the positive symptoms of psychosis, which include delusions and hallucinations. There is mixed evidence to support a significant impact of antipsychotic use on primary negative symptoms (such as apathy, lack of emotional affect, and lack of interest in social interactions) or on cognitive symptoms (memory impairments, reduced ability to plan and execute tasks). In general, the efficacy of antipsychotic treatment in reducing positive symptoms appears to increase with the severity of baseline symptoms. All antipsychotic medications work in relatively the same way: by antagonizing D2 dopamine receptors. However, there are some differences when it comes to typical and atypical antipsychotics. For example, atypical antipsychotic medications have been seen to lower the neurocognitive impairment associated with schizophrenia more than conventional antipsychotics, although the reasoning and mechanics of this are still unclear to researchers. Applications of antipsychotic drugs in the treatment of schizophrenia include prophylaxis for those showing symptoms that suggest that they are at high risk of developing psychosis; treatment of first-episode psychosis; maintenance therapy (a form of prophylaxis, maintenance therapy aims to maintain therapeutic benefit and prevent symptom relapse); and treatment of recurrent episodes of acute psychosis. A 2024 study found that using high doses of antipsychotics for schizophrenia was linked to a higher risk of mortality. Researchers analyzed data from 32,240 individuals aged 17 to 64 diagnosed with schizophrenia between 2002 and 2012 to arrive at this conclusion. Prevention of psychosis and symptom improvement Test batteries such as the PACE (Personal Assessment and Crisis Evaluation Clinic) and COPS (Criteria of Prodromal Syndromes), which measure low-level psychotic symptoms and cognitive disturbances, are used to evaluate people with early, low-level symptoms of psychosis. Test results are combined with family history information to identify patients in the "high-risk" group; they are considered to have a 20–40% risk of progression to frank psychosis within two years. These patients are often treated with low doses of antipsychotic drugs with the goal of reducing their symptoms and preventing progression to frank psychosis. While antipsychotics are generally useful for reducing symptoms, clinical trials to date show little evidence that early use of antipsychotics improves long-term outcomes in those with prodromal symptoms, either alone or in combination with cognitive-behavioral therapy.
First-episode psychosis First-episode psychosis (FEP) is the first time that psychotic symptoms are presented. NICE recommends that all people presenting with first-episode psychosis be treated with both an antipsychotic drug and cognitive behavioral therapy (CBT). NICE further recommends that those expressing a preference for CBT alone be informed that combination treatment is more effective. A diagnosis of schizophrenia is not made at this time as it takes longer to be determined by both DSM-5 and ICD-11, and only around 60% of those presenting with a first episode of psychosis will later be diagnosed with schizophrenia. The conversion rate for a first episode of drug induced psychosis to bipolar disorder or schizophrenia is lower, with 30% of people converting to either bipolar disorder or schizophrenia. NICE makes no distinction between substance-induced psychosis and any other form of psychosis. The rate of conversion differs for different classes of drugs. Pharmacological options for the specific treatment of FEP have been discussed in recent reviews. The goals of treatment for FEP include reducing symptoms and potentially improving long-term treatment outcomes. Randomized clinical trials have provided evidence for the efficacy of antipsychotic drugs in achieving the former goal, with first-generation and second generation antipsychotics showing about equal efficacy. The evidence that early treatment has a favorable effect on long-term outcomes is equivocal. Recurrent psychotic episodes Placebo-controlled trials of both first- and second-generation antipsychotic drugs consistently demonstrate the superiority of active drugs over placebos in suppressing psychotic symptoms. A large meta-analysis of 38 trials of antipsychotic drugs in schizophrenia with acute psychotic episodes showed an effect size of about 0.5. There is little or no difference in efficacy among approved antipsychotic drugs, including both first- and second-generation agents. The efficacy of such drugs is suboptimal. Few patients achieve complete resolution of symptoms. Response rates, calculated using various cutoff values for symptom reduction, are low, and their interpretation is complicated by high placebo response rates and selective publication of clinical trial results. Maintenance therapy The majority of patients treated with an antipsychotic drug will experience a response within four weeks. The goals of continuing treatment are to maintain suppression of symptoms, prevent relapse, improve quality of life, and support engagement in psychosocial therapy. Maintenance therapy with antipsychotic drugs is clearly superior to placebo in preventing relapse but is associated with weight gain, movement disorders, and high dropout rates. A 3-year trial following persons receiving maintenance therapy after an acute psychotic episode found that 33% obtained long-lasting symptom reduction, 13% achieved remission, and only 27% experienced satisfactory quality of life. The effect of relapse prevention on long term outcomes is uncertain, as historical studies show little difference in long term outcomes before and after the introduction of antipsychotic drugs. 
While maintenance therapy clearly reduces the rate of relapses requiring hospitalization, a large observational study in Finland found that, in people that eventually discontinued antipsychotics, the risk of being hospitalized again for a mental health problem or dying increased the longer they were dispensed (and presumably took) antipsychotics prior to stopping therapy. If people did not stop taking antipsychotics, they remained at low risk for relapse and hospitalization compared to those that did. The authors speculated that the difference may be because the people that discontinued treatment after a longer time had more severe mental illness than those that discontinued antipsychotic therapy sooner. A significant challenge in the use of antipsychotic drugs for the prevention of relapse is the poor rate of adherence. In spite of the relatively high rates of adverse effects associated with these drugs, some evidence, including higher dropout rates in placebo arms compared to treatment arms in randomized clinical trials, suggests that most patients who discontinue treatment do so because of suboptimal efficacy. If someone experiences psychotic symptoms due to nonadherence, they may be compelled to receive treatment through a process called involuntary commitment, in which they can be forced to accept treatment (including antipsychotics). A person can also be committed to treatment outside of a hospital, called outpatient commitment. Antipsychotics in long-acting injectable (LAI), or "depot", form have been suggested as a method of decreasing medication nonadherence (sometimes also called non-compliance). NICE advises LAIs be offered to patients when preventing covert, intentional nonadherence is a clinical priority. LAIs are used to ensure adherence in outpatient commitment. A meta-analysis found that LAIs resulted in lower rates of rehospitalization with a hazard ratio of 0.83; however, these results were not statistically significant (the 95% confidence interval was 0.62 to 1.11). Bipolar disorder Antipsychotics are routinely used, often in conjunction with mood stabilizers such as lithium/valproate, as a first-line treatment for manic and mixed episodes associated with bipolar disorder. The reason for this combination is the therapeutic delay of the aforementioned mood stabilizers (for valproate therapeutic effects are usually seen around five days after treatment is commenced whereas lithium usually takes at least a week before the full therapeutic effects are seen) and the comparatively rapid antimanic effects of antipsychotic drugs. The antipsychotics have a documented efficacy when used alone in acute mania/mixed episodes. At least five atypical antipsychotics (lumateperone, cariprazine, lurasidone, olanzapine, and quetiapine) have also been found to possess efficacy in the treatment of bipolar depression as a monotherapy, whereas only olanzapine and quetiapine have been proven to be effective broad-spectrum (i.e., against all three types of relapse—manic, mixed and depressive) prophylactic (or maintenance) treatments in patients with bipolar disorder. A recent Cochrane review also found that olanzapine had a less favourable risk/benefit ratio than lithium as a maintenance treatment for bipolar disorder. 
The American Psychiatric Association and the UK National Institute for Health and Care Excellence recommend antipsychotics for managing acute psychotic episodes in schizophrenia or bipolar disorder, and as a longer-term maintenance treatment for reducing the likelihood of further episodes. They state that response to any given antipsychotic can be variable so that trials may be necessary, and that lower doses are to be preferred where possible. A number of studies have looked at levels of "compliance" or "adherence" with antipsychotic regimes and found that discontinuation (stopping taking them) by patients is associated with higher rates of relapse, including hospitalization. Dementia Psychosis and agitation develop in as many as 80 percent of people living in nursing homes. Despite a lack of FDA approval and black-box warnings, atypical antipsychotics are very often prescribed to people with dementia. An assessment for an underlying cause of behavior is needed before prescribing antipsychotic medication for symptoms of dementia. Antipsychotics in old age dementia showed a modest benefit compared to placebo in managing aggression or psychosis, but this is combined with a fairly large increase in serious adverse events. Thus, antipsychotics should not be used routinely to treat dementia with aggression or psychosis, but may be an option in a few cases where there is severe distress or risk of physical harm to others. Psychosocial interventions may reduce the need for antipsychotics. In 2005, the FDA issued an advisory warning of an increased risk of death when atypical antipsychotics are used in dementia. In the subsequent 5 years, the use of atypical antipsychotics to treat dementia decreased by nearly 50%. Major depressive disorder A number of atypical antipsychotics have some benefits when used in addition to other treatments in major depressive disorder. Aripiprazole, quetiapine extended-release, and olanzapine (when used in conjunction with fluoxetine) have received the Food and Drug Administration (FDA) labelling for this indication. There is, however, a greater risk of side effects with their use compared to using traditional antidepressants. The greater risk of serious side effects with antipsychotics is why, e.g., quetiapine was denied approval as monotherapy for major depressive disorder or generalized anxiety disorder, and instead was only approved as an adjunctive treatment in combination with traditional antidepressants. A recent study on the use of antipsychotics in unipolar depression concluded that the use of those drugs in addition to antidepressants, compared with antidepressants alone, leads to a worse disease outcome. This effect is especially pronounced in younger patients with psychotic unipolar depression. Considering the wide use of such combination therapies, further studies on the side effects of antipsychotics as an add-on therapy are warranted. Other Global antipsychotic utilization has seen steady growth since the introduction of atypical (second-generation) antipsychotics, and this is ascribed to off-label use for many other unapproved disorders. Besides the above uses, antipsychotics may be used for obsessive–compulsive disorder, post-traumatic stress disorder, personality disorders, Tourette syndrome, autism and agitation in those with dementia. Evidence, however, does not support the use of atypical antipsychotics in eating disorders or personality disorder. The atypical antipsychotic risperidone may be useful for obsessive–compulsive disorder.
The use of low doses of antipsychotics for insomnia, while common, is not recommended as there is little evidence of benefit as well as concern regarding adverse effects. Some of the more serious adverse effects may also occur at the low doses used, such as dyslipidemia and neutropenia, and a recent network meta-analysis of 154 double-blind, randomized controlled trials of drug therapies vs. placebo for insomnia in adults found that quetiapine did not demonstrate any short-term benefits in sleep quality. Low-dose antipsychotics may also be used in treatment of impulse-behavioural and cognitive-perceptual symptoms of borderline personality disorder. Despite the lack of evidence supporting the benefit of antipsychotics in people with personality disorders, 1 in 4 who do not have a serious mental illness are prescribed them in UK primary care. Many people receive these medications for over a year, contrary to NICE guidelines. In children they may be used in those with disruptive behavior disorders, mood disorders and pervasive developmental disorders or intellectual disability. Antipsychotics are only weakly recommended for Tourette syndrome, because although they are effective, side effects are common. The situation is similar for those on the autism spectrum. Much of the evidence for the off-label use of antipsychotics (for example, for dementia, OCD, PTSD, personality disorders, Tourette's) was of insufficient scientific quality to support such use, especially as there was strong evidence of increased risks of stroke, tremors, significant weight gain, sedation, and gastrointestinal problems. A UK review of unlicensed usage in children and adolescents reported a similar mixture of findings and concerns. A survey of children with pervasive developmental disorder found that 16.5% were taking an antipsychotic drug, most commonly for irritability, aggression, and agitation. Both risperidone and aripiprazole have been approved by the US FDA for the treatment of irritability in autistic children and adolescents. A review in the UK found that the use of antipsychotics in England doubled between 2000 and 2019. Children were prescribed antipsychotics for conditions for which there is no approval, such as autism. Aggressive challenging behavior in adults with intellectual disability is often treated with antipsychotic drugs despite the lack of an evidence base. A recent randomized controlled trial, however, found no benefit over placebo and recommended that the use of antipsychotics in this way should no longer be regarded as an acceptable routine treatment. Antipsychotics may be an option, together with stimulants, in people with ADHD and aggressive behavior when other treatments have not worked. They have not been found to be useful for the prevention of delirium among those admitted to hospital. Typicals vs atypicals Aside from reduced extrapyramidal symptoms, and with the clear exception of clozapine, it is unclear whether the atypical (second-generation) antipsychotics offer advantages over older, first-generation antipsychotics. Amisulpride, olanzapine, risperidone and clozapine may be more effective but are associated with greater side effects. Typical antipsychotics have equal drop-out and symptom relapse rates to atypicals when used at low to moderate dosages.
Clozapine is an effective treatment for those who respond poorly to other drugs ("treatment-resistant" or "refractory" schizophrenia), but it has the potentially serious side effect of agranulocytosis (lowered white blood cell count) in less than 4% of people. Due to bias in the research, the accuracy of comparisons of atypical antipsychotics is a concern. In 2005, a US government body, the National Institute of Mental Health, published the results of a major independent study (the CATIE project). No other atypical studied (risperidone, quetiapine, and ziprasidone) did better than the first-generation antipsychotic perphenazine on the measures used, nor did they produce fewer adverse effects than perphenazine, although more patients discontinued perphenazine owing to extrapyramidal effects compared to the atypical agents (8% vs. 2% to 4%). This is significant because any patient with tardive dyskinesia was specifically excluded from randomization to perphenazine; i.e., in the CATIE study the patient cohort randomized to receive perphenazine was at lower risk of having extrapyramidal symptoms. Atypical antipsychotics do not appear to lead to improved rates of medication adherence compared to typical antipsychotics. Many researchers question the first-line prescribing of atypicals over typicals, and some even question the distinction between the two classes. In contrast, other researchers point to the significantly higher risk of tardive dyskinesia and other extrapyramidal symptoms with the typicals and for this reason alone recommend first-line treatment with the atypicals, notwithstanding a greater propensity for metabolic adverse effects in the latter. The UK government organization NICE recently revised its recommendation favoring atypicals, to advise that the choice should be an individual one based on the particular profiles of the individual drug and on the patient's preferences. The re-evaluation of the evidence has not necessarily slowed the bias toward prescribing the atypicals. Other uses Antipsychotics, such as risperidone, quetiapine, and olanzapine, have been used as hallucinogen antidotes or "trip killers" to block the effects of serotonergic psychedelics like psilocybin and lysergic acid diethylamide (LSD). Adverse effects Generally, more than one antipsychotic drug should not be used at a time because of increased adverse effects. Some atypicals are associated with considerable weight gain, diabetes and the risk of metabolic syndrome. Unwanted side effects cause people to stop treatment, resulting in relapses. Risperidone (atypical) has a similar rate of extrapyramidal symptoms to haloperidol (typical). A rare but potentially lethal condition of neuroleptic malignant syndrome (NMS) has been associated with the use of antipsychotics. Through its early recognition and timely intervention, rates have declined. However, an awareness of the syndrome is advised to enable intervention. Another less rare condition of tardive dyskinesia can occur due to long-term use of antipsychotics, developing after months or years of use. It is more often reported with use of typical antipsychotics. Very rarely antipsychotics may cause tardive psychosis. Clozapine is associated with side effects that include weight gain, tiredness, and hypersalivation. More serious adverse effects include seizures, NMS, neutropenia, and agranulocytosis (lowered white blood cell count) and its use needs careful monitoring.
Clozapine is also associated with thromboembolism (including pulmonary embolism), myocarditis, and cardiomyopathy. A systematic review of clozapine-associated pulmonary embolism indicates that this adverse effect can often be fatal, that it has an early onset, and that it is dose-dependent. The findings advised the consideration of using a prevention therapy for venous thromboembolism after starting treatment with clozapine, and continuing this for six months. Constipation is three times more likely to occur with the use of clozapine, and severe cases can lead to ileus and bowel ischemia resulting in many fatalities. Very rare clozapine adverse effects include periorbital edema due to several possible mechanisms (e.g., inhibition of platelet-derived growth factor receptors leading to increased vascular permeability, antagonism of renal dopamine receptors with electrolyte and fluid imbalance, and immune-mediated hypersensitivity reactions). However, the risk of serious adverse effects from clozapine is low, and there are beneficial effects to be gained, such as a reduced risk of suicide and aggression. Typical antipsychotics and the atypical risperidone can have a side effect of sexual dysfunction. Clozapine, olanzapine, and quetiapine are associated with beneficial effects on sexual functioning helped by various psychotherapies. By rate Common (≥ 1% and up to 50% incidence for most antipsychotic drugs) adverse effects of antipsychotics include: Dysphoria and apathy (due to dopamine receptor blockade) Sedation (particularly common with asenapine, clozapine, olanzapine, quetiapine, chlorpromazine and zotepine) Headaches Dizziness Diarrhea Anxiety Extrapyramidal side effects (particularly common with first-generation antipsychotics), which include: Akathisia, an often distressing sense of inner restlessness. Dystonia, an abnormal muscle contraction Pseudoparkinsonism, symptoms that are similar to what people with Parkinson's disease experience, including tremulousness and drooling Hyperprolactinaemia (rare for those treated with clozapine, quetiapine and aripiprazole), which can cause: Galactorrhoea, the unusual secretion of breast milk. Gynaecomastia, abnormal growth of breast tissue Sexual dysfunction (in both sexes) Osteoporosis Orthostatic hypotension Weight gain (particularly prominent with clozapine, olanzapine, quetiapine and zotepine; can be counteracted by starting the drug with metformin) Anticholinergic side-effects (common for olanzapine, clozapine; less likely on risperidone) such as: Blurred vision Constipation Dry mouth (although hypersalivation may also occur) Reduced perspiration Tardive dyskinesia appears to be more frequent with high-potency first-generation antipsychotics, such as haloperidol, and tends to appear after chronic and not acute treatment. It is characterized by slow (hence the tardive) repetitive, involuntary and purposeless movements, most often of the face, lips, legs, or torso, which tend to resist treatment and are frequently irreversible. The rate of appearance of TD is about 5% per year of use of antipsychotic drug (whatever the drug used). Breast cancer: a systematic review and meta-analysis of observational studies with over 2 million individuals estimated that antipsychotic use is associated with an increased risk of breast cancer of over 30%. Rare/Uncommon (<1% incidence for most antipsychotic drugs) adverse effects of antipsychotics include: Blood dyscrasias (e.g., agranulocytosis, leukopenia, and neutropaenia), which are more common in patients on clozapine.
Metabolic syndrome and other metabolic problems such as type II diabetes mellitus — particularly common with clozapine, olanzapine and zotepine. In American studies African Americans appeared to be at a heightened risk for developing type II diabetes mellitus. Evidence suggests that females are more sensitive to the metabolic side effects of first-generation antipsychotic drugs than males. Metabolic adverse effects appear to be mediated by antagonizing the histamine H1 and serotonin 5-HT2C receptors and perhaps by interacting with other neurochemical pathways in the central nervous system. Neuroleptic malignant syndrome, a potentially fatal condition characterized by: Autonomic instability, which can manifest with tachycardia, nausea, vomiting, diaphoresis, etc. Hyperthermia — elevated body temperature. Mental status change (confusion, hallucinations, coma, etc.) Muscle rigidity Laboratory abnormalities (e.g., elevated creatine kinase, reduced iron plasma levels, electrolyte abnormalities, etc.) Pancreatitis QT interval prolongation — more prominent in those treated with amisulpride, pimozide, sertindole, thioridazine and ziprasidone. Torsades de pointes Seizures, particularly in people treated with chlorpromazine and clozapine. Thromboembolism Myocardial infarction Stroke Pisa syndrome Long-term effects Some studies have found decreased life expectancy associated with the use of antipsychotics, and argued that more studies are needed. Antipsychotics may also increase the risk of early death in individuals with dementia. Antipsychotics typically worsen symptoms in people with depersonalisation disorder. Antipsychotic polypharmacy (prescribing two or more antipsychotics at the same time for an individual) is a common practice but not evidence-based or recommended, and there are initiatives to curtail it. Similarly, the use of excessively high doses (often the result of polypharmacy) continues despite clinical guidelines and evidence indicating that it is usually no more effective but is usually more harmful. A meta-analysis of observational studies with over two million individuals has suggested a moderate association of antipsychotic use with breast cancer. Loss of grey matter and other brain structural changes over time are observed amongst people diagnosed with schizophrenia. Meta-analyses of the effects of antipsychotic treatment on grey matter volume and the brain's structure have reached conflicting conclusions. A 2012 meta-analysis concluded that grey matter loss is greater in patients treated with first generation antipsychotics relative to those treated with atypicals, and hypothesized a protective effect of atypicals as one possible explanation. A second meta-analysis suggested that treatment with antipsychotics was associated with increased grey matter loss. Animal studies found that monkeys exposed to both first- and second-generation antipsychotics experience significant reduction in brain volume, resulting in an 8-11% reduction in brain volume over a 17–27 month period. The National Association of State Mental Health Program Directors said that antipsychotics are not interchangeable, and it recommends including trying at least one weight-neutral treatment for those patients with potential metabolic issues. Subtle, long-lasting forms of akathisia are often overlooked or confused with post-psychotic depression, in particular when they lack the extrapyramidal aspect that psychiatrists have been taught to expect when looking for signs of akathisia. 
Adverse effect on cognitive function and increased risk of death in people with dementia along with worsening of symptoms has been described in the literature. Antipsychotics, due to acting as dopamine D2 receptor antagonists and thereby stimulating pituitary lactotrophs, may have a risk of prolactinoma with long-term use. This is also responsible for their induction of hyperprolactinemia (high prolactin levels). Discontinuation The British National Formulary recommends a gradual withdrawal when discontinuing antipsychotics to avoid acute withdrawal syndrome or rapid relapse. Symptoms of withdrawal commonly include nausea, vomiting, and loss of appetite. Other symptoms may include restlessness, increased sweating, and trouble sleeping. Less commonly there may be a feeling of the world spinning, numbness, or muscle pains. Symptoms generally resolve after a short period of time. There is tentative evidence that discontinuation of antipsychotics can result in psychosis. It may also result in recurrence of the condition that is being treated. Rarely, tardive dyskinesia can occur when the medication is stopped. Unexpected psychotic episodes have been observed in patients withdrawing from clozapine. This is referred to as supersensitivity psychosis, not to be equated with tardive dyskinesia. Tardive dyskinesia may abate during withdrawal from the antipsychotic agent, or it may persist. Withdrawal effects may also occur when switching a person from one antipsychotic to another, (it is presumed due to variations of potency and receptor activity). Such withdrawal effects can include cholinergic rebound, an activation syndrome, and motor syndromes including dyskinesias. These adverse effects are more likely during rapid changes between antipsychotic agents, so making a gradual change between antipsychotics minimises these withdrawal effects. The British National Formulary recommends a gradual dose reduction when discontinuing antipsychotic treatment to avoid acute withdrawal symptoms or rapid relapse. The process of cross-titration involves gradually increasing the dose of the new medication while gradually decreasing the dose of the old medication. City and Hackney Clinical Commissioning Group found more than 1,000 patients in their area in July 2019 who had not had regular medication reviews or health checks because they were not registered as having serious mental illness. On average they had been taking these drugs for six years. If this is typical of practice in England more than 100,000 patients are probably in the same position. List of agents Clinically used antipsychotic medications are listed below by drug group. Trade names appear in parentheses. A 2013 review has stated that the division of antipsychotics into first and second generation is perhaps not accurate.
Biology and health sciences
Psychiatric drugs
Health
2885
https://en.wikipedia.org/wiki/Amoxicillin
Amoxicillin
Amoxicillin is an antibiotic medication belonging to the aminopenicillin class of the penicillin family. The drug is used to treat bacterial infections such as middle ear infection, strep throat, pneumonia, skin infections, odontogenic infections, and urinary tract infections. It is taken orally (swallowed by mouth), or less commonly by either intramuscular injection or by an IV bolus injection, which is a relatively quick intravenous injection lasting from a couple of seconds to a few minutes. Common adverse effects include nausea and rash. It may also increase the risk of yeast infections and, when used in combination with clavulanic acid, diarrhea. It should not be used in those who are allergic to penicillin. While usable in those with kidney problems, the dose may need to be decreased. Its use in pregnancy and breastfeeding does not appear to be harmful. Amoxicillin is in the β-lactam family of antibiotics. Amoxicillin was discovered in 1958 and came into medical use in 1972. Amoxil was approved for medical use in the United States in 1974, and in the United Kingdom in 1977. It is on the World Health Organization's List of Essential Medicines. It is one of the most commonly prescribed antibiotics in children. Amoxicillin is available as a generic medication. In 2022, it was the 26th most commonly prescribed medication in the United States, with more than 20 million prescriptions. Medical uses Amoxicillin is used in the treatment of a number of infections, including acute otitis media, streptococcal pharyngitis, pneumonia, skin infections, urinary tract infections, Salmonella infections, Lyme disease, and chlamydia infections. Acute otitis media Children with acute otitis media who are younger than six months of age are generally treated with amoxicillin or other antibiotics. Although most children with acute otitis media who are older than two years old do not benefit from treatment with amoxicillin or other antibiotics, such treatment may be helpful in children younger than two years old with acute otitis media that is bilateral or accompanied by ear drainage. In the past, amoxicillin was dosed three times daily when used to treat acute otitis media, which resulted in missed doses in routine ambulatory practice. There is now evidence that twice-daily dosing or once-daily dosing has similar effectiveness. Respiratory infections Most sinusitis infections are caused by viruses, for which amoxicillin and amoxicillin-clavulanate are ineffective, and the small benefit gained by amoxicillin may be overridden by the adverse effects. Amoxicillin is considered the first-line empirical treatment for most cases of uncomplicated bacterial sinusitis in children and adults when culture data is unavailable. Amoxicillin is recommended as the preferred first-line treatment for community-acquired pneumonia in adults by the National Institute for Health and Care Excellence, either alone (mild to moderate severity disease) or in combination with a macrolide. Research suggests that it is as effective as co-amoxiclav (a broad-spectrum antibiotic) for people admitted to hospital with pneumonia, regardless of its severity. The World Health Organization (WHO) recommends amoxicillin as first-line treatment for pneumonia that is not "severe". Amoxicillin is used in post-exposure inhalation of anthrax to prevent disease progression and for prophylaxis. H. pylori It is effective as one part of a multi-drug regimen for the treatment of stomach infections of Helicobacter pylori.
It is typically combined with a proton-pump inhibitor (such as omeprazole) and a macrolide antibiotic (such as clarithromycin); other drug combinations are also effective. Lyme borreliosis Amoxicillin is effective for the treatment of early cutaneous Lyme borreliosis; the effectiveness and safety of oral amoxicillin is neither better nor worse than that of common alternatively-used antibiotics. Odontogenic infections Amoxicillin is used to treat odontogenic infections, infections of the tongue, lips, and other oral tissues. It may be prescribed following a tooth extraction, particularly in those with compromised immune systems. Skin infections Amoxicillin is occasionally used for the treatment of skin infections, such as acne vulgaris. It is often an effective treatment for cases of acne vulgaris that have responded poorly to other antibiotics, such as doxycycline and minocycline. Infections in infants in resource-limited settings Amoxicillin is recommended by the World Health Organization for the treatment of infants with signs and symptoms of pneumonia in resource-limited situations when the parents are unable or unwilling to accept hospitalization of the child. Amoxicillin in combination with gentamicin is recommended for the treatment of infants with signs of other severe infections when hospitalization is not an option. Prevention of bacterial endocarditis It is also used to prevent bacterial endocarditis in high-risk people having dental work done, to prevent Streptococcus pneumoniae and other encapsulated bacterial infections in those without spleens, such as people with sickle-cell disease, and for both the prevention and the treatment of anthrax. The United Kingdom recommends against its use for infectious endocarditis prophylaxis. These recommendations do not appear to have changed the rates of infection for infectious endocarditis. Combination treatment Amoxicillin is susceptible to degradation by β-lactamase-producing bacteria, which are resistant to most β-lactam antibiotics, such as penicillin. For this reason, it may be combined with clavulanic acid, a β-lactamase inhibitor. This drug combination is commonly called co-amoxiclav. Spectrum of activity It is a moderate-spectrum, bacteriolytic, β-lactam antibiotic in the aminopenicillin family used to treat susceptible Gram-positive and Gram-negative bacteria. It is usually the drug of choice within the class because it is better absorbed, following oral administration, than other β-lactam antibiotics. In general, Streptococcus, Bacillus subtilis, Enterococcus, Haemophilus, Helicobacter, and Moraxella are susceptible to amoxicillin, whereas Citrobacter, Klebsiella and Pseudomonas aeruginosa are resistant to it. Some E. coli and most clinical strains of Staphylococcus aureus have developed resistance to amoxicillin to varying degrees. Adverse effects Adverse effects are similar to those for other β-lactam antibiotics, including nausea, vomiting, rashes, and antibiotic-associated colitis. Diarrhea (loose bowel movements) may also occur. Rarer adverse effects include mental and behavioral changes, lightheadedness, insomnia, hyperactivity, agitation, confusion, anxiety, sensitivity to lights and sounds, and unclear thinking. Immediate medical care is required upon the first signs of these adverse effects. Similarly to other penicillins, amoxicillin has been associated with an increased risk of seizures. Amoxicillin-induced neurotoxicity has been especially associated with concentrations of greater than 110 mg/L.
The onset of an allergic reaction to amoxicillin can be very sudden and intense; emergency medical attention must be sought as quickly as possible. The initial phase of such a reaction often starts with a change in mental state, skin rash with intense itching (often beginning in the fingertips and around the groin area and rapidly spreading), and sensations of fever, nausea, and vomiting. Any other symptoms that seem even remotely suspicious must be taken very seriously. However, milder allergy symptoms, such as a rash, can occur at any time during treatment, even up to a week after treatment has ceased. For some people allergic to amoxicillin, the adverse effects can be fatal due to anaphylaxis. Use of the amoxicillin/clavulanic acid combination for more than one week has caused a drug-induced immunoallergic-type hepatitis in some patients. Young children having ingested acute overdoses of amoxicillin manifested lethargy, vomiting, and renal dysfunction. There is poor reporting of adverse effects of amoxicillin from clinical trials. For this reason, the severity and frequency of adverse effects from amoxicillin are probably higher than reported in clinical trials.

Nonallergic rash
Between 3 and 10% of children taking amoxicillin (or ampicillin) show a late-developing (>72 hours after beginning medication and having never taken penicillin-like medication previously) rash, which is sometimes referred to as the "amoxicillin rash". The rash can also occur in adults and may rarely be a component of the DRESS syndrome. The rash is described as maculopapular or morbilliform (measles-like); therefore, in the medical literature, it is called "amoxicillin-induced morbilliform rash". It starts on the trunk and can spread from there. This rash is unlikely to be a true allergic reaction and is not a contraindication for future amoxicillin usage, nor should the current regimen necessarily be stopped. However, this common amoxicillin rash and a dangerous allergic reaction cannot easily be distinguished by inexperienced persons, so a healthcare professional is often required to distinguish between the two. A nonallergic amoxicillin rash may also be an indicator of infectious mononucleosis. Some studies indicate about 80–90% of patients with acute Epstein–Barr virus infection treated with amoxicillin or ampicillin develop such a rash.

Interactions
Amoxicillin may interact with these drugs:
Anticoagulants (dabigatran, warfarin)
Methotrexate (chemotherapy and immunosuppressant)
Typhoid, cholera and BCG vaccines
Probenecid, which reduces renal excretion and increases blood levels of amoxicillin
Oral contraceptives, which potentially become less effective
Allopurinol (gout treatment)
Mycophenolate (immunosuppressant)

When given intravenously or intramuscularly:
It should not be mixed with blood products, proteinaceous fluids (including protein hydrolysates), or intravenous lipid emulsions.
An aminoglycoside should be injected at a separate site from amoxicillin if the patient is prescribed both medications at the same time. Neither drug should be mixed in a syringe, an intravenous fluid container, or a giving set, because of loss of activity of the aminoglycoside under these conditions.
Ciprofloxacin should not be mixed with amoxicillin.
Infusions containing dextran or bicarbonate should not be mixed with amoxicillin solutions.
Pharmacology
Amoxicillin (α-amino-p-hydroxybenzyl penicillin) is a semisynthetic derivative of penicillin with a structure similar to ampicillin but with better absorption when taken by mouth, thus yielding higher concentrations in blood and in urine. Amoxicillin diffuses easily into tissues and body fluids. It will cross the placenta and is excreted into breastmilk in small quantities. It is metabolized by the liver and excreted into the urine. It has an onset of 30 minutes and a half-life of 3.7 hours in newborns and 1.4 hours in adults. Amoxicillin attaches to the cell wall of susceptible bacteria and results in their death. It is effective against streptococci, pneumococci, enterococci, Haemophilus influenzae, Escherichia coli, Proteus mirabilis, Neisseria meningitidis, Neisseria gonorrhoeae, Shigella, Chlamydia trachomatis, Salmonella, Borrelia burgdorferi, and Helicobacter pylori. As a derivative of ampicillin, amoxicillin is a member of the penicillin family and, like penicillins, is a β-lactam antibiotic. It inhibits cross-linkage between the linear peptidoglycan polymer chains that make up a major component of the bacterial cell wall. It has two ionizable groups in the physiological range (the amino group in alpha-position to the amide carbonyl group and the carboxyl group).

Chemistry
Amoxicillin is a β-lactam and aminopenicillin antibiotic in terms of chemical structure. It is structurally related to ampicillin. The experimental log P of amoxicillin is 0.87. It is described as an "ambiphilic" antibiotic (between hydrophilic and lipophilic).

History
Amoxicillin was one of several semisynthetic derivatives of 6-aminopenicillanic acid (6-APA) developed by the Beecham Group in the 1960s. It was invented by Anthony Alfred Walter Long and John Herbert Charles Nayler, two British scientists. It became available in 1972 and was the second aminopenicillin to reach the market (after ampicillin in 1961). Co-amoxiclav became available in 1981.

Society and culture

Economics
Amoxicillin is relatively inexpensive. In 2022, a survey of eight generic antibiotics commonly prescribed in the United States found their average cost to be about $42.67, while amoxicillin was sold for $12.14 on average.

Modes of delivery
Pharmaceutical manufacturers make amoxicillin in trihydrate form for oral use (available as capsules; regular, chewable, and dispersible tablets; and syrup and pediatric suspension) and as the sodium salt for intravenous administration. An extended-release formulation is available. The intravenous form of amoxicillin is not sold in the United States. When an intravenous aminopenicillin is required in the United States, ampicillin is typically used. When there is an adequate response to ampicillin, the course of antibiotic therapy may often be completed with oral amoxicillin. Research with mice indicated successful delivery using intraperitoneally injected amoxicillin-bearing microparticles.

Names
Amoxicillin is the international nonproprietary name (INN), British Approved Name (BAN), and United States Adopted Name (USAN), while amoxycillin is the Australian Approved Name (AAN). Amoxicillin is one of the semisynthetic penicillins discovered by the former pharmaceutical company Beecham Group. The patent for amoxicillin has expired, thus amoxicillin and co-amoxiclav preparations are marketed under various brand names across the world.

Veterinary uses
Amoxicillin is also sometimes used as an antibiotic for animals.
The use of amoxicillin for animals intended for human consumption (chickens, cattle, and swine for example) has been approved.
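As a rough numerical illustration of the elimination half-lives quoted in the pharmacology section above (about 1.4 hours in adults and 3.7 hours in newborns), the short Python sketch below applies simple first-order (exponential) decay. The single-compartment assumption and the chosen time points are illustrative only, not dosing guidance.

```python
def remaining_fraction(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of a dose still present after a given time,
    assuming simple first-order (exponential) elimination."""
    return 0.5 ** (hours_elapsed / half_life_hours)

# Illustrative values from the pharmacology section above:
ADULT_HALF_LIFE_H = 1.4      # hours
NEWBORN_HALF_LIFE_H = 3.7    # hours

for t in (2, 4, 8):
    adult = remaining_fraction(t, ADULT_HALF_LIFE_H)
    newborn = remaining_fraction(t, NEWBORN_HALF_LIFE_H)
    print(f"after {t} h: adult ~{adult:.0%} remaining, newborn ~{newborn:.0%} remaining")
```

The slower clearance in newborns is why, under this simplified model, a larger fraction of the dose remains at each time point.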
Biology and health sciences
Antibiotics
Health
2889
https://en.wikipedia.org/wiki/Amorphous%20solid
Amorphous solid
In condensed matter physics and materials science, an amorphous solid (or non-crystalline solid) is a solid that lacks the long-range order that is characteristic of a crystal. The terms "glass" and "glassy solid" are sometimes used synonymously with amorphous solid; however, these terms refer specifically to amorphous materials that undergo a glass transition. Examples of amorphous solids include glasses, metallic glasses, and certain types of plastics and polymers.

Etymology
The term comes from the Greek a ("without"), and morphé ("shape, form").

Structure
Amorphous materials have an internal structure of molecular-scale structural blocks that can be similar to the basic structural units in the crystalline phase of the same compound. Unlike in crystalline materials, however, no long-range regularity exists: amorphous materials cannot be described by the repetition of a finite unit cell. Statistical measures, such as the atomic density function and radial distribution function, are more useful in describing the structure of amorphous solids. Although amorphous materials lack long-range order, they exhibit localized order on small length scales. By convention, short-range order extends only to the nearest-neighbor shell, typically only 1–2 atomic spacings. Medium-range order may extend beyond the short-range order by 1–2 nm.

Fundamental properties of amorphous solids

Glass transition at high temperatures
The freezing from the liquid state into an amorphous solid (the glass transition) is considered one of the most important unsolved problems of physics.

Universal low-temperature properties of amorphous solids
At very low temperatures (below 1–10 K), a large family of amorphous solids shares similar low-temperature properties. Although there are various theoretical models, neither the glass transition nor the low-temperature properties of glassy solids are well understood at the fundamental physics level. The study of amorphous solids is an important area of condensed matter physics aiming to understand these substances both at the high temperatures of the glass transition and at low temperatures towards absolute zero. From the 1970s, low-temperature properties of amorphous solids were studied experimentally in great detail. For all of these substances, specific heat has a (nearly) linear dependence as a function of temperature, and thermal conductivity has a nearly quadratic temperature dependence. These properties are conventionally called anomalous, being very different from the properties of crystalline solids. On the phenomenological level, many of these properties were described by a collection of tunnelling two-level systems. Nevertheless, the microscopic theory of these properties is still missing after more than 50 years of research. Remarkably, the dimensionless internal friction is nearly universal in these materials. This quantity is a dimensionless ratio (up to a numerical constant) of the phonon wavelength to the phonon mean free path. Since the theory of tunnelling two-level states (TLSs) does not address the origin of the density of TLSs, this theory cannot explain the universality of internal friction, which in turn is proportional to the density of scattering TLSs. The theoretical significance of this important and unsolved problem was highlighted by Anthony Leggett.

Nano-structured materials
Amorphous materials will have some degree of short-range order at the atomic-length scale due to the nature of intermolecular chemical bonding.
Furthermore, in very small crystals, short-range order encompasses a large fraction of the atoms; nevertheless, relaxation at the surface, along with interfacial effects, distorts the atomic positions and decreases structural order. Even the most advanced structural characterization techniques, such as X-ray diffraction and transmission electron microscopy, can have difficulty distinguishing amorphous and crystalline structures at short-size scales.

Characterization of amorphous solids
Due to the lack of long-range order, standard crystallographic techniques are often inadequate for determining the structure of amorphous solids. A variety of electron, X-ray, and computation-based techniques have been used to characterize amorphous materials. Multi-modal analysis is very common for amorphous materials.

X-ray and neutron diffraction
Unlike crystalline materials, which exhibit strong Bragg diffraction, the diffraction patterns of amorphous materials are characterized by broad and diffuse peaks. As a result, detailed analysis and complementary techniques are required to extract real-space structural information from the diffraction patterns of amorphous materials. It is useful to obtain diffraction data from both X-ray and neutron sources as they have different scattering properties and provide complementary data. Pair distribution function analysis can be performed on diffraction data to determine the probability of finding a pair of atoms separated by a certain distance. Another type of analysis that is done with diffraction data of amorphous materials is radial distribution function analysis, which measures the number of atoms found at varying radial distances away from an arbitrary reference atom. From these techniques, the local order of an amorphous material can be elucidated.

X-ray absorption fine-structure spectroscopy
X-ray absorption fine-structure spectroscopy is an atomic-scale probe, making it useful for studying materials lacking in long-range order. Spectra obtained using this method provide information on the oxidation state, coordination number, and species surrounding the atom in question as well as the distances at which they are found.

Atomic electron tomography
The atomic electron tomography technique is performed in transmission electron microscopes capable of reaching sub-angstrom resolution. A collection of 2D images taken at numerous different tilt angles is acquired from the sample in question and then used to reconstruct a 3D image. After image acquisition, a significant amount of processing must be done to correct for issues such as drift, noise, and scan distortion. High-quality analysis and processing using atomic electron tomography results in a 3D reconstruction of an amorphous material detailing the atomic positions of the different species that are present.

Fluctuation electron microscopy
Fluctuation electron microscopy is another transmission electron microscopy-based technique that is sensitive to the medium-range order of amorphous materials. Structural fluctuations arising from different forms of medium-range order can be detected with this method. Fluctuation electron microscopy experiments can be done in conventional or scanning transmission electron microscope mode.

Computational techniques
Simulation and modeling techniques are often combined with experimental methods to characterize structures of amorphous materials. Commonly used computational techniques include density functional theory, molecular dynamics, and reverse Monte Carlo.
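In experiments, the radial distribution function described above is extracted from diffraction data; to make the concept concrete, the minimal sketch below computes the closely related pair correlation g(r) directly from a set of atomic coordinates, as one might for a molecular dynamics or reverse Monte Carlo model. The box size, bin width, and random coordinates are illustrative assumptions, not any particular material.

```python
import numpy as np

def radial_distribution(positions: np.ndarray, box_length: float,
                        r_max: float, n_bins: int = 100):
    """Estimate g(r) for atoms in a cubic box with periodic boundary conditions."""
    n = len(positions)
    rho = n / box_length**3                          # average number density
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)

    for i in range(n - 1):
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)   # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        counts += np.histogram(r, bins=edges)[0]

    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal_pairs = rho * shell_vol * n / 2.0          # pairs expected for an ideal (uncorrelated) gas
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    return r_mid, counts / ideal_pairs               # g(r) tends to 1 at large r for disordered matter

# Illustrative use with random (fully disordered) coordinates:
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 20.0, size=(500, 3))
r, g = radial_distribution(pos, box_length=20.0, r_max=8.0)
```

For an amorphous solid the first few peaks of g(r) reflect the short- and medium-range order discussed earlier, while the function flattens toward 1 at larger distances where long-range order is absent.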
Uses and observations

Amorphous thin films
Amorphous phases are important constituents of thin films. Thin films are solid layers of a few nanometres to tens of micrometres thickness that are deposited onto a substrate. So-called structure zone models were developed to describe the microstructure of thin films as a function of the homologous temperature (Th), which is the ratio of deposition temperature to melting temperature. According to these models, a necessary condition for the occurrence of amorphous phases is that Th be smaller than 0.3; that is, the deposition temperature must be below 30% of the melting temperature.

Superconductivity
Regarding their applications, amorphous metallic layers played an important role in the discovery of superconductivity in amorphous metals made by Buckel and Hilsch. The superconductivity of amorphous metals, including amorphous metallic thin films, is now understood to be due to phonon-mediated Cooper pairing. The role of structural disorder can be rationalized based on the strong-coupling Eliashberg theory of superconductivity.

Thermal protection
Amorphous solids typically exhibit higher localization of heat carriers compared to crystalline solids, giving rise to low thermal conductivity. Products for thermal protection, such as thermal barrier coatings and insulation, rely on materials with ultralow thermal conductivity.

Technological uses
Today, optical coatings made from TiO2, SiO2, Ta2O5 etc. (and combinations of these) in most cases consist of amorphous phases of these compounds. Much research is carried out into thin amorphous films as a gas-separating membrane layer. The technologically most important thin amorphous film is probably represented by a few nm thin SiO2 layers serving as isolator above the conducting channel of a metal-oxide semiconductor field-effect transistor (MOSFET). Also, hydrogenated amorphous silicon (Si:H) is of technical significance for thin-film solar cells.

Pharmaceutical use
In the pharmaceutical industry, some amorphous drugs have been shown to offer higher bioavailability than their crystalline counterparts as a result of the higher solubility of the amorphous phase. However, certain compounds can undergo precipitation in their amorphous form in vivo and can then decrease mutual bioavailability if administered together. Studies of GDC-0810 ASDs show a strong interrelationship between microstructure, physical properties and dissolution performance.

In soils
Amorphous materials in soil strongly influence bulk density, aggregate stability, plasticity, and water holding capacity of soils. The low bulk density and high void ratios are mostly due to glass shards and other porous minerals not becoming compacted. Andisol soils contain the highest amounts of amorphous materials.

Phase
Amorphous phases were a phenomenon of particular interest for the study of thin-film growth. The growth of polycrystalline films is often preceded by an initial amorphous layer, the thickness of which may amount to only a few nm. The most investigated example is represented by the unoriented molecules of thin polycrystalline silicon films. Wedge-shaped polycrystals were identified by transmission electron microscopy to grow out of the amorphous phase only after the latter has exceeded a certain thickness, the precise value of which depends on deposition temperature, background pressure, and various other process parameters.
The phenomenon has been interpreted in the framework of Ostwald's rule of stages that predicts the formation of phases to proceed with increasing condensation time towards increasing stability.
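As a small illustration of the homologous-temperature criterion from the structure zone models mentioned above (amorphous growth expected only when Th = T_deposition / T_melting is below roughly 0.3), here is a minimal sketch. The threshold, the melting point, and the deposition temperature used are illustrative assumptions, and both temperatures must be absolute (kelvin) for the ratio to be meaningful.

```python
def homologous_temperature(t_deposition_k: float, t_melting_k: float) -> float:
    """Ratio of deposition temperature to melting temperature (both in kelvin)."""
    return t_deposition_k / t_melting_k

def amorphous_growth_expected(t_deposition_k: float, t_melting_k: float,
                              threshold: float = 0.3) -> bool:
    """Structure-zone-model rule of thumb: amorphous phases require Th below ~0.3."""
    return homologous_temperature(t_deposition_k, t_melting_k) < threshold

# Illustrative example: deposition near room temperature (~300 K) of a material
# melting at roughly 1985 K (about the melting point of SiO2).
print(homologous_temperature(300.0, 1985.0))        # ~0.15
print(amorphous_growth_expected(300.0, 1985.0))     # True
```

The same check with a deposition temperature above 30% of the melting point would return False, i.e. polycrystalline rather than amorphous growth would be expected under the models described above.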
Physical sciences
Basics_8
null
2955
https://en.wikipedia.org/wiki/Alkali
Alkali
In chemistry, an alkali (from the Arabic word al qalīy) is a basic, ionic salt of an alkali metal or an alkaline earth metal. An alkali can also be defined as a base that dissolves in water. A solution of a soluble base has a pH greater than 7.0. The adjective alkaline, and less often alkalescent, is commonly used in English as a synonym for basic, especially for bases soluble in water. This broad use of the term is likely to have come about because alkalis were the first bases known to obey the Arrhenius definition of a base, and they are still among the most common bases.

Etymology
The word alkali is derived from Arabic al qalīy (or alkali), meaning 'calcined ashes' (see calcination), referring to the original source of alkaline substances. A water-extract of burned plant ashes, called potash and composed mostly of potassium carbonate, was mildly basic. After heating this substance with calcium hydroxide (slaked lime), a far more strongly basic substance known as caustic potash (potassium hydroxide) was produced. Caustic potash was traditionally used in conjunction with animal fats to produce soft soaps, one of the caustic processes of rendering soap from fats (saponification), known since antiquity. Plant potash lent the name to the element potassium, which was first derived from caustic potash, and also gave potassium its chemical symbol K (from the German name Kalium), which ultimately derived from alkali.

Common properties of alkalis and bases
Alkalis are all Arrhenius bases, ones which form hydroxide ions (OH−) when dissolved in water. Common properties of alkaline aqueous solutions include:
Moderately concentrated solutions (over 10⁻³ M) have a pH of 10 or greater. This means that they will turn phenolphthalein from colorless to pink.
Concentrated solutions are caustic (causing chemical burns).
Alkaline solutions are slippery or soapy to the touch, due to the saponification of the fatty substances on the surface of the skin.
Alkalis are normally water-soluble, although some like barium carbonate are only soluble when reacting with an acidic aqueous solution.

Difference between alkali and base
The terms "base" and "alkali" are often used interchangeably, particularly outside the context of chemistry and chemical engineering. There are various, more specific definitions for the concept of an alkali. Alkalis are usually defined as a subset of the bases. One of two subsets is commonly chosen:
A basic salt of an alkali metal or alkaline earth metal (this includes Mg(OH)2 (magnesium hydroxide) but excludes NH3 (ammonia)).
Any base that is soluble in water and forms hydroxide ions, or the solution of a base in water. (This includes both Mg(OH)2 and NH3, which forms NH4OH.)
The second subset of bases is also called an "Arrhenius base".

Alkali salts
Alkali salts are soluble hydroxides of alkali metals and alkaline earth metals, of which common examples are:
Sodium hydroxide (NaOH) – often called "caustic soda"
Potassium hydroxide (KOH) – commonly called "caustic potash"
Lye – generic term for either of the two previous salts or their mixture
Calcium hydroxide (Ca(OH)2) – saturated solution known as "limewater"
Magnesium hydroxide (Mg(OH)2) – an atypical alkali since it has low solubility in water (although the dissolved portion is considered a strong base due to complete dissociation of its ions)

Alkaline soil
Soils with pH values that are higher than 7.3 are usually defined as being alkaline. These soils can occur naturally due to the presence of alkali salts.
Although many plants do prefer slightly basic soil (including vegetables like cabbage and fodder like buffalo grass), most plants prefer mildly acidic soil (with pHs between 6.0 and 6.8), and alkaline soils can cause problems.

Alkali lakes
In alkali lakes (also called soda lakes), evaporation concentrates the naturally occurring carbonate salts, giving rise to an alkalic and often saline lake. Examples of alkali lakes:
Alkali Lake, Lake County, Oregon
Baldwin Lake, San Bernardino County, California
Bear Lake on the Utah–Idaho border
Lake Magadi in Kenya
Lake Turkana in Kenya
Mono Lake, near Owens Valley in California
Redberry Lake, Saskatchewan
Summer Lake, Lake County, Oregon
Tramping Lake, Saskatchewan
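As a quick numerical check of the statement earlier in this article that moderately concentrated alkaline solutions (over 10⁻³ M) have a pH of 10 or greater, the sketch below computes pH from hydroxide-ion concentration via pOH. It assumes 25 °C (so Kw = 10⁻¹⁴ and pH + pOH = 14), complete dissociation, and ideal behaviour, so the result is only an order-of-magnitude illustration.

```python
import math

def ph_from_hydroxide(oh_molarity: float) -> float:
    """pH of an aqueous solution from its hydroxide-ion concentration,
    assuming 25 degrees C, complete dissociation, and ideal behaviour."""
    poh = -math.log10(oh_molarity)
    return 14.0 - poh          # pH + pOH = 14 at 25 degrees C

# 10^-3 M hydroxide (e.g. 0.001 M NaOH, fully dissociated):
print(ph_from_hydroxide(1e-3))   # 11.0
print(ph_from_hydroxide(1e-4))   # 10.0
```

A fully dissociated 10⁻³ M hydroxide solution comes out at pH 11, and even a tenfold more dilute one is still at pH 10, consistent with the property listed above.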
Physical sciences
Concepts
Chemistry
2965
https://en.wikipedia.org/wiki/Alcoholism
Alcoholism
Alcoholism is the continued drinking of alcohol despite it causing problems. Some definitions require evidence of dependence and withdrawal. Problematic use of alcohol has been mentioned in the earliest historical records. The World Health Organization (WHO) estimated there were 283 million people with alcohol use disorders worldwide. The term alcoholism was first coined in 1852, but alcoholism and alcoholic are sometimes considered stigmatizing and to discourage seeking treatment, so diagnostic terms such as alcohol use disorder or alcohol dependence are often used instead in a clinical context. Alcohol is addictive, and heavy long-term alcohol use results in many negative health and social consequences. It can damage all the organ systems, but especially affects the brain, heart, liver, pancreas and immune system. Heavy alcohol usage can result in trouble sleeping and severe cognitive issues like dementia, brain damage, or Wernicke–Korsakoff syndrome. Physical effects include irregular heartbeat, an impaired immune response, liver cirrhosis, increased cancer risk, and severe withdrawal symptoms if stopped suddenly. These health effects can reduce life expectancy by 10 years. Drinking during pregnancy may harm the child's health, and drunk driving increases the risk of traffic accidents. Alcoholism is also associated with increases in violent and non-violent crime. While alcoholism directly resulted in 139,000 deaths worldwide in 2013, an estimated 3.3 million deaths in 2012 may be attributable globally to alcohol. The development of alcoholism is attributed to both environment and genetics equally. The use of alcohol to self-medicate stress or anxiety can turn into alcoholism. Someone with a parent or sibling with an alcohol use disorder is three to four times more likely to develop an alcohol use disorder themselves, but only a minority of them do. Environmental factors include social, cultural and behavioral influences. High stress levels and anxiety, as well as alcohol's inexpensive cost and easy accessibility, increase the risk. People may continue to drink partly to prevent or improve symptoms of withdrawal. After a person stops drinking alcohol, they may experience a low level of withdrawal lasting for months. Medically, alcoholism is considered both a physical and mental illness. Questionnaires are usually used to detect possible alcoholism. Further information is then collected to confirm the diagnosis. Treatment of alcoholism may take several forms. Due to medical problems that can occur during withdrawal, alcohol cessation should be controlled carefully. One common method involves the use of benzodiazepine medications, such as diazepam. These can be taken while admitted to a health care institution or individually. The medications acamprosate or disulfiram may also be used to help prevent further drinking. Mental illness or other addictions may complicate treatment. Various individual or group therapy or support groups are used to attempt to keep a person from returning to alcoholism. Among them is the abstinence-based mutual aid fellowship Alcoholics Anonymous (AA). A 2020 scientific review found that clinical interventions encouraging increased participation in AA (AA/twelve-step facilitation, AA/TSF) resulted in higher abstinence rates than other clinical interventions, and most studies in the review found that AA/TSF led to lower health costs.
Many terms, some slurs and some informal, have been used to refer to people affected by alcoholism, such as tippler, drunkard, dipsomaniac and souse.

Signs and symptoms
The risk of alcohol dependence begins at low levels of drinking and increases directly with both the volume of alcohol consumed and a pattern of drinking larger amounts on an occasion, to the point of intoxication, which is sometimes called binge drinking. Binge drinking is the most common pattern of alcoholism. It has various definitions; one defines it as a pattern of drinking in which a male has five or more drinks on an occasion or a female has at least four drinks on an occasion.

Long-term misuse
Alcoholism is characterized by an increased tolerance to alcohol – which means that an individual can consume more alcohol – and physical dependence on alcohol, which makes it hard for an individual to control their consumption. The physical dependency caused by alcohol can lead to an affected individual having a very strong urge to drink alcohol. These characteristics play a role in decreasing an affected individual's ability to stop drinking. Alcoholism can have adverse effects on mental health, contributing to psychiatric disorders and increasing the risk of suicide. A depressed mood is a common symptom of heavy alcohol drinkers.

Warning signs
Warning signs of alcoholism include the consumption of increasing amounts of alcohol and frequent intoxication, preoccupation with drinking to the exclusion of other activities, promises to quit drinking and failure to keep those promises, the inability to remember what was said or done while drinking (colloquially known as "blackouts"), personality changes associated with drinking, denial or the making of excuses for drinking, the refusal to admit excessive drinking, dysfunction or other problems at work or school, the loss of interest in personal appearance or hygiene, marital and economic problems, and the complaint of poor health, with loss of appetite, respiratory infections, or increased anxiety.

Physical

Short-term effects
Drinking enough to cause a blood alcohol concentration (BAC) of 0.03–0.12% typically causes an overall improvement in mood and possible euphoria (intense feelings of well-being and happiness), increased self-confidence and sociability, decreased anxiety, a flushed, red appearance in the face and impaired judgment and fine muscle coordination. A BAC of 0.09% to 0.25% causes lethargy, sedation, balance problems and blurred vision. A BAC of 0.18% to 0.30% causes profound confusion, impaired speech (e.g. slurred speech), staggering, dizziness and vomiting. A BAC from 0.25% to 0.40% causes stupor, unconsciousness, anterograde amnesia, vomiting (death may occur due to inhalation of vomit while unconscious) and respiratory depression (potentially life-threatening). A BAC from 0.35% to 0.80% causes a coma (unconsciousness), life-threatening respiratory depression and possibly fatal alcohol poisoning. With all alcoholic beverages, drinking while driving, operating an aircraft or heavy machinery increases the risk of an accident; many countries have penalties for drunk driving.

Long-term effects
Having more than one drink a day for women or two drinks for men increases the risk of heart disease, high blood pressure, atrial fibrillation, and stroke. Risk is greater with binge drinking, which may also result in violence or accidents. About 3.3 million deaths (5.9% of all deaths) are believed to be due to alcohol each year.
Alcoholism reduces a person's life expectancy by around ten years, and alcohol use is the third leading cause of early death in the United States. Long-term alcohol misuse can cause a number of physical symptoms, including cirrhosis of the liver, pancreatitis, epilepsy, polyneuropathy, alcoholic dementia, heart disease, nutritional deficiencies, peptic ulcers and sexual dysfunction, and can eventually be fatal. Other physical effects include an increased risk of developing cardiovascular disease, malabsorption, alcoholic liver disease, and several cancers such as breast cancer and head and neck cancer. Damage to the central nervous system and peripheral nervous system can occur from sustained alcohol consumption. A wide range of immunologic defects can result, and there may be a generalized skeletal fragility, in addition to a recognized tendency to accidental injury, resulting in a propensity for bone fractures. Women develop long-term complications of alcohol dependence more rapidly than do men; women also have a higher mortality rate from alcoholism than men. Examples of long-term complications include brain, heart, and liver damage and an increased risk of breast cancer. Additionally, heavy drinking over time has been found to have a negative effect on reproductive functioning in women. This results in reproductive dysfunction such as anovulation, decreased ovarian mass, problems or irregularity of the menstrual cycle, and early menopause. Alcoholic ketoacidosis can occur in individuals who chronically misuse alcohol and have a recent history of binge drinking. The amount of alcohol that can be biologically processed and its effects differ between sexes. Equal dosages of alcohol consumed by men and women generally result in women having higher blood alcohol concentrations (BACs), since women generally have a lower weight and higher percentage of body fat and therefore a lower volume of distribution for alcohol than men.

Psychiatric
Long-term misuse of alcohol can cause a wide range of mental health problems. Severe cognitive problems are common; approximately 10% of all dementia cases are related to alcohol consumption, making it the second leading cause of dementia. Excessive alcohol use causes damage to brain function, and psychological health can be increasingly affected over time. Social skills are significantly impaired in people with alcoholism due to the neurotoxic effects of alcohol on the brain, especially the prefrontal cortex area of the brain. The social skills that are impaired by alcohol use disorder include difficulties perceiving facial emotions, prosody perception problems, and theory of mind deficits; the ability to understand humor is also impaired in people who misuse alcohol. Psychiatric disorders are common in people with alcohol use disorders, with as many as 25% also having severe psychiatric disturbances. The most prevalent psychiatric symptoms are anxiety and depression disorders. Psychiatric symptoms usually initially worsen during alcohol withdrawal, but typically improve or disappear with continued abstinence. Psychosis, confusion, and organic brain syndrome may be caused by alcohol misuse, which can lead to a misdiagnosis such as schizophrenia. Panic disorder can develop or worsen as a direct result of long-term alcohol misuse. The co-occurrence of major depressive disorder and alcoholism is well documented.
Among those with comorbid occurrences, a distinction is commonly made between depressive episodes that remit with alcohol abstinence ("substance-induced") and depressive episodes that are primary and do not remit with abstinence ("independent" episodes). Additional use of other drugs may increase the risk of depression. Psychiatric disorders differ depending on gender. Women who have alcohol-use disorders often have a co-occurring psychiatric diagnosis such as major depression, anxiety, panic disorder, bulimia, post-traumatic stress disorder (PTSD), or borderline personality disorder. Men with alcohol-use disorders more often have a co-occurring diagnosis of narcissistic or antisocial personality disorder, bipolar disorder, schizophrenia, impulse disorders or attention deficit/hyperactivity disorder (ADHD). Women with alcohol use disorder are more likely to experience physical or sexual assault, abuse, and domestic violence than women in the general population, which can lead to higher instances of psychiatric disorders and greater dependence on alcohol.

Social effects
Serious social problems arise from alcohol use disorder; these dilemmas are caused by the pathological changes in the brain and the intoxicating effects of alcohol. Alcohol misuse is associated with an increased risk of committing criminal offences, including child abuse, domestic violence, rape, burglary and assault. Alcoholism is associated with loss of employment, which can lead to financial problems. Drinking at inappropriate times and behavior caused by reduced judgment can lead to legal consequences, such as criminal charges for drunk driving or public disorder, or civil penalties for tortious behavior. An alcoholic's behavior and mental impairment while drunk can profoundly affect those around them and lead to isolation from family and friends. This isolation can lead to marital conflict and divorce, or contribute to domestic violence. Alcoholism can also lead to child neglect, with subsequent lasting damage to the emotional development of children of people with alcohol use disorders. For this reason, children of people with alcohol use disorders can develop a number of emotional problems. For example, they can become afraid of their parents because of their unstable moods and behaviors. They may develop shame over their perceived inability to free their parents from alcoholism and, as a result of this, may develop self-image problems, which can lead to depression.

Alcohol withdrawal
As with similar substances with a sedative-hypnotic mechanism, such as barbiturates and benzodiazepines, withdrawal from alcohol dependence can be fatal if it is not properly managed. Alcohol's primary effect is the increase in stimulation of the GABAA receptor, promoting central nervous system depression. With repeated heavy consumption of alcohol, these receptors are desensitized and reduced in number, resulting in tolerance and physical dependence. When alcohol consumption is stopped too abruptly, the person's nervous system experiences uncontrolled synapse firing. This can result in symptoms that include anxiety, life-threatening seizures, delirium tremens, hallucinations, shakes and possible heart failure. Other neurotransmitter systems are also involved, especially dopamine, NMDA and glutamate. Severe acute withdrawal symptoms such as delirium tremens and seizures rarely occur more than one week after cessation of alcohol. The acute withdrawal phase can be defined as lasting between one and three weeks.
In the period of 3–6 weeks following cessation, anxiety, depression, fatigue, and sleep disturbance are common. Similar post-acute withdrawal symptoms have also been observed in animal models of alcohol dependence and withdrawal. A kindling effect also occurs in people with alcohol use disorders whereby each subsequent withdrawal syndrome is more severe than the previous withdrawal episode; this is due to neuroadaptations which occur as a result of periods of abstinence followed by re-exposure to alcohol. Individuals who have had multiple withdrawal episodes are more likely to develop seizures and experience more severe anxiety during withdrawal from alcohol than alcohol-dependent individuals without a history of past alcohol withdrawal episodes. The kindling effect leads to persistent functional changes in brain neural circuits as well as to changes in gene expression. Kindling also results in the intensification of psychological symptoms of alcohol withdrawal. There are decision tools and questionnaires that help guide physicians in evaluating alcohol withdrawal. For example, the CIWA-Ar objectifies alcohol withdrawal symptoms in order to guide therapy decisions, allowing for an efficient interview while retaining clinical usefulness, validity, and reliability, and ensuring proper care for withdrawal patients, who can be in danger of death.

Causes
A complex combination of genetic and environmental factors influences the risk of the development of alcoholism. Genes that influence the metabolism of alcohol also influence the risk of alcoholism, as can a family history of alcoholism. There is compelling evidence that alcohol use at an early age may influence the expression of genes which increase the risk of alcohol dependence. These genetic and epigenetic results are regarded as consistent with large longitudinal population studies finding that the younger the age of drinking onset, the greater the prevalence of lifetime alcohol dependence. Severe childhood trauma is also associated with a general increase in the risk of drug dependency. Lack of peer and family support is associated with an increased risk of alcoholism developing. Genetics and adolescence are associated with an increased sensitivity to the neurotoxic effects of chronic alcohol misuse. Cortical degeneration due to the neurotoxic effects increases impulsive behaviour, which may contribute to the development, persistence and severity of alcohol use disorders. There is evidence that with abstinence, there is a reversal of at least some of the alcohol-induced central nervous system damage. The use of cannabis was associated with later problems with alcohol use. Alcohol use was associated with an increased probability of later use of tobacco and illegal drugs such as cannabis.

Availability
Alcohol is the most available, widely consumed, and widely misused recreational drug. Beer alone is the world's most widely consumed alcoholic beverage; it is the third-most popular drink overall, after water and tea. It is thought by some to be the oldest fermented beverage.

Gender difference
Based on combined data in the US from SAMHSA's 2004–2005 National Surveys on Drug Use & Health, the rate of past-year alcohol dependence or misuse among persons aged 12 or older varied by level of alcohol use: 44.7% of past month heavy drinkers, 18.5% of binge drinkers, 3.8% of past month non-binge drinkers, and 1.3% of those who did not drink alcohol in the past month met the criteria for alcohol dependence or misuse in the past year.
Males had higher rates than females for all measures of drinking in the past month: any alcohol use (57.5% vs. 45%), binge drinking (30.8% vs. 15.1%), and heavy alcohol use (10.5% vs. 3.3%), and males were twice as likely as females to have met the criteria for alcohol dependence or misuse in the past year (10.5% vs. 5.1%). However, because females generally weigh less than males, have more fat and less water in their bodies, and metabolize less alcohol in their esophagus and stomach, they are likely to develop higher blood alcohol levels per drink. Women may also be more vulnerable to liver disease.

Genetic variation
There are genetic variations that affect the risk for alcoholism. Some of these variations are more common in individuals with ancestry from certain areas; for example, Africa, East Asia, the Middle East and Europe. The variants with the strongest effect are in genes that encode the main enzymes of alcohol metabolism, ADH1B and ALDH2. These genetic factors influence the rate at which alcohol and its initial metabolic product, acetaldehyde, are metabolized. They are found at different frequencies in people from different parts of the world. The alcohol dehydrogenase allele ADH1B*2 causes a more rapid metabolism of alcohol to acetaldehyde, and reduces risk for alcoholism; it is most common in individuals from East Asia and the Middle East. The alcohol dehydrogenase allele ADH1B*3 also causes a more rapid metabolism of alcohol. The allele ADH1B*3 is only found in some individuals of African descent and certain Native American tribes. African Americans and Native Americans with this allele have a reduced risk of developing alcoholism. Native Americans, however, have a significantly higher rate of alcoholism than average; risk factors such as cultural environmental effects (e.g. trauma) have been proposed to explain the higher rates. The aldehyde dehydrogenase allele ALDH2*2 greatly reduces the rate at which acetaldehyde, the initial product of alcohol metabolism, is removed by conversion to acetate; it greatly reduces the risk for alcoholism. A genome-wide association study (GWAS) of more than 100,000 human individuals identified variants of the gene KLB, which encodes the transmembrane protein β-Klotho, as highly associated with alcohol consumption. The protein β-Klotho is an essential element in cell surface receptors for hormones involved in modulation of appetites for simple sugars and alcohol. Several large GWAS have found differences in the genetics of alcohol consumption and alcohol dependence, although the two are to some degree related.

DNA damage
Alcohol-induced DNA damage, when not properly repaired, may have a key role in the neurotoxicity induced by alcohol. Metabolic conversion of ethanol to acetaldehyde can occur in the brain, and the neurotoxic effects of ethanol appear to be associated with acetaldehyde-induced DNA damage, including DNA adducts and crosslinks. In addition to acetaldehyde, alcohol metabolism produces potentially genotoxic reactive oxygen species, which have been demonstrated to cause oxidative DNA damage.

Diagnosis

Definition
Because there is disagreement on the definition of the word alcoholism, it is not a recognized diagnosis, and the use of the term alcoholism is discouraged due to its heavily stigmatized connotations. It is classified as alcohol use disorder in the DSM-5 or alcohol dependence in the ICD-11. In 1979, the World Health Organization discouraged the use of alcoholism due to its inexact meaning, preferring alcohol dependence syndrome.
Misuse, problem use, abuse, and heavy use of alcohol refer to improper use of alcohol, which may cause physical, social, or moral harm to the drinker. The Dietary Guidelines for Americans, issued by the United States Department of Agriculture (USDA) in 2005, defines "moderate use" as no more than two alcoholic beverages a day for men and no more than one alcoholic beverage a day for women. The National Institute on Alcohol Abuse and Alcoholism (NIAAA) defines binge drinking as the amount of alcohol leading to a blood alcohol content (BAC) of 0.08%, which, for most adults, would be reached by consuming five drinks for men or four for women over a two-hour period. According to the NIAAA, men may be at risk for alcohol-related problems if their alcohol consumption exceeds 14 standard drinks per week or 4 drinks per day, and women may be at risk if they have more than 7 standard drinks per week or 3 drinks per day. It defines a standard drink as one 12-ounce bottle of beer, one 5-ounce glass of wine, or 1.5 ounces of distilled spirits. Despite this risk, a 2014 report based on the National Survey on Drug Use and Health found that only 10% of either "heavy drinkers" or "binge drinkers" defined according to the above criteria also met the criteria for alcohol dependence, while only 1.3% of non-binge drinkers met the criteria. An inference drawn from this study is that evidence-based policy strategies and clinical preventive services may effectively reduce binge drinking without requiring addiction treatment in most cases.

Alcoholism
The term alcoholism is commonly used amongst laypeople, but the word is poorly defined. Despite the imprecision inherent in the term, there have been attempts to define how the word alcoholism should be interpreted when encountered. In 1992, it was defined by the National Council on Alcoholism and Drug Dependence (NCADD) and ASAM as "a primary, chronic disease characterized by impaired control over drinking, preoccupation with the drug alcohol, use of alcohol despite adverse consequences, and distortions in thinking." MeSH has had an entry for alcoholism since 1999, and references the 1992 definition. The WHO calls alcoholism "a term of long-standing use and variable meaning", and use of the term was disfavored by a 1979 WHO expert committee. In professional and research contexts, the term alcoholism is not currently favored; rather, alcohol abuse, alcohol dependence, or alcohol use disorder are used. Talbot (1989) observes that alcoholism in the classical disease model follows a progressive course: if people continue to drink, their condition will worsen. This will lead to harmful consequences in their lives, physically, mentally, emotionally, and socially. Johnson (1980) proposed that the emotional progression of an addicted person's response to alcohol has four phases. The first two are considered "normal" drinking and the last two are viewed as "typical" alcoholic drinking. Johnson's four phases consist of:
Learning the mood swing. People are introduced to alcohol (in some cultures this can happen at a relatively young age), and they enjoy the happy feeling it produces. At this stage, there is no emotional cost.
Seeking the mood swing. People will drink to regain that happy feeling in phase 1; the drinking will increase as more alcohol is required to achieve the same effect. Again at this stage, there are no significant consequences.
At the third stage there are physical and social consequences such as hangovers, family problems, and work problems.
People will continue to drink excessively, disregarding the problems. The fourth stage can be detrimental, with a risk of premature death. People in this phase now drink to feel normal and block out the feelings of overwhelming guilt, remorse, anxiety, and shame they experience when sober.

DSM and ICD
In the United States, the Diagnostic and Statistical Manual of Mental Disorders (DSM) is the most common diagnostic guide for substance use disorders, whereas most countries use the International Classification of Diseases (ICD) for diagnostic (and other) purposes. The two manuals use similar but not identical nomenclature to classify alcohol problems.

Social barriers
Attitudes and social stereotypes can create barriers to the detection and treatment of alcohol use disorder. This is more of a barrier for women than men. Fear of stigmatization may lead women to deny that they have a medical condition, to hide their drinking, and to drink alone. This pattern, in turn, leads family, physicians, and others to be less likely to suspect that a woman they know has alcohol use disorder. In contrast, reduced fear of stigma may lead men to admit that they have a medical condition, to display their drinking publicly, and to drink in groups. This pattern, in turn, leads family, physicians, and others to be more likely to suspect that a man they know is someone with an alcohol use disorder.

Screening
Screening is recommended among those over the age of 18. Several tools may be used to detect a loss of control of alcohol use. These tools are mostly self-reports in questionnaire form. Another common theme is a score or tally that sums up the general severity of alcohol use. The CAGE questionnaire, named for its four questions, is one such example that may be used to screen patients quickly in a doctor's office. The CAGE questionnaire has demonstrated high effectiveness in detecting alcohol-related problems; however, it has limitations in people with less severe alcohol-related problems, white women and college students. Other tests are sometimes used for the detection of alcohol dependence, such as the Alcohol Dependence Data Questionnaire, which is a more sensitive diagnostic test than the CAGE questionnaire. It helps distinguish a diagnosis of alcohol dependence from one of heavy alcohol use. The Michigan Alcohol Screening Test (MAST) is a screening tool for alcoholism widely used by courts to determine the appropriate sentencing for people convicted of alcohol-related offenses, driving under the influence being the most common. The Alcohol Use Disorders Identification Test (AUDIT), a screening questionnaire developed by the World Health Organization, is unique in that it has been validated in six countries and is used internationally. Like the CAGE questionnaire, it uses a simple set of questions – a high score earning a deeper investigation. The Paddington Alcohol Test (PAT) was designed to screen for alcohol-related problems amongst those attending Accident and Emergency departments. It concords well with the AUDIT questionnaire but is administered in a fifth of the time.

Urine and blood tests
There are reliable tests for the actual use of alcohol, one common test being that of blood alcohol content (BAC).
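The following sketch relates the NIAAA standard-drink sizes quoted earlier (12 oz of beer, 5 oz of wine, or 1.5 oz of spirits, each containing roughly 14 g of pure ethanol) to an estimated peak BAC using the widely cited Widmark approximation. The Widmark formula, the 14 g-per-drink figure, and the distribution and elimination constants are textbook approximations rather than anything stated in this article, so the numbers are purely illustrative and not a guide to drinking or driving decisions.

```python
GRAMS_ETHANOL_PER_STANDARD_DRINK = 14.0      # common US definition (assumed)
WIDMARK_R = {"male": 0.68, "female": 0.55}   # typical body-water distribution ratios (assumed)
ELIMINATION_RATE_PCT_PER_HOUR = 0.015        # typical average elimination rate (assumed)

def estimated_bac(standard_drinks: float, body_weight_kg: float,
                  sex: str, hours_since_first_drink: float) -> float:
    """Rough Widmark-style estimate of blood alcohol concentration in percent (g/dL)."""
    alcohol_g = standard_drinks * GRAMS_ETHANOL_PER_STANDARD_DRINK
    peak = alcohol_g / (body_weight_kg * 1000.0 * WIDMARK_R[sex]) * 100.0
    return max(0.0, peak - ELIMINATION_RATE_PCT_PER_HOUR * hours_since_first_drink)

# Five drinks over two hours for an 80 kg man, the binge-drinking scenario described above:
print(round(estimated_bac(5, 80.0, "male", 2.0), 3))   # ~0.1, i.e. above the 0.08% threshold
```

Under these assumptions, the five-drinks-in-two-hours scenario lands above the 0.08% binge-drinking threshold cited earlier, which is consistent with the NIAAA definition quoted in this article.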
These tests do not differentiate people with alcohol use disorders from people without; however, long-term heavy drinking does have a few recognizable effects on the body, including:
Macrocytosis (enlarged MCV)
Elevated GGT
Moderate elevation of AST and ALT and an AST:ALT ratio of 2:1
High carbohydrate-deficient transferrin (CDT)
With regard to alcoholism, BAC is useful to judge alcohol tolerance, which in turn is a sign of alcoholism. Electrolyte and acid-base abnormalities including hypokalemia, hypomagnesemia, hyponatremia, hyperuricemia, metabolic acidosis, and respiratory alkalosis are common in people with alcohol use disorders. However, none of these blood tests for biological markers is as sensitive as screening questionnaires.

Prevention
The World Health Organization, the European Union and other regional bodies, national governments and parliaments have formed alcohol policies in order to reduce the harm of alcoholism. Increasing the age at which alcohol can be purchased, and banning or restricting alcohol beverage advertising, are common methods to reduce alcohol use among adolescents and young adults in particular (see alcoholism in adolescence). Another common method of alcoholism prevention is taxation of alcohol products – increasing the price of alcohol by 10% is linked with a reduction in consumption of up to 10%. Credible, evidence-based educational campaigns in the mass media about the consequences of alcohol misuse have been recommended. Guidelines for parents to prevent alcohol misuse amongst adolescents, and for helping young people with mental health problems, have also been suggested. Because alcohol is often used to temporarily self-medicate conditions like anxiety, prevention of alcoholism may be attempted by reducing the severity or prevalence of stress and anxiety in individuals.

Management
Treatments are varied because there are multiple perspectives on alcoholism. Those who approach alcoholism as a medical condition or disease recommend differing treatments from, for instance, those who approach the condition as one of social choice. Most treatments focus on helping people discontinue their alcohol intake, followed up with life training and/or social support to help them resist a return to alcohol use. Since alcoholism involves multiple factors which encourage a person to continue drinking, they must all be addressed to successfully prevent a relapse. An example of this kind of treatment is detoxification followed by a combination of supportive therapy, attendance at self-help groups, and ongoing development of coping mechanisms. Much of the treatment community for alcoholism supports an abstinence-based zero tolerance approach popularized by the 12 step program of Alcoholics Anonymous; however, some prefer a harm-reduction approach.

Cessation of alcohol intake
Medical treatment for alcohol detoxification usually involves administration of a benzodiazepine, in order to ameliorate alcohol withdrawal syndrome's adverse impact. The addition of phenobarbital improves outcomes if benzodiazepine administration lacks the usual efficacy, and phenobarbital alone might be an effective treatment. Propofol also might enhance treatment for individuals showing limited therapeutic response to a benzodiazepine. Individuals who are only at risk of mild to moderate withdrawal symptoms can be treated as outpatients. Individuals at risk of a severe withdrawal syndrome as well as those who have significant or acute comorbid conditions can be treated as inpatients.
Direct treatment can be followed by a treatment program for alcohol dependence or alcohol use disorder to attempt to reduce the risk of relapse. Experiences following alcohol withdrawal, such as depressed mood and anxiety, can take weeks or months to abate, while other symptoms persist longer due to persisting neuroadaptations.

Psychological
Various forms of group therapy or psychotherapy are sometimes used to encourage and support abstinence from alcohol, or to reduce alcohol consumption to levels that are not associated with adverse outcomes. Mutual-aid group counseling is an approach used to facilitate relapse prevention. Alcoholics Anonymous was one of the earliest organizations formed to provide mutual peer support and non-professional counseling; however, the effectiveness of Alcoholics Anonymous is disputed. A 2020 Cochrane review concluded that Twelve-Step Facilitation (TSF) probably achieves outcomes such as fewer drinks per drinking day; however, this conclusion comes from low- to moderate-certainty evidence and "so should be regarded with caution". Others include LifeRing Secular Recovery, SMART Recovery, Women for Sobriety, and Secular Organizations for Sobriety. Manualized Twelve Step Facilitation (TSF) interventions (i.e. therapy which encourages active, long-term Alcoholics Anonymous participation) for Alcohol Use Disorder lead to higher abstinence rates, compared to other clinical interventions and to wait-list control groups.

Moderate drinking
Moderate drinking amongst people with alcohol dependence, often termed 'controlled drinking', has been subject to significant controversy. Indeed, much of the skepticism toward the viability of moderate drinking goals stems from historical ideas about 'alcoholism', now replaced with 'alcohol use disorder' or alcohol dependence in most scientific contexts. A 2021 meta-analysis and systematic review of controlled drinking covering 22 studies concluded controlled drinking was a 'non-inferior' outcome to abstinence for many drinkers. Rationing and moderation programs such as Moderation Management and DrinkWise do not mandate complete abstinence. While most people with alcohol use disorders are unable to limit their drinking in this way, some return to moderate drinking. A 2002 US study by the National Institute on Alcohol Abuse and Alcoholism (NIAAA) showed that 17.7% of individuals diagnosed as alcohol dependent more than one year prior returned to low-risk drinking. This group, however, showed fewer initial symptoms of dependency. A follow-up study, using the same subjects that were judged to be in remission in 2001–2002, examined the rates of return to problem drinking in 2004–2005. The study found abstinence from alcohol was the most stable form of remission for recovering alcoholics. There was also a 1973 study showing chronic alcoholics drinking moderately again, but a 1982 follow-up showed that 95% of subjects were not able to maintain drinking in moderation over the long term. Another study was a long-term (60 year) follow-up of two groups of alcoholic men which concluded that "return to controlled drinking rarely persisted for much more than a decade without relapse or evolution into abstinence." Internet-based measures appear to be useful at least in the short term.

Medications
In the United States there are four approved medications for alcoholism: acamprosate, two formulations of naltrexone, and disulfiram.
Acamprosate may stabilise the brain chemistry that is altered due to alcohol dependence via antagonising the actions of glutamate, a neurotransmitter which is hyperactive in the post-withdrawal phase. By reducing excessive NMDA activity which occurs at the onset of alcohol withdrawal, acamprosate can reduce or prevent alcohol withdrawal related neurotoxicity. Acamprosate reduces the risk of relapse amongst alcohol-dependent persons. Acamprosate is not recommended in those with advanced, decompensated liver cirrhosis due to the risk of liver toxicity. Naltrexone is a competitive antagonist for opioid receptors, effectively blocking the effects of endorphins and opioids. Naltrexone may be given as a daily oral tablet or as a monthly intramuscular injection. Naltrexone is used to decrease cravings for alcohol and encourage abstinence. Alcohol causes the body to release endorphins, which in turn release dopamine and activate the reward pathways; hence in the body Naltrexone reduces the pleasurable effects from consuming alcohol. Evidence supports a reduced risk of relapse among alcohol-dependent persons and a decrease in excessive drinking. Naltrexone should not be used in those with advanced liver disease due to the risk of liver toxicity. Nalmefene also appears effective and works in a similar manner. Disulfiram prevents the elimination of acetaldehyde by inhibiting the enzyme acetaldehyde dehydrogenase. Acetaldehyde is a chemical the body produces when breaking down ethanol. Acetaldehyde itself is the cause of many hangover symptoms from alcohol use. The overall effect is acute discomfort when alcohol is ingested characterized by flushing, nausea, a rapid heart rate and low blood pressure. Disulfiram should not be used in those with advanced liver disease due to the risk of life-threatening liver toxicity. Several other drugs are also used and many are under investigation. Benzodiazepines are a first line medication in the management of acute alcohol withdrawal, however their use outside of the acute withdrawal period is not recommended. Benzodiazepines with a shorter half life, such as lorazepam or oxazepam are preferred in the treatment of alcohol withdrawal as their shorter half lives and less active metabolites have a lower risk of confusion in those with liver disease. If used long-term, they can cause a worse outcome in alcoholism. Alcoholics on chronic benzodiazepines have a lower rate of achieving abstinence from alcohol than those not taking benzodiazepines. Initiating prescriptions of benzodiazepines or sedative-hypnotics in individuals in recovery has a high rate of relapse with one author reporting more than a quarter of people relapsed after being prescribed sedative-hypnotics. Those who are long-term users of benzodiazepines should not be withdrawn rapidly, as severe anxiety and panic may develop, which are known risk factors for alcohol use disorder relapse. Taper regimes of 6–12 months have been found to be the most successful, with reduced intensity of withdrawal. Calcium carbimide works in the same way as disulfiram; it has an advantage in that the occasional adverse effects of disulfiram, hepatotoxicity and drowsiness, do not occur with calcium carbimide. Ondansetron and topiramate are supported by tentative evidence in people with certain genetic patterns. Evidence for ondansetron is stronger in people who have recently started to abuse alcohol. Topiramate is a derivative of the naturally occurring sugar monosaccharide D-fructose. 
Review articles characterize topiramate as showing "encouraging", "promising", "efficacious", and "insufficient" results in the treatment of alcohol use disorders. Evidence does not support the use of selective serotonin reuptake inhibitors (SSRIs), tricyclic antidepressants (TCAs), antipsychotics, or gabapentin. Research Topiramate, a derivative of the naturally occurring sugar monosaccharide D-fructose, has been found effective in helping alcoholics quit or cut back on the amount they drink. Evidence suggests that topiramate antagonizes excitatory glutamate receptors, inhibits dopamine release, and enhances inhibitory gamma-aminobutyric acid function. A 2008 review of the effectiveness of topiramate concluded that the results of published trials are promising, however as of 2008, data was insufficient to support using topiramate in conjunction with brief weekly compliance counseling as a first-line agent for alcohol dependence. A 2010 review found that topiramate may be superior to existing alcohol pharmacotherapeutic options. Topiramate effectively reduces craving and alcohol withdrawal severity as well as improving quality-of-life-ratings. Baclofen, a GABAB receptor agonist, is under study for the treatment of alcoholism. According to a 2017 Cochrane Systematic Review, there is insufficient evidence to determine the effectiveness or safety for the use of baclofen for withdrawal symptoms in alcoholism. Psilocybin-assisted psychotherapy is under study for the treatment of patients with alcohol use disorder. Dual addictions and dependencies Alcoholics may also require treatment for other psychotropic drug addictions and drug dependencies. The most common dual dependence syndrome with alcohol dependence is benzodiazepine dependence, with studies showing 10–20% of alcohol-dependent individuals had problems of dependence and/or misuse problems of benzodiazepine drugs such as diazepam or clonazepam. These drugs are, like alcohol, depressants. Benzodiazepines may be used legally, if they are prescribed by doctors for anxiety problems or other mood disorders, or they may be purchased as illegal drugs. Benzodiazepine use increases cravings for alcohol and the volume of alcohol consumed by problem drinkers. Benzodiazepine dependency requires careful reduction in dosage to avoid benzodiazepine withdrawal syndrome and other health consequences. Dependence on other sedative-hypnotics such as zolpidem and zopiclone as well as opiates and illegal drugs is common in alcoholics. Alcohol itself is a sedative-hypnotic and is cross-tolerant with other sedative-hypnotics such as barbiturates, benzodiazepines and nonbenzodiazepines. Dependence upon and withdrawal from sedative-hypnotics can be medically severe and, as with alcohol withdrawal, there is a risk of psychosis or seizures if not properly managed. Epidemiology The World Health Organization estimates that there are about 380 million people with alcoholism worldwide (5.1% of the population over 15 years of age), with it being most common among males and young adults. Geographically, it is least common in Africa (1.1% of the population) and has the highest rates in Eastern Europe (11%). in the United States, about 17 million (7%) of adults and 0.7 million (2.8%) of those age 12 to 17 years of age are affected. About 12% of American adults have had an alcohol dependence problem at some time in their life. In the United States and Western Europe, 10–20% of men and 5–10% of women at some point in their lives will meet criteria for alcoholism. 
In England, the number of "dependent drinkers" was calculated as over 600,000 in 2019. Estonia had the highest death rate from alcohol in Europe in 2015 at 8.8 per 100,000 population. In the United States, 30% of people admitted to hospital have a problem related to alcohol. Within the medical and scientific communities, there is a broad consensus regarding alcoholism as a disease state. For example, the American Medical Association considers alcohol a drug and states that "drug addiction is a chronic, relapsing brain disease characterized by compulsive drug seeking and use despite often devastating consequences. It results from a complex interplay of biological vulnerability, environmental exposure, and developmental factors (e.g., stage of brain maturity)." Alcoholism has a higher prevalence among men, though, in recent decades, the proportion of female alcoholics has increased. Current evidence indicates that in both men and women, alcoholism is 50–60% genetically determined, leaving 40–50% for environmental influences. Most alcoholics develop alcoholism during adolescence or young adulthood.
Prognosis
Alcoholism often reduces a person's life expectancy by around ten years. The most common cause of death in alcoholics is from cardiovascular complications. There is a high rate of suicide in chronic alcoholics, which increases the longer a person drinks. Approximately 3–15% of alcoholics die by suicide, and research has found that over 50% of all suicides are associated with alcohol or drug dependence. This is believed to be due to alcohol causing physiological distortion of brain chemistry, as well as social isolation. Suicide is also common in adolescent alcohol abusers; research in 2000 found that 25% of suicides in adolescents were related to alcohol abuse. Among those with alcohol dependence, after one year some met the criteria for low-risk drinking, even though only 26% of the group received any treatment. The breakdown was as follows: 25% were found to be still dependent, 27% were in partial remission (some symptoms persist), 12% were asymptomatic drinkers (whose consumption increases the chance of relapse), and 36% were fully recovered – made up of 18% low-risk drinkers plus 18% abstainers. In contrast, however, the results of a long-term (60-year) follow-up of two groups of alcoholic men indicated that "return to controlled drinking rarely persisted for much more than a decade without relapse or evolution into abstinence....return-to-controlled drinking, as reported in short-term studies, is often a mirage."
History
The name dipsomania was coined by German physician C. W. Hufeland in 1819, before it was superseded by alcoholism; that term now has a more specific meaning. The term alcoholism was first used by Swedish physician Magnus Huss in an 1852 publication to describe the systemic adverse effects of alcohol. Alcohol has a long history of use and misuse throughout recorded history. Biblical, Egyptian and Babylonian sources record the history of abuse of and dependence on alcohol. In some ancient cultures alcohol was worshiped and in others, its misuse was condemned. Excessive alcohol misuse and drunkenness were recognized as causing social problems even thousands of years ago. However, the definition of habitual drunkenness, as the condition was then known, and its adverse consequences were not well established medically until the 18th century. 
In 1647 a Greek monk named Agapios was the first to document that chronic alcohol misuse was associated with toxicity to the nervous system and body which resulted in a range of medical disorders such as seizures, paralysis, and internal bleeding. In the 1910s and 1920s, the effects of alcohol misuse and chronic drunkenness boosted membership of the temperance movement and led to the prohibition of alcohol in many countries in North America and the Nordic countries, nationwide bans on the production, importation, transportation, and sale of alcoholic beverages that generally remained in place until the late 1920s or early 1930s; these policies resulted in the decline of death rates from cirrhosis and alcoholism. In 2005, alcohol dependence and misuse was estimated to cost the US economy approximately 220 billion dollars per year, more than cancer and obesity. Society and culture The various health problems associated with long-term alcohol consumption are generally perceived as detrimental to society; for example, money due to lost labor-hours, medical costs due to injuries due to drunkenness and organ damage from long-term use, and secondary treatment costs, such as the costs of rehabilitation facilities and detoxification centers. Alcohol use is a major contributing factor for head injuries, motor vehicle injuries (27%), interpersonal violence (18%), suicides (18%), and epilepsy (13%). Beyond the financial costs that alcohol consumption imposes, there are also significant social costs to both the alcoholic and their family and friends. For instance, alcohol consumption by a pregnant woman can lead to an incurable and damaging condition known as fetal alcohol syndrome, which often results in cognitive deficits, mental health problems, an inability to live independently and an increased risk of criminal behaviour, all of which can cause emotional stress for parents and caregivers. Estimates of the economic costs of alcohol misuse, collected by the World Health Organization, vary from 1–6% of a country's GDP. One Australian estimate pegged alcohol's social costs at 24% of all drug misuse costs; a similar Canadian study concluded alcohol's share was 41%. One study quantified the cost to the UK of all forms of alcohol misuse in 2001 as £18.5–20 billion. All economic costs in the United States in 2006 have been estimated at $223.5 billion. The idea of hitting rock bottom refers to an experience of stress that can be attributed to alcohol misuse. There is no single definition for this idea, and people may identify their own lowest points in terms of lost jobs, lost relationships, health problems, legal problems, or other consequences of alcohol misuse. The concept is promoted by 12-step recovery groups and researchers using the transtheoretical model of motivation for behavior change. The first use of this slang phrase in the formal medical literature appeared in a 1965 review in the British Medical Journal, which said that some men refused treatment until they "hit rock bottom", but that treatment was generally more successful for "the alcohol addict who has friends and family to support him" than for impoverished and homeless addicts. Stereotypes of alcoholics are often found in fiction and popular culture. The "town drunk" is a stock character in Western popular culture. Stereotypes of drunkenness may be based on racism or xenophobia, as in the fictional depiction of the Irish as heavy drinkers. 
Studies by social psychologists Stivers and Greeley attempt to document the perceived prevalence of high alcohol consumption amongst the Irish in America. Alcohol consumption is relatively similar between many European cultures, the United States, and Australia. In Asian countries that have a high gross domestic product, there is heightened drinking compared to other Asian countries, but it is nowhere near as high as in other countries like the United States. The inverse is also seen: countries with a very low gross domestic product can show high alcohol consumption. In a study of Korean immigrants in Canada, participants reported that alcohol was typically an integral part of their meals, and that mealtimes were the only occasions on which drinking alone was considered appropriate. They also generally believed that alcohol is necessary at any social event, as it helps conversations start. Peyote, a psychoactive agent, has even shown promise in treating alcoholism. Alcohol had replaced peyote as Native Americans' psychoactive agent of choice in rituals when peyote was outlawed.
Biology and health sciences
Drugs and medication
null
2974
https://en.wikipedia.org/wiki/Abelian%20group
Abelian group
In mathematics, an abelian group, also called a commutative group, is a group in which the result of applying the group operation to two group elements does not depend on the order in which they are written. That is, the group operation is commutative. With addition as an operation, the integers and the real numbers form abelian groups, and the concept of an abelian group may be viewed as a generalization of these examples. Abelian groups are named after Niels Henrik Abel. The concept of an abelian group underlies many fundamental algebraic structures, such as fields, rings, vector spaces, and algebras. The theory of abelian groups is generally simpler than that of their non-abelian counterparts, and finite abelian groups are very well understood and fully classified.
Definition
An abelian group is a set A, together with an operation ・ that combines any two elements a and b of A to form another element of A, denoted a ・ b. The symbol ・ is a general placeholder for a concretely given operation. To qualify as an abelian group, the set and operation, (A, ・), must satisfy four requirements known as the abelian group axioms (some authors include in the axioms some properties that belong to the definition of an operation: namely that the operation is defined for any ordered pair of elements of A, that the result is well-defined, and that the result belongs to A):
Associativity For all a, b, and c in A, the equation (a ・ b) ・ c = a ・ (b ・ c) holds.
Identity element There exists an element e in A, such that for all elements a in A, the equation e ・ a = a ・ e = a holds.
Inverse element For each a in A there exists an element b in A such that a ・ b = b ・ a = e, where e is the identity element.
Commutativity For all a, b in A, a ・ b = b ・ a.
A group in which the group operation is not commutative is called a "non-abelian group" or "non-commutative group".
Facts
Notation
There are two main notational conventions for abelian groups – additive and multiplicative. Generally, the multiplicative notation is the usual notation for groups, while the additive notation is the usual notation for modules and rings. The additive notation may also be used to emphasize that a particular group is abelian, whenever both abelian and non-abelian groups are considered; some notable exceptions are near-rings and partially ordered groups, where an operation is written additively even when non-abelian.
Multiplication table
To verify that a finite group is abelian, a table (matrix) – known as a Cayley table – can be constructed in a similar fashion to a multiplication table. If the group is G = {g_1 = e, g_2, ..., g_n} under the operation ・, the (i, j) entry of this table contains the product g_i ・ g_j. The group is abelian if and only if this table is symmetric about the main diagonal. This is true since the group is abelian iff g_i ・ g_j = g_j ・ g_i for all i, j, which is iff the (i, j) entry of the table equals the (j, i) entry for all i, j, i.e. the table is symmetric about the main diagonal.
Examples
For the integers and the operation addition +, denoted (Z, +), the operation + combines any two integers to form a third integer, addition is associative, zero is the additive identity, every integer n has an additive inverse, −n, and the addition operation is commutative since n + m = m + n for any two integers m and n.
Every cyclic group G is abelian, because if x, y are in G, then xy = a^m a^n = a^(m+n) = a^(n+m) = a^n a^m = yx. Thus the integers, Z, form an abelian group under addition, as do the integers modulo n, Z/nZ.
Every ring is an abelian group with respect to its addition operation. In a commutative ring the invertible elements, or units, form an abelian multiplicative group. 
In particular, the real numbers are an abelian group under addition, and the nonzero real numbers are an abelian group under multiplication.
Every subgroup of an abelian group is normal, so each subgroup gives rise to a quotient group. Subgroups, quotients, and direct sums of abelian groups are again abelian. The finite simple abelian groups are exactly the cyclic groups of prime order.
The concepts of abelian group and Z-module agree. More specifically, every Z-module is an abelian group with its operation of addition, and every abelian group is a module over the ring of integers Z in a unique way.
In general, matrices, even invertible matrices, do not form an abelian group under multiplication because matrix multiplication is generally not commutative. However, some groups of matrices are abelian groups under matrix multiplication – one example is the group of 2 × 2 rotation matrices.
Historical remarks
Camille Jordan named abelian groups after Norwegian mathematician Niels Henrik Abel, as Abel had found that the commutativity of the group of a polynomial implies that the roots of the polynomial can be calculated by using radicals.
Properties
If n is a natural number and x is an element of an abelian group G written additively, then nx can be defined as x + x + ... + x (n summands) and (−n)x = −(nx). In this way, G becomes a module over the ring Z of integers. In fact, the modules over Z can be identified with the abelian groups.
Theorems about abelian groups (i.e. modules over the principal ideal domain Z) can often be generalized to theorems about modules over an arbitrary principal ideal domain. A typical example is the classification of finitely generated abelian groups which is a specialization of the structure theorem for finitely generated modules over a principal ideal domain. In the case of finitely generated abelian groups, this theorem guarantees that an abelian group splits as a direct sum of a torsion group and a free abelian group. The former may be written as a direct sum of finitely many groups of the form Z/(p^k)Z for p prime, and the latter is a direct sum of finitely many copies of Z.
If f, g : G → H are two group homomorphisms between abelian groups, then their sum f + g, defined by (f + g)(x) = f(x) + g(x), is again a homomorphism. (This is not true if H is a non-abelian group.) The set Hom(G, H) of all group homomorphisms from G to H is therefore an abelian group in its own right.
Somewhat akin to the dimension of vector spaces, every abelian group has a rank. It is defined as the maximal cardinality of a set of linearly independent (over the integers) elements of the group. Finite abelian groups and torsion groups have rank zero, and every abelian group of rank zero is a torsion group. The integers and the rational numbers have rank one, as does every nonzero additive subgroup of the rationals. On the other hand, the multiplicative group of the nonzero rationals has an infinite rank, as it is a free abelian group with the set of the prime numbers as a basis (this results from the fundamental theorem of arithmetic).
The center Z(G) of a group G is the set of elements that commute with every element of G. A group G is abelian if and only if it is equal to its center Z(G). The center of a group G is always a characteristic abelian subgroup of G. If the quotient group G/Z(G) of a group by its center is cyclic then G is abelian.
Finite abelian groups
Cyclic groups of integers modulo n, Z/nZ, were among the first examples of groups. 
It turns out that an arbitrary finite abelian group is isomorphic to a direct sum of finite cyclic groups of prime power order, and these orders are uniquely determined, forming a complete system of invariants. The automorphism group of a finite abelian group can be described directly in terms of these invariants. The theory had been first developed in the 1879 paper of Georg Frobenius and Ludwig Stickelberger and later was both simplified and generalized to finitely generated modules over a principal ideal domain, forming an important chapter of linear algebra.
Any group of prime order is isomorphic to a cyclic group and therefore abelian. Any group whose order is a square of a prime number is also abelian. In fact, for every prime number p there are (up to isomorphism) exactly two groups of order p^2, namely Z_(p^2) and Z_p ⊕ Z_p.
Classification
The fundamental theorem of finite abelian groups states that every finite abelian group G can be expressed as the direct sum of cyclic subgroups of prime-power order; it is also known as the basis theorem for finite abelian groups. Moreover, automorphism groups of cyclic groups are examples of abelian groups. This is generalized by the fundamental theorem of finitely generated abelian groups, with finite groups being the special case when G has zero rank; this in turn admits numerous further generalizations. The classification was proven by Leopold Kronecker in 1870, though it was not stated in modern group-theoretic terms until later, and was preceded by a similar classification of quadratic forms by Carl Friedrich Gauss in 1801; see history for details.
The cyclic group Z_(mn) of order mn is isomorphic to the direct sum of Z_m and Z_n if and only if m and n are coprime. It follows that any finite abelian group G is isomorphic to a direct sum of the form
Z_(k_1) ⊕ Z_(k_2) ⊕ ... ⊕ Z_(k_u)
in either of the following canonical ways: the numbers k_1, ..., k_u are powers of (not necessarily distinct) primes, or k_1 divides k_2, which divides k_3, and so on up to k_u.
For example, Z_15 can be expressed as the direct sum of two cyclic subgroups of order 3 and 5: Z_15 ≅ {0, 5, 10} ⊕ {0, 3, 6, 9, 12}. The same can be said for any abelian group of order 15, leading to the remarkable conclusion that all abelian groups of order 15 are isomorphic. For another example, every abelian group of order 8 is isomorphic to either Z_8 (the integers 0 to 7 under addition modulo 8), Z_4 ⊕ Z_2 (the odd integers 1 to 15 under multiplication modulo 16), or Z_2 ⊕ Z_2 ⊕ Z_2.
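The Cayley-table criterion and the classification statements above can be checked directly by computation. What follows is a minimal sketch in plain Python (the helper names cayley_table and is_abelian, and the choice of examples, are illustrative only, not taken from any library or from the text above): it builds the Cayley table of the additive group Z_n, tests it for symmetry about the main diagonal, and then verifies the decomposition Z_15 ≅ Z_3 ⊕ Z_5 by checking that the map x → (x mod 3, x mod 5) is a bijective homomorphism.

# Minimal illustrative sketch (plain Python, no external libraries).

def cayley_table(n):
    """Cayley table of Z_n under addition modulo n: entry (i, j) is i + j mod n."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

def is_abelian(table):
    """A finite group is abelian iff its Cayley table is symmetric about the main diagonal."""
    size = len(table)
    return all(table[i][j] == table[j][i] for i in range(size) for j in range(size))

assert is_abelian(cayley_table(8))   # Z_8 is abelian

# Z_15 ≅ Z_3 ⊕ Z_5: because gcd(3, 5) = 1, the map x -> (x mod 3, x mod 5)
# is a bijection that preserves addition (Chinese remainder theorem).
pairs = {(x % 3, x % 5) for x in range(15)}
assert len(pairs) == 15              # all 15 pairs are hit, so the map is a bijection
for a in range(15):
    for b in range(15):
        image_of_sum = ((a + b) % 15 % 3, (a + b) % 15 % 5)
        sum_of_images = ((a % 3 + b % 3) % 3, (a % 5 + b % 5) % 5)
        assert image_of_sum == sum_of_images   # the map is a homomorphism

print("Cayley table of Z_8 is symmetric; Z_15 decomposes as Z_3 (+) Z_5")

Under these assumptions, the symmetry test is exactly the statement that the table equals its transpose, and the second check is one concrete instance of the coprimality criterion for Z_(mn) given above.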
Mathematics
Algebra
null
2992
https://en.wikipedia.org/wiki/Amputation
Amputation
Amputation is the removal of a limb by trauma, medical illness, or surgery. As a surgical measure, it is used to control pain or a disease process in the affected limb, such as malignancy or gangrene. In some cases, it is carried out on individuals as a preventive surgery for such problems. A special case is that of congenital amputation, a congenital disorder, where fetal limbs have been cut off by constrictive bands. In some countries, judicial amputation is currently used to punish people who commit crimes. Amputation has also been used as a tactic in war and acts of terrorism; it may also occur as a war injury. In some cultures and religions, minor amputations or mutilations are considered a ritual accomplishment. When done by a person, the person executing the amputation is an amputator. The oldest evidence of this practice comes from a skeleton found buried in Liang Tebo cave, East Kalimantan, Indonesian Borneo dating back to at least 31,000 years ago, where it was done when the amputee was a young child. Types Leg Lower limb amputations can be divided into two broad categories: minor and major amputations. Minor amputations generally refer to the amputation of digits. Major amputations are commonly below-knee- or above-knee amputations. Common partial foot amputations include the Chopart, Lisfranc, and ray amputations. Common forms of ankle disarticulations include Pyrogoff, Boyd, and Syme amputations. A less common major amputation is the Van Nes rotation, or rotationplasty, i.e. the turning around and reattachment of the foot to allow the ankle joint to take over the function of the knee. Types of amputations include: partial foot amputation amputation of the lower limb distal to the ankle joint ankle disarticulation amputation of the lower limb at the ankle joint trans-tibial amputation amputation of the lower limb between the knee joint and the ankle joint, commonly referred to as a below-knee amputation knee disarticulation amputation of the lower limb at the knee joint trans-femoral amputation amputation of the lower limb between the hip joint and the knee joint, commonly referred to an above-knee amputation hip disarticulation amputation of the lower limb at the hip joint trans-pelvic disarticulation amputation of the whole lower limb together with all or part of the pelvis, also known as a hemipelvectomy or hindquarter amputation Arm Types of upper extremity amputations include: partial hand amputation wrist disarticulation trans-radial amputation, commonly referred to as below-elbow or forearm amputation elbow disarticulation trans-humeral amputation, commonly referred to as above-elbow amputation shoulder disarticulation forequarter amputation A variant of the trans-radial amputation is the Krukenberg procedure in which the radius and ulna are used to create a stump capable of a pincer action. Other Facial amputations include but are not limited to: amputation of the ears amputation of the nose (rhinotomy) amputation of the tongue (glossectomy). amputation of the eyes (enucleation). amputation of the teeth (Dental evulsion). Removal of teeth, mainly incisors, is or was practiced by some cultures for ritual purposes (for instance in the Iberomaurusian culture of Neolithic North Africa). Breasts: amputation of the breasts (mastectomy). Genitals: amputation of the testicles (castration). amputation of the penis (penectomy). amputation of the foreskin (circumcision). amputation of the clitoris (clitoridectomy). 
Hemicorporectomy, or amputation at the waist, and decapitation, or amputation at the neck, are the most radical amputations. Genital modification and mutilation may involve amputating tissue, although not necessarily as a result of injury or disease. Self-amputation In some rare cases when a person has become trapped in a deserted place, with no means of communication or hope of rescue, the victim has amputated their own limb. The most notable case of this is Aron Ralston, a hiker who amputated his own right forearm after it was pinned by a boulder in a hiking accident and he was unable to free himself for over five days. Body integrity identity disorder is a psychological condition in which an individual feels compelled to remove one or more of their body parts, usually a limb. In some cases, that individual may take drastic measures to remove the offending appendages, either by causing irreparable damage to the limb so that medical intervention cannot save the limb, or by causing the limb to be severed. Urgent In surgery, a guillotine amputation is an amputation performed without closure of the skin in an urgent setting. Typical indications include catastrophic trauma or infection control in the setting of infected gangrene. A guillotine amputation is typically followed with a more time-consuming, definitive amputation such as an above or below knee amputation. Causes Circulatory disorders Diabetic vasculopathy Sepsis with peripheral necrosis Peripheral artery disease which can lead to gangrene A severe deep vein thrombosis (phlegmasia cerulea dolens) can cause compartment syndrome and gangrene Neoplasm Cancerous bone or soft tissue tumors (e.g. osteosarcoma, chondrosarcoma, fibrosarcoma, epithelioid sarcoma, Ewing's sarcoma, synovial sarcoma, sacrococcygeal teratoma, liposarcoma), melanoma Trauma Severe limb injuries in which the efforts to save the limb fail or the limb cannot be saved. Traumatic amputation (an unexpected amputation that occurs at the scene of an accident, where the limb is partially or entirely severed as a direct result of the accident, for example, a finger that is severed from the blade of a table saw) Amputation in utero (Amniotic band) Congenital anomalies Deformities of digits and/or limbs (e.g., proximal femoral focal deficiency, Fibular hemimelia) Extra digits and/or limbs (e.g., polydactyly) Infection Bone infection (osteomyelitis) and/or diabetic foot infections Gangrene Trench foot Necrosis Meningococcal meningitis Streptococcus Vibrio vulnificus Necrotizing fasciitis Gas gangrene Legionella Influenza A Virus Animal bites Sepsis Bubonic plague Frostbite Frostbite is a cold-related injury occurring when an area (typically a limb or other extremity) is exposed to extreme low temperatures, causing the freezing of the skin or other tissues. Its pathophysiology involves the formation of ice crystals upon freezing and blood clots upon thawing, leading to cell damage and cell death. Treatment of severe frostbite may require surgical amputation of the affected tissue or limb; if there is deep injury autoamputation may occur. Athletic performance Sometimes professional athletes may choose to have a non-essential digit amputated to relieve chronic pain and impaired performance. Australian Rules footballer Daniel Chick elected to have his left ring finger amputated as chronic pain and injury was limiting his performance. Rugby union player Jone Tawake also had a finger removed. 
National Football League safety Ronnie Lott had the tip of his little finger removed after it was damaged in the 1985 NFL season. Criminal penalties According to Quran 5:38, the punishment for stealing is the amputation of the hand. Under Sharia law, after repeated offense, the foot may also be cut off. This is still in practice today in countries like Brunei, the United Arab Emirates, Iran, Saudi Arabia, Yemen, and 11 of the 36 states within Nigeria. Cross-amputation is one of the Hudud punishments prescribed under Islamic jurisprudence (Sharia law) and involves cutting off the right hand and left foot of the alleged transgressor. The scriptural authority for the double amputation procedure is in the Quran (surah 5.33-34) which stipulates: The severe punishment, for "highway robbery (hirabah, qat' al-tariq) and civil disturbance against Islam", is usually carried out in a single session in public, without anaesthetic and using a sword. The ancient punishment is practised in Islamic countries such as Saudi Arabia; Sudan; Somalia; Mauritania, the Maldives; Iran; Taliban-era Afghanistan and Yemen. In 1779, Thomas Jefferson proposed a bill to the Virginia Assembly that ostensibly would have replaced capital punishment with other penalties, including amputation, for certain crimes, although not all were really punishable by death at the time. For the crimes of rape, sodomy, and polygamy (the latter removed from a later version), the punishment was to be castration for men or rhinotomy for women. For intentional maiming, the bill specified literal eye for an eye retribution. The bill never passed, due to the combination of its perceived barbarity in some parts and perceived leniency in others. From the 16th century, English law provided for cutting off a hand as punishment for striking someone inside a courtroom. Thomas Jefferson's punishments revision bill also intended to repeal this. As of 2021, this form of punishment is controversial, as most modern cultures consider it to be morally abhorrent, as it has the effect of permanently disabling a person and constitutes torture. It is thus seen as grossly disproportionate for crimes less than those such as murder. Surgery Method Surgeons performing an amputation have to first ligate the supplying artery and vein, so as to prevent hemorrhage (bleeding). The muscles are transected, and finally, the bone is sawed through with an oscillating saw. Sharp and rough edges of bones are filed, skin and muscle flaps are then transposed over the stump, occasionally with the insertion of elements to attach a prosthesis. Distal stabilisation of muscles is often performed. This allows effective muscle contraction which reduces atrophy, allows functional use of the stump and maintains soft tissue coverage of the remnant bone. The preferred stabilisation technique is myodesis where the muscle is attached to the bone or its periosteum. In joint disarticulation amputations tenodesis may be used where the muscle tendon is attached to the bone. Muscles are attached under similar tension to normal physiological conditions. An experimental technique known as the "Ewing amputation" aims to improve post-amputation proprioception. Another technique with similar goals, which has been tested in a clinical trial, is Agonist-antagonist Myoneural Interface (AMI). In 1920,  Dr. Janos Ertl Sr. of Hungary, developed the Ertl procedure in order to return a high number of amputees to the work force. 
The Ertl technique, an osteomyoplastic procedure for transtibial amputation, can be used to create a highly functional residual limb. Creation of a tibiofibular bone bridge provides a stable, broad tibiofibular articulation that may be capable of some distal weight bearing. Several different modified techniques and fibular bridge fixation methods have been used; however, no current evidence exists regarding comparison of the different techniques. Post-operative management A 2019 Cochrane systematic review aimed to determine whether rigid dressings were more effective than soft dressings in helping wounds heal following transtibial (below the knee) amputations. Due to the limited and very low certainty evidence available, the authors concluded that it was uncertain what the benefits and harms were for each dressing type. They recommended that clinicians consider the pros and cons of each dressing type on a case-by-case basis: rigid dressings may potentially benefit patients who have a high risk of falls; soft dressings may potentially benefit patients who have poor skin integrity. A 2017 review found that the use of rigid removable dressings (RRD's) in trans-tibial amputations, rather than soft bandaging, improved healing time, reduced edema, prevented knee flexion contractures and reduced complications, including further amputation, from external trauma such as falls onto the stump. Post-operative management, in addition to wound healing, considers maintenance of limb strength, joint range, edema management, preservation of the intact limb (if applicable) and stump desensitization. Trauma Traumatic amputation is the partial or total avulsion of a part of a body during a serious accident, like traffic, labor, or combat. Traumatic amputation of a human limb, either partial or total, creates the immediate danger of death from blood loss. Orthopedic surgeons often assess the severity of different injuries using the Mangled Extremity Severity Score. Given different clinical and situational factors, they can predict the likelihood of amputation. This is especially useful for emergency physicians to quickly evaluate patients and decide on consultations. Causes Traumatic amputation is uncommon in humans (1 per 20,804 population per year). Loss of limb usually happens immediately during the accident, but sometimes a few days later after medical complications. Statistically, the most common causes of traumatic amputations are: Vehicle accidents (cars, motorcycles, bicycles, trains, etc.) Labor accidents (equipment, instruments, cylinders, chainsaws, press machines, meat machines, wood machines, etc.) Agricultural accidents, with machines and mower equipment Electric shock hazards Firearms, bladed weapons, explosives Violent rupture of ship rope or industry wire rope Ring traction (ring amputation, de-gloving injuries) Building doors and car doors Animal attacks Gas cylinder explosions Other rare accidents Treatment The development of the science of microsurgery over the last 40 years has provided several treatment options for a traumatic amputation, depending on the patient's specific trauma and clinical situation: 1st choice: Surgical amputation - break - prosthesis 2nd choice: Surgical amputation - transplantation of other tissue - plastic reconstruction. 
3rd choice: Replantation - reconnection - revascularisation of amputated limb, by microscope (after 1969) 4th choice: Transplantation of cadaveric hand (after 2000) Epidemiology In the United States in 1999, there were 14,420 non-fatal traumatic amputations according to the American Statistical Association. Of these, 4,435 occurred as a result of traffic and transportation accidents and 9,985 were due to labor accidents. Of all traumatic amputations, the distribution percentage is 30.75% for traffic accidents and 69.24% for labor accidents. The population of the United States in 1999 was about 300,000,000, so the conclusion is that there is one amputation per 20,804 persons per year. In the group of labor amputations, 53% occurred in laborers and technicians, 30% in production and service workers, 16% in silviculture and fishery workers. A study found that in 2010, 22.8% of patients undergoing amputation of a lower extremity in the United States were readmitted to the hospital within 30 days. In 2017, an estimated 57.7 million people globally were living with existing traumatic limb injuries. Of these 57.7 million, the leading causes of amputation "were falls (36.2%), road injuries (15.7%), other transportation injuries (11.2%), and mechanical forces (10.4%)." On 2 August 2023, an investigation by The Wall Street Journal found that Ukrainian medical amputations in the war came to between 20,000 and 50,000 including both military and civilians. In comparison, during World War One 41,000 British and 67,000 Germans needed amputations. Prevention Methods in preventing amputation, limb-sparing techniques, depend on the problems that might cause amputations to be necessary. Chronic infections, often caused by diabetes or decubitus ulcers in bedridden patients, are common causes of infections that lead to gangrene, which, when widespread, necessitates amputation. There are two key challenges: first, many patients have impaired circulation in their extremities, and second, they have difficulty curing infections in limbs with poor blood circulation. Crush injuries where there is extensive tissue damage and poor circulation also benefit from hyperbaric oxygen therapy (HBOT). The high level of oxygenation and revascularization speed up recovery times and prevent infections. A study found that the patented method called Circulator Boot achieved significant results in prevention of amputation in patients with diabetes and arteriosclerosis. Another study found it also effective for healing limb ulcers caused by peripheral vascular disease. The boot checks the heart rhythm and compresses the limb between heartbeats; the compression helps cure the wounds in the walls of veins and arteries, and helps to push the blood back to the heart. For victims of trauma, advances in microsurgery in the 1970s have made replantations of severed body parts possible. The establishment of laws, rules, and guidelines, and employment of modern equipment help protect people from traumatic amputations. Prognosis The individual may experience psychological trauma and emotional discomfort. The stump will remain an area of reduced mechanical stability. Limb loss can present significant or even drastic practical limitations. A large proportion of amputees (50–80%) experience the phenomenon of phantom limbs; they feel body parts that are no longer there. These limbs can itch, ache, burn, feel tense, dry or wet, locked in or trapped or they can feel as if they are moving. 
Some scientists believe it has to do with a kind of neural map that the brain has of the body, which sends information to the rest of the brain about limbs regardless of their existence. Phantom sensations and phantom pain may also occur after the removal of body parts other than the limbs, e.g. after amputation of the breast, extraction of a tooth (phantom tooth pain) or removal of an eye (phantom eye syndrome). A similar phenomenon is unexplained sensation in a body part unrelated to the amputated limb. It has been hypothesized that the portion of the brain responsible for processing stimulation from amputated limbs, being deprived of input, expands into the surrounding brain (Phantoms in the Brain: V.S. Ramachandran and Sandra Blakeslee), such that an individual who has had an arm amputated will experience unexplained pressure or movement on his face or head. In many cases, the phantom limb aids in adaptation to a prosthesis, as it permits the person to experience proprioception of the prosthetic limb. To support improved resistance or usability, comfort or healing, some type of stump sock may be worn instead of, or as part of wearing, a prosthesis. Another side effect can be heterotopic ossification, especially when a bone injury is combined with a head injury. The brain signals the bone to grow instead of scar tissue to form, and nodules and other growths can interfere with prosthetics and sometimes require further operations. This type of injury has been especially common among soldiers wounded by improvised explosive devices in the Iraq War. Due to technological advances in prosthetics, many amputees live active lives with little restriction. Organizations such as the Challenged Athletes Foundation have been developed to give amputees the opportunity to be involved in athletics and adaptive sports such as amputee soccer. Nearly half of the individuals who have an amputation due to vascular disease will die within 5 years, usually secondary to the extensive co-morbidities rather than due to direct consequences of amputation. This is higher than the five-year mortality rates for breast cancer, colon cancer, and prostate cancer. Of persons with diabetes who have a lower extremity amputation, up to 55% will require amputation of the second leg within two to three years.
Etymology
The word amputation is borrowed from Latin amputātus, past participle of amputāre "to prune back (a plant), prune away, remove by cutting (unwanted parts or features), cut off (a branch, limb, body part)," from am-, assimilated variant of amb- "about, around" + putāre "to prune, make clean or tidy, scour (wool)". The English word "amputation" was first applied to surgery in the 17th century, possibly first in Peter Lowe's A discourse of the Whole Art of Chirurgerie (published in either 1597 or 1612); his work was derived from 16th-century French texts. Early English writers also used the words "extirpation" (16th-century French texts tended to use extirper), "disarticulation", and "dismemberment" (from the Old French desmembrer, a more common term before the 17th century for limb loss or removal), or simply "cutting", but by the end of the 17th century "amputation" had come to dominate as the accepted medical term. 
Notable cases Patch Adams Rick Allen Douglas Bader Götz of the Iron Hand Carl Brashear Lisa Bufano Roberto Carlos Tammy Duckworth Kalamandalam Sankaran Embranthiri Terry Fox Zach Gowen Pete Gray Shaquem Griffin Robert David Hall Bethany Hamilton Hugh Herr Frida Kahlo Ronnie Lott Hari Budha Magar Aimee Mullins Oscar Pistorius Amy Purdy Aron Ralston Hans-Ulrich Rudel Alex Zanardi
Biology and health sciences
Medical procedures
null
2995
https://en.wikipedia.org/wiki/Archaeopteryx
Archaeopteryx
Archaeopteryx (; ), sometimes referred to by its German name, "" ( Primeval Bird) is a genus of bird-like dinosaurs. The name derives from the ancient Greek (archaīos), meaning "ancient", and (ptéryx), meaning "feather" or "wing". Between the late 19th century and the early 21st century, Archaeopteryx was generally accepted by palaeontologists and popular reference books as the oldest known bird (member of the group Avialae). Older potential avialans have since been identified, including Anchiornis, Xiaotingia, and Aurornis. Archaeopteryx lived in the Late Jurassic around 150 million years ago, in what is now southern Germany, during a time when Europe was an archipelago of islands in a shallow warm tropical sea, much closer to the equator than it is now. Similar in size to a Eurasian magpie, with the largest individuals possibly attaining the size of a raven, the largest species of Archaeopteryx could grow to about in length. Despite their small size, broad wings, and inferred ability to fly or glide, Archaeopteryx had more in common with other small Mesozoic dinosaurs than with modern birds. In particular, they shared the following features with the dromaeosaurids and troodontids: jaws with sharp teeth, three fingers with claws, a long bony tail, hyperextensible second toes ("killing claw"), feathers (which also suggest warm-bloodedness), and various features of the skeleton. These features make Archaeopteryx a clear candidate for a transitional fossil between non-avian dinosaurs and avian dinosaurs (birds). Thus, Archaeopteryx plays an important role, not only in the study of the origin of birds, but in the study of dinosaurs. It was named from a single feather in 1861, the identity of which has been controversial. That same year, the first complete specimen of Archaeopteryx was announced. Over the years, eleven more fossils of Archaeopteryx have surfaced. Despite variation among these fossils, most experts regard all the remains that have been discovered as belonging to a single species, although this is still debated. Most of these twelve fossils include impressions of feathers. Because these feathers are of an advanced form (flight feathers), these fossils are evidence that the evolution of feathers began before the Late Jurassic. The type specimen of Archaeopteryx was discovered just two years after Charles Darwin published On the Origin of Species. Archaeopteryx seemed to confirm Darwin's theories and has since become a key piece of evidence for the origin of birds, the transitional fossils debate, and confirmation of evolution. Archaeopteryx was long considered to be the beginning of the evolutionary tree of birds. However, in recent years, the discovery of several small, feathered dinosaurs has created a mystery for palaeontologists, raising questions about which animals are the ancestors of modern birds and which are their relatives. History of discovery Over the years, fourteen body fossil specimens of Archaeopteryx have been found. All of the fossils come from the limestone deposits, quarried for centuries, near , Germany. These quarries excavate sediments from the Solnhofen Limestone formation and related units. The initial specimen was the first dinosaur to be discovered with feathers. The initial discovery, a single feather, was unearthed in 1860 or 1861 and described in 1861 by . It is now in the Natural History Museum of Berlin. Though it was the initial holotype, there were indications that it might not have been from the same animal as the body fossils. 
In 2019 it was reported that laser imaging had revealed the structure of the quill (which had not been visible since some time after the feather was described), and that the feather was inconsistent with the morphology of all other Archaeopteryx feathers known, leading to the conclusion that it originated from another dinosaur. This conclusion was challenged in 2020 as being unlikely; the feather was identified on the basis of morphology as most likely having been an upper major primary covert feather. The first skeleton, known as the London Specimen (BMNH 37001), was unearthed in 1861 near , Germany, and perhaps given to local physician in return for medical services. He then sold it for £700 (roughly £83,000 in 2020) to the Natural History Museum in London, where it remains. Missing most of its head and neck, it was described in 1863 by Richard Owen as Archaeopteryx macrura, allowing for the possibility it did not belong to the same species as the feather. In the subsequent fourth edition of his On the Origin of Species, Charles Darwin described how some authors had maintained "that the whole class of birds came suddenly into existence during the eocene period; but now we know, on the authority of Professor Owen, that a bird certainly lived during the deposition of the upper greensand; and still more recently, that strange bird, the Archaeopteryx, with a long lizard-like tail, bearing a pair of feathers on each joint, and with its wings furnished with two free claws, has been discovered in the oolitic slates of Solnhofen. Hardly any recent discovery shows more forcibly than this how little we as yet know of the former inhabitants of the world." The Greek word () means 'ancient, primeval'. primarily means 'wing', but it can also be just 'feather'. Meyer suggested this in his description. At first he referred to a single feather which appeared to resemble a modern bird's remex (wing feather), but he had heard of and been shown a rough sketch of the London specimen, to which he referred as a "" ("skeleton of an animal covered in similar feathers"). In German, this ambiguity is resolved by the term which does not necessarily mean a wing used for flying. was the favoured translation of Archaeopteryx among German scholars in the late nineteenth century. In English, 'ancient pinion' offers a rough approximation to this. Since then, twelve specimens have been recovered: The Berlin Specimen (HMN 1880/81) was discovered in 1874 or 1875 on the Blumenberg near , Germany, by farmer Jakob Niemeyer. He sold this precious fossil for the money to buy a cow in 1876, to innkeeper Johann Dörr, who again sold it to Ernst Otto Häberlein, the son of K. Häberlein. Placed on sale between 1877 and 1881, with potential buyers including O. C. Marsh of Yale University's Peabody Museum, it eventually was bought for 20,000 Goldmark by the Berlin's Natural History Museum, where it now is displayed. The transaction was financed by Ernst Werner von Siemens, founder of the company that bears his name. Described in 1884 by Wilhelm Dames, it is the most complete specimen, and the first with a complete head. In 1897 it was named by Dames as a new species, A. siemensii; though often considered a synonym of A. lithographica, several 21st century studies have concluded that it is a distinct species which includes the Berlin, Munich, and Thermopolis specimens. 
Composed of a torso, the Maxberg Specimen (S5) was discovered in 1956 near Langenaltheim; it was brought to the attention of professor Florian Heller in 1958 and described by him in 1959. The specimen is missing its head and tail, although the rest of the skeleton is mostly intact. Although it was once exhibited at the Maxberg Museum in Solnhofen, it is currently missing. It belonged to Eduard Opitsch, who loaned it to the museum until 1974. After his death in 1991, it was discovered that the specimen was missing and may have been stolen or sold. The Haarlem Specimen (TM 6428/29, also known as the Teylers Specimen) was discovered in 1855 near , Germany, and described as a Pterodactylus crassipes in 1857 by Meyer. It was reclassified in 1970 by John Ostrom and is currently located at the Teylers Museum in Haarlem, the Netherlands. It was the very first specimen found, but was incorrectly classified at the time. It is also one of the least complete specimens, consisting mostly of limb bones, isolated cervical vertebrae, and ribs. In 2017 it was named as a separate genus Ostromia, considered more closely related to Anchiornis from China. The Eichstätt Specimen (JM 2257) was discovered in 1951 near Workerszell, Germany, and described by Peter Wellnhofer in 1974. Currently located at the Jura Museum in Eichstätt, Germany, it is the smallest known specimen and has the second-best head. It is possibly a separate genus (Jurapteryx recurva) or species (A. recurva). The Solnhofen Specimen (unnumbered specimen) was discovered in the 1970s near Eichstätt, Germany, and described in 1988 by Wellnhofer. Currently located at the Bürgermeister-Müller-Museum in Solnhofen, it originally was classified as Compsognathus by an amateur collector, the same mayor Friedrich Müller after which the museum is named. It is the largest specimen known and may belong to a separate genus and species, Wellnhoferia grandis. It is missing only portions of the neck, tail, backbone, and head. The Munich Specimen (BSP 1999 I 50, formerly known as the Solenhofer-Aktien-Verein Specimen) was discovered on 3 August 1992 near Langenaltheim and described in 1993 by Wellnhofer. It is currently located at the Paläontologisches Museum München in Munich, to which it was sold in 1999 for 1.9 million Deutschmark. What was initially believed to be a bony sternum turned out to be part of the coracoid, but a cartilaginous sternum may have been present. Only the front of its face is missing. It has been used as the basis for a distinct species, A. bavarica, but more recent studies suggest it belongs to A. siemensii. An eighth, fragmentary specimen was discovered in 1990 in the younger Mörnsheim Formation at Daiting, Suevia. Therefore, it is known as the Daiting Specimen, and had been known since 1996 only from a cast, briefly shown at the Naturkundemuseum in Bamberg. The original was purchased by palaeontologist Raimund Albertsdörfer in 2009. It was on display for the first time with six other original fossils of Archaeopteryx at the Munich Mineral Show in October 2009. The Daiting Specimen was subsequently named Archaeopteryx albersdoerferi by Kundrat et al. (2018). After a lengthy period in a closed private collection, it was moved to the Museum of Evolution at Knuthenborg Safaripark (Denmark) in 2022, where it has since been on display and also been made available for researchers. Another fragmentary fossil was found in 2000. 
It is in private possession and, since 2004, on loan to the Bürgermeister-Müller Museum in Solnhofen, so it is called the Bürgermeister-Müller Specimen; the institute itself officially refers to it as the "Exemplar of the families Ottman & Steil, Solnhofen". As the fragment represents the remains of a single wing of Archaeopteryx, it is colloquially known as "chicken wing". Long in a private collection in Switzerland, the Thermopolis Specimen (WDC CSG 100) was discovered in Bavaria and described in 2005 by Mayr, Pohl, and Peters. Donated to the Wyoming Dinosaur Center in Thermopolis, Wyoming, it has the best-preserved head and feet; most of the neck and the lower jaw have not been preserved. The "Thermopolis" specimen was described on 2 December 2005 Science journal article as "A well-preserved Archaeopteryx specimen with theropod features"; it shows that Archaeopteryx lacked a reversed toe—a universal feature of birds—limiting its ability to perch on branches and implying a terrestrial or trunk-climbing lifestyle. This has been interpreted as evidence of theropod ancestry. In 1988, Gregory S. Paul claimed to have found evidence of a hyperextensible second toe, but this was not verified and accepted by other scientists until the Thermopolis specimen was described. "Until now, the feature was thought to belong only to the species' close relatives, the deinonychosaurs." The Thermopolis Specimen was assigned to Archaeopteryx siemensii in 2007. The specimen is considered to represent the most complete and best-preserved Archaeopteryx remains yet. The discovery of an eleventh specimen was announced in 2011; it was described in 2014. It is one of the more complete specimens, but is missing much of the skull and one forelimb. It is privately owned and has yet to be given a name. Palaeontologists of the Ludwig Maximilian University of Munich studied the specimen, which revealed previously unknown features of the plumage, such as feathers on both the upper and lower legs and metatarsus, and the only preserved tail tip. A twelfth specimen had been discovered by an amateur collector in 2010 at the Schamhaupten quarry, but the finding was only announced in February 2014. It was scientifically described in 2018. It represents a complete and mostly articulated skeleton with skull. It is the only specimen lacking preserved feathers. It is from the Painten Formation and somewhat older than the other specimens. The existence of a thirteenth specimen (the Chicago specimen) was announced in 2024 by the Field Museum in Chicago, US. One of two specimens in an institution outside Europe, the specimen was originally identified in a private collection in Switzerland, and had been acquired by these collectors in 1990, prior to Germany's 2015 ban on exporting Archaeopteryx specimens. The specimen was acquired by the Field Museum in 2022, and went on public display in 2024 following two years of preparation. The specimen is to be studied by famed paleornithologist Jingmai O'Connor. A fourteenth specimen, SMNK-PAL 10,000, was published in January 2025, this one from the Mörnsheim Formation. It preserves the right forelimb, shoulder, and fragments of the other limbs, with various features of the shoulder and forelimb resembling Archaeopteryx more than any other avialan within the Mörnsheim Formation. However, due to the fragmentary nature of this specimen, it cannot be assigned to a specific species within Archaeopteryx. 
Authenticity Beginning in 1985, an amateur group including astronomer Fred Hoyle and physicist Lee Spetner, published a series of papers claiming that the feathers on the Berlin and London specimens of Archaeopteryx were forged. Their claims were repudiated by Alan J. Charig and others at the Natural History Museum in London. Most of their supposed evidence for a forgery was based on unfamiliarity with the processes of lithification; for example, they proposed that, based on the difference in texture associated with the feathers, feather impressions were applied to a thin layer of cement, without realizing that feathers themselves would have caused a textural difference. They also misinterpreted the fossils, claiming that the tail was forged as one large feather, when visibly this is not the case. In addition, they claimed that the other specimens of Archaeopteryx known at the time did not have feathers, which is incorrect; the Maxberg and Eichstätt specimens have obvious feathers. They also expressed disbelief that slabs would split so smoothly, or that one half of a slab containing fossils would have good preservation, but not the counterslab. These are common properties of Solnhofen fossils, because the dead animals would fall onto hardened surfaces, which would form a natural plane for the future slabs to split along and would leave the bulk of the fossil on one side and little on the other. Finally, the motives they suggested for a forgery are not strong, and are contradictory; one is that Richard Owen wanted to forge evidence in support of Charles Darwin's theory of evolution, which is unlikely given Owen's views toward Darwin and his theory. The other is that Owen wanted to set a trap for Darwin, hoping the latter would support the fossils so Owen could discredit him with the forgery; this is unlikely because Owen wrote a detailed paper on the London specimen, so such an action would certainly backfire. Charig et al. pointed to the presence of hairline cracks in the slabs running through both rock and fossil impressions, and mineral growth over the slabs that had occurred before discovery and preparation, as evidence that the feathers were original. Spetner et al. then attempted to show that the cracks would have propagated naturally through their postulated cement layer, but neglected to account for the fact that the cracks were old and had been filled with calcite, and thus were not able to propagate. They also attempted to show the presence of cement on the London specimen through X-ray spectroscopy, and did find something that was not rock; it was not cement either, and is most probably a fragment of silicone rubber left behind when moulds were made of the specimen. Their suggestions have not been taken seriously by palaeontologists, as their evidence was largely based on misunderstandings of geology, and they never discussed the other feather-bearing specimens, which have increased in number since then. Charig et al. reported a discolouration: a dark band between two layers of limestone – they say it is the product of sedimentation. It is natural for limestone to take on the colour of its surroundings and most limestones are coloured (if not colour banded) to some degree, so the darkness was attributed to such impurities. They also mention that a complete absence of air bubbles in the rock slabs is further proof that the specimen is authentic. 
Description

Most of the specimens of Archaeopteryx that have been discovered come from the Solnhofen limestone in Bavaria, southern Germany, which is a Lagerstätte, a rare and remarkable geological formation known for its superbly detailed fossils laid down during the early Tithonian stage of the Jurassic period, approximately 150.8–148.5 million years ago. Archaeopteryx was roughly the size of a raven, with broad wings that were rounded at the ends and a long tail compared to its body length. It could reach up to in body length and in wingspan, with an estimated mass of . Archaeopteryx feathers, although less documented than its other features, were very similar in structure to modern-day bird feathers. Despite the presence of numerous avian features, Archaeopteryx had many non-avian theropod dinosaur characteristics. Unlike modern birds, Archaeopteryx had small teeth, as well as a long bony tail, features which Archaeopteryx shared with other dinosaurs of the time.

Because it displays features common to both birds and non-avian dinosaurs, Archaeopteryx has often been considered a link between them. In the 1970s, John Ostrom, following Thomas Henry Huxley's lead in 1868, argued that birds evolved within theropod dinosaurs, and Archaeopteryx was a critical piece of evidence for this argument; it had several avian features, such as a wishbone, flight feathers, wings, and a partially reversed first toe, along with dinosaur and theropod features. For instance, it has a long ascending process of the ankle bone, interdental plates, an obturator process of the ischium, and long chevrons in the tail. In particular, Ostrom found that Archaeopteryx was remarkably similar to the theropod family Dromaeosauridae. Archaeopteryx had three separate digits on each fore-leg, each ending with a "claw". Few birds have such features. Some birds, such as ducks, swans, jacanas (Jacana sp.), and the hoatzin (Opisthocomus hoazin), have them concealed beneath their leg-feathers.

Plumage

Specimens of Archaeopteryx were most notable for their well-developed flight feathers. They were markedly asymmetrical and showed the structure of flight feathers in modern birds, with vanes given stability by a barb–barbule–barbicel arrangement. The tail feathers were less asymmetrical, again in line with the situation in modern birds, and also had firm vanes. The thumb did not yet bear a separately movable tuft of stiff feathers. The body plumage of Archaeopteryx is less well documented and has only been properly researched in the well-preserved Berlin specimen. Thus, as more than one species seems to be involved, the research into the Berlin specimen's feathers does not necessarily hold true for the rest of the species of Archaeopteryx. In the Berlin specimen, there are "trousers" of well-developed feathers on the legs; some of these feathers seem to have a basic contour feather structure, but are somewhat decomposed (they lack barbicels, as in ratites). In part they are firm and thus capable of supporting flight. A patch of pennaceous feathers is found running along its back, which was quite similar to the contour feathers of the body plumage of modern birds in being symmetrical and firm, although not as stiff as the flight-related feathers. Apart from that, the feather traces in the Berlin specimen are limited to a sort of "proto-down" not dissimilar to that found in the dinosaur Sinosauropteryx: decomposed and fluffy, and possibly even appearing more like fur than feathers in life (although not in their microscopic structure).
These occur on the remainder of the body—although some feathers did not fossilize and others were obliterated during preparation, leaving bare patches on specimens—and the lower neck. There is no indication of feathering on the upper neck and head. While these conceivably may have been nude, this may still be an artefact of preservation. It appears that most Archaeopteryx specimens became embedded in anoxic sediment after drifting some time on their backs in the sea—the head, neck and the tail are generally bent downward, which suggests that the specimens had just started to rot when they were embedded, with tendons and muscle relaxing so that the characteristic shape (death pose) of the fossil specimens was achieved. This would mean that the skin already was softened and loose, which is bolstered by the fact that in some specimens the flight feathers were starting to detach at the point of embedding in the sediment. So it is hypothesized that the pertinent specimens moved along the sea bed in shallow water for some time before burial, the head and upper neck feathers sloughing off, while the more firmly attached tail feathers remained.

Colouration

In 2011, graduate student Ryan Carney and colleagues performed the first colour study on an Archaeopteryx specimen. Using scanning electron microscopy and energy-dispersive X-ray analysis, the team was able to detect the structure of melanosomes in the isolated feather specimen described in 1861. The resultant measurements were then compared to those of 87 modern bird species, and the original colour was calculated to be black with a 95% likelihood. The feather was determined to be black throughout, with heavier pigmentation in the distal tip. The feather studied was most probably a dorsal covert, which would have partly covered the primary feathers on the wings. The study does not mean that Archaeopteryx was entirely black, but suggests that it had some black colouration, which included the coverts. Carney pointed out that this is consistent with what is known of modern flight characteristics, in that black melanosomes have structural properties that strengthen feathers for flight. In a 2013 study published in the Journal of Analytical Atomic Spectrometry, new analyses of Archaeopteryx's feathers revealed that the animal may have had complex light- and dark-coloured plumage, with heavier pigmentation in the distal tips and outer vanes. This analysis of colour distribution was based primarily on the distribution of sulphate within the fossil. An author on the previous Archaeopteryx colour study argued against the interpretation of such biomarkers as an indicator of eumelanin in the full Archaeopteryx specimen. Carney and other colleagues also argued against the 2013 study's interpretation of the sulphate and trace metals, and in a 2020 study published in Scientific Reports demonstrated that the isolated covert feather was entirely matte black (as opposed to black and white, or iridescent) and that the remaining "plumage patterns of Archaeopteryx remain unknown".
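The colour inference described above rests on comparing the geometry of fossil melanosomes with a reference set measured from living birds. The sketch below is only a schematic illustration of that general idea, under assumptions introduced here for clarity: the colour classes, the placeholder measurements, and the simple Gaussian scoring are invented for the example and are not the statistical method or the data actually used by Carney and colleagues.

```python
# Hypothetical sketch: scoring a fossil melanosome measurement against
# reference colour classes from modern birds. All numbers are placeholders,
# NOT data from the Archaeopteryx studies; the real work compared the fossil
# against measurements from 87 modern bird species with a multivariate method.
import math

# Placeholder class statistics for melanosome length/width aspect ratio.
REFERENCE = {
    "black":      {"mean": 4.5, "sd": 0.8},  # long, narrow eumelanosomes
    "brown/grey": {"mean": 2.5, "sd": 0.6},
    "reddish":    {"mean": 1.5, "sd": 0.4},  # short, round phaeomelanosomes
}

def gaussian_pdf(x: float, mean: float, sd: float) -> float:
    """Probability density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def colour_probabilities(aspect_ratio: float) -> dict:
    """Relative likelihood of each colour class, assuming equal priors."""
    likelihoods = {
        colour: gaussian_pdf(aspect_ratio, params["mean"], params["sd"])
        for colour, params in REFERENCE.items()
    }
    total = sum(likelihoods.values())
    return {colour: value / total for colour, value in likelihoods.items()}

if __name__ == "__main__":
    fossil_aspect_ratio = 4.2  # placeholder measurement, not a real datum
    for colour, probability in colour_probabilities(fossil_aspect_ratio).items():
        print(f"{colour:>10}: {probability:.1%}")
```

In the real study, the comparison drew on measurements from 87 modern bird species and returned a 95% likelihood that the isolated feather was black; the toy numbers above merely show how a likelihood statement of that kind can arise from scoring a fossil measurement against reference classes.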
In 1960, Swinton accordingly proposed that the name Archaeopteryx lithographica be placed on the official genera list making the alternative names Griphosaurus and Griphornis invalid. The ICZN, implicitly accepting De Beer's standpoint, did indeed suppress the plethora of alternative names initially proposed for the first skeleton specimens, which mainly resulted from the acrimonious dispute between Meyer and his opponent Johann Andreas Wagner (whose Griphosaurus problematicus—'problematic riddle-lizard'—was a vitriolic sneer at Meyer's Archaeopteryx). In addition, in 1977, the Commission ruled that the first species name of the Haarlem specimen, crassipes, described by Meyer as a pterosaur before its true nature was realized, was not to be given preference over lithographica in instances where scientists considered them to represent the same species. It has been noted that the feather, the first specimen of Archaeopteryx described, does not correspond well with the flight-related feathers of Archaeopteryx. It certainly is a flight feather of a contemporary species, but its size and proportions indicate that it may belong to another, smaller species of feathered theropod, of which only this feather is known so far. As the feather had been designated the type specimen, the name Archaeopteryx should then no longer be applied to the skeletons, thus creating significant nomenclatorial confusion. In 2007, two sets of scientists therefore petitioned the ICZN requesting that the London specimen explicitly be made the type by designating it as the new holotype specimen, or neotype. This suggestion was upheld by the ICZN after four years of debate, and the London specimen was designated the neotype on 3 October 2011. Below is a cladogram published in 2013 by Godefroit et al. Species It has been argued that all the specimens belong to the same species, A. lithographica. Differences do exist among the specimens, and while some researchers regard these as due to the different ages of the specimens, some may be related to actual species diversity. In particular, the Munich, Eichstätt, Solnhofen, and Thermopolis specimens differ from the London, Berlin, and Haarlem specimens in being smaller or much larger, having different finger proportions, having more slender snouts lined with forward-pointing teeth, and the possible presence of a sternum. Due to these differences, most individual specimens have been given their own species name at one point or another. The Berlin specimen has been designated as Archaeornis siemensii, the Eichstätt specimen as Jurapteryx recurva, the Munich specimen as Archaeopteryx bavarica, and the Solnhofen specimen as Wellnhoferia grandis. In 2007, a review of all well-preserved specimens including the then-newly discovered Thermopolis specimen concluded that two distinct species of Archaeopteryx could be supported: A. lithographica (consisting of at least the London and Solnhofen specimens), and A. siemensii (consisting of at least the Berlin, Munich, and Thermopolis specimens). The two species are distinguished primarily by large flexor tubercles on the foot claws in A. lithographica (the claws of A. siemensii specimens being relatively simple and straight). A. lithographica also had a constricted portion of the crown in some teeth and a stouter metatarsus. A supposed additional species, Wellnhoferia grandis (based on the Solnhofen specimen), seems to be indistinguishable from A. lithographica except in its larger size. 
Synonyms

If two names are given, the first denotes the original describer of the "species", the second the author on whom the given name combination is based. As always in zoological nomenclature, putting an author's name in parentheses denotes that the taxon was originally described in a different genus.

Archaeopteryx lithographica Meyer, 1861 [conserved name]
Archaeopterix lithographica Anon., 1861 [lapsus]
Griphosaurus problematicus Wagner, 1862 [rejected name 1961 per ICZN Opinion 607]
Griphornis longicaudatus Owen vide Woodward, 1862 [rejected name 1961 per ICZN Opinion 607]
Archaeopteryx macrura Owen, 1862 [rejected name 1961 per ICZN Opinion 607]
Archaeopteryx oweni Petronievics, 1917 [rejected name 1961 per ICZN Opinion 607]
Archaeopteryx recurva Howgate, 1984
Jurapteryx recurva (Howgate, 1984) Howgate, 1985
Wellnhoferia grandis Elżanowski, 2001
Archaeopteryx siemensii Dames, 1897
Archaeornis siemensii (Dames, 1897) Petronievics, 1917
Archaeopteryx bavarica Wellnhofer, 1993

"Archaeopteryx" vicensensis (Anon. fide Lambrecht, 1933) is a nomen nudum for what appears to be an undescribed pterosaur.

Phylogenetic position

Modern palaeontology has often classified Archaeopteryx as the most primitive bird. However, it is not thought to be a true ancestor of modern birds, but rather a close relative of that ancestor. Nonetheless, Archaeopteryx was often used as a model of the true ancestral bird. Several authors have done so. Lowe (1935) and Thulborn (1984) questioned whether Archaeopteryx truly was the first bird. They suggested that Archaeopteryx was a dinosaur that was no more closely related to birds than were other dinosaur groups. Kurzanov (1987) suggested that Avimimus was more likely to be the ancestor of all birds than Archaeopteryx. Barsbold (1983) and Zweers and Van den Berge (1997) noted that many maniraptoran lineages are extremely birdlike, and they suggested that different groups of birds may have descended from different dinosaur ancestors.

The discovery of the closely related Xiaotingia in 2011 led to new phylogenetic analyses that suggested that Archaeopteryx is a deinonychosaur rather than an avialan, and therefore, not a "bird" under most common uses of that term. A more thorough analysis was published soon after to test this hypothesis, and failed to arrive at the same result; it found Archaeopteryx in its traditional position at the base of Avialae, while Xiaotingia was recovered as a basal dromaeosaurid or troodontid. The authors of the follow-up study noted that uncertainties still exist, and that it may not be possible to state confidently whether or not Archaeopteryx is a member of Avialae, barring new and better specimens of relevant species. Phylogenetic studies conducted by Senter et al. (2012) and Turner, Makovicky, and Norell (2012) also found Archaeopteryx to be more closely related to living birds than to dromaeosaurids and troodontids. On the other hand, Godefroit et al. (2013) recovered Archaeopteryx as more closely related to dromaeosaurids and troodontids in the analysis included in their description of Eosinopteryx brevipenna. The authors used a modified version of the matrix from the study describing Xiaotingia, adding Jinfengopteryx elegans and Eosinopteryx brevipenna to it, as well as adding four additional characters related to the development of the plumage. Unlike the analysis from the description of Xiaotingia, the analysis conducted by Godefroit et al.
did not find Archaeopteryx to be related particularly closely to Anchiornis and Xiaotingia, which were recovered as basal troodontids instead. Agnolín and Novas (2013) found Archaeopteryx and (possibly synonymous) Wellnhoferia to be from a clade sister to the lineage including Jeholornis and Pygostylia, with Microraptoria, Unenlagiinae, and the clade containing Anchiornis and Xiaotingia being successively closer outgroups to the Avialae (defined by the authors as the clade stemming from the last common ancestor of Archaeopteryx and Aves). Another phylogenetic study by Godefroit, et al., using a more inclusive matrix than the one from the analysis in the description of Eosinopteryx brevipenna, also found Archaeopteryx to be a member of Avialae (defined by the authors as the most inclusive clade containing Passer domesticus, but not Dromaeosaurus albertensis or Troodon formosus). Archaeopteryx was found to form a grade at the base of Avialae with Xiaotingia, Anchiornis, and Aurornis. Compared to Archaeopteryx, Xiaotingia was found to be more closely related to extant birds, while both Anchiornis and Aurornis were found to be more distantly so. Hu et al. (2018), Wang et al. (2018) and Hartman et al. (2019) found Archaeopteryx to have been a deinonychosaur instead of an avialan. More specifically, it and closely related taxa were considered basal deinonychosaurs, with dromaeosaurids and troodontids forming together a parallel lineage within the group. Because Hartman et al. found Archaeopteryx isolated in a group of flightless deinonychosaurs (otherwise considered "anchiornithids"), they considered it highly probable that this animal evolved flight independently from bird ancestors (and from Microraptor and Yi). The following cladogram illustrates their hypothesis regarding the position of Archaeopteryx: The authors, however, found that the Archaeopteryx being an avialan was only slightly less likely than this hypothesis, and as likely as Archaeopterygidae and Troodontidae being sister clades. Palaeobiology Flight As in the wings of modern birds, the flight feathers of Archaeopteryx were somewhat asymmetrical and the tail feathers were rather broad. This implies that the wings and tail were used for lift generation, but it is unclear whether Archaeopteryx was capable of flapping flight or simply a glider. The lack of a bony breastbone suggests that Archaeopteryx was not a very strong flier, but flight muscles might have attached to the thick, boomerang-shaped wishbone, the platelike coracoids, or perhaps, to a cartilaginous sternum. The sideways orientation of the glenoid (shoulder) joint between scapula, coracoid, and humerus—instead of the dorsally angled arrangement found in modern birds—may indicate that Archaeopteryx was unable to lift its wings above its back, a requirement for the upstroke found in modern flapping flight. According to a study by Philip Senter in 2006, Archaeopteryx was indeed unable to use flapping flight as modern birds do, but it may well have used a downstroke-only flap-assisted gliding technique. However, a more recent study solves this issue by suggesting a different flight stroke configuration for non-avian flying theropods. Archaeopteryx wings were relatively large, which would have resulted in a low stall speed and reduced turning radius. 
The short and rounded shape of the wings would have increased drag, but also could have improved its ability to fly through cluttered environments such as trees and brush (similar wing shapes are seen in birds that fly through trees and brush, such as crows and pheasants). The presence of "hind wings", asymmetrical flight feathers stemming from the legs similar to those seen in dromaeosaurids such as Microraptor, also would have added to the aerial mobility of Archaeopteryx. The first detailed study of the hind wings by Longrich in 2006, suggested that the structures formed up to 12% of the total airfoil. This would have reduced stall speed by up to 6% and turning radius by up to 12%. The feathers of Archaeopteryx were asymmetrical. This has been interpreted as evidence that it was a flyer, because flightless birds tend to have symmetrical feathers. Some scientists, including Thomson and Speakman, have questioned this. They studied more than 70 families of living birds, and found that some flightless types do have a range of asymmetry in their feathers, and that the feathers of Archaeopteryx fall into this range. The degree of asymmetry seen in Archaeopteryx is more typical for slow flyers than for flightless birds. In 2010, Robert L. Nudds and Gareth J. Dyke in the journal Science published a paper in which they analysed the rachises of the primary feathers of Confuciusornis and Archaeopteryx. The analysis suggested that the rachises on these two genera were thinner and weaker than those of modern birds relative to body mass. The authors determined that Archaeopteryx and Confuciusornis, were unable to use flapping flight. This study was criticized by Philip J. Currie and Luis Chiappe. Chiappe suggested that it is difficult to measure the rachises of fossilized feathers, and Currie speculated that Archaeopteryx and Confuciusornis must have been able to fly to some degree, as their fossils are preserved in what is believed to have been marine or lake sediments, suggesting that they must have been able to fly over deep water. Gregory Paul also disagreed with the study, arguing in a 2010 response that Nudds and Dyke had overestimated the masses of these early birds, and that more accurate mass estimates allowed powered flight even with relatively narrow rachises. Nudds and Dyke had assumed a mass of for the Munich specimen Archaeopteryx, a young juvenile, based on published mass estimates of larger specimens. Paul argued that a more reasonable body mass estimate for the Munich specimen is about . Paul also criticized the measurements of the rachises themselves, noting that the feathers in the Munich specimen are poorly preserved. Nudds and Dyke reported a diameter of for the longest primary feather, which Paul could not confirm using photographs. Paul measured some of the inner primary feathers, finding rachises across. Despite these criticisms, Nudds and Dyke stood by their original conclusions. They claimed that Paul's statement, that an adult Archaeopteryx would have been a better flyer than the juvenile Munich specimen, was dubious. This, they reasoned, would require an even thicker rachis, evidence for which has not yet been presented. Another possibility is that they had not achieved true flight, but instead used their wings as aids for extra lift while running over water after the fashion of the basilisk lizard, which could explain their presence in lake and marine deposits (see Origin of avian flight). 
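The hind-wing figures quoted earlier in this section (leg feathers contributing up to 12% of the total airfoil, reducing stall speed by up to 6% and turning radius by up to 12%) can be sanity-checked with the standard stall-speed relation from fixed-wing aerodynamics. The algebra below is a textbook scaling argument, assuming weight W, air density ρ, and maximum lift coefficient are held fixed; it is an illustrative check, not a result taken from Longrich's analysis.

$$v_{\mathrm{stall}}=\sqrt{\frac{2W}{\rho\,S\,C_{L,\max}}}
\qquad\Longrightarrow\qquad
\frac{v_{\mathrm{stall,\,with\ hind\ wings}}}{v_{\mathrm{stall,\,without}}}
=\sqrt{\frac{S_{\mathrm{without}}}{S_{\mathrm{with}}}}
=\sqrt{1-0.12}\approx 0.94$$

That is, if the leg feathers supplied about 12% of the total lifting area S, they would lower the stall speed by roughly 6%; and since the minimum turning radius scales with the square of the flight speed at a fixed lift coefficient, the same area increase gives a reduction in turning radius of the same order as the 12% figure quoted.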
In 2004, scientists analysing a detailed CT scan of the braincase of the London Archaeopteryx concluded that its brain was significantly larger than that of most dinosaurs, indicating that it possessed the brain size necessary for flying. The overall brain anatomy was reconstructed using the scan. The reconstruction showed that the regions associated with vision took up nearly one-third of the brain. Other well-developed areas involved hearing and muscle coordination. The skull scan also revealed the structure of its inner ear. The structure more closely resembles that of modern birds than the inner ear of non-avian reptiles. These characteristics taken together suggest that Archaeopteryx had the keen sense of hearing, balance, spatial perception, and coordination needed to fly. Archaeopteryx had a cerebrum-to-brain-volume ratio 78% of the way to modern birds from the condition of non-coelurosaurian dinosaurs such as Carcharodontosaurus or Allosaurus, which had a crocodile-like anatomy of the brain and inner ear. Newer research shows that while the Archaeopteryx brain was more complex than that of more primitive theropods, it had a more generalized brain volume among Maniraptora dinosaurs, even smaller than that of other non-avian dinosaurs in several instances, which indicates the neurological development required for flight was already a common trait in the maniraptoran clade. Recent studies of flight feather barb geometry reveal that modern birds possess a larger barb angle in the trailing vane of the feather, whereas Archaeopteryx lacks this large barb angle, indicating potentially weak flight abilities. Archaeopteryx continues to play an important part in scientific debates about the origin and evolution of birds. Some scientists see it as a semi-arboreal climbing animal, following the idea that birds evolved from tree-dwelling gliders (the "trees down" hypothesis for the evolution of flight proposed by O. C. Marsh). Other scientists see Archaeopteryx as running quickly along the ground, supporting the idea that birds evolved flight by running (the "ground up" hypothesis proposed by Samuel Wendell Williston). Still others suggest that Archaeopteryx might have been at home both in the trees and on the ground, like modern crows, and this latter view is what currently is considered best supported by morphological characters. Altogether, it appears that the species was not particularly specialized for running on the ground or for perching. A scenario outlined by Elżanowski in 2002 suggested that Archaeopteryx used its wings mainly to escape predators by glides punctuated with shallow downstrokes to reach successively higher perches, and alternatively, to cover longer distances (mainly) by gliding down from cliffs or treetops. In March 2018, scientists reported that Archaeopteryx was likely capable of a flight stroke cycle morphologically closer to the grabbing motion of maniraptorans and distinct from that of modern birds. This study on Archaeopteryxs bone histology identified biomechanical and physiological adaptations exhibited by modern volant birds that perform intermittent flapping, such as pheasants and other burst flyers. Some researchers suggested that the feather sheaths of Archaeopteryx shows a center-out, flight related moulting strategy like modern birds. As it was a weak flier, this would have been extremely advantageous in preserving its maximum flight performance. 
Kiat and colleagues reinterpreted this purported moulting evidence as problematic and equivocal at best, and considered that these structures more likely represent the calami traces of the fully grown feathers, though the original authors stood by their conclusion.

Growth

A histological study by Erickson, Norell, Zhongue, and others in 2009 estimated that Archaeopteryx grew relatively slowly compared to modern birds, presumably because the outermost portions of Archaeopteryx bones appear poorly vascularized; in living vertebrates, poorly vascularized bone is correlated with slow growth rate. They also assume that all known skeletons of Archaeopteryx come from juvenile specimens. Because the bones of Archaeopteryx could not be histologically sectioned in a formal skeletochronological (growth ring) analysis, Erickson and colleagues used bone vascularity (porosity) to estimate bone growth rate. They assumed that poorly vascularized bone grows at similar rates in all birds and in Archaeopteryx. The poorly vascularized bone of Archaeopteryx might have grown as slowly as that in a mallard (2.5 micrometres per day) or as fast as that in an ostrich (4.2 micrometres per day). Using this range of bone growth rates, they calculated how long it would take to "grow" each specimen of Archaeopteryx to the observed size; it may have taken at least 970 days (there were 375 days in a Late Jurassic year, so roughly 2.6 years) to reach an adult size of . The study also found that the avialans Jeholornis and Sapeornis grew relatively slowly, as did the dromaeosaurid Mahakala. The avialans Confuciusornis and Ichthyornis grew relatively quickly, following a growth trend similar to that of modern birds. One of the few modern birds that exhibit slow growth is the flightless kiwi, and the authors speculated that Archaeopteryx and the kiwi had similar basal metabolic rates.

Daily activity patterns

Comparisons between the scleral rings of Archaeopteryx and modern birds and reptiles indicate that it may have been diurnal, similar to most modern birds.

Palaeoecology

The richness and diversity of the Solnhofen limestones in which all specimens of Archaeopteryx have been found have shed light on an ancient Jurassic Bavaria strikingly different from the present day. The latitude was similar to that of Florida, though the climate was likely to have been drier, as evidenced by fossils of plants with adaptations for arid conditions and a lack of terrestrial sediments characteristic of rivers. Evidence of plants, although scarce, includes cycads and conifers, while animals found include a large number of insects, small lizards, pterosaurs, and Compsognathus. The excellent preservation of Archaeopteryx fossils and other terrestrial fossils found at Solnhofen indicates that they did not travel far before becoming preserved. The Archaeopteryx specimens found were therefore likely to have lived on the low islands surrounding the Solnhofen lagoon rather than to have been corpses that drifted in from farther away. Archaeopteryx skeletons are considerably less numerous in the deposits of Solnhofen than those of pterosaurs, of which seven genera have been found. The pterosaurs included species such as Rhamphorhynchus, belonging to the Rhamphorhynchidae, the group which dominated the ecological niche currently occupied by seabirds, and which became extinct at the end of the Jurassic. The pterosaurs, which also included Pterodactylus, were common enough that it is unlikely that the specimens found are vagrants from the larger islands to the north.
The islands that surrounded the Solnhofen lagoon were low lying, semi-arid, and sub-tropical with a long dry season and little rain. The closest modern analogue for the Solnhofen conditions is said to be Orca Basin in the northern Gulf of Mexico, although it is much deeper than the Solnhofen lagoons. The flora of these islands was adapted to these dry conditions and consisted mostly of low () shrubs. Contrary to reconstructions of Archaeopteryx climbing large trees, these seem to have been mostly absent from the islands; few trunks have been found in the sediments and fossilized tree pollen also is absent. The lifestyle of Archaeopteryx is difficult to reconstruct and there are several theories regarding it. Some researchers suggest that it was primarily adapted to life on the ground, while other researchers suggest that it was principally arboreal on the basis of the curvature of the claws which has since been questioned. The absence of trees does not preclude Archaeopteryx from an arboreal lifestyle, as several species of bird live exclusively in low shrubs. Various aspects of the morphology of Archaeopteryx point to either an arboreal or ground existence, including the length of its legs and the elongation in its feet; some authorities consider it likely to have been a generalist capable of feeding in both shrubs and open ground, as well as along the shores of the lagoon. It most likely hunted small prey, seizing it with its jaws if it was small enough, or with its claws if it was larger.
Biology and health sciences
General articles
null
3038
https://en.wikipedia.org/wiki/Acid%E2%80%93base%20reaction
Acid–base reaction
In chemistry, an acid–base reaction is a chemical reaction that occurs between an acid and a base. It can be used to determine pH via titration. Several theoretical frameworks provide alternative conceptions of the reaction mechanisms and their application in solving related problems; these are called the acid–base theories, for example, Brønsted–Lowry acid–base theory. Their importance becomes apparent in analyzing acid–base reactions for gaseous or liquid species, or when acid or base character may be somewhat less apparent. The first of these concepts was provided by the French chemist Antoine Lavoisier, around 1776. It is important to think of the acid–base reaction models as theories that complement each other. For example, the current Lewis model has the broadest definition of what an acid and a base are, with the Brønsted–Lowry theory being a subset of the Lewis definition, and the Arrhenius theory being the most restrictive.

Acid–base definitions

Historic development

The concept of an acid–base reaction was first proposed in 1754 by Guillaume-François Rouelle, who introduced the word "base" into chemistry to mean a substance which reacts with an acid to give it solid form (as a salt). Bases are mostly bitter in nature.

Lavoisier's oxygen theory of acids

The first scientific concept of acids and bases was provided by Lavoisier around 1776. Since Lavoisier's knowledge of strong acids was mainly restricted to oxoacids, such as HNO3 (nitric acid) and H2SO4 (sulfuric acid), which tend to contain central atoms in high oxidation states surrounded by oxygen, and since he was not aware of the true composition of the hydrohalic acids (HF, HCl, HBr, and HI), he defined acids in terms of their containing oxygen, which in fact he named from Greek words meaning "acid-former". The Lavoisier definition held for over 30 years, until the 1810 article and subsequent lectures by Sir Humphry Davy in which he proved the lack of oxygen in hydrogen sulfide (H2S), hydrogen telluride (H2Te), and the hydrohalic acids. However, Davy failed to develop a new theory, concluding that "acidity does not depend upon any particular elementary substance, but upon peculiar arrangement of various substances". One notable modification of the oxygen theory was provided by Jöns Jacob Berzelius, who stated that acids are oxides of nonmetals while bases are oxides of metals.

Liebig's hydrogen theory of acids

In 1838, Justus von Liebig proposed that an acid is a hydrogen-containing compound whose hydrogen can be replaced by a metal. This redefinition was based on his extensive work on the chemical composition of organic acids, finishing the doctrinal shift from oxygen-based acids to hydrogen-based acids started by Davy. Liebig's definition, while completely empirical, remained in use for almost 50 years until the adoption of the Arrhenius definition.

Arrhenius definition

The first modern definition of acids and bases in molecular terms was devised by Svante Arrhenius. A hydrogen theory of acids, it followed from his 1884 work with Friedrich Wilhelm Ostwald in establishing the presence of ions in aqueous solution, and led to Arrhenius receiving the Nobel Prize in Chemistry in 1903. As defined by Arrhenius: An Arrhenius acid is a substance that ionises in water to form hydrogen ions (H+); that is, an acid increases the concentration of H+ ions in an aqueous solution. This causes the protonation of water, or the creation of the hydronium (H3O+) ion.
Thus, in modern times, the symbol is interpreted as a shorthand for , because it is now known that a bare proton does not exist as a free species in aqueous solution. This is the species which is measured by pH indicators to measure the acidity or basicity of a solution. An Arrhenius base is a substance that dissociates in water to form hydroxide () ions; that is, a base increases the concentration of ions in an aqueous solution. The Arrhenius definitions of acidity and alkalinity are restricted to aqueous solutions and are not valid for most non-aqueous solutions, and refer to the concentration of the solvent ions. Under this definition, pure and HCl dissolved in toluene are not acidic, and molten NaOH and solutions of calcium amide in liquid ammonia are not alkaline. This led to the development of the Brønsted–Lowry theory and subsequent Lewis theory to account for these non-aqueous exceptions. The reaction of an acid with a base is called a neutralization reaction. The products of this reaction are a salt and water. In this traditional representation an acid–base neutralization reaction is formulated as a double-replacement reaction. For example, the reaction of hydrochloric acid (HCl) with sodium hydroxide (NaOH) solutions produces a solution of sodium chloride (NaCl) and some additional water molecules. The modifier (aq) in this equation was implied by Arrhenius, rather than included explicitly. It indicates that the substances are dissolved in water. Though all three substances, HCl, NaOH and NaCl are capable of existing as pure compounds, in aqueous solutions they are fully dissociated into the aquated ions and . Example: Baking powder Baking powder is used to cause the dough for breads and cakes to "rise" by creating millions of tiny carbon dioxide bubbles. Baking powder is not to be confused with baking soda, which is sodium bicarbonate (). Baking powder is a mixture of baking soda (sodium bicarbonate) and acidic salts. The bubbles are created because, when the baking powder is combined with water, the sodium bicarbonate and acid salts react to produce gaseous carbon dioxide. Whether commercially or domestically prepared, the principles behind baking powder formulations remain the same. The acid–base reaction can be generically represented as shown: The real reactions are more complicated because the acids are complicated. For example, starting with sodium bicarbonate and monocalcium phosphate (), the reaction produces carbon dioxide by the following stoichiometry: A typical formulation (by weight) could call for 30% sodium bicarbonate, 5–12% monocalcium phosphate, and 21–26% sodium aluminium sulfate. Alternately, a commercial baking powder might use sodium acid pyrophosphate as one of the two acidic components instead of sodium aluminium sulfate. Another typical acid in such formulations is cream of tartar (), a derivative of tartaric acid. Brønsted–Lowry definition The Brønsted–Lowry definition, formulated in 1923, independently by Johannes Nicolaus Brønsted in Denmark and Martin Lowry in England, is based upon the idea of protonation of bases through the deprotonation of acids – that is, the ability of acids to "donate" hydrogen ions () otherwise known as protons to bases, which "accept" them. An acid–base reaction is, thus, the removal of a hydrogen ion from the acid and its addition to the base. The removal of a hydrogen ion from an acid produces its conjugate base, which is the acid with a hydrogen ion removed. 
The reception of a proton by a base produces its conjugate acid, which is the base with a hydrogen ion added. Unlike the previous definitions, the Brønsted–Lowry definition does not refer to the formation of salt and solvent, but instead to the formation of conjugate acids and conjugate bases, produced by the transfer of a proton from the acid to the base. In this approach, acids and bases are fundamentally different in behavior from salts, which are seen as electrolytes, subject to the theories of Debye, Onsager, and others. An acid and a base react not to produce a salt and a solvent, but to form a new acid and a new base. The concept of neutralization is thus absent. Brønsted–Lowry acid–base behavior is formally independent of any solvent, making it more all-encompassing than the Arrhenius model. The calculation of pH under the Arrhenius model depended on alkalis (bases) dissolving in water (aqueous solution). The Brønsted–Lowry model expanded what could be pH tested using insoluble and soluble solutions (gas, liquid, solid).

The general formula for acid–base reactions according to the Brønsted–Lowry definition is HA + B ⇌ BH+ + A−, where HA represents the acid, B represents the base, BH+ represents the conjugate acid of B, and A− represents the conjugate base of HA. For example, a Brønsted–Lowry model for the dissociation of hydrochloric acid (HCl) in aqueous solution would be the following: HCl + H2O ⇌ H3O+ + Cl−. The removal of H+ from the HCl produces the chloride ion, Cl−, the conjugate base of the acid. The addition of H+ to the H2O (acting as a base) forms the hydronium ion, H3O+, the conjugate acid of the base.

Water is amphoteric; that is, it can act as both an acid and a base. The Brønsted–Lowry model explains this, showing the dissociation of water into low concentrations of hydronium and hydroxide ions: H2O + H2O ⇌ H3O+ + OH−. Here, one molecule of water acts as an acid, donating an H+ and forming the conjugate base, OH−, and a second molecule of water acts as a base, accepting the H+ ion and forming the conjugate acid, H3O+. As an example of water acting as an acid, consider an aqueous solution of pyridine, C5H5N. In this example, a water molecule is split into a hydrogen ion, which is donated to a pyridine molecule, and a hydroxide ion.

In the Brønsted–Lowry model, the solvent does not necessarily have to be water, as is required by the Arrhenius acid–base model. For example, consider what happens when acetic acid, CH3COOH, dissolves in liquid ammonia. An H+ ion is removed from acetic acid, forming its conjugate base, the acetate ion, CH3COO−. The addition of an H+ ion to an ammonia molecule of the solvent creates its conjugate acid, the ammonium ion, NH4+.

The Brønsted–Lowry model calls hydrogen-containing substances (like HCl) acids. Thus, some substances, which many chemists considered to be acids, such as or , are excluded from this classification due to lack of hydrogen. Gilbert N. Lewis wrote in 1938, "To restrict the group of acids to those substances that contain hydrogen interferes as seriously with the systematic understanding of chemistry as would the restriction of the term oxidizing agent to substances containing oxygen." Furthermore, and are not considered Brønsted bases, but rather salts containing the bases and .

Lewis definition

The hydrogen requirement of Arrhenius and Brønsted–Lowry was removed by the Lewis definition of acid–base reactions, devised by Gilbert N. Lewis in 1923, in the same year as Brønsted–Lowry, but it was not elaborated by him until 1938.
Instead of defining acid–base reactions in terms of protons or other bonded substances, the Lewis definition defines a base (referred to as a Lewis base) to be a compound that can donate an electron pair, and an acid (a Lewis acid) to be a compound that can receive this electron pair. For example, boron trifluoride, is a typical Lewis acid. It can accept a pair of electrons as it has a vacancy in its octet. The fluoride ion has a full octet and can donate a pair of electrons. Thus is a typical Lewis acid, Lewis base reaction. All compounds of group 13 elements with a formula can behave as Lewis acids. Similarly, compounds of group 15 elements with a formula , such as amines, , and phosphines, , can behave as Lewis bases. Adducts between them have the formula with a dative covalent bond, shown symbolically as ←, between the atoms A (acceptor) and D (donor). Compounds of group 16 with a formula may also act as Lewis bases; in this way, a compound like an ether, , or a thioether, , can act as a Lewis base. The Lewis definition is not limited to these examples. For instance, carbon monoxide acts as a Lewis base when it forms an adduct with boron trifluoride, of formula . Adducts involving metal ions are referred to as co-ordination compounds; each ligand donates a pair of electrons to the metal ion. The reaction can be seen as an acid–base reaction in which a stronger base (ammonia) replaces a weaker one (water). The Lewis and Brønsted–Lowry definitions are consistent with each other since the reaction is an acid–base reaction in both theories. Solvent system definition One of the limitations of the Arrhenius definition is its reliance on water solutions. Edward Curtis Franklin studied the acid–base reactions in liquid ammonia in 1905 and pointed out the similarities to the water-based Arrhenius theory. Albert F.O. Germann, working with liquid phosgene, , formulated the solvent-based theory in 1925, thereby generalizing the Arrhenius definition to cover aprotic solvents. Germann pointed out that in many solutions, there are ions in equilibrium with the neutral solvent molecules: solvonium ions: a generic name for positive ions. These are also sometimes called solvo-acids; when protonated solvent, they are lyonium ions. solvate ions: a generic name for negative ions. These are also sometimes called solve-bases; when deprotonated solvent, they are lyate ions. For example, water and ammonia undergo such dissociation into hydronium and hydroxide, and ammonium and amide, respectively: Some aprotic systems also undergo such dissociation, such as dinitrogen tetroxide into nitrosonium and nitrate, antimony trichloride into dichloroantimonium and tetrachloroantimonate, and phosgene into chlorocarboxonium and chloride: A solute that causes an increase in the concentration of the solvonium ions and a decrease in the concentration of solvate ions is defined as an acid. A solute that causes an increase in the concentration of the solvate ions and a decrease in the concentration of the solvonium ions is defined as a base. Thus, in liquid ammonia, (supplying ) is a strong base, and (supplying ) is a strong acid. In liquid sulfur dioxide (), thionyl compounds (supplying ) behave as acids, and sulfites (supplying ) behave as bases. 
The non-aqueous acid–base reactions in liquid ammonia are similar to the reactions in water: Nitric acid can be a base in liquid sulfuric acid: The unique strength of this definition shows in describing the reactions in aprotic solvents; for example, in liquid :

Because the solvent system definition depends on the solute as well as on the solvent itself, a particular solute can be either an acid or a base depending on the choice of the solvent: is a strong acid in water, a weak acid in acetic acid, and a weak base in fluorosulfonic acid; this characteristic of the theory has been seen as both a strength and a weakness, because some substances (such as and ) have been seen to be acidic or basic in their own right. On the other hand, solvent system theory has been criticized as being too general to be useful. Also, it has been thought that there is something intrinsically acidic about hydrogen compounds, a property not shared by non-hydrogenic solvonium salts.

Lux–Flood definition

This acid–base theory was a revival of the oxygen theory of acids and bases, proposed by German chemist Hermann Lux in 1939 and further improved by Håkon Flood; it is still used in the modern geochemistry and electrochemistry of molten salts. This definition describes an acid as an oxide ion (O^2−) acceptor and a base as an oxide ion donor. For example: This theory is also useful in the systematisation of the reactions of noble gas compounds, especially the xenon oxides, fluorides, and oxofluorides.

Usanovich definition

Mikhail Usanovich developed a general theory that does not restrict acidity to hydrogen-containing compounds; his approach, published in 1938, was even more general than Lewis theory. Usanovich's theory can be summarized as defining an acid as anything that accepts negative species or donates positive ones, and a base as the reverse. This defined the concept of redox (oxidation–reduction) as a special case of acid–base reactions. Some examples of Usanovich acid–base reactions include:

Rationalizing the strength of Lewis acid–base interactions

HSAB theory

In 1963, Ralph Pearson proposed a qualitative concept known as the Hard and Soft Acids and Bases principle, later made quantitative with the help of Robert Parr in 1984. 'Hard' applies to species that are small, have high charge states, and are weakly polarizable. 'Soft' applies to species that are large, have low charge states, and are strongly polarizable. Acids and bases interact, and the most stable interactions are hard–hard and soft–soft. This theory has found use in organic and inorganic chemistry.

ECW model

The ECW model created by Russell S. Drago is a quantitative model that describes and predicts the strength of Lewis acid–base interactions, −ΔH. The model assigned E and C parameters to many Lewis acids and bases. Each acid is characterized by an E_A and a C_A. Each base is likewise characterized by its own E_B and C_B. The E and C parameters refer, respectively, to the electrostatic and covalent contributions to the strength of the bonds that the acid and base will form. The equation is −ΔH = E_A E_B + C_A C_B + W. The W term represents a constant energy contribution for acid–base reactions such as the cleavage of a dimeric acid or base. The equation predicts reversals of acid and base strengths. The graphical presentations of the equation show that there is no single order of Lewis base strengths or Lewis acid strengths.

Acid–base equilibrium

The reaction of a strong acid with a strong base is essentially a quantitative reaction.
For example, the reaction of hydrochloric acid with sodium hydroxide: HCl(aq) + NaOH(aq) → NaCl(aq) + H2O(l). In this reaction both the sodium and chloride ions are spectators, as the neutralization reaction H+ + OH− → H2O does not involve them. With weak bases, addition of acid is not quantitative, because a solution of a weak base is a buffer solution. A solution of a weak acid is also a buffer solution. When a weak acid reacts with a weak base, an equilibrium mixture is produced. For example, adenine, written as AH, can react with a hydrogen phosphate ion, HPO4^2−. The equilibrium constant for this reaction can be derived from the acid dissociation constants of adenine and of the dihydrogen phosphate ion. The notation [X] signifies "concentration of X". When these two equations are combined by eliminating the hydrogen ion concentration, an expression for the equilibrium constant, K, is obtained.

Acid–alkali reaction

An acid–alkali reaction is a special case of an acid–base reaction, where the base used is also an alkali. When an acid reacts with an alkali salt (a metal hydroxide), the product is a metal salt and water. Acid–alkali reactions are also neutralization reactions. In general, acid–alkali reactions can be simplified to H+(aq) + OH−(aq) → H2O(l) by omitting spectator ions. Acids are in general pure substances that contain hydrogen cations (H+) or cause them to be produced in solutions. Hydrochloric acid (HCl) and sulfuric acid (H2SO4) are common examples. In water, these break apart into ions: HCl → H+(aq) + Cl−(aq), and H2SO4 → 2 H+(aq) + SO4^2−(aq). The alkali breaks apart in water, yielding dissolved hydroxide ions: for example, NaOH → Na+(aq) + OH−(aq).
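For the adenine and hydrogen phosphate example in the acid–base equilibrium discussion above, the way the two acid dissociation constants combine can be written out explicitly. The derivation below is a standard textbook sketch of the step the text describes (eliminating the hydrogen-ion concentration); the symbols follow the article's notation, and the final form is implied by, rather than quoted from, the source.

$$\mathrm{AH} + \mathrm{HPO_4^{2-}} \rightleftharpoons \mathrm{A^-} + \mathrm{H_2PO_4^-}$$

$$K=\frac{[\mathrm{A^-}][\mathrm{H_2PO_4^-}]}{[\mathrm{AH}][\mathrm{HPO_4^{2-}}]}
=\frac{[\mathrm{A^-}][\mathrm{H^+}]}{[\mathrm{AH}]}\cdot\frac{[\mathrm{H_2PO_4^-}]}{[\mathrm{H^+}][\mathrm{HPO_4^{2-}}]}
=\frac{K_a(\mathrm{AH})}{K_a(\mathrm{H_2PO_4^-})}$$

Here K_a(AH) is the acid dissociation constant of adenine and K_a(H2PO4−) is that of the dihydrogen phosphate ion; the hydrogen-ion concentration cancels, which is exactly the elimination step mentioned in the text.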
Physical sciences
Chemistry
null
3049
https://en.wikipedia.org/wiki/Autumn
Autumn
Autumn, also known as fall, is one of the four temperate seasons on Earth. Outside the tropics, autumn marks the transition from summer to winter, in September (Northern Hemisphere) or March (Southern Hemisphere). Autumn is the season when the duration of daylight becomes noticeably shorter and the temperature cools considerably. Day length decreases and night length increases as the season progresses until the winter solstice in December (Northern Hemisphere) and June (Southern Hemisphere). One of its main features in temperate climates is the striking change in colour of the leaves of deciduous trees as they prepare to shed. Date definitions Some cultures regard the autumnal equinox as "mid-autumn", while others with a longer temperature lag treat the equinox as the start of autumn. In the English-speaking world of high latitude countries, autumn traditionally began with Lammas Day and ended around Hallowe'en, the approximate mid-points between midsummer, the autumnal equinox, and midwinter. Meteorologists (and Australia and most of the temperate countries in the southern hemisphere) use a definition based on Gregorian calendar months, with autumn being September, October, and November in the northern hemisphere, and March, April, and May in the southern hemisphere. In the higher latitude countries in the Northern Hemisphere, autumn traditionally starts with the September equinox (21 to 24 September) and ends with the winter solstice (21 or 22 December). Popular culture in the United States associates Labor Day, the first Monday in September, as the end of summer and the start of autumn. Certain summer traditions, such as wearing white, are discouraged after that date. As daytime and nighttime temperatures decrease, trees change colour and then shed their leaves. Persians celebrate the beginning of the autumn on Mehregan. Under the traditional East Asian solar term system, autumn starts on or around 8 August and ends on or about 7 November. In Ireland, the autumn months according to the national meteorological service, Met Éireann, are September, October, and November. However, according to the Irish Calendar, which is based on ancient Gaelic traditions, autumn lasts throughout the months of August, September, and October, or possibly a few days later, depending on tradition. In the Irish language, September is known as ("middle of autumn") and October as ("end of autumn"). Late Roman Republic scholar Marcus Terentius Varro defined autumn as lasting from the third day before the Ides of Sextilis (August 11) to the fifth day before the Ides of November (November 9). Etymology The word autumn () is derived from Latin autumnus, archaic auctumnus, possibly from the ancient Etruscan root autu- and has within it connotations of the passing of the year. Alternative etymologies include ) or ('dry'). After the Greek era, the word continued to be used as the Old French word ( in modern French) or in Middle English, and was later normalised to the original Latin. In the Medieval period, there are rare examples of its use as early as the 12th century, but by the 16th century, it was in common use. Before the 16th century, harvest was the term usually used to refer to the season, as it is common in other West Germanic languages to this day (cf. Dutch , German , and Scots ). 
However, as more people gradually moved from working the land to living in towns, the word harvest lost its reference to the time of year and came to refer only to the actual activity of reaping, and autumn, as well as fall, began to replace it as a reference to the season. The alternative word fall for the season traces its origins to old Germanic languages. The exact derivation is unclear, with the Old English or and the Old Norse all being possible candidates. However, these words all have the meaning "to fall from a height" and are clearly derived either from a common root or from each other. The term came to denote the season in 16th-century England, a contraction of Middle English expressions like "fall of the leaf" and "fall of the year". Compare the origin of spring from "spring of the leaf" and "spring of the year". During the 17th century, English settlers began emigrating to the new North American colonies, and took the English language with them. While the term fall gradually became nearly obsolete in Britain, it became the more common term in North America. The name backend, a once common name for the season in Northern England, has today been largely replaced by the name autumn. Associations Harvest Association with the transition from warm to cold weather, and its related status as the season of the primary harvest, has dominated its themes and popular images. In Western cultures, personifications of autumn are usually pretty, well-fed females adorned with fruits, vegetables and grains that ripen at this time. Many cultures feature autumnal harvest festivals, often the most important on their calendars. Still-extant echoes of these celebrations are found in the autumn Thanksgiving holiday of the United States and Canada, and the Jewish Sukkot holiday with its roots as a full-moon harvest festival of "tabernacles" (living in outdoor huts around the time of harvest). There are also the many festivals celebrated by indigenous peoples of the Americas tied to the harvest of ripe foods gathered in the wild, the Chinese Mid-Autumn or Moon festival, and many others. The predominant mood of these autumnal celebrations is a gladness for the fruits of the earth mixed with a certain melancholy linked to the imminent arrival of harsh weather. This view is presented in English poet John Keats' poem To Autumn, where he describes the season as a time of bounteous fecundity, a time of "mellow fruitfulness". In North America, while most foods are harvested during the autumn, foods usually associated with the season include pumpkins (which are integral parts of both Thanksgiving and Halloween) and apples, which are used to make the seasonal beverage apple cider. Melancholia Autumn, especially in poetry, has often been associated with melancholia. The possibilities and opportunities of summer are gone, and the chill of winter is on the horizon. Skies turn grey, the amount of usable daylight drops rapidly, and many people turn inward, both physically and mentally. It has been referred to as an unhealthy season. Similar examples may be found in Irish poet W.B. Yeats' poem The Wild Swans at Coole where the maturing season that the poet observes symbolically represents his own ageing self. Like the natural world that he observes, he too has reached his prime and now must look forward to the inevitability of old age and death. French poet Paul Verlaine's "Chanson d'automne" ("Autumn Song") is likewise characterised by strong, painful feelings of sorrow. 
Keats' To Autumn, written in September 1819, echoes this sense of melancholic reflection but also emphasises the lush abundance of the season. The song "Autumn Leaves", based on the French song "Les Feuilles mortes", uses the melancholic atmosphere of the season and the end of summer as a metaphor for the mood of being separated from a loved one. Halloween Autumn is associated with Halloween (influenced by Samhain, a Celtic autumn festival), and with it a widespread marketing campaign that promotes it. The Celtic people also used this time to celebrate the harvest with a time of feasting. At the same time though, it was a celebration of death as well. Crops were harvested, livestock were butchered, and Winter was coming. Halloween, 31 October, is in autumn in the northern hemisphere. Television, film, book, costume, home decoration, and confectionery businesses use this time of year to promote products closely associated with such a holiday, with promotions going from late August or early September to 31 October, since their themes rapidly lose strength once the holiday ends, and advertising starts concentrating on Christmas. Other associations In some parts of the northern hemisphere, autumn has a strong association with the end of summer holiday and the start of a new school year, particularly for children in primary and secondary education. "Back to School" advertising and preparations usually occurs in the weeks leading to the beginning of autumn. Thanksgiving Day is a national holiday celebrated in Canada, in the United States, in some of the Caribbean islands and in Liberia. Thanksgiving is celebrated on the second Monday of October in Canada, on the fourth Thursday of November in the United States (where it is commonly regarded as the start of the Christmas and holiday season), and around the same part of the year in other places. Similarly named festival holidays occur in Germany and Japan. Television stations and networks, particularly in North America, traditionally begin their regular seasons in their autumn, with new series and new episodes of existing series debuting mostly during late September or early October (series that debut outside the autumn season are usually known as mid-season replacements). A sweeps period takes place in November to measure Nielsen Ratings. American football is played almost exclusively in the autumn months; at the high school level, seasons run from late August through early November, with some playoff games and holiday rivalry contests being played as late as Thanksgiving. In many American states, the championship games take place in early December. College football's regular season runs from September through November, while the main professional circuit, the National Football League, plays from September through to early January. Summer sports, such as association football (in Northern America, East Asia, Argentina, and South Africa), Canadian football, stock car racing, tennis, golf, cricket, and professional baseball, wrap up their seasons in early to late autumn; Major League Baseball's championship World Series is popularly known as the "Fall Classic". (Amateur baseball is usually finished by August.) 
Likewise, professional winter sports, such as ice hockey and basketball, and most leagues of association football in Europe, are in the early stages of their seasons during autumn; American college basketball and college ice hockey play teams outside their athletic conferences during the late autumn before their in-conference schedules begin in winter. The Christian religious holidays of All Saints' Day and All Souls' Day are observed in autumn in the Northern hemisphere. Easter falls in autumn in the southern hemisphere. The secular celebration of International Workers' Day also falls in autumn in the southern hemisphere. Since 1997, Autumn has been one of the top 100 names for girls in the United States. In Indian mythology, autumn is considered to be the preferred season for the goddess of learning Saraswati, who is also known by the name of "goddess of autumn" (Sharada). In Asian mysticism, Autumn is associated with the element of metal, and subsequently with the colour white, the White Tiger of the West, and death and mourning. Tourism Although colour change in leaves occurs wherever deciduous trees are found, coloured autumn foliage is noted in various regions of the world: most of North America, Eastern Asia (including China, Korea, and Japan), Europe, southeast, south, and part of the midwest of Brazil, the forest of Patagonia, eastern Australia and New Zealand's South Island. Eastern Canada and New England are famous for their autumnal foliage, and this attracts major tourism (worth billions of US dollars) for the regions. Views of autumn Allegories of autumn in art
Physical sciences
Seasons
null
3072
https://en.wikipedia.org/wiki/Arcturus
Arcturus
Arcturus is the brightest star in the northern constellation of Boötes. With an apparent visual magnitude of −0.05, it is the fourth-brightest star in the night sky, and the brightest in the northern celestial hemisphere. The name Arcturus originated in ancient Greece; the star was then cataloged as α Boötis by Johann Bayer in 1603, which is Latinized to Alpha Boötis. Arcturus forms one corner of the Spring Triangle asterism. Located relatively close at 36.7 light-years from the Sun, Arcturus is a red giant of spectral type K1.5III—an aging star around 7.1 billion years old that has used up its core hydrogen and evolved off the main sequence. It is about the same mass as the Sun, but has expanded to 25 times its size (around 35 million kilometers) and is around 170 times as luminous. Nomenclature The traditional name Arcturus is Latinised from the ancient Greek Ἀρκτοῦρος (Arktouros) and means "Guardian of the Bear", ultimately from ἄρκτος (arktos), "bear", and οὖρος (ouros), "watcher, guardian". The designation of Arcturus as α Boötis (Latinised to Alpha Boötis) was made by Johann Bayer in 1603. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Arcturus for α Boötis. Observation With an apparent visual magnitude of −0.05, Arcturus is the brightest star in the northern celestial hemisphere and the fourth-brightest star in the night sky, after Sirius (apparent magnitude −1.46), Canopus (−0.72) and α Centauri (combined magnitude −0.27). However, α Centauri AB is a binary star, whose components are each fainter than Arcturus. This makes Arcturus the third-brightest individual star, just ahead of α Centauri A (officially named Rigil Kentaurus), whose apparent magnitude is only very slightly fainter. The French mathematician and astronomer Jean-Baptiste Morin observed Arcturus in the daytime with a telescope in 1635, the first recorded full-daylight viewing of any star other than the Sun and supernovae. Arcturus has also been seen at or just before sunset with the naked eye. Arcturus is visible from both of Earth's hemispheres, as it is located 19° north of the celestial equator. The star culminates at midnight on 27 April and at 9 p.m. on 10 June, making it visible during the late northern spring or the southern autumn. From the northern hemisphere, an easy way to find Arcturus is to follow the arc of the handle of the Big Dipper (or Plough in the UK); continuing along this path leads to Spica: "Arc to Arcturus, then spike (or speed on) to Spica". Together with the bright stars Spica and Regulus (or Denebola, depending on the source), Arcturus is part of the Spring Triangle asterism. With Cor Caroli, these four stars form the Great Diamond asterism. Ptolemy described Arcturus as subrufa ("slightly red"): it has a B−V color index of +1.23, roughly midway between Pollux (B−V +1.00) and Aldebaran (B−V +1.54). η Boötis, or Muphrid, lies only 3.3 light-years from Arcturus; seen from Arcturus, Muphrid would have a visual magnitude of −2.5, about as bright as Jupiter at its brightest appears from Earth, while an observer in the Muphrid system would see Arcturus at about magnitude −5.0, slightly brighter than Venus as seen from Earth, but with an orangish color. 
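The magnitude quoted for the view from the Muphrid system follows directly from the inverse-square law: changing the distance to a star by a factor f changes its apparent magnitude by 5 log10 f. The minimal Python sketch below reproduces the quoted value using only the numbers given above (magnitude −0.05 from Earth, 36.7 light-years from Earth, 3.3 light-years from Muphrid); it is an illustrative check, not a rigorous calculation.

```python
import math

# Apparent magnitude of Arcturus as seen from Muphrid (eta Bootis).
# Moving from distance d1 to d2 changes apparent magnitude by 5 * log10(d2 / d1).
m_from_earth = -0.05    # apparent visual magnitude of Arcturus seen from Earth (from the text)
d_from_earth = 36.7     # Earth -> Arcturus distance, light-years (from the text)
d_from_muphrid = 3.3    # Muphrid -> Arcturus distance, light-years (from the text)

delta_m = 5 * math.log10(d_from_muphrid / d_from_earth)
m_from_muphrid = m_from_earth + delta_m
print(f"Arcturus seen from Muphrid: m = {m_from_muphrid:.1f}")  # about -5.3, close to the quoted "about -5"
```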
Physical characteristics Based upon an annual parallax shift of 88.83 milliarcseconds, as measured by the Hipparcos satellite, Arcturus lies about 36.7 light-years (11.26 parsecs) from Earth. The parallax margin of error is 0.54 milliarcseconds, translating to a distance uncertainty of roughly ±0.2 light-years. Because of its proximity, Arcturus has a high proper motion, two arcseconds a year, greater than that of any first-magnitude star other than α Centauri. It is the second-closest giant star to Earth, after Pollux. Arcturus is moving rapidly relative to the Sun and is now almost at its closest point to the Sun. Closest approach will happen in about 4,000 years, when the star will be a few hundredths of a light-year closer to Earth than it is today. (In antiquity, Arcturus was closer to the centre of the constellation.) Arcturus is thought to be an old-disk star, and appears to be moving with a group of 52 other such stars, known as the Arcturus stream. With an absolute magnitude of −0.30, Arcturus is, together with Vega and Sirius, one of the most luminous stars in the Sun's neighborhood. It is about 110 times brighter than the Sun at visible wavelengths, but this underestimates its output because much of the light it gives off is in the infrared; its total (bolometric) power output is about 180 times that of the Sun. With a near-infrared J band magnitude of −2.2, only Betelgeuse (−2.9) and R Doradus (−2.6) are brighter. The lower output in visible light is due to a lower luminous efficacy, since the star has a lower surface temperature than the Sun. There have been suggestions that Arcturus might be a member of a binary system with a faint, cool companion, but no companion has been directly detected. In the absence of a binary companion, the mass of Arcturus cannot be measured directly, but models suggest it is slightly greater than that of the Sun; evolutionary matching to the observed physical parameters and the oxygen isotope ratio expected for a first dredge-up star both yield masses modestly above one solar mass. The star, given its evolutionary state, is expected to have undergone significant mass loss in the past. The star displays magnetic activity that is heating the coronal structures, and it undergoes a solar-type magnetic cycle with a duration that is probably less than 14 years. A weak magnetic field has been detected in the photosphere with a strength of around half a gauss. The magnetic activity appears to lie along four latitudes and is rotationally modulated. Arcturus is estimated to be around 6 to 8.5 billion years old, but there is some uncertainty about its evolutionary status. Based upon its color characteristics, Arcturus is currently ascending the red-giant branch and will continue to do so until it accumulates a large enough degenerate helium core to ignite the helium flash. It has likely exhausted the hydrogen in its core and is now in its hydrogen-shell-burning phase. However, Charbonnel et al. (1998) placed it slightly above the horizontal branch, and suggested it has already completed the helium flash stage. Spectrum Arcturus has evolved off the main sequence to the red giant branch, reaching an early K-type stellar classification. It is frequently assigned the spectral type K0III, but in 1989 it was adopted as the spectral standard for type K1.5III Fe−0.5, the suffix notation indicating a mild underabundance of iron compared to typical stars of its type. As the brightest K-type giant in the sky, it has been the subject of multiple spectral atlases with coverage from the ultraviolet to the infrared. 
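The distance and motion figures above can be checked with the standard relations d [pc] = 1 / parallax [arcsec] and v_t [km/s] = 4.74 × μ [arcsec/yr] × d [pc]. The short sketch below uses only the parallax, its quoted uncertainty, and the proper motion given in this section; the factor 4.74 is the usual unit-conversion constant, and the result is the tangential (sky-plane) speed only, not the full space velocity.

```python
# Distance from the Hipparcos parallax and tangential speed from the proper motion,
# using only figures quoted in this section.
parallax_mas = 88.83          # annual parallax shift, milliarcseconds
parallax_err_mas = 0.54       # quoted parallax uncertainty, milliarcseconds
proper_motion = 2.0           # proper motion, arcseconds per year

d_pc = 1000.0 / parallax_mas                         # about 11.26 parsecs
d_ly = d_pc * 3.2616                                 # about 36.7 light-years (1 pc = 3.2616 ly)
d_err_ly = d_ly * parallax_err_mas / parallax_mas    # about +/- 0.2 light-years

v_tangential = 4.74 * proper_motion * d_pc           # km/s across the line of sight

print(f"distance = {d_ly:.1f} +/- {d_err_ly:.1f} ly; tangential speed = {v_tangential:.0f} km/s")
```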
The spectrum shows a dramatic transition from emission lines in the ultraviolet to atomic absorption lines in the visible range and molecular absorption lines in the infrared. This is due to the optical depth of the atmosphere varying with wavelength. The spectrum shows very strong absorption in some molecular lines that are not produced in the photosphere but in a surrounding shell. Examination of carbon monoxide lines show the molecular component of the atmosphere extending outward to 2–3 times the radius of the star, with the chromospheric wind steeply accelerating to 35–40 km/s in this region. Astronomers term "metals" those elements with higher atomic numbers than helium. The atmosphere of Arcturus has an enrichment of alpha elements relative to iron but only about a third of solar metallicity. Arcturus is possibly a Population II star. Oscillations As one of the brightest stars in the sky, Arcturus has been the subject of a number of studies in the emerging field of asteroseismology. Belmonte and colleagues carried out a radial velocity (Doppler shift of spectral lines) study of the star in April and May 1988, which showed variability with a frequency of the order of a few microhertz (μHz), the highest peak corresponding to 4.3 μHz (2.7 days) with an amplitude of 60 ms−1, with a frequency separation of c. 5 μHz. They suggested that the most plausible explanation for the variability of Arcturus is stellar oscillations. Asteroseismological measurements allow direct calculation of the mass and radius, giving values of and . This form of modelling is still relatively inaccurate, but a useful check on other models. Search for planets Hipparcos satellite astrometry suggested that Arcturus is a binary star, with the companion about twenty times dimmer than the primary and orbiting close enough to be at the very limits of humans' current ability to make it out. Recent results remain inconclusive, but do support the marginal Hipparcos detection of a binary companion. In 1993, radial velocity measurements of Aldebaran, Arcturus and Pollux showed that Arcturus exhibited a long-period radial velocity oscillation, which could be interpreted as a substellar companion. This substellar object would be nearly 12 times the mass of Jupiter and be located roughly at the same orbital distance from Arcturus as the Earth is from the Sun, at 1.1 astronomical units. However, all three stars surveyed showed similar oscillations yielding similar companion masses, and the authors concluded that the variation was likely to be intrinsic to the star rather than due to the gravitational effect of a companion. So far no substellar companion has been confirmed. Mythology One astronomical tradition associates Arcturus with the mythology around Arcas, who was about to shoot and kill his own mother Callisto who had been transformed into a bear. Zeus averted their imminent tragic fate by transforming the boy into the constellation Boötes, called Arctophylax "bear guardian" by the Greeks, and his mother into Ursa Major (Greek: Arctos "the bear"). The account is given in Hyginus's Astronomy. Aratus in his Phaenomena said that the star Arcturus lay below the belt of Arctophylax, and according to Ptolemy in the Almagest it lay between his thighs. An alternative lore associates the name with the legend around Icarius, who gave the gift of wine to other men, but was murdered by them, because they had had no experience with intoxication and mistook the wine for poison. 
It is stated that Icarius became Arcturus while his dog, Maira, became Canicula (Procyon), although "Arcturus" here may be used in the sense of the constellation rather than the star. Cultural significance As one of the brightest stars in the sky, Arcturus has been significant to observers since antiquity. In ancient Mesopotamia, it was linked to the god Enlil, and also known as Shudun, "yoke", or SHU-PA of unknown derivation in the Three Stars Each Babylonian star catalogues and later MUL.APIN around 1100 BC. In ancient Greek, the star is found in ancient astronomical literature, e.g. Hesiod's Work and Days, circa 700 BC, as well as Hipparchus's and Ptolemy's star catalogs. The folk-etymology connecting the star name with the bears (Greek: ἄρκτος, arktos) was probably invented much later. It fell out of use in favour of Arabic names until it was revived in the Renaissance. Arcturus is also mentioned in Plato's "Laws" (844e) as a herald for the season of vintage, specifically figs and grapes. In Arabic, Arcturus is one of two stars called al-simāk "the uplifted ones" (the other is Spica). Arcturus is specified as السماك الرامح as-simāk ar-rāmiħ "the uplifted one of the lancer". The term Al Simak Al Ramih has appeared in Al Achsasi Al Mouakket catalogue (translated into Latin as Al Simak Lanceator). This has been variously romanized in the past, leading to obsolete variants such as Aramec and Azimech. For example, the name Alramih is used in Geoffrey Chaucer's A Treatise on the Astrolabe (1391). Another Arabic name is Haris-el-sema, from حارس السماء ħāris al-samā’ "the keeper of heaven". or حارس الشمال ħāris al-shamāl’ "the keeper of north". In Indian astronomy, Arcturus is called Swati or Svati (Devanagari स्वाति, Transliteration IAST svāti, svātī́), possibly 'su' + 'ati' ("great goer", in reference to its remoteness) meaning very beneficent. It has been referred to as "the real pearl" in Bhartṛhari's kāvyas. In Chinese astronomy, Arcturus is called Da Jiao (), because it is the brightest star in the Chinese constellation called Jiao Xiu (). Later it became a part of another constellation Kang Xiu (). The Wotjobaluk Koori people of southeastern Australia knew Arcturus as Marpean-kurrk, mother of Djuit (Antares) and another star in Boötes, Weet-kurrk (Muphrid). Its appearance in the north signified the arrival of the larvae of the wood ant (a food item) in spring. The beginning of summer was marked by the star's setting with the Sun in the west and the disappearance of the larvae. The people of Milingimbi Island in Arnhem Land saw Arcturus and Muphrid as man and woman, and took the appearance of Arcturus at sunrise as a sign to go and harvest rakia or spikerush. The Weilwan of northern New South Wales knew Arcturus as Guembila "red". Prehistoric Polynesian navigators knew Arcturus as Hōkūleʻa, the "Star of Joy". Arcturus is the zenith star of the Hawaiian Islands. Using Hōkūleʻa and other stars, the Polynesians launched their double-hulled canoes from Tahiti and the Marquesas Islands. Traveling east and north they eventually crossed the equator and reached the latitude at which Arcturus would appear directly overhead in the summer night sky. Knowing they had arrived at the exact latitude of the island chain, they sailed due west on the trade winds to landfall. If Hōkūleʻa could be kept directly overhead, they landed on the southeastern shores of the Big Island of Hawaii. For a return trip to Tahiti the navigators could use Sirius, the zenith star of that island. 
Since 1976, the Polynesian Voyaging Society's Hōkūleʻa has crossed the Pacific Ocean many times under navigators who have incorporated this wayfinding technique in their non-instrument navigation. Arcturus had several other names that described its significance to indigenous Polynesians. In the Society Islands, Arcturus, called Ana-tahua-taata-metua-te-tupu-mavae ("a pillar to stand by"), was one of the ten "pillars of the sky", bright stars that represented the ten heavens of the Tahitian afterlife. In Hawaii, the pattern of Boötes was called Hoku-iwa, meaning "stars of the frigatebird". This constellation marked the path for Hawaiʻiloa on his return to Hawaii from the South Pacific Ocean. The Hawaiians called Arcturus Hoku-leʻa. It was equated to the Tuamotuan constellation Te Kiva, meaning "frigatebird", which could either represent the figure of Boötes or just Arcturus. However, Arcturus may instead be the Tuamotuan star called Turu. The Hawaiian name for Arcturus as a single star was likely Hoku-leʻa, which means "star of gladness", or "clear star". In the Marquesas Islands, Arcturus was probably called Tau-tou and was the star that ruled the month approximating January. The Māori and Moriori called it Tautoru, a variant of the Marquesan name and a name shared with Orion's Belt. In Inuit astronomy, Arcturus is called the Old Man (Uttuqalualuk in Inuit languages) and The First Ones (Sivulliik in Inuit languages). The Miꞌkmaq of eastern Canada saw Arcturus as Kookoogwéss, the owl. Early-20th-century Armenian scientist Nazaret Daghavarian theorized that the star commonly referred to in Armenian folklore as Gutani astgh (Armenian: Գութանի աստղ; lit. star of the plow) was in fact Arcturus, as the constellation of Boötes was called "Ezogh" (Armenian: Եզող; lit. the person who is plowing) by Armenians. In popular culture In Ancient Rome, the star's celestial activity was supposed to portend tempestuous weather, and a personification of the star acts as narrator of the prologue to Plautus' comedy Rudens (circa 211 BC). The Kāraṇḍavyūha Sūtra, compiled at the end of the 4th century or beginning of the 5th century, names one of Avalokiteśvaras meditative absorptions as "The face of Arcturus". One of the possible etymologies offered for the name "Arthur" assumes that it is derived from "Arcturus" and that the late 5th to early 6th-century figure on whom the myth of King Arthur is based was originally named for the star. In the Middle Ages, Arcturus was considered a Behenian fixed star and attributed to the stone jasper and the plantain herb. Cornelius Agrippa listed its kabbalistic sign under the alternate name Alchameth. Arcturus's light was employed in the mechanism used to open the 1933 Chicago World's Fair. The star was chosen as it was thought that light from Arcturus had started its journey at about the time of the previous Chicago World's Fair in 1893 (at 36.7 light-years away, the light actually started in 1896). At the height of the American Civil War, President Abraham Lincoln observed Arcturus through a 9.6-inch refractor telescope when he visited the Naval Observatory in Washington, D.C., in August 1863.
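The 1933 World's Fair anecdote above is a simple light-travel-time subtraction: with the 36.7-light-year distance quoted for Arcturus, the arriving light left the star in the mid-1890s rather than 1893.

```python
# Light-travel-time check for the 1933 Chicago World's Fair anecdote:
# light arriving in 1933 from a star 36.7 light-years away set out in...
print(1933 - 36.7)   # 1896.3 -> "actually started in 1896", not 1893
```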
Physical sciences
Notable stars
null
3076
https://en.wikipedia.org/wiki/Antares
Antares
Antares is the brightest star in the constellation of Scorpius. It has the Bayer designation α Scorpii, which is Latinised to Alpha Scorpii. Often referred to as "the heart of the scorpion", Antares is flanked by σ Scorpii and τ Scorpii near the center of the constellation. Distinctly reddish when viewed with the naked eye, Antares is a slow irregular variable star that ranges in brightness from an apparent visual magnitude of +0.6 down to +1.6. It is on average the fifteenth-brightest star in the night sky. Antares is the brightest and most evolved stellar member of the Scorpius–Centaurus association, the nearest OB association to the Sun. It is located about from Earth at the rim of the Upper Scorpius subgroup, and is illuminating the Rho Ophiuchi cloud complex in its foreground. Classified as spectral type M1.5Iab-Ib, Antares is a red supergiant, a large evolved massive star and one of the largest stars visible to the naked eye. If placed at the center of the Solar System, it would extend out to somewhere in the asteroid belt. Its mass is calculated to be around 13 or 15 to 16 times that of the Sun. Antares appears as a single star when viewed with the naked eye, but it is actually a binary star system, with its two components called α Scorpii A and α Scorpii B. The brighter of the pair is the red supergiant, while the fainter is a hot main sequence star of magnitude 5.5. They have a projected separation of about . Its traditional name Antares derives from the Ancient Greek , meaning "rival to Ares", due to the similarity of its reddish hue to the appearance of the planet Mars. Nomenclature α Scorpii (Latinised to Alpha Scorpii) is the star's Bayer designation. Antares has the Flamsteed designation 21 Scorpii, as well as catalogue designations such as HR 6134 in the Bright Star Catalogue and HD 148478 in the Henry Draper Catalogue. As a prominent infrared source, it appears in the Two Micron All-Sky Survey catalogue as 2MASS J16292443-2625549 and the Infrared Astronomical Satellite (IRAS) Sky Survey Atlas catalogue as IRAS 16262–2619. It is also catalogued as a double star WDS J16294-2626 and CCDM J16294-2626. Antares is a variable star and is listed in the General Catalogue of Variable Stars, but as a Bayer-designated star it does not have a separate variable star designation. Its traditional name Antares derives from the Ancient Greek , meaning "rival to Ares", due to the similarity of its reddish hue to the appearance of the planet Mars. The comparison of Antares with Mars may have originated with early Mesopotamian astronomers which is considered an outdated speculation, because the name of this star in Mesopotamian astronomy has always been "heart of Scorpion" and it was associated with the goddess Lisin. Some scholars have speculated that the star may have been named after Antar, or Antarah ibn Shaddad, the Arab warrior-hero celebrated in the pre-Islamic poems Mu'allaqat. However, the name "Antares" is already proven in the Greek culture, e.g. in Ptolemy's Almagest and Tetrabiblos. In 2016, the International Astronomical Union organised a Working Group on Star Names (WGSN) to catalog and standardise proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Antares for the star α Scorpii A. It is now so entered in the IAU Catalog of Star Names. Observation Antares is visible all night around May 31 of each year, when the star is at opposition to the Sun. 
Antares then rises at dusk and sets at dawn as seen at the equator. For two to three weeks on either side of November 30, Antares is not visible in the night sky from mid-northern latitudes, because it is near conjunction with the Sun. In higher northern latitudes, Antares is only visible low in the south in summertime. Higher than 64° northern latitude, the star does not rise at all. Antares is easier to see from the southern hemisphere due to its southerly declination. In the whole of Antarctica, the star is circumpolar as the whole continent is above 64° S latitude. History Radial velocity variations were observed in the spectrum of Antares in the early 20th century, and attempts were made to derive spectroscopic orbits. It became apparent that the small variations could not be due to orbital motion, and they were actually caused by pulsation of the star's atmosphere. Even in 1928, it was calculated that the size of the star must vary by about 20%. Antares was first reported to have a companion star by Johann Tobias Bürg during an occultation on April 13, 1819, although this was not widely accepted and dismissed as a possible atmospheric effect. It was then observed by Scottish astronomer James William Grant FRSE while in India on 23 July 1844. It was rediscovered by Ormsby M. Mitchel in 1846 and measured by William Rutter Dawes in April 1847. In 1952, Antares was reported to vary in brightness. A photographic magnitude range from 3.00 to 3.16 was described. The brightness has been monitored by the American Association of Variable Star Observers since 1945, and it has been classified as an LC slow irregular variable star, whose apparent magnitude slowly varies between extremes of +0.6 and +1.6, although usually near magnitude +1.0. There is no obvious periodicity, but statistical analyses have suggested periods of 1,733 days or days. No separate long secondary period has been detected, although it has been suggested that primary periods longer than a thousand days are analogous to long secondary periods. Research published in 2018 demonstrated that Ngarrindjeri Aboriginal people from South Australia observed the variability of Antares and incorporated it into their oral traditions as Waiyungari (meaning 'red man'). Occultations and conjunctions Antares is 4.57 degrees south of the ecliptic, one of four first magnitude stars within 6° of the ecliptic (the others are Spica, Regulus and Aldebaran), so it can be occulted by the Moon. The occultation of 31 July 2009 was visible in much of southern Asia and the Middle East. Every year around December 2 the Sun passes 5° north of Antares. Lunar occultations of Antares are fairly common, depending on the 18.6-year cycle of the lunar nodes. The last cycle ended in 2010 and the next begins in 2023. Shown at right is a video of a reappearance event, clearly showing events for both components. Antares can also be occulted by the planets, e.g. Venus, but these events are rare. The last occultation of Antares by Venus took place on September 17, 525 BC; the next one will be November 17, 2400. Other planets have been calculated not to have occulted Antares over the last millennium, nor will they in the next millennium, as most planets stay near the ecliptic and pass north of Antares. Venus will be extremely near Antares on October 19, 2117, and every eight years thereafter through to October 29, 2157, it will pass south of the star. 
Illumination of Rho Ophiuchi cloud complex Antares is the brightest and most evolved stellar member of the Scorpius–Centaurus association, the nearest OB association to the Sun. It is a member of the Upper Scorpius subgroup of the association, which contains thousands of stars with a mean age of 11 million years. Antares lies at the rim of the Upper Scorpius subgroup, and is illuminating the Rho Ophiuchi cloud complex in its foreground. The illuminated cloud is sometimes referred to as the Antares Nebula, and is otherwise identified as VdB 107. Stellar system α Scorpii is a double star that is thought to form a binary system. The best calculated orbit for the stars is still considered to be unreliable; it describes an almost circular orbit seen nearly edge-on, with a period of 1,218 years. Other recent estimates of the period have ranged from 880 years for a calculated orbit to 2,562 years for a simple Kepler's-law estimate. Early measurements of the pair made in 1847–49 and 1848 differ somewhat from modern observations, which consistently give angular separations of a few arcseconds. The variations in the separation are often interpreted as evidence of orbital motion, but are more likely to be simply observational inaccuracies, with very little true relative motion between the two components. The pair have a projected separation of about 529 astronomical units (AU) (≈ 80 billion km) at the estimated distance of Antares, giving a minimum value for the distance between them. Spectroscopic examination of the energy states in the outflow of matter from the companion star suggests that the latter lies more than 220 AU beyond the primary (about 33 billion km). Antares Antares is a red supergiant star with a stellar classification of M1.5Iab-Ib, and is indicated to be a spectral standard for that class. Due to the nature of the star, the derived parallax measurements have large errors, so the true distance of Antares remains uncertain. The brightness of Antares at visual wavelengths is about 10,000 times that of the Sun, but because the star radiates a considerable part of its energy in the infrared part of the spectrum, the true bolometric luminosity is around 100,000 times that of the Sun. There is a large margin of error assigned to values for the bolometric luminosity, typically 30% or more, and there is also considerable variation between the values published by different authors, for example those published in 2012 and 2013. The mass of the star has been calculated to be about 13, or 15 to 16, times that of the Sun. Comparison of the effective temperature and luminosity of Antares with theoretical evolutionary tracks for massive stars yields age estimates of roughly 12 million years (Myr), or 11 to 15 Myr, depending on the adopted progenitor mass. Comparison of observations from antiquity with theoretical evolutionary tracks also allows the possibility that Antares is on a blue loop; the corresponding initial-mass estimates imply ages from 11.8 to 17.3 Myr. These initial-mass estimates mean that Antares may once have resembled massive blue stars like the members of the Acrux system, which have similar initial masses (both Antares and Acrux are members of the wider Scorpius–Centaurus association). Massive stars like Antares are expected to explode as supernovae. Like most cool supergiants, Antares's size has much uncertainty due to the tenuous and translucent nature of the extended outer regions of the star. 
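The projected separation of the two components quoted above can be related to an angular separation on the sky through the small-angle relation: separation in AU ≈ angular separation in arcseconds × distance in parsecs. The sketch below assumes a distance of roughly 550 light-years (about 170 parsecs), a commonly quoted figure for Antares rather than a value taken from this section, so treat it as an assumption; with it, 529 AU corresponds to about three arcseconds, consistent with the few-arcsecond separations described above.

```python
# Small-angle relation: projected separation [AU] = angular separation [arcsec] * distance [pc]
sep_au = 529.0     # projected separation of the two components, from the text
d_pc = 170.0       # ASSUMED distance (~550 light-years); not taken from this section
angular_sep = sep_au / d_pc
print(f"angular separation = {angular_sep:.1f} arcsec")   # about 3.1 arcsec
```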
Defining an effective temperature is difficult due to spectral lines being generated at different depths in the atmosphere, and linear measurements produce different results depending on the wavelength observed. In addition, Antares pulsates in size, varying its radius by 19%. It also varies in temperature by 150 K, lagging 70 days behind radial velocity changes which are likely to be caused by the pulsations. The diameter of Antares can be measured most accurately using interferometry or observing lunar occultations events. An apparent diameter from occultations 41.3 ± 0.1 milliarcseconds has been published. Interferometry allows synthesis of a view of the stellar disc, which is then represented as a limb-darkened disk surrounded by an extended atmosphere. The diameter of the limb-darkened disk was measured as in 2009 and in 2010. The linear radius of the star can be calculated from its angular diameter and distance. However, the distance to Antares is not known with the same accuracy as modern measurements of its diameter. An estimate obtained by interferometry in 1925 by Francis G. Pease at the Mount Wilson Observatory gave Antares a diameter of , equal to approximately , making it the then largest star known. Antares is now known to be somewhat larger; for instance, the Hipparcos satellite's trigonometric parallax of with modern angular diameter estimates lead to a radius of about . Older radii estimates exceeding were derived from older measurements of the diameter, but those measurements are likely to have been affected by asymmetry of the atmosphere and the narrow range of infrared wavelengths observed; Antares has an extended shell which radiates strongly at those particular wavelengths. Despite its large size compared to the Sun, Antares is dwarfed by even larger red supergiants, such as VY Canis Majoris, KY Cygni, RW Cephei or Mu Cephei. Antares, like the similarly sized red supergiant Betelgeuse in the constellation Orion, will almost certainly explode as a supernova, probably in million years. For a few months, the Antares supernova could be as bright as the full moon and be visible in daytime. Antares B Antares B is a magnitude 5.5 blue-white main-sequence star of spectral type B2.5V; it also has numerous unusual spectral lines suggesting it has been polluted by matter ejected by Antares. It is assumed to be a relatively normal early-B main sequence star with a mass around , a temperature around , and a radius of about . As it falls short of the mass limit required for stars to undergo a supernova, it will likely expand into a red giant before dying as a massive white dwarf similar to Sirius B. Antares B is normally difficult to see in small telescopes due to glare from Antares, but can sometimes be seen in apertures over . It is often described as green, but this is probably either a contrast effect, or the result of the mixing of light from the two stars when they are seen together through a telescope and are too close to be completely resolved. Antares B can sometimes be observed with a small telescope for a few seconds during lunar occultations while Antares is hidden by the Moon. Antares B appears a profound blue or bluish-green color, in contrast to the orange-red Antares. Etymology and mythology In the Babylonian star catalogues dating from at least 1100 BCE, Antares was called GABA GIR.TAB, "the Breast of the Scorpion". 
In MUL.APIN, which dates between 1100 and 700 BC, it is one of the stars of Ea in the southern sky and denotes the breast of the Scorpion goddess Ishhara. Later names that translate as "the Heart of Scorpion" include from the Arabic . This had been directly translated from the Ancient Greek . was a calque of the Greek name rendered in Latin. In ancient Mesopotamia, Antares may have been known by various names: Urbat, Bilu-sha-ziri ("the Lord of the Seed"), Kak-shisa ("the Creator of Prosperity"), Dar Lugal ("The King"), Masu Sar ("the Hero and the King"), and Kakkab Bir ("the Vermilion Star"). In ancient Egypt, Antares represented the scorpion goddess Serket (and was the symbol of Isis in the pyramidal ceremonies). It was called "the red one of the prow". In Persia, Antares was known as one of the four "royal stars". In India, it with σ Scorpii and τ Scorpii were Jyeshthā (the eldest or biggest, probably attributing its huge size), one of the nakshatra (Hindu lunar mansions). The ancient Chinese called Antares 心宿二 (Xīnxiù'èr, "second star of the Heart"), because it was the second star of the mansion Xin (心). It was the national star of the Shang dynasty, and it was sometimes referred to as () because of its reddish appearance. The Māori people of New Zealand call Antares Rēhua, and regard it as the chief of all the stars especially the Matariki. Rēhua is father of Puanga/Puaka (Rigel), an important star in the calculation of the Māori calendar. The Wotjobaluk Koori people of Victoria, Australia, knew Antares as Djuit, son of Marpean-kurrk (Arcturus); the stars on each side represented his wives. The Kulin Kooris saw Antares (Balayang) as the brother of Bunjil (Altair). In culture Antares appears in the flag of Brazil, which displays 27 stars, each representing a federated unit of Brazil. Antares represents the state of Piauí. The 1995 Oldsmobile Antares concept car is named after the star. Antares is one of the medieval Behenian fixed stars.
Physical sciences
Notable stars
Astronomy
3077
https://en.wikipedia.org/wiki/Aldebaran
Aldebaran
Aldebaran is a star located in the zodiac constellation of Taurus. It has the Bayer designation α Tauri, which is Latinized to Alpha Tauri and abbreviated Alpha Tau or α Tau. Aldebaran varies in brightness from an apparent visual magnitude of 0.75 down to 0.95, making it the brightest star in the constellation, as well as (typically) the fourteenth-brightest star in the night sky. It is positioned at a distance of approximately 65 light-years from the Sun. The star lies along the line of sight to the nearby Hyades cluster. Aldebaran is a red giant, meaning that it is cooler than the Sun, but its radius is about 45 times the Sun's, so it is over 400 times as luminous. As a giant star, it has moved off the main sequence on the Hertzsprung–Russell diagram after depleting the supply of hydrogen in its core. The star spins slowly, taking 520 days to complete a rotation. Aldebaran is believed to host a planet several times the mass of Jupiter, named Aldebaran b. Nomenclature The traditional name Aldebaran derives from the Arabic al-Dabarān, meaning "the follower", because the star seems to follow the Pleiades. In 2016, the International Astronomical Union Working Group on Star Names (WGSN) approved the proper name Aldebaran for this star. Aldebaran is the brightest star in the constellation Taurus, with the Bayer designation α Tauri, latinised as Alpha Tauri. It has the Flamsteed designation 87 Tauri, as the 87th star in the constellation of approximately 7th magnitude or brighter, ordered by right ascension. It also has the Bright Star Catalogue number 1457, the HD number 29139, and the Hipparcos catalogue number 21421, mostly seen in scientific publications. It is a variable star listed in the General Catalogue of Variable Stars, but it is listed using its Bayer designation and does not have a separate variable star designation. Aldebaran and several nearby stars are included in double star catalogues such as the Washington Double Star Catalog as WDS 04359+1631 and the Aitken Double Star Catalogue as ADS 3321. It was included with an 11th-magnitude companion as a double star, as H IV 66 in the Herschel Catalogue of Double Stars and Σ II 2 in the Struve Double Star Catalog, and together with a 14th-magnitude star as β 550 in the Burnham Double Star Catalogue. Observation Aldebaran is one of the easiest stars to find in the night sky, partly due to its brightness and partly due to its position near one of the more noticeable asterisms in the sky: following the three stars of Orion's Belt in the direction opposite to Sirius, the first bright star encountered is Aldebaran. It is best seen at midnight between late November and early December. The star is, by chance, in the line of sight between the Earth and the Hyades, so it has the appearance of being the brightest member of the open cluster, but the cluster that forms the bull's-head-shaped asterism is more than twice as far away, at about 150 light-years. Aldebaran is 5.47 degrees south of the ecliptic and so can be occulted by the Moon. Such occultations occur when the Moon's ascending node is near the autumnal equinox. A series of 49 occultations occurred, starting on 29 January 2015 and ending on 3 September 2018. Each event was visible from points in the northern hemisphere or close to the equator; observers in, for example, Australia or South Africa can never observe an Aldebaran occultation, since such occultations are not visible from latitudes that far south. 
A reasonably accurate estimate for the diameter of Aldebaran was obtained during the occultation of 22 September 1978. In the 2020s, Aldebaran is in conjunction in ecliptic longitude with the sun around May 30 of each year. With a near-infrared J band magnitude of −2.1, only Betelgeuse (−2.9), R Doradus (−2.6), and Arcturus (−2.2) are brighter at that wavelength. Observational history On 11 March AD 509, a lunar occultation of Aldebaran was observed in Athens, Greece. English astronomer Edmund Halley studied the timing of this event, and in 1718 concluded that Aldebaran must have changed position since that time, moving several minutes of arc further to the north. This, as well as observations of the changing positions of stars Sirius and Arcturus, led to the discovery of proper motion. Based on present day observations, the position of Aldebaran has shifted 7′ in the last 2000 years; roughly a quarter the diameter of the full moon. Due to precession of the equinoxes, 5,000 years ago the vernal equinox was close to Aldebaran. Between 420,000 and 210,000 years ago, Aldebaran was the brightest star in the night sky, peaking in brightness 320,000 years ago with an apparent magnitude of . English astronomer William Herschel discovered a faint companion to Aldebaran in 1782; an 11th-magnitude star at an angular separation of 117″. This star was shown to be itself a close double star by S. W. Burnham in 1888, and he discovered an additional 14th-magnitude companion at an angular separation of 31″. Follow-on measurements of proper motion showed that Herschel's companion was diverging from Aldebaran, and hence they were not physically connected. However, the companion discovered by Burnham had almost exactly the same proper motion as Aldebaran, suggesting that the two formed a wide binary star system. Working at his private observatory in Tulse Hill, England, in 1864 William Huggins performed the first studies of the spectrum of Aldebaran, where he was able to identify the lines of nine elements, including iron, sodium, calcium, and magnesium. In 1886, Edward C. Pickering at the Harvard College Observatory used a photographic plate to capture fifty absorption lines in the spectrum of Aldebaran. This became part of the Draper Catalogue, published in 1890. By 1887, the photographic technique had improved to the point that it was possible to measure a star's radial velocity from the amount of Doppler shift in the spectrum. By this means, the recession velocity of Aldebaran was estimated as (48 km/s), using measurements performed at Potsdam Observatory by Hermann C. Vogel and his assistant Julius Scheiner. Aldebaran was observed using an interferometer attached to the Hooker Telescope at the Mount Wilson Observatory in 1921 in order to measure its angular diameter, but it was not resolved in these observations. The extensive history of observations of Aldebaran led to it being included in the list of 33 stars chosen as benchmarks for the Gaia mission to calibrate derived stellar parameters. It had previously been used to calibrate instruments on board the Hubble Space Telescope. Physical characteristics Aldebaran is listed as the spectral standard for type K5+ III stars. Its spectrum shows that it is a giant star that has evolved off the main sequence band of the HR diagram after exhausting the hydrogen at its core. The collapse of the center of the star into a degenerate helium core has ignited a shell of hydrogen outside the core and Aldebaran is now on the red giant branch (RGB). 
The effective temperature of Aldebaran's photosphere is considerably lower than the Sun's, as expected for a red giant. Its surface gravity is typical of a giant star, around 25 times lower than the Earth's and 700 times lower than the Sun's. Its metallicity is about 30% lower than the Sun's. Measurements by the Hipparcos satellite and other sources put Aldebaran roughly 65 light-years away. Asteroseismology has determined that it is about 16% more massive than the Sun, yet it shines with 518 times the Sun's luminosity because of its expanded radius. The angular diameter of Aldebaran has been measured many times; the value adopted as part of the Gaia benchmark calibration corresponds to 44 times the diameter of the Sun, approximately 61 million kilometres. Aldebaran is a slightly variable star, assigned to the slow irregular type LB. The General Catalogue of Variable Stars indicates variation between apparent magnitudes 0.75 and 0.95 from historical reports. Modern studies show a smaller amplitude, with some showing almost no variation. Hipparcos photometry shows an amplitude of only about 0.02 magnitudes and a possible period of around 18 days. Intensive ground-based photometry showed variations of up to 0.03 magnitudes and a possible period of around 91 days. Analysis of observations over a much longer period still finds a total amplitude likely to be less than 0.1 magnitudes, and the variation is considered to be irregular. The photosphere shows abundances of carbon, oxygen, and nitrogen that suggest the giant has gone through its first dredge-up stage—a normal step in the evolution of a star into a red giant during which material from deep within the star is brought up to the surface by convection. With its slow rotation, Aldebaran lacks the dynamo needed to generate a corona and hence is not a source of hard X-ray emission. However, small-scale magnetic fields may still be present in the lower atmosphere, resulting from convective turbulence near the surface, and a weak magnetic field has been measured on Aldebaran. Any resulting soft X-ray emission from this region may be attenuated by the chromosphere, although ultraviolet emission has been detected in the spectrum. The star is currently losing mass through a slow stellar wind, at a rate of roughly one Earth mass every 300,000 years. This stellar wind may be generated by the weak magnetic fields in the lower atmosphere. Beyond the chromosphere of Aldebaran is an extended molecular outer atmosphere (MOLsphere) where the temperature is cool enough for molecules of gas to form. This region lies at about 2.5 times the radius of the star, and its spectrum reveals lines of carbon monoxide, water, and titanium oxide. Outside the MOLsphere, the stellar wind continues to expand until it reaches the termination-shock boundary with the hot, ionized interstellar medium that dominates the Local Bubble, forming a roughly spherical astrosphere centered on Aldebaran. Visual companions Five faint stars appear close to Aldebaran in the sky. These double star components were given upper-case Latin letter designations more or less in the order of their discovery, with the letter A reserved for the primary star. Some characteristics of these components, including their positions relative to Aldebaran, have been catalogued. Some surveys, for example Gaia Data Release 2, have indicated that Alpha Tauri B may have about the same proper motion and parallax as Aldebaran and thus may be a physical binary system. 
These measurements are difficult, since the dim B component appears so close to the bright primary star, and the margin of error is too large to establish (or exclude) a physical relationship between the two. So far neither the B component, nor anything else, has been unambiguously shown to be physically associated with Aldebaran. The Gaia Data Release 3 again suggest a close distance to Aldebaran and similar proper motions. With a parallax of 47.25 milliarcseconds, this translates into a distance of . The NASA Exoplanet Archive recognizes Aldebaran as a binary star, with Aldebaran B being the secondary star. A spectral type of M2.5 has been published for Alpha Tauri B. Alpha Tauri CD is a binary system with the C and D component stars gravitationally bound to and co-orbiting each other. These co-orbiting stars have been shown to be located far beyond Aldebaran and are members of the Hyades star cluster. As with the rest of the stars in the cluster they do not physically interact with Aldebaran in any way. Planetary system In 1993 radial velocity measurements of Aldebaran, Arcturus and Pollux showed that Aldebaran exhibited a long-period radial velocity oscillation, which could be interpreted as a substellar companion. The measurements for Aldebaran implied a companion with a minimum mass 11.4 times that of Jupiter in a 643-day orbit at a separation of in a mildly eccentric orbit. However, all three stars surveyed showed similar oscillations yielding similar companion masses, and the authors concluded that the variation was likely to be intrinsic to the star rather than due to the gravitational effect of a companion. In 2015 a study showed stable long-term evidence for both a planetary companion and stellar activity. An asteroseismic analysis of the residuals to the planet fit has determined that Aldebaran b has a minimum mass of Jupiter masses, and that when the star was on the main sequence it would have given this planet Earth-like levels of illumination and therefore, potentially, temperature. This would place it and any of its moons in the habitable zone. Follow-up study in 2019 have found the evidence for planetary existence inconclusive though. Etymology and mythology Aldebaran was originally ( in Arabic), meaning , since it follows the Pleiades; in fact, the Arabs sometimes also applied‍ the name to the Hyades as a whole. A variety of transliterated spellings have been used, with the current Aldebaran becoming standard relatively recently. Mythology This easily seen and striking star in its suggestive asterism is a popular subject for ancient and modern myths. Mexican culture: For the Seris of northwestern Mexico, this star provides light for the seven women giving birth (Pleiades). It has three names: , , and (). The lunar month corresponding to October is called . Australian Aboriginal culture: amongst indigenous people of the Clarence River, in north-eastern New South Wales, this star is the ancestor Karambal, who stole another man's wife. The woman's husband tracked him down and burned the tree in which he was hiding. It is believed that he rose to the sky as smoke and became the star Aldebaran. Persian culture: Aldebaran is considered one of the 4 "royal stars". Names in other languages In Indian astronomy it is identified as the lunar station Rohini. In Hindu mythology, Rohini is one of the twenty-seven daughters of the sage-king Daksha and Asikni, and the favourite wife of the moon god, Chandra. In Ancient Greek it has been called , literally or . 
In Chinese, (), meaning , refers to an asterism consisting of Aldebaran, ε Tauri, δ3 Tauri, δ1 Tauri, γ Tauri, 71 Tauri and λ Tauri. Consequently, the Chinese name for Aldebaran itself is (), . In Hawaiian, the star is named Kapuahi. In Biblical Hebrew, עָשׁ (ʿāš) in Job 9:9 and עַ֫יִשׁ (ʿayiš) in Job 38:32 have been identified with it and translated accordingly in English versions such as NJPS and REB. In modern culture As the brightest star in a Zodiac constellation, it is given great significance within astrology. Irish singer and composer Enya has a piece released on her eponymous album in 1986, which lyricist Roma Ryan titled Aldebaran after the star in Taurus. The name Aldebaran or Alpha Tauri has been adopted many times, including Aldebaran Rock in Antarctica United States Navy stores ship and proposed micro-satellite launch vehicle Aldebaran French company Aldebaran Robotics Fashion brand AlphaTauri Formula 1 team Scuderia AlphaTauri, active from to , previously known as Toro Rosso One of the chariot race horses owned by Sheikh Ilderim in the movie Ben-Hur The star also appears in works of fiction such as Far from the Madding Crowd (1874) and Down and Out in Paris and London (1933). It is frequently seen in science fiction, including the Lensman series (1948–1954), Fallen Dragon (2001) and passingly in Kim Stanley Robinson's "Blue Mars" (1996). Aldebaran is associated with Hastur, also known as The King in Yellow, in the horror stories of Robert W. Chambers. Aldebaran regularly features in conspiracy theories as one of the origins of extraterrestrial aliens, often linked to Nazi UFOs. A well-known example is the German conspiracy theorist Axel Stoll, who considered the star the home of the Aryan race and the target of expeditions by the Wehrmacht. The planetary exploration probe Pioneer 10 is no longer powered or in contact with Earth, but its trajectory is taking it in the general direction of Aldebaran. It is expected to make its closest approach in about two million years. The Austrian chemist Carl Auer von Welsbach proposed the name aldebaranium (chemical symbol Ad) for a rare earth element that he (among others) had found. Today, it is called ytterbium (symbol Yb).
Physical sciences
Notable stars
Astronomy
3078
https://en.wikipedia.org/wiki/Altair
Altair
Altair is the brightest star in the constellation of Aquila and the twelfth-brightest star in the night sky. It has the Bayer designation Alpha Aquilae, which is Latinised from α Aquilae and abbreviated Alpha Aql or α Aql. Altair is an A-type main-sequence star with an apparent visual magnitude of 0.77 and is one of the vertices of the Summer Triangle asterism; the other two vertices are marked by Deneb and Vega. It is located at a distance of from the Sun. Altair is currently in the G-cloud—a nearby interstellar cloud, an accumulation of gas and dust. Altair rotates rapidly, with a velocity at the equator of approximately 286 km/s. This is a significant fraction of the star's estimated breakup speed of 400 km/s. A study with the Palomar Testbed Interferometer revealed that Altair is not spherical, but is flattened at the poles due to its high rate of rotation. Other interferometric studies with multiple telescopes, operating in the infrared, have imaged and confirmed this phenomenon. Nomenclature α Aquilae (Latinised to Alpha Aquilae) is the star's Bayer designation. The traditional name Altair has been used since medieval times. It is an abbreviation of the Arabic phrase Al-Nisr Al-Ṭa'ir, "". In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Altair for this star. It is now so entered in the IAU Catalog of Star Names. Physical characteristics Along with β Aquilae and γ Aquilae, Altair forms the well-known line of stars sometimes referred to as the Family of Aquila or Shaft of Aquila. Altair is a type-A main-sequence star with about 1.8 times the mass of the Sun and 11 times its luminosity. It is thought to be a young star close to the zero age main sequence at about 100 million years old, although previous estimates gave an age closer to one billion years old. Altair rotates rapidly, with a rotational period of under eight hours; for comparison, the equator of the Sun makes a complete rotation in a little more than 25 days, but Altair's rotation is similar to, and slightly faster than, those of Jupiter and Saturn. Like those two planets, its rapid rotation causes the star to be oblate; its equatorial diameter is over 20 percent greater than its polar diameter. Satellite measurements made in 1999 with the Wide Field Infrared Explorer showed that the brightness of Altair fluctuates slightly, varying by just a few thousandths of a magnitude with several different periods less than 2 hours. As a result, it was identified in 2005 as a Delta Scuti variable star. Its light curve can be approximated by adding together a number of sine waves, with periods that range between 0.8 and 1.5 hours. It is a weak source of coronal X-ray emission, with the most active sources of emission being located near the star's equator. This activity may be due to convection cells forming at the cooler equator. Rotational effects The angular diameter of Altair was measured interferometrically by R. Hanbury Brown and his co-workers at Narrabri Observatory in the 1960s. They found a diameter of 3milliarcseconds. Although Hanbury Brown et al. realized that Altair would be rotationally flattened, they had insufficient data to experimentally observe its oblateness. 
Later, using infrared interferometric measurements made by the Palomar Testbed Interferometer in 1999 and 2000, Altair was found to be flattened. This work was published by G. T. van Belle, David R. Ciardi and their co-authors in 2001. Theory predicts that, owing to Altair's rapid rotation, its surface gravity and effective temperature should be lower at the equator, making the equator less luminous than the poles. This phenomenon, known as gravity darkening or the von Zeipel effect, was confirmed for Altair by measurements made by the Navy Precision Optical Interferometer in 2001, and analyzed by Ohishi et al. (2004) and Peterson et al. (2006). Also, A. Domiciano de Souza et al. (2005) verified gravity darkening using the measurements made by the Palomar and Navy interferometers, together with new measurements made by the VINCI instrument at the VLTI. Altair is one of the few stars for which a direct image has been obtained. In 2006 and 2007, J. D. Monnier and his coworkers produced an image of Altair's surface from 2006 infrared observations made with the MIRC instrument on the CHARA array interferometer; this was the first time the surface of any main-sequence star, apart from the Sun, had been imaged. The false-color image was published in 2007. The equatorial radius of the star was estimated to be 2.03 solar radii, and the polar radius 1.63 solar radii, a 25% increase of the stellar radius from pole to equator. The polar axis is inclined by about 60° to the line of sight from the Earth. Etymology, mythology and culture The term Al Nesr Al Tair appeared in Al Achsasi al Mouakket's catalogue, which was translated into Latin as Vultur Volans. This name was applied by the Arabs to the asterism of Altair, β Aquilae and γ Aquilae and probably goes back to the ancient Babylonians and Sumerians, who called Altair "the eagle star". The spelling Atair has also been used. Medieval astrolabes of England and Western Europe depicted Altair and Vega as birds. The Koori people of Victoria also knew Altair as Bunjil, the wedge-tailed eagle, and β and γ Aquilae are his two wives, the black swans. The people of the Murray River knew the star as Totyerguil. The Murray River was formed when Totyerguil the hunter speared Otjout, a giant Murray cod, who, when wounded, churned a channel across southern Australia before entering the sky as the constellation Delphinus. In Chinese belief, the asterism consisting of Altair, β Aquilae and γ Aquilae is known as Hé Gǔ (lit. "river drum"). The Chinese name for Altair is thus Hé Gǔ èr (lit. "river drum two", meaning the "second star of the drum at the river"). However, Altair is better known by its other names: Qiān Niú Xīng or Niú Láng Xīng, translated as the cowherd star. These names are an allusion to a love story, The Cowherd and the Weaver Girl, in which Niulang (represented by Altair) and his two children (represented by β Aquilae and γ Aquilae) are separated from their wife and mother, respectively, Zhinu (represented by Vega), by the Milky Way. They are only permitted to meet once a year, when magpies form a bridge to allow them to cross the Milky Way. The people of Micronesia called Altair Mai-lapa, meaning "big/old breadfruit", while the Māori people called this star Poutu-te-rangi, meaning "pillar of heaven". In Western astrology, the star was ill-omened, portending danger from reptiles. This star is part of one of the asterisms used by Bugis sailors for navigation, called bintoéng timoro, meaning "eastern star". 
A group of Japanese scientists sent a radio signal to Altair in 1983 with the hopes of contacting extraterrestrial life. NASA announced Altair as the name of the Lunar Surface Access Module (LSAM) on December 13, 2007. The Russian-made Beriev Be-200 Altair seaplane is also named after the star. Visual companions The bright primary star has the multiple star designation WDS 19508+0852A and has several faint visual companion stars, WDS 19508+0852B, C, D, E, F and G. All are much more distant than Altair and not physically associated.
Physical sciences
Notable stars
Astronomy
3107
https://en.wikipedia.org/wiki/Asymptote
Asymptote
In analytic geometry, an asymptote of a curve is a line such that the distance between the curve and the line approaches zero as one or both of the x or y coordinates tends to infinity. In projective geometry and related contexts, an asymptote of a curve is a line which is tangent to the curve at a point at infinity. The word asymptote is derived from the Greek ἀσύμπτωτος (asumptōtos) which means "not falling together", from ἀ priv. + σύν "together" + πτωτ-ός "fallen". The term was introduced by Apollonius of Perga in his work on conic sections, but in contrast to its modern meaning, he used it to mean any line that does not intersect the given curve. There are three kinds of asymptotes: horizontal, vertical and oblique. For curves given by the graph of a function y = ƒ(x), horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to +∞ or −∞. Vertical asymptotes are vertical lines near which the function grows without bound. An oblique asymptote has a slope that is non-zero but finite, such that the graph of the function approaches it as x tends to +∞ or −∞. More generally, one curve is a curvilinear asymptote of another (as opposed to a linear asymptote) if the distance between the two curves tends to zero as they tend to infinity, although the term asymptote by itself is usually reserved for linear asymptotes. Asymptotes convey information about the behavior of curves in the large, and determining the asymptotes of a function is an important step in sketching its graph. The study of asymptotes of functions, construed in a broad sense, forms a part of the subject of asymptotic analysis. Introduction The idea that a curve may come arbitrarily close to a line without actually becoming the same may seem to counter everyday experience. The representations of a line and a curve as marks on a piece of paper or as pixels on a computer screen have a positive width. So if they were to be extended far enough they would seem to merge, at least as far as the eye could discern. But these are physical representations of the corresponding mathematical entities; the line and the curve are idealized concepts whose width is 0 (see Line). Therefore, the understanding of the idea of an asymptote requires an effort of reason rather than experience. Consider the graph of the function ƒ(x) = 1/x. The coordinates of the points on the curve are of the form (x, 1/x) where x is a number other than 0. For example, the graph contains the points (1, 1), (2, 0.5), (5, 0.2), (10, 0.1), ... As the values of x become larger and larger, say 100, 1,000, 10,000 ..., putting them far to the right of the illustration, the corresponding values of 1/x, namely .01, .001, .0001, ..., become infinitesimal relative to the scale shown. But no matter how large x becomes, its reciprocal 1/x is never 0, so the curve never actually touches the x-axis. Similarly, as the values of x become smaller and smaller, say .01, .001, .0001, ..., making them infinitesimal relative to the scale shown, the corresponding values of 1/x, namely 100, 1,000, 10,000 ..., become larger and larger. So the curve extends further and further upward as it comes closer and closer to the y-axis. Thus, both the x and y-axis are asymptotes of the curve. These ideas are part of the basis of the concept of a limit in mathematics, and this connection is explained more fully below. Asymptotes of functions The asymptotes most commonly encountered in the study of calculus are of curves of the form y = ƒ(x). 
These can be computed using limits and classified into horizontal, vertical and oblique asymptotes depending on their orientation. Horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to +∞ or −∞. As the name indicates they are parallel to the x-axis. Vertical asymptotes are vertical lines (perpendicular to the x-axis) near which the function grows without bound. Oblique asymptotes are diagonal lines such that the difference between the curve and the line approaches 0 as x tends to +∞ or −∞. Vertical asymptotes The line x = a is a vertical asymptote of the graph of the function if at least one of the one-sided limits of ƒ(x) is +∞ or −∞, where the one-sided limits are taken as x approaches the value a from the left (from lesser values) and as x approaches a from the right. For example, if ƒ(x) = x/(x–1), the numerator approaches 1 and the denominator approaches 0 as x approaches 1. So the one-sided limits are −∞ and +∞, and the curve has a vertical asymptote x = 1. The function ƒ(x) may or may not be defined at a, and its precise value at the point x = a does not affect the asymptote. For example, a function may have a limit of +∞ as x approaches 0, and hence the vertical asymptote x = 0, even though ƒ(0) = 5. The graph of such a function does intersect the vertical asymptote once, at (0, 5). It is impossible for the graph of a function to intersect a vertical asymptote (or a vertical line in general) in more than one point. Moreover, if a function is continuous at each point where it is defined, its graph cannot intersect any vertical asymptote. A common example of a vertical asymptote is the case of a rational function at a point x such that the denominator is zero and the numerator is non-zero. If a function has a vertical asymptote, then it isn't necessarily true that the derivative of the function has a vertical asymptote at the same place. For example, a function may have a vertical asymptote at a point, because both one-sided limits there are infinite, while its derivative has no vertical asymptote at that same point: along a sequence of points approaching the point from the left and from the right, the derivative can take a constant value, so that neither one-sided limit of the derivative is +∞ or −∞, and hence the derivative does not have a vertical asymptote there. Horizontal asymptotes Horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to +∞ or −∞. The horizontal line y = c is a horizontal asymptote of the function y = ƒ(x) if ƒ(x) tends to c as x tends to +∞ or as x tends to −∞. In the first case, ƒ(x) has y = c as asymptote when x tends to +∞, and in the second ƒ(x) has y = c as an asymptote as x tends to −∞. For example, the arctangent function tends to π/2 as x tends to +∞ and to −π/2 as x tends to −∞. So the line y = π/2 is a horizontal asymptote for the arctangent when x tends to +∞, and y = −π/2 is a horizontal asymptote for the arctangent when x tends to −∞. Functions may lack horizontal asymptotes on either or both sides, or may have one horizontal asymptote that is the same in both directions. For example, some functions have a horizontal asymptote at y = 0 both when x tends to +∞ and when x tends to −∞, because the corresponding limits are both 0. Other common functions that have one or two horizontal asymptotes include the reciprocal function 1/x (whose graph is a hyperbola), the Gaussian function, the error function, and the logistic function. Oblique asymptotes When a linear asymptote is not parallel to the x- or y-axis, it is called an oblique asymptote or slant asymptote. A function ƒ(x) is asymptotic to the straight line y = mx + n (m ≠ 0) if the difference ƒ(x) − (mx + n) tends to 0 as x tends to +∞ or as x tends to −∞. In the first case the line is an oblique asymptote of ƒ(x) when x tends to +∞, and in the second case the line is an oblique asymptote of ƒ(x) when x tends to −∞. 
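To make these limit-based definitions concrete, here is a worked example that is not taken from the text: the rational function f(x) = (2x + 1)/(x − 3) has one vertical and one horizontal asymptote, and no oblique one.

```latex
% Illustrative example (not from the original text): f(x) = (2x+1)/(x-3).
% Near x = 3 the one-sided limits are infinite, so x = 3 is a vertical asymptote:
\[
  \lim_{x \to 3^{-}} \frac{2x+1}{x-3} = -\infty,
  \qquad
  \lim_{x \to 3^{+}} \frac{2x+1}{x-3} = +\infty .
\]
% At infinity the function tends to a finite value, so y = 2 is a horizontal
% asymptote in both directions:
\[
  \lim_{x \to \pm\infty} \frac{2x+1}{x-3}
  = \lim_{x \to \pm\infty} \frac{2 + 1/x}{1 - 3/x}
  = 2 .
\]
```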
An example with an oblique asymptote is ƒ(x) = x + 1/x, which has the oblique asymptote y = x (that is m = 1, n = 0), as seen from the limits of ƒ(x) − x = 1/x, which are 0 as x tends to +∞ and to −∞. Elementary methods for identifying asymptotes The asymptotes of many elementary functions can be found without the explicit use of limits (although the derivations of such methods typically use limits). General computation of oblique asymptotes for functions The oblique asymptote, for the function f(x), will be given by the equation y = mx + n. The value for m is computed first and is given by the limit of ƒ(x)/x as x tends to a, where a is either +∞ or −∞ depending on the case being studied. It is good practice to treat the two cases separately. If this limit doesn't exist then there is no oblique asymptote in that direction. Having m, the value for n can then be computed as the limit of ƒ(x) − mx as x tends to a, where a should be the same value used before. If this limit fails to exist then there is no oblique asymptote in that direction, even if the limit defining m exists. Otherwise y = mx + n is the oblique asymptote of ƒ(x) as x tends to a. For example, a function may have finite values for both limits, in which case the line y = mx + n is its asymptote when x tends to +∞; for another function the limit defining m may exist while the limit defining n does not, in which case there is no oblique asymptote when x tends to +∞. Asymptotes for rational functions A rational function has at most one horizontal asymptote or oblique (slant) asymptote, and possibly many vertical asymptotes. The degree of the numerator and degree of the denominator determine whether or not there are any horizontal or oblique asymptotes; writing deg(numerator) for the degree of the numerator and deg(denominator) for the degree of the denominator, the cases are: if deg(numerator) is less than deg(denominator), the horizontal asymptote is y = 0; if deg(numerator) equals deg(denominator), the horizontal asymptote is y equal to the ratio of the leading coefficients; if deg(numerator) exceeds deg(denominator) by exactly one, there is an oblique asymptote; and if it exceeds it by more than one, there is neither a horizontal nor an oblique asymptote. The vertical asymptotes occur only when the denominator is zero (if both the numerator and denominator are zero, the multiplicities of the zero are compared). For example, a rational function whose denominator vanishes at x = 0, x = 1 and x = 2 may have vertical asymptotes at x = 0 and x = 1 but not at x = 2, if the zero of the denominator at x = 2 is cancelled by a zero of the numerator of at least the same multiplicity. Oblique asymptotes of rational functions When the numerator of a rational function has degree exactly one greater than the denominator, the function has an oblique (slant) asymptote. The asymptote is the polynomial term obtained after dividing the numerator by the denominator. This phenomenon occurs because when dividing the fraction, there will be a linear term, and a remainder. For example, consider the function ƒ(x) = x + 1/(x + 1). As the value of x increases, ƒ approaches the asymptote y = x. This is because the other term, 1/(x+1), approaches 0. If the degree of the numerator is more than 1 larger than the degree of the denominator, and the denominator does not divide the numerator, there will be a nonzero remainder that goes to zero as x increases, but the quotient will not be linear, and the function does not have an oblique asymptote. Transformations of known functions If a known function has an asymptote (such as y = 0 for f(x) = e^x), then the translations of it also have an asymptote. If x = a is a vertical asymptote of f(x), then x = a + h is a vertical asymptote of f(x − h). If y = c is a horizontal asymptote of f(x), then y = c + k is a horizontal asymptote of f(x) + k. If a known function has an asymptote, then a scaling of the function also has an asymptote: if y = ax + b is an asymptote of f(x), then y = cax + cb is an asymptote of cf(x). For example, f(x) = e^(x−1) + 2 has horizontal asymptote y = 0 + 2 = 2, and no vertical or oblique asymptotes. General definition Let A be a parametric plane curve, in coordinates A(t) = (x(t), y(t)), and suppose that the curve tends to infinity, that is, the distance of A(t) from the origin tends to infinity as the parameter t approaches some limiting value b. A line ℓ is an asymptote of A if the distance from the point A(t) to ℓ tends to zero as t → b. 
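Returning briefly to the two-limit recipe for oblique asymptotes given above, it can also be checked mechanically. The sketch below is illustrative only and assumes the third-party sympy library; it applies the recipe to ƒ(x) = x + 1/(x + 1), whose oblique asymptote y = x was mentioned above.

```python
# Illustrative sketch (assumes the sympy library): the two-limit recipe for
# oblique asymptotes, applied to f(x) = x + 1/(x + 1).
import sympy as sp

x = sp.symbols('x')
f = x + 1 / (x + 1)

m = sp.limit(f / x, x, sp.oo)      # slope of the candidate asymptote
n = sp.limit(f - m * x, x, sp.oo)  # intercept, computed using the slope above

print(m, n)  # prints: 1 0, i.e. the oblique asymptote is y = x
```

Replacing sp.oo with -sp.oo checks the behaviour as x tends to −∞.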
From the parametric definition above, only open curves that have some infinite branch can have an asymptote. No closed curve can have an asymptote. For example, the upper right branch of the curve y = 1/x can be defined parametrically as x = t, y = 1/t (where t > 0). First, x → ∞ as t → ∞ and the distance from the curve to the x-axis is 1/t which approaches 0 as t → ∞. Therefore, the x-axis is an asymptote of the curve. Also, y → ∞ as t → 0 from the right, and the distance between the curve and the y-axis is t which approaches 0 as t → 0. So the y-axis is also an asymptote. A similar argument shows that the lower left branch of the curve also has the same two lines as asymptotes. Although the definition here uses a parameterization of the curve, the notion of asymptote does not depend on the parameterization. In fact, if the equation of the line is ax + by + c = 0, then the distance from the point A(t) = (x(t), y(t)) to the line is given by |ax(t) + by(t) + c| / √(a² + b²); if γ(t) is a change of parameterization then the distance becomes |ax(γ(t)) + by(γ(t)) + c| / √(a² + b²), which tends to zero simultaneously with the previous expression. An important case is when the curve is the graph of a real function (a function of one real variable and returning real values). The graph of the function y = ƒ(x) is the set of points of the plane with coordinates (x, ƒ(x)). For this, a parameterization is t ↦ (t, ƒ(t)). This parameterization is to be considered over the open intervals (a, b), where a can be −∞ and b can be +∞. An asymptote can be either vertical or non-vertical (oblique or horizontal). In the first case its equation is x = c, for some real number c. The non-vertical case has equation y = mx + n, where m and n are real numbers. All three types of asymptotes can be present at the same time in specific examples. Unlike asymptotes for curves that are graphs of functions, a general curve may have more than two non-vertical asymptotes, and may cross its vertical asymptotes more than once. Curvilinear asymptotes Let A be a parametric plane curve, in coordinates A(t) = (x(t), y(t)), and B be another (unparameterized) curve. Suppose, as before, that the curve A tends to infinity. The curve B is a curvilinear asymptote of A if the shortest distance from the point A(t) to a point on B tends to zero as t → b. Sometimes B is simply referred to as an asymptote of A, when there is no risk of confusion with linear asymptotes. For example, a function may have a parabola as a curvilinear asymptote; such a curve is known as a parabolic asymptote because it is a parabola rather than a straight line. Asymptotes and curve sketching Asymptotes are used in procedures of curve sketching. An asymptote serves as a guide line to show the behavior of the curve towards infinity. In order to get better approximations of the curve, curvilinear asymptotes have also been used, although the term asymptotic curve seems to be preferred. Algebraic curves The asymptotes of an algebraic curve in the affine plane are the lines that are tangent to the projectivized curve through a point at infinity. For example, one may identify the asymptotes to the unit hyperbola in this manner (a worked computation is sketched below). Asymptotes are often considered only for real curves, although they also make sense when defined in this way for curves over an arbitrary field. A plane curve of degree n intersects its asymptote at most at n−2 other points, by Bézout's theorem, as the intersection at infinity is of multiplicity at least two. For a conic, there are a pair of lines that do not intersect the conic at any complex point: these are the two asymptotes of the conic. 
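As a worked instance of the projective description (an illustrative computation, not reproduced from the text), the asymptotes of the unit hyperbola x² − y² = 1 can be found as follows.

```latex
% Homogenize the unit hyperbola with a third coordinate Z:
\[
  F(X, Y, Z) = X^2 - Y^2 - Z^2 = 0 .
\]
% Its points at infinity satisfy Z = 0, hence X^2 = Y^2, i.e. the two points
% [1 : 1 : 0] and [1 : -1 : 0].  The tangent line at [1 : 1 : 0] follows from
% the gradient (2X, -2Y, -2Z) = (2, -2, 0):
\[
  X - Y = 0 \quad\Longleftrightarrow\quad y = x ,
\]
% and the tangent at [1 : -1 : 0] is X + Y = 0, i.e. y = -x.  These two tangent
% lines at infinity are exactly the asymptotes of the hyperbola.
```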
A plane algebraic curve is defined by an equation of the form P(x,y) = 0, where P is a polynomial of degree n that can be written as a sum of homogeneous parts Pk, with Pk homogeneous of degree k. Vanishing of the linear factors of the highest degree term Pn defines the directions of the asymptotes of the curve: setting such a factor to zero gives a candidate line, and depending on the lower-degree terms of P either this line is an asymptote or there is no asymptote in that direction, but the curve has a branch that looks like a branch of a parabola. Such a branch is called a parabolic branch, even when it does not have any parabola that is a curvilinear asymptote. If the curve has a singular point at infinity, it may have several asymptotes or parabolic branches there. Over the complex numbers, Pn splits into linear factors, each of which defines an asymptote (or several for multiple factors). Over the reals, Pn splits into factors that are linear or quadratic. Only the linear factors correspond to infinite (real) branches of the curve, but if a linear factor has multiplicity greater than one, the curve may have several asymptotes or parabolic branches. It may also occur that such a multiple linear factor corresponds to two complex conjugate branches, and does not correspond to any infinite branch of the real curve. For example, a curve such as x⁴ + y² = 1 has no real points outside the square |x| ≤ 1, |y| ≤ 1, but its highest order term gives the linear factor x with multiplicity 4, leading to the unique candidate asymptote x = 0. Asymptotic cone A hyperbola has two asymptotes, and there is a single quadratic equation for the union of these two lines. Similarly, a hyperboloid is said to have an asymptotic cone. The distance between the hyperboloid and cone approaches 0 as the distance from the origin approaches infinity (standard forms for both are given in the sketch below). More generally, consider a surface that has an implicit equation whose left-hand side is a sum of homogeneous polynomials of different degrees. Then the equation obtained by keeping only the part of highest degree defines a cone which is centered at the origin. It is called an asymptotic cone, because the distance to the cone of a point of the surface tends to zero when the point on the surface tends to infinity.
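For concreteness, the usual textbook normalizations behind the statements above are the following; these particular equations are supplied as an illustration, since the text's own formulas are not preserved here.

```latex
% A hyperbola in standard position, its two asymptotes, and the single
% quadratic equation describing their union:
\[
  \frac{x^2}{a^2} - \frac{y^2}{b^2} = 1,
  \qquad
  y = \pm \frac{b}{a}\, x,
  \qquad
  \frac{x^2}{a^2} - \frac{y^2}{b^2} = 0 .
\]
% A hyperboloid of one sheet and its asymptotic cone:
\[
  \frac{x^2}{a^2} + \frac{y^2}{b^2} - \frac{z^2}{c^2} = 1,
  \qquad
  \frac{x^2}{a^2} + \frac{y^2}{b^2} - \frac{z^2}{c^2} = 0 .
\]
```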
Mathematics
Mathematical analysis
null
3118
https://en.wikipedia.org/wiki/Arithmetic
Arithmetic
Arithmetic is an elementary branch of mathematics that studies numerical operations like addition, subtraction, multiplication, and division. In a wider sense, it also includes exponentiation, extraction of roots, and taking logarithms. Arithmetic systems can be distinguished based on the type of numbers they operate on. Integer arithmetic is about calculations with positive and negative integers. Rational number arithmetic involves operations on fractions of integers. Real number arithmetic is about calculations with real numbers, which include both rational and irrational numbers. Another distinction is based on the numeral system employed to perform calculations. Decimal arithmetic is the most common. It uses the basic numerals from 0 to 9 and their combinations to express numbers. Binary arithmetic, by contrast, is used by most computers and represents numbers as combinations of the basic numerals 0 and 1. Computer arithmetic deals with the specificities of the implementation of binary arithmetic on computers. Some arithmetic systems operate on mathematical objects other than numbers, such as interval arithmetic and matrix arithmetic. Arithmetic operations form the basis of many branches of mathematics, such as algebra, calculus, and statistics. They play a similar role in the sciences, like physics and economics. Arithmetic is present in many aspects of daily life, for example, to calculate change while shopping or to manage personal finances. It is one of the earliest forms of mathematics education that students encounter. Its cognitive and conceptual foundations are studied by psychology and philosophy. The practice of arithmetic is at least thousands and possibly tens of thousands of years old. Ancient civilizations like the Egyptians and the Sumerians invented numeral systems to solve practical arithmetic problems in about 3000 BCE. Starting in the 7th and 6th centuries BCE, the ancient Greeks initiated a more abstract study of numbers and introduced the method of rigorous mathematical proofs. The ancient Indians developed the concept of zero and the decimal system, which Arab mathematicians further refined and spread to the Western world during the medieval period. The first mechanical calculators were invented in the 17th century. The 18th and 19th centuries saw the development of modern number theory and the formulation of axiomatic foundations of arithmetic. In the 20th century, the emergence of electronic calculators and computers revolutionized the accuracy and speed with which arithmetic calculations could be performed. Definition, etymology, and related fields Arithmetic is the fundamental branch of mathematics that studies numbers and their operations. In particular, it deals with numerical calculations using the arithmetic operations of addition, subtraction, multiplication, and division. In a wider sense, it also includes exponentiation, extraction of roots, and logarithm. The term arithmetic has its root in the Latin term which derives from the Ancient Greek words (arithmos), meaning , and (arithmetike tekhne), meaning . There are disagreements about its precise definition. According to a narrow characterization, arithmetic deals only with natural numbers. However, the more common view is to include operations on integers, rational numbers, real numbers, and sometimes also complex numbers in its scope. Some definitions restrict arithmetic to the field of numerical calculations. 
When understood in a wider sense, it also includes the study of how the concept of numbers developed, the analysis of properties of and relations between numbers, and the examination of the axiomatic structure of arithmetic operations. Arithmetic is closely related to number theory and some authors use the terms as synonyms. However, in a more specific sense, number theory is restricted to the study of integers and focuses on their properties and relationships such as divisibility, factorization, and primality. Traditionally, it is known as higher arithmetic. Numbers Numbers are mathematical objects used to count quantities and measure magnitudes. They are fundamental elements in arithmetic since all arithmetic operations are performed on numbers. There are different kinds of numbers and different numeral systems to represent them. Kinds The main kinds of numbers employed in arithmetic are natural numbers, whole numbers, integers, rational numbers, and real numbers. The natural numbers are whole numbers that start from 1 and go to infinity. They exclude 0 and negative numbers. They are also known as counting numbers and can be expressed as . The symbol of the natural numbers is . The whole numbers are identical to the natural numbers with the only difference being that they include 0. They can be represented as and have the symbol . Some mathematicians do not draw the distinction between the natural and the whole numbers by including 0 in the set of natural numbers. The set of integers encompasses both positive and negative whole numbers. It has the symbol and can be expressed as . Based on how natural and whole numbers are used, they can be distinguished into cardinal and ordinal numbers. Cardinal numbers, like one, two, and three, are numbers that express the quantity of objects. They answer the question "how many?". Ordinal numbers, such as first, second, and third, indicate order or placement in a series. They answer the question "what position?". A number is rational if it can be represented as the ratio of two integers. For instance, the rational number is formed by dividing the integer 1, called the numerator, by the integer 2, called the denominator. Other examples are and . The set of rational numbers includes all integers, which are fractions with a denominator of 1. The symbol of the rational numbers is . Decimal fractions like 0.3 and 25.12 are a special type of rational numbers since their denominator is a power of 10. For instance, 0.3 is equal to , and 25.12 is equal to . Every rational number corresponds to a finite or a repeating decimal. Irrational numbers are numbers that cannot be expressed through the ratio of two integers. They are often required to describe geometric magnitudes. For example, if a right triangle has legs of the length 1 then the length of its hypotenuse is given by the irrational number . is another irrational number and describes the ratio of a circle's circumference to its diameter. The decimal representation of an irrational number is infinite without repeating decimals. The set of rational numbers together with the set of irrational numbers makes up the set of real numbers. The symbol of the real numbers is . Even wider classes of numbers include complex numbers and quaternions. Numeral systems A numeral is a symbol to represent a number and numeral systems are representational frameworks. They usually have a limited amount of basic numerals, which directly refer to certain numbers. 
The system governs how these basic numerals may be combined to express any number. Numeral systems are either positional or non-positional. All early numeral systems were non-positional. For non-positional numeral systems, the value of a digit does not depend on its position in the numeral. The simplest non-positional system is the unary numeral system. It relies on one symbol for the number 1. All higher numbers are written by repeating this symbol. For example, the number 7 can be represented by repeating the symbol for 1 seven times. This system makes it cumbersome to write large numbers, which is why many non-positional systems include additional symbols to directly represent larger numbers. Variations of the unary numeral systems are employed in tally sticks using dents and in tally marks. Egyptian hieroglyphics had a more complex non-positional numeral system. They have additional symbols for numbers like 10, 100, 1000, and 10,000. These symbols can be combined into a sum to more conveniently express larger numbers. For instance, the numeral for 10,405 uses one time the symbol for 10,000, four times the symbol for 100, and five times the symbol for 1. A similar well-known framework is the Roman numeral system. It has the symbols I, V, X, L, C, D, M as its basic numerals to represent the numbers 1, 5, 10, 50, 100, 500, and 1000. A numeral system is positional if the position of a basic numeral in a compound expression determines its value. Positional numeral systems have a radix that acts as a multiplicand of the different positions. For each subsequent position, the radix is raised to a higher power. In the common decimal system, also called the Hindu–Arabic numeral system, the radix is 10. This means that the first digit is multiplied by , the next digit is multiplied by , and so on. For example, the decimal numeral 532 stands for . Because of the effect of the digits' positions, the numeral 532 differs from the numerals 325 and 253 even though they have the same digits. Another positional numeral system used extensively in computer arithmetic is the binary system, which has a radix of 2. This means that the first digit is multiplied by , the next digit by , and so on. For example, the number 13 is written as 1101 in the binary notation, which stands for . In computing, each digit in the binary notation corresponds to one bit. The earliest positional system was developed by ancient Babylonians and had a radix of 60. Operations Arithmetic operations are ways of combining, transforming, or manipulating numbers. They are functions that have numbers both as input and output. The most important operations in arithmetic are addition, subtraction, multiplication, and division. Further operations include exponentiation, extraction of roots, and logarithm. If these operations are performed on variables rather than numbers, they are sometimes referred to as algebraic operations. Two important concepts in relation to arithmetic operations are identity elements and inverse elements. The identity element or neutral element of an operation does not cause any change if it is applied to another element. For example, the identity element of addition is 0 since any sum of a number and 0 results in the same number. The inverse element is the element that results in the identity element when combined with another element. For instance, the additive inverse of the number 6 is -6 since their sum is 0. There are not only inverse elements but also inverse operations. 
In an informal sense, one operation is the inverse of another operation if it undoes the first operation. For example, subtraction is the inverse of addition since a number returns to its original value if a second number is first added and subsequently subtracted, as in . Defined more formally, the operation "" is an inverse of the operation "" if it fulfills the following condition: if and only if . Commutativity and associativity are laws governing the order in which some arithmetic operations can be carried out. An operation is commutative if the order of the arguments can be changed without affecting the results. This is the case for addition, for instance, is the same as . Associativity is a rule that affects the order in which a series of operations can be carried out. An operation is associative if, in a series of two operations, it does not matter which operation is carried out first. This is the case for multiplication, for example, since is the same as . Addition and subtraction Addition is an arithmetic operation in which two numbers, called the addends, are combined into a single number, called the sum. The symbol of addition is . Examples are and . The term summation is used if several additions are performed in a row. Counting is a type of repeated addition in which the number 1 is continuously added. Subtraction is the inverse of addition. In it, one number, known as the subtrahend, is taken away from another, known as the minuend. The result of this operation is called the difference. The symbol of subtraction is . Examples are and . Subtraction is often treated as a special case of addition: instead of subtracting a positive number, it is also possible to add a negative number. For instance . This helps to simplify mathematical computations by reducing the number of basic arithmetic operations needed to perform calculations. The additive identity element is 0 and the additive inverse of a number is the negative of that number. For instance, and . Addition is both commutative and associative. Multiplication and division Multiplication is an arithmetic operation in which two numbers, called the multiplier and the multiplicand, are combined into a single number called the product. The symbols of multiplication are , , and *. Examples are and . If the multiplicand is a natural number then multiplication is the same as repeated addition, as in . Division is the inverse of multiplication. In it, one number, known as the dividend, is split into several equal parts by another number, known as the divisor. The result of this operation is called the quotient. The symbols of division are and . Examples are and . Division is often treated as a special case of multiplication: instead of dividing by a number, it is also possible to multiply by its reciprocal. The reciprocal of a number is 1 divided by that number. For instance, . The multiplicative identity element is 1 and the multiplicative inverse of a number is the reciprocal of that number. For example, and . Multiplication is both commutative and associative. Exponentiation and logarithm Exponentiation is an arithmetic operation in which a number, known as the base, is raised to the power of another number, known as the exponent. The result of this operation is called the power. Exponentiation is sometimes expressed using the symbol ^ but the more common way is to write the exponent in superscript right after the base. Examples are and ^. 
If the exponent is a natural number then exponentiation is the same as repeated multiplication, as in 2^4 = 2 × 2 × 2 × 2. Roots are a special type of exponentiation using a fractional exponent. For example, the square root of a number is the same as raising the number to the power of 1/2 and the cube root of a number is the same as raising the number to the power of 1/3. Examples are 4^(1/2) = 2 and 27^(1/3) = 3. Logarithm is the inverse of exponentiation. The logarithm of a number x to the base b is the exponent to which b must be raised to produce x. For instance, since 10^3 = 1000, the logarithm base 10 of 1000 is 3. The logarithm of x to base b is denoted as log_b(x), or without parentheses, log_b x, or even without the explicit base, log x, when the base can be understood from context. So, the previous example can be written log_10(1000) = 3. Exponentiation and logarithm do not have general identity elements and inverse elements like addition and multiplication. The neutral element of exponentiation in relation to the exponent is 1, as in b^1 = b. However, exponentiation does not have a general identity element since 1 is not the neutral element for the base. Exponentiation and logarithm are neither commutative nor associative. Types Different types of arithmetic systems are discussed in the academic literature. They differ from each other based on what type of number they operate on, what numeral system they use to represent them, and whether they operate on mathematical objects other than numbers. Integer arithmetic Integer arithmetic is the branch of arithmetic that deals with the manipulation of positive and negative whole numbers. Simple one-digit operations can be performed by following or memorizing a table that presents the results of all possible combinations, like an addition table or a multiplication table. Other common methods are verbal counting and finger-counting. For operations on numbers with more than one digit, different techniques can be employed to calculate the result by using several one-digit operations in a row. For example, in the method of addition with carries, the two numbers are written one above the other. Starting from the rightmost digit, each pair of digits is added together. The rightmost digit of the sum is written below them. If the sum is a two-digit number then the leftmost digit, called the "carry", is added to the next pair of digits to the left. This process is repeated until all digits have been added. Other methods used for integer additions are the number line method, the partial sum method, and the compensation method. A similar technique is utilized for subtraction: it also starts with the rightmost digit and uses a "borrow" or a negative carry for the column on the left if the result of the one-digit subtraction is negative. A basic technique of integer multiplication employs repeated addition. For example, the product of 3 × 4 can be calculated as 3 + 3 + 3 + 3. A common technique for multiplication with larger numbers is called long multiplication. This method starts by writing the multiplier above the multiplicand. The calculation begins by multiplying the multiplier only with the rightmost digit of the multiplicand and writing the result below, starting in the rightmost column. The same is done for each digit of the multiplicand and the result in each case is shifted one position to the left. As a final step, all the individual products are added to arrive at the total product of the two multi-digit numbers (a short sketch of this procedure is given after this passage). Other techniques used for multiplication are the grid method and the lattice method. 
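The long multiplication procedure just described can be written out directly. The following sketch is illustrative Python, not part of the original text; the digit-list representation and the helper name long_multiply are choices made for this example, and the partial products are formed from the digits of the second factor (the roles of the two numbers are interchangeable).

```python
# Illustrative sketch: long multiplication on lists of decimal digits
# (most significant digit first), mirroring the column method described above.
def long_multiply(multiplicand, multiplier):
    partial_products = []
    for shift, digit in enumerate(reversed(multiplier)):
        carry = 0
        partial = []
        for d in reversed(multiplicand):           # one-digit multiplications
            carry, lower = divmod(d * digit + carry, 10)
            partial.append(lower)
        if carry:
            partial.append(carry)
        # Each partial product is shifted one position left per multiplier digit.
        partial_products.append(list(reversed(partial)) + [0] * shift)
    # Add the partial products (ordinary integer addition is used here for brevity).
    total = sum(int(''.join(map(str, p))) for p in partial_products)
    return [int(c) for c in str(total)]

# 345 × 12 = 4140
print(long_multiply([3, 4, 5], [1, 2]))  # [4, 1, 4, 0]
```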
Computer science is interested in multiplication algorithms with a low computational complexity to be able to efficiently multiply very large integers, such as the Karatsuba algorithm, the Schönhage–Strassen algorithm, and the Toom–Cook algorithm. A common technique used for division is called long division. Other methods include short division and chunking. Integer arithmetic is not closed under division. This means that when dividing one integer by another integer, the result is not always an integer. For instance, 7 divided by 2 is not a whole number but 3.5. One way to ensure that the result is an integer is to round the result to a whole number. However, this method leads to inaccuracies as the original value is altered. Another method is to perform the division only partially and retain the remainder. For example, 7 divided by 2 is 3 with a remainder of 1. These difficulties are avoided by rational number arithmetic, which allows for the exact representation of fractions. A simple method to calculate exponentiation is by repeated multiplication. For instance, the exponentiation of can be calculated as . A more efficient technique used for large exponents is exponentiation by squaring. It breaks down the calculation into a number of squaring operations. For example, the exponentiation can be written as . By taking advantage of repeated squaring operations, only 7 individual operations are needed rather than the 64 operations required for regular repeated multiplication. Methods to calculate logarithms include the Taylor series and continued fractions. Integer arithmetic is not closed under logarithm and under exponentiation with negative exponents, meaning that the result of these operations is not always an integer. Number theory Number theory studies the structure and properties of integers as well as the relations and laws between them. Some of the main branches of modern number theory include elementary number theory, analytic number theory, algebraic number theory, and geometric number theory. Elementary number theory studies aspects of integers that can be investigated using elementary methods. Its topics include divisibility, factorization, and primality. Analytic number theory, by contrast, relies on techniques from analysis and calculus. It examines problems like how prime numbers are distributed and the claim that every even number is a sum of two prime numbers. Algebraic number theory employs algebraic structures to analyze the properties of and relations between numbers. Examples are the use of fields and rings, as in algebraic number fields like the ring of integers. Geometric number theory uses concepts from geometry to study numbers. For instance, it investigates how lattice points with integer coordinates behave in a plane. Further branches of number theory are probabilistic number theory, which employs methods from probability theory, combinatorial number theory, which relies on the field of combinatorics, computational number theory, which approaches number-theoretic problems with computational methods, and applied number theory, which examines the application of number theory to fields like physics, biology, and cryptography. Influential theorems in number theory include the fundamental theorem of arithmetic, Euclid's theorem, and Fermat's Last Theorem. According to the fundamental theorem of arithmetic, every integer greater than 1 is either a prime number or can be represented as a unique product of prime numbers. 
For example, the number 18 is not a prime number and can be represented as 2 × 3 × 3, the factors of which are all prime numbers. The number 19, by contrast, is a prime number that has no other prime factorization. Euclid's theorem states that there are infinitely many prime numbers. Fermat's Last Theorem is the statement that no positive integer values exist for a, b, and c that solve the equation a^n + b^n = c^n if n is greater than 2. Rational number arithmetic Rational number arithmetic is the branch of arithmetic that deals with the manipulation of numbers that can be expressed as a ratio of two integers. Most arithmetic operations on rational numbers can be calculated by performing a series of integer arithmetic operations on the numerators and the denominators of the involved numbers. If two rational numbers have the same denominator then they can be added by adding their numerators and keeping the common denominator. For example, 2/7 + 3/7 = 5/7. A similar procedure is used for subtraction. If the two numbers do not have the same denominator then they must be transformed to find a common denominator. This can be achieved by scaling the first number with the denominator of the second number while scaling the second number with the denominator of the first number. For instance, 1/3 + 1/4 = 4/12 + 3/12 = 7/12. Two rational numbers are multiplied by multiplying their numerators and their denominators respectively, as in 2/3 × 4/5 = 8/15. Dividing one rational number by another can be achieved by multiplying the first number with the reciprocal of the second number. This means that the numerator and the denominator of the second number change position. For example, (2/3) / (4/5) = 2/3 × 5/4 = 10/12 = 5/6. Unlike integer arithmetic, rational number arithmetic is closed under division as long as the divisor is not 0. Both integer arithmetic and rational number arithmetic are not closed under exponentiation and logarithm. One way to calculate exponentiation with a fractional exponent is to perform two separate calculations: one exponentiation using the numerator of the exponent followed by drawing the nth root of the result based on the denominator of the exponent. For example, 4^(3/2) = (4^3)^(1/2) = 64^(1/2) = 8. The first operation can be completed using methods like repeated multiplication or exponentiation by squaring. One way to get an approximate result for the second operation is to employ Newton's method, which uses a series of steps to gradually refine an initial guess until it reaches the desired level of accuracy. The Taylor series or the continued fraction method can be utilized to calculate logarithms. The decimal fraction notation is a special way of representing rational numbers whose denominator is a power of 10. For instance, the rational numbers 1/10, 371/100, and 44/10000 are written as 0.1, 3.71, and 0.0044 in the decimal fraction notation. Modified versions of integer calculation methods like addition with carry and long multiplication can be applied to calculations with decimal fractions. Not all rational numbers have a finite representation in the decimal notation. For example, the rational number 1/3 corresponds to 0.333... with an infinite number of 3s. The shortened notation for this type of repeating decimal places a bar over the repeating digit, as in 0.3 with a bar over the 3. Every repeating decimal expresses a rational number. Real number arithmetic Real number arithmetic is the branch of arithmetic that deals with the manipulation of both rational and irrational numbers. Irrational numbers are numbers that cannot be expressed through fractions or repeated decimals, like the square root of 2 and π. 
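The fraction manipulations described in the passage above can also be checked mechanically before turning to the real numbers in detail. The sketch below is illustrative only and uses Python's standard-library fractions module; the particular fractions chosen are arbitrary.

```python
# Illustrative sketch: exact rational arithmetic with the standard-library
# fractions module, mirroring the rules stated above.
from fractions import Fraction

a = Fraction(1, 3)
b = Fraction(1, 4)

print(a + b)   # 7/12  (both fractions are first brought to the denominator 12)
print(a * b)   # 1/12  (numerators and denominators are multiplied respectively)
print(a / b)   # 4/3   (multiply by the reciprocal of the divisor)

# Unlike binary floating point, the representation stays exact:
print(Fraction(1, 3) * 3 == 1)   # True
```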
Unlike rational number arithmetic, real number arithmetic is closed under exponentiation as long as it uses a positive number as its base. The same is true for the logarithm of positive real numbers as long as the logarithm base is positive and not 1. Irrational numbers involve an infinite non-repeating series of decimal digits. Because of this, there is often no simple and accurate way to express the results of arithmetic operations on them exactly. In cases where absolute precision is not required, the problem of calculating arithmetic operations on real numbers is usually addressed by truncation or rounding. For truncation, a certain number of leftmost digits are kept and remaining digits are discarded or replaced by zeros. For example, the number π has an infinite number of digits starting with 3.14159.... If this number is truncated after three decimal places, the result is 3.141. Rounding is a similar process in which the last preserved digit is increased by one if the next digit is 5 or greater but remains the same if the next digit is less than 5, so that the rounded number is the best approximation of a given precision for the original number. For instance, if π is rounded to three decimal places, the result is 3.142 because the following digit is a 5, so 3.142 is closer to π than 3.141. These methods allow computers to efficiently perform approximate calculations on real numbers. Approximations and errors In science and engineering, numbers represent estimates of physical quantities derived from measurement or modeling. Unlike mathematically exact numbers such as π or the square root of 2, scientifically relevant numerical data are inherently inexact, involving some measurement uncertainty. One basic way to express the degree of certainty about each number's value and avoid false precision is to round each measurement to a certain number of digits, called significant digits, which are implied to be accurate. For example, a person's height measured with a tape measure might only be precisely known to the nearest centimeter, so should be presented as 1.62 meters rather than 1.6217 meters. If converted to imperial units, this quantity should be rounded to 64 inches or 63.8 inches rather than 63.7795 inches, to clearly convey the precision of the measurement. When a number is written using ordinary decimal notation, leading zeros are not significant, and trailing zeros of numbers not written with a decimal point are implicitly considered to be non-significant. For example, the numbers 0.056 and 1200 each have only 2 significant digits, but the number 40.00 has 4 significant digits. Representing uncertainty using only significant digits is a relatively crude method, with some unintuitive subtleties; explicitly keeping track of an estimate or upper bound of the approximation error is a more sophisticated approach. In the example, the person's height might be represented as 1.62 ± 0.005 meters. In performing calculations with uncertain quantities, the uncertainty should be propagated to calculated quantities. When adding or subtracting two or more quantities, add the absolute uncertainties of each summand together to obtain the absolute uncertainty of the sum. When multiplying or dividing two or more quantities, add the relative uncertainties of each factor together to obtain the relative uncertainty of the product. 
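As a concrete illustration of these two propagation rules (a sketch with made-up numbers, not data from the text), consider adding and multiplying two measured lengths.

```python
# Illustrative sketch of the propagation rules stated above (made-up numbers).
# For a sum, the absolute uncertainties add; for a product, the relative
# uncertainties add.
length_a, da = 1.62, 0.005   # metres and absolute uncertainty
length_b, db = 0.74, 0.005

total = length_a + length_b
d_total = da + db                              # absolute uncertainties add
print(f"sum = {total:.2f} ± {d_total:.3f} m")

area = length_a * length_b
rel_area = da / length_a + db / length_b       # relative uncertainties add
print(f"product = {area:.3f} ± {rel_area * area:.3f} m^2")
```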
When representing uncertainty by significant digits, uncertainty can be coarsely propagated by rounding the result of adding or subtracting two or more quantities to the leftmost last significant decimal place among the summands, and by rounding the result of multiplying or dividing two or more quantities to the least number of significant digits among the factors. More sophisticated methods of dealing with uncertain values include interval arithmetic and affine arithmetic. Interval arithmetic describes operations on intervals. Intervals can be used to represent a range of values if one does not know the precise magnitude, for example, because of measurement errors. Interval arithmetic includes operations like addition and multiplication on intervals, as in [1, 2] + [3, 4] = [4, 6] and [1, 2] × [3, 4] = [3, 8]. It is closely related to affine arithmetic, which aims to give more precise results by performing calculations on affine forms rather than intervals. An affine form is a number together with error terms that describe how the number may deviate from the actual magnitude. The precision of numerical quantities can be expressed uniformly using normalized scientific notation, which is also convenient for concisely representing numbers which are much larger or smaller than 1. Using scientific notation, a number is decomposed into the product of a number between 1 and 10, called the significand, and 10 raised to some integer power, called the exponent. The significand consists of the significant digits of the number, and is written as a leading digit 1–9 followed by a decimal point and a sequence of digits 0–9. For example, the normalized scientific notation of the number 8276000 is 8.276 × 10^6, with significand 8.276 and exponent 6, and the normalized scientific notation of the number 0.00735 is 7.35 × 10^−3, with significand 7.35 and exponent −3. Unlike ordinary decimal notation, where trailing zeros of large numbers are implicitly considered to be non-significant, in scientific notation every digit in the significand is considered significant, and adding trailing zeros indicates higher precision. For example, while the number 1200 implicitly has only 2 significant digits, the number 1.20 × 10^3 explicitly has 3. A common method employed by computers to approximate real number arithmetic is called floating-point arithmetic. It represents real numbers similar to the scientific notation through three numbers: a significand, a base, and an exponent. The precision of the significand is limited by the number of bits allocated to represent it. If an arithmetic operation results in a number that requires more bits than are available, the computer rounds the result to the closest representable number. This leads to rounding errors. A consequence of this behavior is that certain laws of arithmetic are violated by floating-point arithmetic. For example, floating-point addition is not associative since the rounding errors introduced can depend on the order of the additions. This means that the result of (a + b) + c is sometimes different from the result of a + (b + c). The most common technical standard used for floating-point arithmetic is called IEEE 754. Among other things, it determines how numbers are represented, how arithmetic operations and rounding are performed, and how errors and exceptions are handled. In cases where computation speed is not a limiting factor, it is possible to use arbitrary-precision arithmetic, for which the precision of calculations is only restricted by the computer's memory. 
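The rounding behaviour just described is easy to observe directly. The sketch below is illustrative only (standard Python, whose float type is an IEEE 754 double); it shows one case where floating-point addition fails to be associative, and how an arbitrary-precision decimal type avoids that particular error.

```python
# Illustrative sketch: rounding makes floating-point addition non-associative,
# while arbitrary-precision decimals handle this particular example exactly.
from decimal import Decimal, getcontext

a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c    # 1.0: the two large terms cancel before the small one is added
right = a + (b + c)   # 0.0: adding 1.0 to -1e16 is lost to rounding first
print(left, right, left == right)   # 1.0 0.0 False

getcontext().prec = 30              # plenty of decimal digits for this example
d_left = (Decimal(a) + Decimal(b)) + Decimal(c)
d_right = Decimal(a) + (Decimal(b) + Decimal(c))
print(d_left == d_right)            # True
```

Arbitrary precision does not remove rounding in general; it only moves the limit to however many digits the configured precision and available memory allow.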
Tool use Forms of arithmetic can also be distinguished by the tools employed to perform calculations and include many approaches besides the regular use of pen and paper. Mental arithmetic relies exclusively on the mind without external tools. Instead, it utilizes visualization, memorization, and certain calculation techniques to solve arithmetic problems. One such technique is the compensation method, which consists in altering the numbers to make the calculation easier and then adjusting the result afterward. For example, instead of calculating , one calculates which is easier because it uses a round number. In the next step, one adds to the result to compensate for the earlier adjustment. Mental arithmetic is often taught in primary education to train the numerical abilities of the students. The human body can also be employed as an arithmetic tool. The use of hands in finger counting is often introduced to young children to teach them numbers and simple calculations. In its most basic form, the number of extended fingers corresponds to the represented quantity and arithmetic operations like addition and subtraction are performed by extending or retracting fingers. This system is limited to small numbers compared to more advanced systems which employ different approaches to represent larger quantities. The human voice is used as an arithmetic aid in verbal counting. Tally marks are a simple system based on external tools other than the body. This system relies on mark making, such as strokes drawn on a surface or notches carved into a wooden stick, to keep track of quantities. Some forms of tally marks arrange the strokes in groups of five to make them easier to read. The abacus is a more advanced tool to represent numbers and perform calculations. An abacus usually consists of a series of rods, each holding several beads. Each bead represents a quantity, which is counted if the bead is moved from one end of a rod to the other. Calculations happen by manipulating the positions of beads until the final bead pattern reveals the result. Related aids include counting boards, which use tokens whose value depends on the area on the board in which they are placed, and counting rods, which are arranged in horizontal and vertical patterns to represent different numbers. Sectors and slide rules are more refined calculating instruments that rely on geometric relationships between different scales to perform both basic and advanced arithmetic operations. Printed tables were particularly relevant as an aid to look up the results of operations like logarithm and trigonometric functions. Mechanical calculators automate manual calculation processes. They present the user with some form of input device to enter numbers by turning dials or pressing keys. They include an internal mechanism usually consisting of gears, levers, and wheels to perform calculations and display the results. For electronic calculators and computers, this procedure is further refined by replacing the mechanical components with electronic circuits like microprocessors that combine and transform electric signals to perform calculations. Others There are many other types of arithmetic. Modular arithmetic operates on a finite set of numbers. If an operation would result in a number outside this finite set then the number is adjusted back into the set, similar to how the hands of clocks start at the beginning again after having completed one cycle. The number at which this adjustment happens is called the modulus. 
For example, a regular clock has a modulus of 12. In the case of adding 4 to 9, this means that the result is not 13 but 1. The same principle applies also to other operations, such as subtraction, multiplication, and division. Some forms of arithmetic deal with operations performed on mathematical objects other than numbers. Interval arithmetic describes operations on intervals. Vector arithmetic and matrix arithmetic describe arithmetic operations on vectors and matrices, like vector addition and matrix multiplication. Arithmetic systems can be classified based on the numeral system they rely on. For instance, decimal arithmetic describes arithmetic operations in the decimal system. Other examples are binary arithmetic, octal arithmetic, and hexadecimal arithmetic. Compound unit arithmetic describes arithmetic operations performed on magnitudes with compound units. It involves additional operations to govern the transformation between single unit and compound unit quantities. For example, the operation of reduction is used to transform the compound quantity 1 h 90 min into the single unit quantity 150 min. Non-Diophantine arithmetics are arithmetic systems that violate traditional arithmetic intuitions and include equations like and . They can be employed to represent some real-world situations in modern physics and everyday life. For instance, the equation can be used to describe the observation that if one raindrop is added to another raindrop then they do not remain two separate entities but become one. Axiomatic foundations Axiomatic foundations of arithmetic try to provide a small set of laws, called axioms, from which all fundamental properties of and operations on numbers can be derived. They constitute logically consistent and systematic frameworks that can be used to formulate mathematical proofs in a rigorous manner. Two well-known approaches are the Dedekind–Peano axioms and set-theoretic constructions. The Dedekind–Peano axioms provide an axiomatization of the arithmetic of natural numbers. Their basic principles were first formulated by Richard Dedekind and later refined by Giuseppe Peano. They rely only on a small number of primitive mathematical concepts, such as 0, natural number, and successor. The Peano axioms determine how these concepts are related to each other. All other arithmetic concepts can then be defined in terms of these primitive concepts. 0 is a natural number. For every natural number, there is a successor, which is also a natural number. The successors of two different natural numbers are never identical. 0 is not the successor of a natural number. If a set contains 0 and every successor then it contains every natural number. Numbers greater than 0 are expressed by repeated application of the successor function . For example, is and is . Arithmetic operations can be defined as mechanisms that affect how the successor function is applied. For instance, to add to any number is the same as applying the successor function two times to this number. Various axiomatizations of arithmetic rely on set theory. They cover natural numbers but can also be extended to integers, rational numbers, and real numbers. Each natural number is represented by a unique set. 0 is usually defined as the empty set . Each subsequent number can be defined as the union of the previous number with the set containing the previous number. For example, , , and . Integers can be defined as ordered pairs of natural numbers where the second number is subtracted from the first one. 
For instance, the pair (9, 0) represents the number 9 while the pair (0, 9) represents the number -9. Rational numbers are defined as pairs of integers where the first number represents the numerator and the second number represents the denominator. For example, the pair (3, 7) represents the rational number 3/7. One way to construct the real numbers relies on the concept of Dedekind cuts. According to this approach, each real number is represented by a partition of all rational numbers into two sets, one for all numbers below the represented real number and the other for the rest. Arithmetic operations are defined as functions that perform various set-theoretic transformations on the sets representing the input numbers to arrive at the set representing the result. History The earliest forms of arithmetic are sometimes traced back to counting and tally marks used to keep track of quantities. Some historians suggest that the Lebombo bone (dated about 43,000 years ago) and the Ishango bone (dated about 22,000 to 30,000 years ago) are the oldest arithmetic artifacts, but this interpretation is disputed. However, a basic sense of numbers may predate these findings and might even have existed before the development of language. It was not until the emergence of ancient civilizations that a more complex and structured approach to arithmetic began to evolve, starting around 3000 BCE. This became necessary because of the increased need to keep track of stored items, manage land ownership, and arrange exchanges. All the major ancient civilizations developed non-positional numeral systems to facilitate the representation of numbers. They also had symbols for operations like addition and subtraction and were aware of fractions. Examples are Egyptian hieroglyphics as well as the numeral systems invented in Sumeria, China, and India. The first positional numeral system was developed by the Babylonians starting around 1800 BCE. This was a significant improvement over earlier numeral systems since it made the representation of large numbers and calculations on them more efficient. Abacuses have been utilized as hand-operated calculating tools since ancient times, providing an efficient means of performing complex calculations. Early civilizations primarily used numbers for concrete practical purposes, like commercial activities and tax records, but lacked an abstract concept of number itself. This changed with the ancient Greek mathematicians, who began to explore the abstract nature of numbers rather than studying how they are applied to specific problems. Another novel feature was their use of proofs to establish mathematical truths and validate theories. A further contribution was their distinction of various classes of numbers, such as even numbers, odd numbers, and prime numbers. This included the discovery that numbers for certain geometrical lengths are irrational and therefore cannot be expressed as a fraction. The works of Thales of Miletus and Pythagoras in the 7th and 6th centuries BCE are often regarded as the inception of Greek mathematics. Diophantus was an influential figure in Greek arithmetic in the 3rd century CE because of his numerous contributions to number theory and his exploration of the application of arithmetic operations to algebraic equations. The ancient Indians were the first to develop the concept of zero as a number to be used in calculations. The exact rules of its operation were written down by Brahmagupta in around 628 CE.
The concept of zero or none existed long before, but it was not considered an object of arithmetic operations. Brahmagupta further provided a detailed discussion of calculations with negative numbers and their application to problems like credit and debt. The concept of negative numbers itself is significantly older and was first explored in Chinese mathematics in the first millennium BCE. Indian mathematicians also developed the positional decimal system used today, in particular the concept of a zero digit instead of empty or missing positions. For example, a detailed treatment of its operations was provided by Aryabhata around the turn of the 6th century CE. The Indian decimal system was further refined and expanded to non-integers during the Islamic Golden Age by Middle Eastern mathematicians such as Al-Khwarizmi. His work was influential in introducing the decimal numeral system to the Western world, which at that time relied on the Roman numeral system. There, it was popularized by mathematicians like Leonardo Fibonacci, who lived in the 12th and 13th centuries and also developed the Fibonacci sequence. During the Middle Ages and Renaissance, many popular textbooks were published to cover the practical calculations for commerce. The use of abacuses also became widespread in this period. In the 16th century, the mathematician Gerolamo Cardano conceived the concept of complex numbers as a way to solve cubic equations. The first mechanical calculators were developed in the 17th century and greatly facilitated complex mathematical calculations, such as Blaise Pascal's calculator and Gottfried Wilhelm Leibniz's stepped reckoner. The 17th century also saw the discovery of the logarithm by John Napier. In the 18th and 19th centuries, mathematicians such as Leonhard Euler and Carl Friedrich Gauss laid the foundations of modern number theory. Another development in this period concerned work on the formalization and foundations of arithmetic, such as Georg Cantor's set theory and the Dedekind–Peano axioms used as an axiomatization of natural-number arithmetic. Computers and electronic calculators were first developed in the 20th century. Their widespread use revolutionized both the accuracy and speed with which even complex arithmetic computations can be calculated. In various fields Education Arithmetic education forms part of primary education. It is one of the first forms of mathematics education that children encounter. Elementary arithmetic aims to give students a basic sense of numbers and to familiarize them with fundamental numerical operations like addition, subtraction, multiplication, and division. It is usually introduced in relation to concrete scenarios, like counting beads, dividing the class into groups of children of the same size, and calculating change when buying items. Common tools in early arithmetic education are number lines, addition and multiplication tables, counting blocks, and abacuses. Later stages focus on a more abstract understanding and introduce the students to different types of numbers, such as negative numbers, fractions, real numbers, and complex numbers. They further cover more advanced numerical operations, like exponentiation, extraction of roots, and logarithm. They also show how arithmetic operations are employed in other branches of mathematics, such as their application to describe geometrical shapes and the use of variables in algebra. 
Another aspect is to teach the students the use of algorithms and calculators to solve complex arithmetic problems. Psychology The psychology of arithmetic is interested in how humans and animals learn about numbers, represent them, and use them for calculations. It examines how mathematical problems are understood and solved and how arithmetic abilities are related to perception, memory, judgment, and decision making. For example, it investigates how collections of concrete items are first encountered in perception and subsequently associated with numbers. A further field of inquiry concerns the relation between numerical calculations and the use of language to form representations. Psychology also explores the biological origin of arithmetic as an inborn ability. This concerns pre-verbal and pre-symbolic cognitive processes implementing arithmetic-like operations required to successfully represent the world and perform tasks like spatial navigation. One of the concepts studied by psychology is numeracy, which is the capability to comprehend numerical concepts, apply them to concrete situations, and reason with them. It includes a fundamental number sense as well as being able to estimate and compare quantities. It further encompasses the abilities to symbolically represent numbers in numbering systems, interpret numerical data, and evaluate arithmetic calculations. Numeracy is a key skill in many academic fields. A lack of numeracy can inhibit academic success and lead to bad economic decisions in everyday life, for example, by misunderstanding mortgage plans and insurance policies. Philosophy The philosophy of arithmetic studies the fundamental concepts and principles underlying numbers and arithmetic operations. It explores the nature and ontological status of numbers, the relation of arithmetic to language and logic, and how it is possible to acquire arithmetic knowledge. According to Platonism, numbers have mind-independent existence: they exist as abstract objects outside spacetime and without causal powers. This view is rejected by intuitionists, who claim that mathematical objects are mental constructions. Further theories are logicism, which holds that mathematical truths are reducible to logical truths, and formalism, which states that mathematical principles are rules of how symbols are manipulated without claiming that they correspond to entities outside the rule-governed activity. The traditionally dominant view in the epistemology of arithmetic is that arithmetic truths are knowable a priori. This means that they can be known by thinking alone without the need to rely on sensory experience. Some proponents of this view state that arithmetic knowledge is innate while others claim that there is some form of rational intuition through which mathematical truths can be apprehended. A more recent alternative view was suggested by naturalist philosophers like Willard Van Orman Quine, who argue that mathematical principles are high-level generalizations that are ultimately grounded in the sensory world as described by the empirical sciences. Others Arithmetic is relevant to many fields. In daily life, it is required to calculate change when shopping, manage personal finances, and adjust a cooking recipe for a different number of servings. Businesses use arithmetic to calculate profits and losses and analyze market trends. In the field of engineering, it is used to measure quantities, calculate loads and forces, and design structures. 
Cryptography relies on arithmetic operations to protect sensitive information by encrypting data and messages. Arithmetic is intimately connected to many branches of mathematics that depend on numerical operations. Algebra relies on arithmetic principles to solve equations using variables. These principles also play a key role in calculus in its attempt to determine rates of change and areas under curves. Geometry uses arithmetic operations to measure the properties of shapes while statistics utilizes them to analyze numerical data. Due to the relevance of arithmetic operations throughout mathematics, the influence of arithmetic extends to most sciences such as physics, computer science, and economics. These operations are used in calculations, problem-solving, data analysis, and algorithms, making them integral to scientific research, technological development, and economic modeling.
Mathematics
Mathematics
null
3170
https://en.wikipedia.org/wiki/Arithmetic%20function
Arithmetic function
In number theory, an arithmetic, arithmetical, or number-theoretic function is generally any function f(n) whose domain is the positive integers and whose range is a subset of the complex numbers. Hardy & Wright include in their definition the requirement that an arithmetical function "expresses some arithmetical property of n". There is a larger class of number-theoretic functions that do not fit this definition, for example, the prime-counting functions. This article provides links to functions of both classes. An example of an arithmetic function is the divisor function whose value at a positive integer n is equal to the number of divisors of n. Arithmetic functions are often extremely irregular (see table), but some of them have series expansions in terms of Ramanujan's sum. Multiplicative and additive functions An arithmetic function a is completely additive if a(mn) = a(m) + a(n) for all natural numbers m and n; completely multiplicative if a(mn) = a(m)a(n) for all natural numbers m and n; Two whole numbers m and n are called coprime if their greatest common divisor is 1, that is, if there is no prime number that divides both of them. Then an arithmetic function a is additive if a(mn) = a(m) + a(n) for all coprime natural numbers m and n; multiplicative if a(mn) = a(m)a(n) for all coprime natural numbers m and n. Notation In this article, and mean that the sum or product is over all prime numbers: and Similarly, and mean that the sum or product is over all prime powers with strictly positive exponent (so is not included): The notations and mean that the sum or product is over all positive divisors of n, including 1 and n. For example, if , then The notations can be combined: and mean that the sum or product is over all prime divisors of n. For example, if n = 18, then and similarly and mean that the sum or product is over all prime powers dividing n. For example, if n = 24, then Ω(n), ω(n), νp(n) – prime power decomposition The fundamental theorem of arithmetic states that any positive integer n can be represented uniquely as a product of powers of primes: where p1 < p2 < ... < pk are primes and the aj are positive integers. (1 is given by the empty product.) It is often convenient to write this as an infinite product over all the primes, where all but a finite number have a zero exponent. Define the p-adic valuation νp(n) to be the exponent of the highest power of the prime p that divides n. That is, if p is one of the pi then νp(n) = ai, otherwise it is zero. Then In terms of the above the prime omega functions ω and Ω are defined by To avoid repetition, formulas for the functions listed in this article are, whenever possible, given in terms of n and the corresponding pi, ai, ω, and Ω. Multiplicative functions σk(n), τ(n), d(n) – divisor sums σk(n) is the sum of the kth powers of the positive divisors of n, including 1 and n, where k is a complex number. σ1(n), the sum of the (positive) divisors of n, is usually denoted by σ(n). Since a positive number to the zero power is one, σ0(n) is therefore the number of (positive) divisors of n; it is usually denoted by d(n) or τ(n) (for the German Teiler = divisors). Setting k = 0 in the second product gives φ(n) – Euler totient function φ(n), the Euler totient function, is the number of positive integers not greater than n that are coprime to n. Jk(n) – Jordan totient function Jk(n), the Jordan totient function, is the number of k-tuples of positive integers all less than or equal to n that form a coprime (k + 1)-tuple together with n. 
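Because the formulas in this article are easiest to check on small inputs, the following brute-force Python sketch computes a few of the functions defined above directly from their definitions; it is meant as an illustration only, not an efficient implementation, and none of the helper names are standard library functions.

```python
from math import gcd
from functools import reduce
from itertools import product

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def sigma(k, n):
    """sigma_k(n): the sum of the k-th powers of the positive divisors of n."""
    return sum(d ** k for d in divisors(n))

def prime_factors(n):
    """Prime factors of n with multiplicity, found by trial division."""
    factors, p = [], 2
    while p * p <= n:
        while n % p == 0:
            factors.append(p)
            n //= p
        p += 1
    if n > 1:
        factors.append(n)
    return factors

def little_omega(n):   # omega(n): number of distinct prime divisors
    return len(set(prime_factors(n)))

def big_omega(n):      # Omega(n): prime divisors counted with multiplicity
    return len(prime_factors(n))

def euler_phi(n):
    """phi(n): how many of the integers 1..n are coprime to n."""
    return sum(1 for m in range(1, n + 1) if gcd(m, n) == 1)

def jordan_totient(k, n):
    """J_k(n): k-tuples of integers in 1..n that form a coprime (k+1)-tuple with n."""
    return sum(1 for t in product(range(1, n + 1), repeat=k)
               if reduce(gcd, t + (n,)) == 1)

print(sigma(0, 12), sigma(1, 12))            # d(12) = 6 divisors, sigma(12) = 28
print(little_omega(12), big_omega(12))       # 2 and 3, since 12 = 2^2 * 3
print(euler_phi(12), jordan_totient(1, 12))  # both 4: J_1 coincides with phi
```

The brute-force jordan_totient above simply counts tuples according to the definition of Jk(n).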
It is a generalization of Euler's totient, . μ(n) – Möbius function μ(n), the Möbius function, is important because of the Möbius inversion formula. See , below. This implies that μ(1) = 1. (Because Ω(1) = ω(1) = 0.) τ(n) – Ramanujan tau function τ(n), the Ramanujan tau function, is defined by its generating function identity: Although it is hard to say exactly what "arithmetical property of n" it "expresses", (τ(n) is (2π)−12 times the nth Fourier coefficient in the q-expansion of the modular discriminant function) it is included among the arithmetical functions because it is multiplicative and it occurs in identities involving certain σk(n) and rk(n) functions (because these are also coefficients in the expansion of modular forms). cq(n) – Ramanujan's sum cq(n), Ramanujan's sum, is the sum of the nth powers of the primitive qth roots of unity: Even though it is defined as a sum of complex numbers (irrational for most values of q), it is an integer. For a fixed value of n it is multiplicative in q: If q and r are coprime, then ψ(n) – Dedekind psi function The Dedekind psi function, used in the theory of modular functions, is defined by the formula Completely multiplicative functions λ(n) – Liouville function λ(n), the Liouville function, is defined by χ(n) – characters All Dirichlet characters χ(n) are completely multiplicative. Two characters have special notations: The principal character (mod n) is denoted by χ0(a) (or χ1(a)). It is defined as The quadratic character (mod n) is denoted by the Jacobi symbol for odd n (it is not defined for even n): In this formula is the Legendre symbol, defined for all integers a and all odd primes p by Following the normal convention for the empty product, Additive functions ω(n) – distinct prime divisors ω(n), defined above as the number of distinct primes dividing n, is additive (see Prime omega function). Completely additive functions Ω(n) – prime divisors Ω(n), defined above as the number of prime factors of n counted with multiplicities, is completely additive (see Prime omega function). νp(n) – p-adic valuation of an integer n For a fixed prime p, νp(n), defined above as the exponent of the largest power of p dividing n, is completely additive. Logarithmic derivative , where is the arithmetic derivative. Neither multiplicative nor additive π(x), Π(x), ϑ(x), ψ(x) – prime-counting functions These important functions (which are not arithmetic functions) are defined for non-negative real arguments, and are used in the various statements and proofs of the prime number theorem. They are summation functions (see the main section just below) of arithmetic functions which are neither multiplicative nor additive. π(x), the prime-counting function, is the number of primes not exceeding x. It is the summation function of the characteristic function of the prime numbers. A related function counts prime powers with weight 1 for primes, 1/2 for their squares, 1/3 for cubes, etc. It is the summation function of the arithmetic function which takes the value 1/k on integers which are the kth power of some prime number, and the value 0 on other integers. ϑ(x) and ψ(x), the Chebyshev functions, are defined as sums of the natural logarithms of the primes not exceeding x. The second Chebyshev function ψ(x) is the summation function of the von Mangoldt function just below. 
Λ(n) – von Mangoldt function Λ(n), the von Mangoldt function, is 0 unless the argument n is a prime power , in which case it is the natural logarithm of the prime p: p(n) – partition function p(n), the partition function, is the number of ways of representing n as a sum of positive integers, where two representations with the same summands in a different order are not counted as being different: λ(n) – Carmichael function λ(n), the Carmichael function, is the smallest positive number such that   for all a coprime to n. Equivalently, it is the least common multiple of the orders of the elements of the multiplicative group of integers modulo n. For powers of odd primes and for 2 and 4, λ(n) is equal to the Euler totient function of n; for powers of 2 greater than 4 it is equal to one half of the Euler totient function of n: and for general n it is the least common multiple of λ of each of the prime power factors of n: h(n) – class number h(n), the class number function, is the order of the ideal class group of an algebraic extension of the rationals with discriminant n. The notation is ambiguous, as there are in general many extensions with the same discriminant. See quadratic field and cyclotomic field for classical examples. rk(n) – sum of k squares rk(n) is the number of ways n can be represented as the sum of k squares, where representations that differ only in the order of the summands or in the signs of the square roots are counted as different. D(n) – Arithmetic derivative Using the Heaviside notation for the derivative, the arithmetic derivative D(n) is a function such that if n prime, and (the product rule) Summation functions Given an arithmetic function a(n), its summation function A(x) is defined by A can be regarded as a function of a real variable. Given a positive integer m, A is constant along open intervals m < x < m + 1, and has a jump discontinuity at each integer for which a(m) ≠ 0. Since such functions are often represented by series and integrals, to achieve pointwise convergence it is usual to define the value at the discontinuities as the average of the values to the left and right: Individual values of arithmetic functions may fluctuate wildly – as in most of the above examples. Summation functions "smooth out" these fluctuations. In some cases it may be possible to find asymptotic behaviour for the summation function for large x. A classical example of this phenomenon is given by the divisor summatory function, the summation function of d(n), the number of divisors of n: An average order of an arithmetic function is some simpler or better-understood function which has the same summation function asymptotically, and hence takes the same values "on average". We say that g is an average order of f if as x tends to infinity. The example above shows that d(n) has the average order log(n). Dirichlet convolution Given an arithmetic function a(n), let Fa(s), for complex s, be the function defined by the corresponding Dirichlet series (where it converges): Fa(s) is called a generating function of a(n). The simplest such series, corresponding to the constant function a(n) = 1 for all n, is ζ(s) the Riemann zeta function. The generating function of the Möbius function is the inverse of the zeta function: Consider two arithmetic functions a and b and their respective generating functions Fa(s) and Fb(s). 
The product Fa(s)Fb(s) can be computed as follows: It is a straightforward exercise to show that if c(n) is defined by then This function c is called the Dirichlet convolution of a and b, and is denoted by . A particularly important case is convolution with the constant function a(n) = 1 for all n, corresponding to multiplying the generating function by the zeta function: Multiplying by the inverse of the zeta function gives the Möbius inversion formula: If f is multiplicative, then so is g. If f is completely multiplicative, then g is multiplicative, but may or may not be completely multiplicative. Relations among the functions There are a great many formulas connecting arithmetical functions with each other and with the functions of analysis, especially powers, roots, and the exponential and log functions. The page divisor sum identities contains many more generalized and related examples of identities involving arithmetic functions. Here are a few examples: Dirichlet convolutions     where λ is the Liouville function.             Möbius inversion             Möbius inversion                         Möbius inversion             Möbius inversion             Möbius inversion           where λ is the Liouville function.             Möbius inversion Sums of squares For all     (Lagrange's four-square theorem). where the Kronecker symbol has the values There is a formula for r3 in the section on class numbers below. where .     where Define the function as That is, if n is odd, is the sum of the kth powers of the divisors of n, that is, and if n is even it is the sum of the kth powers of the even divisors of n minus the sum of the kth powers of the odd divisors of n.     Adopt the convention that Ramanujan's if x is not an integer. Divisor sum convolutions Here "convolution" does not mean "Dirichlet convolution" but instead refers to the formula for the coefficients of the product of two power series: The sequence is called the convolution or the Cauchy product of the sequences an and bn. These formulas may be proved analytically (see Eisenstein series) or by elementary methods.                     where τ(n) is Ramanujan's function.     Since σk(n) (for natural number k) and τ(n) are integers, the above formulas can be used to prove congruences for the functions. See Ramanujan tau function for some examples. Extend the domain of the partition function by setting       This recurrence can be used to compute p(n). Class number related Peter Gustav Lejeune Dirichlet discovered formulas that relate the class number h of quadratic number fields to the Jacobi symbol. An integer D is called a fundamental discriminant if it is the discriminant of a quadratic number field. This is equivalent to D ≠ 1 and either a) D is squarefree and D ≡ 1 (mod 4) or b) D ≡ 0 (mod 4), D/4 is squarefree, and D/4 ≡ 2 or 3 (mod 4). Extend the Jacobi symbol to accept even numbers in the "denominator" by defining the Kronecker symbol: Then if D < −4 is a fundamental discriminant There is also a formula relating r3 and h. Again, let D be a fundamental discriminant, D < −4. Then Prime-count related Let   be the nth harmonic number. Then   is true for every natural number n if and only if the Riemann hypothesis is true.     The Riemann hypothesis is also equivalent to the statement that, for all n > 5040, (where γ is the Euler–Mascheroni constant). This is Robin's theorem. Menon's identity In 1965 P Kesava Menon proved This has been generalized by a number of mathematicians. For example, B. Sury N. 
Rao where a1, a2, ..., as are integers, gcd(a1, a2, ..., as, n) = 1. László Fejes Tóth where m1 and m2 are odd, m = lcm(m1, m2). In fact, if f is any arithmetical function where stands for Dirichlet convolution. Miscellaneous Let m and n be distinct, odd, and positive. Then the Jacobi symbol satisfies the law of quadratic reciprocity: Let D(n) be the arithmetic derivative. Then the logarithmic derivative See Arithmetic derivative for details. Let λ(n) be Liouville's function. Then     and     Let λ(n) be Carmichael's function. Then     Further, See Multiplicative group of integers modulo n and Primitive root modulo n.                   Note that             Compare this with             where τ(n) is Ramanujan's function. First 100 values of some arithmetic functions
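As a closing illustration of the Dirichlet convolution and the Möbius inversion formula discussed above, the short Python sketch below checks two standard identities by brute force; the helper names are invented for this example.

```python
from math import gcd

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def mobius(n):
    """mu(n): 0 if a squared prime divides n, otherwise (-1)^(number of prime factors)."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # n is divisible by p squared
            count += 1
        p += 1
    if n > 1:
        count += 1
    return -1 if count % 2 else 1

def dirichlet(a, b, n):
    """(a * b)(n): the Dirichlet convolution, summed over all divisors d of n."""
    return sum(a(d) * b(n // d) for d in divisors(n))

one = lambda n: 1
identity_map = lambda n: n
euler_phi = lambda n: sum(1 for m in range(1, n + 1) if gcd(m, n) == 1)

# Convolving mu with the constant function 1 yields 1 at n = 1 and 0 elsewhere,
# which is exactly what makes Moebius inversion work.
print([dirichlet(mobius, one, n) for n in range(1, 11)])
# [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

# Since the divisor sum of phi equals n, inversion gives phi = mu * identity.
print(all(dirichlet(mobius, identity_map, n) == euler_phi(n) for n in range(1, 60)))
# True
```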
Mathematics
Functions: General
null
3262
https://en.wikipedia.org/wiki/Agar
Agar
Agar ( or ), or agar-agar, is a jelly-like substance consisting of polysaccharides obtained from the cell walls of some species of red algae, primarily from “ogonori” and “tengusa”. As found in nature, agar is a mixture of two components, the linear polysaccharide agarose and a heterogeneous mixture of smaller molecules called agaropectin. It forms the supporting structure in the cell walls of certain species of algae and is released on boiling. These algae are known as agarophytes, belonging to the Rhodophyta (red algae) phylum. The processing of food-grade agar removes the agaropectin, and the commercial product is essentially pure agarose. Agar has been used as an ingredient in desserts throughout Asia and also as a solid substrate to contain culture media for microbiological work. Agar can be used as a laxative; an appetite suppressant; a vegan substitute for gelatin; a thickener for soups; in fruit preserves, ice cream, and other desserts; as a clarifying agent in brewing; and for sizing paper and fabrics. Etymology The word agar comes from agar-agar, the Malay name for red algae (Gigartina, Eucheuma, Gracilaria) from which the jelly is produced. It is also known as Kanten () (from the phrase kan-zarashi tokoroten () or "cold-exposed agar"), Japanese isinglass, China grass, Ceylon moss or Jaffna moss. Gracilaria edulis or its synonym G. lichenoides is specifically referred to as agal-agal or Ceylon agar. History Macroalgae have been used widely as food by coastal cultures, especially in Southeast Asia. In the Philippines, Gracilaria, known as gulaman (also guraman, gar-garao, or gulaman dagat, among other names) in Tagalog, have been harvested and used as food for centuries, eaten both fresh or sun-dried and turned into jellies. The earliest historical attestation is from the Vocabulario de la lengua tagala (1754) by the Jesuit priests Juan de Noceda and Pedro de Sanlucar, where golaman or gulaman was defined as "una yerva, de que se haze conserva a modo de Halea, naze en la mar" ("a herb, from which a jam-like preserve is made, grows in the sea"), with an additional entry for guinolaman to refer to food made with the jelly. Carrageenan, derived from gusô (Eucheuma spp.), which also congeals into a gel-like texture is also used similarly among the Visayan peoples and have been recorded in the even earlier Diccionario De La Lengua Bisaya, Hiligueina y Haraia de la isla de Panay y Sugbu y para las demas islas (c.1637) of the Augustinian missionary Alonso de Méntrida . In the book, Méntrida describes gusô as being cooked until it melts, and then allowed to congeal into a sour dish. In Ambon Island in the Maluku Islands of Indonesia, agar is extracted from Graciliaria and eaten as a type of pickle or a sauce. Jelly seaweeds were also favoured and foraged by Malay communities living on the coasts of the Riau Archipelago and Singapore in Southeast Asia for centuries. 19th century records indicate that dried Graciliaria were one of the bulk exports of British Malaya to China. Poultices made from agar were also used for swollen knee joints and sores in Johore and Singapore. The application of agar as a food additive in Japan is alleged to have been discovered in 1658 by Mino Tarōzaemon (), an innkeeper in current Fushimi-ku, Kyoto who, according to legend, was said to have discarded surplus seaweed soup (Tokoroten) and noticed that it gelled later after a winter night's freezing. 
Agar was first subjected to chemical analysis in 1859 by the French chemist Anselme Payen, who had obtained agar from the marine algae Gelidium corneum. Beginning in the late 19th century, agar began to be used as a solid medium for growing various microbes. Agar was first described for use in microbiology in 1882 by the German microbiologist Walther Hesse, an assistant working in Robert Koch's laboratory, on the suggestion of his wife Fanny Hesse. Agar quickly supplanted gelatin as the base of microbiological media, due to its higher melting temperature, allowing microbes to be grown at higher temperatures without the media liquefying. With its newfound use in microbiology, agar production quickly increased. This production centered on Japan, which produced most of the world's agar until World War II. However, with the outbreak of World War II, many nations were forced to establish domestic agar industries in order to continue microbiological research. Around the time of World War II, approximately 2,500 tons of agar were produced annually. By the mid-1970s, production worldwide had increased dramatically to approximately 10,000 tons each year. Since then, production of agar has fluctuated due to unstable and sometimes over-utilized seaweed populations. Chemical composition Agar consists of a mixture of two polysaccharides: agarose and agaropectin, with agarose making up about 70% of the mixture, while agaropectin makes about 30% of it. Agarose is a linear polymer, made up of repeating units of agarobiose, a disaccharide made up of D-galactose and 3,6-anhydro-L-galactopyranose. Agaropectin is a heterogeneous mixture of smaller molecules that occur in lesser amounts, and is made up of alternating units of D-galactose and L-galactose heavily modified with acidic side-groups, such as sulfate, glucuronate, and pyruvate. Physical properties Agar exhibits a phenomenon known as hysteresis whereby, when mixed with water, it solidifies and forms a gel below about , which is called the gel point, and melts above , which is the melting point. Hysteresis is the property of having a difference between the gel point and melting point temperatures. This property lends a suitable balance between easy melting and good gel stability at relatively high temperatures. Since many scientific applications require incubation at temperatures close to human body temperature (37 °C), agar is more appropriate than other solidifying agents that melt at this temperature, such as gelatin. Uses Culinary Agar-agar is a natural vegetable gelatin counterpart. It is white and semi-translucent when sold in packages as washed and dried strips or in powdered form. It can be used to make jellies, puddings, and custards. When making jelly, it is boiled in water until the solids dissolve. Sweetener, flavoring, coloring, fruits and or vegetables are then added, and the liquid is poured into molds to be served as desserts and vegetable aspics or incorporated with other desserts such as a layer of jelly in a cake. Agar-agar is approximately 80% dietary fiber, so it can serve as an intestinal regulator. Its bulking quality has been behind fad diets in Asia, for example the kanten (the Japanese word for agar-agar) diet. Once ingested, kanten triples in size and absorbs water. This results in the consumers feeling fuller. Asian culinary One use of agar in Japanese cuisine is in anmitsu, a dessert made of small cubes of agar jelly and served in a bowl with various fruits or other ingredients. 
It is also the main ingredient in mizu yōkan, another popular Japanese food. In Philippine cuisine, it is used to make the jelly bars in the various gulaman refreshments like sago't gulaman, samalamig, or desserts such as buko pandan, agar flan, halo-halo, fruit cocktail jelly, and the black and red gulaman used in various fruit salads. In Vietnamese cuisine, jellies made of flavored layers of agar-agar, called thạch, are a popular dessert, and are often made in ornate molds for special occasions. In Indian cuisine, agar is used for making desserts. In Burmese cuisine, a sweet jelly known as kyauk kyaw is made from agar. Agar jelly is widely used in Taiwanese bubble tea. Other culinary It can be used as addition to (or as a replacement for) pectin in jelly, jam, or marmalade, as a substitute to gelatin for its superior gelling properties, and as a strengthening ingredient in souffles and custards. Another use of agar-agar is in a Russian dish ptich'ye moloko (bird's milk), a rich jellified custard (or soft meringue) used as a cake filling or chocolate-glazed as individual sweets. Agar-agar may also be used as the gelling agent in gel clarification, a culinary technique used to clarify stocks, sauces, and other liquids. Mexico has traditional candies made out of Agar gelatin, most of them in colorful, half-circle shapes that resemble a melon or watermelon fruit slice, and commonly covered with sugar. They are known in Spanish as Dulce de Agar (Agar sweets) Agar-agar is an allowed nonorganic/nonsynthetic additive used as a thickener, gelling agent, texturizer, moisturizer, emulsifier, flavor enhancer, and absorbent in certified organic foods. Microbiology Agar plate An agar plate or Petri dish is used to provide a growth medium using a mix of agar and other nutrients in which microorganisms, including bacteria and fungi, can be cultured and observed under the microscope. Agar is indigestible for many organisms so that microbial growth does not affect the gel used and it remains stable. Agar is typically sold commercially as a powder that can be mixed with water and prepared similarly to gelatin before use as a growth medium. Nutrients are typically added to meet the nutritional needs of the microbes organism, the formulations of which may be "undefined" where the precise composition is unknown, or "defined" where the exact chemical composition is known. Agar is often dispensed using a sterile media dispenser. Different algae produce various types of agar. Each agar has unique properties that suit different purposes. Because of the agarose component, the agar solidifies. When heated, agarose has the potential to melt and then solidify. Because of this property, they are referred to as "physical gels". In contrast, polyacrylamide polymerization is an irreversible process, and the resulting products are known as chemical gels. There are a variety of different types of agar that support the growth of different microorganisms. A nutrient agar may be permissive, allowing for the cultivation of any non-fastidious microorganisms; a commonly-used nutrient agar for bacteria is the Luria Bertani (LB) agar which contains lysogeny broth, a nutrient-rich medium used for bacterial growth. Additionally, 2216 Marine Broth (MB) agar, with high salt content, is optimized for growing heterotrophic marine bacteria like those of the Vibrio genus, while Terrific Broth (TB) agar is used to non-selectively culture high yields of the bacterium E. coli. 
More generally, enriched media is an agar variety that is infused with the necessary nutrients required by fastidious organisms to grow. Despite the large diversity of agar mediums, yeast extract is a common ingredient across all varieties as it is a macronutrient that provides a nitrogen source for all bacterial cell types. Other fastidious organisms may require the addition of different biological fluids such as horse or sheep blood, serum, egg yolk, and so on. Agar plates can also be selective, and can be used to promote the growth of bacteria of interest while inhibiting others. A variety of chemicals may be added to create an environment favourable for specific types of bacteria or bacteria with certain properties, but not conducive for growth of others. For example, antibiotics may be added in cloning experiments whereby bacteria with antibiotic-resistant plasmid are selected. In addition to antibiotic treated agar, other selective and indicator agar plates include TCBS agar and MacConkey agar. Thiosulfate citrate bile salts sucrose (TCBS) agar is used to differentiate Vibrio species based on their sucrose metabolism, since only some will metabolize the sucrose in the plate and change its pH. Indicator dyes included in the gel will display a visual change of the pH by changing the gel color from green to yellow. MacConkey agar contains bile salts and crystal violet to selectively grow gram-negative bacteria and differentiate between species using pH-indicator dyes that demonstrate lactose metabolism properties. Motility assays As a gel, an agar or agarose medium is porous and therefore can be used to measure microorganism motility and mobility. The gel's porosity is directly related to the concentration of agarose in the medium, so various levels of effective viscosity (from the cell's "point of view") can be selected, depending on the experimental objectives. A common identification assay involves culturing a sample of the organism deep within a block of nutrient agar. Cells will attempt to grow within the gel structure. Motile species will be able to migrate, albeit slowly, throughout the gel, and infiltration rates can then be visualized, whereas non-motile species will show growth only along the now-empty path introduced by the invasive initial sample deposition. Another setup commonly used for measuring chemotaxis and chemokinesis utilizes the under-agarose cell migration assay, whereby a layer of agarose gel is placed between a cell population and a chemoattractant. As a concentration gradient develops from the diffusion of the chemoattractant into the gel, various cell populations requiring different stimulation levels to migrate can then be visualized over time using microphotography as they tunnel upward through the gel against gravity along the gradient. Plant biology Research grade agar is used extensively in plant biology as it is optionally supplemented with a nutrient and/or vitamin mixture that allows for seedling germination in Petri dishes under sterile conditions (given that the seeds are sterilized as well). Nutrient and/or vitamin supplementation for Arabidopsis thaliana is standard across most experimental conditions. Murashige & Skoog (MS) nutrient mix and Gamborg's B5 vitamin mix in general are used. A 1.0% agar/0.44% MS+vitamin dH2O solution is suitable for growth media between normal growth temps. When using agar, within any growth medium, it is important to know that the solidification of the agar is pH-dependent. 
The optimal range for solidification is between 5.4 and 5.7. Usually, the application of potassium hydroxide is needed to increase the pH to this range. A general guideline is about 600 μl 0.1M KOH per 250 ml GM. This entire mixture can be sterilized using the liquid cycle of an autoclave. This medium nicely lends itself to the application of specific concentrations of phytohormones etc. to induce specific growth patterns in that one can easily prepare a solution containing the desired amount of hormone, add it to the known volume of GM, and autoclave to both sterilize and evaporate off any solvent that may have been used to dissolve the often-polar hormones. This hormone/GM solution can be spread across the surface of Petri dishes sown with germinated and/or etiolated seedlings. Experiments with the moss Physcomitrella patens, however, have shown that choice of the gelling agent – agar or Gelrite – does influence phytohormone sensitivity of the plant cell culture. Other uses Agar is used: As an impression material in dentistry. As a medium to precisely orient the tissue specimen and secure it by agar pre-embedding (especially useful for small endoscopy biopsy specimens) for histopathology processing To make salt bridges and gel plugs for use in electrochemistry. In formicariums as a transparent substitute for sand and a source of nutrition. As a natural ingredient in forming modeling clay for young children to play with. As an allowed biofertilizer component in organic farming. As a substrate for precipitin reactions in immunology. At different times as a substitute for gelatin in photographic emulsions, arrowroot in preparing silver paper and as a substitute for fish glue in resist etching. As an MRI elastic gel phantom to mimic tissue mechanical properties in Magnetic Resonance Elastography Gelidium agar is used primarily for bacteriological plates. Gracilaria agar is used mainly in food applications. In 2016, AMAM, a Japanese company, developed a prototype for Agar-based commercial packaging system called Agar Plasticity, intended as a replacement for oil-based plastic packaging.
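Returning briefly to the plant-biology growth medium described earlier, the rule-of-thumb quantities given there (1.0% agar, 0.44% MS plus vitamins, and roughly 600 μl of 0.1 M KOH per 250 ml of medium) can be scaled to other batch sizes with a small helper. This is only an illustrative sketch: it assumes the percentages are weight per volume, and the function name is invented.

```python
def growth_medium_amounts(volume_ml):
    """Scale the guideline quantities from the plant-biology section to an
    arbitrary batch volume (percentages treated as grams per 100 ml)."""
    agar_g = 1.0 * volume_ml / 100          # 1.0 % w/v agar
    ms_vitamins_g = 0.44 * volume_ml / 100  # 0.44 % w/v MS + vitamin mix
    koh_ul = 600 * volume_ml / 250          # ~600 ul of 0.1 M KOH per 250 ml
    return agar_g, ms_vitamins_g, koh_ul

print(growth_medium_amounts(250))   # ~(2.5, 1.1, 600.0)
print(growth_medium_amounts(1000))  # ~(10.0, 4.4, 2400.0)
```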
Biology and health sciences
Carbohydrates
Biology
3263
https://en.wikipedia.org/wiki/Acid%20rain
Acid rain
Acid rain is rain or any other form of precipitation that is unusually acidic, meaning that it has elevated levels of hydrogen ions (low pH). Most water, including drinking water, has a near-neutral pH between 6.5 and 8.5, but acid rain has a pH level lower than this, ranging from 4–5 on average. The more acidic the acid rain is, the lower its pH is. Acid rain can have harmful effects on plants, aquatic animals, and infrastructure. Acid rain is caused by emissions of sulfur dioxide and nitrogen oxide, which react with the water molecules in the atmosphere to produce acids. Acid rain has been shown to have adverse impacts on forests, freshwaters, soils, microbes, insects and aquatic life-forms. In ecosystems, persistent acid rain reduces tree bark durability, leaving flora more susceptible to environmental stressors such as drought, heat/cold and pest infestation. Acid rain is also capable of degrading soil composition by stripping it of nutrients such as calcium and magnesium, which play a role in plant growth and in maintaining healthy soil. In terms of human infrastructure, acid rain also causes paint to peel, corrosion of steel structures such as bridges, and weathering of stone buildings and statues, as well as having impacts on human health. Some governments, including those in Europe and North America, have made efforts since the 1970s to reduce the release of sulfur dioxide and nitrogen oxide into the atmosphere through air pollution regulations. These efforts have had positive results due to the widespread research on acid rain starting in the 1960s and the publicized information on its harmful effects. The main sources of sulfur and nitrogen compounds that result in acid rain are anthropogenic, but nitrogen oxides can also be produced naturally by lightning strikes and sulfur dioxide is produced by volcanic eruptions. Definition "Acid rain" is rain with a pH less than 5. "Clean" or unpolluted rain has a pH greater than 5 but still less than pH = 7 owing to carbonic acid, which forms when atmospheric carbon dioxide dissolves in the water according to the following reactions: CO2 + H2O ⇌ H2CO3 (carbonic acid) and H2CO3 ⇌ H+ + HCO3−. A variety of natural and human-made sources contribute to the acidity. For example, nitric acid is produced by electric discharge in the atmosphere, such as lightning. The usual anthropogenic sources are sulfur dioxide and nitrogen oxide. They react with water (as does carbon dioxide) to give solutions with pH < 5. Occasional pH readings in rain and fog water of well below 2.4 have been reported in industrialized areas. History Acid rain was first systematically studied in Europe in the 1960s and in the United States and Canada in the following decade. In Europe The corrosive effect of polluted, acidic city air on limestone and marble was noted in the 17th century by John Evelyn, who remarked upon the poor condition of the Arundel marbles. Since the Industrial Revolution, emissions of sulfur dioxide and nitrogen oxides into the atmosphere have increased. In 1852, Robert Angus Smith was the first to show the relationship between acid rain and atmospheric pollution in Manchester, England. Smith coined the term "acid rain" in 1872. In the late 1960s, scientists began widely observing and studying the phenomenon. At first, the main focus in this research lay on local effects of acid rain. Waldemar Christofer Brøgger was the first to acknowledge long-distance transportation of pollutants crossing borders from the United Kingdom to Norway – a problem systematically studied by Brynjulf Ottar in the 1970s.
Ottar's work was strongly influenced by Swedish soil scientist Svante Odén, who had drawn widespread attention to Europe's acid rain problem in popular newspapers and wrote a landmark paper on the subject in 1968. In the United States The earliest report about acid rain in the United States came from chemical evidence gathered from Hubbard Brook Valley; public awareness of acid rain in the US increased in the 1970s after The New York Times reported on these findings. In 1972, a group of scientists, including Gene Likens, discovered the rain that was deposited at White Mountains of New Hampshire was acidic. The pH of the sample was measured to be 4.03 at Hubbard Brook. The Hubbard Brook Ecosystem Study followed up with a series of research studies that analyzed the environmental effects of acid rain. The alumina from soils neutralized acid rain that mixed with stream water at Hubbard Brook. The result of this research indicated that the chemical reaction between acid rain and aluminium leads to an increasing rate of soil weathering. Experimental research examined the effects of increased acidity in streams on ecological species. In 1980, scientists modified the acidity of Norris Brook, New Hampshire, and observed the change in species' behaviors. There was a decrease in species diversity, an increase in community dominants, and a reduction in the food web complexity. In 1980, the US Congress passed an Acid Deposition Act. This Act established an 18-year assessment and research program under the direction of the National Acidic Precipitation Assessment Program (NAPAP). NAPAP enlarged a network of monitoring sites to determine how acidic precipitation was, seeking to determine long-term trends, and established a network for dry deposition. Using a statistically based sampling design, NAPAP quantified the effects of acid rain on a regional basis by targeting research and surveys to identify and quantify the impact of acid precipitation on freshwater and terrestrial ecosystems. NAPAP also assessed the effects of acid rain on historical buildings, monuments, and building materials. It also funded extensive studies on atmospheric processes and potential control programs. From the start, policy advocates from all sides attempted to influence NAPAP activities to support their particular policy advocacy efforts, or to disparage those of their opponents. For the US Government's scientific enterprise, a significant impact of NAPAP were lessons learned in the assessment process and in environmental research management to a relatively large group of scientists, program managers, and the public. In 1981, the National Academy of Sciences was looking into research about the controversial issues regarding acid rain. President Ronald Reagan dismissed the issues of acid rain until his personal visit to Canada and confirmed that the Canadian border suffered from the drifting pollution from smokestacks originating in the US Midwest. Reagan honored the agreement to Canadian Prime Minister Pierre Trudeau's enforcement of anti-pollution regulation. In 1982, Reagan commissioned William Nierenberg to serve on the National Science Board. Nierenberg selected scientists including Gene Likens to serve on a panel to draft a report on acid rain. In 1983, the panel of scientists came up with a draft report, which concluded that acid rain is a real problem and solutions should be sought. 
White House Office of Science and Technology Policy reviewed the draft report and sent Fred Singer's suggestions of the report, which cast doubt on the cause of acid rain. The panelists revealed rejections against Singer's positions and submitted the report to Nierenberg in April. In May 1983, the House of Representatives voted against legislation controlling sulfur emissions. There was a debate about whether Nierenberg delayed the release of the report. Nierenberg denied the saying about his suppression of the report and stated that it was withheld after the House's vote because it was not ready to be published. In 1991, the US National Acid Precipitation Assessment Program (NAPAP) provided its first assessment of acid rain in the United States. It reported that 5% of New England Lakes were acidic, with sulfates being the most common problem. They noted that 2% of the lakes could no longer support Brook Trout, and 6% of the lakes were unsuitable for the survival of many minnow species. Subsequent Reports to Congress have documented chemical changes in soil and freshwater ecosystems, nitrogen saturation, soil nutrient decreases, episodic acidification, regional haze, and damage to historical monuments. Meanwhile, in 1990, the US Congress passed a series of amendments to the Clean Air Act. Title IV of these amendments established a cap and trade system designed to control emissions of sulfur dioxide and nitrogen oxides. Both these emissions proved to cause a significant problem for U.S. citizens and their access to healthy, clean air. Title IV called for a total reduction of about 10 million tons of SO2 emissions from power plants, close to a 50% reduction. It was implemented in two phases. Phase I began in 1995 and limited sulfur dioxide emissions from 110 of the largest power plants to 8.7 million tons of sulfur dioxide. One power plant in New England (Merrimack) was in Phase I. Four other plants (Newington, Mount Tom, Brayton Point, and Salem Harbor) were added under other program provisions. Phase II began in 2000 and affects most of the power plants in the country. During the 1990s, research continued. On March 10, 2005, the EPA issued the Clean Air Interstate Rule (CAIR). This rule provides states with a solution to the problem of power plant pollution that drifts from one state to another. CAIR will permanently cap emissions of SO2 and NOx in the eastern United States. When fully implemented, CAIR will reduce SO2 emissions in 28 eastern states and the District of Columbia by over 70% and NOx emissions by over 60% from 2003 levels. Overall, the program's cap and trade program has been successful in achieving its goals. Since the 1990s, SO2 emissions have dropped 40%, and according to the Pacific Research Institute, acid rain levels have dropped 65% since 1976. Conventional regulation was used in the European Union, which saw a decrease of over 70% in SO2 emissions during the same period. In 2007, total SO2 emissions were 8.9 million tons, achieving the program's long-term goal ahead of the 2010 statutory deadline. In 2007 the EPA estimated that by 2010, the overall costs of complying with the program for businesses and consumers would be $1 billion to $2 billion a year, only one-fourth of what was initially predicted. Forbes says: "In 2010, by which time the cap and trade system had been augmented by the George W. Bush administration's Clean Air Interstate Rule, SO2 emissions had fallen to 5.1 million tons." 
The term citizen science can be traced back as far as January 1989 to a campaign by the Audubon Society to measure acid rain. Scientist Muki Haklay cites in a policy report for the Wilson Center entitled 'Citizen Science and Policy: A European Perspective' a first use of the term 'citizen science' by R. Kerson in the magazine MIT Technology Review from January 1989. Quoting from the Wilson Center report: "The new form of engagement in science received the name "citizen science". The first recorded example of using the term is from 1989, describing how 225 volunteers across the US collected rain samples to assist the Audubon Society in an acid-rain awareness-raising campaign. The volunteers collected samples, checked for acidity, and reported to the organization. The information was then used to demonstrate the full extent of the phenomenon." In Canada Canadian Harold Harvey was among the first to research a "dead" lake. In 1971, he and R. J. Beamish published a report, "Acidification of the La Cloche Mountain Lakes", documenting the gradual deterioration of fish stocks in 60 lakes in Killarney Park in Ontario, which they had been studying systematically since 1966. In the 1970s and 80s, acid rain was a major topic of research at the Experimental Lakes Area (ELA) in Northwestern Ontario, Canada. Researchers added sulfuric acid to whole lakes in controlled ecosystem experiments to simulate the effects of acid rain. Because its remote conditions allowed for whole-ecosystem experiments, research at the ELA showed that the effect of acid rain on fish populations started at concentrations much lower than those observed in laboratory experiments. In the context of a food web, fish populations crashed earlier than when acid rain had direct toxic effects to the fish because the acidity led to crashes in prey populations (e.g. mysids). As experimental acid inputs were reduced, fish populations and lake ecosystems recovered at least partially, although invertebrate populations have still not completely returned to the baseline conditions. This research showed both that acidification was linked to declining fish populations and that the effects could be reversed if sulfuric acid emissions decreased, and influenced policy in Canada and the United States. In 1985, seven Canadian provinces (all except British Columbia, Alberta, and Saskatchewan) and the federal government signed the Eastern Canada Acid Rain Program. The provinces agreed to limit their combined sulfur dioxide emissions to 2.3 million tonnes by 1994. The Canada-US Air Quality Agreement was signed in 1991. In 1998, all federal, provincial, and territorial Ministers of Energy and Environment signed The Canada-Wide Acid Rain Strategy for Post-2000, which was designed to protect lakes that are more sensitive than those protected by earlier policies. In India Increased risk might be posed by the expected rise in total sulphur emissions from 4,400 kilotonnes (kt) in 1990 to 6,500 kt in 2000, 10,900 kt in 2010 and 18,500 in 2020. Emissions of chemicals leading to acidification The most important gas which leads to acidification is sulfur dioxide. Emissions of nitrogen oxides which are oxidized to form nitric acid are of increasing importance due to stricter controls on emissions of sulfur compounds. 70 Tg(S) per year in the form of SO2 comes from fossil fuel combustion and industry, 2.8 Tg(S) from wildfires, and 7–8 Tg(S) per year from volcanoes. 
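Taking the emission figures in the preceding sentence at face value, a quick back-of-the-envelope calculation in Python shows how dominant the fossil-fuel and industrial source is; the numbers below are simply those quoted above.

```python
# Annual sulfur emissions quoted above, in Tg of sulfur per year.
fossil_and_industry = 70.0
wildfires = 2.8
volcanoes_low, volcanoes_high = 7.0, 8.0

natural_low = wildfires + volcanoes_low            # 9.8 Tg(S) per year
natural_high = wildfires + volcanoes_high          # 10.8 Tg(S) per year
total_high = fossil_and_industry + natural_high    # 80.8 Tg(S) per year

print(f"Natural sources: {natural_low:.1f}-{natural_high:.1f} Tg(S) per year")
print(f"Fossil fuel and industry share: {fossil_and_industry / total_high:.0%}")  # about 87%
```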
Natural phenomena The principal natural phenomena that contribute acid-producing gases to the atmosphere are emissions from volcanoes. Thus, for example, fumaroles from the Laguna Caliente crater of Poás Volcano create extremely high amounts of acid rain and fog, with acidity as high as a pH of 2, clearing an area of any vegetation and frequently causing irritation to the eyes and lungs of inhabitants in nearby settlements. Acid-producing gases are also created by biological processes that occur on the land, in wetlands, and in the oceans. The major biological source of sulfur compounds is dimethyl sulfide. Nitric acid in rainwater is an important source of fixed nitrogen for plant life, and is also produced by electrical activity in the atmosphere such as lightning. Acidic deposits have been detected in glacial ice thousands of years old in remote parts of the globe. Human activity The principal cause of acid rain is sulfur and nitrogen compounds from human sources, such as electricity generation, animal agriculture, factories, and motor vehicles. These also include power plants, whose electric power generators account for a quarter of nitrogen oxides and two-thirds of sulfur dioxide within the atmosphere. Industrial acid rain is a substantial problem in China and Russia and areas downwind from them. These areas all burn sulfur-containing coal to generate heat and electricity. The problem of acid rain has not only increased with population and industrial growth, but has become more widespread. The use of tall smokestacks to reduce local pollution has contributed to the spread of acid rain by releasing gases into regional atmospheric circulation; dispersal from these taller stacks causes pollutants to be carried farther, causing widespread ecological damage. Often deposition occurs a considerable distance downwind of the emissions, with mountainous regions tending to receive the greatest deposition (because of their higher rainfall). An example of this effect is the low pH of rain which falls in Scandinavia. Regarding low pH and pH imbalances in relation to acid rain, levels under the pH value of 7 are considered acidic. Acid rain falls at a pH value of roughly 4, making it harmful for humans to consume. When precipitation with these low pH levels falls in specific regions, it affects not only the environment but also human health. With acidic pH levels in humans come hair loss, low urinary pH, severe mineral imbalances, constipation, and many cases of chronic disorders like fibromyalgia and basal cell carcinoma. Chemical process Combustion of fuels and smelting of some ores produce sulfur dioxide and nitrogen oxides. They are converted into sulfuric acid and nitric acid. In the gas phase sulfur dioxide is oxidized to sulfuric acid via the hydroxyl radical: SO2 + OH· → HOSO2·, followed by HOSO2· + O2 → HO2· + SO3 and SO3 + H2O → H2SO4. Nitrogen dioxide reacts with hydroxyl radicals to form nitric acid: NO2 + OH· → HNO3. The detailed mechanisms depend on the presence of water and traces of iron and manganese. A number of oxidants are capable of these reactions, including ozone, hydrogen peroxide, and oxygen. Acid deposition Wet deposition Wet deposition of acids occurs when any form of precipitation (rain, snow, and so on) removes acids from the atmosphere and delivers them to the Earth's surface. This can result from the deposition of acids produced in the raindrops (see aqueous phase chemistry above) or by the precipitation removing the acids either in clouds or below clouds. Wet removal of both gases and aerosols is of importance for wet deposition.
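Since the pH scale referred to throughout this article is logarithmic, a short illustrative calculation helps put the quoted values in perspective. The sketch below uses the standard definition pH = −log10 of the hydrogen-ion concentration (activity corrections are ignored), and the function names are invented.

```python
from math import log10

def ph_from_concentration(h_molar):
    """pH from the hydrogen-ion concentration in mol/L (rough illustration)."""
    return -log10(h_molar)

def concentration_from_ph(ph):
    return 10 ** (-ph)

# Each whole pH unit is a tenfold change in hydrogen-ion concentration:
clean_rain = concentration_from_ph(5.0)   # threshold quoted for "clean" rain
acid_rain = concentration_from_ph(4.0)    # typical acid rain in this article
acid_fog = concentration_from_ph(2.0)     # extreme fog readings near volcanoes
print(acid_rain / clean_rain)             # ~10: ten times more acidic
print(acid_fog / clean_rain)              # ~1000: a thousand times more acidic
print(ph_from_concentration(1e-4))        # ~4.0
```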
Dry deposition Acid deposition also occurs via dry deposition in the absence of precipitation. This can be responsible for as much as 20 to 60% of total acid deposition. It occurs when particles and gases stick to the ground, plants or other surfaces. Adverse effects Acid rain has been shown to have adverse impacts on forests, freshwaters and soils, killing insect and aquatic life-forms, as well as causing damage to buildings and having impacts on human health. Surface waters and aquatic animals Sulfuric acid and nitric acid have multiple impacts on aquatic ecosystems, including acidification, increased nitrogen and aluminium content, and alteration of biogeochemical processes. Both the lower pH and the higher aluminium concentrations in surface water that occur as a result of acid rain can cause damage to fish and other aquatic animals. At pH lower than 5 most fish eggs will not hatch, and lower pH can kill adult fish. As lakes and rivers become more acidic, biodiversity is reduced. Acid rain has eliminated insect life and some fish species, including the brook trout, in some lakes, streams, and creeks in geographically sensitive areas, such as the Adirondack Mountains of the United States. However, the extent to which acid rain contributes directly or indirectly via runoff from the catchment to lake and river acidity (i.e., depending on characteristics of the surrounding watershed) is variable. The United States Environmental Protection Agency's (EPA) website states: "Of the lakes and streams surveyed, acid rain caused acidity in 75% of the acidic lakes and about 50% of the acidic streams". Lakes hosted by silicate basement rocks are more acidic than lakes within limestone or other basement rocks with a carbonate composition (i.e. marble), due to buffering effects by carbonate minerals, even with the same amount of acid rain. Soils Soil biology and chemistry can be seriously damaged by acid rain. Some microbes are unable to tolerate changes to low pH and are killed. The enzymes of these microbes are denatured (changed in shape so they no longer function) by the acid. The hydronium ions of acid rain also mobilize toxins, such as aluminium, and leach away essential nutrients and minerals such as magnesium: 2 H+ (aq) + Mg2+ (clay) ⇌ 2 H+ (clay) + Mg2+ (aq) Soil chemistry can be dramatically changed when base cations, such as calcium and magnesium, are leached by acid rain, thereby affecting sensitive species, such as sugar maple (Acer saccharum). Soil acidification Impacts of acidic water and soil acidification on plants range from minor to, in most cases, major. Minor cases that do not kill the plant can be attributed to the plant being less susceptible to acidic conditions and/or the acid rain being less potent. However, even in minor cases, the plant will eventually die because the acidic water lowers the plant's natural pH. Acidic water enters the plant and causes important plant minerals to dissolve and be carried away, which ultimately causes the plant to die from lack of minerals for nutrition. In major cases, which are more extreme, the same process of damage occurs as in minor cases, namely the removal of essential minerals, but at a much quicker rate. Likewise, acid rain that falls on soil and on plant leaves causes drying of the waxy leaf cuticle, which in turn causes rapid water loss from the plant to the outside atmosphere and eventually results in the death of the plant.
Soil acidification can lead to a decline in soil microbes as a result of a change in pH, which has an adverse effect on plants because they depend on soil microbes to access nutrients. To see whether a plant is being affected by soil acidification, one can closely observe its leaves. If the leaves are green and look healthy, the soil pH is normal and acceptable for plant life. But if the leaves show yellowing between the veins, the plant is suffering from acidification and is unhealthy. Moreover, a plant suffering from soil acidification cannot photosynthesize effectively; the drying-out caused by acidic water can destroy chloroplast organelles. Without being able to photosynthesize, a plant cannot create nutrients for its own survival or oxygen for the survival of aerobic organisms, which affects most species on Earth, and ultimately the plant cannot survive. Forests and other vegetation Adverse effects may be indirectly related to acid rain, like the acid's effects on soil (see above) or high concentrations of gaseous precursors to acid rain. High-altitude forests are especially vulnerable as they are often surrounded by clouds and fog which are more acidic than rain. Plants are capable of adapting to acid rain. On Jinyun Mountain, Chongqing, plant species were seen adapting to new environmental conditions. The effects on the species ranged from beneficial to detrimental. With natural rainfall or mild acid rainfall, the biochemical and physiological characteristics of plant seedlings were enhanced. Once acidity increases to the point that the pH falls to the threshold of 3.5, the acid rain is no longer beneficial and begins to have negative effects. Acid rain can negatively impact photosynthesis in plant leaves: when leaves are exposed to a lower pH, photosynthesis declines along with the chlorophyll content. Acid rain can also deform leaves at a cellular level; examples include tissue scarring and changes to the stomatal, epidermal and mesophyll cells. Additional impacts of acid rain include a decline in the thickness of the cuticle on the leaf surface. Because acid rain damages leaves, it directly impairs a plant's ability to maintain a strong canopy cover, and a decline in canopy cover can leave plants more vulnerable to disease. Dead or dying trees often appear in areas impacted by acid rain. Acid rain causes aluminium to leach from the soil, posing risks to both plant and animal life. Furthermore, it strips the soil of critical minerals and nutrients necessary for tree growth. At higher altitudes, acidic fog and clouds can deplete nutrients from tree foliage, leading to discolored or dead leaves and needles. This depletion compromises the trees' ability to absorb sunlight, weakening them and diminishing their capacity to endure cold conditions. Other plants can also be damaged by acid rain, but the effect on food crops is minimized by the application of lime and fertilizers to replace lost nutrients. In cultivated areas, limestone may also be added to increase the ability of the soil to keep the pH stable, but this tactic is largely unusable in the case of wilderness lands. When calcium is leached from the needles of red spruce, these trees become less cold tolerant and exhibit winter injury and even death. Acid rain may also affect crop productivity through necrosis or changes to soil nutrients, which ultimately prevent plants from reaching maturity.
Ocean acidification Acid rain has a much less harmful effect on the oceans on a global scale, but it has an amplified impact in shallower coastal waters. Acid rain can cause the ocean's pH to fall, known as ocean acidification, making it more difficult for various coastal species to build the exoskeletons they need to survive. These coastal species are links in the ocean's food chain, and without them as a food source, other marine life dies as well. Coral's limestone skeleton is particularly sensitive to pH decreases, because the calcium carbonate, a core component of the limestone skeleton, dissolves in acidic (low pH) solutions. In addition to acidification, excess nitrogen inputs from the atmosphere promote increased growth of phytoplankton and other marine plants, which, in turn, may cause more frequent harmful algal blooms and eutrophication (the creation of oxygen-depleted "dead zones") in some parts of the ocean. Human health effects Acid rain can negatively impact human health, especially when people breathe in particles released from acid rain. The effects of acid rain on human health are complex and may be seen in several ways, such as respiratory issues from long-term exposure and indirect exposure through contaminated food and water sources. Nitrogen Dioxide Effects Exposure to air pollutants associated with acid rain, such as nitrogen dioxide (NO2), may have a negative impact on respiratory health. Water-soluble nitrogen dioxide accumulates in the tiny airways, where it is transformed into nitric and nitrous acids. These acids directly damage the epithelial cells lining the airways, which can result in pneumonia and pulmonary edema. Exposure to nitrogen dioxide also reduces the immune response by inhibiting the generation of inflammatory cytokines by alveolar macrophages in response to bacterial infection. In animal studies, the pollutant further reduces respiratory immunity by decreasing mucociliary clearance in the lower respiratory tract, which results in a reduced ability to clear respiratory infections. Sulfur Trioxide Effects The effects of sulfur trioxide and sulfuric acid are similar because both produce sulfuric acid on contact with the moist surfaces of the skin or respiratory system. More SO3 is taken in when breathing through the mouth than through the nose. When humans breathe in sulfur trioxide, small droplets of sulfuric acid form inside the body and travel down the respiratory tract towards the lungs, with the depth of penetration depending on the particle size. The effects of SO3 on the respiratory system include breathing difficulty, particularly in people with asthma. Sulfur trioxide is also highly corrosive and irritating to the skin, eyes, and gastrointestinal tract on direct exposure to sufficient concentrations or with long-term exposure. Ingesting concentrated sulfuric acid can burn the mouth and throat, erode a hole in the stomach, and be fatal; it also burns the skin on contact and makes the eyes water if it gets into them. Federal Government's recommendation Nitrogen Dioxides A 25 parts per million (ppm) maximum for nitric oxide in working air has been set by the Occupational Safety and Health Administration (OSHA) for an 8-hour workday and a 40-hour workweek. Additionally, OSHA has established a 5-ppm nitrogen dioxide exposure limit for 15 minutes in the workplace.
Sulfur Trioxide The not-to-exceed limits in air, water, soil, or food that are recommended by regulations are often based on levels that affect animals, and are then modified to help safeguard people. These not-to-exceed values can vary between federal bodies depending on whether they rely on different animal studies, assume different exposure lengths (e.g., an 8-hour workday versus a 24-hour day), or for other reasons. The amount of sulfur dioxide that can be emitted into the atmosphere is capped by the EPA. This reduces the quantity of sulfur dioxide in the air that turns into sulfur trioxide and sulfuric acid. Sulfuric acid concentrations in workroom air are restricted by OSHA to 1 mg/m3. Moreover, NIOSH advises a time-weighted average limit of 1 mg/m3. Anyone who knows they have been exposed to NO2 or SO3 should talk to their doctor, and should also check on the people around them, especially children. Other adverse effects Acid rain can damage buildings, historic monuments, and statues, especially those made of rocks, such as limestone and marble, that contain large amounts of calcium carbonate. Acids in the rain react with the calcium compounds in the stones to create gypsum, which then flakes off: CaCO3 (s) + H2SO4 (aq) → CaSO4 (s) + CO2 (g) + H2O (l) The effects of this are commonly seen on old gravestones, where acid rain can cause the inscriptions to become completely illegible. Acid rain also increases the corrosion rate of metals, in particular iron, steel, copper and bronze. Affected areas Places significantly impacted by acid rain around the globe include most of eastern Europe from Poland northward into Scandinavia, the eastern third of the United States, and southeastern Canada. Other affected areas include the southeastern coast of China and Taiwan. Prevention methods Technical solutions Many coal-fired power stations use flue-gas desulfurization (FGD) to remove sulfur-containing gases from their stack gases. For a typical coal-fired power station, FGD will remove 95% or more of the SO2 in the flue gases. A commonly used form of FGD is the wet scrubber. A wet scrubber is basically a reaction tower equipped with a fan that extracts hot smokestack gases from a power plant into the tower. Lime or limestone in slurry form is also injected into the tower to mix with the stack gases and combine with the sulfur dioxide present. The calcium carbonate of the limestone produces pH-neutral calcium sulfate that is physically removed from the scrubber. That is, the scrubber turns sulfur pollution into industrial sulfates. In some areas the sulfates are sold to chemical companies as gypsum when the purity of calcium sulfate is high. In others, they are placed in landfill. The effects of acid rain can last for generations, as the effects of pH level change can stimulate the continued leaching of undesirable chemicals into otherwise pristine water sources, killing off vulnerable insect and fish species and blocking efforts to restore native life. Fluidized bed combustion also reduces the amount of sulfur emitted by power production. Vehicle emissions control reduces emissions of nitrogen oxides from motor vehicles. International treaties International treaties on the long-range transport of atmospheric pollutants have been agreed upon by Western countries since the 1970s. Beginning in 1979, European countries convened to ratify general principles under the UNECE Convention on Long-Range Transboundary Air Pollution, whose purpose was to combat long-range transboundary air pollution.
The 1985 Helsinki Protocol on the Reduction of Sulfur Emissions under the Convention on Long-Range Transboundary Air Pollution furthered the results of the convention. Results of the treaty have already come to fruition, as evidenced by an approximate 40 percent drop in particulate matter in North America. The effectiveness of the Convention in combatting acid rain has inspired further acts of international commitment to prevent the proliferation of particulate matter. Canada and the US signed the Air Quality Agreement in 1991. Most European countries and Canada signed the treaties. Activity of the Long-Range Transboundary Air Pollution Convention remained dormant after 1999, when 27 countries convened to further reduce the effects of acid rain. In 2000, foreign cooperation to prevent acid rain was sparked in Asia for the first time. Ten diplomats from countries ranging throughout the continent convened to discuss ways to prevent acid rain. Following these discussions, the Acid Deposition Monitoring Network in East Asia (EANET) was established in 2001 as an intergovernmental initiative to provide science-based inputs for decision makers and promote international cooperation on acid deposition in East Asia. In 2023, the EANET member countries include Cambodia, China, Indonesia, Japan, Lao PDR, Malaysia, Mongolia, Myanmar, the Philippines, Republic of Korea, Russia, Thailand and Vietnam. Emissions trading In this regulatory scheme, every current polluting facility is given or may purchase on an open market an emissions allowance for each unit of a designated pollutant it emits. Operators can then install pollution control equipment, and sell portions of their emissions allowances they no longer need for their own operations, thereby recovering some of the capital cost of their investment in such equipment. The intention is to give operators economic incentives to install pollution controls. The first emissions trading market was established in the United States by enactment of the Clean Air Act Amendments of 1990. The overall goal of the Acid Rain Program established by the Act is to achieve significant environmental and public health benefits through reductions in emissions of sulfur dioxide (SO2) and nitrogen oxides (NOx), the primary causes of acid rain. To achieve this goal at the lowest cost to society, the program employs both regulatory and market based approaches for controlling air pollution.
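To make the market mechanism described above concrete, the hypothetical Python sketch below compares one facility's cost of installing pollution controls against simply buying allowances. All numbers (allowance price, abatement cost, emission quantities) are invented for illustration and do not come from the Acid Rain Program itself.

```python
# Hypothetical cap-and-trade decision for a single facility.
# All figures are assumed for illustration; they are not program data.
allowance_price = 200.0           # $ per ton of SO2, assumed market price of one allowance
abatement_cost = 150.0            # $ per ton of SO2 removed by installing a scrubber (assumed)
uncontrolled_emissions = 10_000   # tons of SO2 per year without controls
allowances_held = 6_000           # tons of SO2 the facility may emit under its allocation

excess = uncontrolled_emissions - allowances_held  # tons that must be covered somehow

cost_to_buy = excess * allowance_price    # cover the excess by purchasing allowances
cost_to_abate = excess * abatement_cost   # cover the excess by scrubbing it out

print(f"Buy allowances:   ${cost_to_buy:,.0f}")
print(f"Install controls: ${cost_to_abate:,.0f}")
# The cheaper option wins; a facility whose controls remove more than its excess
# can sell the allowances it no longer needs, recovering part of the capital cost.
```

In this sketch abatement is cheaper, so the facility scrubs its excess emissions; if the allowance price were lower than the abatement cost, buying allowances would be the rational choice, which is exactly the incentive structure the program relies on.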
Physical sciences
Precipitation
null
3292
https://en.wikipedia.org/wiki/Brass
Brass
Brass is an alloy of copper and zinc, in proportions which can be varied to achieve different colours and mechanical, electrical, acoustic and chemical properties, but copper typically has the larger proportion, generally 66% copper and 34% zinc. In use since prehistoric times, it is a substitutional alloy: atoms of the two constituents may replace each other within the same crystal structure. Brass is similar to bronze, a copper alloy that contains tin instead of zinc. Both bronze and brass may include small proportions of a range of other elements including arsenic, lead, phosphorus, aluminium, manganese and silicon. Historically, the distinction between the two alloys has been less consistent and clear, and increasingly museums use the more general term "copper alloy". Brass has long been a popular material for its bright gold-like appearance and is still used for drawer pulls and doorknobs. It has also been widely used to make sculpture and utensils because of its low melting point, high workability (both with hand tools and with modern turning and milling machines), durability, and electrical and thermal conductivity. Brasses with higher copper content are softer and more golden in colour; conversely those with less copper and thus more zinc are harder and more silvery in colour. Brass is still commonly used in applications where corrosion resistance and low friction are required, such as locks, hinges, gears, bearings, ammunition casings, zippers, plumbing, hose couplings, valves, SCUBA regulators, and electrical plugs and sockets. It is used extensively for musical instruments such as horns and bells. The composition of brass makes it a favorable substitute for copper in costume jewelry and fashion jewelry, as it exhibits greater resistance to corrosion. Brass is not as hard as bronze and so is not suitable for most weapons and tools. Nor is it suitable for marine uses, because the zinc reacts with minerals in salt water, leaving porous copper behind; marine brass, with added tin, avoids this, as does bronze. Brass is often used in situations in which it is important that sparks not be struck, such as in fittings and tools used near flammable or explosive materials. Properties Brass is more malleable than bronze or zinc. The relatively low melting point of brass (roughly 900 to 940 °C, depending on composition) and its flow characteristics make it a relatively easy material to cast. By varying the proportions of copper and zinc, the properties of the brass can be changed, allowing hard and soft brasses. The density of brass is approximately 8.4 to 8.73 g/cm3, depending on composition. Today, almost 90% of all brass alloys are recycled. Because brass is not ferromagnetic, ferrous scrap can be separated from it by passing the scrap near a powerful magnet. Brass scrap is melted and recast into billets that are extruded into the desired form and size. The general softness of brass means that it can often be machined without the use of cutting fluid, though there are exceptions to this. Aluminium makes brass stronger and more corrosion-resistant. Aluminium also causes a highly beneficial hard layer of aluminium oxide (Al2O3) to be formed on the surface that is thin, transparent, and self-healing. Tin has a similar effect and finds its use especially in seawater applications (naval brasses). Combinations of iron, aluminium, silicon, and manganese make brass wear- and tear-resistant. The addition of as little as 1% iron to a brass alloy will result in an alloy with a noticeable magnetic attraction.
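As a rough illustration of how composition drives properties, the Python sketch below estimates the density of a typical 66/34 brass from the densities of pure copper and zinc using a simple inverse rule of mixtures. This is a first-order approximation that ignores alloying effects, so measured densities of real brasses differ somewhat.

```python
# First-order density estimate for brass via an inverse rule of mixtures.
# Approximation only: it neglects the volume changes that occur on alloying.
DENSITY_G_PER_CM3 = {"Cu": 8.96, "Zn": 7.14}  # densities of the pure elements

def estimate_alloy_density(mass_fractions):
    """Estimate alloy density (g/cm3) from mass fractions of its constituents."""
    specific_volume = sum(frac / DENSITY_G_PER_CM3[el]
                          for el, frac in mass_fractions.items())
    return 1.0 / specific_volume

typical_brass = {"Cu": 0.66, "Zn": 0.34}
print(f"Estimated density: {estimate_alloy_density(typical_brass):.2f} g/cm3")
# Prints roughly 8.2-8.3 g/cm3 for a 66/34 brass; higher-copper brasses come out denser,
# consistent with the composition-dependent range quoted above.
```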
Brass will corrode in the presence of moisture, chlorides, acetates, ammonia, and certain acids. This often happens when the copper reacts with sulfur to form a brown and eventually black surface layer of copper sulfide which, if regularly exposed to slightly acidic water such as urban rainwater, can then oxidize in air to form a patina of green-blue copper carbonate. Depending on how the patina layer was formed, it may protect the underlying brass from further damage. Although copper and zinc have a large difference in electrical potential, the resulting brass alloy does not experience internalized galvanic corrosion because of the absence of a corrosive environment within the mixture. However, if brass is placed in contact with a more noble metal such as silver or gold in such an environment, the brass will corrode galvanically; conversely, if brass is in contact with a less-noble metal such as zinc or iron, the less noble metal will corrode and the brass will be protected. Lead content To enhance the machinability of brass, lead is often added in concentrations of about 2%. Since lead has a lower melting point than the other constituents of the brass, it tends to migrate towards the grain boundaries in the form of globules as it cools from casting. The pattern the globules form on the surface of the brass increases the available lead surface area which, in turn, affects the degree of leaching. In addition, cutting operations can smear the lead globules over the surface. These effects can lead to significant lead leaching from brasses of comparatively low lead content. In October 1999, the California State Attorney General sued 13 key manufacturers and distributors over lead content. In laboratory tests, state researchers found the average brass key, new or old, exceeded the California Proposition 65 limits by an average factor of 19, assuming handling twice a day. In April 2001 manufacturers agreed to reduce lead content to 1.5%, or face a requirement to warn consumers about lead content. Keys plated with other metals are not affected by the settlement, and may continue to use brass alloys with a higher percentage of lead content. Also in California, lead-free materials must be used for "each component that comes into contact with the wetted surface of pipes and pipe fittings, plumbing fittings and fixtures". On 1 January 2010, the maximum amount of lead in "lead-free brass" in California was reduced from 4% to 0.25% lead. Corrosion-resistant brass for harsh environments Dezincification-resistant (DZR or DR) brasses, sometimes referred to as CR (corrosion resistant) brasses, are used where there is a large corrosion risk and where normal brasses do not meet the requirements. Applications with high water temperatures, chlorides present or deviating water qualities (soft water) play a role. DZR-brass is used in water boiler systems. This brass alloy must be produced with great care, with special attention placed on a balanced composition and proper production temperatures and parameters to avoid long-term failures. An example of DZR brass is the C352 brass, with about 30% zinc, 61–63% copper, 1.7–2.8% lead, and 0.02–0.15% arsenic. The lead and arsenic significantly suppress the zinc loss. "Red brasses", a family of alloys with high copper proportion and generally less than 15% zinc, are more resistant to zinc loss. One of the metals called "red brass" is 85% copper, 5% tin, 5% lead, and 5% zinc. 
Copper alloy C23000, which is also known as "red brass", contains 84–86% copper, 0.05% each iron and lead, with the balance being zinc. Another such material is gunmetal, from the family of red brasses. Gunmetal alloys contain roughly 88% copper, 8–10% tin, and 2–4% zinc. Lead can be added for ease of machining or for bearing alloys. "Naval brass", for use in seawater, contains 40% zinc but also 1% tin. The tin addition suppresses zinc-leaching. The NSF International requires brasses with more than 15% zinc, used in piping and plumbing fittings, to be dezincification-resistant. Use in musical instruments The high malleability and workability, relatively good resistance to corrosion, and traditionally attributed acoustic properties of brass, have made it the usual metal of choice for construction of musical instruments whose acoustic resonators consist of long, relatively narrow tubing, often folded or coiled for compactness; silver and its alloys, and even gold, have been used for the same reasons, but brass is the most economical choice. Collectively known as brass instruments, or simply 'the brass', these include the trombone, tuba, trumpet, cornet, flugelhorn, baritone horn, euphonium, tenor horn, and French horn, and many other "horns", many in variously sized families, such as the saxhorns. Other wind instruments may be constructed of brass or other metals, and indeed most modern student-model flutes and piccolos are made of some variety of brass, usually a cupronickel alloy similar to nickel silver (also known as German silver). Clarinets, especially low clarinets such as the contrabass and subcontrabass, are sometimes made of metal because of limited supplies of the dense, fine-grained tropical hardwoods traditionally preferred for smaller woodwinds. For the same reason, some low clarinets, bassoons and contrabassoons feature a hybrid construction, with long, straight sections of wood, and curved joints, neck, and/or bell of metal. The use of metal also avoids the risks of exposing wooden instruments to changes in temperature or humidity, which can cause sudden cracking. Even though the saxophones and sarrusophones are classified as woodwind instruments, they are normally made of brass for similar reasons, and because their wide, conical bores and thin-walled bodies are more easily and efficiently made by forming sheet metal than by machining wood. The keywork of most modern woodwinds, including wooden-bodied instruments, is also usually made of an alloy such as nickel silver. Such alloys are stiffer and more durable than the brass used to construct the instrument bodies, but still workable with simple hand tools—a boon to quick repairs. The mouthpieces of both brass instruments and, less commonly, woodwind instruments are often made of brass among other metals as well. Next to the brass instruments, the most notable use of brass in music is in various percussion instruments, most notably cymbals, gongs, and orchestral (tubular) bells (large "church" bells are normally made of bronze). Small handbells and "jingle bells" are also commonly made of brass. The harmonica is a free reed aerophone, also often made from brass. In organ pipes of the reed family, brass strips (called tongues) are used as the reeds, which beat against the shallot (or beat "through" the shallot in the case of a "free" reed). Although not part of the brass section, snare drums are also sometimes made of brass. 
Some parts on electric guitars are also made from brass, especially inertia blocks on tremolo systems for its tonal properties, and for string nuts and saddles for both tonal properties and its low friction. Germicidal and antimicrobial applications The bactericidal properties of brass have been observed for centuries, particularly in marine environments where it prevents biofouling. Depending upon the type and concentration of pathogens and the medium they are in, brass kills these microorganisms within a few minutes to hours of contact. A large number of independent studies confirm this antimicrobial effect, even against antibiotic-resistant bacteria such as MRSA and VRSA. The mechanisms of antimicrobial action by copper and its alloys, including brass, are a subject of intense and ongoing investigation. Season cracking Brass is susceptible to stress corrosion cracking, especially from ammonia or substances containing or releasing ammonia. The problem is sometimes known as season cracking after it was first discovered in brass cartridges used for rifle ammunition during the 1920s in the British Indian Army. The problem was caused by high residual stresses from cold forming of the cases during manufacture, together with chemical attack from traces of ammonia in the atmosphere. The cartridges were stored in stables and the ammonia concentration rose during the hot summer months, thus initiating brittle cracks. The problem was resolved by annealing the cases, and storing the cartridges elsewhere. Types Other phases than α, β and γ are ε, a hexagonal intermetallic CuZn3, and η, a solid solution of copper in zinc. Brass alloys History Although forms of brass have been in use since prehistory, its true nature as a copper-zinc alloy was not understood until the post-medieval period because the zinc vapor which reacted with copper to make brass was not recognized as a metal. The King James Bible makes many references to "brass" to translate "nechosheth" (bronze or copper) from Hebrew to English. The earliest brasses may have been natural alloys made by smelting zinc-rich copper ores. By the Roman period brass was being deliberately produced from metallic copper and zinc minerals using the cementation process, the product of which was calamine brass, and variations on this method continued until the mid-19th century. It was eventually replaced by speltering, the direct alloying of copper and zinc metal which was introduced to Europe in the 16th century. Brass has sometimes historically been referred to as "yellow copper". Early copper-zinc alloys In West Asia and the Eastern Mediterranean early copper-zinc alloys are now known in small numbers from a number of 3rd millennium BC sites in the Aegean, Iraq, the United Arab Emirates, Kalmykia, Turkmenistan and Georgia and from 2nd millennium BC sites in western India, Uzbekistan, Iran, Syria, Iraq and Canaan. Isolated examples of copper-zinc alloys are known in China from the 1st century AD, long after bronze was widely used. The compositions of these early "brass" objects are highly variable and most have zinc contents of between 5% and 15% wt which is lower than in brass produced by cementation. These may be "natural alloys" manufactured by smelting zinc rich copper ores in redox conditions. Many have similar tin contents to contemporary bronze artefacts and it is possible that some copper-zinc alloys were accidental and perhaps not even distinguished from copper. 
However the large number of copper-zinc alloys now known suggests that at least some were deliberately manufactured and many have zinc contents of more than 12% wt which would have resulted in a distinctive golden colour. By the 8th–7th century BC Assyrian cuneiform tablets mention the exploitation of the "copper of the mountains" and this may refer to "natural" brass. "Oreikhalkon" (mountain copper), the Ancient Greek translation of this term, was later adapted to the Latin aurichalcum meaning "golden copper" which became the standard term for brass. In the 4th century BC Plato knew orichalkos as rare and nearly as valuable as gold and Pliny describes how aurichalcum had come from Cypriot ore deposits which had been exhausted by the 1st century AD. X-ray fluorescence analysis of 39 orichalcum ingots recovered from a 2,600-year-old shipwreck off Sicily found them to be an alloy made with 75–80% copper, 15–20% zinc and small percentages of nickel, lead and iron. Roman world During the later part of first millennium BC the use of brass spread across a wide geographical area from Britain and Spain in the west to Iran, and India in the east. This seems to have been encouraged by exports and influence from the Middle East and eastern Mediterranean where deliberate production of brass from metallic copper and zinc ores had been introduced. The 4th century BC writer Theopompus, quoted by Strabo, describes how heating earth from Andeira in Turkey produced "droplets of false silver", probably metallic zinc, which could be used to turn copper into oreichalkos. In the 1st century BC the Greek Dioscorides seems to have recognized a link between zinc minerals and brass describing how Cadmia (zinc oxide) was found on the walls of furnaces used to heat either zinc ore or copper and explaining that it can then be used to make brass. By the first century BC brass was available in sufficient supply to use as coinage in Phrygia and Bithynia, and after the Augustan currency reform of 23 BC it was also used to make Roman dupondii and sestertii. The uniform use of brass for coinage and military equipment across the Roman world may indicate a degree of state involvement in the industry, and brass even seems to have been deliberately boycotted by Jewish communities in Palestine because of its association with Roman authority. Brass was produced by the cementation process where copper and zinc ore are heated together until zinc vapor is produced which reacts with the copper. There is good archaeological evidence for this process and crucibles used to produce brass by cementation have been found on Roman period sites including Xanten and Nidda in Germany, Lyon in France and at a number of sites in Britain. They vary in size from tiny acorn sized to large amphorae like vessels but all have elevated levels of zinc on the interior and are lidded. They show no signs of slag or metal prills suggesting that zinc minerals were heated to produce zinc vapor which reacted with metallic copper in a solid state reaction. The fabric of these crucibles is porous, probably designed to prevent a buildup of pressure, and many have small holes in the lids which may be designed to release pressure or to add additional zinc minerals near the end of the process. Dioscorides mentioned that zinc minerals were used for both the working and finishing of brass, perhaps suggesting secondary additions. Brass made during the early Roman period seems to have varied between 20% and 28% wt zinc. 
The high content of zinc in coinage and brass objects declined after the first century AD and it has been suggested that this reflects zinc loss during recycling and thus an interruption in the production of new brass. However it is now thought this was probably a deliberate change in composition and overall the use of brass increases over this period making up around 40% of all copper alloys used in the Roman world by the 4th century AD. Medieval period Little is known about the production of brass during the centuries immediately after the collapse of the Roman Empire. Disruption in the trade of tin for bronze from Western Europe may have contributed to the increasing popularity of brass in the east and by the 6th–7th centuries AD over 90% of copper alloy artefacts from Egypt were made of brass. However other alloys such as low tin bronze were also used and they vary depending on local cultural attitudes, the purpose of the metal and access to zinc, especially between the Islamic and Byzantine world. Conversely the use of true brass seems to have declined in Western Europe during this period in favor of gunmetals and other mixed alloys but by about 1000 brass artefacts are found in Scandinavian graves in Scotland, brass was being used in the manufacture of coins in Northumbria and there is archaeological and historical evidence for the production of calamine brass in Germany and the Low Countries, areas rich in calamine ore. These places would remain important centres of brass making throughout the Middle Ages period, especially Dinant. Brass objects are still collectively known as dinanderie in French. The baptismal font at St Bartholomew's Church, Liège in modern Belgium (before 1117) is an outstanding masterpiece of Romanesque brass casting, though also often described as bronze. The metal of the early 12th-century Gloucester Candlestick is unusual even by medieval standards in being a mixture of copper, zinc, tin, lead, nickel, iron, antimony and arsenic with an unusually large amount of silver, ranging from 22.5% in the base to 5.76% in the pan below the candle. The proportions of this mixture may suggest that the candlestick was made from a hoard of old coins, probably Late Roman. Latten is a term for medieval alloys of uncertain and often variable composition often covering decorative borders and similar objects cut from sheet metal, whether of brass or bronze. Especially in Tibetan art, analysis of some objects shows very different compositions from different ends of a large piece. Aquamaniles were typically made in brass in both the European and Islamic worlds. The cementation process continued to be used but literary sources from both Europe and the Islamic world seem to describe variants of a higher temperature liquid process which took place in open-topped crucibles. Islamic cementation seems to have used zinc oxide known as tutiya or tutty rather than zinc ores for brass-making, resulting in a metal with lower iron impurities. A number of Islamic writers and the 13th century Italian Marco Polo describe how this was obtained by sublimation from zinc ores and condensed onto clay or iron bars, archaeological examples of which have been identified at Kush in Iran. It could then be used for brass making or medicinal purposes. In 10th century Yemen al-Hamdani described how spreading al-iglimiya, probably zinc oxide, onto the surface of molten copper produced tutiya vapor which then reacted with the metal. 
The 13th century Iranian writer al-Kashani describes a more complex process whereby tutiya was mixed with raisins and gently roasted before being added to the surface of the molten metal. A temporary lid was added at this point presumably to minimize the escape of zinc vapor. In Europe a similar liquid process in open-topped crucibles took place which was probably less efficient than the Roman process and the use of the term tutty by Albertus Magnus in the 13th century suggests influence from Islamic technology. The 12th century German monk Theophilus described how preheated crucibles were one sixth filled with powdered calamine and charcoal then topped up with copper and charcoal before being melted, stirred then filled again. The final product was cast, then again melted with calamine. It has been suggested that this second melting may have taken place at a lower temperature to allow more zinc to be absorbed. Albertus Magnus noted that the "power" of both calamine and tutty could evaporate and described how the addition of powdered glass could create a film to bind it to the metal. German brass making crucibles are known from Dortmund dating to the 10th century AD and from Soest and Schwerte in Westphalia dating to around the 13th century confirm Theophilus' account, as they are open-topped, although ceramic discs from Soest may have served as loose lids which may have been used to reduce zinc evaporation, and have slag on the interior resulting from a liquid process. Africa Some of the most famous objects in African art are the lost wax castings of West Africa, mostly from what is now Nigeria, produced first by the Kingdom of Ife and then the Benin Empire. Though normally described as "bronzes", the Benin Bronzes, now mostly in the British Museum and other Western collections, and the large portrait heads such as the Bronze Head from Ife of "heavily leaded zinc-brass" and the Bronze Head of Queen Idia, both also British Museum, are better described as brass, though of variable compositions. Work in brass or bronze continued to be important in Benin art and other West African traditions such as Akan goldweights, where the metal was regarded as a more valuable material than in Europe. Renaissance and post-medieval Europe The Renaissance saw important changes to both the theory and practice of brassmaking in Europe. By the 15th century there is evidence for the renewed use of lidded cementation crucibles at Zwickau in Germany. These large crucibles were capable of producing c.20 kg of brass. There are traces of slag and pieces of metal on the interior. Their irregular composition suggests that this was a lower temperature, not entirely liquid, process. The crucible lids had small holes which were blocked with clay plugs near the end of the process presumably to maximize zinc absorption in the final stages. Triangular crucibles were then used to melt the brass for casting. 16th-century technical writers such as Biringuccio, Ercker and Agricola described a variety of cementation brass making techniques and came closer to understanding the true nature of the process noting that copper became heavier as it changed to brass and that it became more golden as additional calamine was added. Zinc metal was also becoming more commonplace. By 1513 metallic zinc ingots from India and China were arriving in London and pellets of zinc condensed in furnace flues at the Rammelsberg in Germany were exploited for cementation brass making from around 1550. 
Eventually it was discovered that metallic zinc could be alloyed with copper to make brass, a process known as speltering, and by 1657 the German chemist Johann Glauber had recognized that calamine was "nothing else but unmeltable zinc" and that zinc was a "half ripe metal". However some earlier high zinc, low iron brasses such as the 1530 Wightman brass memorial plaque from England may have been made by alloying copper with zinc and include traces of cadmium similar to those found in some zinc ingots from China. However, the cementation process was not abandoned, and as late as the early 19th century there are descriptions of solid-state cementation in a domed furnace at around 900–950 °C and lasting up to 10 hours. The European brass industry continued to flourish into the post medieval period buoyed by innovations such as the 16th century introduction of water powered hammers for the production of wares such as pots. By 1559 the Germany city of Aachen alone was capable of producing 300,000 cwt of brass per year. After several false starts during the 16th and 17th centuries the brass industry was also established in England taking advantage of abundant supplies of cheap copper smelted in the new coal fired reverberatory furnace. In 1723 Bristol brass maker Nehemiah Champion patented the use of granulated copper, produced by pouring molten metal into cold water. This increased the surface area of the copper helping it react and zinc contents of up to 33% wt were reported using this new technique. In 1738 Nehemiah's son William Champion patented a technique for the first industrial scale distillation of metallic zinc known as distillation per descencum or "the English process". This local zinc was used in speltering and allowed greater control over the zinc content of brass and the production of high-zinc copper alloys which would have been difficult or impossible to produce using cementation, for use in expensive objects such as scientific instruments, clocks, brass buttons and costume jewelry. However Champion continued to use the cheaper calamine cementation method to produce lower-zinc brass and the archaeological remains of bee-hive shaped cementation furnaces have been identified at his works at Warmley. By the mid-to-late 18th century developments in cheaper zinc distillation such as John-Jaques Dony's horizontal furnaces in Belgium and the reduction of tariffs on zinc as well as demand for corrosion-resistant high zinc alloys increased the popularity of speltering and as a result cementation was largely abandoned by the mid-19th century.
Physical sciences
Specific alloys
null
3336
https://en.wikipedia.org/wiki/Brackish%20water
Brackish water
Brackish water, sometimes termed brack water, is water occurring in a natural environment that has more salinity than freshwater, but not as much as seawater. It may result from mixing seawater (salt water) and fresh water together, as in estuaries, or it may occur in brackish fossil aquifers. The word comes from the Middle Dutch root brak. Certain human activities can produce brackish water, in particular civil engineering projects such as dikes and the flooding of coastal marshland to produce brackish water pools for freshwater prawn farming. Brackish water is also the primary waste product of the salinity gradient power process. Because brackish water is hostile to the growth of most terrestrial plant species, without appropriate management it can be damaging to the environment (see article on shrimp farms). Technically, brackish water contains between 0.5 and 30 grams of salt per litre—more often expressed as 0.5 to 30 parts per thousand (‰), which is a specific gravity of between 1.0004 and 1.0226. Thus, brackish covers a range of salinity regimes and is not considered a precisely defined condition. It is characteristic of many brackish surface waters that their salinity can vary considerably over space or time. Water with a salt concentration greater than 30‰ is considered saline. Brackish water habitats Estuaries Brackish water condition commonly occurs when fresh water meets seawater. In fact, the most extensive brackish water habitats worldwide are estuaries, where a river meets the sea. The River Thames flowing through London is a classic river estuary. The town of Teddington a few miles west of London marks the boundary between the tidal and non-tidal parts of the Thames, although it is still considered a freshwater river about as far east as Battersea insofar as the average salinity is very low and the fish fauna consists predominantly of freshwater species such as roach, dace, carp, perch, and pike. The Thames Estuary becomes brackish between Battersea and Gravesend, and the diversity of freshwater fish species present is smaller, primarily roach and dace; euryhaline marine species such as flounder, European seabass, mullet, and smelt become much more common. Further east, the salinity increases and the freshwater fish species are completely replaced by euryhaline marine ones, until the river reaches Gravesend, at which point conditions become fully marine and the fish fauna resembles that of the adjacent North Sea and includes both euryhaline and stenohaline marine species. A similar pattern of replacement can be observed with the aquatic plants and invertebrates living in the river. This type of ecological succession from freshwater to marine ecosystem is typical of river estuaries. River estuaries form important staging points during the migration of anadromous and catadromous fish species, such as salmon, shad and eels, giving them time to form social groups and to adjust to the changes in salinity. Salmon are anadromous, meaning they live in the sea but ascend rivers to spawn; eels are catadromous, living in rivers and streams, but returning to the sea to breed. Besides the species that migrate through estuaries, there are many other fish that use them as "nursery grounds" for spawning or as places young fish can feed and grow before moving elsewhere. Herring and plaice are two commercially important species that use the Thames Estuary for this purpose. Estuaries are also commonly used as fishing grounds and as places for fish farming or ranching. 
For example, Atlantic salmon farms are often located in estuaries, although this has caused controversy, because in doing so, fish farmers expose migrating wild fish to large numbers of external parasites such as sea lice that escape from the pens the farmed fish are kept in. Mangroves Another important brackish water habitat is the mangrove swamp or mangal. Many, though not all, mangrove swamps fringe estuaries and lagoons where the salinity changes with each tide. Among the most specialised residents of mangrove forests are mudskippers, fish that forage for food on land, and archer fish, perch-like fish that "spit" at insects and other small animals living in the trees, knocking them into the water where they can be eaten. Like estuaries, mangrove swamps are extremely important breeding grounds for many fish, with species such as snappers, halfbeaks, and tarpon spawning or maturing among them. Besides fish, numerous other animals use mangroves, including such species as the saltwater crocodile, American crocodile, proboscis monkey, diamondback terrapin, and the crab-eating frog, Fejervarya cancrivora (formerly Rana cancrivora). Mangroves represent important nesting sites for numerous bird groups such as herons, storks, spoonbills, ibises, kingfishers, shorebirds and seabirds. Although often plagued with mosquitoes and other insects that make them unpleasant for humans, mangrove swamps are very important buffer zones between land and sea, and are a natural defense against hurricane and tsunami damage in particular. The Sundarbans and Bhitarkanika Mangroves are two of the largest mangrove forests in the world, both on the coast of the Bay of Bengal. Brackish seas and lakes Some seas and lakes are brackish. The Baltic Sea is a brackish sea adjoining the North Sea. Originally part of the Eridanos river system prior to the Pleistocene, it has since been flooded by the North Sea but still receives so much freshwater from the adjacent lands that the water is brackish. As seawater is denser, the water in the Baltic is stratified, with seawater at the bottom and freshwater at the top. Limited mixing occurs because of the lack of tides and storms, with the result that the fish fauna at the surface is freshwater in composition while that lower down is more marine. Cod are an example of a species only found in deep water in the Baltic, while pike are confined to the less saline surface waters. The Caspian Sea is the world's largest lake and contains brackish water with a salinity about one-third that of normal seawater. The Caspian is famous for its peculiar animal fauna, including one of the few non-marine seals (the Caspian seal) and the great sturgeons, a major source of caviar. Hudson Bay is a brackish marginal sea of the Arctic Ocean. It remains brackish due to its limited connections to the open ocean, the very high levels of freshwater surface runoff from the large Hudson Bay drainage basin, and its low rate of evaporation, as it is completely covered in ice for over half the year. In the Black Sea the surface water is brackish with an average salinity of about 17–18 parts per thousand, compared to 30 to 40 for the oceans. The deep, anoxic water of the Black Sea originates from warm, salty water of the Mediterranean. Lake Texoma, a reservoir on the border between the U.S.
states of Texas and Oklahoma, is a rare example of a brackish lake that is neither part of an endorheic basin nor a direct arm of the ocean, though its salinity is considerably lower than that of the other bodies of water mentioned here. The reservoir was created by the damming of the Red River of the South, which (along with several of its tributaries) receives large amounts of salt from natural seepage from buried deposits in the upstream region. The salinity is high enough that striped bass, a fish normally found only in salt water, has self-sustaining populations in the lake. Brackish marsh Other brackish bodies of water Human uses Brackish water is being used by humans in many different sectors. It is commonly used as cooling water for power generation and in a variety of ways in the mining, oil, and gas industries. Once desalinated it can also be used for agriculture, livestock, and municipal uses. Brackish water can be treated using reverse osmosis, electrodialysis, and other filtration processes.
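A minimal sketch of the salinity bands defined earlier in this article, written in Python; the category names and cut-offs (fresh below 0.5 g/L, brackish from 0.5 to 30 g/L, saline above 30 g/L) simply restate the figures given above, and the example salinities are approximate values for the water bodies discussed in the text.

```python
# Classify a water sample by salinity, using the thresholds quoted in this article:
# fresh water below 0.5 g/L, brackish water from 0.5 to 30 g/L, saline water above 30 g/L.
def classify_salinity(grams_salt_per_litre):
    if grams_salt_per_litre < 0.5:
        return "fresh"
    elif grams_salt_per_litre <= 30.0:
        return "brackish"
    else:
        return "saline"

# Approximate examples drawn from the surface waters discussed above.
samples = [
    ("River Thames at Teddington", 0.1),
    ("Black Sea surface water", 17.5),
    ("Open ocean seawater", 35.0),
]
for name, salinity in samples:
    print(f"{name}: {salinity} g/L -> {classify_salinity(salinity)}")
```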
Physical sciences
Water: General
Earth science
3364
https://en.wikipedia.org/wiki/Bit
Bit
The bit is the most basic unit of information in computing and digital communication. The name is a portmanteau of binary digit. The bit represents a logical state with one of two possible values. These values are most commonly represented as either 1 or 0, but other representations such as true/false, yes/no, on/off, or +/− are also widely used. The relation between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. It may be physically implemented with a two-state device. A contiguous group of binary digits is commonly called a bit string, a bit vector, or a single-dimensional (or multi-dimensional) bit array. A group of eight bits is called one byte, but historically the size of the byte is not strictly defined. Frequently, half, full, double and quadruple words consist of a number of bytes which is a low power of two. A string of four bits is usually a nibble. In information theory, one bit is the information entropy of a random binary variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known. As a unit of information or negentropy, the bit is also known as a shannon, named after Claude E. Shannon. As a measure of the length of a digital string that is encoded as symbols over a 0-1 (binary) alphabet, the bit has been called a binit, but this usage is now rare. In data compression, the goal is to find a shorter representation for a string, so that it requires fewer bits of storage, but it must be "compressed" before storage and then (generally) "decompressed" before it is used in a computation. The field of Algorithmic Information Theory is devoted to the study of the "irreducible information content" of a string (i.e. its shortest-possible representation length, in bits), under the assumption that the receiver has minimal a priori knowledge of the method used to compress the string. The symbol for the binary digit is either "bit", per the IEC 80000-13:2008 standard, or the lowercase character "b", per the IEEE 1541-2002 standard. Use of the latter may create confusion with the capital "B" which is the international standard symbol for the byte. History Ralph Hartley suggested the use of a logarithmic measure of information in 1928. Claude E. Shannon first used the word "bit" in his seminal 1948 paper "A Mathematical Theory of Communication". He attributed its origin to John W. Tukey, who had written a Bell Labs memo on 9 January 1947 in which he contracted "binary information digit" to simply "bit". Physical representation A bit can be stored by a digital device or other physical system that exists in either of two possible distinct states. These may be the two stable states of a flip-flop, two positions of an electrical switch, two distinct voltage or current levels allowed by a circuit, two distinct levels of light intensity, two directions of magnetization or polarization, the orientation of reversible double stranded DNA, etc. Perhaps the earliest example of a binary storage device was the punched card invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard (1804), and later adopted by Semyon Korsakov, Charles Babbage, Herman Hollerith, and early computer manufacturers like IBM. A variant of that idea was the perforated paper tape.
In all those systems, the medium (card or tape) conceptually carried an array of hole positions; each position could be either punched through or not, thus carrying one bit of information. The encoding of text by bits was also used in Morse code (1844) and early digital communications machines such as teletypes and stock ticker machines (1870). The first electrical devices for discrete logic (such as elevator and traffic light control circuits, telephone switches, and Konrad Zuse's computer) represented bits as the states of electrical relays which could be either "open" or "closed". When relays were replaced by vacuum tubes, starting in the 1940s, computer builders experimented with a variety of storage methods, such as pressure pulses traveling down a mercury delay line, charges stored on the inside surface of a cathode-ray tube, or opaque spots printed on glass discs by photolithographic techniques. In the 1950s and 1960s, these methods were largely supplanted by magnetic storage devices such as magnetic-core memory, magnetic tapes, drums, and disks, where a bit was represented by the polarity of magnetization of a certain area of a ferromagnetic film, or by a change in polarity from one direction to the other. The same principle was later used in the magnetic bubble memory developed in the 1980s, and is still found in various magnetic strip items such as metro tickets and some credit cards. In modern semiconductor memory, such as dynamic random-access memory or a solid-state drive, the two values of a bit are represented by two levels of electric charge stored in a capacitor or a floating-gate MOSFET. In certain types of programmable logic arrays and read-only memory, a bit may be represented by the presence or absence of a conducting path at a certain point of a circuit. In optical discs, a bit is encoded as the presence or absence of a microscopic pit on a reflective surface. In one-dimensional bar codes and two-dimensional QR codes, bits are encoded as lines or squares which may be either black or white. In modern digital computing, bits are transformed in Boolean logic gates. Transmission and processing Bits are transmitted one at a time in serial transmission. By contrast, multiple bits are transmitted simultaneously in a parallel transmission. A serial computer processes information in either a bit-serial or a byte-serial fashion. From the standpoint of data communications, a byte-serial transmission is an 8-way parallel transmission with binary signalling. In programming languages such as C, a bitwise operation operates on binary strings as though they are vectors of bits, rather than interpreting them as binary numbers. Data transfer rates are usually measured in decimal SI multiples. For example, a channel capacity may be specified as 8 kbit/s = 8 kb/s = 1 kB/s. Storage File sizes are often measured in (binary) IEC multiples of bytes, for example 1 KiB = 1024 bytes = 8192 bits. Confusion may arise in cases where (for historic reasons) file sizes are specified with binary multipliers using the ambiguous prefixes K, M, and G rather than the IEC standard prefixes Ki, Mi, and Gi. Mass storage devices are usually measured in decimal SI multiples, for example 1 TB = 10¹² bytes. Confusingly, the storage capacity of a directly addressable memory device, such as a DRAM chip, or an assemblage of such chips on a memory module, is specified as a binary multiple, using the ambiguous prefix G rather than the IEC recommended Gi prefix.
For example, a DRAM chip that is specified (and advertised) as having "1 GB" of capacity has 2^30 bytes (1,073,741,824 bytes) of capacity. As of 2022, the difference between the popular understanding of a memory system with "8 GB" of capacity and the SI-correct meaning of "8 GB" was still causing difficulty for software designers. Unit and symbol The bit is not defined in the International System of Units (SI). However, the International Electrotechnical Commission issued standard IEC 60027, which specifies that the symbol for binary digit should be 'bit', and this should be used in all multiples, such as 'kbit', for kilobit. However, the lower-case letter 'b' is widely used as well and was recommended by the IEEE 1541 Standard (2002). In contrast, the upper-case letter 'B' is the standard and customary symbol for byte. Multiple bits Multiple bits may be expressed and represented in several ways. For convenience of representing commonly recurring groups of bits in information technology, several units of information have traditionally been used. The most common is the unit byte, coined by Werner Buchholz in June 1956, which historically was used to represent the group of bits used to encode a single character of text (until UTF-8 multibyte encoding took over) in a computer, and for this reason it was used as the basic addressable element in many computer architectures. By 1993, the trend in hardware design had converged on the 8-bit byte. However, because of the ambiguity of relying on the underlying hardware design, the unit octet was defined to explicitly denote a sequence of eight bits. Computers usually manipulate bits in groups of a fixed size, conventionally named "words". Like the byte, the number of bits in a word also varies with the hardware design, and is typically between 8 and 80 bits, or even more in some specialized computers. In the early 21st century, retail personal or server computers have a word size of 32 or 64 bits. The International System of Units defines a series of decimal prefixes for multiples of standardized units which are commonly also used with the bit and the byte. The prefixes kilo (10^3) through yotta (10^24) increment by multiples of one thousand, and the corresponding units are the kilobit (kbit) through the yottabit (Ybit).
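The gap between the SI (decimal) prefixes and the IEC binary prefixes discussed above is easy to verify numerically. The following short Python sketch is an illustration only; the table values follow directly from the definitions given above, and the variable names are mine, not part of any standard. It prints the decimal and binary multiples of the byte side by side together with the relative difference, which grows from roughly 2% at the kilo/kibi level to roughly 9% at the tera/tebi level.

```python
# Decimal (SI) multiples versus binary (IEC) multiples of the byte.
SI = {"kB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
IEC = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

for (si_unit, si_value), (iec_unit, iec_value) in zip(SI.items(), IEC.items()):
    gap = (iec_value - si_value) / iec_value * 100  # how much smaller the decimal unit is
    print(f"1 {si_unit} = {si_value:>15,} bytes   "
          f"1 {iec_unit} = {iec_value:>15,} bytes   "
          f"decimal unit smaller by {gap:.1f}%")

# A channel capacity of 8 kbit/s is 8,000 bits per second, i.e. 1,000 bytes per second (1 kB/s).
print(8_000 // 8, "bytes per second")
```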
Physical sciences
Data
null
3365
https://en.wikipedia.org/wiki/Byte
Byte
The byte is a unit of digital information that most commonly consists of eight bits. 1 byte (B) = 8 bits (bit). Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures. To disambiguate arbitrarily sized bytes from the common 8-bit definition, network protocol documents such as the Internet Protocol () refer to an 8-bit byte as an octet. Those bits in an octet are usually counted with numbering from 0 to 7 or 7 to 0 depending on the bit endianness. The size of the byte has historically been hardware-dependent and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used. The six-bit character code was an often-used implementation in early encoding systems, and computers using six-bit and nine-bit bytes were common in the 1960s. These systems often had memory words of 12, 18, 24, 30, 36, 48, or 60 bits, corresponding to 2, 3, 4, 5, 6, 8, or 10 six-bit bytes, and persisted, in legacy systems, into the twenty-first century. In this era, bit groupings in the instruction stream were often referred to as syllables or slab, before the term byte became common. The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte, as 2 to the power of 8 is 256. The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers commonly optimize for this usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit byte. Modern architectures typically use 32- or 64-bit words, built of four or eight bytes, respectively. The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and Institute of Electrical and Electronics Engineers (IEEE). Internationally, the unit octet explicitly defines a sequence of eight bits, eliminating the potential ambiguity of the term "byte". The symbol for octet, 'o', also conveniently eliminates the ambiguity in the symbol 'B' between byte and bel. Etymology and history The term byte was coined by Werner Buchholz in June 1956, during the early design phase for the IBM Stretch computer, which had addressing to the bit and variable field length (VFL) instructions with a byte size encoded in the instruction. It is a deliberate respelling of bite to avoid accidental mutation to bit. Another origin of byte for bit groups smaller than a computer's word size, and in particular groups of four bits, is on record by Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on an air defense system called SAGE at MIT Lincoln Laboratory in 1956 or 1957, which was jointly developed by Rand, MIT, and IBM. Later on, Schwartz's language JOVIAL actually used the term, but the author recalled vaguely that it was derived from AN/FSQ-31. Early computers used a variety of four-bit binary-coded decimal (BCD) representations and the six-bit codes for printable graphic patterns common in the U.S. Army (FIELDATA) and Navy. These representations included alphanumeric characters and special graphical symbols. 
These sets were expanded in 1963 to seven bits of coding, called the American Standard Code for Information Interchange (ASCII) as the Federal Information Processing Standard, which replaced the incompatible teleprinter codes in use by different branches of the U.S. government and universities during the 1960s. ASCII included the distinction of upper- and lowercase alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and the physical or logical control of data flow over the transmission media. During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its product line of System/360 the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of their six-bit binary-coded decimal (BCDIC) representations used in earlier card punches. The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size, while in detail the EBCDIC and ASCII encoding schemes are different. In the early 1960s, AT&T introduced digital telephony on long-distance trunk lines. These used the eight-bit μ-law encoding. This large investment promised to reduce transmission costs for eight-bit data. In Volume 1 of The Art of Computer Programming (first published in 1968), Donald Knuth uses byte in his hypothetical MIX computer to denote a unit which "contains an unspecified amount of information ... capable of holding at least 64 distinct values ... at most 100 distinct values. On a binary computer a byte must therefore be composed of six bits". He notes that "Since 1975 or so, the word byte has come to mean a sequence of precisely eight binary digits...When we speak of bytes in connection with MIX we shall confine ourselves to the former sense of the word, harking back to the days when bytes were not yet standardized." The development of eight-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8080, the direct predecessor of the 8086, could also perform a small number of operations on the four-bit pairs in a byte, such as the decimal-add-adjust (DAA) instruction. A four-bit quantity is often called a nibble, also nybble, which is conveniently represented by a single hexadecimal digit. The term octet unambiguously specifies a size of eight bits. It is used extensively in protocol definitions. Historically, the term octad or octade was used to denote eight bits as well at least in Western Europe; however, this usage is no longer common. The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers. Unit symbol The unit symbol for the byte is specified in IEC 80000-13, IEEE 1541 and the Metric Interchange Format as the upper-case character B. In the International System of Quantities (ISQ), B is also the symbol of the bel, a unit of logarithmic power ratio named after Alexander Graham Bell, creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit. It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one-tenth of a byte, the decibyte, and other fractions, are only used in derived units, such as transmission rates. 
The lowercase letter o is defined as the symbol for the octet in IEC 80000-13 and is commonly used in languages such as French and Romanian, and is also combined with metric prefixes for multiples, for example ko and Mo. Multiple-byte units More than one system exists to define unit multiples based on the byte. Some systems are based on powers of 10, following the International System of Units (SI), which defines for example the prefix kilo as 1000 (10^3); other systems are based on powers of two. Nomenclature for these systems has led to confusion. Systems based on powers of 10 use standard SI prefixes (kilo, mega, giga, ...) and their corresponding symbols (k, M, G, ...). Systems based on powers of 2, however, might use binary prefixes (kibi, mebi, gibi, ...) and their corresponding symbols (Ki, Mi, Gi, ...), or they might use the prefixes K, M, and G, creating ambiguity when the prefixes M or G are used. While the difference between the decimal and binary interpretations is relatively small for the kilobyte (about 2% smaller than the kibibyte), the systems deviate increasingly as units grow larger (the relative deviation grows by 2.4% for each three orders of magnitude). For example, a power-of-10-based terabyte is about 9% smaller than a power-of-2-based tebibyte. Units based on powers of 10 Definition of prefixes using powers of 10, in which 1 kilobyte (symbol kB) is defined to equal 1,000 bytes, is recommended by the International Electrotechnical Commission (IEC). The IEC standard defines eight such multiples, up to 1 yottabyte (YB), equal to 1000^8 bytes. The additional prefixes ronna- for 1000^9 and quetta- for 1000^10 were adopted by the International Bureau of Weights and Measures (BIPM) in 2022. This definition is most commonly used for data-rate units in computer networks, internal bus, hard drive and flash media transfer speeds, and for the capacities of most storage media, particularly hard drives, flash-based storage, and DVDs. Operating systems that use this definition include macOS, iOS, Ubuntu, and Debian. It is also consistent with the other uses of the SI prefixes in computing, such as CPU clock speeds or measures of performance. Units based on powers of 2 A system of units based on powers of 2 in which 1 kibibyte (KiB) is equal to 1,024 (i.e., 2^10) bytes is defined by international standard IEC 80000-13 and is supported by national and international standards bodies (BIPM, IEC, NIST). The IEC standard defines eight such multiples, up to 1 yobibyte (YiB), equal to 1024^8 bytes. The natural binary counterparts to ronna- and quetta- were given in a consultation paper of the International Committee for Weights and Measures' Consultative Committee for Units (CCU) as robi- (Ri, 1024^9) and quebi- (Qi, 1024^10), but have not yet been adopted by the IEC or ISO. An alternative system of nomenclature for the same units (referred to here as the customary convention), in which 1 kilobyte (KB) is equal to 1,024 bytes, 1 megabyte (MB) is equal to 1024^2 bytes and 1 gigabyte (GB) is equal to 1024^3 bytes, is mentioned by a 1990s JEDEC standard. Only the first three multiples (up to GB) are mentioned by the JEDEC standard, which makes no mention of TB and larger. While confusing and incorrect, the customary convention is used by the Microsoft Windows operating system and for random-access memory capacity, such as main memory and CPU cache size, and in marketing and billing by telecommunication companies, such as Vodafone, AT&T, Orange and Telstra. 
For storage capacity, the customary convention was used by macOS and iOS through Mac OS X 10.5 Leopard and iOS 10, after which they switched to units based on powers of 10. Parochial units Various computer vendors have coined terms for data of various sizes, sometimes with different sizes for the same term even within a single vendor. These terms include double word, half word, long word, quad word, slab, superword and syllable. There are also informal terms, e.g., half byte and nybble for 4 bits, octal K for . History of the conflicting definitions Contemporary computer memory has a binary architecture, making a definition of memory units based on powers of 2 most practical. The use of the metric prefix kilo for binary multiples arose as a convenience, because 1,024 is approximately 1,000. This definition was popular in the early decades of personal computing, with products like the Tandon 5¼-inch DD floppy format (holding 368,640 bytes) being advertised as "360 KB", following the 1,024-byte convention. It was not universal, however. The Shugart SA-400 5¼-inch floppy disk held 109,375 bytes unformatted, and was advertised as "110 Kbyte", using the 1000 convention. Likewise, the 8-inch DEC RX01 floppy (1975) held 256,256 bytes formatted, and was advertised as "256k". Some devices were advertised using a mixture of the two definitions: most notably, floppy disks advertised as "1.44 MB" have an actual capacity of 1,474,560 bytes, the equivalent of 1.47 MB or 1.41 MiB. In 1995, the International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols attempted to resolve this ambiguity by proposing a set of binary prefixes for the powers of 1024, including kibi (kilobinary), mebi (megabinary), and gibi (gigabinary). In December 1998, the IEC addressed such multiple usages and definitions by adopting the IUPAC's proposed prefixes (kibi, mebi, gibi, etc.) to unambiguously denote powers of 1024. Thus one kibibyte (1 KiB) is 1024^1 bytes = 1,024 bytes, one mebibyte (1 MiB) is 1024^2 bytes = 1,048,576 bytes, and so on. In 1999, Donald Knuth suggested calling the kibibyte a "large kilobyte" (KKB). Modern standard definitions The IEC adopted the IUPAC proposal and published the standard in January 1999. The IEC prefixes are part of the International System of Quantities. The IEC further specified that the kilobyte should only be used to refer to 1,000 bytes. Lawsuits over definition Lawsuits arising from alleged consumer confusion over the binary and decimal definitions of multiples of the byte have generally ended in favor of the manufacturers, with courts holding that the legal definition of gigabyte or GB is 1 GB = 1,000,000,000 (10^9) bytes (the decimal definition), rather than the binary definition (2^30 bytes, i.e., 1,073,741,824 bytes). Specifically, the United States District Court for the Northern District of California held that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce' [...] The California Legislature has likewise adopted the decimal system for all 'transactions in this state.'" Earlier lawsuits had ended in settlement with no court ruling on the question, such as a lawsuit against drive manufacturer Western Digital. Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity. Seagate was sued on similar grounds and also settled. Practical examples Common uses Many programming languages define the data type byte. 
The C and C++ programming languages define byte as an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard). The C standard requires that the integral data type unsigned char must hold at least 256 different values, and is represented by at least eight bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte. In addition, the C and C++ standards require that there be no gaps between two bytes. This means every bit in memory is part of a byte. Java's primitive data type byte is defined as eight bits. It is a signed data type, holding values from −128 to 127. .NET programming languages, such as C#, define byte as an unsigned type, and the sbyte as a signed data type, holding values from 0 to 255, and −128 to 127, respectively. In data transmission systems, the byte is used as a contiguous sequence of bits in a serial data stream, representing the smallest distinguished unit of data. For asynchronous communication a full transmission unit usually additionally includes a start bit, 1 or 2 stop bits, and possibly a parity bit, and thus its size may vary from seven to twelve bits for five to eight bits of actual data. For synchronous communication the error checking usually uses bytes at the end of a frame.
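A concrete way to see the signed/unsigned distinction drawn above between Java's byte and the .NET byte/sbyte pair is to interpret the same eight-bit pattern both ways. The sketch below is a minimal Python illustration, not tied to any of those runtimes; it only assumes that a signed byte uses two's-complement representation, as the languages cited above do.

```python
# One byte stores 256 distinct bit patterns; whether a pattern means 255 or -1
# is purely a matter of interpretation (unsigned versus two's-complement signed).
pattern = bytes([0b1111_1111])  # the single-byte pattern 1111 1111

unsigned = int.from_bytes(pattern, byteorder="big", signed=False)
signed = int.from_bytes(pattern, byteorder="big", signed=True)

print(unsigned)  # 255 -> unsigned reading, range 0..255 (like .NET's byte)
print(signed)    # -1  -> signed reading, range -128..127 (like Java's byte or .NET's sbyte)
```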
Physical sciences
Data
null
3370
https://en.wikipedia.org/wiki/Boron%20nitride
Boron nitride
Boron nitride is a thermally and chemically resistant refractory compound of boron and nitrogen with the chemical formula BN. It exists in various crystalline forms that are isoelectronic to a similarly structured carbon lattice. The hexagonal form corresponding to graphite is the most stable and soft among BN polymorphs, and is therefore used as a lubricant and an additive to cosmetic products. The cubic (zincblende aka sphalerite structure) variety analogous to diamond is called c-BN; it is softer than diamond, but its thermal and chemical stability is superior. The rare wurtzite BN modification is similar to lonsdaleite but slightly softer than the cubic form. Because of excellent thermal and chemical stability, boron nitride ceramics are used in high-temperature equipment and metal casting. Boron nitride has potential use in nanotechnology. History Boron nitride was discovered by chemistry teacher of the Liverpool Institute in 1842 via reduction of boric acid with charcoal in the presence of potassium cyanide. Structure Boron nitride exists in multiple forms that differ in the arrangement of the boron and nitrogen atoms, giving rise to varying bulk properties of the material. Amorphous form (a-BN) The amorphous form of boron nitride (a-BN) is non-crystalline, lacking any long-distance regularity in the arrangement of its atoms. It is analogous to amorphous carbon. All other forms of boron nitride are crystalline. Hexagonal form (h-BN) The most stable crystalline form is the hexagonal one, also called h-BN, α-BN, g-BN, graphitic boron nitride and "white graphene". Hexagonal boron nitride (point group = D3h; space group = P63/mmc) has a layered structure similar to graphite. Within each layer, boron and nitrogen atoms are bound by strong covalent bonds, whereas the layers are held together by weak van der Waals forces. The interlayer "registry" of these sheets differs, however, from the pattern seen for graphite, because the atoms are eclipsed, with boron atoms lying over and above nitrogen atoms. This registry reflects the local polarity of the B–N bonds, as well as interlayer N-donor/B-acceptor characteristics. Likewise, many metastable forms consisting of differently stacked polytypes exist. Therefore, h-BN and graphite are very close neighbors, and the material can accommodate carbon as a substituent element to form BNCs. BC6N hybrids have been synthesized, where carbon substitutes for some B and N atoms. Hexagonal boron nitride monolayer is analogous to graphene, having a honeycomb lattice structure of nearly the same dimensions. Unlike graphene, which is black and an electrical conductor, h-BN monolayer is white and an insulator. It has been proposed for use as an atomic flat insulating substrate or a tunneling dielectric barrier in 2D electronics. . Cubic form (c-BN) Cubic boron nitride has a crystal structure analogous to that of diamond. Consistent with diamond being less stable than graphite, the cubic form is less stable than the hexagonal form, but the conversion rate between the two is negligible at room temperature, as it is for diamond. The cubic form has the sphalerite crystal structure (space group = F3m), the same as that of diamond (with ordered B and N atoms), and is also called β-BN or c-BN. Wurtzite form (w-BN) The wurtzite form of boron nitride (w-BN; point group = C6v; space group = P63mc) has the same structure as lonsdaleite, a rare hexagonal polymorph of carbon. As in the cubic form, the boron and nitrogen atoms are grouped into tetrahedra. 
In the wurtzite form, the boron and nitrogen atoms are grouped into 6-membered rings. In the cubic form all rings are in the chair configuration, whereas in w-BN the rings between 'layers' are in boat configuration. Earlier optimistic reports predicted that the wurtzite form was very strong, and a simulation estimated it as potentially having a strength 18% greater than that of diamond. Since only small amounts of the mineral exist in nature, this has not yet been experimentally verified. Its hardness is 46 GPa, slightly harder than commercial borides but softer than the cubic form of boron nitride. Properties Physical The partly ionic structure of BN layers in h-BN reduces covalency and electrical conductivity, whereas the interlayer interaction increases, resulting in higher hardness of h-BN relative to graphite. The reduced electron delocalization in hexagonal BN is also indicated by its absence of color and a large band gap. Very different bonding – strong covalent within the basal planes (planes where boron and nitrogen atoms are covalently bonded) and weak between them – causes high anisotropy of most properties of h-BN. For example, the hardness, electrical and thermal conductivity are much higher within the planes than perpendicular to them. By contrast, the properties of c-BN and w-BN are more homogeneous and isotropic. Those materials are extremely hard, with the hardness of bulk c-BN being slightly smaller than, and that of w-BN even higher than, that of diamond. Polycrystalline c-BN with grain sizes on the order of 10 nm is also reported to have a Vickers hardness comparable to or higher than that of diamond. Because of much better stability to heat and transition metals, c-BN surpasses diamond in mechanical applications, such as machining steel. The thermal conductivity of BN is among the highest of all electric insulators (see table). Boron nitride can be doped p-type with beryllium and n-type with boron, sulfur, silicon, or if co-doped with carbon and nitrogen. Both hexagonal and cubic BN are wide-gap semiconductors with a band-gap energy corresponding to the UV region. If voltage is applied to h-BN or c-BN, it emits UV light in the range 215–250 nm and therefore can potentially be used in light-emitting diodes (LEDs) or lasers. Little is known about the melting behavior of boron nitride. It degrades at 2973 °C, but melts at elevated pressure. Thermal stability Hexagonal and cubic BN (and probably w-BN) show remarkable chemical and thermal stabilities. For example, h-BN is stable to decomposition at temperatures up to 1000 °C in air, 1400 °C in vacuum, and 2800 °C in an inert atmosphere. The reactivity of h-BN and c-BN is relatively similar, and the data for c-BN are summarized in the table below. Thermal stability of c-BN can be summarized as follows: In air or oxygen: a protective oxide layer prevents further oxidation to ~1300 °C; no conversion to the hexagonal form at 1400 °C. In nitrogen: some conversion to h-BN at 1525 °C after 12 h. In vacuum (): conversion to h-BN at 1550–1600 °C. Chemical stability Boron nitride is not attacked by the usual acids, but it is soluble in alkaline molten salts and nitrides, such as molten LiOH, KOH and NaOH and various nitrides, which are therefore used to etch BN. Thermal conductivity The theoretical thermal conductivity of hexagonal boron nitride nanoribbons (BNNRs) can approach 1700–2000 W/(m⋅K), which has the same order of magnitude as the experimentally measured value for graphene, and can be comparable to the theoretical calculations for graphene nanoribbons. 
Moreover, the thermal transport in the BNNRs is anisotropic. The thermal conductivity of zigzag-edged BNNRs is about 20% larger than that of armchair-edged nanoribbons at room temperature. Mechanical properties BN nanosheets consist of hexagonal boron nitride (h-BN). They are stable up to 800 °C in air. The structure of monolayer BN is similar to that of graphene, which has exceptional strength and serves as a high-temperature lubricant and as a substrate in electronic devices. The anisotropy of Young's modulus and Poisson's ratio depends on the system size. h-BN also exhibits strongly anisotropic strength and toughness, and maintains these over a range of vacancy defects, showing that the anisotropy is independent of the defect type. Natural occurrence In 2009, the cubic form (c-BN) was reported in Tibet, and the name qingsongite was proposed. The substance was found in dispersed micron-sized inclusions in chromium-rich rocks. In 2013, the International Mineralogical Association affirmed the mineral and the name. Synthesis Preparation and reactivity of hexagonal BN Hexagonal boron nitride is obtained by treating boron trioxide (B2O3) or boric acid (H3BO3) with ammonia (NH3) or urea (CO(NH2)2) in an inert atmosphere: B2O3 + 2 NH3 → 2 BN + 3 H2O (T = 900 °C); B(OH)3 + NH3 → BN + 3 H2O (T = 900 °C); B2O3 + CO(NH2)2 → 2 BN + CO2 + 2 H2O (T > 1000 °C); B2O3 + 3 CaB6 + 10 N2 → 20 BN + 3 CaO (T > 1500 °C). The resulting disordered (amorphous) material contains 92–95% BN and 5–8% B2O3. The remaining B2O3 can be evaporated in a second step at higher temperatures in order to achieve a BN concentration above 98%. Such annealing also crystallizes BN, the size of the crystallites increasing with the annealing temperature. h-BN parts can be fabricated inexpensively by hot-pressing with subsequent machining. The parts are made from boron nitride powders with added boron oxide for better compressibility. Thin films of boron nitride can be obtained by chemical vapor deposition from borazine. ZYP Coatings has also developed boron nitride coatings that may be painted on a surface. Combustion of boron powder in nitrogen plasma at 5500 °C yields ultrafine boron nitride used for lubricants and toners. Boron nitride reacts with iodine fluoride to give nitrogen triiodide (NI3) in low yield. Boron nitride reacts with nitrides of lithium, alkaline earth metals and lanthanides to form nitridoborates, for example Li3N + BN → Li3BN2. Intercalation of hexagonal BN Various species intercalate into hexagonal BN, such as ammonia (NH3) or alkali metals. Preparation of cubic BN c-BN is prepared analogously to the preparation of synthetic diamond from graphite. Direct conversion of hexagonal boron nitride to the cubic form has been observed at pressures between 5 and 18 GPa and temperatures between 1730 and 3230 °C, parameters similar to those for the direct graphite-diamond conversion. The addition of a small amount of boron oxide can lower the required pressure to 4–7 GPa and the temperature to 1500 °C. As in diamond synthesis, to further reduce the conversion pressures and temperatures, a catalyst is added, such as lithium, potassium, or magnesium, their nitrides, their fluoronitrides, water with ammonium compounds, or hydrazine. Other industrial synthesis methods, again borrowed from diamond growth, use crystal growth in a temperature gradient, or an explosive shock wave. The shock wave method is used to produce a material called heterodiamond, a superhard compound of boron, carbon, and nitrogen. Low-pressure deposition of thin films of cubic boron nitride is possible. As in diamond growth, the major problem is to suppress the growth of hexagonal phases (h-BN or graphite, respectively). 
Whereas in diamond growth this is achieved by adding hydrogen gas, boron trifluoride is used for c-BN. Ion beam deposition, plasma-enhanced chemical vapor deposition, pulsed laser deposition, reactive sputtering, and other physical vapor deposition methods are used as well. Preparation of wurtzite BN Wurtzite BN can be obtained via static high-pressure or dynamic shock methods. The limits of its stability are not well defined. Both c-BN and w-BN are formed by compressing h-BN, but formation of w-BN occurs at much lower temperatures close to 1700 °C. Production statistics Whereas the production and consumption figures for the raw materials used for BN synthesis, namely boric acid and boron trioxide, are well known (see boron), the corresponding numbers for the boron nitride are not listed in statistical reports. An estimate for the 1999 world production is 300 to 350 metric tons. The major producers and consumers of BN are located in the United States, Japan, China and Germany. In 2000, prices varied from about $75–120/kg for standard industrial-quality h-BN and were about up to $200–400/kg for high purity BN grades. Applications Hexagonal BN Hexagonal BN (h-BN) is the most widely used polymorph. It is a good lubricant at both low and high temperatures (up to 900 °C, even in an oxidizing atmosphere). h-BN lubricant is particularly useful when the electrical conductivity or chemical reactivity of graphite (alternative lubricant) would be problematic. In internal combustion engines, where graphite could be oxidized and turn into carbon sludge, h-BN with its superior thermal stability can be added to engine lubricants. As with all nano-particle suspensions, Brownian-motion settlement is a problem. Settlement can clog engine oil filters, which limits solid lubricant applications in a combustion engine to automotive racing, where engine re-building is common. Since carbon has appreciable solubility in certain alloys (such as steels), which may lead to degradation of properties, BN is often superior for high temperature and/or high pressure applications. Another advantage of h-BN over graphite is that its lubricity does not require water or gas molecules trapped between the layers. Therefore, h-BN lubricants can be used in vacuum, such as space applications. The lubricating properties of fine-grained h-BN are used in cosmetics, paints, dental cements, and pencil leads. Hexagonal BN was first used in cosmetics around 1940 in Japan. Because of its high price, h-BN was abandoned for this application. Its use was revitalized in the late 1990s with the optimization h-BN production processes, and currently h-BN is used by nearly all leading producers of cosmetic products for foundations, make-up, eye shadows, blushers, kohl pencils, lipsticks and other skincare products. Because of its excellent thermal and chemical stability, boron nitride ceramics and coatings are used high-temperature equipment. h-BN can be included in ceramics, alloys, resins, plastics, rubbers, and other materials, giving them self-lubricating properties. Such materials are suitable for construction of e.g. bearings and in steelmaking. Many quantum devices use multilayer h-BN as a substrate material. It can also be used as a dielectric in resistive random access memories. Hexagonal BN is used in xerographic process and laser printers as a charge leakage barrier layer of the photo drum. 
In the automotive industry, h-BN mixed with a binder (boron oxide) is used for sealing oxygen sensors, which provide feedback for adjusting fuel flow. The binder utilizes the unique temperature stability and insulating properties of h-BN. Parts can be made by hot pressing from four commercial grades of h-BN. Grade HBN contains a boron oxide binder; it is usable up to 550–850 °C in oxidizing atmosphere and up to 1600 °C in vacuum, but due to the boron oxide content is sensitive to water. Grade HBR uses a calcium borate binder and is usable at 1600 °C. Grades HBC and HBT contain no binder and can be used up to 3000 °C. Boron nitride nanosheets (h-BN) can be deposited by catalytic decomposition of borazine at a temperature ~1100 °C in a chemical vapor deposition setup, over areas up to about 10 cm2. Owing to their hexagonal atomic structure, small lattice mismatch with graphene (~2%), and high uniformity they are used as substrates for graphene-based devices. BN nanosheets are also excellent proton conductors. Their high proton transport rate, combined with the high electrical resistance, may lead to applications in fuel cells and water electrolysis. h-BN has been used since the mid-2000s as a bullet and bore lubricant in precision target rifle applications as an alternative to molybdenum disulfide coating, commonly referred to as "moly". It is claimed to increase effective barrel life, increase intervals between bore cleaning and decrease the deviation in point of impact between clean bore first shots and subsequent shots. h-BN is used as a release agent in molten metal and glass applications. For example, ZYP Coatings developed and currently produces a line of paintable h-BN coatings that are used by manufacturers of molten aluminium, non-ferrous metal, and glass. Because h-BN is nonwetting and lubricious to these molten materials, the coated surface (i.e. mold or crucible) does not stick to the material. Cubic BN Cubic boron nitride (CBN or c-BN) is widely used as an abrasive. Its usefulness arises from its insolubility in iron, nickel, and related alloys at high temperatures, whereas diamond is soluble in these metals. Polycrystalline c-BN (PCBN) abrasives are therefore used for machining steel, whereas diamond abrasives are preferred for aluminum alloys, ceramics, and stone. When in contact with oxygen at high temperatures, BN forms a passivation layer of boron oxide. Boron nitride binds well with metals due to formation of interlayers of metal borides or nitrides. Materials with cubic boron nitride crystals are often used in the tool bits of cutting tools. For grinding applications, softer binders such as resin, porous ceramics and soft metals are used. Ceramic binders can be used as well. Commercial products are known under names "Borazon" (by Hyperion Materials & Technologies), and "Elbor" or "Cubonite" (by Russian vendors). Contrary to diamond, large c-BN pellets can be produced in a simple process (called sintering) of annealing c-BN powders in nitrogen flow at temperatures slightly below the BN decomposition temperature. This ability of c-BN and h-BN powders to fuse allows cheap production of large BN parts. Similar to diamond, the combination in c-BN of highest thermal conductivity and electrical resistivity is ideal for heat spreaders. 
As cubic boron nitride consists of light atoms and is very robust chemically and mechanically, it is one of the popular materials for X-ray membranes: low mass results in small X-ray absorption, and good mechanical properties allow usage of thin membranes, further reducing the absorption. Amorphous BN Layers of amorphous boron nitride (a-BN) are used in some semiconductor devices, e.g. MOSFETs. They can be prepared by chemical decomposition of trichloroborazine with caesium, or by thermal chemical vapor deposition methods. Thermal CVD can be also used for deposition of h-BN layers, or at high temperatures, c-BN. Other forms of boron nitride Atomically thin boron nitride Hexagonal boron nitride can be exfoliated to mono or few atomic layer sheets. Due to its analogous structure to that of graphene, atomically thin boron nitride is sometimes called white graphene. Mechanical properties Atomically thin boron nitride is one of the strongest electrically insulating materials. Monolayer boron nitride has an average Young's modulus of 0.865TPa and fracture strength of 70.5GPa, and in contrast to graphene, whose strength decreases dramatically with increased thickness, few-layer boron nitride sheets have a strength similar to that of monolayer boron nitride. Thermal conductivity Atomically thin boron nitride has one of the highest thermal conductivity coefficients (751 W/mK at room temperature) among semiconductors and electrical insulators, and its thermal conductivity increases with reduced thickness due to less intra-layer coupling. Thermal stability The air stability of graphene shows a clear thickness dependence: monolayer graphene is reactive to oxygen at 250 °C, strongly doped at 300 °C, and etched at 450 °C; in contrast, bulk graphite is not oxidized until 800 °C. Atomically thin boron nitride has much better oxidation resistance than graphene. Monolayer boron nitride is not oxidized till 700 °C and can sustain up to 850 °C in air; bilayer and trilayer boron nitride nanosheets have slightly higher oxidation starting temperatures. The excellent thermal stability, high impermeability to gas and liquid, and electrical insulation make atomically thin boron nitride potential coating materials for preventing surface oxidation and corrosion of metals and other two-dimensional (2D) materials, such as black phosphorus. Better surface adsorption Atomically thin boron nitride has been found to have better surface adsorption capabilities than bulk hexagonal boron nitride. According to theoretical and experimental studies, atomically thin boron nitride as an adsorbent experiences conformational changes upon surface adsorption of molecules, increasing adsorption energy and efficiency. The synergic effect of the atomic thickness, high flexibility, stronger surface adsorption capability, electrical insulation, impermeability, high thermal and chemical stability of BN nanosheets can increase the Raman sensitivity by up to two orders, and in the meantime attain long-term stability and reusability not readily achievable by other materials. Dielectric properties Atomically thin hexagonal boron nitride is an excellent dielectric substrate for graphene, molybdenum disulfide (), and many other 2D material-based electronic and photonic devices. 
As shown by electric force microscopy (EFM) studies, the electric field screening in atomically thin boron nitride shows a weak dependence on thickness, which is in line with the smooth decay of electric field inside few-layer boron nitride revealed by the first-principles calculations. Raman characteristics Raman spectroscopy has been a useful tool to study a variety of 2D materials, and the Raman signature of high-quality atomically thin boron nitride was first reported by Gorbachev et al. in 2011. and Li et al. However, the two reported Raman results of monolayer boron nitride did not agree with each other. Cai et al., therefore, conducted systematic experimental and theoretical studies to reveal the intrinsic Raman spectrum of atomically thin boron nitride. It reveals that atomically thin boron nitride without interaction with a substrate has a G band frequency similar to that of bulk hexagonal boron nitride, but strain induced by the substrate can cause Raman shifts. Nevertheless, the Raman intensity of G band of atomically thin boron nitride can be used to estimate layer thickness and sample quality. Boron nitride nanomesh Boron nitride nanomesh is a nanostructured two-dimensional material. It consists of a single BN layer, which forms by self-assembly a highly regular mesh after high-temperature exposure of a clean rhodium or ruthenium surface to borazine under ultra-high vacuum. The nanomesh looks like an assembly of hexagonal pores. The distance between two pore centers is 3.2 nm and the pore diameter is ~2 nm. Other terms for this material are boronitrene or white graphene. The boron nitride nanomesh is air-stable and compatible with some liquids. up to temperatures of 800 °C. Boron nitride nanotubes Boron nitride tubules were first made in 1989 by Shore and Dolan This work was patented in 1989 and published in 1989 thesis (Dolan) and then 1993 Science. The 1989 work was also the first preparation of amorphous BN by B-trichloroborazine and cesium metal. Boron nitride nanotubes were predicted in 1994 and experimentally discovered in 1995. They can be imagined as a rolled up sheet of h-boron nitride. Structurally, it is a close analog of the carbon nanotube, namely a long cylinder with diameter of several to hundred nanometers and length of many micrometers, except carbon atoms are alternately substituted by nitrogen and boron atoms. However, the properties of BN nanotubes are very different: whereas carbon nanotubes can be metallic or semiconducting depending on the rolling direction and radius, a BN nanotube is an electrical insulator with a bandgap of ~5.5 eV, basically independent of tube chirality and morphology. In addition, a layered BN structure is much more thermally and chemically stable than a graphitic carbon structure. Boron nitride aerogel Boron nitride aerogel is an aerogel made of highly porous BN. It typically consists of a mixture of deformed BN nanotubes and nanosheets. It can have a density as low as 0.6 mg/cm3 and a specific surface area as high as 1050 m2/g, and therefore has potential applications as an absorbent, catalyst support and gas storage medium. BN aerogels are highly hydrophobic and can absorb up to 160 times their weight in oil. They are resistant to oxidation in air at temperatures up to 1200 °C, and hence can be reused after the absorbed oil is burned out by flame. BN aerogels can be prepared by template-assisted chemical vapor deposition using borazine as the feed gas. 
Composites containing BN Addition of boron nitride to silicon nitride ceramics improves the thermal shock resistance of the resulting material. For the same purpose, BN is added also to silicon nitride-alumina and titanium nitride-alumina ceramics. Other materials being reinforced with BN include alumina and zirconia, borosilicate glasses, glass ceramics, enamels, and composite ceramics with titanium boride-boron nitride, titanium boride-aluminium nitride-boron nitride, and silicon carbide-boron nitride composition. Zirconia Stabilized Boron Nitride (ZSBN) is produced by adding zirconia to BN, enhancing its thermal shock resistance and mechanical strength through a sintering process. It offers better performance characteristics including Superior corrosion and erosion resistance over a wide temperature range. Its unique combination of thermal conductivity, lubricity, mechanical strength, and stability makes it suitable for various applications including cutting tools and wear-resistant coatings, thermal and electrical insulation, aerospace and defense, and high-temperature components. Pyrolytic boron nitride (PBN) Pyrolytic boron nitride (PBN), also known as Chemical vapour-deposited Boron Nitride(CVD-BN), is a high-purity ceramic material characterized by exceptional chemical resistance and mechanical strength at high temperatures. Pyrolytic boron nitride is typically prepared through the thermal decomposition of boron trichloride and ammonia vapors on graphite substrates at 1900°C. Pyrolytic boron nitride (PBN) generally has a hexagonal structure similar to hexagonal boron nitride (hBN), though it can exhibit stacking faults or deviations from the ideal lattice. Pyrolytic boron nitride (PBN) shows some remarkable attributes, including exceptional chemical inertness, high dielectric strength, excellent thermal shock resistance, non-wettability, non-toxicity, oxidation resistance, and minimal outgassing. Due to a highly ordered planar texture similar to pyrolytic graphite (PG), it exhibits anisotropic properties such as lower dielectric constant vertical to the crystal plane and higher bending strength along the crystal plane. PBN material has been widely manufactured as crucibles of compound semiconductor crystals, output windows and dielectric rods of traveling-wave tubes, high-temperature jigs and insulator. Health issues Boron nitride (along with , NbN, and BNC) is generally considered to be non-toxic and does not exhibit chemical activity in biological systems. Due to its excellent safety profile and lubricious properties, boron nitride finds widespread use in various applications, including cosmetics and food processing equipment.
Physical sciences
Ceramic compounds
Chemistry
3378
https://en.wikipedia.org/wiki/Beryllium
Beryllium
Beryllium is a chemical element; it has symbol Be and atomic number 4. It is a steel-gray, hard, strong, lightweight and brittle alkaline earth metal. It is a divalent element that occurs naturally only in combination with other elements to form minerals. Gemstones high in beryllium include beryl (aquamarine, emerald, red beryl) and chrysoberyl. It is a relatively rare element in the universe, usually occurring as a product of the spallation of larger atomic nuclei that have collided with cosmic rays. Within the cores of stars, beryllium is depleted as it is fused into heavier elements. Beryllium constitutes about 0.0004 percent by mass of Earth's crust. The world's annual beryllium production of 220 tons is obtained mostly by extraction from the mineral beryl, a difficult process because beryllium bonds strongly to oxygen. In structural applications, the combination of high flexural rigidity, thermal stability, thermal conductivity and low density (1.85 times that of water) makes beryllium a desirable aerospace material for aircraft components, missiles, spacecraft, and satellites. Because of its low density and atomic mass, beryllium is relatively transparent to X-rays and other forms of ionizing radiation; therefore, it is the most common window material for X-ray equipment and components of particle detectors. When added as an alloying element to aluminium, copper (notably the alloy beryllium copper), iron, or nickel, beryllium improves many physical properties. For example, tools and components made of beryllium copper alloys are strong and hard and do not create sparks when they strike a steel surface. In air, the surface of beryllium oxidizes readily at room temperature to form a passivation layer 1–10 nm thick that protects it from further oxidation and corrosion. The metal oxidizes in bulk (beyond the passivation layer) when heated above about 500 °C, and burns brilliantly when heated to about 2500 °C. The commercial use of beryllium requires the use of appropriate dust control equipment and industrial controls at all times because of the toxicity of inhaled beryllium-containing dusts, which can cause a chronic life-threatening allergic disease, berylliosis, in some people. Berylliosis is typically manifested by chronic pulmonary fibrosis and, in severe cases, right-sided heart failure and death. Characteristics Physical properties Beryllium is a steel-gray, hard metal that is brittle at room temperature and has a close-packed hexagonal crystal structure. It has exceptional stiffness (Young's modulus 287 GPa) and a melting point of 1287 °C. The modulus of elasticity of beryllium is approximately 35% greater than that of steel. The combination of this modulus and a relatively low density results in an unusually fast sound conduction speed in beryllium, about 12.9 km/s at ambient conditions. Other significant properties are high specific heat and thermal conductivity, which make beryllium the metal with the best heat dissipation characteristics per unit weight. In combination with the relatively low coefficient of linear thermal expansion (11.4 × 10^−6 K^−1), these characteristics result in a unique stability under conditions of thermal loading. Nuclear properties Naturally occurring beryllium, save for slight contamination by the cosmogenic radioisotopes, is isotopically pure beryllium-9, which has a nuclear spin of 3/2. Beryllium has a large scattering cross section for high-energy neutrons, about 6 barns for energies above approximately 10 keV. 
Therefore, it works as a neutron reflector and neutron moderator, effectively slowing the neutrons to the thermal energy range of below 0.03 eV, where the total cross section is at least an order of magnitude lower; the exact value strongly depends on the purity and size of the crystallites in the material. The single primordial beryllium isotope 9Be also undergoes a (n,2n) neutron reaction with neutron energies over about 1.9 MeV, to produce 8Be, which almost immediately breaks into two alpha particles. Thus, for high-energy neutrons, beryllium is a neutron multiplier, releasing more neutrons than it absorbs. This nuclear reaction is: 9Be + n → 2 4He + 2 n. Neutrons are liberated when beryllium nuclei are struck by energetic alpha particles, producing the nuclear reaction 9Be + 4He → 12C + n, where 4He is an alpha particle and 12C is a carbon-12 nucleus. Beryllium also releases neutrons under bombardment by gamma rays. Thus, natural beryllium bombarded either by alphas or gammas from a suitable radioisotope is a key component of most radioisotope-powered nuclear reaction neutron sources for the laboratory production of free neutrons. Small amounts of tritium are liberated when 9Be nuclei absorb low-energy neutrons in the three-step nuclear reaction 9Be + n → 4He + 6He, 6He → 6Li + β−, 6Li + n → 4He + 3H. 6He has a half-life of only 0.8 seconds, β− is an electron, and 6Li has a high neutron absorption cross section. Tritium is a radioisotope of concern in nuclear reactor waste streams. Optical properties As a metal, beryllium is transparent or translucent to most wavelengths of X-rays and gamma rays, making it useful for the output windows of X-ray tubes and other such apparatus. Isotopes and nucleosynthesis Both stable and unstable isotopes of beryllium are created in stars, but the radioisotopes do not last long. It is believed that most of the stable beryllium in the universe was originally created in the interstellar medium when cosmic rays induced fission in heavier elements found in interstellar gas and dust. Primordial beryllium contains only one stable isotope, 9Be, and therefore beryllium is, uniquely among all stable elements with an even atomic number, a monoisotopic and mononuclidic element. Radioactive cosmogenic 10Be is produced in the atmosphere of the Earth by the cosmic ray spallation of oxygen. 10Be accumulates at the soil surface, where its relatively long half-life (1.36 million years) permits a long residence time before decaying to boron-10. Thus, 10Be and its daughter products are used to examine natural soil erosion, soil formation and the development of lateritic soils, and as a proxy for measurement of the variations in solar activity and the age of ice cores. The production of 10Be is inversely proportional to solar activity, because increased solar wind during periods of high solar activity decreases the flux of galactic cosmic rays that reach the Earth. Nuclear explosions also form 10Be by the reaction of fast neutrons with 13C in the carbon dioxide in air. This is one of the indicators of past activity at nuclear weapon test sites. The isotope 7Be (half-life 53 days) is also cosmogenic, and shows an atmospheric abundance linked to sunspots, much like 10Be. 8Be has a very short half-life of about 8 × 10^−17 s, which contributes to its significant cosmological role, as elements heavier than beryllium could not have been produced by nuclear fusion in the Big Bang. 
This is due to the lack of sufficient time during the Big Bang's nucleosynthesis phase to produce carbon by the fusion of 4He nuclei and the very low concentrations of available beryllium-8. British astronomer Sir Fred Hoyle first showed that the energy levels of 8Be and 12C allow carbon production by the so-called triple-alpha process in helium-fueled stars where more nucleosynthesis time is available. This process allows carbon to be produced in stars, but not in the Big Bang. Star-created carbon (the basis of carbon-based life) is thus a component in the elements in the gas and dust ejected by AGB stars and supernovae (see also Big Bang nucleosynthesis), as well as the creation of all other elements with atomic numbers larger than that of carbon. The 2s electrons of beryllium may contribute to chemical bonding. Therefore, when 7Be decays by L-electron capture, it does so by taking electrons from its atomic orbitals that may be participating in bonding. This makes its decay rate dependent to a measurable degree upon its chemical surroundings – a rare occurrence in nuclear decay. The shortest-lived known isotope of beryllium is 16Be, which decays through neutron emission with a half-life of . The exotic isotopes 11Be and 14Be are known to exhibit a nuclear halo. This phenomenon can be understood as the nuclei of 11Be and 14Be have, respectively, 1 and 4 neutrons orbiting substantially outside the classical Fermi 'waterdrop' model of the nucleus. Occurrence The Sun has a concentration of 0.1 parts per billion (ppb) of beryllium. Beryllium has a concentration of 2 to 6 parts per million (ppm) in the Earth's crust and is the 47th most abundant element. It is most concentrated in the soils at 6 ppm. Trace amounts of 9Be are found in the Earth's atmosphere. The concentration of beryllium in sea water is 0.2–0.6 parts per trillion. In stream water, however, beryllium is more abundant with a concentration of 0.1 ppb. Beryllium is found in over 100 minerals, but most are uncommon to rare. The more common beryllium containing minerals include: bertrandite (Be4Si2O7(OH)2), beryl (Al2Be3Si6O18), chrysoberyl (Al2BeO4) and phenakite (Be2SiO4). Precious forms of beryl are aquamarine, red beryl and emerald. The green color in gem-quality forms of beryl comes from varying amounts of chromium (about 2% for emerald). The two main ores of beryllium, beryl and bertrandite, are found in Argentina, Brazil, India, Madagascar, Russia and the United States. Total world reserves of beryllium ore are greater than 400,000 tonnes. Production The extraction of beryllium from its compounds is a difficult process due to its high affinity for oxygen at elevated temperatures, and its ability to reduce water when its oxide film is removed. Currently the United States, China and Kazakhstan are the only three countries involved in the industrial-scale extraction of beryllium. Kazakhstan produces beryllium from a concentrate stockpiled before the breakup of the Soviet Union around 1991. This resource had become nearly depleted by mid-2010s. Production of beryllium in Russia was halted in 1997, and is planned to be resumed in the 2020s. Beryllium is most commonly extracted from the mineral beryl, which is either sintered using an extraction agent or melted into a soluble mixture. The sintering process involves mixing beryl with sodium fluorosilicate and soda at to form sodium fluoroberyllate, aluminium oxide and silicon dioxide. 
Beryllium hydroxide is precipitated from a solution of sodium fluoroberyllate and sodium hydroxide in water. The extraction of beryllium using the melt method involves grinding beryl into a powder and heating it until it melts. The melt is quickly cooled with water and then reheated in concentrated sulfuric acid, mostly yielding beryllium sulfate and aluminium sulfate. Aqueous ammonia is then used to remove the aluminium and sulfur, leaving beryllium hydroxide. Beryllium hydroxide created using either the sinter or melt method is then converted into beryllium fluoride or beryllium chloride. To form the fluoride, aqueous ammonium hydrogen fluoride is added to beryllium hydroxide to yield a precipitate of ammonium tetrafluoroberyllate, which is heated to form beryllium fluoride. Heating the fluoride with magnesium forms finely divided beryllium, and additional heating creates the compact metal. Heating beryllium hydroxide forms beryllium oxide, which becomes beryllium chloride when combined with carbon and chlorine. Electrolysis of molten beryllium chloride is then used to obtain the metal. Chemical properties Beryllium has a high electronegativity compared to other group 2 elements; thus C-Be bonds are less highly polarized than other C-M(II) bonds, although the attached carbon still carries a partial negative charge. A beryllium atom has the electronic configuration [He] 2s2. The predominant oxidation state of beryllium is +2; the beryllium atom has lost both of its valence electrons. Complexes of beryllium in lower oxidation states are exceedingly rare. For example, bis(carbene) compounds proposed to contain beryllium in the 0 and +1 oxidation states have been reported, although these claims have proved controversial. A stable complex with a Be-Be bond, which formally features beryllium in the +1 oxidation state, has been described. Beryllium's chemical behavior is largely a result of its small atomic and ionic radii. It thus has very high ionization potentials and strong polarization while bonded to other atoms, which is why all of its compounds are covalent. Its chemistry has similarities to that of aluminium, an example of a diagonal relationship. At room temperature, the surface of beryllium forms a 1–10 nm-thick oxide passivation layer that prevents further reactions with air, except for gradual thickening of the oxide up to about 25 nm. When heated above about 500 °C, oxidation into the bulk metal progresses along grain boundaries. Once the metal is ignited in air by heating above the oxide melting point around 2500 °C, beryllium burns brilliantly, forming a mixture of beryllium oxide and beryllium nitride. Beryllium dissolves readily in non-oxidizing acids, such as HCl and dilute H2SO4, but not in nitric acid or water, as this forms the oxide. This behavior is similar to that of aluminium. Beryllium also dissolves in alkali solutions. Binary compounds of beryllium(II) are polymeric in the solid state. BeF2 has a silica-like structure with corner-shared BeF4 tetrahedra. BeCl2 and BeBr2 have chain structures with edge-shared tetrahedra. Beryllium oxide, BeO, is a white refractory solid which has a wurtzite crystal structure and a thermal conductivity as high as that of some metals. BeO is amphoteric. Beryllium sulfide, selenide and telluride are known, all having the zincblende structure. Beryllium nitride, Be3N2, is a high-melting-point compound which is readily hydrolyzed. Beryllium azide, BeN6, is known, and beryllium phosphide, Be3P2, has a structure similar to that of Be3N2. 
A number of beryllium borides are known, such as Be5B, Be4B, Be2B, BeB2, BeB6 and BeB12. Beryllium carbide, Be2C, is a refractory brick-red compound that reacts with water to give methane. No beryllium silicide has been identified. The halides BeX2 (X = F, Cl, Br, and I) have a linear monomeric molecular structure in the gas phase. Complexes of the halides are formed with one or more ligands donating a total of two pairs of electrons. Such compounds obey the octet rule. Other 4-coordinate complexes, such as the aqua-ion [Be(H2O)4]2+, also obey the octet rule. Aqueous solutions Solutions of beryllium salts, such as beryllium sulfate and beryllium nitrate, are acidic because of hydrolysis of the [Be(H2O)4]2+ ion. The concentration of the first hydrolysis product, [Be(H2O)3(OH)]+, is less than 1% of the beryllium concentration. The most stable hydrolysis product is the trimeric ion [Be3(OH)3(H2O)6]3+. Beryllium hydroxide, Be(OH)2, is insoluble in water at pH 5 or more. Consequently, beryllium compounds are generally insoluble at biological pH. Because of this, inhalation of beryllium metal dust leads to the development of the fatal condition of berylliosis. Be(OH)2 dissolves in strongly alkaline solutions. Beryllium(II) forms few complexes with monodentate ligands because the water molecules in the aquo-ion [Be(H2O)4]2+ are bound very strongly to the beryllium ion. Notable exceptions are the series of water-soluble complexes with the fluoride ion:
[Be(H2O)4]2+ + n F− ⇌ [Be(H2O)4−nFn]2−n + n H2O
Beryllium(II) forms many complexes with bidentate ligands containing oxygen-donor atoms. The species [Be3O(H2PO4)6]2− is notable for having a 3-coordinate oxide ion at its center. Basic beryllium acetate, Be4O(OAc)6, has an oxide ion surrounded by a tetrahedron of beryllium atoms. With organic ligands, such as the malonate ion, the acid deprotonates when forming the complex. The donor atoms are two oxygens.
H2A + [Be(H2O)4]2+ ⇌ [BeA(H2O)2] + 2 H+ + 2 H2O
H2A + [BeA(H2O)2] ⇌ [BeA2]2− + 2 H+ + 2 H2O
The formation of a complex is in competition with the metal ion-hydrolysis reaction, and mixed complexes with both the anion and the hydroxide ion are also formed. For example, derivatives of the cyclic trimer are known, with a bidentate ligand replacing one or more pairs of water molecules. Aliphatic hydroxycarboxylic acids such as glycolic acid form rather weak monodentate complexes in solution, in which the hydroxyl group remains intact. In the solid state, the hydroxyl group may deprotonate: a hexamer, Na4[Be6(OCH2(O)O)6], was isolated long ago. Aromatic hydroxy ligands (i.e. phenols) form relatively strong complexes. For example, log K1 and log K2 values of 12.2 and 9.3 have been reported for complexes with tiron. Beryllium generally has a rather poor affinity for ammine ligands. Ligands such as EDTA behave as dicarboxylic acids. There are many early reports of complexes with amino acids, but unfortunately they are not reliable as the concomitant hydrolysis reactions were not understood at the time of publication. Values for log β of ca. 6 to 7 have been reported. The degree of formation is small because of competition with hydrolysis reactions. Organic chemistry Organoberyllium chemistry is limited to academic research due to the cost and toxicity of beryllium, beryllium derivatives and reagents required for the introduction of beryllium, such as beryllium chloride. Organometallic beryllium compounds are known to be highly reactive.
Examples of known organoberyllium compounds are dineopentylberyllium, beryllocene (Cp2Be), diallylberyllium (by exchange reaction of diethyl beryllium with triallyl boron), bis(1,3-trimethylsilylallyl)beryllium, Be(mes)2, and the beryllium(I) complex diberyllocene. Ligands can also be aryls and alkynyls. History The mineral beryl, which contains beryllium, has been used at least since the Ptolemaic dynasty of Egypt. In the first century CE, Roman naturalist Pliny the Elder mentioned in his encyclopedia Natural History that beryl and emerald ("smaragdus") were similar. The Papyrus Graecus Holmiensis, written in the third or fourth century CE, contains notes on how to prepare artificial emerald and beryl. Early analyses of emeralds and beryls by Martin Heinrich Klaproth, Torbern Olof Bergman, Franz Karl Achard, and others always yielded similar elements, leading to the mistaken conclusion that both substances are aluminium silicates. Mineralogist René Just Haüy discovered that both crystals are geometrically identical, and he asked chemist Louis-Nicolas Vauquelin for a chemical analysis. In a 1798 paper read before the Institut de France, Vauquelin reported that he found a new "earth" by dissolving aluminium hydroxide from emerald and beryl in an additional alkali. The editors of the journal Annales de chimie et de physique named the new earth "glucine" for the sweet taste of some of its compounds. Klaproth preferred the name "beryllina" because yttria also formed sweet salts. The name beryllium was first used by Friedrich Wöhler in 1828. Friedrich Wöhler and Antoine Bussy independently isolated beryllium in 1828 by the chemical reaction of metallic potassium with beryllium chloride, as follows:
BeCl2 + 2 K → 2 KCl + Be
Using an alcohol lamp, Wöhler heated alternating layers of beryllium chloride and potassium in a wired-shut platinum crucible. The above reaction immediately took place and caused the crucible to become white hot. Upon cooling and washing the resulting gray-black powder, he saw that it was made of fine particles with a dark metallic luster. The highly reactive potassium had been produced by the electrolysis of its compounds, a process discovered 21 years earlier. The chemical method using potassium yielded only small grains of beryllium from which no ingot of metal could be cast or hammered. The direct electrolysis of a molten mixture of beryllium fluoride and sodium fluoride by Paul Lebeau in 1898 resulted in the first pure (99.5 to 99.8%) samples of beryllium. However, industrial production started only after the First World War. The original industrial involvement included subsidiaries and scientists related to the Union Carbide and Carbon Corporation in Cleveland, Ohio, and Siemens & Halske AG in Berlin. In the US, the process was led by Hugh S. Cooper, director of The Kemet Laboratories Company. In Germany, the first commercially successful process for producing beryllium was developed in 1921 by Alfred Stock and Hans Goldschmidt. A sample of beryllium was bombarded with alpha rays from the decay of radium in a 1932 experiment by James Chadwick that uncovered the existence of the neutron. This same method is used in one class of radioisotope-based laboratory neutron sources that produce 30 neutrons for every million α particles. Beryllium production saw a rapid increase during World War II due to the rising demand for hard beryllium-copper alloys and phosphors for fluorescent lights.
Most early fluorescent lamps used zinc orthosilicate with varying content of beryllium to emit greenish light. Small additions of magnesium tungstate improved the blue part of the spectrum to yield an acceptable white light. Halophosphate-based phosphors replaced beryllium-based phosphors after beryllium was found to be toxic. Electrolysis of a mixture of beryllium fluoride and sodium fluoride was used to isolate beryllium during the 19th century. The metal's high melting point makes this process more energy-consuming than corresponding processes used for the alkali metals. Early in the 20th century, the production of beryllium by the thermal decomposition of beryllium iodide was investigated following the success of a similar process for the production of zirconium, but this process proved to be uneconomical for volume production. Pure beryllium metal did not become readily available until 1957, even though it had been used as an alloying metal to harden and toughen copper much earlier. Beryllium could be produced by reducing beryllium compounds such as beryllium chloride with metallic potassium or sodium. Currently, most beryllium is produced by reducing beryllium fluoride with magnesium. The price on the American market for vacuum-cast beryllium ingots was about $338 per pound ($745 per kilogram) in 2001. Between 1998 and 2008, the world's production of beryllium had decreased from 343 to about 200 tonnes. It then increased to 230 metric tons by 2018, of which 170 tonnes came from the United States. Etymology Beryllium was named for the semiprecious mineral beryl, from which it was first isolated. The name beryllium was introduced by Wöhler in 1828. Although Humphry Davy failed to isolate it, he proposed the name glucium for the new metal, derived from the name glucina for the earth it was found in; altered forms of this name, glucinium or glucinum (symbol Gl) continued to be used into the 20th century. Both beryllium and glucinum were used concurrently until 1949, when the IUPAC adopted beryllium as the standard name of the element. Applications Radiation windows Because of its low atomic number and very low absorption for X-rays, the oldest and still one of the most important applications of beryllium is in radiation windows for X-ray tubes. Extreme demands are placed on purity and cleanliness of beryllium to avoid artifacts in the X-ray images. Thin beryllium foils are used as radiation windows for X-ray detectors, and their extremely low absorption minimizes the heating effects caused by high-intensity, low energy X-rays typical of synchrotron radiation. Vacuum-tight windows and beam-tubes for radiation experiments on synchrotrons are manufactured exclusively from beryllium. In scientific setups for various X-ray emission studies (e.g., energy-dispersive X-ray spectroscopy) the sample holder is usually made of beryllium because its emitted X-rays have much lower energies (≈100 eV) than X-rays from most studied materials. Low atomic number also makes beryllium relatively transparent to energetic particles. Therefore, it is used to build the beam pipe around the collision region in particle physics setups, such as all four main detector experiments at the Large Hadron Collider (ALICE, ATLAS, CMS, LHCb), the Tevatron and at SLAC. 
The low density of beryllium allows collision products to reach the surrounding detectors without significant interaction, its stiffness allows a high vacuum to be maintained within the pipe to minimize interaction with gases, its thermal stability allows it to function correctly at temperatures of only a few degrees above absolute zero, and its diamagnetic nature keeps it from interfering with the complex multipole magnet systems used to steer and focus the particle beams. Mechanical applications Because of its stiffness, light weight and dimensional stability over a wide temperature range, beryllium metal is used for lightweight structural components in the defense and aerospace industries in high-speed aircraft, guided missiles, spacecraft, and satellites, including the James Webb Space Telescope. Several liquid-fuel rockets have used rocket nozzles made of pure beryllium. Beryllium powder was itself studied as a rocket fuel, but this use has never materialized. A small number of extreme high-end bicycle frames have been built with beryllium. From 1998 to 2000, the McLaren Formula One team used Mercedes-Benz engines with beryllium-aluminium alloy pistons. The use of beryllium engine components was banned following a protest by Scuderia Ferrari. Mixing about 2.0% beryllium into copper forms an alloy called beryllium copper that is six times stronger than copper alone. Beryllium alloys are used in many applications because of their combination of elasticity, high electrical conductivity and thermal conductivity, high strength and hardness, nonmagnetic properties, as well as good corrosion and fatigue resistance. These applications include non-sparking tools that are used near flammable gases (beryllium nickel), springs, membranes (beryllium nickel and beryllium iron) used in surgical instruments, and high temperature devices. As little as 50 parts per million of beryllium alloyed with liquid magnesium leads to a significant increase in oxidation resistance and decrease in flammability. The high elastic stiffness of beryllium has led to its extensive use in precision instrumentation, e.g. in inertial guidance systems and in the support mechanisms for optical systems. Beryllium-copper alloys were also applied as a hardening agent in "Jason pistols", which were used to strip the paint from the hulls of ships. In loudspeakers, the speed at which sound travels through the diaphragm material directly affects its resonant frequency, and hence the range of high-frequency sounds that can be reproduced cleanly. Beryllium stands out due to its exceptionally high speed of sound propagation compared to other metals. This property allows beryllium diaphragms to achieve higher resonant frequencies, making the metal an ideal material for diaphragms in high-quality loudspeakers. Beryllium was used for cantilevers in high-performance phonograph cartridge styli, where its extreme stiffness and low density allowed for tracking weights to be reduced to 1 gram while still tracking high frequency passages with minimal distortion. An earlier major application of beryllium was in brakes for military airplanes because of its hardness, high melting point, and exceptional ability to dissipate heat. Environmental considerations have led to substitution by other materials. To reduce costs, beryllium can be alloyed with significant amounts of aluminium, resulting in the AlBeMet alloy (a trade name). This blend is cheaper than pure beryllium, while still retaining many desirable properties.
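The dependence of a diaphragm's usable bandwidth on the speed of sound in its material, mentioned above for loudspeakers, can be made concrete with the thin-rod approximation for the speed of sound. The worked figures below are a rough sketch using commonly quoted handbook values (Young's modulus of about 287 GPa and density of about 1850 kg/m3 for beryllium; about 69 GPa and 2700 kg/m3 for aluminium), not the specification of any particular driver:

```latex
c = \sqrt{\frac{E}{\rho}},\qquad
c_{\mathrm{Be}} \approx \sqrt{\frac{287\times10^{9}\ \mathrm{Pa}}{1850\ \mathrm{kg\,m^{-3}}}} \approx 1.2\times10^{4}\ \mathrm{m\,s^{-1}},\qquad
c_{\mathrm{Al}} \approx \sqrt{\frac{69\times10^{9}\ \mathrm{Pa}}{2700\ \mathrm{kg\,m^{-3}}}} \approx 5.1\times10^{3}\ \mathrm{m\,s^{-1}}
```

Because the break-up resonances of a dome of fixed geometry scale roughly with this speed, a beryllium diaphragm of the same shape pushes its first break-up mode to roughly 2.5 times the frequency of an otherwise identical aluminium one, which is the advantage described above.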
Mirrors Beryllium mirrors are of particular interest. Large-area mirrors, frequently with a honeycomb support structure, are used, for example, in meteorological satellites where low weight and long-term dimensional stability are critical. Smaller beryllium mirrors are used in optical guidance systems and in fire-control systems, e.g. in the German-made Leopard 1 and Leopard 2 main battle tanks. In these systems, very rapid movement of the mirror is required, which again dictates low mass and high rigidity. Usually the beryllium mirror is coated with hard electroless nickel plating which can be more easily polished to a finer optical finish than beryllium. In some applications, the beryllium blank is polished without any coating. This is particularly applicable to cryogenic operation where thermal expansion mismatch can cause the coating to buckle. The James Webb Space Telescope has 18 hexagonal beryllium sections for its mirrors, each plated with a thin layer of gold. Because JWST will face a temperature of 33 K, the mirror is made of gold-plated beryllium, which is capable of handling extreme cold better than glass. Beryllium contracts and deforms less than glass and remains more uniform in such temperatures. For the same reason, the optics of the Spitzer Space Telescope are entirely built of beryllium metal. Magnetic applications Beryllium is non-magnetic. Therefore, tools fabricated out of beryllium-based materials are used by naval or military explosive ordnance disposal teams for work on or near naval mines, since these mines commonly have magnetic fuzes. They are also found in maintenance and construction materials near magnetic resonance imaging (MRI) machines because of the high magnetic fields generated. In the fields of radio communications and powerful (usually military) radars, hand tools made of beryllium are used to tune the highly magnetic klystrons, magnetrons, traveling wave tubes, etc., that are used for generating high levels of microwave power in the transmitters. Nuclear applications Thin plates or foils of beryllium are sometimes used in nuclear weapon designs as the very outer layer of the plutonium pits in the primary stages of thermonuclear bombs, placed to surround the fissile material. These layers of beryllium are good "pushers" for the implosion of the plutonium-239, and they are good neutron reflectors, just as in beryllium-moderated nuclear reactors. Beryllium is commonly used in some neutron sources in laboratory devices in which relatively few neutrons are needed (rather than having to use a nuclear reactor or a particle accelerator-powered neutron generator). For this purpose, a target of beryllium-9 is bombarded with energetic alpha particles from a radioisotope such as polonium-210, radium-226, plutonium-238, or americium-241. In the nuclear reaction that occurs, a beryllium nucleus is transmuted into carbon-12, and one free neutron is emitted, traveling in about the same direction as the alpha particle was heading. Such alpha decay-driven beryllium neutron sources, named "urchin" neutron initiators, were used in some early atomic bombs. Neutron sources in which beryllium is bombarded with gamma rays from a gamma decay radioisotope, are also used to produce laboratory neutrons. Beryllium is used in fuel fabrication for CANDU reactors. The fuel elements have small appendages that are resistance brazed to the fuel cladding using an induction brazing process with Be as the braze filler material. 
Bearing pads are brazed in place to prevent contact between the fuel bundle and the pressure tube containing it, and inter-element spacer pads are brazed on to prevent element to element contact. Beryllium is used at the Joint European Torus nuclear-fusion research laboratory, and it will be used in the more advanced ITER to condition the components which face the plasma. Beryllium has been proposed as a cladding material for nuclear fuel rods, because of its good combination of mechanical, chemical, and nuclear properties. Beryllium fluoride is one of the constituent salts of the eutectic salt mixture FLiBe, which is used as a solvent, moderator and coolant in many hypothetical molten salt reactor designs, including the liquid fluoride thorium reactor (LFTR). Acoustics The low weight and high rigidity of beryllium make it useful as a material for high-frequency speaker drivers. Because beryllium is expensive (many times more than titanium), hard to shape due to its brittleness, and toxic if mishandled, beryllium tweeters are limited to high-end home, pro audio, and public address applications. Some high-fidelity products have been fraudulently claimed to be made of the material. Some high-end phonograph cartridges used beryllium cantilevers to improve tracking by reducing mass. Electronic Beryllium is a p-type dopant in III-V compound semiconductors. It is widely used in materials such as GaAs, AlGaAs, InGaAs and InAlAs grown by molecular beam epitaxy (MBE). Cross-rolled beryllium sheet is an excellent structural support for printed circuit boards in surface-mount technology. In critical electronic applications, beryllium is both a structural support and heat sink. The application also requires a coefficient of thermal expansion that is well matched to the alumina and polyimide-glass substrates. The beryllium-beryllium oxide composite "E-Materials" have been specially designed for these electronic applications and have the additional advantage that the thermal expansion coefficient can be tailored to match diverse substrate materials. Beryllium oxide is useful for many applications that require the combined properties of an electrical insulator and an excellent heat conductor, with high strength and hardness and a very high melting point. Beryllium oxide is frequently used as an insulator base plate in high-power transistors in radio frequency transmitters for telecommunications. Beryllium oxide is being studied for use in increasing the thermal conductivity of uranium dioxide nuclear fuel pellets. Beryllium compounds were used in fluorescent lighting tubes, but this use was discontinued because of the disease berylliosis which developed in the workers who were making the tubes. Medical applications Beryllium is a component of several dental alloys. Beryllium is used in X-ray windows because it is transparent to X-rays, allowing for clearer and more efficient imaging. In medical imaging equipment, such as CT scanners and mammography machines, beryllium's strength and light weight enhance durability and performance. Beryllium is used in analytical equipment for blood, HIV, and other diseases. Beryllium alloys are used in surgical instruments, optical mirrors, and laser systems for medical treatments. Toxicity and safety Biological effects Approximately 35 micrograms of beryllium is found in the average human body, an amount not considered harmful. Beryllium is chemically similar to magnesium and therefore can displace it from enzymes, which causes them to malfunction. 
Because Be2+ is a highly charged and small ion, it can easily get into many tissues and cells, where it specifically targets cell nuclei, inhibiting many enzymes, including those used for synthesizing DNA. Its toxicity is exacerbated by the fact that the body has no means to control beryllium levels, and once inside the body, beryllium cannot be removed. Inhalation Chronic beryllium disease (CBD), or berylliosis, is a pulmonary and systemic granulomatous disease caused by inhalation of dust or fumes contaminated with beryllium; either large amounts over a short time or small amounts over a long time can lead to this ailment. Symptoms of the disease can take up to five years to develop; about a third of patients with it die and the survivors are left disabled. The International Agency for Research on Cancer (IARC) lists beryllium and beryllium compounds as Category 1 carcinogens. Occupational exposure In the US, the Occupational Safety and Health Administration (OSHA) has designated a permissible exposure limit (PEL) for beryllium and beryllium compounds of 0.2 μg/m3 as an 8-hour time-weighted average (TWA) and 2.0 μg/m3 as a short-term exposure limit over a sampling period of 15 minutes. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) upper-bound threshold of 0.5 μg/m3. The IDLH (immediately dangerous to life and health) value is 4 mg/m3. The toxicity of beryllium is on par with other toxic metalloids/metals, such as arsenic and mercury. Exposure to beryllium in the workplace can lead to a sensitized immune response and, over time, development of berylliosis. NIOSH in the United States researches these effects in collaboration with a major manufacturer of beryllium products. NIOSH also conducts genetic research on sensitization and CBD, independently of this collaboration. Acute beryllium disease in the form of chemical pneumonitis was first reported in Europe in 1933 and in the United States in 1943. A survey found that about 5% of workers in plants manufacturing fluorescent lamps in 1949 in the United States had beryllium-related lung diseases. Chronic berylliosis resembles sarcoidosis in many respects, and the differential diagnosis is often difficult. It killed some early workers in nuclear weapons design, such as Herbert L. Anderson. Beryllium may be found in coal slag. When the slag is formulated into an abrasive agent for blasting paint and rust from hard surfaces, the beryllium can become airborne and become a source of exposure. Although the use of beryllium compounds in fluorescent lighting tubes was discontinued in 1949, potential for exposure to beryllium exists in the nuclear and aerospace industries, in the refining of beryllium metal and the melting of beryllium-containing alloys, in the manufacturing of electronic devices, and in the handling of other beryllium-containing material. Detection Early researchers undertook the highly hazardous practice of identifying beryllium and its various compounds from its sweet taste. A modern test for beryllium in air and on surfaces has been developed and published as an international voluntary consensus standard, ASTM D7202. The procedure uses dilute ammonium bifluoride for dissolution and fluorescence detection with beryllium bound to sulfonated hydroxybenzoquinoline, allowing detection of beryllium at concentrations up to 100 times lower than the recommended workplace limit. Fluorescence increases with increasing beryllium concentration.
The new procedure has been successfully tested on a variety of surfaces and is effective for the dissolution and detection of refractory beryllium oxide and siliceous beryllium in minute concentrations (ASTM D7458). The NIOSH Manual of Analytical Methods contains methods for measuring occupational exposures to beryllium.
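As an illustration of how the occupational limits quoted above are applied, the sketch below computes an 8-hour time-weighted average (TWA) from a set of personal air-sampling results and compares it with the OSHA PEL and the 15-minute short-term limit. The sampling durations, concentrations and peak value are hypothetical; only the limit values (0.2 μg/m3 TWA, 2.0 μg/m3 short-term) come from the text above.

```python
# Illustrative 8-hour TWA and short-term check for airborne beryllium.
# Sample data below are hypothetical; the limits are the OSHA values cited above.

PEL_TWA = 0.2   # permissible exposure limit, 8-hour TWA, ug/m3
STEL = 2.0      # short-term exposure limit over 15 minutes, ug/m3

# (duration in hours, measured concentration in ug/m3) over one 8-hour shift
samples = [(2.0, 0.05), (3.0, 0.30), (1.5, 0.10), (1.5, 0.02)]

def eight_hour_twa(samples, shift_hours=8.0):
    """Time-weighted average over the shift; unsampled time is treated as zero exposure."""
    return sum(hours * conc for hours, conc in samples) / shift_hours

twa = eight_hour_twa(samples)
worst_15_min = 0.9  # highest 15-minute concentration observed, ug/m3 (hypothetical)

print(f"8-h TWA = {twa:.3f} ug/m3 ({'over' if twa > PEL_TWA else 'within'} the PEL)")
print(f"15-min peak = {worst_15_min:.2f} ug/m3 ({'over' if worst_15_min > STEL else 'within'} the short-term limit)")
```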
Physical sciences
Chemical elements_2
null
3397
https://en.wikipedia.org/wiki/Bridge
Bridge
A bridge is a structure built to span a physical obstacle (such as a body of water, valley, road, or railway) without blocking the path underneath. It is constructed for the purpose of providing passage over the obstacle, which is usually something that is otherwise difficult or impossible to cross. There are many different designs of bridges, each serving a particular purpose and applicable to different situations. Designs of bridges vary depending on factors such as the function of the bridge, the nature of the terrain where the bridge is constructed and anchored, the material used to make it, and the funds available to build it. The earliest bridges were likely made with fallen trees and stepping stones. The Neolithic people built boardwalk bridges across marshland. The Arkadiko Bridge, dating from the 13th century BC, in the Peloponnese is one of the oldest arch bridges in existence and use. Etymology The Oxford English Dictionary traces the origin of the word bridge to an Old English word brycg, of the same meaning. The Oxford English Dictionary also notes that there is some suggestion that the word can be traced directly back to Proto-Indo-European *bʰrēw-. However, they also note that "this poses semantic problems." The origin of the word for the card game of the same name is unknown, but may be from folk etymology. History The simplest and earliest types of bridges were stepping stones. Neolithic people also built a form of boardwalk across marshes; examples of such bridges include the Sweet Track and the Post Track in England, approximately 6000 years old. Ancient people would also have used log bridges consisting of logs that fell naturally or were intentionally felled or placed across streams. Some of the first human-made bridges with significant span were probably intentionally felled trees. Among the oldest timber bridges is the Holzbrücke Rapperswil-Hurden bridge that crossed upper Lake Zürich in Switzerland; prehistoric timber pilings discovered to the west of the Seedamm causeway date back to 1523 BC. The first wooden footbridge there led across Lake Zürich; it was reconstructed several times through the late 2nd century AD, when the Roman Empire built a wooden bridge to carry transport across the lake. Between 1358 and 1360, Rudolf IV, Duke of Austria, built a 'new' wooden bridge across the lake that was used until 1878; it was approximately long and wide. On 6 April 2001, a reconstruction of the original wooden footbridge was opened; it is also the longest wooden bridge in Switzerland. The Arkadiko Bridge is one of four Mycenaean corbel arch bridges part of a former network of roads, designed to accommodate chariots, between the fort of Tiryns and town of Epidauros in the Peloponnese, in southern Greece. Dating to the Greek Bronze Age (13th century BC), it is one of the oldest arch bridges still in existence and use. Several intact, arched stone bridges from the Hellenistic era can be found in the Peloponnese. The greatest bridge builders of antiquity were the ancient Romans. The Romans built arch bridges and aqueducts that could stand in conditions that would damage or destroy earlier designs, some of which still stand today. An example is the Alcántara Bridge, built over the river Tagus, in Spain. The Romans also used cement, which reduced the variation of strength found in natural stone. One type of cement, called pozzolana, consisted of water, lime, sand, and volcanic rock. 
Brick and mortar bridges were built after the Roman era, as the technology for cement was lost (then later rediscovered). In India, the Arthashastra treatise by Kautilya mentions the construction of dams and bridges. A Mauryan bridge near Girnar was surveyed by James Princep. The bridge was swept away during a flood, and later repaired by Puspagupta, the chief architect of emperor Chandragupta I. The use of stronger bridges using plaited bamboo and iron chain was visible in India by about the 4th century. A number of bridges, both for military and commercial purposes, were constructed by the Mughal administration in India. Although large bridges of wooden construction existed in China at the time of the Warring States period, the oldest surviving stone bridge in China is the Zhaozhou Bridge, built from 595 to 605 AD during the Sui dynasty. This bridge is also historically significant as it is the world's oldest open-spandrel stone segmental arch bridge. European segmental arch bridges date back to at least the Alconétar Bridge (approximately 2nd century AD), while the enormous Roman era Trajan's Bridge (105 AD) featured open-spandrel segmental arches in wooden construction. Rope bridges, a simple type of suspension bridge, were used by the Inca civilization in the Andes mountains of South America, just prior to European colonization in the 16th century. The Ashanti built bridges over streams and rivers. They were constructed by pounding four large forked tree trunks into the stream bed, placing beams along these forked pillars, then positioning cross-beams that were finally covered with four to six inches of dirt. During the 18th century, there were many innovations in the design of timber bridges by Hans Ulrich Grubenmann, Johannes Grubenmann, as well as others. The first book on bridge engineering was written by Hubert Gautier in 1716. A major breakthrough in bridge technology came with the erection of the Iron Bridge in Shropshire, England in 1779. It used cast iron for the first time as arches to cross the river Severn. With the Industrial Revolution in the 19th century, truss systems of wrought iron were developed for larger bridges, but iron does not have the tensile strength to support large loads. With the advent of steel, which has a high tensile strength, much larger bridges were built, many using the ideas of Gustave Eiffel. In Canada and the United States, numerous timber covered bridges were built in the late 1700s to the late 1800s, reminiscent of earlier designs in Germany and Switzerland. Some covered bridges were also built in Asia. In later years, some were partly made of stone or metal but the trusses were usually still made of wood; in the United States, there were three styles of trusses, the Queen Post, the Burr Arch and the Town Lattice. Hundreds of these structures still stand in North America. They were brought to the attention of the general public in the 1990s by the novel, movie and play The Bridges of Madison County. In 1927, welding pioneer Stefan Bryła designed the first welded road bridge in the world, the Maurzyce Bridge which was later built across the river Słudwia at Maurzyce near Łowicz, Poland in 1929. In 1995, the American Welding Society presented the Historic Welded Structure Award for the bridge to Poland. Types of bridges Bridges can be categorized in several different ways. Common categories include the type of structural elements used, by what they carry, whether they are fixed or movable, and by the materials used. 
Structure types Bridges may be classified by how the actions of tension, compression, bending, torsion and shear are distributed through their structure. Most bridges will employ all of these to some degree, but only a few will predominate. The separation of forces and moments may be quite clear. In a suspension or cable-stayed bridge, the elements in tension are distinct in shape and placement. In other cases the forces may be distributed among a large number of members, as in a truss. Some engineers sub-divide 'beam' bridges into slab, beam-and-slab and box girder on the basis of their cross-section. A slab can be solid or voided (though this is no longer favored for inspectability reasons) while beam-and-slab consists of concrete or steel girders connected by a concrete slab. A box-girder cross-section consists of a single-cell or multi-cellular box. In recent years, integral bridge construction has also become popular. Fixed or movable bridges Most bridges are fixed bridges, meaning they have no moving parts and stay in one place until they fail or are demolished. Temporary bridges, such as Bailey bridges, are designed to be assembled, taken apart, transported to a different site, and re-used. They are important in military engineering and are also used to carry traffic while an old bridge is being rebuilt. Movable bridges are designed to move out of the way of boats or other kinds of traffic, which would otherwise be too tall to fit. These are generally electrically powered. The Tank bridge transporter (TBT) has the same cross-country performance as a tank even when fully loaded. It can deploy, drop off and load bridges independently, but it cannot recover them. Double-decked bridges Double-decked (or double-decker) bridges have two levels, such as the George Washington Bridge, connecting New York City to Bergen County, New Jersey, US, the world's busiest bridge, carrying 102 million vehicles annually; truss work between the roadway levels provided stiffness to the roadways and reduced movement of the upper level when the lower level was installed three decades after the upper level. The Tsing Ma Bridge and Kap Shui Mun Bridge in Hong Kong have six lanes on their upper decks, and on their lower decks there are two lanes and a pair of tracks for MTR metro trains. Some double-decked bridges only use one level for street traffic; the Washington Avenue Bridge in Minneapolis reserves its lower level for automobile and light rail traffic and its upper level for pedestrian and bicycle traffic (predominantly students at the University of Minnesota). Likewise, in Toronto, the Prince Edward Viaduct has five lanes of motor traffic, bicycle lanes, and sidewalks on its upper deck; and a pair of tracks for the Bloor–Danforth subway line on its lower deck. The western span of the San Francisco–Oakland Bay Bridge also has two levels. Robert Stephenson's High Level Bridge across the River Tyne in Newcastle upon Tyne, completed in 1849, is an early example of a double-decked bridge. The upper level carries a railway, and the lower level is used for road traffic. Other examples include Britannia Bridge over the Menai Strait and Craigavon Bridge in Derry, Northern Ireland. The Oresund Bridge between Copenhagen and Malmö consists of a four-lane highway on the upper level and a pair of railway tracks at the lower level. Tower Bridge in London is a different example of a double-decked bridge, with the central section consisting of a low-level bascule span and a high-level footbridge.
Viaducts A viaduct is made up of multiple bridges connected into one longer structure. The longest and some of the highest bridges are viaducts, such as the Lake Pontchartrain Causeway and Millau Viaduct. Multi-way bridge A multi-way bridge has three or more separate spans which meet near the center of the bridge. Multi-way bridges with only three spans appear as a "T" or "Y" when viewed from above. Multi-way bridges are extremely rare. The Tridge, Margaret Bridge, and Zanesville Y-Bridge are examples. Bridge types by use A bridge can be categorized by what it is designed to carry, such as trains, pedestrian or road traffic (road bridge), a pipeline (pipe bridge) or waterway for water transport or barge traffic. An aqueduct is a bridge that carries water, resembling a viaduct, which is a bridge that connects points of equal height. A road-rail bridge carries both road and rail traffic. Overway is a term for a bridge that separates incompatible intersecting traffic, especially road and rail. Some bridges accommodate other purposes, such as the tower of Nový Most Bridge in Bratislava, which features a restaurant, or a bridge-restaurant which is a bridge built to serve as a restaurant. Other suspension bridge towers carry transmission antennas. Conservationists use wildlife overpasses to reduce habitat fragmentation and animal-vehicle collisions. The first animal bridges sprang up in France in the 1950s, and these types of bridges are now used worldwide to protect both large and small wildlife. Bridges are subject to unplanned uses as well. The areas underneath some bridges have become makeshift shelters and homes to homeless people, and the undertimbers of bridges all around the world are spots of prevalent graffiti. Some bridges attract people attempting suicide, and become known as suicide bridges. Bridge types by material The materials used to build the structure are also used to categorize bridges. Until the end of the 18th century, bridges were made out of timber, stone and masonry. Modern bridges are currently built in concrete, steel, fiber reinforced polymers (FRP), stainless steel or combinations of those materials. Living bridges have been constructed of live plants such as Ficus elastica tree roots in India and wisteria vines in Japan. Analysis and design Unlike buildings, whose design is led by architects, bridges are usually designed by engineers. This follows from the importance of the engineering requirements, namely spanning the obstacle and having the durability to survive, with minimal maintenance, in an aggressive outdoor environment. Bridges are first analysed: the bending moment and shear force distributions due to the applied loads are calculated. For this, the finite element method is the most popular. The analysis can be one-, two-, or three-dimensional. For the majority of bridges, a two-dimensional plate model (often with stiffening beams) or an upstand finite element model is sufficient. On completion of the analysis, the bridge is designed to resist the applied bending moments and shear forces; section sizes are selected with sufficient capacity to resist the stresses. Many bridges are made of prestressed concrete, which has good durability properties, either by pre-tensioning of beams prior to installation or post-tensioning on site. In most countries, bridges, like other structures, are designed according to Load and Resistance Factor Design (LRFD) principles.
In simple terms, this means that the load is factored up by a factor greater than unity, while the resistance or capacity of the structure is factored down, by a factor less than unity. The effect of the factored load (stress, bending moment) should be less than the factored resistance to that effect. Both of these factors allow for uncertainty and are greater when the uncertainty is greater. Aesthetics Most bridges are utilitarian in appearance, but in some cases, the appearance of the bridge can have great importance. Often, this is the case with a large bridge that serves as an entrance to a city, or crosses over a main harbor entrance. These are sometimes known as signature bridges. Designers of bridges in parks and along parkways often place more importance on aesthetics, as well. Examples include the stone-faced bridges along the Taconic State Parkway in New York. Bridges are typically more aesthetically pleasing if they are simple in shape, the deck is thinner in proportion to its span, the lines of the structure are continuous, and the shapes of the structural elements reflect the forces acting on them. To create a beautiful image, some bridges are built much taller than necessary. This type, often found in east-Asian style gardens, is called a Moon bridge, evoking a rising full moon. Other garden bridges may cross only a dry bed of stream-washed pebbles, intended only to convey an impression of a stream. Often in palaces, a bridge will be built over an artificial waterway as symbolic of a passage to an important place or state of mind. A set of five bridges cross a sinuous waterway in an important courtyard of the Forbidden City in Beijing, China. The central bridge was reserved exclusively for the use of the Emperor and Empress, with their attendants. Bridge maintenance The estimated life of bridges varies between 25 and 80 years depending on location and material. Bridges may last a hundred years with proper maintenance and rehabilitation. Bridge maintenance consists of a combination of structural health monitoring and testing. This is regulated in country-specific engineering standards and includes ongoing monitoring every three to six months, a simple test or inspection every two to three years and a major inspection every six to ten years. In Europe, the cost of maintenance is considerable and is higher in some countries than spending on new bridges. The lifetime of welded steel bridges can be significantly extended by aftertreatment of the weld transitions. This offers a potentially high benefit, allowing existing bridges to be used far beyond their planned lifetime. Bridge traffic loading While the response of a bridge to the applied loading is well understood, the applied traffic loading itself is still the subject of research. This is a statistical problem as loading is highly variable, particularly for road bridges. Load effects in bridges (stresses, bending moments) are designed for, using the principles of Load and Resistance Factor Design. Before factoring to allow for uncertainty, the load effect is generally considered to be the maximum characteristic value in a specified return period. Notably, in Europe, it is the maximum value expected in 1000 years. Bridge standards generally include a load model, deemed to represent the characteristic maximum load to be expected in the return period. In the past, these load models were agreed by standard drafting committees of experts, but today this situation is changing.
It is now possible to measure the components of bridge traffic load, to weigh trucks, using weigh-in-motion (WIM) technologies. With extensive WIM databases, it is possible to calculate the maximum expected load effect in the specified return period. This is an active area of research, addressing issues of opposing direction lanes, side-by-side (same direction) lanes, traffic growth, permit/non-permit vehicles and long-span bridges (see below). Rather than repeat this complex process every time a bridge is to be designed, standards authorities specify simplified notional load models, notably HL-93, intended to give the same load effects as the characteristic maximum values. The Eurocode is an example of a standard for bridge traffic loading that was developed in this way. Traffic loading on long span bridges Most bridge standards are only applicable for short and medium spans - for example, the Eurocode is only applicable for loaded lengths up to 200 m. Longer spans are dealt with on a case-by-case basis. It is generally accepted that the intensity of load reduces as span increases because the probability of many trucks being closely spaced and extremely heavy reduces as the number of trucks involved increases. It is also generally assumed that short spans are governed by a small number of trucks traveling at high speed, with an allowance for dynamics. Longer spans on the other hand, are governed by congested traffic and no allowance for dynamics is needed. Calculating the loading due to congested traffic remains a challenge as there is a paucity of data on inter-vehicle gaps, both within-lane and inter-lane, in congested conditions. Weigh-in-Motion (WIM) systems provide data on inter-vehicle gaps but only operate well in free flowing traffic conditions. Some authors have used cameras to measure gaps and vehicle lengths in jammed situations and have inferred weights from lengths using WIM data. Others have used microsimulation to generate typical clusters of vehicles on the bridge. Bridge vibration Bridges vibrate under load and this contributes, to a greater or lesser extent, to the stresses. Vibration and dynamics are generally more significant for slender structures such as pedestrian bridges and long-span road or rail bridges. One of the most famous examples is the Tacoma Narrows Bridge that collapsed shortly after being constructed due to excessive vibration. More recently, the Millennium Bridge in London vibrated excessively under pedestrian loading and was closed and retrofitted with a system of dampers. For smaller bridges, dynamics is not catastrophic but can contribute an added amplification to the stresses due to static effects. For example, the Eurocode for bridge loading specifies amplifications of between 10% and 70%, depending on the span, the number of traffic lanes and the type of stress (bending moment or shear force). Vehicle-bridge dynamic interaction There have been many studies of the dynamic interaction between vehicles and bridges during vehicle crossing events. Fryba did pioneering work on the interaction of a moving load and an Euler-Bernoulli beam. With increased computing power, vehicle-bridge interaction (VBI) models have become ever more sophisticated. The concern is that one of the many natural frequencies associated with the vehicle will resonate with the bridge's first natural frequency. 
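To put a number on the "bridge's first natural frequency" referred to in the sentence above, a common hand estimate idealises the span as a simply supported, uniform Euler-Bernoulli beam, for which f1 = (π / 2L²)·√(EI/μ). The sketch below uses invented span, stiffness and mass values purely for illustration; it is not data for any real bridge.

```python
import math

def first_natural_frequency(length_m, EI_Nm2, mass_per_m_kg):
    """First bending frequency (Hz) of a simply supported, uniform Euler-Bernoulli beam."""
    return (math.pi / (2.0 * length_m ** 2)) * math.sqrt(EI_Nm2 / mass_per_m_kg)

# Hypothetical medium-span girder bridge (illustrative values only).
span = 30.0        # m
EI = 8.0e10        # flexural rigidity, N*m^2
mu = 12_000.0      # mass per unit length, kg/m

print(f"Estimated first natural frequency: {first_natural_frequency(span, EI, mu):.2f} Hz")  # about 4.5 Hz
```

Values of a few hertz are typical of road bridges at this scale, which is why overlap with the vehicle-related frequencies discussed next is a practical concern.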
The vehicle-related frequencies include body bounce and axle hop but there are also pseudo-frequencies associated with the vehicle's speed of crossing and there are many frequencies associated with the surface profile. Given the wide variety of heavy vehicles on road bridges, a statistical approach has been suggested, with VBI analyses carried out for many statically extreme loading events. Bridge failures The failure of bridges is of special concern for structural engineers in trying to learn lessons vital to bridge design, construction and maintenance. The failure of bridges first assumed national interest in Britain during the Victorian era when many new designs were being built, often using new materials, with some of them failing catastrophically. In the United States, the National Bridge Inventory tracks the structural evaluations of all bridges, including designations such as "structurally deficient" and "functionally obsolete". Bridge health monitoring There are several methods used to monitor the condition of large structures, like bridges. Many long-span bridges are now routinely monitored with a range of sensors, including strain transducers, accelerometers, tiltmeters, and GPS. Accelerometers have the advantage that they are inertial, i.e., they do not require a reference point to measure from. This is often a problem for distance or deflection measurement, especially if the bridge is over water. Crowdsourcing bridge conditions by accessing data passively captured by cell phones, which routinely include accelerometers and GPS sensors, has been suggested as an alternative to including sensors during bridge construction and a supplement to professional examinations. An option for structural-integrity monitoring is "non-contact monitoring", which uses the Doppler effect (Doppler shift). A laser beam from a Laser Doppler Vibrometer is directed at the point of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the laser beam frequency due to the motion of the surface. The advantages of this method are that the setup time for the equipment is short and that, unlike an accelerometer, it allows measurements to be made on many structures in a short time. Additionally, this method can measure specific points on a bridge that might be difficult to access. However, vibrometers are relatively expensive and have the disadvantage that a reference point is needed to measure from. Snapshots in time of the external condition of a bridge can be recorded using Lidar to aid bridge inspection. This can provide measurement of the bridge geometry (to facilitate the building of a computer model) but the accuracy is generally insufficient to measure bridge deflections under load. While larger modern bridges are routinely monitored electronically, smaller bridges are generally inspected visually by trained inspectors. There is considerable research interest in the challenge of smaller bridges as they are often remote and do not have electrical power on site. Possible solutions are the installation of sensors on a specialist inspection vehicle and the use of its measurements as it drives over the bridge to infer information about the bridge condition. These vehicles can be equipped with accelerometers, gyrometers and Laser Doppler Vibrometers, and some even have the capability to apply a resonant force to the road surface to dynamically excite the bridge at its resonant frequency.
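As a minimal sketch of the frequency-monitoring idea described above (assuming nothing about any particular sensor product or bridge), the snippet below generates a simulated accelerometer record and identifies the dominant bridge frequency from the peak of its amplitude spectrum using NumPy.

```python
import numpy as np

# Simulated accelerometer record: a 4.5 Hz bridge mode buried in measurement noise.
fs = 200.0                                  # sampling rate, Hz
t = np.arange(0.0, 60.0, 1.0 / fs)          # 60 s of data
accel = 0.02 * np.sin(2 * np.pi * 4.5 * t) + 0.005 * np.random.randn(t.size)

# One-sided amplitude spectrum via the FFT.
amplitude = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(accel.size, d=1.0 / fs)

# Skip the quasi-static (near-DC) part before picking the peak.
mask = freqs > 0.5
dominant = freqs[mask][np.argmax(amplitude[mask])]
print(f"Dominant frequency: {dominant:.2f} Hz")
```

Shifts in such identified frequencies over time, after allowing for temperature effects, are one of the quantities long-term monitoring programmes examine.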
Technology
Transportation
null
3406
https://en.wikipedia.org/wiki/Branchiopoda
Branchiopoda
Branchiopoda is a class of crustaceans. It comprises fairy shrimp, clam shrimp, Diplostraca (or Cladocera), Notostraca, the Devonian Lepidocaris and possibly the Cambrian Rehbachiella. They are mostly small, freshwater animals that feed on plankton and detritus. Description Members of the Branchiopoda are unified by the presence of gills on many of the animals' appendages, including some of the mouthparts. This is also responsible for the name of the group (from the , gills, akin to , windpipe; , foot). They generally possess compound eyes and a carapace, which may be a shell of two valves enclosing the trunk (as in most Cladocera), broad and shallow (as in the Notostraca), or entirely absent (as in the Anostraca). In the groups where the carapace prevents the use of the trunk limbs for swimming (Cladocera and clam shrimp), the antennae are used for locomotion, as they are in the nauplius. Male fairy shrimp have an enlarged pair of antennae with which they grasp the female during mating, while in the bottom-feeding Notostraca, the antennae are reduced to vestiges. The trunk limbs are beaten in a metachronal rhythm, causing a flow of water along the midline of the animal, from which it derives oxygen, food and, in the case of the Anostraca and Notostraca, movement. Ecology Branchiopods are found in continental fresh water, including temporary pools, and in hypersaline lakes, and some in brackish water. Only two groups of water fleas include marine species: Family Podonidae in the order Diplostraca, and family Sididae in the order Diplostraca. Most branchiopodans eat floating detritus or plankton, which they take using the setae on their appendages. But notostracans are omnivorous and very opportunistic feeders and will eat algae and bacteria in addition to animals as both predators and scavengers. Taxonomy In early taxonomic treatments, the current members of the Branchiopoda were all placed in a single genus, Monoculus. The taxon Branchiopoda was erected by Pierre André Latreille in 1817, initially at the rank of order. The current upper-level classification of Branchiopoda, according to the World Register of Marine Species (2021), is as follows:
Class Branchiopoda Latreille, 1817
 Subclass Sarsostraca Tasch, 1969
  Order Anostraca Sars, 1867
   Suborder Anostracina Weekers et al., 2002
   Suborder Artemiina Weekers et al., 2002
 Subclass Phyllopoda Preuss, 1951
  Superorder Diplostraca Gerstaecker, 1866
   Order Anomopoda G.O. Sars, 1865
   Order Ctenopoda G.O. Sars, 1865
   Order Cyclestherida Sars G.O., 1899
   Order Haplopoda G.O. Sars, 1865
   Order Laevicaudata Linder, 1945
   Order Onychopoda G.O. Sars, 1865
   Order Spinicaudata Linder, 1945
  Order Notostraca G. O. Sars, 1867
 Genus †Rehbachiella? Müller, 1983
In addition, the extinct genus Lepidocaris is generally placed in Branchiopoda. Anostraca The fairy shrimp of the order Anostraca are usually long (exceptionally up to ). Most species have 20 body segments, bearing 11 pairs of leaf-like phyllopodia (swimming legs), and the body lacks a carapace. They live in vernal pools and hypersaline lakes across the world, including pools in deserts, in ice-covered mountain lakes and in Antarctica. They swim "upside-down" and feed by filtering organic particles from the water or by scraping algae from surfaces. They are an important food for many birds and fish, and are cultured and harvested for use as fish food. There are 300 species spread across 8 families.
Lipostraca Lipostraca contains a single extinct Early Devonian species, Lepidocaris rhyniensis, which is the most abundant animal in the Rhynie chert deposits. It resembles modern Anostraca, to which it is probably closely related, although its relationships to other orders remain unclear. The body is long, with 23 body segments and 19 pairs of appendages, but no carapace. It occurred chiefly among charophytes, probably in alkaline temporary pools. Notostraca The order Notostraca comprises the single family Triopsidae, containing the tadpole shrimp or shield shrimp. The two genera, Triops and Lepidurus, are considered living fossils, having not changed significantly in outward form since the Triassic. They have a broad, flat carapace, which conceals the head and bears a single pair of compound eyes. The abdomen is long, appears to be segmented and bears numerous pairs of flattened legs. The telson is flanked by a pair of long, thin caudal rami. Phenotypic plasticity within taxa makes species-level identification difficult, and is further compounded by variation in the mode of reproduction. The evidence of phenotypic plasticity of Arctic tadpole shrimp (Lepidurus arcticus, Notostraca) has been observed in Svalbard. Notostracans are the largest branchiopodans and are omnivores living on the bottom of temporary pools, ponds and shallow lakes. Laevicaudata, Spinicaudata and Cyclestherida (once Conchostraca) Clam shrimp are bivalved animals which have lived since at least the Devonian. The three groups are not believed to form a clade. They have 10–32 trunk segments, decreasing in size from front to back, and each bears a pair of legs which also carry gills. A strong muscle can close the two halves of the shell together. Anomopoda, Ctenopoda, Onychopoda, and Haplopoda (once Cladocera) These four orders make up a group of small crustaceans commonly called water fleas. Around 620 species have been recognised so far, with many more undescribed. They are ubiquitous in inland aquatic habitats, but rare in the oceans. Most are long, with a down-turned head, and a carapace covering the apparently unsegmented thorax and abdomen. There is a single median compound eye. Most species show cyclical parthenogenesis, where asexual reproduction is occasionally supplemented by sexual reproduction, which produces resting eggs that allow the species to survive harsh conditions and disperse to distant habitats. In the water bodies of the world, a lot of Cladocera are non-native species, many of which pose a great threat to aquatic ecosystems. Evolution The fossil record of branchiopods extends back at least into the Upper Cambrian and possibly further. The group is thought to be monophyletic, with the Anostraca having been the first group to branch off. It is thought that the group evolved in the seas, but was forced into temporary pools and hypersaline lakes by the evolution of bony fishes. Although they were previously considered the sister group to the remaining crustaceans, it is now widely accepted that crustaceans form a paraphyletic group, and Branchiopoda are thought to be sister to a clade comprising Xenocarida (Remipedia and Cephalocarida) and Hexapoda (insects and their relatives).
Biology and health sciences
Crustaceans
Animals
3410
https://en.wikipedia.org/wiki/Bird
Bird
Birds are a group of warm-blooded vertebrates constituting the class Aves (), characterised by feathers, toothless beaked jaws, the laying of hard-shelled eggs, a high metabolic rate, a four-chambered heart, and a strong yet lightweight skeleton. Birds live worldwide and range in size from the bee hummingbird to the common ostrich. There are over 11,000 living species and they are split into 44 orders. More than half are passerine or "perching" birds. Birds have wings whose development varies according to species; the only known groups without wings are the extinct moa and elephant birds. Wings, which are modified forelimbs, gave birds the ability to fly, although further evolution has led to the loss of flight in some birds, including ratites, penguins, and diverse endemic island species. The digestive and respiratory systems of birds are also uniquely adapted for flight. Some bird species of aquatic environments, particularly seabirds and some waterbirds, have further evolved for swimming. The study of birds is called ornithology. Birds are feathered theropod dinosaurs and constitute the only known living dinosaurs. Likewise, birds are considered reptiles in the modern cladistic sense of the term, and their closest living relatives are the crocodilians. Birds are descendants of the primitive avialans (whose members include Archaeopteryx) which first appeared during the Late Jurassic. According to some estimates, modern birds (Neornithes) evolved in the Late Cretaceous or between the Early and Late Cretaceous (100 Ma) and diversified dramatically around the time of the Cretaceous–Paleogene extinction event 66 million years ago, which killed off the pterosaurs and all non-ornithuran dinosaurs. Many social species preserve knowledge across generations (culture). Birds are social, communicating with visual signals, calls, and songs, and participating in such behaviour as cooperative breeding and hunting, flocking, and mobbing of predators. The vast majority of bird species are socially (but not necessarily sexually) monogamous, usually for one breeding season at a time, sometimes for years, and rarely for life. Other species have breeding systems that are polygynous (one male with many females) or, rarely, polyandrous (one female with many males). Birds produce offspring by laying eggs which are fertilised through sexual reproduction. They are usually laid in a nest and incubated by the parents. Most birds have an extended period of parental care after hatching. Many species of birds are economically important as food for human consumption and raw material in manufacturing, with domesticated and undomesticated birds being important sources of eggs, meat, and feathers. Songbirds, parrots, and other species are popular as pets. Guano (bird excrement) is harvested for use as a fertiliser. Birds figure throughout human culture. About 120 to 130 species have become extinct due to human activity since the 17th century, and hundreds more before then. Human activity threatens about 1,200 bird species with extinction, though efforts are underway to protect them. Recreational birdwatching is an important part of the ecotourism industry. Evolution and classification The first classification of birds was developed by Francis Willughby and John Ray in their 1676 volume Ornithologiae. Carl Linnaeus modified that work in 1758 to devise the taxonomic classification system currently in use. Birds are categorised as the biological class Aves in Linnaean taxonomy. 
Phylogenetic taxonomy places Aves in the clade Theropoda as an infraclass or more recently a subclass.
Definition
Aves and a sister group, the order Crocodilia, contain the only living representatives of the reptile clade Archosauria. During the late 1990s, Aves was most commonly defined phylogenetically as all descendants of the most recent common ancestor of modern birds and Archaeopteryx lithographica. However, an earlier definition proposed by Jacques Gauthier gained wide currency in the 21st century, and is used by many scientists including adherents to the PhyloCode. Gauthier defined Aves to include only the crown group of the set of modern birds. This was done by excluding most groups known only from fossils, and assigning them, instead, to the broader group Avialae, on the principle that a clade based on extant species should be limited to those extant species and their closest extinct relatives. Gauthier and de Queiroz identified four different definitions for the same biological name "Aves", which is a problem. The authors proposed to reserve the term Aves only for the crown group consisting of the last common ancestor of all living birds and all of its descendants, which corresponds to meaning number 4 below. They assigned other names to the other groups.
1. Aves can mean all archosaurs closer to birds than to crocodiles (alternately Avemetatarsalia)
2. Aves can mean those advanced archosaurs with feathers (alternately Avifilopluma)
3. Aves can mean those feathered dinosaurs that fly (alternately Avialae)
4. Aves can mean the last common ancestor of all the currently living birds and all of its descendants (a "crown group", in this sense synonymous with Neornithes)
Under the fourth definition Archaeopteryx, traditionally considered one of the earliest members of Aves, is removed from this group, becoming a non-avian dinosaur instead. These proposals have been adopted by many researchers in the field of palaeontology and bird evolution, though the exact definitions applied have been inconsistent. Avialae, initially proposed to replace the traditional fossil content of Aves, is often used synonymously with the vernacular term "bird" by these researchers. Most researchers define Avialae as a branch-based clade, though definitions vary. Many authors have used a definition similar to "all theropods closer to birds than to Deinonychus", with Troodon being sometimes added as a second external specifier in case it is closer to birds than to Deinonychus. Avialae is also occasionally defined as an apomorphy-based clade (that is, one based on physical characteristics). Jacques Gauthier, who named Avialae in 1986, re-defined it in 2001 as all dinosaurs that possessed feathered wings used in flapping flight, and the birds that descended from them. Despite being currently one of the most widely used, the crown-group definition of Aves has been criticised by some researchers. Lee and Spencer (1997) argued that, contrary to what Gauthier defended, this definition would not increase the stability of the clade and the exact content of Aves will always be uncertain because any defined clade (either crown or not) will have few synapomorphies distinguishing it from its closest relatives. Their alternative definition is synonymous with Avifilopluma.
Dinosaurs and the origin of birds Based on fossil and biological evidence, most scientists accept that birds are a specialised subgroup of theropod dinosaurs and, more specifically, members of Maniraptora, a group of theropods which includes dromaeosaurids and oviraptorosaurs, among others. As scientists have discovered more theropods closely related to birds, the previously clear distinction between non-birds and birds has become blurred. By the 2000s, discoveries in the Liaoning Province of northeast China, which demonstrated many small theropod feathered dinosaurs, contributed to this ambiguity. The consensus view in contemporary palaeontology is that the flying theropods, or avialans, are the closest relatives of the deinonychosaurs, which include dromaeosaurids and troodontids. Together, these form a group called Paraves. Some basal members of Deinonychosauria, such as Microraptor, have features which may have enabled them to glide or fly. The most basal deinonychosaurs were very small. This evidence raises the possibility that the ancestor of all paravians may have been arboreal, have been able to glide, or both. Unlike Archaeopteryx and the non-avialan feathered dinosaurs, who primarily ate meat, studies suggest that the first avialans were omnivores. The Late Jurassic Archaeopteryx is well known as one of the first transitional fossils to be found, and it provided support for the theory of evolution in the late 19th century. Archaeopteryx was the first fossil to display both clearly traditional reptilian characteristics—teeth, clawed fingers, and a long, lizard-like tail—as well as wings with flight feathers similar to those of modern birds. It is not considered a direct ancestor of birds, though it is possibly closely related to the true ancestor. Early evolution Over 40% of key traits found in modern birds evolved during the 60 million year transition from the earliest bird-line archosaurs to the first maniraptoromorphs, i.e. the first dinosaurs closer to living birds than to Tyrannosaurus rex. The loss of osteoderms otherwise common in archosaurs and acquisition of primitive feathers might have occurred early during this phase. After the appearance of Maniraptoromorpha, the next 40 million years marked a continuous reduction of body size and the accumulation of neotenic (juvenile-like) characteristics. Hypercarnivory became increasingly less common while braincases enlarged and forelimbs became longer. The integument evolved into complex, pennaceous feathers. The oldest known paravian (and probably the earliest avialan) fossils come from the Tiaojishan Formation of China, which has been dated to the late Jurassic period (Oxfordian stage), about 160 million years ago. The avialan species from this time period include Anchiornis huxleyi, Xiaotingia zhengi, and Aurornis xui. The well-known probable early avialan, Archaeopteryx, dates from slightly later Jurassic rocks (about 155 million years old) from Germany. Many of these early avialans shared unusual anatomical features that may be ancestral to modern birds but were later lost during bird evolution. These features include enlarged claws on the second toe which may have been held clear of the ground in life, and long feathers or "hind wings" covering the hind limbs and feet, which may have been used in aerial maneuvering. Avialans diversified into a wide variety of forms during the Cretaceous period. 
Many groups retained primitive characteristics, such as clawed wings and teeth, though the latter were lost independently in a number of avialan groups, including modern birds (Aves). Increasingly stiff tails (especially the outermost half) can be seen in the evolution of maniraptoromorphs, and this process culminated in the appearance of the pygostyle, an ossification of fused tail vertebrae. In the late Cretaceous, about 100 million years ago, the ancestors of all modern birds evolved a more open pelvis, allowing them to lay larger eggs compared to body size. Around 95 million years ago, they evolved a better sense of smell. A third stage of bird evolution starting with Ornithothoraces (the "bird-chested" avialans) can be associated with the refining of aerodynamics and flight capabilities, and the loss or co-ossification of several skeletal features. Particularly significant are the development of an enlarged, keeled sternum and the alula, and the loss of grasping hands. Early diversity of bird ancestors The first large, diverse lineage of short-tailed avialans to evolve were the Enantiornithes, or "opposite birds", so named because the construction of their shoulder bones was in reverse to that of modern birds. Enantiornithes occupied a wide array of ecological niches, from sand-probing shorebirds and fish-eaters to tree-dwelling forms and seed-eaters. While they were the dominant group of avialans during the Cretaceous period, Enantiornithes became extinct along with many other dinosaur groups at the end of the Mesozoic era. Many species of the second major avialan lineage to diversify, the Euornithes (meaning "true birds", because they include the ancestors of modern birds), were semi-aquatic and specialised in eating fish and other small aquatic organisms. Unlike the Enantiornithes, which dominated land-based and arboreal habitats, most early euornithians lacked perching adaptations and likely included shorebird-like species, waders, and swimming and diving species. The latter included the superficially gull-like Ichthyornis and the Hesperornithiformes, which became so well adapted to hunting fish in marine environments that they lost the ability to fly and became primarily aquatic. The early euornithians also saw the development of many traits associated with modern birds, like strongly keeled breastbones, toothless, beaked portions of their jaws (though most non-avian euornithians retained teeth in other parts of the jaws). Euornithes also included the first avialans to develop true pygostyle and a fully mobile fan of tail feathers, which may have replaced the "hind wing" as the primary mode of aerial maneuverability and braking in flight. A study on mosaic evolution in the avian skull found that the last common ancestor of all Neornithes might have had a beak similar to that of the modern hook-billed vanga and a skull similar to that of the Eurasian golden oriole. As both species are small aerial and canopy foraging omnivores, a similar ecological niche was inferred for this hypothetical ancestor. Diversification of modern birds Most studies agree on a Cretaceous age for the most recent common ancestor of modern birds but estimates range from the Early Cretaceous to the latest Cretaceous. Similarly, there is no agreement on whether most of the early diversification of modern birds occurred in the Cretaceous and associated with breakup of the supercontinent Gondwana or occurred later and potentially as a consequence of the Cretaceous–Palaeogene extinction event. 
This disagreement is in part caused by a divergence in the evidence; most molecular dating studies suggest a Cretaceous evolutionary radiation, while fossil evidence points to a Cenozoic radiation (the so-called 'rocks' versus 'clocks' controversy). The discovery in 2005 of Vegavis from the Maastrichtian, the last stage of the Late Cretaceous, proved that the diversification of modern birds started before the Cenozoic era. The affinities of an earlier fossil, the possible galliform Austinornis lentus, dated to about 85 million years ago, are still too controversial to provide fossil evidence of modern bird diversification. In 2020, Asteriornis from the Maastrichtian was described; it appears to be a close relative of Galloanserae, the earliest diverging lineage within Neognathae. Attempts to reconcile molecular and fossil evidence using genomic-scale DNA data and comprehensive fossil information have not resolved the controversy. However, a 2015 estimate that used a new method for calibrating molecular clocks confirmed that while modern birds originated early in the Late Cretaceous, likely in Western Gondwana, a pulse of diversification in all major groups occurred around the Cretaceous–Palaeogene extinction event. Modern birds would have expanded from West Gondwana through two routes. One route was an Antarctic interchange in the Paleogene. The other route was probably via Paleocene land bridges between South America and North America, which allowed for the rapid expansion and diversification of Neornithes into the Holarctic and Paleotropics. On the other hand, the occurrence of Asteriornis in the Northern Hemisphere suggests that Neornithes dispersed out of East Gondwana before the Paleocene.
Classification of bird orders
All modern birds lie within the crown group Aves (alternately Neornithes), which has two subdivisions: the Palaeognathae, which includes the flightless ratites (such as the ostriches) and the weak-flying tinamous, and the extremely diverse Neognathae, containing all other birds. These two subdivisions have variously been given the rank of superorder, cohort, or infraclass. The number of known living bird species is around 11,000, although sources may differ in their precise numbers.
Cladogram of modern bird relationships based on Stiller et al. (2024), showing the 44 orders recognised by the IOC.
The classification of birds is a contentious issue. Sibley and Ahlquist's Phylogeny and Classification of Birds (1990) is a landmark work on the subject. Most evidence seems to suggest the assignment of orders is accurate, but scientists disagree about the relationships among the orders themselves; evidence from modern bird anatomy, fossils and DNA has all been brought to bear on the problem, but no strong consensus has emerged. Fossil and molecular evidence from the 2010s is providing an increasingly clear picture of the evolution of modern bird orders.
Genomics
In 2010, the genome had been sequenced for only two birds, the chicken and the zebra finch. Since then, the genomes of 542 species of birds have been completed. At least one genome has been sequenced from every order. These include at least one species in about 90% of extant avian families (218 out of 236 families recognised by the Howard and Moore Checklist). Being able to sequence and compare whole genomes gives researchers many types of information about genes, the DNA that regulates the genes, and their evolutionary history.
This has led to reconsideration of some of the classifications that were based solely on the identification of protein-coding genes. Waterbirds such as pelicans and flamingos, for example, may have in common specific adaptations suited to their environment that were developed independently.
Distribution
Birds live and breed in most terrestrial habitats and on all seven continents, reaching their southern extreme in the snow petrel's breeding colonies far inland in Antarctica. The highest bird diversity occurs in tropical regions. It was earlier thought that this high diversity was the result of higher speciation rates in the tropics; however, studies from the 2000s found higher speciation rates in the high latitudes that were offset by greater extinction rates than in the tropics. Many species migrate annually over great distances and across oceans; several families of birds have adapted to life both on the world's oceans and in them, and some seabird species come ashore only to breed, while some penguins have been recorded diving to great depths. Many bird species have established breeding populations in areas to which they have been introduced by humans. Some of these introductions have been deliberate; the ring-necked pheasant, for example, has been introduced around the world as a game bird. Others have been accidental, such as the establishment of wild monk parakeets in several North American cities after their escape from captivity. Some species, including the cattle egret, yellow-headed caracara and galah, have spread naturally far beyond their original ranges as agricultural expansion created alternative habitats, although modern practices of intensive agriculture have negatively impacted farmland bird populations.
Anatomy and physiology
Compared with other vertebrates, birds have a body plan that shows many unusual adaptations, mostly to facilitate flight.
Skeletal system
The skeleton consists of very lightweight bones. They have large air-filled cavities (called pneumatic cavities) which connect with the respiratory system. The skull bones in adults are fused and do not show cranial sutures. The orbital cavities that house the eyeballs are large and separated from each other by a bony septum (partition). The spine has cervical, thoracic, lumbar and caudal regions, with the number of cervical (neck) vertebrae highly variable and especially flexible, but movement is reduced in the anterior thoracic vertebrae and absent in the later vertebrae. The last few are fused with the pelvis to form the synsacrum. The ribs are flattened and the sternum is keeled for the attachment of flight muscles, except in the flightless bird orders. The forelimbs are modified into wings. The wings are more or less developed depending on the species; the only known groups that lost their wings are the extinct moa and elephant birds.
Excretory system
Like the reptiles, birds are primarily uricotelic, that is, their kidneys extract nitrogenous waste from their bloodstream and excrete it as uric acid, instead of urea or ammonia, through the ureters into the intestine. Birds do not have a urinary bladder or external urethral opening and (with the exception of the ostrich) uric acid is excreted along with faeces as a semisolid waste. However, birds such as hummingbirds can be facultatively ammonotelic, excreting most of the nitrogenous wastes as ammonia. They also excrete creatine, rather than creatinine like mammals. This material, as well as the output of the intestines, emerges from the bird's cloaca.
The cloaca is a multi-purpose opening: waste is expelled through it, most birds mate by joining cloacae, and females lay eggs from it. In addition, many species of birds regurgitate pellets. It is a common but not universal feature of altricial passerine nestlings (born helpless, under constant parental care) that instead of excreting directly into the nest, they produce a fecal sac. This is a mucus-covered pouch that allows parents to either dispose of the waste outside the nest or to recycle the waste through their own digestive system.
Reproductive system
Most male birds do not have intromittent penises. Males within Palaeognathae (with the exception of the kiwis), the Anseriformes (with the exception of screamers), and in rudimentary forms in Galliformes (but fully developed in Cracidae) possess a penis, which is never present in Neoaves. Its length is thought to be related to sperm competition and it fills with lymphatic fluid instead of blood when erect. When not copulating, it is hidden within the proctodeum compartment within the cloaca, just inside the vent. Female birds have sperm storage tubules that allow sperm to remain viable long after copulation, a hundred days in some species. Sperm from multiple males may compete through this mechanism. Most female birds have a single ovary and a single oviduct, both on the left side, but there are exceptions: species in at least 16 different orders of birds have two ovaries. Even these species, however, tend to have a single oviduct. It has been speculated that this might be an adaptation to flight, but males have two testes, and it is also observed that the gonads in both sexes decrease dramatically in size outside the breeding season. Also, terrestrial birds generally have a single ovary, as does the platypus, an egg-laying mammal. A more likely explanation is that the egg develops a shell while passing through the oviduct over a period of about a day, so that if two eggs were to develop at the same time, there would be a risk to survival. While rare and mostly abortive, parthenogenesis is not unknown in birds; such eggs can be diploid and automictic, and result in male offspring. Birds are solely gonochoric, meaning they have two sexes: either female or male. The sex of birds is determined by the Z and W sex chromosomes, rather than by the X and Y chromosomes present in mammals. Male birds have two Z chromosomes (ZZ), and female birds have a W chromosome and a Z chromosome (WZ). A complex system of disassortative mating with two morphs is involved in the white-throated sparrow Zonotrichia albicollis, where white- and tan-browed morphs of opposite sex pair, making it appear as if four sexes were involved since any individual is compatible with only a fourth of the population. In nearly all species of birds, an individual's sex is determined at fertilisation. However, one 2007 study claimed to demonstrate temperature-dependent sex determination among the Australian brushturkey, for which higher temperatures during incubation resulted in a higher female-to-male sex ratio. This, however, was later proven not to be the case. These birds do not exhibit temperature-dependent sex determination, but temperature-dependent sex mortality.
Respiratory and circulatory systems
Birds have one of the most complex respiratory systems of all animal groups. Upon inhalation, 75% of the fresh air bypasses the lungs and flows directly into a posterior air sac which extends from the lungs and connects with air spaces in the bones and fills them with air.
The other 25% of the air goes directly into the lungs. When the bird exhales, the used air flows out of the lungs and the stored fresh air from the posterior air sac is simultaneously forced into the lungs. Thus, a bird's lungs receive a constant supply of fresh air during both inhalation and exhalation. Sound production is achieved using the syrinx, a muscular chamber incorporating multiple tympanic membranes which diverges from the lower end of the trachea; the trachea being elongated in some species, increasing the volume of vocalisations and the perception of the bird's size. In birds, the main arteries taking blood away from the heart originate from the right aortic arch (or pharyngeal arch), unlike in the mammals where the left aortic arch forms this part of the aorta. The postcava receives blood from the limbs via the renal portal system. Unlike in mammals, the circulating red blood cells in birds retain their nucleus. Heart type and features The avian circulatory system is driven by a four-chambered, myogenic heart contained in a fibrous pericardial sac. This pericardial sac is filled with a serous fluid for lubrication. The heart itself is divided into a right and left half, each with an atrium and ventricle. The atrium and ventricles of each side are separated by atrioventricular valves which prevent back flow from one chamber to the next during contraction. Being myogenic, the heart's pace is maintained by pacemaker cells found in the sinoatrial node, located on the right atrium. The sinoatrial node uses calcium to cause a depolarising signal transduction pathway from the atrium through right and left atrioventricular bundle which communicates contraction to the ventricles. The avian heart also consists of muscular arches that are made up of thick bundles of muscular layers. Much like a mammalian heart, the avian heart is composed of endocardial, myocardial and epicardial layers. The atrium walls tend to be thinner than the ventricle walls, due to the intense ventricular contraction used to pump oxygenated blood throughout the body. Avian hearts are generally larger than mammalian hearts when compared to body mass. This adaptation allows more blood to be pumped to meet the high metabolic need associated with flight. Organisation Birds have a very efficient system for diffusing oxygen into the blood; birds have a ten times greater surface area to gas exchange volume than mammals. As a result, birds have more blood in their capillaries per unit of volume of lung than a mammal. The arteries are composed of thick elastic muscles to withstand the pressure of the ventricular contractions, and become more rigid as they move away from the heart. Blood moves through the arteries, which undergo vasoconstriction, and into arterioles which act as a transportation system to distribute primarily oxygen as well as nutrients to all tissues of the body. As the arterioles move away from the heart and into individual organs and tissues they are further divided to increase surface area and slow blood flow. Blood travels through the arterioles and moves into the capillaries where gas exchange can occur. Capillaries are organised into capillary beds in tissues; it is here that blood exchanges oxygen for carbon dioxide waste. In the capillary beds, blood flow is slowed to allow maximum diffusion of oxygen into the tissues. Once the blood has become deoxygenated, it travels through venules then veins and back to the heart. 
Veins, unlike arteries, are thin and rigid as they do not need to withstand extreme pressure. As blood travels through the venules to the veins a funneling occurs called vasodilation bringing blood back to the heart. Once the blood reaches the heart, it moves first into the right atrium, then the right ventricle to be pumped through the lungs for further gas exchange of carbon dioxide waste for oxygen. Oxygenated blood then flows from the lungs through the left atrium to the left ventricle where it is pumped out to the body. Nervous system The nervous system is large relative to the bird's size. The most developed part of the brain of birds is the one that controls the flight-related functions, while the cerebellum coordinates movement and the cerebrum controls behaviour patterns, navigation, mating and nest building. Most birds have a poor sense of smell with notable exceptions including kiwis, New World vultures and tubenoses. The avian visual system is usually highly developed. Water birds have special flexible lenses, allowing accommodation for vision in air and water. Some species also have dual fovea. Birds are tetrachromatic, possessing ultraviolet (UV) sensitive cone cells in the eye as well as green, red and blue ones. They also have double cones, likely to mediate achromatic vision. Many birds show plumage patterns in ultraviolet that are invisible to the human eye; some birds whose sexes appear similar to the naked eye are distinguished by the presence of ultraviolet reflective patches on their feathers. Male blue tits have an ultraviolet reflective crown patch which is displayed in courtship by posturing and raising of their nape feathers. Ultraviolet light is also used in foraging—kestrels have been shown to search for prey by detecting the UV reflective urine trail marks left on the ground by rodents. With the exception of pigeons and a few other species, the eyelids of birds are not used in blinking. Instead the eye is lubricated by the nictitating membrane, a third eyelid that moves horizontally. The nictitating membrane also covers the eye and acts as a contact lens in many aquatic birds. The bird retina has a fan shaped blood supply system called the pecten. Eyes of most birds are large, not very round and capable of only limited movement in the orbits, typically 10–20°. Birds with eyes on the sides of their heads have a wide visual field, while birds with eyes on the front of their heads, such as owls, have binocular vision and can estimate the depth of field. The avian ear lacks external pinnae but is covered by feathers, although in some birds, such as the Asio, Bubo and Otus owls, these feathers form tufts which resemble ears. The inner ear has a cochlea, but it is not a spiral as in mammals. Defence and intraspecific combat A few species are able to use chemical defences against predators; some Procellariiformes can eject an unpleasant stomach oil against an aggressor, and some species of pitohuis from New Guinea have a powerful neurotoxin in their skin and feathers. A lack of field observations limit our knowledge, but intraspecific conflicts are known to sometimes result in injury or death. The screamers (Anhimidae), some jacanas (Jacana, Hydrophasianus), the spur-winged goose (Plectropterus), the torrent duck (Merganetta) and nine species of lapwing (Vanellus) use a sharp spur on the wing as a weapon. 
The steamer ducks (Tachyeres), geese and swans (Anserinae), the solitaire (Pezophaps), sheathbills (Chionis), some guans (Crax) and stone curlews (Burhinus) use a bony knob on the alular metacarpal to punch and hammer opponents. The jacanas Actophilornis and Irediparra have an expanded, blade-like radius. The extinct Xenicibis was unique in having an elongate forelimb and massive hand which likely functioned in combat or defence as a jointed club or flail. Swans, for instance, may strike with the bony spurs and bite when defending eggs or young.
Feathers, plumage, and scales
Feathers are a feature characteristic of birds (though also present in some dinosaurs not currently considered to be true birds). They facilitate flight, provide insulation that aids in thermoregulation, and are used in display, camouflage, and signalling. There are several types of feathers, each serving its own set of purposes. Feathers are epidermal growths attached to the skin and arise only in specific tracts of skin called pterylae. The distribution pattern of these feather tracts (pterylosis) is used in taxonomy and systematics. The arrangement and appearance of feathers on the body, called plumage, may vary within species by age, social status, and sex. Plumage is regularly moulted; the standard plumage of a bird that has moulted after breeding is known as the "non-breeding" plumage, or—in the Humphrey–Parkes terminology—"basic" plumage; breeding plumages or variations of the basic plumage are known under the Humphrey–Parkes system as "alternate" plumages. Moulting is annual in most species, although some may have two moults a year, and large birds of prey may moult only once every few years. Moulting patterns vary across species. In passerines, flight feathers are replaced one at a time with the innermost primary being the first. When the fifth or sixth primary is replaced, the outermost tertiaries begin to drop. After the innermost tertiaries are moulted, the secondaries starting from the innermost begin to drop and this proceeds to the outer feathers (centrifugal moult). The greater primary coverts are moulted in synchrony with the primary that they overlap. A small number of species, such as ducks and geese, lose all of their flight feathers at once, temporarily becoming flightless. As a general rule, the tail feathers are moulted and replaced starting with the innermost pair. Centripetal moults of tail feathers are, however, seen in the Phasianidae. The centrifugal moult is modified in the tail feathers of woodpeckers and treecreepers, in that it begins with the second innermost pair of feathers and finishes with the central pair of feathers so that the bird maintains a functional climbing tail. The general pattern seen in passerines is that the primaries are replaced outward, secondaries inward, and the tail from centre outward. Before nesting, the females of most bird species gain a bare brood patch by losing feathers close to the belly. The skin there is well supplied with blood vessels and helps the bird in incubation. Feathers require maintenance and birds preen or groom them daily, spending an average of around 9% of their daily time on this. The bill is used to brush away foreign particles and to apply waxy secretions from the uropygial gland; these secretions preserve the feathers' flexibility and act as an antimicrobial agent, inhibiting the growth of feather-degrading bacteria. This may be supplemented with the secretions of formic acid from ants, which birds receive through a behaviour known as anting, to remove feather parasites.
The scales of birds are composed of the same keratin as beaks, claws, and spurs. They are found mainly on the toes and metatarsus, but may be found further up on the ankle in some birds. Most bird scales do not overlap significantly, except in the cases of kingfishers and woodpeckers. The scales of birds are thought to be homologous to those of reptiles and mammals. Flight Most birds can fly, which distinguishes them from almost all other vertebrate classes. Flight is the primary means of locomotion for most bird species and is used for searching for food and for escaping from predators. Birds have various adaptations for flight, including a lightweight skeleton, two large flight muscles, the pectoralis (which accounts for 15% of the total mass of the bird) and the supracoracoideus, as well as a modified forelimb (wing) that serves as an aerofoil. Wing shape and size generally determine a bird's flight style and performance; many birds combine powered, flapping flight with less energy-intensive soaring flight. About 60 extant bird species are flightless, as were many extinct birds. Flightlessness often arises in birds on isolated islands, most likely due to limited resources and the absence of mammalian land predators. Flightlessness is almost exclusively correlated with gigantism due to an island's inherent condition of isolation. Although flightless, penguins use similar musculature and movements to "fly" through the water, as do some flight-capable birds such as auks, shearwaters and dippers. Behaviour Most birds are diurnal, but some birds, such as many species of owls and nightjars, are nocturnal or crepuscular (active during twilight hours), and many coastal waders feed when the tides are appropriate, by day or night. Diet and feeding are varied and often include nectar, fruit, plants, seeds, carrion, and various small animals, including other birds. The digestive system of birds is unique, with a crop for storage and a gizzard that contains swallowed stones for grinding food to compensate for the lack of teeth. Some species such as pigeons and some psittacine species do not have a gallbladder. Most birds are highly adapted for rapid digestion to aid with flight. Some migratory birds have adapted to use protein stored in many parts of their bodies, including protein from the intestines, as additional energy during migration. Birds that employ many strategies to obtain food or feed on a variety of food items are called generalists, while others that concentrate time and effort on specific food items or have a single strategy to obtain food are considered specialists. Avian foraging strategies can vary widely by species. Many birds glean for insects, invertebrates, fruit, or seeds. Some hunt insects by suddenly attacking from a branch. Those species that seek pest insects are considered beneficial 'biological control agents' and their presence encouraged in biological pest control programmes. Combined, insectivorous birds eat 400–500 million metric tons of arthropods annually. Nectar feeders such as hummingbirds, sunbirds, lories, and lorikeets amongst others have specially adapted brushy tongues and in many cases bills designed to fit co-adapted flowers. Kiwis and shorebirds with long bills probe for invertebrates; shorebirds' varied bill lengths and feeding methods result in the separation of ecological niches. 
Divers, diving ducks, penguins and auks pursue their prey underwater, using their wings or feet for propulsion, while aerial predators such as sulids, kingfishers and terns plunge dive after their prey. Flamingos, three species of prion, and some ducks are filter feeders. Geese and dabbling ducks are primarily grazers. Some species, including frigatebirds, gulls, and skuas, engage in kleptoparasitism, stealing food items from other birds. Kleptoparasitism is thought to be a supplement to food obtained by hunting, rather than a significant part of any species' diet; a study of great frigatebirds stealing from masked boobies estimated that the frigatebirds stole at most 40% of their food and on average stole only 5%. Other birds are scavengers; some of these, like vultures, are specialised carrion eaters, while others, like gulls, corvids, or other birds of prey, are opportunists.
Water and drinking
Water is needed by many birds, although their mode of excretion and lack of sweat glands reduce the physiological demands. Some desert birds can obtain their water needs entirely from moisture in their food. Some have other adaptations such as allowing their body temperature to rise, saving on moisture loss from evaporative cooling or panting. Seabirds can drink seawater and have salt glands inside the head that eliminate excess salt out of the nostrils. Most birds scoop water in their beaks and raise their head to let water run down the throat. Some species, especially those of arid zones, belonging to the pigeon, finch, mousebird, button-quail and bustard families, are capable of sucking up water without the need to tilt back their heads. Some desert birds depend on water sources, and sandgrouse are particularly well known for congregating daily at waterholes. Nesting sandgrouse and many plovers carry water to their young by wetting their belly feathers. Some birds carry water for chicks at the nest in their crop or regurgitate it along with food. The pigeon family, flamingos and penguins have adaptations to produce a nutritive fluid called crop milk that they provide to their chicks.
Feather care
Feathers, being critical to the survival of a bird, require maintenance. Apart from physical wear and tear, feathers face the onslaught of fungi, ectoparasitic feather mites and bird lice. The physical condition of feathers is maintained by preening, often with the application of secretions from the uropygial gland. Birds also bathe in water or dust themselves. While some birds dip into shallow water, more aerial species may make aerial dips into water and arboreal species often make use of dew or rain that collects on leaves. Birds of arid regions make use of loose soil to dust-bathe. A behaviour termed anting, in which the bird encourages ants to run through its plumage, is also thought to help reduce the ectoparasite load in feathers. Many species will spread out their wings and expose them to direct sunlight, and this too is thought to help in reducing fungal and ectoparasitic activity that may lead to feather damage.
Migration
Many bird species migrate to take advantage of global differences of seasonal temperatures, thereby optimising the availability of food sources and breeding habitat. These migrations vary among the different groups. Many landbirds, shorebirds, and waterbirds undertake annual long-distance migrations, usually triggered by the length of daylight as well as weather conditions.
These birds are characterised by a breeding season spent in the temperate or polar regions and a non-breeding season in the tropical regions or opposite hemisphere. Before migration, birds substantially increase body fats and reserves and reduce the size of some of their organs. Migration is highly demanding energetically, particularly as birds need to cross deserts and oceans without refuelling. Landbirds have a flight range of around and shorebirds can fly up to , although the bar-tailed godwit is capable of non-stop flights of up to . Some seabirds undertake long migrations, with the longest annual migrations including those of Arctic terns, which were recorded travelling an average of between their Arctic breeding grounds in Greenland and Iceland and their wintering grounds in Antarctica, with one bird covering , and sooty shearwaters, which nest in New Zealand and Chile and make annual round trips of to their summer feeding grounds in the North Pacific off Japan, Alaska and California. Other seabirds disperse after breeding, travelling widely but having no set migration route. Albatrosses nesting in the Southern Ocean often undertake circumpolar trips between breeding seasons. Some bird species undertake shorter migrations, travelling only as far as is required to avoid bad weather or obtain food. Irruptive species such as the boreal finches are one such group and can commonly be found at a location in one year and absent the next. This type of migration is normally associated with food availability. Species may also travel shorter distances over part of their range, with individuals from higher latitudes travelling into the existing range of conspecifics; others undertake partial migrations, where only a fraction of the population, usually females and subdominant males, migrates. Partial migration can form a large percentage of the migration behaviour of birds in some regions; in Australia, surveys found that 44% of non-passerine birds and 32% of passerines were partially migratory. Altitudinal migration is a form of short-distance migration in which birds spend the breeding season at higher altitudes and move to lower ones during suboptimal conditions. It is most often triggered by temperature changes and usually occurs when the normal territories also become inhospitable due to lack of food. Some species may also be nomadic, holding no fixed territory and moving according to weather and food availability. Parrots as a family are overwhelmingly neither migratory nor sedentary but considered to either be dispersive, irruptive, nomadic or undertake small and irregular migrations. The ability of birds to return to precise locations across vast distances has been known for some time; in an experiment conducted in the 1950s, a Manx shearwater released in Boston in the United States returned to its colony in Skomer, in Wales within 13 days, a distance of . Birds navigate during migration using a variety of methods. For diurnal migrants, the sun is used to navigate by day, and a stellar compass is used at night. Birds that use the sun compensate for the changing position of the sun during the day by the use of an internal clock. Orientation with the stellar compass depends on the position of the constellations surrounding Polaris. These are backed up in some species by their ability to sense the Earth's geomagnetism through specialised photoreceptors. Communication Birds communicate primarily using visual and auditory signals. 
Signals can be interspecific (between species) and intraspecific (within species). Birds sometimes use plumage to assess and assert social dominance, to display breeding condition in sexually selected species, or to make threatening displays, as in the sunbittern's mimicry of a large predator to ward off hawks and protect young chicks. Visual communication among birds may also involve ritualised displays, which have developed from non-signalling actions such as preening, the adjustments of feather position, pecking, or other behaviour. These displays may signal aggression or submission or may contribute to the formation of pair-bonds. The most elaborate displays occur during courtship, where "dances" are often formed from complex combinations of many possible component movements; males' breeding success may depend on the quality of such displays. Bird calls and songs, which are produced in the syrinx, are the major means by which birds communicate with sound. This communication can be very complex; some species can operate the two sides of the syrinx independently, allowing the simultaneous production of two different songs. Calls are used for a variety of purposes, including mate attraction, evaluation of potential mates, bond formation, the claiming and maintenance of territories, the identification of other individuals (such as when parents look for chicks in colonies or when mates reunite at the start of breeding season), and the warning of other birds of potential predators, sometimes with specific information about the nature of the threat. Some birds also use mechanical sounds for auditory communication. The Coenocorypha snipes of New Zealand drive air through their feathers, woodpeckers drum for long-distance communication, and palm cockatoos use tools to drum. Flocking and other associations While some birds are essentially territorial or live in small family groups, other birds may form large flocks. The principal benefits of flocking are safety in numbers and increased foraging efficiency. Defence against predators is particularly important in closed habitats like forests, where ambush predation is common and multiple eyes can provide a valuable early warning system. This has led to the development of many mixed-species feeding flocks, which are usually composed of small numbers of many species; these flocks provide safety in numbers but increase potential competition for resources. Costs of flocking include bullying of socially subordinate birds by more dominant birds and the reduction of feeding efficiency in certain cases. Some species have a mixed system with breeding pairs maintaining territories, while unmated or young birds live in flocks where they secure mates prior to finding territories. Birds sometimes also form associations with non-avian species. Plunge-diving seabirds associate with dolphins and tuna, which push shoaling fish towards the surface. Some species of hornbills have a mutualistic relationship with dwarf mongooses, in which they forage together and warn each other of nearby birds of prey and other predators. Resting and roosting The high metabolic rates of birds during the active part of the day is supplemented by rest at other times. Sleeping birds often use a type of sleep known as vigilant sleep, where periods of rest are interspersed with quick eye-opening "peeks", allowing them to be sensitive to disturbances and enable rapid escape from threats. 
Swifts are believed to be able to sleep in flight and radar observations suggest that they orient themselves to face the wind in their roosting flight. It has been suggested that there may be certain kinds of sleep which are possible even when in flight. Some birds have also demonstrated the capacity to fall into slow-wave sleep one hemisphere of the brain at a time. The birds tend to exercise this ability depending upon its position relative to the outside of the flock. This may allow the eye opposite the sleeping hemisphere to remain vigilant for predators by viewing the outer margins of the flock. This adaptation is also known from marine mammals. Communal roosting is common because it lowers the loss of body heat and decreases the risks associated with predators. Roosting sites are often chosen with regard to thermoregulation and safety. Unusual mobile roost sites include large herbivores on the African savanna that are used by oxpeckers. Many sleeping birds bend their heads over their backs and tuck their bills in their back feathers, although others place their beaks among their breast feathers. Many birds rest on one leg, while some may pull up their legs into their feathers, especially in cold weather. Perching birds have a tendon-locking mechanism that helps them hold on to the perch when they are asleep. Many ground birds, such as quails and pheasants, roost in trees. A few parrots of the genus Loriculus roost hanging upside down. Some hummingbirds go into a nightly state of torpor accompanied with a reduction of their metabolic rates. This physiological adaptation shows in nearly a hundred other species, including owlet-nightjars, nightjars, and woodswallows. One species, the common poorwill, even enters a state of hibernation. Birds do not have sweat glands, but can lose water directly through the skin, and they may cool themselves by moving to shade, standing in water, panting, increasing their surface area, fluttering their throat or using special behaviours like urohidrosis to cool themselves. Breeding Social systems 95 per cent of bird species are socially monogamous. These species pair for at least the length of the breeding season or—in some cases—for several years or until the death of one mate. Monogamy allows for both paternal care and biparental care, which is especially important for species in which care from both the female and the male parent is required in order to successfully rear a brood. Among many socially monogamous species, extra-pair copulation (infidelity) is common. Such behaviour typically occurs between dominant males and females paired with subordinate males, but may also be the result of forced copulation in ducks and other anatids. For females, possible benefits of extra-pair copulation include getting better genes for her offspring and insuring against the possibility of infertility in her mate. Males of species that engage in extra-pair copulations will closely guard their mates to ensure the parentage of the offspring that they raise. Other mating systems, including polygyny, polyandry, polygamy, polygynandry, and promiscuity, also occur. Polygamous breeding systems arise when females are able to raise broods without the help of males. Mating systems vary across bird families but variations within species are thought to be driven by environmental conditions. 
A unique system is the formation of trios, in which a breeding pair temporarily allows a third individual into the territory to assist with raising the brood, thereby leading to higher fitness. Breeding usually involves some form of courtship display, typically performed by the male. Most displays are rather simple and involve some type of song. Some displays, however, are quite elaborate. Depending on the species, these may include wing or tail drumming, dancing, aerial flights, or communal lekking. Females are generally the ones that drive partner selection, although in the polyandrous phalaropes, this is reversed: plainer males choose brightly coloured females. Courtship feeding, billing and allopreening are commonly performed between partners, generally after the birds have paired and mated. Homosexual behaviour has been observed in males or females in numerous species of birds, including copulation, pair-bonding, and joint parenting of chicks. Over 130 avian species around the world engage in same-sex sexual interactions or homosexual behaviours. "Same-sex courtship activities may involve elaborate displays, synchronised dances, gift-giving ceremonies, or behaviours at specific display areas including bowers, arenas, or leks."
Territories, nesting and incubation
Many birds actively defend a territory from others of the same species during the breeding season; maintenance of territories protects the food source for their chicks. Species that are unable to defend feeding territories, such as seabirds and swifts, often breed in colonies instead; this is thought to offer protection from predators. Colonial breeders defend small nesting sites, and competition between and within species for nesting sites can be intense. All birds lay amniotic eggs with hard shells made mostly of calcium carbonate. Hole and burrow nesting species tend to lay white or pale eggs, while open nesters lay camouflaged eggs. There are many exceptions to this pattern, however; the ground-nesting nightjars have pale eggs, and camouflage is instead provided by their plumage. Species that are victims of brood parasites have varying egg colours to improve the chances of spotting a parasite's egg, which forces female parasites to match their eggs to those of their hosts. Bird eggs are usually laid in a nest. Most species create somewhat elaborate nests, which can be cups, domes, plates, mounds, or burrows. Some bird nests can be a simple scrape, with minimal or no lining; most seabird and wader nests are no more than a scrape on the ground. Most birds build nests in sheltered, hidden areas to avoid predation, but large or colonial birds—which are more capable of defence—may build more open nests. During nest construction, some species seek out plant matter from plants with parasite-reducing toxins to improve chick survival, and feathers are often used for nest insulation. Some bird species have no nests; the cliff-nesting common guillemot lays its eggs on bare rock, and male emperor penguins keep eggs between their body and feet. The absence of nests is especially prevalent in open habitat ground-nesting species where any addition of nest material would make the nest more conspicuous. Many ground nesting birds lay a clutch of eggs that hatch synchronously, with precocial chicks led away from the nests (nidifugous) by their parents soon after hatching. Incubation, which regulates temperature for chick development, usually begins after the last egg has been laid.
In monogamous species, incubation duties are often shared, whereas in polygamous species one parent is wholly responsible for incubation. Warmth from parents passes to the eggs through brood patches, areas of bare skin on the abdomen or breast of the incubating birds. Incubation can be an energetically demanding process; adult albatrosses, for instance, lose as much as of body weight per day of incubation. The warmth for the incubation of the eggs of megapodes comes from the sun, decaying vegetation or volcanic sources. Incubation periods range from 10 days (in woodpeckers, cuckoos and passerine birds) to over 80 days (in albatrosses and kiwis). The diversity of characteristics of birds is great, sometimes even in closely related species.
Parental care and fledging
At the time of their hatching, chicks range in development from helpless to independent, depending on their species. Helpless chicks are termed altricial, and tend to be born small, blind, immobile and naked; chicks that are mobile and feathered upon hatching are termed precocial. Altricial chicks need help thermoregulating and must be brooded for longer than precocial chicks. The young of many bird species do not precisely fit into either the precocial or altricial category, having some aspects of each and thus fall somewhere on an "altricial-precocial spectrum". Chicks at neither extreme but favouring one or the other may be termed semi-precocial or semi-altricial. The length and nature of parental care vary widely amongst different orders and species. At one extreme, parental care in megapodes ends at hatching; the newly hatched chick digs itself out of the nest mound without parental assistance and can fend for itself immediately. At the other extreme, many seabirds have extended periods of parental care, the longest being that of the great frigatebird, whose chicks take up to six months to fledge and are fed by the parents for up to an additional 14 months. The chick guard stage describes the period of breeding during which one of the adult birds is permanently present at the nest after chicks have hatched. The main purpose of the guard stage is to aid offspring to thermoregulate and protect them from predation. In some species, both parents care for nestlings and fledglings; in others, such care is the responsibility of only one sex. In some species, other members of the same species—usually close relatives of the breeding pair, such as offspring from previous broods—will help with the raising of the young. Such alloparenting is particularly common among the Corvida, which includes such birds as the true crows, Australian magpie and fairy-wrens, but has been observed in species as different as the rifleman and red kite. Among most groups of animals, male parental care is rare. In birds, however, it is quite common—more so than in any other vertebrate class. Although territory and nest site defence, incubation, and chick feeding are often shared tasks, there is sometimes a division of labour in which one mate undertakes all or most of a particular duty. The point at which chicks fledge varies dramatically. The chicks of the Synthliboramphus murrelets, like the ancient murrelet, leave the nest the night after they hatch, following their parents out to sea, where they are raised away from terrestrial predators. Some other species, such as ducks, move their chicks away from the nest at an early age. In most species, chicks leave the nest just before, or soon after, they are able to fly.
The amount of parental care after fledging varies; albatross chicks leave the nest on their own and receive no further help, while other species continue some supplementary feeding after fledging. Chicks may also follow their parents during their first migration. Brood parasites Brood parasitism, in which an egg-layer leaves her eggs with another individual's brood, is more common among birds than any other type of organism. After a parasitic bird lays her eggs in another bird's nest, they are often accepted and raised by the host at the expense of the host's own brood. Brood parasites may be either obligate brood parasites, which must lay their eggs in the nests of other species because they are incapable of raising their own young, or non-obligate brood parasites, which sometimes lay eggs in the nests of conspecifics to increase their reproductive output even though they could have raised their own young. One hundred bird species, including honeyguides, icterids, and ducks, are obligate parasites, though the most famous are the cuckoos. Some brood parasites are adapted to hatch before their host's young, which allows them to destroy the host's eggs by pushing them out of the nest or to kill the host's chicks; this ensures that all food brought to the nest will be fed to the parasitic chicks. Sexual selection Birds have evolved a variety of mating behaviours, with the peacock tail being perhaps the most famous example of sexual selection and the Fisherian runaway. Commonly occurring sexual dimorphisms such as size and colour differences are energetically costly attributes that signal competitive breeding situations. Many types of avian sexual selection have been identified; intersexual selection, also known as female choice; and intrasexual competition, where individuals of the more abundant sex compete with each other for the privilege to mate. Sexually selected traits often evolve to become more pronounced in competitive breeding situations until the trait begins to limit the individual's fitness. Conflicts between an individual fitness and signalling adaptations ensure that sexually selected ornaments such as plumage colouration and courtship behaviour are "honest" traits. Signals must be costly to ensure that only good-quality individuals can present these exaggerated sexual ornaments and behaviours. Inbreeding depression Inbreeding causes early death (inbreeding depression) in the zebra finch Taeniopygia guttata. Embryo survival (that is, hatching success of fertile eggs) was significantly lower for sib-sib mating pairs than for unrelated pairs. Darwin's finch Geospiza scandens experiences inbreeding depression (reduced survival of offspring) and the magnitude of this effect is influenced by environmental conditions such as low food availability. Inbreeding avoidance Incestuous matings by the purple-crowned fairy wren Malurus coronatus result in severe fitness costs due to inbreeding depression (greater than 30% reduction in hatchability of eggs). Females paired with related males may undertake extra pair matings (see Promiscuity#Other animals for 90% frequency in avian species) that can reduce the negative effects of inbreeding. However, there are ecological and demographic constraints on extra pair matings. Nevertheless, 43% of broods produced by incestuously paired females contained extra pair young. Inbreeding depression occurs in the great tit (Parus major) when the offspring produced as a result of a mating between close relatives show reduced fitness. 
In natural populations of Parus major, inbreeding is avoided by dispersal of individuals from their birthplace, which reduces the chance of mating with a close relative. Southern pied babblers Turdoides bicolor appear to avoid inbreeding in two ways. The first is through dispersal, and the second is by avoiding familiar group members as mates. Cooperative breeding in birds typically occurs when offspring, usually males, delay dispersal from their natal group in order to remain with the family to help rear younger kin. Female offspring rarely stay at home, dispersing over distances that allow them to breed independently, or to join unrelated groups. In general, inbreeding is avoided because it leads to a reduction in progeny fitness (inbreeding depression) due largely to the homozygous expression of deleterious recessive alleles. Cross-fertilisation between unrelated individuals ordinarily leads to the masking of deleterious recessive alleles in progeny. Ecology Birds occupy a wide range of ecological positions. While some birds are generalists, others are highly specialised in their habitat or food requirements. Even within a single habitat, such as a forest, the niches occupied by different species of birds vary, with some species feeding in the forest canopy, others beneath the canopy, and still others on the forest floor. Forest birds may be insectivores, frugivores, or nectarivores. Aquatic birds generally feed by fishing, plant eating, and piracy or kleptoparasitism. Many grassland birds are granivores. Birds of prey specialise in hunting mammals or other birds, while vultures are specialised scavengers. Birds are also preyed upon by a range of mammals including a few avivorous bats. A wide range of endo- and ectoparasites depend on birds and some parasites that are transmitted from parent to young have co-evolved and show host-specificity. Some nectar-feeding birds are important pollinators, and many frugivores play a key role in seed dispersal. Plants and pollinating birds often coevolve, and in some cases a flower's primary pollinator is the only species capable of reaching its nectar. Birds are often important to island ecology. Birds have frequently reached islands that mammals have not; on those islands, birds may fulfil ecological roles typically played by larger animals. For example, in New Zealand nine species of moa were important browsers, as are the kererū and kōkako today. Today the plants of New Zealand retain the defensive adaptations evolved to protect them from the extinct moa. Many birds act as ecosystem engineers through the construction of nests, which provide important microhabitats and food for hundreds of species of invertebrates. Nesting seabirds may affect the ecology of islands and surrounding seas, principally through the concentration of large quantities of guano, which may enrich the local soil and the surrounding seas. A wide variety of avian ecology field methods, including counts, nest monitoring, and capturing and marking, are used for researching avian ecology. Relationship with humans Since birds are highly visible and common animals, humans have had a relationship with them since the dawn of man. Sometimes, these relationships are mutualistic, like the cooperative honey-gathering among honeyguides and African peoples such as the Borana. Other times, they may be commensal, as when species such as the house sparrow have benefited from human activities. Several species have reconciled to habits of farmers who practice traditional farming. 
Examples include the Sarus Crane that begins nesting in India when farmers flood the fields in anticipation of rains, and the woolly-necked storks that have taken to nesting on a short tree grown for agroforestry beside fields and canals. Several bird species have become commercially significant agricultural pests, and some pose an aviation hazard. Human activities can also be detrimental, and have threatened numerous bird species with extinction (hunting, avian lead poisoning, pesticides, roadkill, wind turbine kills and predation by pet cats and dogs are common causes of death for birds). Birds can act as vectors for spreading diseases such as psittacosis, salmonellosis, campylobacteriosis, mycobacteriosis (avian tuberculosis), avian influenza (bird flu), giardiasis, and cryptosporidiosis over long distances. Some of these are zoonotic diseases that can also be transmitted to humans. Economic importance Domesticated birds raised for meat and eggs, called poultry, are the largest source of animal protein eaten by humans; in 2003, tons of poultry and tons of eggs were produced worldwide. Chickens account for much of human poultry consumption, though domesticated turkeys, ducks, and geese are also relatively common. Many species of birds are also hunted for meat. Bird hunting is primarily a recreational activity except in extremely undeveloped areas. The most important birds hunted in North and South America are waterfowl; other widely hunted birds include pheasants, wild turkeys, quail, doves, partridge, grouse, snipe, and woodcock. Muttonbirding is also popular in Australia and New Zealand. Although some hunting, such as that of muttonbirds, may be sustainable, hunting has led to the extinction or endangerment of dozens of species. Other commercially valuable products from birds include feathers (especially the down of geese and ducks), which are used as insulation in clothing and bedding, and seabird faeces (guano), which is a valuable source of phosphorus and nitrogen. The War of the Pacific, sometimes called the Guano War, was fought in part over the control of guano deposits. Birds have been domesticated by humans both as pets and for practical purposes. Colourful birds, such as parrots and mynas, are bred in captivity or kept as pets, a practice that has led to the illegal trafficking of some endangered species. Falcons and cormorants have long been used for hunting and fishing, respectively. Messenger pigeons, used since at least 1 AD, remained important as recently as World War II. Today, such activities are more common either as hobbies, for entertainment and tourism. Amateur bird enthusiasts (called birdwatchers, twitchers or, more commonly, birders) number in the millions. Many homeowners erect bird feeders near their homes to attract various species. Bird feeding has grown into a multimillion-dollar industry; for example, an estimated 75% of households in Britain provide food for birds at some point during the winter. In religion and mythology Birds play prominent and diverse roles in religion and mythology. In religion, birds may serve as either messengers or priests and leaders for a deity, such as in the Cult of Makemake, in which the Tangata manu of Easter Island served as chiefs or as attendants, as in the case of Hugin and Munin, the two common ravens who whispered news into the ears of the Norse god Odin. 
In several civilisations of ancient Italy, particularly Etruscan and Roman religion, priests were involved in augury, or interpreting the words of birds, while the "auspex" (from which the word "auspicious" is derived) watched their activities to foretell events. They may also serve as religious symbols, as when Jonah (whose name means 'dove') embodied the fright, passivity, mourning, and beauty traditionally associated with doves. Birds have themselves been deified, as in the case of the common peacock, which is perceived as Mother Earth by the people of southern India. In the ancient world, doves were used as symbols of the Mesopotamian goddess Inanna (later known as Ishtar), the Canaanite mother goddess Asherah, and the Greek goddess Aphrodite. In ancient Greece, Athena, the goddess of wisdom and patron deity of the city of Athens, had a little owl as her symbol. In religious images preserved from the Inca and Tiwanaku empires, birds are depicted in the process of transgressing boundaries between earthly and underground spiritual realms. Indigenous peoples of the central Andes maintain legends of birds passing to and from metaphysical worlds. In culture and folklore Birds have featured in culture and art since prehistoric times, when they were represented in early cave painting and carvings. Some birds have been perceived as monsters, including the mythological Roc and the Māori's legendary Pouākai, a giant bird capable of snatching humans. Birds were later used as symbols of power, as in the magnificent Peacock Throne of the Mughal and Persian emperors. With the advent of scientific interest in birds, many paintings of birds were commissioned for books. Among the most famous of these bird artists was John James Audubon, whose paintings of North American birds were a great commercial success in Europe and who later lent his name to the National Audubon Society. Birds are also important figures in poetry; for example, Homer incorporated nightingales into his Odyssey, and Catullus used a sparrow as an erotic symbol in his Catullus 2. The relationship between an albatross and a sailor is the central theme of Samuel Taylor Coleridge's The Rime of the Ancient Mariner, which led to the use of the term as a metaphor for a 'burden'. Other English metaphors derive from birds; vulture funds and vulture investors, for instance, take their name from the scavenging vulture. Aircraft, particularly military aircraft, are frequently named after birds. The predatory nature of raptors makes them popular choices for fighter aircraft such as the F-16 Fighting Falcon and the Harrier Jump Jet, while the names of seabirds may be chosen for aircraft primarily used by naval forces such as the HU-16 Albatross and the V-22 Osprey. Perceptions of bird species vary across cultures. Owls are associated with bad luck, witchcraft, and death in parts of Africa, but are regarded as wise across much of Europe. Hoopoes were considered sacred in Ancient Egypt and symbols of virtue in Persia, but were thought of as thieves across much of Europe and harbingers of war in Scandinavia. In heraldry, birds, especially eagles, often appear in coats of arms. In vexillology, birds are a popular choice on flags. Birds feature in the flag designs of 17 countries and numerous subnational entities and territories. Birds are used by nations to symbolise a country's identity and heritage, with 91 countries officially recognising a national bird.
Birds of prey are highly represented, though some nations have chosen other species of birds with parrots being popular among smaller, tropical nations. In music In music, birdsong has influenced composers and musicians in several ways: they can be inspired by birdsong; they can intentionally imitate bird song in a composition, as Vivaldi, Messiaen, and Beethoven did, along with many later composers; they can incorporate recordings of birds into their works, as Ottorino Respighi first did; or like Beatrice Harrison and David Rothenberg, they can duet with birds. A 2023 archaeological excavation of a 10,000-year-old site in Israel yielded hollow wing bones of coots and ducks with perforations made on the side that are thought to have allowed them to be used as flutes or whistles possibly used by Natufian people to lure birds of prey. Threats and conservation Human activities have caused population decreases or extinction in many bird species. Over a hundred bird species have gone extinct in historical times, although the most dramatic human-caused avian extinctions, eradicating an estimated 750–1800 species, occurred during the human colonisation of Melanesian, Polynesian, and Micronesian islands. Many bird populations are declining worldwide, with 1,227 species listed as threatened by BirdLife International and the IUCN in 2009. There have been long-term declines in North American bird populations, with an estimated loss of 2.9 billion breeding adults, about 30% of the total, since 1970. The most commonly cited human threat to birds is habitat loss. Other threats include overhunting, accidental mortality due to collisions with buildings or vehicles, long-line fishing bycatch, pollution (including oil spills and pesticide use), competition and predation from nonnative invasive species, and climate change. Governments and conservation groups work to protect birds, either by passing laws that preserve and restore bird habitat or by establishing captive populations for reintroductions. Such projects have produced some successes; one study estimated that conservation efforts saved 16 species of bird that would otherwise have gone extinct between 1994 and 2004, including the California condor and Norfolk parakeet. Human activities have allowed the expansion of a few temperate area species, such as the barn swallow and European starling. In the tropics and sub-tropics, relatively more species are expanding due to human activities, particularly due to the spread of crops such as rice whose expansion in south Asia has benefitted at least 64 bird species, though may have harmed many more species.
Biology and health sciences
Biology
null
3416
https://en.wikipedia.org/wiki/Bryozoa
Bryozoa
Bryozoa (also known as the Polyzoa, Ectoprocta or commonly as moss animals) are a phylum of simple, aquatic invertebrate animals, nearly all living in sedentary colonies. Typically about long, they have a special feeding structure called a lophophore, a "crown" of tentacles used for filter feeding. Most marine bryozoans live in tropical waters, but a few are found in oceanic trenches and polar waters. The bryozoans are classified as the marine bryozoans (Stenolaemata), freshwater bryozoans (Phylactolaemata), and mostly-marine bryozoans (Gymnolaemata), a few members of which prefer brackish water. 5,869 living species are known. Originally all of the crown group Bryozoa were colonial, but as an adaptation to a mesopsammal (interstitial spaces in marine sand) life or to deep-sea habitats, secondarily solitary forms have since evolved. Solitary species have been described in four genera (Aethozooides, Aethozoon, Franzenella and Monobryozoon), the last of which has a statocyst-like organ with a supposed excretory function. The terms Polyzoa and Bryozoa were introduced in 1830 and 1831, respectively. Soon after it was named, another group of animals was discovered whose filtering mechanism looked similar, so it was included in Bryozoa until 1869, when the two groups were noted to be very different internally. The new group was given the name "Entoprocta", while the original Bryozoa were called "Ectoprocta". Disagreements about terminology persisted well into the 20th century, but "Bryozoa" is now the generally accepted term. Colonies take a variety of forms, including fans, bushes and sheets. Single animals, called zooids, live throughout the colony and are not fully independent. These individuals can have unique and diverse functions. All colonies have "autozooids", which are responsible for feeding, excretion, and supplying nutrients to the colony through diverse channels. Some classes have specialist zooids like hatcheries for fertilized eggs, colonial defence structures, and root-like attachment structures. Cheilostomata is the most diverse order of bryozoans, possibly because its members have the widest range of specialist zooids. They have mineralized exoskeletons and form single-layered sheets which encrust over surfaces, and some colonies can creep very slowly by using spiny defensive zooids as legs. Each zooid consists of a "cystid", which provides the body wall and produces the exoskeleton, and a "polypide", which holds the organs. Zooids have no special excretory organs, and autozooids' polypides are scrapped when they become overloaded with waste products; usually the body wall then grows a replacement polypide. Their gut is U-shaped, with the mouth inside the crown of tentacles and the anus outside it. Zooids of all the freshwater species are simultaneous hermaphrodites. Although those of many marine species function first as males and then as females, their colonies always contain a combination of zooids that are in their male and female stages. All species emit sperm into the water. Some also release ova into the water, while others capture sperm via their tentacles to fertilize their ova internally. In some species the larvae have large yolks, do not feed, and quickly settle on a surface. Others produce larvae that have little yolk but swim and feed for a few days before settling. After settling, all larvae undergo a radical metamorphosis that destroys and rebuilds almost all the internal tissues.
Freshwater species also produce statoblasts that lie dormant until conditions are favorable, which enables a colony's lineage to survive even if severe conditions kill the mother colony. Predators of marine bryozoans include sea slugs (nudibranchs), fish, sea urchins, pycnogonids, crustaceans, mites and starfish. Freshwater bryozoans are preyed on by snails, insects, and fish. In Thailand, many populations of one freshwater species have been wiped out by an introduced species of snail. Membranipora membranacea, a fast-growing invasive bryozoan off the northeast and northwest coasts of the US, has reduced kelp forests so much that it has affected local fish and invertebrate populations. Bryozoans have spread diseases to fish farms and fishermen. Chemicals extracted from a marine bryozoan species have been investigated for treatment of cancer and Alzheimer's disease, but analyses have not been encouraging. Mineralized skeletons of bryozoans first appear in rocks from the Early Ordovician period, making it the last major phylum to appear in the fossil record. This has led researchers to suspect that bryozoans arose earlier but were initially unmineralized, and may have differed significantly from fossilized and modern forms. In 2021, some research suggested Protomelission, a genus known from the Cambrian period, could be an example of an early bryozoan, but later research suggested that this taxon may instead represent a dasyclad alga. Early fossils are mainly of erect forms, but encrusting forms gradually became dominant. It is uncertain whether the phylum is monophyletic. Bryozoans' evolutionary relationships to other phyla are also unclear, partly because scientists' view of the family tree of animals is mainly influenced by better-known phyla. Both morphological and molecular phylogeny analyses disagree over bryozoans' relationships with entoprocts, about whether bryozoans should be grouped with brachiopods and phoronids in Lophophorata, and whether bryozoans should be considered protostomes or deuterostomes. Description Distinguishing features Bryozoans, phoronids and brachiopods strain food out of the water by means of a lophophore, a "crown" of hollow tentacles. Bryozoans form colonies consisting of clones called zooids that are typically about long. Phoronids resemble bryozoan zooids but are long and, although they often grow in clumps, do not form colonies consisting of clones. Brachiopods, generally thought to be closely related to bryozoans and phoronids, are distinguished by having shells rather like those of bivalves. All three of these phyla have a coelom, an internal cavity lined by mesothelium. Some encrusting bryozoan colonies with mineralized exoskeletons look very like small corals. However, bryozoan colonies are founded by an ancestrula, which is round rather than shaped like a normal zooid of that species. On the other hand, the founding polyp of a coral has a shape like that of its daughter polyps, and coral zooids have no coelom or lophophore. Entoprocts, another phylum of filter-feeders, look rather like bryozoans but their lophophore-like feeding structure has solid tentacles, their anus lies inside rather than outside the base of the "crown" and they have no coelom. Types of zooid All bryozoans are colonial except for one genus, Monobryozoon. Individual members of a bryozoan colony are about long and are known as zooids, since they are not fully independent animals. All colonies contain feeding zooids, known as autozooids. 
Those of some groups also contain non-feeding heterozooids, also known as polymorphic zooids, which serve a variety of functions other than feeding; colony members are genetically identical and co-operate, rather like the organs of larger animals. What type of zooid grows where in a colony is determined by chemical signals from the colony as a whole or sometimes in response to the scent of predators or rival colonies. The bodies of all types have two main parts. The cystid consists of the body wall and whatever type of exoskeleton is secreted by the epidermis. The exoskeleton may be organic (chitin, polysaccharide or protein) or made of the mineral calcium carbonate. The latter is always absent in freshwater species. The body wall consists of the epidermis, basal lamina (a mat of non-cellular material), connective tissue, muscles, and the mesothelium which lines the coelom (main body cavity) – except that in one class, the mesothelium is split into two separate layers, the inner one forming a membranous sac that floats freely and contains the coelom, and the outer one attached to the body wall and enclosing the membranous sac in a pseudocoelom. The other main part of the bryozoan body, known as the polypide and situated almost entirely within the cystid, contains the nervous system, digestive system, some specialized muscles and the feeding apparatus or other specialized organs that take the place of the feeding apparatus. Feeding zooids The most common type of zooid is the feeding autozooid, in which the polypide bears a "crown" of hollow tentacles called a lophophore, which captures food particles from the water. In all colonies a large percentage of zooids are autozooids, and some consist entirely of autozooids, some of which also engage in reproduction. The basic shape of the "crown" is a full circle. Among the freshwater bryozoans (Phylactolaemata) the crown appears U-shaped, but this impression is created by a deep dent in the rim of the crown, which has no gap in the fringe of tentacles. The sides of the tentacles bear fine hairs called cilia, whose beating drives a water current from the tips of the tentacles to their bases, where it exits. Food particles that collide with the tentacles are trapped by mucus, and further cilia on the inner surfaces of the tentacles move the particles towards the mouth in the center. The method used by ectoprocts is called "upstream collecting", as food particles are captured before they pass through the field of cilia that creates the feeding current. This method is also used by phoronids, brachiopods and pterobranchs. The lophophore and mouth are mounted on a flexible tube called the "invert", which can be turned inside-out and withdrawn into the polypide, rather like the finger of a rubber glove; in this position the lophophore lies inside the invert and is folded like the spokes of an umbrella. The invert is withdrawn, sometimes within 60milliseconds, by a pair of retractor muscles that are anchored at the far end of the cystid. Sensors at the tips of the tentacles may check for signs of danger before the invert and lophophore are fully extended. Extension is driven by an increase in internal fluid pressure, which species with flexible exoskeletons produce by contracting circular muscles that lie just inside the body wall, while species with a membranous sac use circular muscles to squeeze this. 
Some species with rigid exoskeletons have a flexible membrane that replaces part of the exoskeleton, and transverse muscles anchored on the far side of the exoskeleton increase the fluid pressure by pulling the membrane inwards. In others there is no gap in the protective skeleton, and the transverse muscles pull on a flexible sac which is connected to the water outside by a small pore; the expansion of the sac increases the pressure inside the body and pushes the invert and lophophore out. In some species the retracted invert and lophophore are protected by an operculum ("lid"), which is closed by muscles and opened by fluid pressure. In one class, a hollow lobe called the "epistome" overhangs the mouth. The gut is U-shaped, running from the mouth, in the center of the lophophore, down into the animal's interior and then back to the anus, which is located on the invert, outside and usually below the lophophore. A network of strands of mesothelium called "funiculi" ("little ropes") connects the mesothelium covering the gut with that lining the body wall. The wall of each strand is made of mesothelium, and surrounds a space filled with fluid, thought to be blood. A colony's zooids are connected, enabling autozooids to share food with each other and with any non-feeding heterozooids. The method of connection varies between the different classes of bryozoans, ranging from quite large gaps in the body walls to small pores through which nutrients are passed by funiculi. There is a nerve ring round the pharynx (throat) and a ganglion that serves as a brain to one side of this. Nerves run from the ring and ganglion to the tentacles and to the rest of the body. Bryozoans have no specialized sense organs, but cilia on the tentacles act as sensors. Members of the genus Bugula grow towards the sun, and therefore must be able to detect light. In colonies of some species, signals are transmitted between zooids through nerves that pass through pores in the body walls, and coordinate activities such as feeding and the retraction of lophophores. The solitary individuals of Monobryozoon are autozooids with pear-shaped bodies. The wider ends have up to 15 short, muscular projections by which the animals anchor themselves to sand or gravel and pull themselves through the sediments. Avicularia and vibracula Some authorities use the term avicularia (plural of avicularium) to refer to any type of zooid in which the lophophore is replaced by an extension that serves some protective function, while others restrict the term to those that defend the colony by snapping at invaders and small predators, killing some and biting the appendages of others. In some species the snapping zooids are mounted on a peduncle (stalk), their bird-like appearance responsible for the term – Charles Darwin described these as like "the head and beak of a vulture in miniature, seated on a neck and capable of movement". Stalked avicularia are placed upside-down on their stalks. The "lower jaws" are modified versions of the opercula that protect the retracted lophophores in autozooids of some species, and are snapped shut "like a mousetrap" by similar muscles, while the beak-shaped upper jaw is the inverted body wall. In other species the avicularia are stationary box-like zooids laid the normal way up, so that the modified operculum snaps down against the body wall. In both types the modified operculum is opened by other muscles that attach to it, or by internal muscles that raise the fluid pressure by pulling on a flexible membrane. 
The actions of these snapping zooids are controlled by small, highly modified polypides that are located inside the "mouth" and bear tufts of short sensory cilia. These zooids appear in various positions: some take the place of autozooids, some fit into small gaps between autozooids, and small avicularia may occur on the surfaces of other zooids. In vibracula, regarded by some as a type of avicularia, the operculum is modified to form a long bristle that has a wide range of motion. They may function as defenses against predators and invaders, or as cleaners. In some species that form mobile colonies, vibracula around the edges are used as legs for burrowing and walking. Structural polymorphs Kenozooids (from the Greek 'empty') consist only of the body wall and funicular strands crossing the interior, and no polypide. The functions of these zooids include forming the stems of branching structures, acting as spacers that enable colonies to grow quickly in a new direction, strengthening the colony's branches, and elevating the colony slightly above its substrate for competitive advantages against other organisms. Some kenozooids are hypothesized to be capable of storing nutrients for the colony. Because kenozooids' function is generally structural, they are called "structural polymorphs." Some heterozooids found in extinct trepostome bryozoans, called mesozooids, are thought to have functioned to space the feeding autozooids an appropriate distance apart. In thin sections of trepostome fossils, mesozooids can be seen in between the tubes that held autozooids; they are smaller tubes that are divided along their length by diaphragms, making them look like rows of box-like chambers sandwiched between autozooidal tubes. Reproductive polymorphs Gonozooids act as brood chambers for fertilized eggs. Almost all modern cyclostome bryozoans have them, but they can be hard to locate on a colony because there are so few gonozooids in one colony. The aperture in gonozooids, which is called an ooeciopore, acts as a point for larvae to exit. Some gonozooids have very complex shapes with autozooidal tubes passing through chambers within them. All larvae released from a gonozooid are clones created by division of a single egg; this is called monozygotic polyembryony, and is a reproductive strategy also used by armadillos. Cheilostome bryozoans also brood their embryos; one of the common methods is through ovicells, capsules attached to autozooids. The autozooids possessing ovicells are normally still able to feed, however, so these are not considered heterozooids. "Female" polymorphs are more common than "male" polymorphs, but specialized zooids that produce sperm are also known. These are called androzooids, and some are found in colonies of Odontoporella bishopi, a species that is symbiotic with hermit crabs and lives on their shells. These zooids are smaller than the others and have four short tentacles and four long tentacles, unlike the autozooids which have 15–16 tentacles. Androzooids are also found in species with mobile colonies that can crawl around. It is possible that androzooids are used to exchange sperm between colonies when two mobile colonies or bryozoan-encrusted hermit crabs happen to encounter one another. Other polymorphs Spinozooids are hollow, movable spines, like very slender, small tubes, present on the surface of colonies, which probably are for defense. 
Some species have miniature nanozooids with small single-tentacled polypides, and these may grow on other zooids or within the body walls of autozooids that have degenerated. Colony forms and composition Although zooids are microscopic, colonies range in size from to over . However, the majority are under across. The shapes of colonies vary widely, depending on the pattern of budding by which they grow, the variety of zooids present and the type and amount of skeletal material they secrete. Some marine species are bush-like or fan-like, supported by "trunks" and "branches" formed by kenozooids, with feeding autozooids growing from these. Colonies of these types are generally unmineralized but may have exoskeletons made of chitin. Others look like small corals, producing heavy lime skeletons. Many species form colonies which consist of sheets of autozooids. These sheets may form leaves, tufts or, in the genus Thalamoporella, structures that resemble an open head of lettuce. The most common marine form, however, is encrusting, in which a one-layer sheet of zooids spreads over a hard surface or over seaweed. Some encrusting colonies may grow to over and contain about 2,000,000 zooids. These species generally have exoskeletons reinforced with calcium carbonate, and the openings through which the lophophores protrude are on the top or outer surface. The moss-like appearance of encrusting colonies is responsible for the phylum's name (from the Ancient Greek words bryon, meaning 'moss', and zōon, meaning 'animal'). Large colonies of encrusting species often have "chimneys", gaps in the canopy of lophophores, through which they swiftly expel water that has been sieved, and thus avoid re-filtering water that is already exhausted. They are formed by patches of non-feeding heterozooids. New chimneys appear near the edges of expanding colonies, at points where the speed of the outflow is already high, and do not change position if the water flow changes. Some freshwater species secrete a mass of gelatinous material, up to in diameter, to which the zooids stick. Other freshwater species have plant-like shapes with "trunks" and "branches", which may stand erect or spread over the surface. A few species can creep at about per day. Each colony grows by asexual budding from a single zooid known as the ancestrula, which is round rather than shaped like a normal zooid. This occurs at the tips of "trunks" or "branches" in forms that have this structure. Encrusting colonies grow round their edges. In species with calcareous exoskeletons, these do not mineralize until the zooids are fully grown. Colony lifespans range from one to about 12 years, and the short-lived species pass through several generations in one season. Species that produce defensive zooids do so only when threats have already appeared, and may do so within 48 hours. The theory of "induced defenses" suggests that production of defenses is expensive and that colonies which defend themselves too early or too heavily will have reduced growth rates and lifespans. This "last minute" approach to defense is feasible because the loss of zooids to a single attack is unlikely to be significant. Colonies of some encrusting species also produce special heterozooids to limit the expansion of other encrusting organisms, especially other bryozoans. In some cases this response is more belligerent if the opposition is smaller, which suggests that zooids on the edge of a colony can somehow sense the size of the opponent.
Some species consistently prevail against certain others, but most turf wars are indecisive and the combatants soon turn to growing in uncontested areas. Bryozoans competing for territory do not use the sophisticated techniques employed by sponges or corals, possibly because the shortness of bryozoan lifespans makes heavy investment in turf wars unprofitable. Bryozoans have contributed to carbonate sedimentation in marine environments since the Ordovician period. Bryozoans account for a wide range of colony forms, which have evolved in different taxonomic groups and vary in sediment-producing ability. The nine basic bryozoan colony-forms are: encrusting, dome-shaped, palmate, foliose, fenestrate, robust branching, delicate branching, articulated and free-living. Most of these sediments come from two distinct groups of colonies: domal, delicate branching, robust branching and palmate; and fenestrate. Fenestrate colonies generate rough particles, both as sediment and as components of stromatoporoid coral reefs. The delicate colonies, however, create coarse sediment and also form the cores of deep-water, subphotic biogenic mounds. Nearly all post-Palaeozoic bryozoan sediments are made up of these growth forms, with the addition of free-living colonies, which occur in significant numbers. "In contrast to the Palaeozoic, post-Palaeozoic bryozoans generated sediment varying more widely with the size of their grains; they grow as they moved from mud, to sand, to gravel." Taxonomy The phylum was originally called "Polyzoa", but this name was eventually replaced by Ehrenberg's term "Bryozoa". The name "Bryozoa" was originally applied only to the animals also known as Ectoprocta, in which the anus lies outside the "crown" of tentacles. After the discovery of the Entoprocta, in which the anus lies within a "crown" of tentacles, the name "Bryozoa" was promoted to phylum level to include the two classes Ectoprocta and Entoprocta. However, in 1869 Hinrich Nitsche regarded the two groups as quite distinct for a variety of reasons, and coined the name "Ectoprocta" for Ehrenberg's "Bryozoa". Despite their apparently similar methods of feeding, they differed markedly anatomically; in addition to the different positions of the anus, ectoprocts have hollow tentacles and a coelom, while entoprocts have solid tentacles and no coelom. Hence the two groups are now widely regarded as separate phyla, and the name "Bryozoa" is now synonymous with "Ectoprocta". This has remained the majority view ever since, although most publications have preferred the name "Bryozoa" rather than "Ectoprocta". Nevertheless, some notable scientists have continued to regard the "Ectoprocta" and Entoprocta as close relatives and group them under "Bryozoa". The ambiguity about the scope of the name "Bryozoa" led to proposals in the 1960s and 1970s that it should be avoided and the unambiguous term "Ectoprocta" should be used. However, the change would have made it harder to find older works in which the phylum was called "Bryozoa", and the desire to avoid ambiguity, if applied consistently to all classifications, would have necessitated renaming of several other phyla and many lower-level groups. In practice, zoological naming of split or merged groups of animals is complex and not completely consistent. Works since 2000 have used various names to resolve the ambiguity, including: "Bryozoa", "Ectoprocta", "Bryozoa (Ectoprocta)", and "Ectoprocta (Bryozoa)". Some have used more than one approach in the same work.
The common name "moss animals" is the literal meaning of "Bryozoa", from the Greek bryon ('moss') and zōa ('animals'), based on the mossy appearance of encrusting species. Until 2008 there were "inadequately known and misunderstood type species belonging to the Cyclostome Bryozoan family Oncousoeciidae." Modern research and experiments have been done using low-vacuum scanning electron microscopy of uncoated type material to critically examine and perhaps revise the taxonomy of three genera belonging to this family, including Oncousoecia, Microeciella, and Eurystrotos. This method permits data to be obtained that would be difficult to recognize with an optical microscope. The valid type species of Oncousoecia was found to be Oncousoecia lobulata. This interpretation stabilizes Oncousoecia by establishing a type species that corresponds to the general usage of the genus. The fellow oncousoeciid Eurystrotos is now believed not to be conspecific with O. lobulata, as previously suggested, but shows enough similarities to be considered a junior synonym of Oncousoecia. Microeciella suborbicularus has also been recently distinguished from O. lobulata and O. dilatans, using this modern method of low-vacuum scanning, with which it had been inaccurately synonymized in the past. A new genus, Junerossia, has also recently been discovered in the family Stomachetosellidae, along with 10 relatively new species of bryozoans such as Alderina flaventa, Corbulella extenuata, Puellina septemcryptica, Junerossia copiosa, Calyptotheca kapaaensis, Bryopesanser serratus, Cribellopora souleorum, Metacleidochasma verrucosa, Disporella compta, and Favosipora adunca. Classification and diversity Counts of formally described species range between 4,000 and 4,500. The Gymnolaemata and especially Cheilostomata have the greatest numbers of species, possibly because of their wide range of specialist zooids. Under the Linnaean system of classification, which is still used as a convenient way to label groups of organisms, living members of the phylum Bryozoa are divided into three classes: the Phylactolaemata, Stenolaemata and Gymnolaemata. Fossil record Fossils of about 15,000 bryozoan species have been found. Bryozoans are among the three dominant groups of Paleozoic fossils. Bryozoans with calcitic skeletons were a major source of the carbonate minerals that make up limestones, and their fossils are extremely common in marine sediments worldwide from the Ordovician onward. However, unlike corals and other colonial animals found in the fossil record, bryozoan colonies did not reach large sizes. Fossil bryozoan colonies are typically found highly fragmented and scattered; the preservation of complete zoaria is uncommon in the fossil record, and relatively little study has been devoted to reassembling fragmented zoaria. The largest known fossil colonies are branching trepostome bryozoans from Ordovician rocks in the United States, reaching 66 centimeters in height. The oldest species with a mineralized skeleton occurs in the Lower Ordovician. It is likely that the first bryozoans appeared much earlier and were entirely soft-bodied, and the Ordovician fossils record the appearance of mineralized skeletons in this phylum. By the Arenigian stage of the Early Ordovician period, about , all the modern orders of stenolaemates were present, and the ctenostome order of gymnolaemates had appeared by the Middle Ordovician, about . The Early Ordovician fossils may also represent forms that had already become significantly different from the original members of the phylum.
Ctenostomes with phosphatized soft tissue are known from the Devonian. Other types of filter feeders appeared around the same time, which suggests that some change made the environment more favorable for this lifestyle. Fossils of cheilostomates, an order of gymnolaemates with mineralized skeletons, first appear in the Mid Jurassic, about , and these have been the most abundant and diverse bryozoans from the Cretaceous to the present. Evidence compiled from the last 100 million years shows that cheilostomatids consistently grew over cyclostomatids in territorial struggles, which may help to explain how cheilostomatids replaced cyclostomatids as the dominant marine bryozoans. Marine fossils from the Paleozoic era, which ended , are mainly of erect forms; those from the Mesozoic are fairly equally divided between erect and encrusting forms; and more recent ones are predominantly encrusting. Fossils of the soft, freshwater phylactolaemates are very rare, appear in and after the Late Permian (which began about ) and consist entirely of their durable statoblasts. There are no known fossils of freshwater members of other classes. Evolutionary family tree Scientists are divided about whether the Bryozoa (Ectoprocta) are a monophyletic group (whether they include all and only a single ancestor species and all its descendants), about what the phylum's closest relatives are in the family tree of animals, and even about whether they should be regarded as members of the protostomes or deuterostomes, the two major groups that account for all moderately complex animals. Molecular phylogeny, which attempts to work out the evolutionary family tree of organisms by comparing their biochemistry and especially their genes, has done much to clarify the relationships between the better-known invertebrate phyla. However, the shortage of genetic data about "minor phyla" such as bryozoans and entoprocts has left their relationships to other groups unclear. Traditional view The traditional view is that the Bryozoa are a monophyletic group, in which the class Phylactolaemata is most closely related to Stenolaemata and Ctenostomatida, the classes that appear earliest in the fossil record. However, in 2005 a molecular phylogeny study that focused on phylactolaemates concluded that these are more closely related to the phylum Phoronida, and especially to the only phoronid species that is colonial, than they are to the other ectoproct classes. That implies that the Ectoprocta are not monophyletic, as the Phoronida would then be a sub-group of ectoprocts even though the standard definition of Ectoprocta excludes the Phoronida. In 2009 another molecular phylogeny study, using a combination of genes from mitochondria and the cell nucleus, concluded that Bryozoa is a monophyletic phylum, in other words includes all the descendants of a common ancestor that is itself a bryozoan. The analysis also concluded that the classes Phylactolaemata, Stenolaemata and Gymnolaemata are monophyletic, but could not determine whether Stenolaemata are more closely related to Phylactolaemata or Gymnolaemata. The Gymnolaemata are traditionally divided into the soft-bodied Ctenostomatida and mineralized Cheilostomata, but the 2009 analysis considered it more likely that neither of these orders is monophyletic and that mineralized skeletons probably evolved more than once within the early Gymnolaemata. Bryozoans' relationships with other phyla are uncertain and controversial.
Traditional phylogeny, based on anatomy and on the development of the adult forms from embryos, has produced no enduring consensus about the position of ectoprocts. Attempts to reconstruct the family tree of animals have largely ignored ectoprocts and other "minor phyla", which have received little scientific study because they are generally tiny, have relatively simple body plans, and have little impact on human economies – despite the fact that the "minor phyla" include most of the variety in the evolutionary history of animals. In the opinion of Ruth Dewel, Judith Winston, and Frank McKinney, "Our standard interpretation of bryozoan morphology and embryology is a construct resulting from over 100 years of attempts to synthesize a single framework for all invertebrates," and takes little account of some peculiar features of ectoprocts. In ectoprocts, all of the larva's internal organs are destroyed during the metamorphosis to the adult form and the adult's organs are built from the larva's epidermis and mesoderm, while in other bilaterians some organs including the gut are built from endoderm. In most bilaterian embryos the blastopore, a dent in the outer wall, deepens to become the larva's gut, but in ectoprocts the blastopore disappears and a new dent becomes the point from which the gut grows. The ectoproct coelom is formed by neither of the processes used by other bilaterians, enterocoely, in which pouches that form on the wall of the gut become separate cavities, nor schizocoely, in which the tissue between the gut and the body wall splits, forming paired cavities. Entoprocts When entoprocts were discovered in the 19th century, they and bryozoans (ectoprocts) were regarded as classes within the phylum Bryozoa, because both groups were sessile animals that filter-fed by means of a crown of tentacles that bore cilia. From 1869 onwards increasing awareness of differences, including the position of the entoproct anus inside the feeding structure and the difference in the early pattern of division of cells in their embryos, caused scientists to regard the two groups as separate phyla, and "Bryozoa" became just an alternative name for ectoprocts, in which the anus is outside the feeding organ. A series of molecular phylogeny studies from 1996 to 2006 has also concluded that bryozoans (ectoprocts) and entoprocts are not sister groups. However, two well-known zoologists, Claus Nielsen and Thomas Cavalier-Smith, maintain on anatomical and developmental grounds that bryozoans and entoprocts are members of the same phylum, Bryozoa. A molecular phylogeny study in 2007 also supported this old idea, while its conclusions about other phyla agreed with those of several other analyses. Grouping into the Lophophorata By 1891 bryozoans (ectoprocts) were grouped with phoronids in a super-phylum called "Tentaculata". In the 1970s comparisons between phoronid larvae and the cyphonautes larva of some gymnolaemate bryozoans produced suggestions that the bryozoans, most of which are colonial, evolved from a semi-colonial species of phoronid. Brachiopods were also assigned to the "Tentaculata", which were renamed Lophophorata as they all use a lophophore for filter feeding. The majority of scientists accept this, but Claus Nielsen thinks these similarities are superficial. The Lophophorata are usually defined as animals with a lophophore, a three-part coelom and a U-shaped gut.
In Nielsen's opinion, phoronids' and brachiopods' lophophores are more like those of pterobranchs, which are members of the phylum Hemichordata. Bryozoans' tentacles bear cells with multiple cilia, while the corresponding cells of phoronids', brachiopods' and pterobranchs' lophophores have one cilium per cell; and bryozoan tentacles have no hemal canal ("blood vessel"), which those of the other three phyla have. If the grouping of bryozoans with phoronids and brachiopods into Lophophorata is correct, the next issue is whether the Lophophorata are protostomes, along with most invertebrate phyla, or deuterostomes, along with chordates, hemichordates and echinoderms. The traditional view was that lophophorates showed a mix of protostome and deuterostome features. Research from the 1970s onwards suggested they were deuterostomes, because of some features that were thought characteristic of deuterostomes: a three-part coelom; radial rather than spiral cleavage in the development of the embryo; and formation of the coelom by enterocoely. However, the coelom of ectoproct larvae shows no sign of division into three sections, and that of adult ectoprocts is different from that of other coelomate phyla as it is built anew from epidermis and mesoderm after metamorphosis has destroyed the larval coelom. Lophophorate molecular phylogenetics Molecular phylogeny analyses from 1995 onwards, using a variety of biochemical evidence and analytical techniques, placed the lophophorates as protostomes and closely related to annelids and molluscs in a super-phylum called Lophotrochozoa. "Total evidence" analyses, which used both morphological features and a relatively small set of genes, came to various conclusions, mostly favoring a close relationship between lophophorates and Lophotrochozoa. A study in 2008, using a larger set of genes, concluded that the lophophorates were closer to the Lophotrochozoa than to deuterostomes, but also that the lophophorates were not monophyletic. Instead, it concluded that brachiopods and phoronids formed a monophyletic group, but bryozoans (ectoprocts) were closest to entoprocts, supporting the original definition of "Bryozoa". Bryozoans are the only major phylum of exclusively clonal animals, composed of modular units known as zooids. Colonial growth allows them, in principle, to develop unrestricted variations in form; despite this, only a small number of basic growth forms have been found, and these have commonly reappeared throughout the history of the bryozoans. Ectoproct molecular phylogenetics The phylogenetic position of the ectoproct bryozoans remains uncertain, but it is certain that they belong to the Protostomia and more specifically to the Lophotrochozoa. This implies that the ectoproct larva is a trochophore with the corona being a homologue of the prototroch; this is supported by the similarity between the coronate larvae and the Type 1 pericalymma larvae of some molluscs and sipunculans, where the prototroch zone is expanded to cover the hyposphere. A study of the mitochondrial DNA sequence suggests that the Bryozoa may be related to the Chaetognatha. Physiology Feeding and excretion Most species are filter feeders that sieve small particles, mainly phytoplankton (microscopic floating plants), out of the water. The freshwater species Plumatella emarginata feeds on diatoms, green algae, cyanobacteria, non-photosynthetic bacteria, dinoflagellates, rotifers, protozoa, small nematodes, and microscopic crustaceans.
While the currents that bryozoans generate to draw food towards the mouth are well understood, the exact method of capture is still debated. All species also flick larger particles towards the mouth with a tentacle, and a few capture zooplankton (planktonic animals) by using their tentacles as cages. In addition, the tentacles, whose surface area is increased by microvilli (small hairs and pleats), absorb organic compounds dissolved in the water. Unwanted particles may be flicked away by tentacles or shut out by closing the mouth. A study in 2008 showed that both encrusting and erect colonies fed more quickly and grew faster in gentle than in strong currents. In some species the first part of the stomach forms a muscular gizzard lined with chitinous teeth that crush armored prey such as diatoms. Wave-like peristaltic contractions move the food through the stomach for digestion. The final section of the stomach is lined with cilia (minute hairs) that compress undigested solids, which then pass through the intestine and out through the anus. There are no nephridia ("little kidneys") or other excretory organs in bryozoans, and it is thought that ammonia diffuses out through the body wall and lophophore. More complex waste products are not excreted but accumulate in the polypide, which degenerates after a few weeks. Some of the old polypide is recycled, but much of it remains as a large mass of dying cells containing accumulated wastes, and this is compressed into a "brown body". When the degeneration is complete, the cystid (outer part of the animal) produces a new polypide, and the brown body remains in the coelom, or in the stomach of the new polypide and is expelled next time the animal defecates. Respiration and circulation There are no respiratory organs, heart or blood vessels. Instead, zooids absorb oxygen and eliminate carbon dioxide through diffusion. Bryozoans accomplish diffusion through either a thin membrane (in the case of anascans and some polyzoans) or pseudopores located on the outer dermis of the zooid. The different bryozoan groups use various methods to share nutrients and oxygen between zooids: some have quite large gaps in the body walls, allowing the coelomic fluid to circulate freely; in others, the funiculi (internal "little ropes") of adjacent zooids connect via small pores in the body wall. Reproduction and life cycles Zooids of all phylactolaemate species are simultaneous hermaphrodites. Although those of many marine species are protandric, in other words function first as males and then as females, their colonies contain a combination of zooids that are in their male and female stages. In all species the ovaries develop on the inside of the body wall, and the testes on the funiculus connecting the stomach to the body wall. Eggs and sperm are released into the coelom, and sperm exit into the water through pores in the tips of some of the tentacles, and then are captured by the feeding currents of zooids that are producing eggs. Some species' eggs are fertilized externally after being released through a pore between two tentacles, which in some cases is at the tip of a small projection called the "intertentacular organ" at the base of a pair of tentacles. Others' eggs are fertilized internally, in the intertentacular organ or in the coelom. All phylactolaemates and stenolaemates, and most gymnolaemates, exhibit placentation, and therefore have lecithotrophic (non-feeding) larvae.
Except for Cyclostomata and the small gymnolaemate family Epistomiidae, which are viviparous, all are brooders. Phylactolaemata brood their embryos in an internal brood sac, but in Gymnolaemata external membranous sacs, skeletal chambers (ovicells) and internal brooding sacs all occur. The developing embryo relies on the egg's yolk, on extraembryonic nutrition (matrotrophy), or on both. In ctenostomes the mother provides a brood chamber for the fertilized eggs, and her polypide disintegrates, providing nourishment to the embryo. Stenolaemates produce specialized zooids to serve as brood chambers, and their eggs divide within these to produce up to 100 identical embryos. Planktotrophic (feeding) larvae are only found in class Gymnolaemata: in the cheilostomatan suborder Malacostegina they are found in the two families Membraniporidae and Electridae, and in the three ctenostome families Alcyonidiidae, Farrellidae, and Hislopiidae. In addition there are a few unconfirmed records, such as the solitary form Aethozoid, whose larvae have never been observed but which is assumed to have planktotrophic larvae. The cleavage of bryozoan eggs is biradial, in other words the early stages are bilaterally symmetrical. It is unknown how the coelom forms, since the metamorphosis from larva to adult destroys all of the larva's internal tissues. In many animals the blastopore, an opening in the surface of the early embryo, tunnels through to form the gut. However, in bryozoans the blastopore closes, and a new opening develops to create the mouth. Bryozoan larvae vary in form, but all have a band of cilia round the body which enables them to swim, a tuft of cilia at the top, and an adhesive sac that everts and anchors them when they settle on a surface. Some gymnolaemate species produce cyphonautes larvae which have little yolk but a well-developed mouth and gut, and live as plankton for a considerable time before settling. These larvae have triangular shells of chitin, with one corner at the top and the base open, forming a hood round the downward-facing mouth. In 2006 it was reported that the cilia of cyphonautes larvae use the same range of techniques as those of adults to capture food. Species that brood their embryos form larvae that are nourished by large yolks, have no gut and do not feed, and such larvae quickly settle on a surface. In all marine species the larvae produce cocoons in which they metamorphose completely after settling: the larva's epidermis becomes the lining of the coelom, and the internal tissues are converted to a food reserve that nourishes the developing zooid until it is ready to feed. The larvae of phylactolaemates produce multiple polypides, so that each new colony starts with several zooids. In all species the founder zooids then grow the new colonies by budding clones of themselves. In phylactolaemates, zooids die after producing several clones, so that living zooids are found only round the edges of a colony. Phylactolaemates can also reproduce asexually by a method that enables a colony's lineage to survive the variable and uncertain conditions of freshwater environments. Throughout summer and autumn they produce disc-shaped statoblasts, masses of cells that function as "survival pods" rather like the gemmules of sponges. Statoblasts form on the funiculus connected to the parent's gut, which nourishes them. As they grow, statoblasts develop protective bivalve-like shells made of chitin.
When they mature, some statoblasts stick to the parent colony, some fall to the bottom ("sessoblasts"), some contain air spaces that enable them to float ("floatoblasts"), and some remain in the parent's cystid to re-build the colony if it dies. Statoblasts can remain dormant for considerable periods, and while dormant can survive harsh conditions such as freezing and desiccation. They can be transported across long distances by animals, floating vegetation, currents and winds, and even in the guts of larger animals. When conditions improve, the valves of the shell separate and the cells inside develop into a zooid that tries to form a new colony. Plumatella emarginata produces both "sessoblasts", which enable the lineage to control a good territory even if hard times decimate the parent colonies, and "floatoblasts", which spread to new sites. New colonies of Plumatella repens produce mainly "sessoblasts" while mature ones switch to "floatoblasts". A study estimated that one group of colonies in a patch measuring produced 800,000 statoblasts. Cupuladriid Bryozoa are capable of both sexual and asexual reproduction. The sexually reproducing colonies (aclonal) are the result of a larval cupuladriid growing into an adult stage whereas the asexual colonies(clonal) are a result of a fragment of a colony of cupuladriids growing into its own colony. The different forms of reproduction in cupuladriids are achieved through a variety of methods depending on the morphology and classification of the zooid. Ecology Habitats and distribution Most marine species live in tropical waters at depths less than . However, a few have been found in deep-sea trenches, especially around cold seeps, and others near the poles. The great majority of bryozoans are sessile. Typically, sessile bryozoans live on hard substrates including rocks, sand or shells. Boring bryozoans leave unique borehole traces after dissolving calcium carbonate substrates. Encrusting forms are much the commonest of these in shallow seas, but erect forms become more common as the depth increases. An example of incrustation on pebbles and cobbles is found in the diverse Pleistocene bryozoans found in northern Japan, where fossils have been found of single stones covered with more than 20 bryozoan species. Sediments with smaller particles, like sand or silt, are usually unsuitable habitat for bryozoans, but tiny colonies have been found encrusting grains of coarse sand. Some bryozoan species specialize in colonizing marine algae, seagrasses, and even mangrove roots; the genus Amphibiobeania lives on the leaves of mangrove trees and is called "amphibious" because it can survive regular exposure to air at low tide. There are a variety of "free-living" bryozoans that live un-attached to a substrate. A few forms such as Cristatella can move. Lunulitiform cheilostomes are one group of free-living bryozoans with mobile colonies. They form small round colonies un-attached to any substrate; colonies of the genus Selenaria have been observed to "walk" around using setae. Another cheilostome family, the Cupuladriidae, convergently evolved similarly shaped colonies capable of movement. When observed in an aquarium, Selenaria maculata colonies were recorded to crawl at a speed of one meter per hour, climb over each other, move toward light, and right themselves when turned upside-down. Later study of this genus showed that neuroelectrical activity in the colonies increased in correlation with movement toward light sources. 
It is theorized that the capacity for movement arose as a side effect when colonies evolved longer setae for unburying themselves from sediment. Other free-living bryozoans are moved freely by waves, currents, or other phenomena. An Antarctic species, Alcyonidium pelagosphaera, consists of floating colonies. The pelagic species is between in diameter, has the shape of a hollow sphere and consists of a single layer of autozooids. It is still not known if these colonies are pelagic their whole life or only represents a temporarily and previously undescribed juvenile stage. Colonies of the species Alcyonidium disciforme, which is disc-shaped and similarly free-living, inhabit muddy seabeds in the Arctic and can sequester sand grains they have engulfed, potentially using the sand as ballast to turn themselves right-side-up after they have been overturned. Some bryozoan species can form bryoliths, sphere-shaped free-living colonies that grow outward in all directions as they roll about on the seabed. In 2014 it was reported that the bryozoan Fenestrulina rugula had become a dominant species in parts of Antarctica. Global warming has increased the rate of scouring by icebergs, and this species is particularly adept at recolonizing scoured areas. The phylactolaemates live in all types of freshwater environment – lakes and ponds, rivers and streams, and estuaries – and are among the most abundant sessile freshwater animals. Some ctenostomes are exclusively freshwater while others prefer brackish water but can survive in freshwater. Scientists' knowledge of freshwater bryozoan populations in many parts of the world is incomplete, even in some parts of Europe. It was long thought that some freshwater species occurred worldwide, but since 2002 all of these have been split into more localized species. Bryozoans grow in clonal colonies. A larval Bryozoan settles on a hard substance and produces a colony asexually through budding. These colonies can grow thousands of individual zooids in a relatively short period of time. Even though colonies of zooids grow through asexual reproduction, Bryozoans are hermaphrodites and new colonies can be formed through sexual reproduction and the generation of free swimming larvae. When colonies grow too large, however, they can split in two. This is the only case where asexual reproduction results in a new colony separate from its predecessor. Most colonies are stationary. Indeed, these colonies tend to be settled on immobile substances such as sediment and coarse substances. There are some colonies of freshwater species such as Cristatella mucedo that are able to move slowly on a creeping foot. Interactions with non-human organisms Marine species are common on coral reefs, but seldom a significant proportion of the total biomass. In temperate waters, the skeletons of dead colonies form a significant component of shell gravels, and live ones are abundant in these areas. The marine lace-like bryozoan Membranipora membranacea produces spines in response to predation by several species of sea slugs (nudibranchs). Other predators on marine bryozoans include fish, sea urchins, pycnogonids, crustaceans, mites and starfish. In general marine echinoderms and molluscs eat masses of zooids by gouging pieces of colonies, breaking their mineralized "houses", while most arthropod predators on bryozoans eat individual zooids. In freshwater, bryozoans are among the most important filter feeders, along with sponges and mussels. 
Freshwater bryozoans are attacked by many predators, including snails, insects, and fish. In Thailand the introduced species Pomacea canaliculata (golden apple snail), which is generally a destructive herbivore, has wiped out phylactolaemate populations wherever it has appeared. P. canaliculata also preys on a common freshwater gymnolaemate, but with less devastating effect. Indigenous snails do not feed on bryozoans. Several species of the hydroid family Zancleidae have symbiotic relationships with bryozoans, some of which are beneficial to the hydroids while others are parasitic. Modifications appear in the shapes of some these hydroids, for example smaller tentacles or encrustation of the roots by bryozoans. The bryozoan Alcyonidium nodosum protects the whelk Burnupena papyracea against predation by the powerful and voracious rock lobster Jasus lalandii. While whelk shells encrusted by the bryozoans are stronger than those without this reinforcement, chemical defenses produced by the bryozoans are probably the more significant deterrent. In the Banc d'Arguin offshore Mauritania the species Acanthodesia commensale, which is generally growing attached to gravel and hard-substrate, has formed a facultative symbiotic relationship with hermit crabs of the species Pseudopagurus cf. granulimanus resulting in egg-size structures known as bryoliths. Nucleating on an empty gastropod shell, the bryozoan colonies form multilamellar skeletal crusts that produce spherical encrustations and extend the living chamber of the hermit crab through helicospiral tubular growth. Some phylactolaemate species are intermediate hosts for a group of myxozoa that have also been found to cause proliferative kidney disease, which is often fatal in salmonid fish, and has severely reduced wild fish populations in Europe and North America. Membranipora membranacea, whose colonies feed and grow exceptionally fast in a wide range of current speeds, was first noticed in the Gulf of Maine in 1987 and quickly became the most abundant organism living on kelps. This invasion reduced the kelp population by breaking their fronds, so that its place as the dominant "vegetation" in some areas was taken by another invader, the large alga Codium fragile tomentosoides. These changes reduced the area of habitat available for local fish and invertebrates. M. membranacea has also invaded the northwest coast of the US. A few freshwater species have been also found thousands of kilometers from their native ranges. Some may have been transported naturally as statoblasts. Others more probably were spread by humans, for example on imported water plants or as stowaways on ships. Interaction with humans Fish farms and hatcheries have lost stock to proliferative kidney disease, which is caused by one or more myxozoans that use bryozoans as alternate hosts. Some fishermen in the North Sea have had to find other work because of a form of eczema (a skin disease) known as "Dogger Bank itch", caused by contact with bryozoans that have stuck to nets and lobster pots. Marine bryozoans are often responsible for biofouling on ships' hulls, on docks and marinas, and on offshore structures. They are among the first colonizers of new or recently cleaned structures. Freshwater species are occasional nuisances in water pipes, drinking water purification equipment, sewage treatment facilities, and the cooling pipes of power stations. A group of chemicals called bryostatins can be extracted from the marine bryozoan Bugula neritina. 
In 2001 pharmaceutical company GPC Biotech licensed bryostatin 1 from Arizona State University for commercial development as a treatment for cancer. GPC Biotech canceled development in 2003, saying that bryostatin 1 showed little effectiveness and some toxic side effects. In January 2008 a clinical trial was submitted to the United States National Institutes of Health to measure the safety and effectiveness of bryostatin 1 in the treatment of Alzheimer's disease. However, no participants had been recruited by the end of December 2008, when the study was scheduled for completion. More recent work shows it has positive effects on cognition in patients with Alzheimer's disease, with few side effects. About of bryozoans must be processed to extract of bryostatin. As a result, synthetic equivalents have been developed that are simpler to produce and apparently at least as effective.
Bay leaf
The bay leaf is an aromatic leaf commonly used as a herb in cooking. It can be used whole, either dried or fresh, in which case it is removed from the dish before consumption, or less commonly used in ground form. The flavor that a bay leaf imparts to a dish has not been universally agreed upon, but many agree it is a subtle addition. Bay leaves come from various plants and are used for their distinctive flavor and fragrance. The most common source is the bay laurel (Laurus nobilis). Other types include California bay laurel, Indian bay leaf, West Indian bay laurel, and Mexican bay laurel. Bay leaves contain essential oils, such as eucalyptol, terpenes, and methyleugenol, which contribute to their taste and aroma. Bay leaves are used in cuisines including Indian, Filipino, European, and Caribbean. They are typically used in soups, stews, meat, seafood, and vegetable dishes. The leaves should be removed from the cooked food before eating as they can be abrasive in the digestive tract. Bay leaves are used as an insect repellent in pantries and as an active ingredient in killing jars for entomology. In Eastern Orthodoxy liturgy, they are used to symbolize Jesus' destruction of Hades and freeing of the dead. While some visually similar plants have poisonous leaves, bay leaves are not toxic. However, they remain stiff even after cooking and may pose a choking hazard or cause harm to the digestive tract if swallowed whole or in large pieces. Canadian food and drug regulations set specific standards for bay leaves, including limits on ash content, moisture levels, and essential oil content. Sources Bay leaves come from several plants, such as: Bay laurel (Laurus nobilis, Lauraceae). Fresh or dried bay leaves are used in cooking for their distinctive flavour and fragrance. The leaves should be removed from the cooked food before eating (see safety section below). The leaves are often used to flavour soups, stews, braises and pâtés in many countries. The fresh leaves are very mild and do not develop their full flavour until several weeks after picking and drying. California bay leaf. The leaf of the California bay tree (Umbellularia californica, Lauraceae), also known as California laurel, Oregon myrtle, and pepperwood, is similar to the Mediterranean bay laurel but contains the toxin umbellulone, which can cause methemoglobinemia. Indian bay leaf or malabathrum (Cinnamomum tamala, Lauraceae) differs from bay laurel leaves, which are shorter and light- to medium-green in colour, with one large vein down the length of the leaf. Indian bay leaves are about twice as long and wider, usually olive green in colour, and have three veins running the length of the leaf. Culinarily, Indian bay leaves are quite different, having a fragrance and taste similar to cinnamon (cassia) bark, but milder. Indonesian bay leaf or Indonesian laurel (salam leaf, Syzygium polyanthum, Myrtaceae) is not commonly found outside Indonesia; this herb is applied to meat and, less often, to rice and to vegetables. West Indian bay leaf, the leaf of the West Indian bay tree (Pimenta racemosa, Myrtaceae) is used culinarily (especially in Caribbean cuisine) and to produce the cologne called bay rum. Mexican bay leaf (Litsea glaucescens, Lauraceae). Chemical constituents The leaves of the European / Mediterranean plant Laurus nobilis contain about 1.3% essential oils (ol. 
lauri folii), consisting of 45% eucalyptol, 12% other terpenes, 8–12% terpinyl acetate, 3–4% sesquiterpenes, 3% methyleugenol, and other α- and β-pinenes, phellandrene, linalool, geraniol, terpineol, and also contain lauric acid. Taste and aroma If eaten whole, Laurus nobilis bay leaves are pungent and have a sharp, bitter taste. As with many spices and flavourings, the fragrance of the bay leaf is more noticeable than its taste. When the leaf is dried, the aroma is herbal, slightly floral, and somewhat similar to oregano and thyme. Myrcene, a component of many essential oils used in perfumery, can be extracted from this bay leaf. They also contain eugenol. Uses In Indian cuisine, bay laurel leaves are sometimes used in place of Indian bay leaf, although they have a different flavour. They are most often used in rice dishes like biryani and as an ingredient in garam masala. Bay leaves are called (, in Hindi), Tejpātā (তেজপাতা) in Bengali, তেজ পাত in Assamese and usually rendered into English as Tej Patta. In the Philippines, dried bay laurel leaves are used in several Filipino dishes, such as menudo, beef pares, and adobo. Bay leaves were used for flavouring by the ancient Greeks. They are a fixture in the cooking of many European cuisines (particularly those of the Mediterranean), as well as in the Americas. They are used in soups, stews, brines, meat, seafood, vegetable dishes, and sauces. The leaves also flavour many classic French and Italian dishes. The leaves are most often used whole (sometimes in a ) and removed before serving (they can be abrasive in the digestive tract). Thai and Laotian cuisine employs bay leaf (, ) in a few Arab-influenced dishes, notably massaman curry. Bay leaves can also be crushed or ground before cooking. Crushed bay leaves impart more fragrance than whole leaves, but are more difficult to remove and thus they are often used in a muslin bag or tea infuser. Ground bay laurel may be substituted for whole leaves and does not need to be removed, but it is much stronger. To brew tea, bay leaves are best boiled for a brief period—typically 3 minutes—to prevent bitterness, as prolonged boiling may overpower the tea's flavor. Fresh bay leaves impart a stronger aroma, while dried leaves require longer steeping for a similar effect. Bay leaves are also used in the making of jerk chicken in the Caribbean Islands. The bay leaves are soaked and placed on the cool side of the grill. Pimento sticks are placed on top of the leaves, and the chicken is placed on top and smoked. The leaves are also added whole to soups, stews, and other Caribbean dishes. Bay leaves can also be used scattered in a pantry to repel meal moths, flies, and cockroaches. Mediouni-Ben Jemaa and Tersim 2011 find the essential oil to be usable as an insect repellent. Bay leaves have been used in entomology as the active ingredient in killing jars. The crushed, fresh, young leaves are put into the jar under a layer of paper. The vapors they release kill insects slowly but effectively and keep the specimens relaxed and easy to mount. The leaves discourage the growth of molds. They are not effective for killing large beetles and similar specimens, but insects that have been killed in a cyanide killing jar can be transferred to a laurel jar to await mounting. There is confusion in the literature about whether Laurus nobilis is a source of cyanide to any practical extent, but there is no evidence that cyanide is relevant to its value in killing jars. 
It certainly is rich in various essential oil components that could incapacitate insects in high concentrations; such compounds include 1,8-cineole, alpha-terpinyl acetate, and methyl eugenol. It also is unclear to what extent the alleged effect of cyanide released by the crushed leaves has been mis-attributed to Laurus nobilis in confusion with the unrelated Prunus laurocerasus, the so-called cherry laurel, which certainly does contain dangerous concentrations of cyanogenic glycosides together with the enzymes to generate the hydrogen cyanide from the glycocides if the leaf is physically damaged. Bay leaves are used in Eastern Orthodoxy liturgy. To mark Jesus' destruction of Hades and freeing of the dead, parishioners throw bay leaves and flowers into the air, letting them flutter to the ground. Safety Some members of the laurel family, as well as the unrelated but visually similar mountain laurel and cherry laurel, have leaves that are poisonous to humans and livestock. While these plants are not sold anywhere for culinary use, their visual similarity to bay leaves has led to the oft-repeated belief that bay leaves should be removed from food after cooking because they are poisonous. This is not true; bay leaves may be eaten without toxic effect. However, they remain unpleasantly stiff even after thorough cooking, and if swallowed whole or in large pieces they may pose a risk of harming the digestive tract or causing choking. Thus, most recipes that use bay leaves will recommend their removal after the cooking process has finished. Canadian food and drug regulations The Canadian government requires that ground bay leaves contain no more than 4.5% total ash material, with a maximum of 0.5% of which is insoluble in hydrochloric acid. To be considered dried, they must contain 7% moisture or less. The oil content cannot be less than 1 milliliter per 100 grams of the spice.
Bulletin board system
A bulletin board system (BBS), also called a computer bulletin board service (CBBS), is a computer server running software that allows users to connect to the system using a terminal program. Once logged in, the user performs functions such as uploading and downloading software and data, reading news and bulletins, and exchanging messages with other users through public message boards and sometimes via direct chatting. In the early 1980s, message networks such as FidoNet were developed to provide services such as NetMail, which is similar to internet-based email. Many BBSes also offered online games in which users could compete with each other. BBSes with multiple phone lines often provided chat rooms, allowing users to interact with each other. Bulletin board systems were in many ways a precursor to the modern form of the World Wide Web, social networks, and other aspects of the Internet. Low-cost, high-performance asynchronous modems drove the use of online services and BBSes through the early 1990s. InfoWorld estimated that there were 60,000 BBSes serving 17 million users in the United States alone in 1994, a collective market much larger than major online services such as CompuServe. The introduction of inexpensive dial-up internet service and the Mosaic web browser offered ease of use and global access that BBS and online systems did not provide, and led to a rapid crash in the market starting in late 1994 to early 1995. Over the next year, many of the leading BBS software providers went bankrupt and tens of thousands of BBSes disappeared. Today, BBSing survives largely as a nostalgic hobby in most parts of the world, but it is still a popular form of communication for middle aged Taiwanese (see PTT Bulletin Board System). Most surviving BBSes are accessible over Telnet and typically offer free email accounts, FTP services, and IRC. Some offer access through packet switched networks or packet radio connections. History Precursors A precursor to the public bulletin board system was Community Memory, which started in August 1973 in Berkeley, California. Microcomputers did not exist at that time, and modems were both expensive and slow. Community Memory ran on a mainframe computer and was accessed through terminals located in several San Francisco Bay Area neighborhoods. The poor quality of the original modem connecting the terminals to the mainframe prompted Community Memory hardware person, Lee Felsenstein, to invent the Pennywhistle modem, whose design was influential in the mid-1970s. Community Memory allowed the user to type messages into a computer terminal after inserting a coin, and offered a "pure" bulletin board experience with public messages only (no email or other features). It did offer the ability to tag messages with keywords, which the user could use in searches. The system acted primarily in the form of a buy and sell system with the tags taking the place of the more traditional classifications. But users found ways to express themselves outside these bounds, and the system spontaneously created stories, poetry and other forms of communications. The system was expensive to operate, and when their host machine became unavailable and a new one could not be found, the system closed in January 1975. Similar functionality was available to most mainframe users, which might be considered a sort of ultra-local BBS when used in this fashion. 
Commercial systems, expressly intended to offer these features to the public, became available in the late 1970s and formed the online service market that lasted into the 1990s. One particularly influential example was PLATO, which had thousands of users by the late 1970s, many of whom used the messaging and chat room features of the system in the same way that would later become common on BBSes. The first BBSes Early modems were generally either expensive or very simple devices using acoustic couplers to handle telephone operation. The user would pick up the phone, dial a number, then press the handset into rubber cups on the top of the modem. Disconnecting at the end of a call required the user to pick up the handset and return it to the phone. Examples of direct-connecting modems did exist, and these often allowed the host computer to send it commands to answer or hang up calls, but these were very expensive devices used by large banks and similar companies. With the introduction of microcomputers with expansion slots, like the S-100 bus machines and Apple II, it became possible for the modem to communicate instructions and data on separate lines. These machines typically only supported asynchronous communications, and synchronous modems were much more expensive than asynchronous modems. A number of modems of this sort were available by the late 1970s. This made the BBS possible for the first time, as it allowed software on the computer to pick up an incoming call, communicate with the user, and then hang up the call when the user logged off. The first public dial-up BBS was developed by Ward Christensen and Randy Suess, members of the Chicago Area Computer Hobbyists' Exchange (CACHE). According to an early interview, when Chicago was snowed under during the Great Blizzard of 1978, the two began preliminary work on the Computerized Bulletin Board System, or CBBS. The system came into existence largely through a fortuitous combination of Christensen having a spare S-100 bus computer and an early Hayes internal modem, and Suess's insistence that the machine be placed at his house in Chicago where it would be a local phone call for more users. Christensen patterned the system after the cork board his local computer club used to post information like "need a ride". CBBS officially went online on 16 February 1978. CBBS, which kept a count of callers, reportedly connected 253,301 callers before it was finally retired. Smartmodem A key innovation required for the popularization of the BBS was the Smartmodem manufactured by Hayes Microcomputer Products. Internal modems like the ones used by CBBS and similar early systems were usable, but generally expensive due to the manufacturer having to make a different modem for every computer platform they wanted to target. They were also limited to those computers with internal expansion, and could not be used with other useful platforms like video terminals. External modems were available for these platforms but required the phone to be dialed using a conventional handset. Internal modems could be software-controlled to perform outbound and inbound calls, but external modems had only the data pins to communicate with the host system. Hayes' solution to the problem was to use a small microcontroller to implement a system that examined the data flowing into the modem from the host computer, watching for certain command strings. 
This allowed commands to be sent to and from the modem using the same data pins as all the rest of the data, meaning it would work on any system that could support even the most basic modems. The Smartmodem could pick up the phone, dial numbers, and hang up again, all without any operator intervention. The Smartmodem was not necessary for BBS use but made overall operation dramatically simpler. It also improved usability for the caller, as most terminal software allowed different phone numbers to be stored and dialed on command, allowing the user to easily connect to a series of systems. The introduction of the Smartmodem led to the first real wave of BBS systems. Limited in speed and storage capacity, these systems were normally dedicated solely to messaging, private email and public forums. File transfers were extremely slow at these speeds, and file libraries were typically limited to text files containing lists of other BBS systems. These systems attracted a particular type of user who used the BBS as a unique type of communications medium, and when these local systems were crowded from the market in the 1990s, their loss was lamented for many years. Higher speeds, commercialization Speed improved with the introduction of 1200 bit/s asynchronous modems in the early 1980s, giving way to 2400 bit/s fairly rapidly. The improved performance led to a substantial increase in BBS popularity. Most of the information was displayed using ordinary ASCII text or ANSI art, but a number of systems attempted character-based graphical user interfaces (GUIs) which began to be practical at 2400 bit/s. There was a lengthy delay before 9600 bit/s models began to appear on the market. 9600 bit/s was not even established as a strong standard before V.32bis at 14.4 kbit/s took over in the early 1990s. This period also saw the rapid rise in capacity and a dramatic drop in the price of hard drives. By the late 1980s, many BBS systems had significant file libraries, and this gave rise to leechingusers calling BBSes solely for their files. These users would use the modem for some time, leaving less time for other users, who got busy signals. The resulting upheaval eliminated many of the pioneering message-centric systems. This also gave rise to a new class of BBS systems, dedicated solely to file upload and downloads. These systems charged for access, typically a flat monthly fee, compared to the per-hour fees charged by Event Horizons BBS and most online services. Many third-party services were developed to support these systems, offering simple credit card merchant account gateways for the payment of monthly fees, and entire file libraries on compact disk that made initial setup very easy. Early 1990s editions of Boardwatch were filled with ads for single-click install solutions dedicated to these new sysops. While this gave the market a bad reputation, it also led to its greatest success. During the early 1990s, there were a number of mid-sized software companies dedicated to BBS software, and the number of BBSes in service reached its peak. Towards the early 1990s, BBS became so popular that it spawned three monthly magazines, Boardwatch, BBS Magazine, and in Asia and Australia, Chips 'n Bits Magazine which devoted extensive coverage of the software and technology innovations and people behind them, and listings to US and worldwide BBSes. In addition, in the US, a major monthly magazine, Computer Shopper, carried a list of BBSes along with a brief abstract of each of their offerings. 
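The Smartmodem's command-string interface can be illustrated with a short sketch showing how host software might tone-dial a number through a Hayes-compatible modem. This is only a minimal illustration, assuming the third-party pyserial package and a hypothetical serial port name; the AT strings used (ATZ, ATDT, ATH) are the standard Hayes reset, dial and hang-up commands.

```python
# Minimal sketch: driving a Hayes-compatible modem with AT command strings.
# Assumes the third-party "pyserial" package; the port name is hypothetical.
import time
import serial  # pip install pyserial

def dial_bbs(port="/dev/ttyS0", number="5551234", baud=2400):
    modem = serial.Serial(port, baudrate=baud, timeout=5)
    try:
        modem.write(b"ATZ\r")                            # reset modem to a known state
        time.sleep(1)
        modem.write(b"ATDT" + number.encode() + b"\r")   # tone-dial the remote system
        response = modem.read(64)                        # e.g. b"CONNECT 2400" on success
        return b"CONNECT" in response
    finally:
        modem.close()

# Hanging up later would use the Hayes escape sequence:
# send "+++", pause about a second, then send "ATH\r".
```

Because the commands travel over the same serial line as ordinary data, the same approach worked on any computer or terminal that could talk to a basic modem.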
GUIs Through the late 1980s and early 1990s, there was considerable experimentation with ways to develop user-friendly interfaces for BBSes. Almost every popular system used ANSI-based color menus to make reading easier on capable hardware and terminal emulators, and most also allowed cursor commands to offer command-line recall and similar features. Another common feature was the use of autocomplete to make menu navigation simpler, a feature that would not re-appear on the Web until decades later. A number of systems also made forays into GUI-based interfaces, either using character graphics sent from the host, or using custom GUI-based terminal systems. The latter initially appeared on the Macintosh platform, where TeleFinder and FirstClass became very popular. FirstClass offered a host of features that would be difficult or impossible under a terminal-based solution, including bi-directional information flow and non-blocking operation that allowed the user to exchange files in both directions while continuing to use the message system and chat, all in separate windows. Will Price's "Hermes", released in 1988, combined a familiar PC style with Macintosh GUI interface. (Hermes was already "venerable" by 1994 although the Hermes II release remained popular.) Skypix featured on Amiga a complete markup language. It used a standardized set of icons to indicate mouse driven commands available online and to recognize different filetypes present on BBS storage media. It was capable of transmitting data like images, audio files, and audio clips between users linked to the same BBS or off-line if the BBS was in the circuit of the FidoNet organization. On the PC, efforts were more oriented to extensions of the original terminal concept, with the GUI being described in the information on the host. One example was the Remote Imaging Protocol, essentially a picture description system, which remained relatively obscure. Probably the ultimate development of this style of operation was the dynamic page implementation of the University of Southern California BBS (USCBBS) by Susan Biddlecomb, which predated the implementation of the HTML Dynamic web page. A complete Dynamic web page implementation was accomplished using TBBS with a TDBS add-on presenting a complete menu system individually customized for each user. Rise of the Internet and decline of BBS The demand for complex ANSI and ASCII screens and larger file transfers taxed available channel capacity, which in turn increased demand for faster modems. 14.4 kbit/s modems were standard for a number of years while various companies attempted to introduce non-standard systems with higher performancenormally about 19.2 kbit/s. Another delay followed due to a long V.34 standards process before 28.8 kbit/s was released, only to be quickly replaced by 33.6 kbit/s, and then 56 kbit/s. These increasing speeds had the side effect of dramatically reducing the noticeable effects of channel efficiency. When modems were slow, considerable effort was put into developing the most efficient protocols and display systems possible. TCP/IP ran slowly over 1200 bit/s modems. 56 kbit/s modems could access the protocol suite more quickly than with slower modems. Dial-up Internet service became widely available in the mid-1990s to the general public outside of universities and research laboratories, and connectivity was included in most general-use operating systems by default as Internet access became popular. 
These developments together resulted in the sudden obsolescence of bulletin board technology in 1995 and the collapse of its supporting market. Technically, Internet service offered an enormous advantage over BBS systems, as a single connection to the user's Internet service provider allowed them to contact services around the world. In comparison, BBS systems relied on a direct point-to-point connection, so even dialing multiple local systems required multiple phone calls. Internet protocols also allowed a single connection to be used to contact multiple services simultaneously; for example, downloading files from an FTP library while checking the weather on a local news website. Even with a shell account, it was possible to multitask using job control or a terminal multiplexer such as GNU Screen. In comparison, a connection to a BBS allowed access only to the information on that system. Estimating numbers According to the FidoNet Nodelist, BBSes reached their peak usage around 1996, which was the same period that the World Wide Web and AOL became mainstream. BBSes rapidly declined in popularity thereafter, and were replaced by systems using the Internet for connectivity. Some of the larger commercial BBSes, such as MaxMegabyte and ExecPC BBS, evolved into Internet service providers. The website textfiles.com serves as an archive that documents the history of the BBS. The historical BBS list on textfiles.com contains over 105,000 BBSes that have existed over a span of 20 years in North America alone. The owner of textfiles.com, Jason Scott, also produced BBS: The Documentary, a DVD film that chronicles the history of the BBS and features interviews with well-known people (mostly from the United States) from the heyday BBS era. In the 2000s, most traditional BBS systems migrated to the Internet using Telnet or SSH protocols. As of September 2022, between 900 and 1000 are thought to be active via the Internet fewer than 30 of these being of the traditional "dial-up" (modem) variety. Software and hardware Unlike modern websites and online services that are typically hosted by third-party companies in commercial data centers, BBS computers (especially for smaller boards) were typically operated from the system operator's home. As such, access could be unreliable, and in many cases, only one user could be on the system at a time. Only larger BBSes with multiple phone lines using specialized hardware, multitasking software, or a LAN connecting multiple computers, could host multiple simultaneous users. The first BBSes each used their own unique software, quite often written entirely or at least customized by the system operators themselves, running on early S-100 bus microcomputer systems such as the Altair 8800, IMSAI 8080 and Cromemco under the CP/M operating system. Soon after, BBS software was being written for all of the major home computer systems of the late 1970s erathe Apple II, Atari 8-bit computers, Commodore PET, TI-99/4A, and TRS-80 being some of the most popular. In 1981, the IBM Personal Computer was introduced and MS-DOS soon became the operating system on which the majority of BBS programs were run. RBBS-PC, ported over from the CP/M world, and Fido BBS, developed by Tom Jennings (who later founded FidoNet) were the first notable MS-DOS BBS programs. Many successful commercial BBS programs were developed, such as PCBoard BBS, RemoteAccess BBS, Magpie and Wildcat! BBS. 
Popular freeware BBS programs included Telegard BBS and Renegade BBS, which both had early origins from leaked WWIV BBS source code. BBS systems on other systems remained popular, especially home computers, largely because they catered to the audience of users running those machines. The ubiquitous Commodore 64 (introduced in 1982) was a common platform in the 1980s. Popular commercial BBS programs were Blue Board, Ivory BBS, Color64 and CNet 64. There was also a devoted contingent of BBS users on TI-99/4A computers, long after Texas Instruments had discontinued the computer in the aftermath of their price war with Commodore. Popular BBSes for the TI-99/4A included Techie, TIBBS (Texas Instruments Bulletin Board System), TI-COMM, and Zyolog. In the early 1990s, a small number of BBSes were also running on the Commodore Amiga. Popular BBS software for the Amiga were ABBS, Amiexpress, C-Net, StormforceBBS, Infinity and Tempest. There was also a small faction of devoted Atari BBSes that used the Atari 800, then the 800XL, and eventually the 1040ST. The earlier machines generally lacked hard drive capabilities, which limited them primarily to messaging. MS-DOS continued to be the most popular operating system for BBS use up until the mid-1990s, and in the early years, most multi-node BBSes were running under a DOS based multitasker such as DESQview or consisted of multiple computers connected via a LAN. In the late 1980s, a handful of BBS developers implemented multitasking communications routines inside their software, allowing multiple phone lines and users to connect to the same BBS computer. These included Galacticomm's MajorBBS (later WorldGroup), eSoft The Bread Board System (TBBS), and Falken. Other popular BBS's were Maximus and Opus, with some associated applications such as BinkleyTerm being based on characters from the Berkley Breathed cartoon strip of Bloom County. Though most BBS software had been written in BASIC or Pascal (with some low-level routines written in assembly language), the C language was starting to gain popularity. By 1995, many of the DOS-based BBSes had begun switching to modern multitasking operating systems, such as OS/2, Windows 95, and Linux. One of the first graphics-based BBS applications was Excalibur BBS with low-bandwidth applications that required its own client for efficiency. This led to one of the earliest implementations of Electronic Commerce in 1996 with replication of partner stores around the globe. TCP/IP networking allowed most of the remaining BBSes to evolve and include Internet hosting capabilities. Recent BBS software, such as Synchronet, Mystic BBS, EleBBS, DOC, Magpie or Wildcat! BBS, provide access using the Telnet protocol rather than dialup, or by using legacy DOS-based BBS software with a FOSSIL-to-Telnet redirector such as NetFoss. Presentation BBSes were generally text-based, rather than GUI-based, and early BBSes conversed using the simple ASCII character set. However, some home computer manufacturers extended the ASCII character set to take advantage of the advanced color and graphics capabilities of their systems. BBS software authors included these extended character sets in their software, and terminal program authors included the ability to display them when a compatible system was called. Atari's native character set was known as ATASCII, while most Commodore BBSes supported PETSCII. PETSCII was also supported by the nationwide online service Quantum Link. 
The use of these custom character sets was generally incompatible between manufacturers. Unless a caller was using terminal emulation software written for, and running on, the same type of system as the BBS, the session would simply fall back to simple ASCII output. For example, a Commodore 64 user calling an Atari BBS would use ASCII rather than the native character set of either. As time progressed, most terminal programs began using the ASCII standard, but could use their native character set if it was available. COCONET, a BBS system made by Coconut Computing, Inc., was released in 1988 and only supported a GUI (no text interface was initially available but eventually became available around 1990), and worked in EGA/VGA graphics mode, which made it stand out from text-based BBS systems. COCONET's bitmap and vector graphics and support for multiple type fonts were inspired by the PLATO system, and the graphics capabilities were based on what was available in the Borland Graphics Interface library. A competing approach called Remote Imaging Protocol (RIP) emerged and was promoted by Telegrafix in the early to mid-1990s but it never became widespread. A teletext technology called NAPLPS was also considered, and although it became the underlying graphics technology behind the Prodigy service, it never gained popularity in the BBS market. There were several GUI-based BBSes on the Apple Macintosh platform, including TeleFinder and FirstClass, but these were mostly confined to the Mac market. In the UK, the BBC Micro based OBBS software, available from Pace for use with their modems, optionally allowed for color and graphics using the Teletext based graphics mode available on that platform. Other systems used the Viewdata protocols made popular in the UK by British Telecom's Prestel service, and the on-line magazine Micronet 800 whom were busy giving away modems with their subscriptions. Over time, terminal manufacturers started to support ANSI X3.64 in addition to or instead of proprietary terminal control codes, e.g., color, cursor positioning. The most popular form of online graphics was ANSI art, which combined the IBM Extended ASCII character set's blocks and symbols with ANSI escape sequences to allow changing colors on demand, provide cursor control and screen formatting, and even basic musical tones. During the late 1980s and early 1990s, most BBSes used ANSI to make elaborate welcome screens, and colorized menus, and thus, ANSI support was a sought-after feature in terminal client programs. The development of ANSI art became so popular that it spawned an entire BBS "artscene" subculture devoted to it. The Amiga Skyline BBS software in 1988 featured a script markup language communication protocol called Skypix which was capable of giving the user a complete graphical interface, featuring rich graphics, changeable fonts, mouse-controlled actions, animations and sound. Today, most BBS software that is still actively supported, such as Worldgroup, Wildcat! BBS and Citadel/UX, is Web-enabled, and the traditional text interface has been replaced (or operates concurrently) with a Web-based user interface. For those more nostalgic for the true BBS experience, one can use NetSerial (Windows) or DOSBox (Windows/*nix) to redirect DOS COM port software to telnet, allowing them to connect to Telnet BBSes using 1980s and 1990s era modem terminal emulation software, like Telix, Terminate, Qmodem and Procomm Plus. 
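As a small illustration of the escape sequences behind such screens, the following sketch prints a colorized mock "welcome screen" on any ANSI-capable terminal. The menu text is invented for the example; only the escape codes themselves (clear screen, cursor positioning and color changes defined by ANSI X3.64) are standard.

```python
# Illustration of ANSI X3.64 escape sequences of the kind BBS welcome screens used.
ESC = "\x1b"

def ansi(code: str) -> str:
    """Build an ANSI control sequence such as ESC[2J (clear) or ESC[1;33m (color)."""
    return f"{ESC}[{code}"

print(ansi("2J") + ansi("H"), end="")                                   # clear screen, cursor home
print(ansi("1;33m") + "*** Welcome to the Example Board ***" + ansi("0m"))
print(ansi("36m") + "[M]essages  [F]iles  [C]hat  [G]oodbye" + ansi("0m"))
print(ansi("5;1H") + ansi("32m") + "Enter choice: " + ansi("0m"), end="")
```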
Modern 32-bit terminal emulators such as mTelnet and SyncTerm include native telnet support. Content and access Since most early BBSes were run by computer hobbyists, content was largely technical, with user communities revolving around hardware and software discussions. As the BBS phenomenon grew, so did the popularity of special interest boards. Bulletin Board Systems could be found for almost every hobby and interest. Popular interests included politics, religion, music, dating, and alternative lifestyles. Many system operators also adopted a theme in which they customized their entire BBS (welcome screens, prompts, menus, and so on) to reflect that theme. Common themes were based on fantasy, or were intended to give the user the illusion of being somewhere else, such as in a sanatorium, wizard's castle, or on a pirate ship. In the early days, the file download library consisted of files that the system operators obtained themselves from other BBSes and friends. Many BBSes inspected every file uploaded to their public file download library to ensure that the material did not violate copyright law. As time went on, shareware CD-ROMs were sold with up to thousands of files on each CD-ROM. Small BBSes copied each file individually to their hard drive. Some systems used a CD-ROM drive to make the files available. Advanced BBSes used Multiple CD-ROM disc changer units that switched 6 CD-ROM disks on demand for the caller(s). Large systems used all 26 DOS drive letters with multi-disk changers housing tens of thousands of copyright-free shareware or freeware files available to all callers. These BBSes were generally more family-friendly, avoiding the seedier side of BBSes. Access to these systems varied from single to multiple modem lines with some requiring little or no confirmed registration. Some BBSes, called elite, WaReZ, or pirate boards, were exclusively used for distributing cracked software, phreaking materials, and other questionable or unlawful content. These BBSes often had multiple modems and phone lines, allowing several users to upload and download files at once. Most elite BBSes used some form of new user verification, where new users would have to apply for membership and attempt to prove that they were not a law enforcement officer or a lamer. The largest elite boards accepted users by invitation only. Elite boards also spawned their own subculture and gave rise to the slang known today as leetspeak. Another common type of board was the support BBS run by a manufacturer of computer products or software. These boards were dedicated to supporting users of the company's products with question and answer forums, news and updates, and downloads. Most of them were not a free call. Today, these services have moved to the Web. Some general-purpose Bulletin Board Systems had special levels of access that were given to those who paid extra money, uploaded useful files or knew the system operator personally. These specialty and pay BBSes usually had something unique to offer their users, such as large file libraries, warez, pornography, chat rooms or Internet access. Pay BBSes such as The WELL and Echo NYC (now Internet forums rather than dial-up), ExecPC, PsudNetwork and MindVox (which folded in 1996) were admired for their close, friendly communities and quality discussion forums. However, many free BBSes also maintained close communities, and some even had annual or bi-annual events where users would travel great distances to meet face-to-face with their on-line friends. 
These events were especially popular with BBSes that offered chat rooms. Some of the BBSes that provided access to illegal content faced opposition. On July 12, 1985, in conjunction with a credit card fraud investigation, the Middlesex County, New Jersey Sheriff's department raided and seized The Private Sector BBS, which was the official BBS for grey hat hacker quarterly 2600 Magazine at the time. The notorious Rusty n Edie's BBS, in Boardman, Ohio, was raided by the FBI in January 1993 for trading unlicensed software, and later sued by Playboy for copyright infringement in November 1997. In Flint, Michigan, a 21-year-old man was charged with distributing child pornography through his BBS in March 1996. Networks Most early BBSes operated as individual systems. Information contained on that BBS never left the system, and users would only interact with the information and user community on that BBS alone. However, as BBSes became more widespread, there evolved a desire to connect systems together to share messages and files with distant systems and users. The largest such network was FidoNet. As is it was prohibitively expensive for the hobbyist system operator to have a dedicated connection to another system, FidoNet was developed as a store and forward network. Private email (Netmail), public message boards (Echomail) and eventually even file attachments on a FidoNet-capable BBS would be bundled into one or more archive files over a set time interval. These archive files were then compressed with ARC or ZIP and forwarded to (or polled by) another nearby node or hub via a dialup Xmodem session. Messages would be relayed around various FidoNet hubs until they were eventually delivered to their destination. The hierarchy of FidoNet BBS nodes, hubs, and zones was maintained in a routing table called a Nodelist. Some larger BBSes or regional FidoNet hubs would make several transfers per day, some even to multiple nodes or hubs, and as such, transfers usually occurred at night or in the early morning when toll rates were lowest. In Fido's heyday, sending a Netmail message to a user on a distant FidoNet node, or participating in an Echomail discussion could take days, especially if any FidoNet nodes or hubs in the message's route only made one transfer call per day. FidoNet was platform-independent and would work with any BBS that was written to use it. BBSes that did not have integrated FidoNet capability could usually add it using an external FidoNet front-end mailer such as SEAdog, FrontDoor, BinkleyTerm, InterMail or D'Bridge, and a mail processor such as FastEcho or Squish. The front-end mailer would conduct the periodic FidoNet transfers, while the mail processor would usually run just before and just after the mailer ran. This program would scan for and pack up new outgoing messages, and then unpack, sort and "toss" the incoming messages into a BBS user's local email box or into the BBS's local message bases reserved for Echomail. As such, these mail processors were commonly called "scanner/tosser/packers". Many other BBS networks followed the example of FidoNet, using the same standards and the same software. These were called FidoNet Technology Networks (FTNs). They were usually smaller and targeted at selected audiences. Some networks used QWK doors, and others such as RelayNet (RIME) and WWIVnet used non-Fido software and standards. Before commercial Internet access became common, these networks of BBSes provided regional and international e-mail and message bases. 
Some even provided gateways, such as UFGATE, by which members could send and receive e-mail to and from the Internet via UUCP, and many FidoNet discussion groups were shared via gateway to Usenet. Elaborate schemes allowed users to download binary files, search gopherspace, and interact with distant programs, all using plain-text e-mail. As the volume of FidoNet Mail increased and newsgroups from the early days of the Internet became available, satellite data downstream services became viable for larger systems. The satellite service provided access to FidoNet and Usenet newsgroups in large volumes at a reasonable fee. By connecting a small dish and receiver, a constant downstream of thousands of FidoNet and Usenet newsgroups could be received. The local BBS only needed to upload new outgoing messages via the modem network back to the satellite service. This method drastically reduced phone data transfers while dramatically increasing the number of message forums. FidoNet is still in use today, though in a much smaller form, and many Echomail groups are still shared with Usenet via FidoNet to Usenet gateways. Widespread abuse of Usenet with spam and pornography has led many of these FidoNet gateways to cease operation completely.

Shareware and freeware
Much of the shareware movement was started via user distribution of software through BBSes. A notable example was Phil Katz's PKARC (and later PKZIP, using the same ".zip" algorithm that WinZip and other popular archivers now use); other concepts of software distribution, such as freeware, postcardware like JPEGview, and donationware like Red Ryder for the Macintosh, also first appeared on BBS sites. Doom from id Software and nearly all Apogee Software games were distributed as shareware. The Internet has largely erased the distinction of shareware: most users now download the software directly from the developer's website rather than receiving it from another BBS user "sharing" it. Today, shareware often refers to electronically distributed software from a small developer. Many commercial BBS software companies that continue to support their old BBS software products switched to the shareware model or made it entirely free. Some companies were able to make the move to the Internet and provide commercial products with BBS capabilities.

Features
A classic BBS had:
A computer
One or more modems
One or more phone lines, with more allowing for increased concurrent users
A BBS software package
A sysop – system operator
A user community

The BBS software usually provides:
Menu systems
One or more message bases
Uploading and downloading of message packets in QWK format using XMODEM, YMODEM or ZMODEM
File areas
Live viewing of all caller activity by the system operator
Voting – opinion booths
Statistics on message posters, top uploaders / downloaders
Online games (usually single player or only a single active player at a given time)
A doorway to third-party online games
Usage auditing capabilities
Multi-user chat (only possible on multi-line BBSes)
Internet email (more common in later Internet-connected BBSes)
Networked message boards
A "yell for SysOp" page, a caller-side menu item that sounded an audible alarm to the system operator; if chosen, the system operator could then initiate a text-to-text chat with the caller
Primitive social networking features, such as leaving messages on a user's profile

Most modern BBSes allow telnet access over the Internet using a telnet server and a virtual FOSSIL driver.
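As an example of what such telnet access looks like from the caller's side, the following sketch opens a plain TCP connection to a telnet-accessible BBS and prints its login banner. The host name is a placeholder, and the sketch ignores telnet option negotiation (the IAC byte sequences a full client would answer), so it is a minimal illustration rather than a complete client.

```python
# Minimal sketch: fetch the login banner of a telnet-accessible BBS.
# The host below is a placeholder; real telnet clients also answer IAC option
# negotiation bytes (0xFF ...), which this sketch simply leaves in the output.
import socket

def fetch_banner(host: str = "bbs.example.org", port: int = 23, timeout: float = 5.0) -> str:
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        chunks = []
        try:
            for _ in range(4):                 # read a few chunks of the banner
                data = sock.recv(1024)
                if not data:
                    break
                chunks.append(data)
        except socket.timeout:
            pass                               # server is now waiting for input
    raw = b"".join(chunks)
    return raw.decode("cp437", errors="replace")  # many BBSes still send CP437/ANSI text

if __name__ == "__main__":
    print(fetch_banner())
```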
Bomber
A bomber is a military combat aircraft that utilizes air-to-ground weaponry to drop bombs, launch torpedoes, or deploy air-launched cruise missiles. Bombs were first dropped from an aircraft during the Italo-Turkish War, with the first major deployments coming in the First World War and Second World War by all major airforces, damaging cities, towns, and rural areas. The first bomber planes in history were the Italian Caproni Ca 30 and British Bristol T.B.8, both of 1913. Some bombers were decorated with nose art or victory markings. There are two major classifications of bomber: strategic and tactical. Strategic bombing is done by heavy bombers primarily designed for long-range bombing missions against strategic targets to diminish the enemy's ability to wage war by limiting access to resources through crippling infrastructure, reducing industrial output, or inflicting massive civilian casualties to an extent deemed to force surrender. Tactical bombing is aimed at countering enemy military activity and in supporting offensive operations, and is typically assigned to smaller aircraft operating at shorter ranges, typically near the troops on the ground or against enemy shipping. During WWII with engine power as a major limitation, combined with the desire for accuracy and other operational factors, bomber designs tended to be tailored to specific roles. Early in the Cold War however, bombers were the only means of carrying nuclear weapons to enemy targets, and held the role of deterrence. With the advent of guided air-to-air missiles, bombers needed to avoid interception. High-speed and high-altitude flying became a means of evading detection and attack. With the advent of ICBMs the role of the bomber was brought to a more tactical focus in close air support roles, and a focus on stealth technology for strategic bombers. Classification Strategic Strategic bombing is done by heavy bombers primarily designed for long-range bombing missions against strategic targets such as supply bases, bridges, factories, shipyards, and cities themselves, to diminish the enemy's ability to wage war by limiting access to resources through crippling infrastructure or reducing industrial output. Current examples include the strategic nuclear-armed bombers: B-2 Spirit, B-52 Stratofortress, Tupolev Tu-95 'Bear', Tupolev Tu-22M 'Backfire' and Tupolev Tu-160 "Blackjack"; historically notable examples are the: Gotha G.IV, Avro Lancaster, Heinkel He 111, Junkers Ju 88, Boeing B-17 Flying Fortress, Consolidated B-24 Liberator, Boeing B-29 Superfortress, and Tupolev Tu-16 'Badger'. Tactical Tactical bombing, aimed at countering enemy military activity and in supporting offensive operations, is typically assigned to smaller aircraft operating at shorter ranges, typically near the troops on the ground or against enemy shipping. This role is filled by tactical bomber class, which crosses and blurs with various other aircraft categories: light bombers, medium bombers, dive bombers, interdictors, fighter-bombers, attack aircraft, multirole combat aircraft, and others. Current examples: Xian JH-7, Dassault Mirage 2000D, and the Panavia Tornado IDS Historical examples: Ilyushin Il-2 Shturmovik, Junkers Ju 87 Stuka, Republic P-47 Thunderbolt, Hawker Typhoon and Mikoyan MiG-27. 
History The first use of an air-dropped bomb (actually four hand grenades specially manufactured by the Italian naval arsenal) was carried out by Italian Second Lieutenant Giulio Gavotti on 1 November 1911 during the Italo-Turkish War in Libya – although his plane was not designed for the task of bombing, and his improvised attacks on Ottoman positions had little impact. These picric acid-filled steel spheres were nicknamed "ballerinas" for the fluttering fabric ribbons attached to them. Early bombers On 16 October 1912, Bulgarian observer Prodan Tarakchiev dropped two of those bombs on the Turkish railway station of Karağaç (near the besieged Edirne) from an Albatros F.2 aircraft piloted by Radul Milkov, during the First Balkan War. This is deemed to be the first use of an aircraft as a bomber. The first heavier-than-air aircraft purposely designed for bombing were the Italian Caproni Ca 30 and British Bristol T.B.8, both of 1913. The Bristol T.B.8 was an early British single-engined biplane built by the Bristol Aeroplane Company. They were fitted with a prismatic bombsight in the front cockpit and a cylindrical bomb carrier in the lower forward fuselage capable of carrying twelve 10 lb (4.5 kg) bombs, which could be dropped singly or as a salvo as required. The aircraft was purchased for use both by the Royal Naval Air Service and the Royal Flying Corps (RFC), and three T.B.8s that had been displayed in Paris during December 1913, fitted with bombing equipment, were sent to France following the outbreak of war. Under the command of Charles Rumney Samson, a bombing attack on German gun batteries at Middelkerke, Belgium, was executed on 25 November 1914. The dirigible, or airship, was developed in the early 20th century. Early airships were prone to disaster, but slowly the airship became more dependable, with a more rigid structure and stronger skin. Prior to the outbreak of war, Zeppelins, a larger and more streamlined form of airship designed by German Count Ferdinand von Zeppelin, were outfitted to carry bombs to attack targets at long range. These were the first long-range, strategic bombers. Although the German air arm was strong, with a total of 123 airships by the end of the war, they were vulnerable to attack and engine failure, as well as navigational issues. Across a total of 51 raids, German airships inflicted relatively little damage, with 557 Britons killed and 1,358 injured. The German Navy lost 53 of its 73 airships, and the German Army lost 26 of its 50 ships. The Caproni Ca 30 was built by Gianni Caproni in Italy. It was a twin-boom biplane with three 67 kW (80 hp) Gnome rotary engines and first flew in October 1914. Test flights revealed power to be insufficient and the engine layout unworkable, and Caproni soon adopted a more conventional approach, installing three 81 kW (110 hp) Fiat A.10s. The improved design was bought by the Italian Army and it was delivered in quantity from August 1915. While mainly used as a trainer, Avro 504s were also briefly used as bombers at the start of the First World War by the Royal Naval Air Service (RNAS) when they were used for raids on the German airship sheds. Strategic bombing Bombing raids and interdiction operations were mainly carried out by French and British forces during the war, as the German air arm was forced to concentrate its resources on a defensive strategy.
Notably, bombing campaigns formed a part of the British offensive at the Battle of Neuve Chapelle in 1915, with Royal Flying Corps squadrons attacking German railway stations in an attempt to hinder the logistical supply of the German army. The early, improvised attempts at bombing that characterized the early part of the war slowly gave way to a more organized and systematic approach to strategic and tactical bombing, pioneered by various air power strategists of the Entente, especially Major Hugh Trenchard; he was the first to advocate that there should be "... sustained [strategic bombing] attacks with a view to interrupting the enemy's railway communications ... in conjunction with the main operations of the Allied Armies." When the war started, bombing was very crude (hand-held bombs were thrown over the side), yet by the end of the war long-range bombers equipped with complex mechanical bombing computers were being built, designed to carry large loads to destroy enemy industrial targets. The most important bombers used in World War I were the French Breguet 14, British de Havilland DH-4, German Albatros C.III and Russian Sikorsky Ilya Muromets. The Sikorsky Ilya Muromets was the first four-engine bomber to equip a dedicated strategic bombing unit during World War I. This heavy bomber was unrivaled in the early stages of the war, as the Central Powers had no comparable aircraft until much later. Long-range bombing raids were carried out at night by multi-engine biplanes such as the Gotha G.IV (whose name was synonymous with all multi-engine German bombers) and later the Handley Page Type O; the majority of bombing was done by single-engined biplanes with one or two crew members flying short distances to attack enemy lines and the immediate hinterland. As the effectiveness of a bomber was dependent on the weight and accuracy of its bomb load, ever larger bombers were developed starting in World War I, while considerable money was spent developing suitable bombsights. World War II With engine power as a major limitation, combined with the desire for accuracy and other operational factors, bomber designs tended to be tailored to specific roles. By the start of the war these included: the dive bomber, specially strengthened for vertical diving attacks for greater accuracy; the light bomber, medium bomber and heavy bomber, subjective definitions based on size and/or payload capacity; the torpedo bomber, a specialized aircraft armed with torpedoes; the ground attack aircraft, used against targets on a battlefield such as troop or tank concentrations; the night bomber, specially equipped to operate at night when opposing defences are limited; the maritime patrol aircraft, a long-range bomber used against enemy shipping, particularly submarines; and the fighter-bomber, a modified fighter aircraft used as a light bomber. Bombers of this era were not intended to attack other aircraft, although most were fitted with defensive weapons. World War II saw the beginning of the widespread use of high-speed bombers, which began to minimize defensive weaponry in order to attain higher speed. Some smaller designs were used as the basis for night fighters. A number of fighters, such as the Hawker Hurricane, were used as ground attack aircraft, replacing earlier conventional light bombers that proved unable to defend themselves while carrying a useful bomb load.
Cold War At the start of the Cold War, bombers were the only means of carrying nuclear weapons to enemy targets, and had the role of deterrence. With the advent of guided air-to-air missiles, bombers needed to avoid interception. High-speed and high-altitude flying became a means of evading detection and attack. Designs such as the English Electric Canberra could fly faster or higher than contemporary fighters. When surface-to-air missiles became capable of hitting high-flying bombers, bombers were flown at low altitudes to evade radar detection and interception. Once "stand off" nuclear weapon designs were developed, bombers did not need to pass over the target to make an attack; they could fire and turn away to escape the blast. Nuclear strike aircraft were generally finished in bare metal or anti-flash white to minimize absorption of thermal radiation from the flash of a nuclear explosion. The need to drop conventional bombs remained in conflicts with non-nuclear powers, such as the Vietnam War or Malayan Emergency. The development of large strategic bombers stagnated in the later part of the Cold War because of spiraling costs and the development of the intercontinental ballistic missile (ICBM), which was felt to have similar deterrent value while being impossible to intercept. Because of this, the United States Air Force XB-70 Valkyrie program was cancelled in the early 1960s; the later B-1B Lancer and B-2 Spirit aircraft entered service only after protracted political and development problems. Their high cost meant that few were built and the 1950s-designed B-52s are projected to remain in use until the 2040s. Similarly, the Soviet Union used the intermediate-range Tu-22M 'Backfire' in the 1970s, but their Mach 3 bomber project stalled. The Mach 2 Tu-160 'Blackjack' was built only in tiny numbers, leaving the 1950s Tupolev Tu-16 and Tu-95 'Bear' heavy bombers to continue being used into the 21st century. The British strategic bombing force largely came to an end when the V bomber force was phased out, the last of which left service in 1983. The French Mirage IV bomber version was retired in 1996, although the Mirage 2000N and the Rafale have taken on this role. The only other nation that fields strategic bombing forces is China, which has a number of Xian H-6s. Modern era Currently, only the United States Air Force, the Russian Aerospace Forces' Long-Range Aviation command, and China's People's Liberation Army Air Force operate strategic heavy bombers. Other air forces have transitioned away from dedicated bombers in favor of multirole combat aircraft. At present, these air forces are each developing stealth replacements for their legacy bomber fleets: the USAF with the Northrop Grumman B-21, the Russian Aerospace Forces with the PAK DA, and the PLAAF with the Xian H-20. The B-21 is expected to enter service by 2026–2027. The B-21 would be capable of loitering near target areas for extended periods of time. Other uses Occasionally, military aircraft have been used to bomb ice jams with limited success as part of an effort to clear them. In 2018, the Swedish Air Force dropped bombs on a forest fire, snuffing out flames with the aid of the blast waves. The fires had been raging in an area contaminated with unexploded ordnance, rendering them difficult for firefighters to extinguish.
Technology
Military aviation
null
3717
https://en.wikipedia.org/wiki/Brain
Brain
The brain is an organ that serves as the center of the nervous system in all vertebrate and most invertebrate animals. It consists of nervous tissue and is typically located in the head (cephalization), usually near organs for special senses such as vision, hearing and olfaction. Being the most specialized organ, it is responsible for receiving information from the sensory nervous system, processing that information (thought, cognition, and intelligence) and coordinating motor control (muscle activity and the endocrine system). While invertebrate brains arise from paired segmental ganglia (each of which is only responsible for the respective body segment) of the ventral nerve cord, vertebrate brains develop axially from the midline dorsal nerve cord as a vesicular enlargement at the rostral end of the neural tube, with centralized control over all body segments. All vertebrate brains can be embryonically divided into three parts: the forebrain (prosencephalon, subdivided into telencephalon and diencephalon), midbrain (mesencephalon) and hindbrain (rhombencephalon, subdivided into metencephalon and myelencephalon). The spinal cord, which directly interacts with somatic functions below the head, can be considered a caudal extension of the myelencephalon enclosed inside the vertebral column. Together, the brain and spinal cord constitute the central nervous system in all vertebrates. In humans, the cerebral cortex contains approximately 14–16 billion neurons, and the estimated number of neurons in the cerebellum is 55–70 billion. Each neuron is connected by synapses to several thousand other neurons, typically communicating with one another via cytoplasmic processes known as dendrites and axons. Axons are usually myelinated and carry trains of rapid micro-electric signal pulses called action potentials to target specific recipient cells in other areas of the brain or distant parts of the body. The prefrontal cortex, which controls executive functions, is particularly well developed in humans. Physiologically, brains exert centralized control over a body's other organs. They act on the rest of the body both by generating patterns of muscle activity and by driving the secretion of chemicals called hormones. This centralized control allows rapid and coordinated responses to changes in the environment. Some basic types of responsiveness such as reflexes can be mediated by the spinal cord or peripheral ganglia, but sophisticated purposeful control of behavior based on complex sensory input requires the information-integrating capabilities of a centralized brain. The operations of individual brain cells are now understood in considerable detail, but the way they cooperate in ensembles of millions is yet to be solved. Recent models in modern neuroscience treat the brain as a biological computer, very different in mechanism from a digital computer, but similar in the sense that it acquires information from the surrounding world, stores it, and processes it in a variety of ways. This article compares the properties of brains across the entire range of animal species, with the greatest attention to vertebrates. It deals with the human brain insofar as it shares the properties of other brains. The ways in which the human brain differs from other brains are covered in the human brain article. Several topics that might be covered here are instead covered there because much more can be said about them in a human context.
The most important topics covered in the human brain article are brain disease and the effects of brain damage. Structure The shape and size of the brain vary greatly between species, and identifying common features is often difficult. Nevertheless, there are a number of principles of brain architecture that apply across a wide range of species. Some aspects of brain structure are common to almost the entire range of animal species; others distinguish "advanced" brains from more primitive ones, or distinguish vertebrates from invertebrates. The simplest way to gain information about brain anatomy is by visual inspection, but many more sophisticated techniques have been developed. Brain tissue in its natural state is too soft to work with, but it can be hardened by immersion in alcohol or other fixatives, and then sliced apart for examination of the interior. Visually, the interior of the brain consists of areas of so-called grey matter, with a dark color, separated by areas of white matter, with a lighter color. Further information can be gained by staining slices of brain tissue with a variety of chemicals that bring out areas where specific types of molecules are present in high concentrations. It is also possible to examine the microstructure of brain tissue using a microscope, and to trace the pattern of connections from one brain area to another. Cellular structure The brains of all species are composed primarily of two broad classes of brain cells: neurons and glial cells. Glial cells (also known as glia or neuroglia) come in several types, and perform a number of critical functions, including structural support, metabolic support, insulation, and guidance of development. Neurons, however, are usually considered the most important cells in the brain. In humans, the cerebral cortex contains approximately 14–16 billion neurons, and the estimated number of neurons in the cerebellum is 55–70 billion. Each neuron is connected by synapses to several thousand other neurons. The property that makes neurons unique is their ability to send signals to specific target cells, sometimes over long distances. They send these signals by means of an axon, which is a thin protoplasmic fiber that extends from the cell body and projects, usually with numerous branches, to other areas, sometimes nearby, sometimes in distant parts of the brain or body. The length of an axon can be extraordinary: for example, if a pyramidal cell (an excitatory neuron) of the cerebral cortex were magnified so that its cell body became the size of a human body, its axon, equally magnified, would become a cable a few centimeters in diameter, extending more than a kilometer. These axons transmit signals in the form of electrochemical pulses called action potentials, which last less than a thousandth of a second and travel along the axon at speeds of 1–100 meters per second. Some neurons emit action potentials constantly, at rates of 10–100 per second, usually in irregular patterns; other neurons are quiet most of the time, but occasionally emit a burst of action potentials. Axons transmit signals to other neurons by means of specialized junctions called synapses. A single axon may make as many as several thousand synaptic connections with other cells. When an action potential, traveling along an axon, arrives at a synapse, it causes a chemical called a neurotransmitter to be released. The neurotransmitter binds to receptor molecules in the membrane of the target cell. Synapses are the key functional elements of the brain.
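The signaling cycle just described (synaptic input depolarizing a neuron until it fires an action potential, followed by a reset) is often caricatured with a leaky integrate-and-fire model. The sketch below is a deliberately simplified textbook abstraction rather than a model of any particular neuron, and all of its constants are illustrative.

```python
import random

# Leaky integrate-and-fire caricature: synaptic input pushes the membrane
# potential up, the leak pulls it back toward rest, and crossing threshold
# produces a "spike" followed by a reset. Constants are illustrative only.
V_REST = -70.0       # resting membrane potential, mV
V_THRESHOLD = -54.0  # firing threshold, mV
V_RESET = -80.0      # potential right after a spike, mV
TAU = 20.0           # membrane time constant, ms
DT = 1.0             # simulation step, ms

def noisy_drive(t):
    """Stand-in for many upstream excitatory synapses being active."""
    return 18.0 + random.uniform(-5.0, 5.0)

def simulate(input_current, steps=200):
    """Integrate input; record a spike time whenever threshold is crossed."""
    v = V_REST
    spike_times = []
    for t in range(steps):
        dv = (-(v - V_REST) + input_current(t)) / TAU * DT
        v += dv
        if v >= V_THRESHOLD:
            spike_times.append(t)
            v = V_RESET   # the reset stands in for the refractory period
    return spike_times

print("spike times (ms):", simulate(noisy_drive))
```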
The essential function of the brain is cell-to-cell communication, and synapses are the points at which communication occurs. The human brain has been estimated to contain approximately 100 trillion synapses; even the brain of a fruit fly contains several million. The functions of these synapses are very diverse: some are excitatory (exciting the target cell); others are inhibitory; others work by activating second messenger systems that change the internal chemistry of their target cells in complex ways. A large number of synapses are dynamically modifiable; that is, they are capable of changing strength in a way that is controlled by the patterns of signals that pass through them. It is widely believed that activity-dependent modification of synapses is the brain's primary mechanism for learning and memory. Most of the space in the brain is taken up by axons, which are often bundled together in what are called nerve fiber tracts. A myelinated axon is wrapped in a fatty insulating sheath of myelin, which serves to greatly increase the speed of signal propagation. (There are also unmyelinated axons). Myelin is white, making parts of the brain filled exclusively with nerve fibers appear as light-colored white matter, in contrast to the darker-colored grey matter that marks areas with high densities of neuron cell bodies. Evolution Generic bilaterian nervous system Except for a few primitive organisms such as sponges (which have no nervous system) and cnidarians (which have a diffuse nervous system consisting of a nerve net), all living multicellular animals are bilaterians, meaning animals with a bilaterally symmetric body plan (that is, left and right sides that are approximate mirror images of each other). All bilaterians are thought to have descended from a common ancestor that appeared late in the Cryogenian period, 700–650 million years ago, and it has been hypothesized that this common ancestor had the shape of a simple tubeworm with a segmented body. At a schematic level, that basic worm-shape continues to be reflected in the body and nervous system architecture of all modern bilaterians, including vertebrates. The fundamental bilateral body form is a tube with a hollow gut cavity running from the mouth to the anus, and a nerve cord with an enlargement (a ganglion) for each body segment, with an especially large ganglion at the front, called the brain. The brain is small and simple in some species, such as nematode worms; in other species, such as vertebrates, it is a large and very complex organ. Some types of worms, such as leeches, also have an enlarged ganglion at the back end of the nerve cord, known as a "tail brain". There are a few types of existing bilaterians that lack a recognizable brain, including echinoderms and tunicates. It has not been definitively established whether the existence of these brainless species indicates that the earliest bilaterians lacked a brain, or whether their ancestors evolved in a way that led to the disappearance of a previously existing brain structure. Invertebrates This category includes tardigrades, arthropods, molluscs, and numerous types of worms. The diversity of invertebrate body plans is matched by an equal diversity in brain structures. Two groups of invertebrates have notably complex brains: arthropods (insects, crustaceans, arachnids, and others), and cephalopods (octopuses, squids, and similar molluscs). The brains of arthropods and cephalopods arise from twin parallel nerve cords that extend through the body of the animal. 
Arthropods have a central brain, the supraesophageal ganglion, with three divisions and large optical lobes behind each eye for visual processing. Cephalopods such as the octopus and squid have the largest brains of any invertebrates. There are several invertebrate species whose brains have been studied intensively because they have properties that make them convenient for experimental work: Fruit flies (Drosophila), because of the large array of techniques available for studying their genetics, have been a natural subject for studying the role of genes in brain development. In spite of the large evolutionary distance between insects and mammals, many aspects of Drosophila neurogenetics have been shown to be relevant to humans. The first biological clock genes, for example, were identified by examining Drosophila mutants that showed disrupted daily activity cycles. A search in the genomes of vertebrates revealed a set of analogous genes, which were found to play similar roles in the mouse biological clock—and therefore almost certainly in the human biological clock as well. Studies done on Drosophila also show that most neuropil regions of the brain are continuously reorganized throughout life in response to specific living conditions. The nematode worm Caenorhabditis elegans, like Drosophila, has been studied largely because of its importance in genetics. In the early 1970s, Sydney Brenner chose it as a model organism for studying the way that genes control development. One of the advantages of working with this worm is that the body plan is very stereotyped: the nervous system of the hermaphrodite contains exactly 302 neurons, always in the same places, making identical synaptic connections in every worm. Brenner's team sliced worms into thousands of ultrathin sections and photographed each one under an electron microscope, then visually matched fibers from section to section, to map out every neuron and synapse in the entire body. The complete neuronal wiring diagram of C. elegans, its connectome, was thus achieved. Nothing approaching this level of detail is available for any other organism, and the information gained has enabled a multitude of studies that would otherwise not have been possible. The sea slug Aplysia californica was chosen by Nobel Prize-winning neurophysiologist Eric Kandel as a model for studying the cellular basis of learning and memory, because of the simplicity and accessibility of its nervous system, and it has been examined in hundreds of experiments. Vertebrates The first vertebrates appeared over 500 million years ago (Mya) during the Cambrian period, and may have resembled the modern jawless fish (hagfish and lamprey) in form. Jawed vertebrates appeared by 445 Mya, tetrapods by 350 Mya, amniotes by 310 Mya and mammaliaforms by 200 Mya (approximately). Each vertebrate clade has an equally long evolutionary history, but the brains of modern fish, amphibians, reptiles, birds and mammals show a gradient of size and complexity that roughly follows the evolutionary sequence. All of these brains contain the same set of basic anatomical structures, but many are rudimentary in the hagfish, whereas in mammals the foremost part (forebrain, especially the telencephalon) is greatly developed and expanded. Brains are most commonly compared in terms of their mass. The relationship between brain size, body size and other variables has been studied across a wide range of vertebrate species.
As a rule of thumb, brain size increases with body size, but not in a simple linear proportion. In general, smaller animals tend to have proportionally larger brains, measured as a fraction of body size. For mammals, the relationship between brain volume and body mass essentially follows a power law with an exponent of about 0.75. This formula describes the central tendency, but every family of mammals departs from it to some degree, in a way that reflects in part the complexity of their behavior. For example, primates have brains 5 to 10 times larger than the formula predicts. Predators, who have to implement various hunting strategies against the ever changing anti-predator adaptations, tend to have larger brains relative to body size than their prey. All vertebrate brains share a common underlying form, which appears most clearly during early stages of embryonic development. In its earliest form, the brain appears as three vesicular swellings at the front end of the neural tube; these swellings eventually become the forebrain (prosencephalon), midbrain (mesencephalon) and hindbrain (rhombencephalon), respectively. At the earliest stages of brain development, the three areas are roughly equal in size. In many aquatic/semiaquatic vertebrates such as fish and amphibians, the three parts remain similar in size in adults, but in terrestrial tetrapods such as mammals, the forebrain becomes much larger than the other parts, the hindbrain develops a bulky dorsal extension known as the cerebellum, and the midbrain becomes very small as a result. The brains of vertebrates are made of very soft tissue. Living brain tissue is pinkish on the outside and mostly white on the inside, with subtle variations in color. Vertebrate brains are surrounded by a system of connective tissue membranes called meninges, which separate the skull from the brain. Cerebral arteries pierce the outer two layers of the meninges, the dura and arachnoid mater, into the subarachnoid space and perfuse the brain parenchyma via arterioles perforating into the innermost layer of the meninges, the pia mater. The endothelial cells in the cerebral blood vessel walls are joined tightly to one another, forming the blood–brain barrier, which blocks the passage of many toxins and pathogens (though at the same time blocking antibodies and some drugs, thereby presenting special challenges in treatment of diseases of the brain). As a result of the osmotic restriction by the blood-brain barrier, the metabolites within the brain are cleared mostly by bulk flow of the cerebrospinal fluid within the glymphatic system instead of via venules like other parts of the body. Neuroanatomists usually divide the vertebrate brain into six main subregions: the telencephalon (the cerebral hemispheres), diencephalon (thalamus and hypothalamus), mesencephalon (midbrain), cerebellum, pons and medulla oblongata, with the midbrain, pons and medulla often collectively called the brainstem. Each of these areas has a complex internal structure. Some parts, such as the cerebral cortex and the cerebellar cortex, are folded into convoluted gyri and sulci in order to maximize surface area within the available intracranial space. Other parts, such as the thalamus and hypothalamus, consist of many small clusters of nuclei known as "ganglia". Thousands of distinguishable areas can be identified within the vertebrate brain based on fine distinctions of neural structure, chemistry, and connectivity. 
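Returning briefly to the scaling trend noted at the start of this section, the rough power-law relationship between brain mass and body mass can be made concrete with a small calculation. The proportionality constant below is an illustrative value chosen only so the toy numbers land in a plausible range; real fits vary by taxon and dataset.

```python
# Illustrative allometric scaling: expected brain mass grows roughly as
# body mass to the 0.75 power. K is NOT a measured constant; it is chosen
# here only to make the trend visible in familiar units.
K = 0.008        # illustrative constant, kilogram basis
EXPONENT = 0.75

def expected_brain_mass_kg(body_mass_kg):
    return K * body_mass_kg ** EXPONENT

for label, body in [("mouse-sized", 0.02), ("cat-sized", 4.0),
                    ("human-sized", 65.0), ("elephant-sized", 4000.0)]:
    brain = expected_brain_mass_kg(body)
    share = 100.0 * brain / body
    print(f"{label:15s} body {body:8.2f} kg -> brain {brain * 1000:8.1f} g "
          f"({share:.2f}% of body mass)")
```

The shrinking percentage in the last column illustrates the point made above: under such a law, smaller animals devote a proportionally larger fraction of their body mass to brain.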
Although the same basic components are present in all vertebrate brains, some branches of vertebrate evolution have led to substantial distortions of brain geometry, especially in the forebrain area. The brain of a shark shows the basic components in a straightforward way, but in teleost fishes (the great majority of existing fish species), the forebrain has become "everted", like a sock turned inside out. In birds, there are also major changes in forebrain structure. These distortions can make it difficult to match brain components from one species with those of another species. Here is a list of some of the most important vertebrate brain components, along with a brief description of their functions as currently understood: The medulla, along with the spinal cord, contains many small nuclei involved in a wide variety of sensory and involuntary motor functions such as vomiting, heart rate and digestive processes. The pons lies in the brainstem directly above the medulla. Among other things, it contains nuclei that control often voluntary but simple acts such as sleep, respiration, swallowing, bladder function, equilibrium, eye movement, facial expressions, and posture. The hypothalamus is a small region at the base of the forebrain, whose complexity and importance belies its size. It is composed of numerous small nuclei, each with distinct connections and neurochemistry. The hypothalamus is engaged in additional involuntary or partially voluntary acts such as sleep and wake cycles, eating and drinking, and the release of some hormones. The thalamus is a collection of nuclei with diverse functions: some are involved in relaying information to and from the cerebral hemispheres, while others are involved in motivation. The subthalamic area (zona incerta) seems to contain action-generating systems for several types of "consummatory" behaviors such as eating, drinking, defecation, and copulation. The cerebellum modulates the outputs of other brain systems, whether motor-related or thought related, to make them certain and precise. Removal of the cerebellum does not prevent an animal from doing anything in particular, but it makes actions hesitant and clumsy. This precision is not built-in but learned by trial and error. The muscle coordination learned while riding a bicycle is an example of a type of neural plasticity that may take place largely within the cerebellum. 10% of the brain's total volume consists of the cerebellum and 50% of all neurons are held within its structure. The optic tectum allows actions to be directed toward points in space, most commonly in response to visual input. In mammals, it is usually referred to as the superior colliculus, and its best-studied function is to direct eye movements. It also directs reaching movements and other object-directed actions. It receives strong visual inputs, but also inputs from other senses that are useful in directing actions, such as auditory input in owls and input from the thermosensitive pit organs in snakes. In some primitive fishes, such as lampreys, this region is the largest part of the brain. The superior colliculus is part of the midbrain. The pallium is a layer of grey matter that lies on the surface of the forebrain and is the most complex and most recent evolutionary development of the brain as an organ. In reptiles and mammals, it is called the cerebral cortex. Multiple functions involve the pallium, including smell and spatial memory. 
In mammals, where it becomes so large as to dominate the brain, it takes over functions from many other brain areas. In many mammals, the cerebral cortex consists of folded bulges called gyri that create deep furrows or fissures called sulci. The folds increase the surface area of the cortex and therefore increase the amount of gray matter and the amount of information that can be stored and processed. The hippocampus, strictly speaking, is found only in mammals. However, the area it derives from, the medial pallium, has counterparts in all vertebrates. There is evidence that this part of the brain is involved in complex events such as spatial memory and navigation in fishes, birds, reptiles, and mammals. The basal ganglia are a group of interconnected structures in the forebrain. The primary function of the basal ganglia appears to be action selection: they send inhibitory signals to all parts of the brain that can generate motor behaviors, and in the right circumstances can release the inhibition, so that the action-generating systems are able to execute their actions. Reward and punishment exert their most important neural effects by altering connections within the basal ganglia. The olfactory bulb is a special structure that processes olfactory sensory signals and sends its output to the olfactory part of the pallium. It is a major brain component in many vertebrates, but is greatly reduced in humans and other primates (whose senses are dominated by information acquired by sight rather than smell). Reptiles Modern reptiles and mammals diverged from a common ancestor around 320 million years ago. The number of extant reptiles far exceeds the number of mammalian species, with 11,733 recognized species of reptiles compared to 5,884 extant mammals. Along with the species diversity, reptiles have diverged in terms of external morphology, from limbless to tetrapod gliders to armored chelonians, reflecting adaptive radiation to a diverse array of environments. Morphological differences are reflected in the nervous system phenotype, such as: absence of lateral motor column neurons in snakes, which innervate limb muscles controlling limb movements; absence of motor neurons that innervate trunk muscles in tortoises; presence of innervation from the trigeminal nerve to pit organs responsible for infrared detection in snakes. Variation in size, weight, and shape of the brain can be found within reptiles. For instance, crocodilians have the largest brain volume to body weight proportion, followed by turtles, lizards, and snakes. Reptiles also vary in their investment in different brain sections. Crocodilians have the largest telencephalon, while snakes have the smallest. Turtles have the largest diencephalon per body weight, whereas crocodilians have the smallest. On the other hand, lizards have the largest mesencephalon. Yet their brains share several characteristics revealed by recent anatomical, molecular, and ontogenetic studies. Vertebrates share the highest levels of similarity during embryological development, which is controlled by conserved transcription factors and signaling centers and is reflected in gene expression, morphology and cell-type differentiation. In fact, high levels of these transcription factors can be found in all areas of the brain in reptiles and mammals, with shared neuronal clusters shedding light on brain evolution. Conserved transcription factors indicate that evolution acted on different areas of the brain by either retaining similar morphology and function, or diversifying them.
Anatomically, the reptilian brain has fewer subdivisions than the mammalian brain; however, it has numerous conserved aspects, including the organization of the spinal cord and cranial nerves, as well as an elaborated pattern of brain organization. Elaborated brains are characterized by neuronal cell bodies that have migrated away from the periventricular matrix, the region of neuronal development, to form organized nuclear groups. Aside from reptiles and mammals, other vertebrates with elaborated brains include hagfish, galeomorph sharks, skates, rays, teleosts, and birds. Overall, elaborated brains are subdivided into forebrain, midbrain, and hindbrain. The hindbrain coordinates and integrates sensory and motor inputs and outputs responsible for, but not limited to, walking, swimming, or flying. It contains input and output axons interconnecting the spinal cord, midbrain and forebrain, transmitting information from the external and internal environments. The midbrain links sensory, motor, and integrative components received from the hindbrain, connecting it to the forebrain. The tectum, which includes the optic tectum and torus semicircularis, receives auditory, visual, and somatosensory inputs, forming integrated maps of the sensory and visual space around the animal. The tegmentum receives incoming sensory information and forwards motor responses to and from the forebrain. The isthmus connects the hindbrain with the midbrain. The forebrain region, which is particularly well developed, is further divided into the diencephalon and telencephalon. The diencephalon is involved in the regulation of eye and body movement in response to visual stimuli, sensory information, circadian rhythms, olfactory input, and the autonomic nervous system. The telencephalon is involved in the control of movement, sensory systems, and cognitive functions, and contains the neurotransmitters and neuromodulators responsible for integrating inputs and transmitting outputs. Mammals The most obvious difference between the brains of mammals and other vertebrates is their size. On average, a mammal has a brain roughly twice as large as that of a bird of the same body size, and ten times as large as that of a reptile of the same body size. Size, however, is not the only difference: there are also substantial differences in shape. The hindbrain and midbrain of mammals are generally similar to those of other vertebrates, but dramatic differences appear in the forebrain, which is greatly enlarged and also altered in structure. The cerebral cortex is the part of the brain that most strongly distinguishes mammals. In non-mammalian vertebrates, the surface of the cerebrum is lined with a comparatively simple three-layered structure called the pallium. In mammals, the pallium evolves into a complex six-layered structure called neocortex or isocortex. Several areas at the edge of the neocortex, including the hippocampus and amygdala, are also much more extensively developed in mammals than in other vertebrates. The elaboration of the cerebral cortex carries with it changes to other brain areas. The superior colliculus, which plays a major role in visual control of behavior in most vertebrates, shrinks to a small size in mammals, and many of its functions are taken over by visual areas of the cerebral cortex. The cerebellum of mammals contains a large portion (the neocerebellum) dedicated to supporting the cerebral cortex, which has no counterpart in other vertebrates. In placentals, there is a wide nerve tract connecting the cerebral hemispheres called the corpus callosum.
Primates The brains of humans and other primates contain the same structures as the brains of other mammals, but are generally larger in proportion to body size. The encephalization quotient (EQ) is used to compare brain sizes across species. It takes into account the nonlinearity of the brain-to-body relationship. Humans have an average EQ in the 7-to-8 range, while most other primates have an EQ in the 2-to-3 range. Dolphins have values higher than those of primates other than humans, but nearly all other mammals have EQ values that are substantially lower. Most of the enlargement of the primate brain comes from a massive expansion of the cerebral cortex, especially the prefrontal cortex and the parts of the cortex involved in vision. The visual processing network of primates includes at least 30 distinguishable brain areas, with a complex web of interconnections. It has been estimated that visual processing areas occupy more than half of the total surface of the primate neocortex. The prefrontal cortex carries out functions that include planning, working memory, motivation, attention, and executive control. It takes up a much larger proportion of the brain for primates than for other species, and an especially large fraction of the human brain. Development The brain develops in an intricately orchestrated sequence of stages. It changes in shape from a simple swelling at the front of the nerve cord in the earliest embryonic stages, to a complex array of areas and connections. Neurons are created in special zones that contain stem cells, and then migrate through the tissue to reach their ultimate locations. Once neurons have positioned themselves, their axons sprout and navigate through the brain, branching and extending as they go, until the tips reach their targets and form synaptic connections. In a number of parts of the nervous system, neurons and synapses are produced in excessive numbers during the early stages, and then the unneeded ones are pruned away. For vertebrates, the early stages of neural development are similar across all species. As the embryo transforms from a round blob of cells into a wormlike structure, a narrow strip of ectoderm running along the midline of the back is induced to become the neural plate, the precursor of the nervous system. The neural plate folds inward to form the neural groove, and then the lips that line the groove merge to enclose the neural tube, a hollow cord of cells with a fluid-filled ventricle at the center. At the front end, the ventricles and cord swell to form three vesicles that are the precursors of the prosencephalon (forebrain), mesencephalon (midbrain), and rhombencephalon (hindbrain). At the next stage, the forebrain splits into two vesicles called the telencephalon (which will contain the cerebral cortex, basal ganglia, and related structures) and the diencephalon (which will contain the thalamus and hypothalamus). At about the same time, the hindbrain splits into the metencephalon (which will contain the cerebellum and pons) and the myelencephalon (which will contain the medulla oblongata). Each of these areas contains proliferative zones where neurons and glial cells are generated; the resulting cells then migrate, sometimes for long distances, to their final positions. Once a neuron is in place, it extends dendrites and an axon into the area around it. Axons, because they commonly extend a great distance from the cell body and need to reach specific targets, grow in a particularly complex way. 
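As a small worked example of the encephalization quotient introduced at the start of this passage, the sketch below divides an actual brain mass by the mass an allometric baseline would predict for the same body mass. Both the baseline constant and the brain/body figures are rough illustrative values, so the resulting quotients only approximate the ranges quoted above.

```python
# Encephalization quotient (EQ): actual brain mass divided by the brain mass
# an allometric baseline predicts for the animal's body mass. The constant
# and the brain/body figures below are rough, for-illustration-only values.
K, EXPONENT = 0.008, 0.75

def encephalization_quotient(brain_kg, body_kg):
    expected = K * body_kg ** EXPONENT
    return brain_kg / expected

ballpark_figures = {            # (brain mass kg, body mass kg), approximate
    "human":              (1.35, 65.0),
    "chimpanzee":         (0.40, 45.0),
    "bottlenose dolphin": (1.60, 200.0),
    "horse":              (0.60, 500.0),
}

for species, (brain, body) in ballpark_figures.items():
    print(f"{species:20s} EQ ~ {encephalization_quotient(brain, body):.1f}")
```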
The tip of a growing axon consists of a blob of protoplasm called a growth cone, studded with chemical receptors. These receptors sense the local environment, causing the growth cone to be attracted or repelled by various cellular elements, and thus to be pulled in a particular direction at each point along its path. The result of this pathfinding process is that the growth cone navigates through the brain until it reaches its destination area, where other chemical cues cause it to begin generating synapses. Considering the entire brain, thousands of genes create products that influence axonal pathfinding. The synaptic network that finally emerges is only partly determined by genes, though. In many parts of the brain, axons initially "overgrow", and then are "pruned" by mechanisms that depend on neural activity. In the projection from the eye to the midbrain, for example, the structure in the adult contains a very precise mapping, connecting each point on the surface of the retina to a corresponding point in a midbrain layer. In the first stages of development, each axon from the retina is guided to the right general vicinity in the midbrain by chemical cues, but then branches very profusely and makes initial contact with a wide swath of midbrain neurons. The retina, before birth, contains special mechanisms that cause it to generate waves of activity that originate spontaneously at a random point and then propagate slowly across the retinal layer. These waves are useful because they cause neighboring neurons to be active at the same time; that is, they produce a neural activity pattern that contains information about the spatial arrangement of the neurons. This information is exploited in the midbrain by a mechanism that causes synapses to weaken, and eventually vanish, if activity in an axon is not followed by activity of the target cell. The result of this sophisticated process is a gradual tuning and tightening of the map, leaving it finally in its precise adult form. Similar things happen in other brain areas: an initial synaptic matrix is generated as a result of genetically determined chemical guidance, but then gradually refined by activity-dependent mechanisms, partly driven by internal dynamics, partly by external sensory inputs. In some cases, as with the retina-midbrain system, activity patterns depend on mechanisms that operate only in the developing brain, and apparently exist solely to guide development. In humans and many other mammals, new neurons are created mainly before birth, and the infant brain contains substantially more neurons than the adult brain. There are, however, a few areas where new neurons continue to be generated throughout life. The two areas for which adult neurogenesis is well established are the olfactory bulb, which is involved in the sense of smell, and the dentate gyrus of the hippocampus, where there is evidence that the new neurons play a role in storing newly acquired memories. With these exceptions, however, the set of neurons that is present in early childhood is the set that is present for life. Glial cells are different: as with most types of cells in the body, they are generated throughout the lifespan. There has long been debate about whether the qualities of mind, personality, and intelligence can be attributed to heredity or to upbringing. Although many details remain to be settled, neuroscience shows that both factors are important. 
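The refinement step described above, in which a synapse weakens and eventually vanishes when activity in the axon is not matched by activity in its target, can be sketched as a toy correlation-based rule. This is a generic illustration of the idea rather than a model of the actual retinotectal mechanism; the probabilities, learning rates, and pruning threshold are all invented.

```python
import random

# Toy activity-dependent refinement: one retinal axon initially contacts
# several midbrain cells, but only the "correct" target tends to fire
# together with it. Correlated firing strengthens a synapse, uncorrelated
# firing weakens it, and synapses below a threshold are pruned.
random.seed(1)

targets = ["correct_target", "neighbor_1", "neighbor_2", "neighbor_3"]
weights = {t: 1.0 for t in targets}
STRENGTHEN, WEAKEN, PRUNE_BELOW, CEILING = 0.05, 0.04, 0.2, 3.0

for wave in range(500):                    # repeated waves of retinal activity
    axon_fires = random.random() < 0.5
    if not axon_fires:
        continue
    for t in targets:
        # Co-activation is likely only for the correct target.
        p_target_fires = 0.9 if t == "correct_target" else 0.15
        if random.random() < p_target_fires:
            weights[t] = min(weights[t] + STRENGTHEN, CEILING)
        else:
            weights[t] = max(weights[t] - WEAKEN, 0.0)

surviving = {t: round(w, 2) for t, w in weights.items() if w >= PRUNE_BELOW}
print("synapses surviving refinement:", surviving)
```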
Genes determine both the general form of the brain and how it reacts to experience, but experience is required to refine the matrix of synaptic connections, resulting in greatly increased complexity. The presence or absence of experience is critical at key periods of development. Additionally, the quantity and quality of experience are important. For example, animals raised in enriched environments have thicker cerebral cortices, indicating a higher density of synaptic connections, than animals with restricted levels of stimulation. Physiology The functions of the brain depend on the ability of neurons to transmit electrochemical signals to other cells, and their ability to respond appropriately to electrochemical signals received from other cells. The electrical properties of neurons are controlled by a wide variety of biochemical and metabolic processes, most notably the interactions between neurotransmitters and receptors that take place at synapses. Neurotransmitters and receptors Neurotransmitters are chemicals that are released at synapses when the local membrane is depolarised and Ca2+ enters the cell, typically when an action potential arrives at the synapse. The neurotransmitters attach themselves to receptor molecules on the membrane of the synapse's target cell (or cells), and thereby alter the electrical or chemical properties of the receptor molecules. With few exceptions, each neuron in the brain releases the same chemical neurotransmitter, or combination of neurotransmitters, at all the synaptic connections it makes with other neurons; this rule is known as Dale's principle. Thus, a neuron can be characterized by the neurotransmitters that it releases. The great majority of psychoactive drugs exert their effects by altering specific neurotransmitter systems. This applies to drugs such as cannabinoids, nicotine, heroin, cocaine, alcohol, fluoxetine, chlorpromazine, and many others. The two neurotransmitters that are most widely found in the vertebrate brain are glutamate, which almost always exerts excitatory effects on target neurons, and gamma-aminobutyric acid (GABA), which is almost always inhibitory. Neurons using these transmitters can be found in nearly every part of the brain. Because of their ubiquity, drugs that act on glutamate or GABA tend to have broad and powerful effects. Some general anesthetics act by reducing the effects of glutamate; most tranquilizers exert their sedative effects by enhancing the effects of GABA. There are dozens of other chemical neurotransmitters that are used in more limited areas of the brain, often areas dedicated to a particular function. Serotonin, for example—the primary target of many antidepressant drugs and many dietary aids—comes exclusively from a small brainstem area called the raphe nuclei. Norepinephrine, which is involved in arousal, comes exclusively from a nearby small area called the locus coeruleus. Other neurotransmitters such as acetylcholine and dopamine have multiple sources in the brain but are not as ubiquitously distributed as glutamate and GABA. Electrical activity As a side effect of the electrochemical processes used by neurons for signaling, brain tissue generates electric fields when it is active. When large numbers of neurons show synchronized activity, the electric fields that they generate can be large enough to detect outside the skull, using electroencephalography (EEG) or magnetoencephalography (MEG).
EEG recordings, along with recordings made from electrodes implanted inside the brains of animals such as rats, show that the brain of a living animal is constantly active, even during sleep. Each part of the brain shows a mixture of rhythmic and nonrhythmic activity, which may vary according to behavioral state. In mammals, the cerebral cortex tends to show large slow delta waves during sleep, faster alpha waves when the animal is awake but inattentive, and chaotic-looking irregular activity when the animal is actively engaged in a task, called beta and gamma waves. During an epileptic seizure, the brain's inhibitory control mechanisms fail to function and electrical activity rises to pathological levels, producing EEG traces that show large wave and spike patterns not seen in a healthy brain. Relating these population-level patterns to the computational functions of individual neurons is a major focus of current research in neurophysiology. Metabolism All vertebrates have a blood–brain barrier that allows metabolism inside the brain to operate differently from metabolism in other parts of the body. The neurovascular unit regulates cerebral blood flow so that activated neurons can be supplied with energy. Glial cells play a major role in brain metabolism by controlling the chemical composition of the fluid that surrounds neurons, including levels of ions and nutrients. Brain tissue consumes a large amount of energy in proportion to its volume, so large brains place severe metabolic demands on animals. The need to limit body weight in order, for example, to fly, has apparently led to selection for a reduction of brain size in some species, such as bats. Most of the brain's energy consumption goes into sustaining the electric charge (membrane potential) of neurons. Most vertebrate species devote between 2% and 8% of basal metabolism to the brain. In primates, however, the percentage is much higher—in humans it rises to 20–25%. The energy consumption of the brain does not vary greatly over time, but active regions of the cerebral cortex consume somewhat more energy than inactive regions; this forms the basis for the functional brain imaging methods of PET, fMRI, and NIRS. The brain typically gets most of its energy from oxygen-dependent metabolism of glucose (i.e., blood sugar), but ketones provide a major alternative source, together with contributions from medium chain fatty acids (caprylic and heptanoic acids), lactate, acetate, and possibly amino acids. Function Information from the sense organs is collected in the brain. There it is used to determine what actions the organism is to take. The brain processes the raw data to extract information about the structure of the environment. Next it combines the processed information with information about the current needs of the animal and with memory of past circumstances. Finally, on the basis of the results, it generates motor response patterns. These signal-processing tasks require intricate interplay between a variety of functional subsystems. The function of the brain is to provide coherent control over the actions of an animal. A centralized brain allows groups of muscles to be co-activated in complex patterns; it also allows stimuli impinging on one part of the body to evoke responses in other parts, and it can prevent different parts of the body from acting at cross-purposes to each other. 
Perception The human brain is provided with information about light, sound, the chemical composition of the atmosphere, temperature, the position of the body in space (proprioception), the chemical composition of the bloodstream, and more. In other animals additional senses are present, such as the infrared heat-sense of snakes, the magnetic field sense of some birds, or the electric field sense mainly seen in aquatic animals. Each sensory system begins with specialized receptor cells, such as photoreceptor cells in the retina of the eye, or vibration-sensitive hair cells in the cochlea of the ear. The axons of sensory receptor cells travel into the spinal cord or brain, where they transmit their signals to a first-order sensory nucleus dedicated to one specific sensory modality. This primary sensory nucleus sends information to higher-order sensory areas that are dedicated to the same modality. Eventually, via a way-station in the thalamus, the signals are sent to the cerebral cortex, where they are processed to extract the relevant features, and integrated with signals coming from other sensory systems. Motor control Motor systems are areas of the brain that are involved in initiating body movements, that is, in activating muscles. Except for the muscles that control the eye, which are driven by nuclei in the midbrain, all the voluntary muscles in the body are directly innervated by motor neurons in the spinal cord and hindbrain. Spinal motor neurons are controlled both by neural circuits intrinsic to the spinal cord, and by inputs that descend from the brain. The intrinsic spinal circuits implement many reflex responses, and contain pattern generators for rhythmic movements such as walking or swimming. The descending connections from the brain allow for more sophisticated control. The brain contains several motor areas that project directly to the spinal cord. At the lowest level are motor areas in the medulla and pons, which control stereotyped movements such as walking, breathing, or swallowing. At a higher level are areas in the midbrain, such as the red nucleus, which is responsible for coordinating movements of the arms and legs. At a higher level yet is the primary motor cortex, a strip of tissue located at the posterior edge of the frontal lobe. The primary motor cortex sends projections to the subcortical motor areas, but also sends a massive projection directly to the spinal cord, through the pyramidal tract. This direct corticospinal projection allows for precise voluntary control of the fine details of movements. Other motor-related brain areas exert secondary effects by projecting to the primary motor areas. Among the most important secondary areas are the premotor cortex, supplementary motor area, basal ganglia, and cerebellum. In addition to all of the above, the brain and spinal cord contain extensive circuitry to control the autonomic nervous system which controls the movement of the smooth muscle of the body. Sleep Many animals alternate between sleeping and waking in a daily cycle. Arousal and alertness are also modulated on a finer time scale by a network of brain areas. A key component of the sleep system is the suprachiasmatic nucleus (SCN), a tiny part of the hypothalamus located directly above the point at which the optic nerves from the two eyes cross. The SCN contains the body's central biological clock. 
Neurons there show activity levels that rise and fall with a period of about 24 hours, circadian rhythms: these activity fluctuations are driven by rhythmic changes in expression of a set of "clock genes". The SCN continues to keep time even if it is excised from the brain and placed in a dish of warm nutrient solution, but it ordinarily receives input from the optic nerves, through the retinohypothalamic tract (RHT), that allows daily light-dark cycles to calibrate the clock. The SCN projects to a set of areas in the hypothalamus, brainstem, and midbrain that are involved in implementing sleep-wake cycles. An important component of the system is the reticular formation, a group of neuron-clusters scattered diffusely through the core of the lower brain. Reticular neurons send signals to the thalamus, which in turn sends activity-level-controlling signals to every part of the cortex. Damage to the reticular formation can produce a permanent state of coma. Sleep involves great changes in brain activity. Until the 1950s it was generally believed that the brain essentially shuts off during sleep, but this is now known to be far from true; activity continues, but patterns become very different. There are two types of sleep: REM sleep (with dreaming) and NREM (non-REM, usually without dreaming) sleep, which repeat in slightly varying patterns throughout a sleep episode. Three broad types of distinct brain activity patterns can be measured: REM, light NREM and deep NREM. During deep NREM sleep, also called slow wave sleep, activity in the cortex takes the form of large synchronized waves, whereas in the waking state it is noisy and desynchronized. Levels of the neurotransmitters norepinephrine and serotonin drop during slow wave sleep, and fall almost to zero during REM sleep; levels of acetylcholine show the reverse pattern. Homeostasis For any animal, survival requires maintaining a variety of parameters of bodily state within a limited range of variation: these include temperature, water content, salt concentration in the bloodstream, blood glucose levels, blood oxygen level, and others. The ability of an animal to regulate the internal environment of its body—the milieu intérieur, as the pioneering physiologist Claude Bernard called it—is known as homeostasis (Greek for "standing still"). Maintaining homeostasis is a crucial function of the brain. The basic principle that underlies homeostasis is negative feedback: any time a parameter diverges from its set-point, sensors generate an error signal that evokes a response that causes the parameter to shift back toward its optimum value. (This principle is widely used in engineering, for example in the control of temperature using a thermostat.) In vertebrates, the part of the brain that plays the greatest role is the hypothalamus, a small region at the base of the forebrain whose size does not reflect its complexity or the importance of its function. The hypothalamus is a collection of small nuclei, most of which are involved in basic biological functions. Some of these functions relate to arousal or to social interactions such as sexuality, aggression, or maternal behaviors; but many of them relate to homeostasis. Several hypothalamic nuclei receive input from sensors located in the lining of blood vessels, conveying information about temperature, sodium level, glucose level, blood oxygen level, and other parameters. These hypothalamic nuclei send output signals to motor areas that can generate actions to rectify deficiencies. 
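The negative feedback principle described above can be written down in a few lines: a sensed value is compared with a set-point, and the resulting error drives a corrective response, much as a thermostat does. The temperatures, gain, and heat-loss figures below are arbitrary illustration values, not physiological constants.

```python
# Minimal negative-feedback sketch: a regulated variable (here, a body
# temperature) is pulled back toward its set-point. All numbers are
# arbitrary and purely illustrative.
SET_POINT = 37.0     # desired core temperature, degrees C
GAIN = 0.3           # how strongly the error drives the corrective response
HEAT_LOSS = 0.1      # constant loss to the environment per step

def corrective_response(error):
    """Too cold (positive error) -> generate heat; too warm -> shed heat."""
    return GAIN * error

temperature = 34.0   # start perturbed well below the set-point
for step in range(15):
    error = SET_POINT - temperature        # sensor compares state to set-point
    temperature += corrective_response(error) - HEAT_LOSS
    print(f"step {step:2d}: {temperature:5.2f} C")
```

The variable climbs back to just below the set-point; the small remaining offset comes from the constant loss term, which purely proportional correction of this kind never fully cancels.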
Some of the outputs also go to the pituitary gland, a tiny gland attached to the brain directly underneath the hypothalamus. The pituitary gland secretes hormones into the bloodstream, where they circulate throughout the body and induce changes in cellular activity. Motivation The individual animals need to express survival-promoting behaviors, such as seeking food, water, shelter, and a mate. The motivational system in the brain monitors the current state of satisfaction of these goals, and activates behaviors to meet any needs that arise. The motivational system works largely by a reward–punishment mechanism. When a particular behavior is followed by favorable consequences, the reward mechanism in the brain is activated, which induces structural changes inside the brain that cause the same behavior to be repeated later, whenever a similar situation arises. Conversely, when a behavior is followed by unfavorable consequences, the brain's punishment mechanism is activated, inducing structural changes that cause the behavior to be suppressed when similar situations arise in the future. Most organisms studied to date use a reward–punishment mechanism: for instance, worms and insects can alter their behavior to seek food sources or to avoid dangers. In vertebrates, the reward-punishment system is implemented by a specific set of brain structures, at the heart of which lie the basal ganglia, a set of interconnected areas at the base of the forebrain. The basal ganglia are the central site at which decisions are made: the basal ganglia exert a sustained inhibitory control over most of the motor systems in the brain; when this inhibition is released, a motor system is permitted to execute the action it is programmed to carry out. Rewards and punishments function by altering the relationship between the inputs that the basal ganglia receive and the decision-signals that are emitted. The reward mechanism is better understood than the punishment mechanism, because its role in drug abuse has caused it to be studied very intensively. Research has shown that the neurotransmitter dopamine plays a central role: addictive drugs such as cocaine, amphetamine, and nicotine either cause dopamine levels to rise or cause the effects of dopamine inside the brain to be enhanced. Learning and memory Almost all animals are capable of modifying their behavior as a result of experience—even the most primitive types of worms. Because behavior is driven by brain activity, changes in behavior must somehow correspond to changes inside the brain. Already in the late 19th century theorists like Santiago Ramón y Cajal argued that the most plausible explanation is that learning and memory are expressed as changes in the synaptic connections between neurons. Until 1970, however, experimental evidence to support the synaptic plasticity hypothesis was lacking. In 1971 Tim Bliss and Terje Lømo published a paper on a phenomenon now called long-term potentiation: the paper showed clear evidence of activity-induced synaptic changes that lasted for at least several days. Since then technical advances have made these sorts of experiments much easier to carry out, and thousands of studies have been made that have clarified the mechanism of synaptic change, and uncovered other types of activity-driven synaptic change in a variety of brain areas, including the cerebral cortex, hippocampus, basal ganglia, and cerebellum. Brain-derived neurotrophic factor (BDNF) and physical activity appear to play a beneficial role in the process. 
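A toy sketch of the reward-punishment idea described above is given below: a behavior followed by a favorable outcome is strengthened, and one followed by an unfavorable outcome is suppressed. The update rule, the two behavior names and all numerical values are illustrative assumptions, not a model of the basal ganglia or of dopamine signalling.

import random

weights = {"approach": 0.5, "avoid": 0.5}   # tendencies toward two candidate behaviors
learning_rate = 0.2

def choose_action(rng):
    # behaviors are selected with probability proportional to their current strength
    total = sum(weights.values())
    return "approach" if rng.uniform(0, total) < weights["approach"] else "avoid"

def update(action, reward):
    # a rewarded action is strengthened (the "structural change"); a punished one is weakened
    weights[action] = max(0.05, weights[action] + learning_rate * reward)

rng = random.Random(0)
for trial in range(50):
    action = choose_action(rng)
    reward = 1.0 if action == "approach" else -1.0   # this toy environment favors "approach"
    update(action, reward)

print(weights)   # the rewarded behavior ends up with by far the larger weight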
Neuroscientists currently distinguish several types of learning and memory that are implemented by the brain in distinct ways: Working memory is the ability of the brain to maintain a temporary representation of information about the task that an animal is currently engaged in. This sort of dynamic memory is thought to be mediated by the formation of cell assemblies—groups of activated neurons that maintain their activity by constantly stimulating one another. Episodic memory is the ability to remember the details of specific events. This sort of memory can last for a lifetime. Much evidence implicates the hippocampus in playing a crucial role: people with severe damage to the hippocampus sometimes show amnesia, that is, inability to form new long-lasting episodic memories. Semantic memory is the ability to learn facts and relationships. This sort of memory is probably stored largely in the cerebral cortex, mediated by changes in connections between cells that represent specific types of information. Instrumental learning is the ability for rewards and punishments to modify behavior. It is implemented by a network of brain areas centered on the basal ganglia. Motor learning is the ability to refine patterns of body movement by practicing, or more generally by repetition. A number of brain areas are involved, including the premotor cortex, basal ganglia, and especially the cerebellum, which functions as a large memory bank for microadjustments of the parameters of movement. Research The field of neuroscience encompasses all approaches that seek to understand the brain and the rest of the nervous system. Psychology seeks to understand mind and behavior, and neurology is the medical discipline that diagnoses and treats diseases of the nervous system. The brain is also the most important organ studied in psychiatry, the branch of medicine that works to study, prevent, and treat mental disorders. Cognitive science seeks to unify neuroscience and psychology with other fields that concern themselves with the brain, such as computer science (artificial intelligence and similar fields) and philosophy. The oldest method of studying the brain is anatomical, and until the middle of the 20th century, much of the progress in neuroscience came from the development of better cell stains and better microscopes. Neuroanatomists study the large-scale structure of the brain as well as the microscopic structure of neurons and their components, especially synapses. Among other tools, they employ a plethora of stains that reveal neural structure, chemistry, and connectivity. In recent years, the development of immunostaining techniques has allowed investigation of neurons that express specific sets of genes. Also, functional neuroanatomy uses medical imaging techniques to correlate variations in human brain structure with differences in cognition or behavior. Neurophysiologists study the chemical, pharmacological, and electrical properties of the brain: their primary tools are drugs and recording devices. Thousands of experimentally developed drugs affect the nervous system, some in highly specific ways. Recordings of brain activity can be made using electrodes, either glued to the scalp as in EEG studies, or implanted inside the brains of animals for extracellular recordings, which can detect action potentials generated by individual neurons. 
Because the brain does not contain pain receptors, it is possible using these techniques to record brain activity from animals that are awake and behaving without causing distress. The same techniques have occasionally been used to study brain activity in human patients with intractable epilepsy, in cases where there was a medical necessity to implant electrodes to localize the brain area responsible for epileptic seizures. Functional imaging techniques such as fMRI are also used to study brain activity; these techniques have mainly been used with human subjects, because they require a conscious subject to remain motionless for long periods of time, but they have the great advantage of being noninvasive. Another approach to brain function is to examine the consequences of damage to specific brain areas. Even though it is protected by the skull and meninges, surrounded by cerebrospinal fluid, and isolated from the bloodstream by the blood–brain barrier, the delicate nature of the brain makes it vulnerable to numerous diseases and several types of damage. In humans, the effects of strokes and other types of brain damage have been a key source of information about brain function. Because there is no ability to experimentally control the nature of the damage, however, this information is often difficult to interpret. In animal studies, most commonly involving rats, it is possible to use electrodes or locally injected chemicals to produce precise patterns of damage and then examine the consequences for behavior. Computational neuroscience encompasses two approaches: first, the use of computers to study the brain; second, the study of how brains perform computation. On one hand, it is possible to write a computer program to simulate the operation of a group of neurons by making use of systems of equations that describe their electrochemical activity; such simulations are known as biologically realistic neural networks. On the other hand, it is possible to study algorithms for neural computation by simulating, or mathematically analyzing, the operations of simplified "units" that have some of the properties of neurons but abstract out much of their biological complexity. The computational functions of the brain are studied both by computer scientists and neuroscientists. Computational neurogenetic modeling is concerned with the study and development of dynamic neuronal models for modeling brain functions with respect to genes and dynamic interactions between genes. Recent years have seen increasing applications of genetic and genomic techniques to the study of the brain and a focus on the roles of neurotrophic factors and physical activity in neuroplasticity. The most common subjects are mice, because of the availability of technical tools. It is now possible with relative ease to "knock out" or mutate a wide variety of genes, and then examine the effects on brain function. More sophisticated approaches are also being used: for example, using Cre-Lox recombination it is possible to activate or deactivate genes in specific parts of the brain, at specific times. Recent years have also seen rapid advances in single-cell sequencing technologies, and these have been used to leverage the cellular heterogeneity of the brain as a means of better understanding the roles of distinct cell types in disease and biology (as well as how genomic variants influence individual cell types). 
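As a pointer to how "genomic variants influence individual cell types" can be made quantitative, the sketch below runs a toy association test between a variant's allele count and a gene's expression level, the kind of relationship measured at scale as expression quantitative trait loci in the study described next. All numbers are invented for illustration.

genotypes = [0, 0, 1, 1, 1, 2, 2, 2]                     # copies of the alternate allele per person
expression = [4.8, 5.1, 6.0, 5.7, 6.3, 7.2, 6.9, 7.4]    # matched expression measurements

def pearson(x, y):
    # plain Pearson correlation, written out to keep the sketch dependency-free
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson(genotypes, expression), 2))   # a strong positive association in this toy data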
In 2024, investigators studied a large integrated dataset of almost 3 million nuclei from the human prefrontal cortex of 388 individuals. In doing so, they annotated 28 cell types to evaluate expression and chromatin variation across gene families and drug targets. They identified about half a million cell type–specific regulatory elements and about 1.5 million single-cell expression quantitative trait loci (i.e., genomic variants with strong statistical associations with changes in gene expression within specific cell types), which were then used to build cell-type regulatory networks (the study also describes cell-to-cell communication networks). These networks were found to manifest cellular changes in aging and neuropsychiatric disorders. As part of the same investigation, a machine learning model was designed to accurately impute single-cell expression (this model prioritized ~250 disease-risk genes and drug targets with associated cell types). History The oldest brain to have been discovered was in Armenia in the Areni-1 cave complex. The brain, estimated to be over 5,000 years old, was found in the skull of a 12- to 14-year-old girl. Although the brain was shriveled, it was well preserved due to the climate found inside the cave. Early philosophers were divided as to whether the seat of the soul lies in the brain or heart. Aristotle favored the heart, and thought that the function of the brain was merely to cool the blood. Democritus, the inventor of the atomic theory of matter, argued for a three-part soul, with intellect in the head, emotion in the heart, and lust near the liver. The unknown author of On the Sacred Disease, a medical treatise in the Hippocratic Corpus, came down unequivocally in favor of the brain. The Roman physician Galen also argued for the importance of the brain, and theorized in some depth about how it might work. Galen traced out the anatomical relationships among brain, nerves, and muscles, demonstrating that all muscles in the body are connected to the brain through a branching network of nerves. He postulated that nerves activate muscles mechanically by carrying a mysterious substance he called pneumata psychikon, usually translated as "animal spirits". Galen's ideas were widely known during the Middle Ages, but not much further progress came until the Renaissance, when detailed anatomical study resumed, combined with the theoretical speculations of René Descartes and those who followed him. Descartes, like Galen, thought of the nervous system in hydraulic terms. He believed that the highest cognitive functions are carried out by a non-physical res cogitans, but that the majority of behaviors of humans, and all behaviors of animals, could be explained mechanistically. The first real progress toward a modern understanding of nervous function, though, came from the investigations of Luigi Galvani (1737–1798), who discovered that a shock of static electricity applied to an exposed nerve of a dead frog could cause its leg to contract. Since that time, each major advance in understanding has followed more or less directly from the development of a new technique of investigation. Until the early years of the 20th century, the most important advances were derived from new methods for staining cells. Particularly critical was the invention of the Golgi stain, which (when correctly used) stains only a small fraction of neurons, but stains them in their entirety, including cell body, dendrites, and axon.

Without such a stain, brain tissue under a microscope appears as an impenetrable tangle of protoplasmic fibers, in which it is impossible to determine any structure. In the hands of Camillo Golgi, and especially of the Spanish neuroanatomist Santiago Ramón y Cajal, the new stain revealed hundreds of distinct types of neurons, each with its own unique dendritic structure and pattern of connectivity. In the first half of the 20th century, advances in electronics enabled investigation of the electrical properties of nerve cells, culminating in work by Alan Hodgkin, Andrew Huxley, and others on the biophysics of the action potential, and the work of Bernard Katz and others on the electrochemistry of the synapse. These studies complemented the anatomical picture with a conception of the brain as a dynamic entity. Reflecting the new understanding, in 1942 Charles Sherrington visualized the workings of the brain waking from sleep: The invention of electronic computers in the 1940s, along with the development of mathematical information theory, led to a realization that brains can potentially be understood as information processing systems. This concept formed the basis of the field of cybernetics, and eventually gave rise to the field now known as computational neuroscience. The earliest attempts at cybernetics were somewhat crude in that they treated the brain as essentially a digital computer in disguise, as for example in John von Neumann's 1958 book, The Computer and the Brain. Over the years, though, accumulating information about the electrical responses of brain cells recorded from behaving animals has steadily moved theoretical concepts in the direction of increasing realism. One of the most influential early contributions was a 1959 paper titled What the frog's eye tells the frog's brain: the paper examined the visual responses of neurons in the retina and optic tectum of frogs, and came to the conclusion that some neurons in the tectum of the frog are wired to combine elementary responses in a way that makes them function as "bug perceivers". A few years later David Hubel and Torsten Wiesel discovered cells in the primary visual cortex of monkeys that become active when sharp edges move across specific points in the field of view—a discovery for which they won a Nobel Prize. Follow-up studies in higher-order visual areas found cells that detect binocular disparity, color, movement, and aspects of shape, with areas located at increasing distances from the primary visual cortex showing increasingly complex responses. Other investigations of brain areas unrelated to vision have revealed cells with a wide variety of response correlates, some related to memory, some to abstract types of cognition such as space. Theorists have worked to understand these response patterns by constructing mathematical models of neurons and neural networks, which can be simulated using computers. Some useful models are abstract, focusing on the conceptual structure of neural algorithms rather than the details of how they are implemented in the brain; other models attempt to incorporate data about the biophysical properties of real neurons. No model on any level is yet considered to be a fully valid description of brain function, though. 
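To give a flavor of the simpler end of this modeling spectrum, the sketch below simulates a single neuron with one equation for its electrical state, a leaky integrate-and-fire unit; the biophysically detailed models mentioned above are far richer. All parameter values are illustrative assumptions, not measurements from any real cell.

def simulate_lif(input_current=1.6, t_max=100.0, dt=0.1,
                 tau=10.0, v_rest=-70.0, v_threshold=-55.0, v_reset=-75.0):
    # the membrane potential leaks toward rest while integrating the input;
    # crossing threshold counts as a spike and resets the potential
    v = v_rest
    spike_times = []
    for step in range(int(t_max / dt)):
        dv = (-(v - v_rest) + input_current * 10.0) / tau
        v += dv * dt
        if v >= v_threshold:
            spike_times.append(round(step * dt, 1))
            v = v_reset
    return spike_times

print(simulate_lif()[:5])   # times (in ms) of the first few simulated spikes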
The essential difficulty is that sophisticated computation by neural networks requires distributed processing in which hundreds or thousands of neurons work cooperatively—current methods of brain activity recording are only capable of isolating action potentials from a few dozen neurons at a time. Furthermore, even single neurons appear to be complex and capable of performing computations. So, brain models that do not reflect this are too abstract to be representative of brain operation; models that do try to capture this are very computationally expensive and arguably intractable with present computational resources. However, the Human Brain Project is trying to build a realistic, detailed computational model of the entire human brain. The wisdom of this approach has been publicly contested, with high-profile scientists on both sides of the argument. In the second half of the 20th century, developments in chemistry, electron microscopy, genetics, computer science, functional brain imaging, and other fields progressively opened new windows into brain structure and function. In the United States, the 1990s were officially designated as the "Decade of the Brain" to commemorate advances made in brain research, and to promote funding for such research. In the 21st century, these trends have continued, and several new approaches have come into prominence, including multielectrode recording, which allows the activity of many brain cells to be recorded all at the same time; genetic engineering, which allows molecular components of the brain to be altered experimentally; genomics, which allows variations in brain structure to be correlated with variations in DNA properties; and neuroimaging. Society and culture As food Animal brains are used as food in numerous cuisines. In rituals Some archaeological evidence suggests that the mourning rituals of European Neanderthals also involved the consumption of the brain. The Fore people of Papua New Guinea are known to eat human brains. In funerary rituals, those close to the dead would eat the brain of the deceased to create a sense of immortality. A prion disease called kuru has been traced to this.
Biology and health sciences
Biology
null
3742
https://en.wikipedia.org/wiki/Bluetooth
Bluetooth
Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range of up to . It employs UHF radio waves in the ISM bands, from 2.402GHz to 2.48GHz. It is mainly used as an alternative to wired connections to exchange files between nearby portable devices and connect cell phones and music players with wireless headphones, wireless speakers, HIFI systems, car audio and wireless transmission between TVs and soundbars. Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1 but no longer maintains the standard. The Bluetooth SIG oversees the development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market it as a Bluetooth device. A network of patents applies to the technology, which is licensed to individual qualifying devices. , 4.7 billion Bluetooth integrated circuit chips are shipped annually. Bluetooth was first demonstrated in space in 2024, an early test envisioned to enhance IoT capabilities. Etymology The name "Bluetooth" was proposed in 1997 by Jim Kardach of Intel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales from Frans G. Bengtsson's The Long Ships, a historical novel about Vikings and the 10th-century Danish king Harald Bluetooth. Upon discovering a picture of the runestone of Harald Bluetooth in the book A History of the Vikings by Gwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth. According to Bluetooth's official website, Bluetooth is the Anglicised version of the Scandinavian Blåtand/Blåtann (or in Old Norse blátǫnn). It was the epithet of King Harald Bluetooth, who united the disparate Danish tribes into a single kingdom; Kardach chose the name to imply that Bluetooth similarly unites communication protocols. The Bluetooth logo is a bind rune merging the Younger Futhark runes  (ᚼ, Hagall) and  (ᛒ, Bjarkan), Harald's initials. History The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO at Ericsson Mobile in Lund, Sweden. The purpose was to develop wireless headsets, according to two inventions by Johan Ullman, and . Nils Rydbeck tasked Tord Wingren with specifying and Dutchman Jaap Haartsen and Sven Mattisson with developing. Both were working for Ericsson in Lund. Principal design and development began in 1994 and by 1997 the team had a workable solution. From 1997 Örjan Johansson became the project leader and propelled the technology and standardization. In 1997, Adalio Sanchez, then head of IBM ThinkPad product R&D, approached Nils Rydbeck about collaborating on integrating a mobile phone into a ThinkPad notebook. The two assigned engineers from Ericsson and IBM studied the idea. The conclusion was that power consumption on cellphone technology at that time was too high to allow viable integration into a notebook and still achieve adequate battery life. 
Instead, the two companies agreed to integrate Ericsson's short-link technology on both a ThinkPad notebook and an Ericsson phone to accomplish the goal. Since neither IBM ThinkPad notebooks nor Ericsson phones were the market share leaders in their respective markets at that time, Adalio Sanchez and Nils Rydbeck agreed to make the short-link technology an open industry standard to permit each player maximum market access. Ericsson contributed the short-link radio technology, and IBM contributed patents around the logical layer. Adalio Sanchez of IBM then recruited Stephen Nachtsheim of Intel to join and then Intel also recruited Toshiba and Nokia. In May 1998, the Bluetooth SIG was launched with IBM and Ericsson as the founding signatories and a total of five members: Ericsson, Intel, Nokia, Toshiba, and IBM. The first Bluetooth device was revealed in 1999. It was a hands-free mobile headset that earned the "Best of show Technology Award" at COMDEX. The first Bluetooth mobile phone was the unreleased prototype Ericsson T36, though it was the revised Ericsson model T39 that actually made it to store shelves in June 2001. However Ericsson released the R520m in Quarter 1 of 2001, making the R520m the first ever commercially available Bluetooth phone. In parallel, IBM introduced the IBM ThinkPad A30 in October 2001 which was the first notebook with integrated Bluetooth. Bluetooth's early incorporation into consumer electronics products continued at Vosi Technologies in Costa Mesa, California, initially overseen by founding members Bejan Amini and Tom Davidson. Vosi Technologies had been created by real estate developer Ivano Stegmenga, with United States Patent 608507, for communication between a cellular phone and a vehicle's audio system. At the time, Sony/Ericsson had only a minor market share in the cellular phone market, which was dominated in the US by Nokia and Motorola. Due to ongoing negotiations for an intended licensing agreement with Motorola beginning in the late 1990s, Vosi could not publicly disclose the intention, integration, and initial development of other enabled devices which were to be the first "Smart Home" internet connected devices. Vosi needed a means for the system to communicate without a wired connection from the vehicle to the other devices in the network. Bluetooth was chosen, since Wi-Fi was not yet readily available or supported in the public market. Vosi had begun to develop the Vosi Cello integrated vehicular system and some other internet connected devices, one of which was intended to be a table-top device named the Vosi Symphony, networked with Bluetooth. Through the negotiations with Motorola, Vosi introduced and disclosed its intent to integrate Bluetooth in its devices. In the early 2000s a legal battle ensued between Vosi and Motorola, which indefinitely suspended release of the devices. Later, Motorola implemented it in their devices which initiated the significant propagation of Bluetooth in the public market due to its large market share at the time. In 2012, Jaap Haartsen was nominated by the European Patent Office for the European Inventor Award. Implementation Bluetooth operates at frequencies between 2.402 and 2.480GHz, or 2.400 and 2.4835GHz, including guard bands 2MHz wide at the bottom end and 3.5MHz wide at the top. This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4GHz short-range radio frequency band. Bluetooth uses a radio technology called frequency-hopping spread spectrum. 
Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1MHz. It usually performs 1600hops per second, with adaptive frequency-hopping (AFH) enabled. Bluetooth Low Energy uses 2MHz spacing, which accommodates 40 channels. Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK (differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous bit rate of 1Mbit/s is possible. The term Enhanced Data Rate (EDR) is used to describe π/4-DPSK (EDR2) and 8-DPSK (EDR3) schemes, transferring 2 and 3Mbit/s respectively. In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSK modulation on 4 MHz channels with forward error correction (FEC). Bluetooth is a packet-based protocol with a master/slave architecture. One master may communicate with up to seven slaves in a piconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5μs, two clock ticks then make up a slot of 625μs, and two slots make up a slot pair of 1250μs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases, the master's transmission begins in even slots and the slave's in odd slots. The above excludes Bluetooth Low Energy, introduced in the 4.0 specification, which uses the same spectrum but somewhat differently. Communication and connection A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave). The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices simultaneously play the master/leader role in one piconet and the slave role in another. At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in a round-robin fashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets. Uses Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-cost transceiver microchips in each device. Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, a quasi optical wireless path must be viable. 
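The channel plan and slot timing described above can be sketched numerically: 79 channels of 1 MHz, 625 microsecond slots, and a master that transmits in even-numbered slots. The real hop sequence is a pseudo-random function of the master's address and clock, which is not reproduced here; the random channel choice below is only a stand-in, and the 2402 MHz base frequency follows from the band edges given earlier.

import random

SLOT_US = 625                                   # one slot = two 312.5 us clock ticks
CHANNELS_MHZ = [2402 + k for k in range(79)]    # channel k -> centre frequency in MHz

def simulate_slots(n_slots=8, seed=42):
    rng = random.Random(seed)
    schedule = []
    for slot in range(n_slots):
        freq = rng.choice(CHANNELS_MHZ)          # stand-in for the real hop selection
        role = "master TX" if slot % 2 == 0 else "slave TX"
        schedule.append((slot, slot * SLOT_US, freq, role))
    return schedule

for slot, start_us, freq, role in simulate_slots():
    print(f"slot {slot}: t={start_us} us, {freq} MHz, {role}")

# At a symbol rate of 1 Msymbol/s (consistent with the 1 Mbit/s basic rate
# above), GFSK carries 1 bit per symbol, pi/4-DQPSK carries 2 bits (EDR2,
# ~2 Mbit/s) and 8-DPSK carries 3 bits (EDR3, ~3 Mbit/s).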
Bluetooth classes and power use Historically, the Bluetooth range was defined by the radio class, with a lower class (and higher output power) having larger range. The actual range of a given link depends on several qualities of both communicating devices and the air and obstacles in between. The primary attributes affecting range are the data rate, protocol (Bluetooth Classic or Bluetooth Low Energy), transmission power, and receiver sensitivity, and the relative orientations and gains of both antennas. The effective range varies depending on propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than specified line-of-sight ranges of the Bluetooth products. Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device. In general, however, Class 1 devices have sensitivities similar to those of Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits. Bluetooth profile To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles. For example, The Headset Profile (HSP) connects headphones and earbuds to a cell phone or laptop. The Health Device Profile (HDP) can connect a cell phone to a digital thermometer or heart rate detector. The Video Distribution Profile (VDP) sends a video stream from a video camera to a TV screen or a recording device. Profiles are definitions of possible applications and specify general behaviors that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and to control the communication from the start. Adherence to profiles saves the time for transmitting the parameters anew before the bi-directional link becomes effective. There are a wide range of Bluetooth profiles that describe many different types of applications or use cases for devices. List of applications Wireless control and communication between a mobile phone and a handsfree headset. This was one of the earliest applications to become popular. Wireless control of audio and communication functions between a mobile phone and a Bluetooth compatible car stereo system (and sometimes between the SIM card and the car phone). Wireless communication between a smartphone and a smart lock for unlocking doors. Wireless control of and communication with iOS and Android device phones, tablets and portable wireless speakers. Wireless Bluetooth headset and intercom. Idiomatically, a headset is sometimes called "a Bluetooth". Wireless streaming of audio to headphones with or without communication capabilities. Wireless streaming of data collected by Bluetooth-enabled fitness devices to phone or PC. 
Wireless networking between PCs in a confined space and where little bandwidth is required. Wireless communication with PC input and output devices, the most common being the mouse, keyboard and printer. Transfer of files, contact details, calendar appointments, and reminders between devices with OBEX and sharing directories via FTP. Triggering the camera shutter of a smartphone using a Bluetooth controlled selfie stick. Replacement of previous wired RS-232 serial communications in test equipment, GPS receivers, medical equipment, bar code scanners, and traffic control devices. For controls where infrared was often used. For low bandwidth applications where higher USB bandwidth is not required and cable-free connection desired. Sending small advertisements from Bluetooth-enabled advertising hoardings to other, discoverable, Bluetooth devices. Wireless bridge between two Industrial Ethernet (e.g., PROFINET) networks. Game consoles have been using Bluetooth as a wireless communications protocol for peripherals since the seventh generation, including Nintendo's Wii and Sony's PlayStation 3 which use Bluetooth for their respective controllers. Dial-up internet access on personal computers or PDAs using a data-capable mobile phone as a wireless modem. Short-range transmission of health sensor data from medical devices to mobile phone, set-top box or dedicated telehealth devices. Allowing a DECT phone to ring and answer calls on behalf of a nearby mobile phone. Real-time location systems (RTLS) are used to track and identify the location of objects in real time using "Nodes" or "tags" attached to, or embedded in, the objects tracked, and "Readers" that receive and process the wireless signals from these tags to determine their locations. Personal security application on mobile phones for prevention of theft or loss of items. The protected item has a Bluetooth marker (e.g., a tag) that is in constant communication with the phone. If the connection is broken (the marker is out of range of the phone) then an alarm is raised. This can also be used as a man overboard alarm. Calgary, Alberta, Canada's Roads Traffic division uses data collected from travelers' Bluetooth devices to predict travel times and road congestion for motorists. Wireless transmission of audio (a more reliable alternative to FM transmitters) Live video streaming to the visual cortical implant device by Nabeel Fattah in Newcastle university 2017. Connection of motion controllers to a PC when using VR headsets Wireless connection between TVs and soundbars. Devices Bluetooth exists in numerous products such as telephones, speakers, tablets, media players, robotics systems, laptops, and game console equipment as well as some high definition headsets, modems, hearing aids and even watches. Bluetooth is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files). Bluetooth protocols simplify the discovery and setup of services between devices. Bluetooth devices can advertise all of the services they provide. This makes using services easier, because more of the security, network address and permission configuration can be automated than with many other network types. 
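The range discussion under "Bluetooth classes and power use" above can be made concrete with a free-space link-budget estimate. The path-loss formula is the standard free-space expression; the transmit powers (roughly 20 dBm for Class 1 and 4 dBm for Class 2) and the -90 dBm receiver sensitivity are assumed typical figures rather than values from this article, and real indoor range is far lower because of walls, fading and antenna losses.

import math

def free_space_range_m(tx_dbm, rx_sensitivity_dbm, freq_hz=2.44e9):
    # FSPL(dB) = 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c); solve for d
    budget_db = tx_dbm - rx_sensitivity_dbm
    fixed_db = 20 * math.log10(freq_hz) + 20 * math.log10(4 * math.pi / 3.0e8)
    return 10 ** ((budget_db - fixed_db) / 20)

for name, tx_dbm in [("Class 1 (~20 dBm)", 20), ("Class 2 (~4 dBm)", 4)]:
    print(name, round(free_space_range_m(tx_dbm, -90)), "m in idealized free space")

Such idealized estimates run to hundreds of metres or more even for low-power devices, which illustrates why specified line-of-sight figures and open-field results far exceed the ranges achieved indoors.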
Computer requirements A personal computer that does not have embedded Bluetooth can use a Bluetooth adapter that enables the PC to communicate with Bluetooth devices. While some desktop computers and most recent laptops come with a built-in Bluetooth radio, others require an external adapter, typically in the form of a small USB "dongle". Unlike its predecessor, IrDA, which requires a separate adapter for each device, Bluetooth lets multiple devices communicate with a computer over a single adapter. Operating system implementation For Microsoft platforms, Windows XP Service Pack 2 and SP3 releases work natively with Bluetooth v1.1, v2.0 and v2.0+EDR. Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft. Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 work with Bluetooth v2.1+EDR. Windows 7 works with Bluetooth v2.1+EDR and Extended Inquiry Response (EIR). The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP, DUN, HID, HCRP. The Windows XP stack can be replaced by a third party stack that supports more profiles or newer Bluetooth versions. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring that the Microsoft stack be replaced. Windows 8 and later support Bluetooth Low Energy (BLE). It is generally recommended to install the latest vendor driver and its associated stack to be able to use the Bluetooth device at its fullest extent. Apple products have worked with Bluetooth since Mac OS X v10.2, which was released in 2002. Linux has two popular Bluetooth stacks, BlueZ and Fluoride. The BlueZ stack is included with most Linux kernels and was originally developed by Qualcomm. Fluoride, earlier known as Bluedroid, is included in Android OS and was originally developed by Broadcom. There is also the Affix stack, developed by Nokia. It was once popular, but has not been updated since 2005. FreeBSD has included Bluetooth since its v5.0 release, implemented through netgraph. NetBSD has included Bluetooth since its v4.0 release. Its Bluetooth stack was ported to OpenBSD as well; however, OpenBSD later removed it as unmaintained. DragonFly BSD has had NetBSD's Bluetooth implementation since 1.11 (2008). A netgraph-based implementation from FreeBSD has also been available in the tree, possibly disabled until 2014-11-15, and may require more work. Specifications and features The specifications were formalized by the Bluetooth Special Interest Group (SIG) and formally announced on 20 May 1998. In 2014 it had a membership of over 30,000 companies worldwide. It was established by Ericsson, IBM, Intel, Nokia and Toshiba, and later joined by many other companies. All versions of the Bluetooth standards are backward-compatible with all earlier versions. The Bluetooth Core Specification Working Group (CSWG) produces mainly four kinds of specifications: the Bluetooth Core Specification, typically released every few years; Core Specification Addenda (CSA); Core Specification Supplements (CSS), which can be released more frequently than Addenda; and Errata, which are available with a Bluetooth SIG account. Bluetooth 1.0 and 1.0B Products were not interoperable. Anonymity was not possible, preventing certain services from using Bluetooth environments.
Bluetooth 1.1 Ratified as IEEE Standard 802.15.1–2002 Many errors found in the v1.0B specifications were fixed. Added possibility of non-encrypted channels. Received signal strength indicator (RSSI) Bluetooth 1.2 Major enhancements include: Faster connection and discovery Adaptive frequency-hopping spread spectrum (AFH), which improves resistance to radio frequency interference by avoiding the use of crowded frequencies in the hopping sequence Higher transmission speeds in practice than in v1.1, up to 721 kbit/s Extended Synchronous Connections (eSCO), which improve voice quality of audio links by allowing retransmissions of corrupted packets, and may optionally increase audio latency to provide better concurrent data transfer Host Controller Interface (HCI) operation with three-wire UART Ratified as IEEE Standard 802.15.1–2005 Introduced flow control and retransmission modes for L2CAP. Bluetooth 2.0 + EDR This version of the Bluetooth Core Specification was released before 2005. The main difference is the introduction of an Enhanced Data Rate (EDR) for faster data transfer. The data rate of EDR is 3Mbit/s, although the maximum data transfer rate (allowing for inter-packet time and acknowledgements) is 2.1Mbit/s. EDR uses a combination of GFSK and phase-shift keying modulation (PSK) with two variants, π/4-DQPSK and 8-DPSK. EDR can provide lower power consumption through a reduced duty cycle. The specification is published as Bluetooth v2.0 + EDR, which implies that EDR is an optional feature. Aside from EDR, the v2.0 specification contains other minor improvements, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet. Bluetooth 2.1 + EDR Bluetooth Core Specification version 2.1 + EDR was adopted by the Bluetooth SIG on 26 July 2007. The headline feature of v2.1 is secure simple pairing (SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security. Version 2.1 allows various other improvements, including extended inquiry response (EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection; and sniff subrating, which reduces the power consumption in low-power mode. Bluetooth 3.0 + HS Version 3.0 + HS of the Bluetooth Core Specification was adopted by the Bluetooth SIG on 21 April 2009. Bluetooth v3.0 + HS provides theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high data rate traffic is carried over a colocated 802.11 link. The main new feature is AMP (Alternative MAC/PHY), the addition of 802.11 as a high-speed transport. The high-speed part of the specification is not mandatory, and hence only devices that display the "+HS" logo actually support Bluetooth over 802.11 high-speed data transfer. A Bluetooth v3.0 device without the "+HS" suffix is only required to support features introduced in Core Specification version 3.0 or earlier Core Specification Addendum 1. L2CAP Enhanced modes Enhanced Retransmission Mode (ERTM) implements a reliable L2CAP channel, while Streaming Mode (SM) implements an unreliable channel with no retransmission or flow control. Introduced in Core Specification Addendum 1. Alternative MAC/PHY Enables the use of alternative MAC and PHYs for transporting Bluetooth profile data.
The Bluetooth radio is still used for device discovery, initial connection and profile configuration. However, when large quantities of data must be sent, the high-speed alternative MAC PHY 802.11 (typically associated with Wi-Fi) transports the data. This means that Bluetooth uses proven low power connection models when the system is idle, and the faster radio when it must send large quantities of data. AMP links require enhanced L2CAP modes. Unicast Connectionless Data Permits sending service data without establishing an explicit L2CAP channel. It is intended for use by applications that require low latency between user action and reconnection/transmission of data. This is only appropriate for small amounts of data. Enhanced Power Control Updates the power control feature to remove the open loop power control, and also to clarify ambiguities in power control introduced by the new modulation schemes added for EDR. Enhanced power control removes the ambiguities by specifying the behavior that is expected. The feature also adds closed loop power control, meaning RSSI filtering can start as the response is received. Additionally, a "go straight to maximum power" request has been introduced. This is expected to deal with the headset link loss issue typically observed when a user puts their phone into a pocket on the opposite side to the headset. Ultra-wideband The high-speed (AMP) feature of Bluetooth v3.0 was originally intended for UWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding, and ultimately UWB was omitted from the Core v3.0 specification. On 16 March 2009, the WiMedia Alliance announced it was entering into technology transfer agreements for the WiMedia Ultra-wideband (UWB) specifications. WiMedia has transferred all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG), Wireless USB Promoter Group and the USB Implementers Forum. After successful completion of the technology transfer, marketing, and related administrative items, the WiMedia Alliance ceased operations. In October 2009, the Bluetooth Special Interest Group suspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution. A small, but significant, number of former WiMedia members had not and would not sign up to the necessary agreements for the IP transfer. As of 2009, the Bluetooth SIG was in the process of evaluating other options for its longer-term roadmap. Bluetooth 4.0 The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart), which has since been adopted. It includes Classic Bluetooth, Bluetooth high speed and Bluetooth Low Energy (BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols. Bluetooth Low Energy, previously known as Wibree, is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by a coin cell. Chip designs allow for two types of implementation, dual-mode and single-mode, as well as enhanced versions of earlier designs. The provisional names Wibree and Bluetooth ULP (Ultra Low Power) were abandoned and the BLE name was used for a while.
In late 2011, new logos "Bluetooth Smart Ready" for hosts and "Bluetooth Smart" for sensors were introduced as the general-public face of BLE. Compared to Classic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. In terms of lengthening the battery life of Bluetooth devices, BLE represents a significant progression. In a single-mode implementation, only the low energy protocol stack is implemented. Dialog Semiconductor, STMicroelectronics, AMICCOM, CSR, Nordic Semiconductor and Texas Instruments have released single-mode Bluetooth Low Energy solutions. In a dual-mode implementation, Bluetooth Smart functionality is integrated into an existing Classic Bluetooth controller. The following semiconductor companies have announced the availability of chips meeting the standard: Qualcomm Atheros, CSR, Broadcom and Texas Instruments. The compliant architecture shares all of Classic Bluetooth's existing radio and functionality resulting in a negligible cost increase compared to Classic Bluetooth. Cost-reduced single-mode chips, which enable highly integrated and compact devices, feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost. General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well as the Generic Attribute Profile (GATT) and Security Manager (SM) services with AES Encryption. Core Specification Addendum 2 was unveiled in December 2011; it contains improvements to the audio Host Controller Interface and to the High Speed (802.11) Protocol Adaptation Layer. Core Specification Addendum 3 revision 2 has an adoption date of 24 July 2012. Core Specification Addendum 4 has an adoption date of 12 February 2013. Bluetooth 4.1 The Bluetooth SIG announced formal adoption of the Bluetooth v4.1 specification on 4 December 2013. This specification is an incremental software update to Bluetooth Specification v4.0, and not a hardware update. The update incorporates Bluetooth Core Specification Addenda (CSA 1, 2, 3 & 4) and adds new features that improve consumer usability. These include increased co-existence support for LTE, bulk data exchange rates—and aid developer innovation by allowing devices to support multiple roles simultaneously. New features of this specification include: Mobile wireless service coexistence signaling Train nudging and generalized interlaced scanning Low Duty Cycle Directed Advertising L2CAP connection-oriented and dedicated channels with credit-based flow control Dual Mode and Topology LE Link Layer Topology 802.11n PAL Audio architecture updates for Wide Band Speech Fast data advertising interval Limited discovery time Some features were already available in a Core Specification Addendum (CSA) before the release of v4.1. Bluetooth 4.2 Released on 2 December 2014, it introduces features for the Internet of things.
The major areas of improvement are: Bluetooth Low Energy Secure Connection with Data Packet Length Extension to improve the cryptographic protocol Link Layer Privacy with Extended Scanner Filter Policies to improve data security Internet Protocol Support Profile (IPSP) version 6 ready for Bluetooth smart devices to support the Internet of things and home automation Older Bluetooth hardware may receive 4.2 features such as Data Packet Length Extension and improved privacy via firmware updates. Bluetooth 5 The Bluetooth SIG released Bluetooth 5 on 6 December 2016. Its new features are mainly focused on new Internet of Things technology. Sony was the first to announce Bluetooth 5.0 support with its Xperia XZ Premium in Feb 2017 during the Mobile World Congress 2017. The Samsung Galaxy S8 launched with Bluetooth 5 support in April 2017. In September 2017, the iPhone 8, 8 Plus and iPhone X launched with Bluetooth 5 support as well. Apple also integrated Bluetooth 5 in its new HomePod offering released on 9 February 2018. Marketing drops the point number; so that it is just "Bluetooth 5" (unlike Bluetooth 4.0); the change is for the sake of "Simplifying our marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market." Bluetooth 5 provides, for BLE, options that can double the data rate (2Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate. The increase in transmissions could be important for Internet of Things devices, where many nodes connect throughout a whole house. Bluetooth 5 increases capacity of connectionless services such as location-relevant navigation of low-energy Bluetooth connections. The major areas of improvement are: Slot Availability Mask (SAM) 2 Mbit/s PHY for LE Long Range High Duty Cycle Non-Connectable Advertising LE Advertising Extensions LE Channel Selection Algorithm #2 Features added in CSA5 – integrated in v5.0: Higher Output Power The following features were removed in this version of the specification: Park State Bluetooth 5.1 The Bluetooth SIG presented Bluetooth 5.1 on 21 January 2019. The major areas of improvement are: Angle of arrival (AoA) and Angle of Departure (AoD) which are used for locating and tracking of devices Advertising Channel Index GATT caching Minor Enhancements batch 1: HCI support for debug keys in LE Secure Connections Sleep clock accuracy update mechanism ADI field in scan response data Interaction between and Flow Specification Block Host channel classification for secondary advertising Allow the SID to appear in scan response reports Specify the behavior when rules are violated Periodic Advertising Sync Transfer Features added in Core Specification Addendum (CSA) 6 – integrated in v5.1: Models Mesh-based model hierarchy The following features were removed in this version of the specification: Unit keys Bluetooth 5.2 On 31 December 2019, the Bluetooth SIG published the Bluetooth Core Specification version 5.2. The new specification adds new features: Enhanced Attribute Protocol (EATT), an improved version of the Attribute Protocol (ATT) LE Power Control LE Isochronous Channels LE Audio that is built on top of the new 5.2 features. BT LE Audio was announced in January 2020 at CES by the Bluetooth SIG. Compared to regular Bluetooth Audio, Bluetooth Low Energy Audio makes lower battery consumption possible and creates a standardized way of transmitting audio over BT LE. 
Bluetooth LE Audio also allows one-to-many and many-to-one transmission, allowing multiple receivers from one source or one receiver for multiple sources, known as Auracast. It uses a new LC3 codec. BLE Audio will also add support for hearing aids. On 12 July 2022, the Bluetooth SIG announced the completion of Bluetooth LE Audio. The standard has a lower minimum latency claim of 20–30 ms vs Bluetooth Classic audio of 100–200 ms. At IFA in August 2023 Samsung announced support for Auracast through a software update for their Galaxy Buds2 Pro and two of their TV's. In October users started getting updates for the earbuds. Bluetooth 5.3 The Bluetooth SIG published the Bluetooth Core Specification version 5.3 on 13 July 2021. The feature enhancements of Bluetooth 5.3 are: Connection Subrating Periodic Advertisement Interval Channel Classification Enhancement Encryption key size control enhancements The following features were removed in this version of the specification: Alternate MAC and PHY (AMP) Extension Bluetooth 5.4 The Bluetooth SIG released the Bluetooth Core Specification version 5.4 on 7 February 2023. This new version adds the following features: Periodic Advertising with Responses (PAwR) Encrypted Advertising Data LE Security Levels Characteristic Advertising Coding Selection Bluetooth 6.0 The Bluetooth SIG released the Bluetooth Core Specification version 6.0 on 27 August 2024. This version adds the following features: Bluetooth Channel Sounding Decision-based advertising filtering Monitoring advertisers enhancement LL extended feature set Frame space update Technical information Architecture Software Seeking to extend the compatibility of Bluetooth devices, the devices that adhere to the standard use an interface called HCI (Host Controller Interface) between the host and the controller. High-level protocols such as the SDP (Protocol used to find other Bluetooth devices within the communication range, also responsible for detecting the function of devices in range), RFCOMM (Protocol used to emulate serial port connections) and TCS (Telephony control protocol) interact with the baseband controller through the L2CAP (Logical Link Control and Adaptation Protocol). The L2CAP protocol is responsible for the segmentation and reassembly of the packets. Hardware The hardware that makes up the Bluetooth device is made up of, logically, two parts; which may or may not be physically separate. A radio device, responsible for modulating and transmitting the signal; and a digital controller. The digital controller is likely a CPU, one of whose functions is to run a Link Controller; and interfaces with the host device; but some functions may be delegated to hardware. The Link Controller is responsible for the processing of the baseband and the management of ARQ and physical layer FEC protocols. In addition, it handles the transfer functions (both asynchronous and synchronous), audio coding (e.g. SBC (codec)) and data encryption. The CPU of the device is responsible for attending the instructions related to Bluetooth of the host device, in order to simplify its operation. To do this, the CPU runs software called Link Manager that has the function of communicating with other devices through the LMP protocol. A Bluetooth device is a short-range wireless device. Bluetooth devices are fabricated on RF CMOS integrated circuit (RF circuit) chips. 
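As a concrete illustration of the HCI boundary described under "Software" above, the sketch below frames a host-to-controller command in the common UART ("H4") style: a packet-type byte, a 16-bit little-endian opcode composed of an OGF and an OCF, and a parameter-length byte. The framing and the HCI_Reset opcode used here (OGF 0x03, OCF 0x0003) reflect the author's reading of the Core Specification and should be verified against it before use.

import struct

def hci_command(ogf, ocf, params=b""):
    # the opcode packs a 6-bit Opcode Group Field and a 10-bit Opcode Command Field
    opcode = (ogf << 10) | ocf
    # 0x01 marks an HCI command packet on a UART transport
    return struct.pack("<BHB", 0x01, opcode, len(params)) + params

reset = hci_command(0x03, 0x0003)     # HCI_Reset takes no parameters
print(reset.hex())                    # -> 01030c00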
Bluetooth protocol stack Bluetooth is defined as a layer protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols. Mandatory protocols for all Bluetooth stacks are LMP, L2CAP and SDP. In addition, devices that communicate with Bluetooth almost universally can use these protocols: HCI and RFCOMM. Link Manager The Link Manager (LM) is the system that manages establishing the connection between devices. It is responsible for the establishment, authentication and configuration of the link. The Link Manager locates other managers and communicates with them via the management protocol of the LMP link. To perform its function as a service provider, the LM uses the services included in the Link Controller (LC). The Link Manager Protocol basically consists of several PDUs (Protocol Data Units) that are sent from one device to another. The following is a list of supported services: Transmission and reception of data. Name request Request of the link addresses. Establishment of the connection. Authentication. Negotiation of link mode and connection establishment. Host Controller Interface The Host Controller Interface provides a command interface between the controller and the host. Logical Link Control and Adaptation Protocol The Logical Link Control and Adaptation Protocol (L2CAP) is used to multiplex multiple logical connections between two devices using different higher level protocols. Provides segmentation and reassembly of on-air packets. In Basic mode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the default MTU, and 48 bytes as the minimum mandatory supported MTU. In Retransmission and Flow Control modes, L2CAP can be configured either for isochronous data or reliable data per channel by performing retransmissions and CRC checks. Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification. These modes effectively deprecate original Retransmission and Flow Control modes: Enhanced Retransmission Mode (ERTM) This mode is an improved version of the original retransmission mode. This mode provides a reliable L2CAP channel. Streaming Mode (SM) This is a very simple mode, with no retransmission or flow control. This mode provides an unreliable L2CAP channel. Reliability in any of these modes is optionally and/or additionally guaranteed by the lower layer Bluetooth BDR/EDR air interface by configuring the number of retransmissions and flush timeout (time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer. Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links. Service Discovery Protocol The Service Discovery Protocol (SDP) allows a device to discover services offered by other devices, and their associated parameters. For example, when you use a mobile phone with a Bluetooth headset, the phone uses SDP to determine which Bluetooth profiles the headset can use (Headset Profile, Hands Free Profile (HFP), Advanced Audio Distribution Profile (A2DP) etc.) and the protocol multiplexer settings needed for the phone to connect to the headset using each of them. Each service is identified by a Universally unique identifier (UUID), with official services (Bluetooth profiles) assigned a short form UUID (16 bits rather than the full 128). Radio Frequency Communications Radio Frequency Communications (RFCOMM) is a cable replacement protocol used for generating a virtual serial data stream. 
RFCOMM provides for binary data transport and emulates EIA-232 (formerly RS-232) control signals over the Bluetooth baseband layer, i.e., it is a serial port emulation. RFCOMM provides a simple, reliable, data stream to the user, similar to TCP. It is used directly by many telephony related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth. Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM. Bluetooth Network Encapsulation Protocol The Bluetooth Network Encapsulation Protocol (BNEP) is used for transferring another protocol stack's data via an L2CAP channel. Its main purpose is the transmission of IP packets in the Personal Area Networking Profile. BNEP performs a similar function to SNAP in Wireless LAN. Audio/Video Control Transport Protocol The Audio/Video Control Transport Protocol (AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player. Audio/Video Distribution Transport Protocol The Audio/Video Distribution Transport Protocol (AVDTP) is used by the advanced audio distribution (A2DP) profile to stream music to stereo headsets over an L2CAP channel intended for video distribution profile in the Bluetooth transmission. Telephony Control Protocol The Telephony Control Protocol– Binary (TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices." TCS-BIN is only used by the cordless telephony profile, which failed to attract implementers. As such it is only of historical interest. Adopted protocols Adopted protocols are defined by other standards-making organizations and incorporated into Bluetooth's protocol stack, allowing Bluetooth to code protocols only when necessary. The adopted protocols include: Point-to-Point Protocol (PPP) Internet standard protocol for transporting IP datagrams over a point-to-point link. TCP/IP/UDP Foundation Protocols for TCP/IP protocol suite Object Exchange Protocol (OBEX) Session-layer protocol for the exchange of objects, providing a model for object and operation representation Wireless Application Environment/Wireless Application Protocol (WAE/WAP) WAE specifies an application framework for wireless devices and WAP is an open standard to provide mobile users access to telephony and information services. Baseband error correction Depending on packet type, individual packets may be protected by error correction, either 1/3 rate forward error correction (FEC) or 2/3 rate. In addition, packets with CRC will be retransmitted until acknowledged by automatic repeat request (ARQ). Setting up connections Any Bluetooth device in discoverable mode transmits the following information on demand: Device name Device class List of services Technical information (for example: device features, manufacturer, Bluetooth specification used, clock offset) Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries. 
However, if the device trying to connect knows the address of the device, it always responds to direct connection requests and transmits the information shown in the list above if requested. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device. Every device has a unique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices. Most cellular phones have the Bluetooth name set to the manufacturer and model of the phone by default. Most cellular phones and laptops show only the Bluetooth names and special programs are required to get additional information about remote devices. This can be confusing as, for example, there could be several cellular phones in range named T610 (see Bluejacking). Pairing and bonding Motivation Many services offered over Bluetooth can expose private data or let a connecting party control the Bluetooth device. Security reasons make it necessary to recognize specific devices, and thus enable control over which devices can connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection without user intervention (for example, as soon as in range). To resolve this conflict, Bluetooth uses a process called bonding, and a bond is generated through a process called pairing. The pairing process is triggered either by a specific request from a user to generate a bond (for example, the user explicitly requests to "Add a Bluetooth device"), or it is triggered automatically when connecting to a service where (for the first time) the identity of a device is required for security purposes. These two cases are referred to as dedicated bonding and general bonding respectively. Pairing often involves some level of user interaction. This user interaction confirms the identity of the devices. When pairing completes, a bond forms between the two devices, enabling those two devices to connect in the future without repeating the pairing process to confirm device identities. When desired, the user can remove the bonding relationship. Implementation During pairing, the two devices establish a relationship by creating a shared secret known as a link key. If both devices store the same link key, they are said to be paired or bonded. A device that wants to communicate only with a bonded device can cryptographically authenticate the identity of the other device, ensuring it is the same device it previously paired with. Once a link key is generated, an authenticated ACL link between the devices may be encrypted to protect exchanged data against eavesdropping. Users can delete link keys from either device, which removes the bond between the devices—so it is possible for one device to have a stored link key for a device it is no longer paired with. Bluetooth services generally require either encryption or authentication and as such require pairing before they let a remote device connect. 
Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases. Pairing mechanisms Pairing mechanisms changed significantly with the introduction of Secure Simple Pairing in Bluetooth v2.1. The following summarizes the pairing mechanisms: Legacy pairing: This is the only method available in Bluetooth v2.0 and before. Each device must enter a PIN code; pairing is only successful if both devices enter the same PIN code. Any 16-byte UTF-8 string may be used as a PIN code; however, not all devices may be capable of entering all possible PIN codes. Limited input devices: The obvious example of this class of device is a Bluetooth Hands-free headset, which generally have few inputs. These devices usually have a fixed PIN, for example "0000" or "1234", that are hard-coded into the device. Numeric input devices: Mobile phones are classic examples of these devices. They allow a user to enter a numeric value up to 16 digits in length. Alpha-numeric input devices: PCs and smartphones are examples of these devices. They allow a user to enter full UTF-8 text as a PIN code. If pairing with a less capable device the user must be aware of the input limitations on the other device; there is no mechanism available for a capable device to determine how it should limit the available input a user may use. Secure Simple Pairing (SSP): This is required by Bluetooth v2.1, although a Bluetooth v2.1 device may only use legacy pairing to interoperate with a v2.0 or earlier device. Secure Simple Pairing uses a form of public-key cryptography, and some types can help protect against man in the middle, or MITM attacks. SSP has the following authentication mechanisms: Just works: As the name implies, this method just works, with no user interaction. However, a device may prompt the user to confirm the pairing process. This method is typically used by headsets with minimal IO capabilities, and is more secure than the fixed PIN mechanism this limited set of devices uses for legacy pairing. This method provides no man-in-the-middle (MITM) protection. Numeric comparison: If both devices have a display, and at least one can accept a binary yes/no user input, they may use Numeric Comparison. This method displays a 6-digit numeric code on each device. The user should compare the numbers to ensure they are identical. If the comparison succeeds, the user(s) should confirm pairing on the device(s) that can accept an input. This method provides MITM protection, assuming the user confirms on both devices and actually performs the comparison properly. Passkey Entry: This method may be used between a device with a display and a device with numeric keypad entry (such as a keyboard), or two devices with numeric keypad entry. In the first case, the display presents a 6-digit numeric code to the user, who then enters the code on the keypad. In the second case, the user of each device enters the same 6-digit number. Both of these cases provide MITM protection. Out of band (OOB): This method uses an external means of communication, such as near-field communication (NFC) to exchange some information used in the pairing process. Pairing is completed using the Bluetooth radio, but requires information from the OOB mechanism. This provides only the level of MITM protection that is present in the OOB mechanism. 
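The number shown during numeric comparison is derived from the public keys and nonces the two devices exchange during Secure Simple Pairing; the exact derivation is defined in the Bluetooth Core Specification. The toy sketch below only illustrates the idea — hash the exchanged public values and random nonces, keep a few bits, and display six decimal digits — and deliberately simplifies the spec-defined function and its inputs.

```python
import hashlib
import secrets

def comparison_digits(pk_initiator: bytes, pk_responder: bytes,
                      nonce_a: bytes, nonce_b: bytes) -> str:
    """Toy illustration of SSP numeric comparison: hash the exchanged
    public values and nonces, keep 32 bits, and show six decimal digits.
    (The real derivation is specified in the Bluetooth Core Specification.)
    """
    digest = hashlib.sha256(pk_initiator + pk_responder + nonce_a + nonce_b).digest()
    value = int.from_bytes(digest[-4:], "big") % 1_000_000
    return f"{value:06d}"

# Both devices run the same calculation over the same exchanged data, so an
# active man-in-the-middle who substitutes its own keys makes the two
# displayed numbers disagree -- which is what the user's comparison detects.
pk_a, pk_b = secrets.token_bytes(32), secrets.token_bytes(32)
na, nb = secrets.token_bytes(16), secrets.token_bytes(16)
print(comparison_digits(pk_a, pk_b, na, nb))
```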
SSP is considered simple for the following reasons: In most cases, it does not require a user to generate a passkey. For use cases not requiring MITM protection, user interaction can be eliminated. For numeric comparison, MITM protection can be achieved with a simple equality comparison by the user. Using OOB with NFC enables pairing when devices simply get close, rather than requiring a lengthy discovery process. Security concerns Prior to Bluetooth v2.1, encryption is not required and can be turned off at any time. Moreover, the encryption key is only good for approximately 23.5 hours; using a single encryption key longer than this time allows simple XOR attacks to retrieve the encryption key. Turning off encryption is required for several normal operations, so it is problematic to detect if encryption is disabled for a valid reason or a security attack. Bluetooth v2.1 addresses this in the following ways: Encryption is required for all non-SDP (Service Discovery Protocol) connections A new Encryption Pause and Resume feature is used for all normal operations that require that encryption be disabled. This enables easy identification of normal operation from security attacks. The encryption key must be refreshed before it expires. Link keys may be stored on the device file system, not on the Bluetooth chip itself. Many Bluetooth chip manufacturers let link keys be stored on the device—however, if the device is removable, this means that the link key moves with the device. Security Overview Bluetooth implements confidentiality, authentication and key derivation with custom algorithms based on the SAFER+ block cipher. Bluetooth key generation is generally based on a Bluetooth PIN, which must be entered into both devices. This procedure might be modified if one of the devices has a fixed PIN (e.g., for headsets or similar devices with a restricted user interface). During pairing, an initialization key or master key is generated, using the E22 algorithm. The E0 stream cipher is used for encrypting packets, granting confidentiality, and is based on a shared cryptographic secret, namely a previously generated link key or master key. Those keys, used for subsequent encryption of data sent via the air interface, rely on the Bluetooth PIN, which has been entered into one or both devices. An overview of Bluetooth vulnerabilities exploits was published in 2007 by Andreas Becker. In September 2008, the National Institute of Standards and Technology (NIST) published a Guide to Bluetooth Security as a reference for organizations. It describes Bluetooth security capabilities and how to secure Bluetooth technologies effectively. While Bluetooth has its benefits, it is susceptible to denial-of-service attacks, eavesdropping, man-in-the-middle attacks, message modification, and resource misappropriation. Users and organizations must evaluate their acceptable level of risk and incorporate security into the lifecycle of Bluetooth devices. To help mitigate risks, included in the NIST document are security checklists with guidelines and recommendations for creating and maintaining secure Bluetooth piconets, headsets, and smart card readers. Bluetooth v2.1 – finalized in 2007 with consumer devices first appearing in 2009 – makes significant changes to Bluetooth's security, including pairing. See the pairing mechanisms section for more about these changes. Bluejacking Bluejacking is the sending of either a picture or a message from one user to an unsuspecting user through Bluetooth wireless technology. 
Common applications include short messages, e.g., "You've just been bluejacked!" Bluejacking does not involve the removal or alteration of any data from the device. Some form of DoS is also possible, even in modern devices, by sending unsolicited pairing requests in rapid succession; this becomes disruptive because most systems display a full screen notification for every connection request, interrupting every other activity, especially on less powerful devices. History of security concerns 2001–2004 In 2001, Jakobsson and Wetzel from Bell Laboratories discovered flaws in the Bluetooth pairing protocol and also pointed to vulnerabilities in the encryption scheme. In 2003, Ben and Adam Laurie from A.L. Digital Ltd. discovered that serious flaws in some poor implementations of Bluetooth security may lead to disclosure of personal data. In a subsequent experiment, Martin Herfurt from the trifinite.group was able to do a field-trial at the CeBIT fairgrounds, showing the importance of the problem to the world. A new attack called BlueBug was used for this experiment. In 2004 the first purported virus using Bluetooth to spread itself among mobile phones appeared on the Symbian OS. The virus was first described by Kaspersky Lab and requires users to confirm the installation of unknown software before it can propagate. The virus was written as a proof-of-concept by a group of virus writers known as "29A" and sent to anti-virus groups. Thus, it should be regarded as a potential (but not real) security threat to Bluetooth technology or Symbian OS since the virus has never spread outside of this system. In August 2004, a world-record-setting experiment (see also Bluetooth sniping) showed that the range of Class 2 Bluetooth radios could be extended to with directional antennas and signal amplifiers. This poses a potential security threat because it enables attackers to access vulnerable Bluetooth devices from a distance beyond expectation. The attacker must also be able to receive information from the victim to set up a connection. No attack can be made against a Bluetooth device unless the attacker knows its Bluetooth address and which channels to transmit on, although these can be deduced within a few minutes if the device is in use. 2005 In January 2005, a mobile malware worm known as Lasco surfaced. The worm began targeting mobile phones using Symbian OS (Series 60 platform) using Bluetooth enabled devices to replicate itself and spread to other devices. The worm is self-installing and begins once the mobile user approves the transfer of the file (Velasco.sis) from another device. Once installed, the worm begins looking for other Bluetooth enabled devices to infect. Additionally, the worm infects other .SIS files on the device, allowing replication to another device through the use of removable media (Secure Digital, CompactFlash, etc.). The worm can render the mobile device unstable. In April 2005, University of Cambridge security researchers published results of their actual implementation of passive attacks against the PIN-based pairing between commercial Bluetooth devices. They confirmed that attacks are practicably fast, and the Bluetooth symmetric key establishment method is vulnerable. To rectify this vulnerability, they designed an implementation that showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as mobile phones. 
In June 2005, Yaniv Shaked and Avishai Wool published a paper describing both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack allows a suitably equipped attacker to eavesdrop on communications and spoof if the attacker was present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method can be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter the PIN during the attack when the device prompts them to. Also, this active attack probably requires custom hardware, since most commercially available Bluetooth devices are not capable of the timing necessary. In August 2005, police in Cambridgeshire, England, issued warnings about thieves using Bluetooth enabled phones to track other devices left in cars. Police are advising users to ensure that any mobile networking connections are de-activated if laptops and other devices are left in this way. 2006 In April 2006, researchers from Secure Network and F-Secure published a report that warns of the large number of devices left in a visible state, and issued statistics on the spread of various Bluetooth services and the ease of spread of an eventual Bluetooth worm. In October 2006, at the Luxembourgish Hack.lu Security Conference, Kevin Finistere and Thierry Zoller demonstrated and released a remote root shell via Bluetooth on Mac OS X v10.3.9 and v10.4. They also demonstrated the first Bluetooth PIN and Linkkeys cracker, which is based on the research of Wool and Shaked. 2017 In April 2017, security researchers at Armis discovered multiple exploits in the Bluetooth software in various platforms, including Microsoft Windows, Linux, Apple iOS, and Google Android. These vulnerabilities are collectively called "BlueBorne". The exploits allow an attacker to connect to devices or systems without authentication and can give them "virtually full control over the device". Armis contacted Google, Microsoft, Apple, Samsung and Linux developers allowing them to patch their software before the coordinated announcement of the vulnerabilities on 12 September 2017. 2018 In July 2018, Lior Neumann and Eli Biham, researchers at the Technion – Israel Institute of Technology identified a security vulnerability in the latest Bluetooth pairing procedures: Secure Simple Pairing and LE Secure Connections. Also, in October 2018, Karim Lounis, a network security researcher at Queen's University, identified a security vulnerability, called CDV (Connection Dumping Vulnerability), on various Bluetooth devices that allows an attacker to tear down an existing Bluetooth connection and cause the deauthentication and disconnection of the involved devices. The researcher demonstrated the attack on various devices of different categories and from different manufacturers. 2019 In August 2019, security researchers at the Singapore University of Technology and Design, Helmholtz Center for Information Security, and University of Oxford discovered a vulnerability, called KNOB (Key Negotiation of Bluetooth) in the key negotiation that would "brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)". Google released an Android security patch on 5 August 2019, which removed this vulnerability. 
2023

In November 2023, researchers from Eurecom revealed a new class of attacks known as BLUFFS (Bluetooth Low Energy Forward and Future Secrecy Attacks). These 6 new attacks expand on and work in conjunction with the previously known KNOB and BIAS (Bluetooth Impersonation AttackS) attacks. While the previous KNOB and BIAS attacks allowed an attacker to decrypt and spoof Bluetooth packets within a session, BLUFFS extends this capability to all sessions generated by a device (including past, present, and future). All devices running Bluetooth versions 4.2 up to and including 5.4 are affected.

Health concerns

Bluetooth uses the radio frequency spectrum in the 2.402 GHz to 2.480 GHz range, which is non-ionizing radiation, of similar bandwidth to that used by wireless and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included by IARC in the possible carcinogen list. Maximum power output from a Bluetooth radio is 100 mW for Class 1, 2.5 mW for Class 2, and 1 mW for Class 3 devices. Even the maximum power output of Class 1 is a lower level than the lowest-powered mobile phones. UMTS and W-CDMA output 250 mW, GSM 1800/1900 outputs 1000 mW, and GSM 850/900 outputs 2000 mW.

Award programs

The Bluetooth Innovation World Cup, a marketing initiative of the Bluetooth Special Interest Group (SIG), was an international competition that encouraged the development of innovations for applications leveraging Bluetooth technology in sports, fitness and health care products. The competition aimed to stimulate new markets. The Bluetooth Innovation World Cup morphed into the Bluetooth Breakthrough Awards in 2013. Bluetooth SIG subsequently launched the Imagine Blue Award in 2016 at Bluetooth World. The Bluetooth Breakthrough Awards program highlights the most innovative products and applications available today, prototypes coming soon, and student-led projects in the making.
Technology
Networks
null
3755
https://en.wikipedia.org/wiki/Boron
Boron
Boron is a chemical element. It has the symbol B and atomic number 5. In its crystalline form it is a brittle, dark, lustrous metalloid; in its amorphous form it is a brown powder. As the lightest element of the boron group it has three valence electrons for forming covalent bonds, resulting in many compounds such as boric acid, the mineral sodium borate, and the ultra-hard crystals of boron carbide and boron nitride. Boron is synthesized entirely by cosmic ray spallation and supernovas and not by stellar nucleosynthesis, so it is a low-abundance element in the Solar System and in the Earth's crust. It constitutes about 0.001 percent by weight of Earth's crust. It is concentrated on Earth by the water-solubility of its more common naturally occurring compounds, the borate minerals. These are mined industrially as evaporites, such as borax and kernite. The largest known deposits are in Turkey, the largest producer of boron minerals. Elemental boron is found in small amounts in meteoroids, but chemically uncombined boron is not otherwise found naturally on Earth. Several allotropes exist: amorphous boron is a brown powder; crystalline boron is silvery to black, extremely hard (9.3 on the Mohs scale), and a poor electrical conductor at room temperature (1.5 × 10−6 Ω−1 cm−1 room temperature electrical conductivity). The primary use of the element itself is as boron filaments with applications similar to carbon fibers in some high-strength materials. Boron is primarily used in chemical compounds. About half of all production consumed globally is an additive in fiberglass for insulation and structural materials. The next leading use is in polymers and ceramics in high-strength, lightweight structural and heat-resistant materials. Borosilicate glass is desired for its greater strength and thermal shock resistance than ordinary soda lime glass. As sodium perborate, it is used as a bleach. A small amount is used as a dopant in semiconductors, and reagent intermediates in the synthesis of organic fine chemicals. A few boron-containing organic pharmaceuticals are used or are in study. Natural boron is composed of two stable isotopes, one of which (boron-10) has a number of uses as a neutron-capturing agent. Borates have low toxicity in mammals (similar to table salt) but are more toxic to arthropods and are occasionally used as insecticides. Boron-containing organic antibiotics are known. Although only traces are required, it is an essential plant nutrient. History The word boron was coined from borax, the mineral from which it was isolated, by analogy with carbon, which boron resembles chemically. Borax in its mineral form (then known as tincal) first saw use as a glaze, beginning in China circa 300 AD. Some crude borax traveled westward, and was apparently mentioned by the alchemist Jabir ibn Hayyan around 700 AD. Marco Polo brought some glazes back to Italy in the 13th century. Georgius Agricola, in around 1600, reported the use of borax as a flux in metallurgy. In 1777, boric acid was recognized in the hot springs (soffioni) near Florence, Italy, at which point it became known as sal sedativum, with ostensible medical benefits. The mineral was named sassolite, after Sasso Pisano in Italy. Sasso was the main source of European borax from 1827 to 1872, when American sources replaced it. Boron compounds were rarely used until the late 1800s when Francis Marion Smith's Pacific Coast Borax Company first popularized and produced them in volume at low cost. 
Boron was not recognized as an element until it was isolated by Sir Humphry Davy and by Joseph Louis Gay-Lussac and Louis Jacques Thénard. In 1808 Davy observed that electric current sent through a solution of borates produced a brown precipitate on one of the electrodes. In his subsequent experiments, he used potassium to reduce boric acid instead of electrolysis. He produced enough boron to confirm a new element and named it boracium. Gay-Lussac and Thénard used iron to reduce boric acid at high temperatures. By oxidizing boron with air, they showed that boric acid is its oxidation product. Jöns Jacob Berzelius identified it as an element in 1824. Pure boron was arguably first produced by the American chemist Ezekiel Weintraub in 1909. Characteristics of the element Isotopes Boron has two naturally occurring and stable isotopes, 11B (80.1%) and 10B (19.9%). The mass difference results in a wide range of δ11B values, which are defined as a fractional difference between the 11B and 10B and traditionally expressed in parts per thousand, in natural waters ranging from −16 to +59. There are 13 known isotopes of boron; the shortest-lived isotope is 7B which decays through proton emission and alpha decay with a half-life of 3.5×10−22 s. Isotopic fractionation of boron is controlled by the exchange reactions of the boron species B(OH)3 and [B(OH)4]−. Boron isotopes are also fractionated during mineral crystallization, during H2O phase changes in hydrothermal systems, and during hydrothermal alteration of rock. The latter effect results in preferential removal of the [10B(OH)4]− ion onto clays. It results in solutions enriched in 11B(OH)3 and therefore may be responsible for the large 11B enrichment in seawater relative to both oceanic crust and continental crust; this difference may act as an isotopic signature. The exotic 17B exhibits a nuclear halo, i.e. its radius is appreciably larger than that predicted by the liquid drop model. NMR spectroscopy Both 10B and 11B possess nuclear spin. The nuclear spin of 10B is 3 and that of 11B is . These isotopes are, therefore, of use in nuclear magnetic resonance spectroscopy; and spectrometers specially adapted to detecting the boron-11 nuclei are available commercially. The 10B and 11B nuclei also cause splitting in the resonances of attached nuclei. Allotropes Boron forms four major allotropes: α-rhombohedral and β-rhombohedral (α-R and β-R), γ-orthorhombic (γ) and β-tetragonal (β-T). All four phases are stable at ambient conditions, and β-rhombohedral is the most common and stable. An α-tetragonal phase also exists (α-T), but is very difficult to produce without significant contamination. Most of the phases are based on B12 icosahedra, but the γ phase can be described as a rocksalt-type arrangement of the icosahedra and B2 atomic pairs. It can be produced by compressing other boron phases to 12–20 GPa and heating to 1500–1800 °C; it remains stable after releasing the temperature and pressure. The β-T phase is produced at similar pressures, but higher temperatures of 1800–2200 °C. The α-T and β-T phases might coexist at ambient conditions, with the β-T phase being the more stable. Compressing boron above 160 GPa produces a boron phase with an as yet unknown structure, and this phase is a superconductor at temperatures below 6–12 K. Atomic structure Atomic boron is the lightest element having an electron in a p-orbital in its ground state. 
Its first three ionization energies are higher than those of the heavier group 13 elements, so boron tends to form covalent bonds rather than a simple B3+ cation.

Chemistry of the element

Preparation

Elemental boron is rare and poorly studied because the pure material is extremely difficult to prepare. Most studies of "boron" involve samples that contain small amounts of carbon. Very pure boron is produced with difficulty because of contamination by carbon or other elements that resist removal. Some early routes to elemental boron involved the reduction of boric oxide with metals such as magnesium or aluminium. However, the product was often contaminated with borides of those metals. Pure boron can be prepared by reducing volatile boron halides with hydrogen at high temperatures. Ultrapure boron for use in the semiconductor industry is produced by the decomposition of diborane at high temperatures and then further purified by the zone melting or Czochralski processes.

Reactions of the element

Crystalline boron is a hard, black material with a melting point above 2000 °C. It is chemically inert and resistant to attack by boiling hydrofluoric or hydrochloric acid. When finely divided, it is attacked slowly by hot concentrated hydrogen peroxide, hot concentrated nitric acid, hot sulfuric acid or a hot mixture of sulfuric and chromic acids. Since elemental boron is very rare, its chemical reactions are of little practical significance. The elemental form is not typically used as a precursor to compounds; instead, the extensive inventory of boron compounds is produced from borates. When exposed to air under normal conditions, a protective oxide or hydroxide layer forms on the surface of boron, which prevents further corrosion. The rate of oxidation of boron depends on the crystallinity, particle size, purity and temperature. At higher temperatures boron burns to form boron trioxide (see the worked mass calculation below):

4 B + 3 O2 → 2 B2O3

Chemical compounds

General trends

In some ways, boron is comparable to carbon in its capability to form stable covalently bonded molecular networks; even nominally disordered (amorphous) boron contains boron icosahedra, which are bonded randomly to each other without long-range order. In terms of chemical behavior, boron compounds resemble those of silicon. Aluminium, the heavier congener of boron, does not behave analogously to boron: it is far more electropositive, it is larger, and it tends not to form homoatomic Al-Al bonds. In the most familiar compounds, boron has the formal oxidation state III. These include the common oxides, sulfides, nitrides, and halides, as well as organic derivatives. Boron compounds often violate the octet rule.

Halides

Boron forms the complete series of trihalides, i.e. BX3 (X = F, Cl, Br, I). The trifluoride is produced by treating borate salts with hydrogen fluoride, while the trichloride is produced by carbothermic reduction of boron oxide in the presence of chlorine gas:

B2O3 + 3 C + 3 Cl2 → 2 BCl3 + 3 CO

The trihalides adopt trigonal planar structures, in contrast to the behavior of aluminium trihalides. All charge-neutral boron halides violate the octet rule, hence they typically are Lewis acidic. For example, boron trifluoride (BF3) combines eagerly with fluoride sources to give the tetrafluoroborate anion, BF4−. Boron trifluoride is used in the petrochemical industry as a catalyst. The halides react with water to form boric acid. Other boron halides include those with B-B bonding, such as B2F4 and B4Cl4.
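As a worked illustration of the combustion reaction 4 B + 3 O2 → 2 B2O3 given above, the sketch below converts a mass of boron into the mass of oxide it yields. The molar masses are rounded handbook values and complete oxidation is assumed.

```python
# Mass balance for 4 B + 3 O2 -> 2 B2O3 (approximate molar masses in g/mol).
M_B = 10.81
M_O = 16.00
M_B2O3 = 2 * M_B + 3 * M_O      # ~69.62 g/mol

def b2o3_from_boron(mass_b_g: float) -> float:
    """Grams of B2O3 produced by fully oxidizing mass_b_g grams of boron."""
    mol_b = mass_b_g / M_B          # moles of boron burned
    mol_b2o3 = mol_b / 2            # 4 mol B give 2 mol B2O3
    return mol_b2o3 * M_B2O3

print(round(b2o3_from_boron(10.0), 2))  # 10 g of boron -> roughly 32.2 g of B2O3
```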
Oxide derivatives Boron-containing minerals exclusively exist as oxides of B(III), often associated with other elements. More than one hundred borate minerals are known. These minerals resemble silicates in some respect, although it is often found not only in a tetrahedral coordination with oxygen, but also in a trigonal planar configuration. The borates can be subdivided into two classes, anhydrous and the far more common hydrates. The hydrates contain B-OH groups and sometimes water of crystallization. A typical motif is exemplified by the tetraborate anions of the common mineral borax. The formal negative charge of the tetrahedral borate center is balanced by sodium (Na+). Some idea of the complexity of these materials is provided by the inventory of zinc borates, which are common wood preservatives and fire retardants: 4ZnO·B2O3·H2O, ZnO·B2O3·1.12H2O, ZnO·B2O3·2H2O, 6ZnO·5B2O3·3H2O, 2ZnO·3B2O3·7H2O, 2ZnO·3B2O3·3H2O, 3ZnO·5B2O3·14H2O, and ZnO·5B2O3·4.5H2O. As illustrated by the preceding examples, borate anions tend to condense by formation of B-O-B bonds. Borosilicates, with B-O-Si, and borophosphates, with B-O-P linkages, are also well represented in both minerals and synthetic compounds. Related to the oxides are the alkoxides and boronic acids with the formula B(OR)3 and R2BOH, respectively. Boron forms a wide variety of such metal-organic compounds, some of which are used in the synthesis of pharmaceuticals. These developments, especially the Suzuki reaction, was recognized with the 2010 Nobel Prize in Chemistry to Akira Suzuki. Hydrides Boranes and borohydrides are neutral and anionic compounds of boron and hydrogen, respectively. Sodium borohydride is the progenitor of the boranes. Sodium borohydride is obtained by hydrogenation of trimethylborate: Sodium borohydride is a white, fairly air-stable salt. Sodium borohydride converts to diborane by treatment with boron trifluoride: Diborane is the dimer of the elusive parent called borane, BH3. Having a formula akin to ethane's (C2H6), diborane adopts a very different structure, featuring a pair of bridging H atoms. This unusual structure, which was deduced only in the 1940's, was an early indication of the many surprises provided by boron chemistry. Pyrolysis of diborane gives boron hydride clusters, such as pentaborane(9) and decaborane . A large number of anionic boron hydrides are also known, e.g. [B12H12]2−. In these cluster compounds, boron has a coordination number greater than four. The analysis of the bonding in these polyhedra clusters earned William N. Lipscomb the 1976 Nobel Prize in Chemistry for "studies on the structure of boranes illuminating problems of chemical bonding". Not only are their structures unusual, many of the boranes are extremely reactive. For example, a widely used procedure for pentaborane states that it will "spontaneously inflame or explode in air". Organoboron compounds A large number of organoboron compounds, species with B-C bonds, are known. Many organoboron compounds are produced from hydroboration, the addition of B-H bonds to bonds. Diborane is traditionally used for such reactions, as illustrated by the preparation of trioctylborane: This regiochemistry, i.e. the tendency of B to attach to the terminal carbon - is explained by the polarization of the bonds in boranes, which is indicated as Bδ+-Hδ-. Hydroboration opened the doors for many subsequent reactions, several of which are useful in the synthesis of complex organic compounds. 
The significance of these methods was recognized by the award of Nobel Prize in Chemistry to H. C. Brown in 1979. Even complicated boron hydrides, such as decaborane undergo hydroboration. Like the volatile boranes, the alkyl boranes ignite spontaneously in air. In the 1950s, several studies examined the use of boranes as energy-increasing "Zip fuel" additives for jet fuel. Triorganoboron(III) compounds are trigonal planar and exhibit weak Lewis acidity. The resulting adducts are tetrahedral. This behavior contrasts with that of triorganoaluminium compounds (see trimethylaluminium), which are tetrahedral with bridging alkyl groups. Nitrides The boron-nitrides follow the pattern of avoiding B-B and N-N bonds: only B-N bonding is observed generally. The boron nitrides exhibit structures analogous to various allotropes of carbon, including graphite, diamond, and nanotubes. This similarity reflects the fact that B and N have eight valence electrons as does a pair of carbon atoms. In cubic boron nitride (tradename Borazon), boron and nitrogen atoms are tetrahedral, just like carbon in diamond. Cubic boron nitride, among other applications, is used as an abrasive, as its hardness is comparable with that of diamond. Hexagonal boron nitride (h-BN) is the BN analogue of graphite, consisting of sheets of alternating B and N atoms. These sheets stack with boron and nitrogen in registry between the sheets. Graphite and h-BN have very different properties, although both are lubricants, as these planes slip past each other easily. However, h-BN is a relatively poor electrical and thermal conductor in the planar directions. Molecular analogues of boron nitrides are represented by borazine, (BH)3(NH)3. Carbides Boron carbide is a ceramic material. It is obtained by carbothermal reduction of B2O3in an electric furnace: 2 B2O3 + 7 C → B4C + 6 CO Boron carbide's structure is only approximately reflected in its formula of B4C, and it shows a clear depletion of carbon from this suggested stoichiometric ratio. This is due to its very complex structure. The substance can be seen with empirical formula B12C3 (i.e., with B12 dodecahedra being a motif), but with less carbon, as the suggested C3 units are replaced with C-B-C chains, and some smaller (B6) octahedra are present as well (see the boron carbide article for structural analysis). The repeating polymer plus semi-crystalline structure of boron carbide gives it great structural strength per weight. Borides Binary metal-boron compounds, the metal borides, contain only boron and a metal. They are metallic, very hard, with high melting points. TiB2, ZrB2, and HfB2 have melting points above 3000 °C. Some metal borides find specialized applications as hard materials for cutting tools. Occurrence Boron is rare in the Universe and solar system. The amount of boron formed in the Big Bang is negligible. Boron is not generated in the normal course of stellar nucleosynthesis and is destroyed in stellar interiors. In the high oxygen environment of the Earth's surface, boron is always found fully oxidized to borate. Boron does not appear on Earth in elemental form. Extremely small traces of elemental boron were detected in Lunar regolith. Although boron is a relatively rare element in the Earth's crust, representing only 0.001% of the crust mass, it can be highly concentrated by the action of water, in which many borates are soluble. It is found naturally combined in compounds such as borax and boric acid (sometimes found in volcanic spring waters). 
About a hundred borate minerals are known. Production Economically important sources of boron are the minerals colemanite, rasorite (kernite), ulexite and tincal. Together these constitute 90% of mined boron-containing ore. The largest global borax deposits known, many still untapped, are in Central and Western Turkey, including the provinces of Eskişehir, Kütahya and Balıkesir. Global proven boron mineral mining reserves exceed one billion metric tonnes, against a yearly production of about four million tonnes. Turkey and the United States are the largest producers of boron products. Turkey produces about half of the global yearly demand, through Eti Mine Works () a Turkish state-owned mining and chemicals company focusing on boron products. It holds a government monopoly on the mining of borate minerals in Turkey, which possesses 72% of the world's known deposits. In 2012, it held a 47% share of production of global borate minerals, ahead of its main competitor, Rio Tinto Group. Almost a quarter (23%) of global boron production comes from the Rio Tinto Borax Mine (also known as the U.S. Borax Boron Mine) near Boron, California. Market trend The average cost of crystalline elemental boron is US$5/g. Elemental boron is chiefly used in making boron fibers, where it is deposited by chemical vapor deposition on a tungsten core (see below). Boron fibers are used in lightweight composite applications, such as high strength tapes. This use is a very small fraction of total boron use. Boron is introduced into semiconductors as boron compounds, by ion implantation. Estimated global consumption of boron (almost entirely as boron compounds) was about 4 million tonnes of B2O3 in 2012. As compounds such as borax and kernite its cost was US$377/tonne in 2019. Increasing demand for boric acid has led a number of producers to invest in additional capacity. Turkey's state-owned Eti Mine Works opened a new boric acid plant with the production capacity of 100,000 tonnes per year at Emet in 2003. Rio Tinto Group increased the capacity of its boron plant from 260,000 tonnes per year in 2003 to 310,000 tonnes per year by May 2005, with plans to grow this to 366,000 tonnes per year in 2006. Chinese boron producers have been unable to meet rapidly growing demand for high quality borates. This has led to imports of sodium tetraborate (borax) growing by a hundredfold between 2000 and 2005 and boric acid imports increasing by 28% per year over the same period. The rise in global demand has been driven by high growth rates in glass fiber, fiberglass and borosilicate glassware production. A rapid increase in the manufacture of reinforcement-grade boron-containing fiberglass in Asia, has offset the development of boron-free reinforcement-grade fiberglass in Europe and the US. The recent rises in energy prices may lead to greater use of insulation-grade fiberglass, with consequent growth in the boron consumption. Roskill Consulting Group forecasts that world demand for boron will grow by 3.4% per year to reach 21 million tonnes by 2010. The highest growth in demand is expected to be in Asia where demand could rise by an average 5.7% per year. Applications Nearly all boron ore extracted from the Earth is refined as boric acid and sodium tetraborate pentahydrate. In the United States, 70% of the boron is used for the production of glass and ceramics. 
The major global industrial-scale use of boron compounds (about 46% of end-use) is in production of glass fiber for boron-containing insulating and structural fiberglasses, especially in Asia. Boron is added to the glass as borax pentahydrate or boron oxide, to influence the strength or fluxing qualities of the glass fibers. Another 10% of global boron production is for borosilicate glass as used in high strength glassware. About 15% of global boron is used in boron ceramics, including super-hard materials discussed below. Agriculture consumes 11% of global boron production, and bleaches and detergents about 6%. Boronated fiberglass Fiberglasses, a fiber reinforced polymer sometimes contain borosilicate, borax, or boron oxide, and is added to increase the strength of the glass. The highly boronated glasses, E-glass (named for "Electrical" use) are alumino-borosilicate glass. Another common high-boron glasses, C-glass, also has a high boron oxide content, used for glass staple fibers and insulation. D-glass, a borosilicate glass, named for its low dielectric constant. Because of the ubiquitous use of fiberglass in construction and insulation, boron-containing fiberglasses consume over half the global production of boron, and are the single largest commercial boron market. Borosilicate glass Borosilicate glass, which is typically 12–15% B2O3, 80% SiO2, and 2% Al2O3, has a low coefficient of thermal expansion, giving it a good resistance to thermal shock. Schott AG's "Duran" and Owens-Corning's trademarked Pyrex are two major brand names for this glass, used both in laboratory glassware and in consumer cookware and bakeware, chiefly for this resistance. Elemental boron fiber Boron fibers (boron filaments) are high-strength, lightweight materials that are used chiefly for advanced aerospace structures as a component of composite materials, as well as limited production consumer and sporting goods such as golf clubs and fishing rods. The fibers can be produced by chemical vapor deposition of boron on a tungsten filament. Boron fibers and sub-millimeter sized crystalline boron springs are produced by laser-assisted chemical vapor deposition. Translation of the focused laser beam allows production of even complex helical structures. Such structures show good mechanical properties (elastic modulus 450 GPa, fracture strain 3.7%, fracture stress 17 GPa) and can be applied as reinforcement of ceramics or in micromechanical systems. Boron carbide ceramic Boron carbide's ability to absorb neutrons without forming long-lived radionuclides (especially when doped with extra boron-10) makes the material attractive as an absorbent for neutron radiation arising in nuclear power plants. Nuclear applications of boron carbide include shielding, control rods and shut-down pellets. Within control rods, boron carbide is often powdered, to increase its surface area. High-hardness and abrasive compounds Boron carbide and cubic boron nitride powders are widely used as abrasives. Boron nitride is a material isoelectronic to carbon. Similar to carbon, it has both hexagonal (soft graphite-like h-BN) and cubic (hard, diamond-like c-BN) forms. h-BN is used as a high temperature component and lubricant. c-BN, also known under commercial name borazon, is a superior abrasive. Its hardness is only slightly smaller than, but its chemical stability is superior, to that of diamond. Heterodiamond (also called BCN) is another diamond-like boron compound. 
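Returning to the borosilicate glass discussed above, its resistance to thermal shock can be illustrated with the linear expansion relation ΔL ≈ α·L·ΔT. The coefficients used below (roughly 3.3×10−6 K−1 for borosilicate and about 9×10−6 K−1 for ordinary soda-lime glass) are typical handbook values, quoted here only for a rough comparison.

```python
# Rough comparison of thermal expansion: borosilicate vs soda-lime glass.
# Coefficients are typical handbook values (per kelvin), not exact for any product.
ALPHA = {"borosilicate": 3.3e-6, "soda-lime": 9.0e-6}

def expansion_mm(length_mm: float, delta_t_k: float, glass: str) -> float:
    """Approximate change in length for a piece of glass heated by delta_t_k."""
    return ALPHA[glass] * length_mm * delta_t_k

# A 300 mm dish taken from a 20 degC kitchen into a 220 degC oven (delta T = 200 K):
for glass in ALPHA:
    print(glass, round(expansion_mm(300, 200, glass), 3), "mm")
# borosilicate ~0.198 mm vs soda-lime ~0.54 mm: the smaller, more uniform strain
# is one reason borosilicate tolerates sudden temperature changes better.
```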
Metallurgy Boron is added to boron steels at the level of a few parts per million to increase hardenability. Higher percentages are added to steels used in the nuclear industry due to boron's neutron absorption ability. Boron can also increase the surface hardness of steels and alloys through boriding. Additionally metal borides are used for coating tools through chemical vapor deposition or physical vapor deposition. Implantation of boron ions into metals and alloys, through ion implantation or ion beam deposition, results in a spectacular increase in surface resistance and microhardness. Laser alloying has also been successfully used for the same purpose. These borides are an alternative to diamond coated tools, and their (treated) surfaces have similar properties to those of the bulk boride. For example, rhenium diboride can be produced at ambient pressures, but is rather expensive because of rhenium. The hardness of ReB2 exhibits considerable anisotropy because of its hexagonal layered structure. Its value is comparable to that of tungsten carbide, silicon carbide, titanium diboride or zirconium diboride. Similarly, AlMgB14 + TiB2 composites possess high hardness and wear resistance and are used in either bulk form or as coatings for components exposed to high temperatures and wear loads. Detergent formulations and bleaching agents Borax is used in various household laundry and cleaning products. It is also present in some tooth bleaching formulas. Sodium perborate serves as a source of active oxygen in many detergents, laundry detergents, cleaning products, and laundry bleaches. However, despite its name, "Borateem" laundry bleach no longer contains any boron compounds, using sodium percarbonate instead as a bleaching agent. Insecticides and antifungals Zinc borates and boric acid, popularized as fire retardants, are widely used as wood preservatives and insecticides. Boric acid is also used as a domestic insecticide. Semiconductors Boron is a useful dopant for such semiconductors as silicon, germanium, and silicon carbide. Having one fewer valence electron than the host atom, it donates a hole resulting in p-type conductivity. Traditional method of introducing boron into semiconductors is via its atomic diffusion at high temperatures. This process uses either solid (B2O3), liquid (BBr3), or gaseous boron sources (B2H6 or BF3). However, after the 1970s, it was mostly replaced by ion implantation, which relies mostly on BF3 as a boron source. Boron trichloride gas is also an important chemical in semiconductor industry, however, not for doping but rather for plasma etching of metals and their oxides. Triethylborane is also injected into vapor deposition reactors as a boron source. Examples are the plasma deposition of boron-containing hard carbon films, silicon nitride–boron nitride films, and for doping of diamond film with boron. Magnets Boron is a component of neodymium magnets (Nd2Fe14B), which are among the strongest type of permanent magnet. These magnets are found in a variety of electromechanical and electronic devices, such as magnetic resonance imaging (MRI) medical imaging systems, in compact and relatively small motors and actuators. As examples, computer HDDs (hard disk drives), CD (compact disk) and DVD (digital versatile disk) players rely on neodymium magnet motors to deliver intense rotary power in a remarkably compact package. In mobile phones 'Neo' magnets provide the magnetic field which allows tiny speakers to deliver appreciable audio power. 
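To give a feel for the numbers behind the doping discussion above, the sketch below estimates the room-temperature resistivity of lightly boron-doped silicon from ρ = 1/(q·p·μp). It assumes every acceptor contributes one hole and takes a hole mobility of roughly 450 cm²/(V·s), a typical value at this doping level; both assumptions are simplifications, and mobility falls as doping rises.

```python
# Rough resistivity estimate for boron-doped (p-type) silicon at room temperature.
Q = 1.602e-19          # elementary charge, C
MU_P = 450.0           # approximate hole mobility in lightly doped Si, cm^2/(V*s)

def resistivity_ohm_cm(acceptors_per_cm3: float) -> float:
    """rho = 1 / (q * p * mu_p), assuming full ionization so p ~ N_A."""
    return 1.0 / (Q * acceptors_per_cm3 * MU_P)

# Doping silicon with ~1e16 boron atoms per cm^3 (about 0.2 ppm of the
# ~5e22 silicon atoms per cm^3) already brings the resistivity down to ~1.4 ohm*cm.
print(round(resistivity_ohm_cm(1e16), 2))
```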
Shielding and neutron absorber in nuclear reactors

Boron shielding is used as a control for nuclear reactors, taking advantage of its high cross-section for neutron capture. In pressurized water reactors a variable concentration of boric acid in the cooling water is used as a neutron poison to compensate for the variable reactivity of the fuel. When new fuel rods are inserted, the concentration of boric acid is maximal, and it is reduced over the life of the fuel.

Other nonmedical uses

Because of its distinctive green flame, amorphous boron is used in pyrotechnic flares. Some anti-corrosion systems contain borax. Sodium borates are used as a flux for soldering silver and gold and, with ammonium chloride, for welding ferrous metals. They are also fire-retarding additives to plastics and rubber articles. Boric acid (also known as orthoboric acid), H3BO3, is used in the production of textile fiberglass and flat panel displays and in many PVAc- and PVOH-based adhesives. Triethylborane is a substance which ignites the JP-7 fuel of the Pratt & Whitney J58 turbojet/ramjet engines powering the Lockheed SR-71 Blackbird. It was also used to ignite the F-1 engines on the Saturn V rocket used by NASA's Apollo and Skylab programs from 1967 until 1973. Today SpaceX uses it to ignite the engines on its Falcon 9 rocket. Triethylborane is suitable for this because of its pyrophoric properties, especially the fact that it burns with a very high temperature. It is also an industrial initiator in radical reactions, where it is effective even at low temperatures. Borates are used as environmentally benign wood preservatives.

Pharmaceutical and biological applications

Boron plays a role in pharmaceutical and biological applications as it is found in various antibiotics produced by bacteria, such as boromycins, aplasmomycins, borophycins, and tartrolons. These antibiotics have shown inhibitory effects on the growth of certain bacteria, fungi, and protozoa. Boron is also being studied for its potential medicinal applications, including its incorporation into biologically active molecules for therapies like boron neutron capture therapy for brain tumors. Some boron-containing biomolecules may act as signaling molecules interacting with cell surfaces, suggesting a role in cellular communication. Boric acid has antiseptic, antifungal, and antiviral properties and, for these reasons, is applied as a water clarifier in swimming pool water treatment. Mild solutions of boric acid have been used as eye antiseptics. Boron appears as an active element in the organic pharmaceutical bortezomib (marketed as Velcade and Cytomib), a member of a new class of drugs called proteasome inhibitors, used for treating myeloma and one form of lymphoma (it is currently in experimental trials against other types of lymphoma). The boron atom in bortezomib binds the catalytic site of the 26S proteasome with high affinity and specificity. A number of potential boronated pharmaceuticals using boron-10 have been prepared for use in boron neutron capture therapy (BNCT). Some boron compounds show promise in treating arthritis, though none have as yet been generally approved for the purpose. Tavaborole (marketed as Kerydin) is an aminoacyl-tRNA synthetase inhibitor which is used to treat toenail fungus. It gained FDA approval in July 2014. Dioxaborolane chemistry enables radioactive fluoride (18F) labeling of antibodies or red blood cells, which allows for positron emission tomography (PET) imaging of cancer and hemorrhages, respectively.
A Human-Derived, Genetic, Positron-emitting and Fluorescent (HD-GPF) reporter system uses a human protein, PSMA and non-immunogenic, and a small molecule that is positron-emitting (boron bound 18F) and fluorescence for dual modality PET and fluorescent imaging of genome modified cells, e.g. cancer, CRISPR/Cas9, or CAR T-cells, in an entire mouse. The dual-modality small molecule targeting PSMA was tested in humans and found the location of primary and metastatic prostate cancer, fluorescence-guided removal of cancer, and detects single cancer cells in tissue margins. Research MgB2 Magnesium diboride (MgB2) is a superconductor with the transition temperature of 39 K. MgB2 wires are produced with the powder-in-tube process and applied in superconducting magnets. A project at CERN to make MgB2 cables has resulted in superconducting test cables able to carry 20,000 amperes for extremely high current distribution applications, such as the contemplated high luminosity version of the Large Hadron Collider. Commercial isotope enrichment Because of its high neutron cross-section, boron-10 is often used to control fission in nuclear reactors as a neutron-capturing substance. Several industrial-scale enrichment processes have been developed; however, only the fractionated vacuum distillation of the dimethyl ether adduct of boron trifluoride (DME-BF3) and column chromatography of borates are being used. Radiation-hardened semiconductors Cosmic radiation will produce secondary neutrons if it hits spacecraft structures. Those neutrons will be captured in 10B, if it is present in the spacecraft's semiconductors, producing a gamma ray, an alpha particle, and a lithium ion. Those resultant decay products may then irradiate nearby semiconductor "chip" structures, causing data loss (bit flipping, or single event upset). In radiation-hardened semiconductor designs, one countermeasure is to use depleted boron, which is greatly enriched in 11B and contains almost no 10B. This is useful because 11B is largely immune to radiation damage. Depleted boron is a byproduct of the nuclear industry (see above). Proton-boron fusion 11B is also a candidate as a fuel for aneutronic fusion. When struck by a proton with energy of about 500 keV, it produces three alpha particles and 8.7 MeV of energy. Most other fusion reactions involving hydrogen and helium produce penetrating neutron radiation, which weakens reactor structures and induces long-term radioactivity, thereby endangering operating personnel. The alpha particles from 11B fusion can be turned directly into electric power, and all radiation stops as soon as the reactor is turned off. Enriched boron (boron-10) The 10B isotope is useful for capturing thermal neutrons (see neutron cross section#Typical cross sections). The nuclear industry enriches natural boron to nearly pure 10B. The less-valuable by-product, depleted boron, is nearly pure 11B. Enriched boron or 10B is used in both radiation shielding and is the primary nuclide used in neutron capture therapy of cancer. In the latter ("boron neutron capture therapy" or BNCT), a compound containing 10B is incorporated into a pharmaceutical which is selectively taken up by a malignant tumor and tissues near it. The patient is then treated with a beam of low energy neutrons at a relatively low neutron radiation dose. 
The neutrons, however, trigger energetic and short-range secondary alpha particle and lithium-7 heavy ion radiation that are products of the boron-neutron nuclear reaction, and this ion radiation additionally bombards the tumor, especially from inside the tumor cells. In nuclear reactors, 10B is used for reactivity control and in emergency shutdown systems. It can serve either function in the form of borosilicate control rods or as boric acid. In pressurized water reactors, 10B boric acid is added to the reactor coolant after the plant is shut down for refueling. When the plant is started up again, the boric acid is slowly filtered out over many months as fissile material is used up and the fuel becomes less reactive. Nuclear fusion Boron has been investigated for possible applications in nuclear fusion research. It is commonly used for conditioning the walls in fusion reactors by depositing boron coatings on plasma-facing components and walls to reduce the release of hydrogen and impurities from the surfaces. It is also being used for the dissipation of energy in the fusion plasma boundary to suppress excessive energy bursts and heat fluxes to the walls. Neutron capture therapy In neutron capture therapy (BNCT) for malignant brain tumors, boron is researched to be used for selectively targeting and destroying tumor cells. The goal is to deliver higher concentrations of the non-radioactive boron isotope (10B) to the tumor cells than to the surrounding normal tissues. When these 10B-containing cells are irradiated with low-energy thermal neutrons, they undergo nuclear capture reactions, releasing high linear energy transfer (LET) particles such as α-particles and lithium-7 nuclei within a limited path length. These high-LET particles can destroy the adjacent tumor cells without causing significant harm to nearby normal cells. Boron acts as a selective agent due to its ability to absorb thermal neutrons and produce short-range physical effects primarily affecting the targeted tissue region. This binary approach allows for precise tumor cell killing while sparing healthy tissues. The effective delivery of boron involves administering boron compounds or carriers capable of accumulating selectively in tumor cells compared to surrounding tissue. BSH and BPA have been used clinically, but research continues to identify more optimal carriers. Accelerator-based neutron sources have also been developed recently as an alternative to reactor-based sources, leading to improved efficiency and enhanced clinical outcomes in BNCT. By employing the properties of boron isotopes and targeted irradiation techniques, BNCT offers a potential approach to treating malignant brain tumors by selectively killing cancer cells while minimizing the damage caused by traditional radiation therapies. BNCT has shown promising results in clinical trials for various other malignancies, including glioblastoma, head and neck cancer, cutaneous melanoma, hepatocellular carcinoma, lung cancer, and extramammary Paget's disease. The treatment involves a nuclear reaction between nonradioactive boron-10 isotope and low-energy thermal or high-energy epithermal neutrons to generate α particles and lithium nuclei that selectively destroy DNA in tumor cells. The primary challenge lies in developing efficient boron agents with higher content and specific targeting properties tailored for BNCT. 
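Two nuclear reactions mentioned in the preceding passages can be written out explicitly: the proton–boron-11 fusion reaction (with the 8.7 MeV figure quoted above) and the boron-10 neutron capture that underlies BNCT. The branch ratios and Q-values for the capture reaction are standard nuclear-data figures rather than numbers from this article, so treat them as approximate.

```latex
% Proton-boron-11 fusion (energy release as quoted in the text):
{}^{1}_{1}\mathrm{H} + {}^{11}_{5}\mathrm{B} \;\longrightarrow\; 3\,{}^{4}_{2}\mathrm{He} + 8.7\ \mathrm{MeV}

% Thermal-neutron capture on boron-10, the reaction behind BNCT
% (branch ratios and Q-values are standard nuclear-data figures, not from the article):
{}^{10}\mathrm{B} + n_{\mathrm{th}} \longrightarrow {}^{7}\mathrm{Li} + \alpha + 2.79\ \mathrm{MeV} \qquad (\approx 6\%)
{}^{10}\mathrm{B} + n_{\mathrm{th}} \longrightarrow {}^{7}\mathrm{Li}^{*} + \alpha + 2.31\ \mathrm{MeV},\quad
  {}^{7}\mathrm{Li}^{*} \rightarrow {}^{7}\mathrm{Li} + \gamma\,(0.48\ \mathrm{MeV}) \qquad (\approx 94\%)
```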
Integration of tumor-targeting strategies with BNCT could potentially establish it as a practical personalized treatment option for different types of cancers. Ongoing research explores new boron compounds, optimization strategies, theranostic agents, and radiobiological advances to overcome limitations and cost-effectively improve patient outcomes. Biological role Boron is an essential plant nutrient, required primarily for maintaining the integrity of cell walls. However, high soil concentrations of greater than 1.0 ppm lead to marginal and tip necrosis in leaves as well as poor overall growth performance. Levels as low as 0.8 ppm produce these same symptoms in plants that are particularly sensitive to boron in the soil. Nearly all plants, even those somewhat tolerant of soil boron, will show at least some symptoms of boron toxicity when soil boron content is greater than 1.8 ppm. When this content exceeds 2.0 ppm, few plants will perform well and some may not survive. Some boron-containing antibiotics exist in nature. The first one found was boromycin, isolated from streptomyces in the 1960s. Others are tartrolons, a group of antibiotics discovered in the 1990s from culture broth of the myxobacterium Sorangium cellulosum. In 2013, chemist and synthetic biologist Steve Benner suggested that the conditions on Mars three billion years ago were much more favorable to the stability of RNA and formation of oxygen-containing boron and molybdenum catalysts found in life. According to Benner's theory, primitive life, which is widely believed to have originated from RNA, first formed on Mars before migrating to Earth. In human health It is thought that boron plays several essential roles in animals, including humans, but the exact physiological role is poorly understood. Boron deficiency has only been clearly established in livestock; in humans, boron deficiency may affect bone mineral density, though it has been noted that additional research on the effects of bone health is necessary. Boron is not classified as an essential human nutrient because research has not established a clear biological function for it. The U.S. Food and Nutrition Board (FNB) found the existing data insufficient to derive a Recommended Dietary Allowance (RDA), Adequate Intake (AI), or Estimated Average Requirement (EAR) for boron and the U.S. Food and Drug Administration (FDA) has not established a daily value for boron for food and dietary supplement labeling purposes. While low boron status can be detrimental to health, probably increasing the risk of osteoporosis, poor immune function, and cognitive decline, high boron levels are associated with cell damage and toxicity. Still, studies suggest that boron may exert beneficial effects on reproduction and development, calcium metabolism, bone formation, brain function, insulin and energy substrate metabolism, immunity, and steroid hormone (including estrogen) and vitamin D function, among other functions. A small human trial published in 1987 reported on postmenopausal women first made boron deficient and then repleted with 3 mg/day. Boron supplementation markedly reduced urinary calcium excretion and elevated the serum concentrations of 17 beta-estradiol and testosterone. Environmental boron appears to be inversely correlated with arthritis. 
The exact mechanism by which boron exerts its physiological effects is not fully understood, but may involve interactions with adenosine monophosphate (ADP) and S-adenosyl methionine (SAM-e), two compounds involved in important cellular functions. Furthermore, boron appears to inhibit cyclic ADP-ribose, thereby affecting the release of calcium ions from the endoplasmic reticulum and affecting various biological processes. Some studies suggest that boron may reduce levels of inflammatory biomarkers. Congenital endothelial dystrophy type 2, a rare form of corneal dystrophy, is linked to mutations in SLC4A11 gene that encodes a transporter reportedly regulating the intracellular concentration of boron. In humans, boron is usually consumed with food that contains boron, such as fruits, leafy vegetables, and nuts. Foods that are particularly rich in boron include avocados, dried fruits such as raisins, peanuts, pecans, prune juice, grape juice, wine and chocolate powder. According to 2-day food records from the respondents to the Third National Health and Nutrition Examination Survey (NHANES III), adult dietary intake was recorded at 0.9 to 1.4 mg/day. Health issues and toxicity Elemental boron, boron oxide, boric acid, borates, and many organoboron compounds are relatively nontoxic to humans and animals (with toxicity similar to that of table salt). The LD50 (dose at which there is 50% mortality) for animals is about 6 g per kg of body weight. Substances with an LD50 above 2 g/kg are considered nontoxic. An intake of 4 g/day of boric acid was reported without incident, but more than this is considered toxic in more than a few doses. Intakes of more than 0.5 grams per day for 50 days cause minor digestive and other problems suggestive of toxicity. Boric acid is more toxic to insects than to mammals, and is routinely used as an insecticide. However, it has been used in neutron capture therapy alongside other boron compounds such as sodium borocaptate and boronophenylalanine with reported low toxicity levels. The boranes (boron hydrogen compounds) and similar gaseous compounds are quite poisonous. As usual, boron is not an element that is intrinsically poisonous, but the toxicity of these compounds depends on structure (for another example of this phenomenon, see phosphine). The boranes are also highly flammable and require special care when handling, some combinations of boranes and other compounds are highly explosive. Sodium borohydride presents a fire hazard owing to its reducing nature and the liberation of hydrogen on contact with acid. Boron halides are corrosive. Boron is necessary for plant growth, but an excess of boron is toxic to plants, and occurs particularly in acidic soil. It presents as a yellowing from the tip inwards of the oldest leaves and black spots in barley leaves, but it can be confused with other stresses such as magnesium deficiency in other plants.
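The dietary-intake and toxicity figures quoted in the last two paragraphs can be put side by side with a line of arithmetic. A minimal sketch using only numbers stated in the text; it is an illustration of scale, not a toxicological assessment.

```python
# Compare typical dietary boron intake (NHANES III figures quoted above) with the
# chronic-symptom threshold also quoted in the text (>0.5 g/day for 50 days).
dietary_intake_mg_per_day = (0.9, 1.4)     # adult intake range from the text
symptom_threshold_mg_per_day = 500.0       # 0.5 grams per day, from the text

for intake in dietary_intake_mg_per_day:
    print(f"{intake} mg/day is ~{symptom_threshold_mg_per_day / intake:.0f}x below "
          "the quoted chronic-symptom threshold")
```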
Physical sciences
Chemical elements_2
null
3756
https://en.wikipedia.org/wiki/Bromine
Bromine
Bromine is a chemical element; it has symbol Br and atomic number 35. It is a volatile red-brown liquid at room temperature that evaporates readily to form a similarly coloured vapour. Its properties are intermediate between those of chlorine and iodine. Isolated independently by two chemists, Carl Jacob Löwig (in 1825) and Antoine Jérôme Balard (in 1826), its name was derived , referring to its sharp and pungent smell. Elemental bromine is very reactive and thus does not occur as a free element in nature. Instead, it can be isolated from colourless soluble crystalline mineral halide salts analogous to table salt, a property it shares with the other halogens. While it is rather rare in the Earth's crust, the high solubility of the bromide ion (Br) has caused its accumulation in the oceans. Commercially the element is easily extracted from brine evaporation ponds, mostly in the United States and Israel. The mass of bromine in the oceans is about one three-hundredth that of chlorine. At standard conditions for temperature and pressure it is a liquid; the only other element that is liquid under these conditions is mercury. At high temperatures, organobromine compounds readily dissociate to yield free bromine atoms, a process that stops free radical chemical chain reactions. This effect makes organobromine compounds useful as fire retardants, and more than half the bromine produced worldwide each year is put to this purpose. The same property causes ultraviolet sunlight to dissociate volatile organobromine compounds in the atmosphere to yield free bromine atoms, causing ozone depletion. As a result, many organobromine compounds—such as the pesticide methyl bromide—are no longer used. Bromine compounds are still used in well drilling fluids, in photographic film, and as an intermediate in the manufacture of organic chemicals. Large amounts of bromide salts are toxic from the action of soluble bromide ions, causing bromism. However, bromine is beneficial for human eosinophils, and is an essential trace element for collagen development in all animals. Hundreds of known organobromine compounds are generated by terrestrial and marine plants and animals, and some serve important biological roles. As a pharmaceutical, the simple bromide ion (Br) has inhibitory effects on the central nervous system, and bromide salts were once a major medical sedative, before replacement by shorter-acting drugs. They retain niche uses as antiepileptics. History Bromine was discovered independently by two chemists, Carl Jacob Löwig and Antoine Balard, in 1825 and 1826, respectively. Löwig isolated bromine from a mineral water spring from his hometown Bad Kreuznach in 1825. Löwig used a solution of the mineral salt saturated with chlorine and extracted the bromine with diethyl ether. After evaporation of the ether, a brown liquid remained. With this liquid as a sample of his work he applied for a position in the laboratory of Leopold Gmelin in Heidelberg. The publication of the results was delayed and Balard published his results first. Balard found bromine chemicals in the ash of seaweed from the salt marshes of Montpellier. The seaweed was used to produce iodine, but also contained bromine. Balard distilled the bromine from a solution of seaweed ash saturated with chlorine. 
The properties of the resulting substance were intermediate between those of chlorine and iodine; thus he tried to prove that the substance was iodine monochloride (ICl), but after failing to do so he was sure that he had found a new element and named it muride, derived from the Latin word ("brine"). After the French chemists Louis Nicolas Vauquelin, Louis Jacques Thénard, and Joseph-Louis Gay-Lussac approved the experiments of the young pharmacist Balard, the results were presented at a lecture of the Académie des Sciences and published in Annales de Chimie et Physique. In his publication, Balard stated that he changed the name from muride to brôme on the proposal of M. Anglada. The name brôme (bromine) derives from the Greek (, "stench"). Other sources claim that the French chemist and physicist Joseph-Louis Gay-Lussac suggested the name brôme for the characteristic smell of the vapors. Bromine was not produced in large quantities until 1858, when the discovery of salt deposits in Stassfurt enabled its production as a by-product of potash. Apart from some minor medical applications, the first commercial use was the daguerreotype. In 1840, bromine was discovered to have some advantages over the previously used iodine vapor to create the light sensitive silver halide layer in daguerreotypy. By 1864, a 25% solution of liquid bromine in .75 molar aqueous potassium bromide was widely used to treat gangrene during the American Civil War, before the publications of Joseph Lister and Pasteur. Potassium bromide and sodium bromide were used as anticonvulsants and sedatives in the late 19th and early 20th centuries, but were gradually superseded by chloral hydrate and then by the barbiturates. In the early years of the First World War, bromine compounds such as xylyl bromide were used as poison gas. Properties Bromine is the third halogen, being a nonmetal in group 17 of the periodic table. Its properties are thus similar to those of fluorine, chlorine, and iodine, and tend to be intermediate between those of chlorine and iodine, the two neighbouring halogens. Bromine has the electron configuration [Ar]4s3d4p, with the seven electrons in the fourth and outermost shell acting as its valence electrons. Like all halogens, it is thus one electron short of a full octet, and is hence a strong oxidising agent, reacting with many elements in order to complete its outer shell. Corresponding to periodic trends, it is intermediate in electronegativity between chlorine and iodine (F: 3.98, Cl: 3.16, Br: 2.96, I: 2.66), and is less reactive than chlorine and more reactive than iodine. It is also a weaker oxidising agent than chlorine, but a stronger one than iodine. Conversely, the bromide ion is a weaker reducing agent than iodide, but a stronger one than chloride. These similarities led to chlorine, bromine, and iodine together being classified as one of the original triads of Johann Wolfgang Döbereiner, whose work foreshadowed the periodic law for chemical elements. It is intermediate in atomic radius between chlorine and iodine, and this leads to many of its atomic properties being similarly intermediate in value between chlorine and iodine, such as first ionisation energy, electron affinity, enthalpy of dissociation of the X molecule (X = Cl, Br, I), ionic radius, and X–X bond length. The volatility of bromine accentuates its very penetrating, choking, and unpleasant odour. 
All four stable halogens experience intermolecular van der Waals forces of attraction, and their strength increases together with the number of electrons among all homonuclear diatomic halogen molecules. Thus, the melting and boiling points of bromine are intermediate between those of chlorine and iodine. As a result of the increasing molecular weight of the halogens down the group, the density and heats of fusion and vaporisation of bromine are again intermediate between those of chlorine and iodine, although all their heats of vaporisation are fairly low (leading to high volatility) thanks to their diatomic molecular structure. The halogens darken in colour as the group is descended: fluorine is a very pale yellow gas, chlorine is greenish-yellow, and bromine is a reddish-brown volatile liquid that freezes at −7.2 °C and boils at 58.8 °C. (Iodine is a shiny black solid.) This trend occurs because the wavelengths of visible light absorbed by the halogens increase down the group. Specifically, the colour of a halogen, such as bromine, results from the electron transition between the highest occupied antibonding π molecular orbital and the lowest vacant antibonding σ molecular orbital. The colour fades at low temperatures so that solid bromine at −195 °C is pale yellow. Liquid bromine is infrared-transparent. Like solid chlorine and iodine, solid bromine crystallises in the orthorhombic crystal system, in a layered arrangement of Br molecules. The Br–Br distance is 227 pm (close to the gaseous Br–Br distance of 228 pm) and the Br···Br distance between molecules is 331 pm within a layer and 399 pm between layers (compare the van der Waals radius of bromine, 195 pm). This structure means that bromine is a very poor conductor of electricity, with a conductivity of around 5 × 10 Ω cm just below the melting point, although this is higher than the essentially undetectable conductivity of chlorine. At a pressure of 55 GPa (roughly 540,000 times atmospheric pressure) bromine undergoes an insulator-to-metal transition. At 75 GPa it changes to a face-centered orthorhombic structure. At 100 GPa it changes to a body centered orthorhombic monatomic form. Isotopes Bromine has two stable isotopes, Br and Br. These are its only two natural isotopes, with Br making up 51% of natural bromine and Br making up the remaining 49%. Both have nuclear spin 3/2− and thus may be used for nuclear magnetic resonance, although Br is more favourable. The relatively 1:1 distribution of the two isotopes in nature is helpful in identification of bromine containing compounds using mass spectroscopy. Other bromine isotopes are all radioactive, with half-lives too short to occur in nature. Of these, the most important are Br (t = 17.7 min), Br (t = 4.421 h), and Br (t = 35.28 h), which may be produced from the neutron activation of natural bromine. The most stable bromine radioisotope is Br (t = 57.04 h). The primary decay mode of isotopes lighter than Br is electron capture to isotopes of selenium; that of isotopes heavier than Br is beta decay to isotopes of krypton; and Br may decay by either mode to stable Se or Kr. Br isotopes from 87Br and heavier undergo beta decay with neutron emission and are of practical importance because they are fission products. Chemistry and compounds Bromine is intermediate in reactivity between chlorine and iodine, and is one of the most reactive elements. 
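Returning to the isotope figures earlier in this passage: the near-equal split between the two stable isotopes fixes bromine's average atomic mass. A minimal sketch; the isotopic masses (about 78.918 u and 80.916 u) are standard values not taken from the article, and the 51:49 split is the article's rounded figure.

```python
# Weighted average atomic mass of natural bromine from its two stable isotopes.
isotopes = {
    "Br-79": (78.918, 0.51),   # (atomic mass in u, approximate natural abundance)
    "Br-81": (80.916, 0.49),
}

avg_mass = sum(mass * frac for mass, frac in isotopes.values())
print(f"average atomic mass ~ {avg_mass:.2f} u")   # close to the accepted ~79.90 u
```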
Bond energies to bromine tend to be lower than those to chlorine but higher than those to iodine, and bromine is a weaker oxidising agent than chlorine but a stronger one than iodine. This can be seen from the standard electrode potentials of the X/X couples (F, +2.866 V; Cl, +1.395 V; Br, +1.087 V; I, +0.615 V; At, approximately +0.3 V). Bromination often leads to higher oxidation states than iodination but lower or equal oxidation states to chlorination. Bromine tends to react with compounds including M–M, M–H, or M–C bonds to form M–Br bonds. Hydrogen bromide The simplest compound of bromine is hydrogen bromide, HBr. It is mainly used in the production of inorganic bromides and alkyl bromides, and as a catalyst for many reactions in organic chemistry. Industrially, it is mainly produced by the reaction of hydrogen gas with bromine gas at 200–400 °C with a platinum catalyst. However, reduction of bromine with red phosphorus is a more practical way to produce hydrogen bromide in the laboratory: 2 P + 6 HO + 3 Br → 6 HBr + 2 HPO HPO + HO + Br → 2 HBr + HPO At room temperature, hydrogen bromide is a colourless gas, like all the hydrogen halides apart from hydrogen fluoride, since hydrogen cannot form strong hydrogen bonds to the large and only mildly electronegative bromine atom; however, weak hydrogen bonding is present in solid crystalline hydrogen bromide at low temperatures, similar to the hydrogen fluoride structure, before disorder begins to prevail as the temperature is raised. Aqueous hydrogen bromide is known as hydrobromic acid, which is a strong acid (pK = −9) because the hydrogen bonds to bromine are too weak to inhibit dissociation. The HBr/HO system also involves many hydrates HBr·nHO for n = 1, 2, 3, 4, and 6, which are essentially salts of bromine anions and hydronium cations. Hydrobromic acid forms an azeotrope with boiling point 124.3 °C at 47.63 g HBr per 100 g solution; thus hydrobromic acid cannot be concentrated beyond this point by distillation. Unlike hydrogen fluoride, anhydrous liquid hydrogen bromide is difficult to work with as a solvent, because its boiling point is low, it has a small liquid range, its dielectric constant is low and it does not dissociate appreciably into HBr and ions – the latter, in any case, are much less stable than the bifluoride ions () due to the very weak hydrogen bonding between hydrogen and bromine, though its salts with very large and weakly polarising cations such as Cs and (R = Me, Et, Bu) may still be isolated. Anhydrous hydrogen bromide is a poor solvent, only able to dissolve small molecular compounds such as nitrosyl chloride and phenol, or salts with very low lattice energies such as tetraalkylammonium halides. Other binary bromides Nearly all elements in the periodic table form binary bromides. The exceptions are decidedly in the minority and stem in each case from one of three causes: extreme inertness and reluctance to participate in chemical reactions (the noble gases, with the exception of xenon in the very unstable XeBr); extreme nuclear instability hampering chemical investigation before decay and transmutation (many of the heaviest elements beyond bismuth); and having an electronegativity higher than bromine's (oxygen, nitrogen, fluorine, and chlorine), so that the resultant binary compounds are formally not bromides but rather oxides, nitrides, fluorides, or chlorides of bromine. (Nonetheless, nitrogen tribromide is named as a bromide as it is analogous to the other nitrogen trihalides.) 
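The laboratory preparation of hydrogen bromide described above lost its subscripts in extraction. Rebalanced in the usual textbook form (a reconstruction, not a quotation), the two steps read:

```latex
% Laboratory preparation of hydrogen bromide from bromine and red phosphorus.
2\,\mathrm{P} + 6\,\mathrm{H_2O} + 3\,\mathrm{Br_2} \longrightarrow 6\,\mathrm{HBr} + 2\,\mathrm{H_3PO_3}
\mathrm{H_3PO_3} + \mathrm{H_2O} + \mathrm{Br_2} \longrightarrow 2\,\mathrm{HBr} + \mathrm{H_3PO_4}
```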
Bromination of metals with Br tends to yield lower oxidation states than chlorination with Cl when a variety of oxidation states is available. Bromides can be made by reaction of an element or its oxide, hydroxide, or carbonate with hydrobromic acid, and then dehydrated by mildly high temperatures combined with either low pressure or anhydrous hydrogen bromide gas. These methods work best when the bromide product is stable to hydrolysis; otherwise, the possibilities include high-temperature oxidative bromination of the element with bromine or hydrogen bromide, high-temperature bromination of a metal oxide or other halide by bromine, a volatile metal bromide, carbon tetrabromide, or an organic bromide. For example, niobium(V) oxide reacts with carbon tetrabromide at 370 °C to form niobium(V) bromide. Another method is halogen exchange in the presence of excess "halogenating reagent", for example: FeCl + BBr (excess) → FeBr + BCl When a lower bromide is wanted, either a higher halide may be reduced using hydrogen or a metal as a reducing agent, or thermal decomposition or disproportionation may be used, as follows: 3 WBr + Al 3 WBr + AlBr EuBr + H → EuBr + HBr 2 TaBr TaBr + TaBr Most metal bromides with the metal in low oxidation states (+1 to +3) are ionic. Nonmetals tend to form covalent molecular bromides, as do metals in high oxidation states from +3 and above. Both ionic and covalent bromides are known for metals in oxidation state +3 (e.g. scandium bromide is mostly ionic, but aluminium bromide is not). Silver bromide is very insoluble in water and is thus often used as a qualitative test for bromine. Bromine halides The halogens form many binary, diamagnetic interhalogen compounds with stoichiometries XY, XY, XY, and XY (where X is heavier than Y), and bromine is no exception. Bromine forms a monofluoride and monochloride, as well as a trifluoride and pentafluoride. Some cationic and anionic derivatives are also characterised, such as , , , , and . Apart from these, some pseudohalides are also known, such as cyanogen bromide (BrCN), bromine thiocyanate (BrSCN), and bromine azide (BrN). The pale-brown bromine monofluoride (BrF) is unstable at room temperature, disproportionating quickly and irreversibly into bromine, bromine trifluoride, and bromine pentafluoride. It thus cannot be obtained pure. It may be synthesised by the direct reaction of the elements, or by the comproportionation of bromine and bromine trifluoride at high temperatures. Bromine monochloride (BrCl), a red-brown gas, quite readily dissociates reversibly into bromine and chlorine at room temperature and thus also cannot be obtained pure, though it can be made by the reversible direct reaction of its elements in the gas phase or in carbon tetrachloride. Bromine monofluoride in ethanol readily leads to the monobromination of the aromatic compounds PhX (para-bromination occurs for X = Me, Bu, OMe, Br; meta-bromination occurs for the deactivating X = –COEt, –CHO, –NO); this is due to heterolytic fission of the Br–F bond, leading to rapid electrophilic bromination by Br. At room temperature, bromine trifluoride (BrF) is a straw-coloured liquid. It may be formed by directly fluorinating bromine at room temperature and is purified through distillation. It reacts violently with water and explodes on contact with flammable materials, but is a less powerful fluorinating reagent than chlorine trifluoride. 
It reacts vigorously with boron, carbon, silicon, arsenic, antimony, iodine, and sulfur to give fluorides, and will also convert most metals and many metal compounds to fluorides; as such, it is used to oxidise uranium to uranium hexafluoride in the nuclear power industry. Refractory oxides tend to be only partially fluorinated, but here the derivatives KBrF and BrFSbF remain reactive. Bromine trifluoride is a useful nonaqueous ionising solvent, since it readily dissociates to form and and thus conducts electricity. Bromine pentafluoride (BrF) was first synthesised in 1930. It is produced on a large scale by direct reaction of bromine with excess fluorine at temperatures higher than 150 °C, and on a small scale by the fluorination of potassium bromide at 25 °C. It also reacts violently with water and is a very strong fluorinating agent, although chlorine trifluoride is still stronger. Polybromine compounds Although dibromine is a strong oxidising agent with a high first ionisation energy, very strong oxidisers such as peroxydisulfuryl fluoride (SOF) can oxidise it to form the cherry-red cation. A few other bromine cations are known, namely the brown and dark brown . The tribromide anion, , has also been characterised; it is analogous to triiodide. Bromine oxides and oxoacids Bromine oxides are not as well-characterised as chlorine oxides or iodine oxides, as they are all fairly unstable: it was once thought that they could not exist at all. Dibromine monoxide is a dark-brown solid which, while reasonably stable at −60 °C, decomposes at its melting point of −17.5 °C; it is useful in bromination reactions and may be made from the low-temperature decomposition of bromine dioxide in a vacuum. It oxidises iodine to iodine pentoxide and benzene to 1,4-benzoquinone; in alkaline solutions, it gives the hypobromite anion. So-called "bromine dioxide", a pale yellow crystalline solid, may be better formulated as bromine perbromate, BrOBrO. It is thermally unstable above −40 °C, violently decomposing to its elements at 0 °C. Dibromine trioxide, syn-BrOBrO, is also known; it is the anhydride of hypobromous acid and bromic acid. It is an orange crystalline solid which decomposes above −40 °C; if heated too rapidly, it explodes around 0 °C. A few other unstable radical oxides are also known, as are some poorly characterised oxides, such as dibromine pentoxide, tribromine octoxide, and bromine trioxide. The four oxoacids, hypobromous acid (HOBr), bromous acid (HOBrO), bromic acid (HOBrO), and perbromic acid (HOBrO), are better studied due to their greater stability, though they are only so in aqueous solution. When bromine dissolves in aqueous solution, the following reactions occur: {| |- | Br + HO || HOBr + H + Br || K = 7.2 × 10 mol l |- | Br + 2 OH || OBr + HO + Br || K = 2 × 10 mol l |} Hypobromous acid is unstable to disproportionation. The hypobromite ions thus formed disproportionate readily to give bromide and bromate: {| |- | 3 BrO 2 Br + || K = 10 |} Bromous acids and bromites are very unstable, although the strontium and barium bromites are known. More important are the bromates, which are prepared on a small scale by oxidation of bromide by aqueous hypochlorite, and are strong oxidising agents. Unlike chlorates, which very slowly disproportionate to chloride and perchlorate, the bromate anion is stable to disproportionation in both acidic and aqueous solutions. Bromic acid is a strong acid. 
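Several equations in this and the preceding passage were flattened during extraction. The following are plausible reconstructions in standard notation; the coefficients and the exponents of the equilibrium constants follow common literature presentations and should be checked against the original source.

```latex
% Preparative routes to bromides from the preceding passage (subscripts restored):
\mathrm{FeCl_3} + \mathrm{BBr_3\ (excess)} \longrightarrow \mathrm{FeBr_3} + \mathrm{BCl_3}
3\,\mathrm{WBr_5} + \mathrm{Al} \longrightarrow 3\,\mathrm{WBr_4} + \mathrm{AlBr_3}
2\,\mathrm{EuBr_3} + \mathrm{H_2} \longrightarrow 2\,\mathrm{EuBr_2} + 2\,\mathrm{HBr}
2\,\mathrm{TaBr_4} \longrightarrow \mathrm{TaBr_3} + \mathrm{TaBr_5}

% Bromine in neutral and alkaline water, and hypobromite disproportionation:
\mathrm{Br_2} + \mathrm{H_2O} \rightleftharpoons \mathrm{HOBr} + \mathrm{H^+} + \mathrm{Br^-}
    \qquad K = 7.2 \times 10^{-9}\ \mathrm{mol^2\,l^{-2}}
\mathrm{Br_2} + 2\,\mathrm{OH^-} \rightleftharpoons \mathrm{OBr^-} + \mathrm{H_2O} + \mathrm{Br^-}
    \qquad K = 2 \times 10^{8}\ \mathrm{mol^{-1}\,l}
3\,\mathrm{BrO^-} \longrightarrow 2\,\mathrm{Br^-} + \mathrm{BrO_3^-}
    \qquad K \approx 10^{15}
```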
Bromides and bromates may comproportionate to bromine as follows: + 5 Br + 6 H → 3 Br + 3 HO There were many failed attempts to obtain perbromates and perbromic acid, leading to some rationalisations as to why they should not exist, until 1968 when the anion was first synthesised from the radioactive beta decay of unstable . Today, perbromates are produced by the oxidation of alkaline bromate solutions by fluorine gas. Excess bromate and fluoride are precipitated as silver bromate and calcium fluoride, and the perbromic acid solution may be purified. The perbromate ion is fairly inert at room temperature but is thermodynamically extremely oxidising, with extremely strong oxidising agents needed to produce it, such as fluorine or xenon difluoride. The Br–O bond in is fairly weak, which corresponds to the general reluctance of the 4p elements arsenic, selenium, and bromine to attain their group oxidation state, as they come after the scandide contraction characterised by the poor shielding afforded by the radial-nodeless 3d orbitals. Organobromine compounds Like the other carbon–halogen bonds, the C–Br bond is a common functional group that forms part of core organic chemistry. Formally, compounds with this functional group may be considered organic derivatives of the bromide anion. Due to the difference of electronegativity between bromine (2.96) and carbon (2.55), the carbon atom in a C–Br bond is electron-deficient and thus electrophilic. The reactivity of organobromine compounds resembles but is intermediate between the reactivity of organochlorine and organoiodine compounds. For many applications, organobromides represent a compromise of reactivity and cost. Organobromides are typically produced by additive or substitutive bromination of other organic precursors. Bromine itself can be used, but due to its toxicity and volatility, safer brominating reagents are normally used, such as N-bromosuccinimide. The principal reactions for organobromides include dehydrobromination, Grignard reactions, reductive coupling, and nucleophilic substitution. Organobromides are the most common organohalides in nature, even though the concentration of bromide is only 0.3% of that for chloride in sea water, because of the easy oxidation of bromide to the equivalent of Br, a potent electrophile. The enzyme bromoperoxidase catalyzes this reaction. The oceans are estimated to release 1–2 million tons of bromoform and 56,000 tons of bromomethane annually. An old qualitative test for the presence of the alkene functional group is that alkenes turn brown aqueous bromine solutions colourless, forming a bromohydrin with some of the dibromoalkane also produced. The reaction passes through a short-lived strongly electrophilic bromonium intermediate. This is an example of a halogen addition reaction. Occurrence and production Bromine is significantly less abundant in the crust than fluorine or chlorine, comprising only 2.5 parts per million of the Earth's crustal rocks, and then only as bromide salts. It is the 46th most abundant element in Earth's crust. It is significantly more abundant in the oceans, resulting from long-term leaching. There, it makes up 65 parts per million, corresponding to a ratio of about one bromine atom for every 660 chlorine atoms. Salt lakes and brine wells may have higher bromine concentrations: for example, the Dead Sea contains 0.4% bromide ions. It is from these sources that bromine extraction is mostly economically feasible. Bromine is the tenth most abundant element in seawater. 
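The comproportionation written at the start of this passage lost its leading bromate ion; restored and balanced (a reconstruction, not a quotation) it reads:

```latex
% Comproportionation of bromate and bromide to elemental bromine in acid.
\mathrm{BrO_3^-} + 5\,\mathrm{Br^-} + 6\,\mathrm{H^+} \longrightarrow 3\,\mathrm{Br_2} + 3\,\mathrm{H_2O}
```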
The main sources of bromine production are Israel and Jordan. The element is liberated by halogen exchange, using chlorine gas to oxidise Br to Br. This is then removed with a blast of steam or air, and is then condensed and purified. Today, bromine is transported in large-capacity metal drums or lead-lined tanks that can hold hundreds of kilograms or even tonnes of bromine. The bromine industry is about one-hundredth the size of the chlorine industry. Laboratory production is unnecessary because bromine is commercially available and has a long shelf life. Applications A wide variety of organobromine compounds are used in industry. Some are prepared from bromine and others are prepared from hydrogen bromide, which is obtained by burning hydrogen in bromine. Flame retardants Brominated flame retardants represent a commodity of growing importance, and make up the largest commercial use of bromine. When the brominated material burns, the flame retardant produces hydrobromic acid which interferes in the radical chain reaction of the oxidation reaction of the fire. The mechanism is that the highly reactive hydrogen radicals, oxygen radicals, and hydroxyl radicals react with hydrobromic acid to form less reactive bromine radicals (i.e., free bromine atoms). Bromine atoms may also react directly with other radicals to help terminate the free radical chain-reactions that characterise combustion. To make brominated polymers and plastics, bromine-containing compounds can be incorporated into the polymer during polymerisation. One method is to include a relatively small amount of brominated monomer during the polymerisation process. For example, vinyl bromide can be used in the production of polyethylene, polyvinyl chloride or polypropylene. Specific highly brominated molecules can also be added that participate in the polymerisation process. For example, tetrabromobisphenol A can be added to polyesters or epoxy resins, where it becomes part of the polymer. Epoxies used in printed circuit boards are normally made from such flame retardant resins, indicated by the FR in the abbreviation of the products (FR-4 and FR-2). In some cases, the bromine-containing compound may be added after polymerisation. For example, decabromodiphenyl ether can be added to the final polymers. A number of gaseous or highly volatile brominated halomethane compounds are non-toxic and make superior fire suppressant agents by this same mechanism, and are particularly effective in enclosed spaces such as submarines, airplanes, and spacecraft. However, they are expensive and their production and use has been greatly curtailed due to their effect as ozone-depleting agents. They are no longer used in routine fire extinguishers, but retain niche uses in aerospace and military automatic fire suppression applications. They include bromochloromethane (Halon 1011, CHBrCl), bromochlorodifluoromethane (Halon 1211, CBrClF), and bromotrifluoromethane (Halon 1301, CBrF). Other uses Silver bromide is used, either alone or in combination with silver chloride and silver iodide, as the light sensitive constituent of photographic emulsions. Ethylene bromide was an additive in gasolines containing lead anti-engine knocking agents. It scavenges lead by forming volatile lead bromide, which is exhausted from the engine. This application accounted for 77% of the bromine use in 1966 in the US. This application has declined since the 1970s due to environmental regulations (see below). 
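Two reactions implied in this passage can be written out: the chlorine displacement used to liberate bromine from brine, and the radical scavenging by hydrogen bromide that underlies the flame-retardant action described above. These are representative steps only; real flame chemistry involves many more elementary reactions.

```latex
% Liberation of bromine from brine by chlorine (the production step described above):
\mathrm{Cl_2} + 2\,\mathrm{Br^-} \longrightarrow 2\,\mathrm{Cl^-} + \mathrm{Br_2}

% Representative radical-scavenging steps behind the flame-retardant action of HBr:
\mathrm{H^{\bullet}} + \mathrm{HBr} \longrightarrow \mathrm{H_2} + \mathrm{Br^{\bullet}}
\mathrm{HO^{\bullet}} + \mathrm{HBr} \longrightarrow \mathrm{H_2O} + \mathrm{Br^{\bullet}}
```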
Brominated vegetable oil (BVO), a complex mixture of plant-derived triglycerides that have been reacted to contain atoms of the element bromine bonded to the molecules, is used primarily to help emulsify citrus-flavored soft drinks, preventing them from separating during distribution. Poisonous bromomethane was widely used as pesticide to fumigate soil and to fumigate housing, by the tenting method. Ethylene bromide was similarly used. These volatile organobromine compounds are all now regulated as ozone depletion agents. The Montreal Protocol on Substances that Deplete the Ozone Layer scheduled the phase out for the ozone depleting chemical by 2005, and organobromide pesticides are no longer used (in housing fumigation they have been replaced by such compounds as sulfuryl fluoride, which contain neither the chlorine or bromine organics which harm ozone). Before the Montreal protocol in 1991 (for example) an estimated 35,000 tonnes of the chemical were used to control nematodes, fungi, weeds and other soil-borne diseases. In pharmacology, inorganic bromide compounds, especially potassium bromide, were frequently used as general sedatives in the 19th and early 20th century. Bromides in the form of simple salts are still used as anticonvulsants in both veterinary and human medicine, although the latter use varies from country to country. For example, the U.S. Food and Drug Administration (FDA) does not approve bromide for the treatment of any disease, and sodium bromide was removed from over-the-counter sedative products like Bromo-Seltzer, in 1975. Commercially available organobromine pharmaceuticals include the vasodilator nicergoline, the sedative brotizolam, the anticancer agent pipobroman, and the antiseptic merbromin. Otherwise, organobromine compounds are rarely pharmaceutically useful, in contrast to the situation for organofluorine compounds. Several drugs are produced as the bromide (or equivalents, hydrobromide) salts, but in such cases bromide serves as an innocuous counterion of no biological significance. Other uses of organobromine compounds include high-density drilling fluids, dyes (such as Tyrian purple and the indicator bromothymol blue), and pharmaceuticals. Bromine itself, as well as some of its compounds, are used in water treatment, and is the precursor of a variety of inorganic compounds with an enormous number of applications (e.g. silver bromide for photography). Zinc–bromine batteries are hybrid flow batteries used for stationary electrical power backup and storage; from household scale to industrial scale. Bromine is used in cooling towers (in place of chlorine) for controlling bacteria, algae, fungi, and zebra mussels. Because it has similar antiseptic qualities to chlorine, bromine can be used in the same manner as chlorine as a disinfectant or antimicrobial in applications such as swimming pools. Bromine came into this use in the United States during World War II due to a predicted shortage of chlorine. However, bromine is usually not used outside for these applications due to it being relatively more expensive than chlorine and the absence of a stabilizer to protect it from the sun. For indoor pools, it can be a good option as it is effective at a wider pH range. It is also more stable in a heated pool or hot tub. Biological role and toxicity A 2014 study suggests that bromine (in the form of bromide ion) is a necessary cofactor in the biosynthesis of collagen IV, making the element essential to basement membrane architecture and tissue development in animals. 
Nevertheless, no clear deprivation symptoms or syndromes have been documented in mammals. In other biological functions, bromine may be non-essential but still beneficial when it takes the place of chlorine. For example, in the presence of hydrogen peroxide, HO, formed by the eosinophil, and either chloride, iodide, thiocyanate, or bromide ions, eosinophil peroxidase provides a potent mechanism by which eosinophils kill multicellular parasites (such as the nematode worms involved in filariasis) and some bacteria (such as tuberculosis bacteria). Eosinophil peroxidase is a haloperoxidase that preferentially uses bromide over chloride for this purpose, generating hypobromite (hypobromous acid), although the use of chloride is possible. α-Haloesters are generally thought of as highly reactive and consequently toxic intermediates in organic synthesis. Nevertheless, mammals, including humans, cats, and rats, appear to biosynthesize traces of an α-bromoester, 2-octyl 4-bromo-3-oxobutanoate, which is found in their cerebrospinal fluid and appears to play a yet unclarified role in inducing REM sleep. Neutrophil myeloperoxidase can use HO and Br to brominate deoxycytidine, which could result in DNA mutations. Marine organisms are the main source of organobromine compounds, and it is in these organisms that bromine is more firmly shown to be essential. More than 1600 such organobromine compounds were identified by 1999. The most abundant is methyl bromide (CHBr), of which an estimated 56,000 tonnes is produced by marine algae each year. The essential oil of the Hawaiian alga Asparagopsis taxiformis consists of 80% bromoform. Most of such organobromine compounds in the sea are made by the action of a unique algal enzyme, vanadium bromoperoxidase. The bromide anion is not very toxic: a normal daily intake is 2 to 8 milligrams. However, high levels of bromide chronically impair the membrane of neurons, which progressively impairs neuronal transmission, leading to toxicity, known as bromism. Bromide has an elimination half-life of 9 to 12 days, which can lead to excessive accumulation. Doses of 0.5 to 1 gram per day of bromide can lead to bromism. Historically, the therapeutic dose of bromide is about 3 to 5 grams of bromide, thus explaining why chronic toxicity (bromism) was once so common. While significant and sometimes serious disturbances occur to neurologic, psychiatric, dermatological, and gastrointestinal functions, death from bromism is rare. Bromism is caused by a neurotoxic effect on the brain which results in somnolence, psychosis, seizures and delirium. Elemental bromine (Br) is toxic and causes chemical burns on human flesh. Inhaling bromine gas results in similar irritation of the respiratory tract, causing coughing, choking, shortness of breath, and death if inhaled in large enough amounts. Chronic exposure may lead to frequent bronchial infections and a general deterioration of health. As a strong oxidising agent, bromine is incompatible with most organic and inorganic compounds. Caution is required when transporting bromine; it is commonly carried in steel tanks lined with lead, supported by strong metal frames. The Occupational Safety and Health Administration (OSHA) of the United States has set a permissible exposure limit (PEL) for bromine at a time-weighted average (TWA) of 0.1 ppm. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of TWA 0.1 ppm and a short-term limit of 0.3 ppm. 
The exposure to bromine immediately dangerous to life and health (IDLH) is 3 ppm. Bromine is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities.
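The exposure limits quoted above are volume fractions; converting them to mass concentrations is a short calculation. A minimal sketch assuming ideal-gas behaviour at 25 °C (molar volume ≈ 24.45 L/mol) and a Br2 molar mass of ≈ 159.8 g/mol; these constants are standard values, not figures from the article.

```python
# Convert a gas-phase exposure limit from ppm (by volume) to mg/m^3.
def ppm_to_mg_per_m3(ppm, molar_mass_g, molar_volume_l=24.45):
    """mg/m^3 = ppm * molar mass / molar volume (ideal gas at 25 degC, 1 atm)."""
    return ppm * molar_mass_g / molar_volume_l

M_BR2 = 159.8   # g/mol, molecular bromine
for label, limit_ppm in (("OSHA PEL (TWA)", 0.1), ("NIOSH short-term", 0.3), ("IDLH", 3.0)):
    print(f"{label}: {limit_ppm} ppm ~ {ppm_to_mg_per_m3(limit_ppm, M_BR2):.2f} mg/m^3")
```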
Physical sciences
Chemical elements_2
null
3757
https://en.wikipedia.org/wiki/Barium
Barium
Barium is a chemical element; it has symbol Ba and atomic number 56. It is the fifth element in group 2 and is a soft, silvery alkaline earth metal. Because of its high chemical reactivity, barium is never found in nature as a free element. The most common minerals of barium are barite (barium sulfate, BaSO4) and witherite (barium carbonate, BaCO3). The name barium originates from the alchemical derivative "baryta", from Greek (), meaning 'heavy'. Baric is the adjectival form of barium. Barium was identified as a new element in 1772, but not reduced to a metal until 1808 with the advent of electrolysis. Barium has few industrial applications. Historically, it was used as a getter for vacuum tubes and in oxide form as the emissive coating on indirectly heated cathodes. It is a component of YBCO (high-temperature superconductors) and electroceramics, and is added to steel and cast iron to reduce the size of carbon grains within the microstructure. Barium compounds are added to fireworks to impart a green color. Barium sulfate is used as an insoluble additive to oil well drilling fluid. In a purer form it is used as X-ray radiocontrast agents for imaging the human gastrointestinal tract. Water-soluble barium compounds are poisonous and have been used as rodenticides. Characteristics Physical properties Barium is a soft, silvery-white metal, with a slight golden shade when ultrapure. The silvery-white color of barium metal rapidly vanishes upon oxidation in air yielding a dark gray layer containing the oxide. Barium has a medium specific weight and high electrical conductivity. Because barium is difficult to purify, many of its properties have not been accurately determined. At room temperature and pressure, barium metal adopts a body-centered cubic structure, with a barium–barium distance of 503 picometers, expanding with heating at a rate of approximately 1.8/°C. It is a soft metal with a Mohs hardness of 1.25. Its melting temperature of is intermediate between those of the lighter strontium () and heavier radium (); however, its boiling point of exceeds that of strontium (). The density (3.62 g/cm3) is again intermediate between those of strontium (2.36 g/cm3) and radium (≈5 g/cm3). Chemical reactivity Barium is chemically similar to magnesium, calcium, and strontium, but more reactive. Its compounds are almost invariably found in the +2 oxidation state. As expected for a highly electropositive metal, barium's reaction with chalcogens is highly exothermic (release energy). Barium reacts with atmospheric oxygen in air at room temperature. For this reason, metallic barium is often stored under oil or in an inert atmosphere. Reactions with other nonmetals, such as carbon, nitrogen, phosphorus, silicon, and hydrogen, proceed upon heating. Reactions with water and alcohols are also exothermic and release hydrogen gas: Ba + 2 ROH → Ba(OR)2 + H2↑ (R is an alkyl group or a hydrogen atom) Barium reacts with ammonia to form the electride [Ba(NH3)6](e−)2, which near room temperature gives the amide Ba(NH2)2. The metal is readily attacked by acids. Sulfuric acid is a notable exception because passivation stops the reaction by forming the insoluble barium sulfate on the surface. Barium combines with several other metals, including aluminium, zinc, lead, and tin, forming intermetallic phases and alloys. Compounds Barium salts are typically white when solid and colorless when dissolved. They are denser than the strontium or calcium analogs, except for the halides (see table; zinc is given for comparison). 
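The reaction with water quoted above (the R = H case of the general equation) allows a quick estimate of the hydrogen evolved. A minimal sketch; the molar mass of barium, the molar gas volume, and the sample size are standard or illustrative values, not taken from the article.

```python
# Hydrogen evolved when barium reacts with water: Ba + 2 H2O -> Ba(OH)2 + H2
M_BA = 137.33        # g/mol, barium
V_MOLAR = 22.4       # L/mol, ideal gas at 0 degC and 1 atm

mass_ba = 10.0                      # grams of barium (hypothetical sample)
mol_ba = mass_ba / M_BA
mol_h2 = mol_ba                     # 1 mol H2 per mol Ba by the stoichiometry above
print(f"{mass_ba} g Ba -> {mol_h2:.3f} mol H2 ~ {mol_h2 * V_MOLAR:.2f} L at STP")
```

The exothermic release of hydrogen is one reason the metal is stored under oil or an inert atmosphere, as noted above.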
Barium hydroxide ("baryta") was known to alchemists, who produced it by heating barium carbonate. Unlike calcium hydroxide, it absorbs very little CO2 in aqueous solutions and is therefore insensitive to atmospheric fluctuations. This property is used in calibrating pH equipment. Barium compounds burn with a green to pale green flame, which is an efficient test to detect a barium compound. The color results from spectral lines at 455.4, 493.4, 553.6, and 611.1 nm. Organobarium compounds are a growing field of knowledge: recently discovered are dialkylbariums and alkylhalobariums. Isotopes Barium found in the Earth's crust is a mixture of seven primordial nuclides, barium-130, 132, and 134 through 138. Barium-130 undergoes very slow radioactive decay to xenon-130 by double beta plus decay, with a half-life of (0.5–2.7)×1021 years (about 1011 times the age of the universe). Its abundance is ≈0.1% that of natural barium. Theoretically, barium-132 can similarly undergo double beta decay to xenon-132; this decay has not been detected. The radioactivity of these isotopes is so weak that they pose no danger to life. Of the stable isotopes, barium-138 composes 71.7% of all barium; other isotopes have decreasing abundance with decreasing mass number. In total, barium has 40 known isotopes, ranging in mass between 114 and 153. The most stable artificial radioisotope is barium-133 with a half-life of approximately 10.51 years. Five other isotopes have half-lives longer than a day. Barium also has 10 meta states, of which barium-133m1 is the most stable with a half-life of about 39 hours. History Alchemists in the early Middle Ages knew about some barium minerals. Smooth pebble-like stones of mineral baryte were found in volcanic rock near Bologna, Italy, and so were called "Bologna stones". Alchemists were attracted to them because after exposure to light they would glow for years. The phosphorescent properties of baryte heated with organics were described by V. Casciorolus in 1602. Carl Scheele determined that baryte contained a new element in 1772, but could not isolate barium, only barium oxide. Johan Gottlieb Gahn also isolated barium oxide two years later in similar studies. Oxidized barium was at first called "barote" by Guyton de Morveau, a name that was changed by Antoine Lavoisier to baryte (in French) or baryta (in Latin). Also in the 18th century, English mineralogist William Withering noted a heavy mineral in the lead mines of Cumberland, now known to be witherite. Barium was first isolated by electrolysis of molten barium salts in 1808 by Sir Humphry Davy in England. Davy, by analogy with calcium, named "barium" after baryta, with the "-ium" ending signifying a metallic element. Robert Bunsen and Augustus Matthiessen obtained pure barium by electrolysis of a molten mixture of barium chloride and ammonium chloride. The production of pure oxygen in the Brin process was a large-scale application of barium peroxide in the 1880s, before it was replaced by electrolysis and fractional distillation of liquefied air in the early 1900s. In this process barium oxide reacts at with air to form barium peroxide, which decomposes above by releasing oxygen: 2 BaO + O2 ⇌ 2 BaO2 Barium sulfate was first applied as a radiocontrast agent in X-ray imaging of the digestive system in 1908. Occurrence and production The abundance of barium is 0.0425% in the Earth's crust and 13 μg/L in sea water. The primary commercial source of barium is baryte (also called barytes or heavy spar), a barium sulfate mineral. 
with deposits in many parts of the world. Another commercial source, far less important than baryte, is witherite, barium carbonate. The main deposits are located in Britain, Romania, and the former USSR. The baryte reserves are estimated between 0.7 and 2 billion tonnes. The highest production, 8.3 million tonnes, was achieved in 1981, but only 7–8% was used for barium metal or compounds. Baryte production has risen since the second half of the 1990s from 5.6 million tonnes in 1996 to 7.6 in 2005 and 7.8 in 2011. China accounts for more than 50% of this output, followed by India (14% in 2011), Morocco (8.3%), US (8.2%), Iran and Kazakhstan (2.6% each) and Turkey (2.5%). The mined ore is washed, crushed, classified, and separated from quartz. If the quartz penetrates too deeply into the ore, or the iron, zinc, or lead content is abnormally high, then froth flotation is used. The product is a 98% pure baryte (by mass); the purity should be no less than 95%, with a minimal content of iron and silicon dioxide. It is then reduced by carbon to barium sulfide: BaSO4 + 2 C → BaS + 2 CO2 The water-soluble barium sulfide is the starting point for other compounds: treating BaS with oxygen produces the sulfate, with nitric acid the nitrate, with aqueous carbon dioxide the carbonate, and so on. The nitrate can be thermally decomposed to yield the oxide. Barium metal is produced by reduction with aluminium at . The intermetallic compound BaAl4 is produced first: 3 BaO + 14 Al → 3 BaAl4 + Al2O3 BaAl4 is an intermediate reacted with barium oxide to produce the metal. Note that not all barium is reduced. 8 BaO + BaAl4 → Ba↓ + 7 BaAl2O4 The remaining barium oxide reacts with the formed aluminium oxide: BaO + Al2O3 → BaAl2O4 and the overall reaction is 4 BaO + 2 Al → 3 Ba↓ + BaAl2O4 Barium vapor is condensed and packed into molds in an atmosphere of argon. This method is used commercially, yielding ultrapure barium. Commonly sold barium is about 99% pure, with main impurities being strontium and calcium (up to 0.8% and 0.25%) and other contaminants contributing less than 0.1%. A similar reaction with silicon at yields barium and barium metasilicate. Electrolysis is not used because barium readily dissolves in molten halides and the product is rather impure. Gemstone The barium mineral, benitoite (barium titanium silicate), occurs as a very rare blue fluorescent gemstone, and is the official state gem of California. Barium in seawater Barium exists in seawater as the Ba2+ ion with an average oceanic concentration of 109 nmol/kg. Barium also exists in the ocean as BaSO4, or barite. Barium has a nutrient-like profile with a residence time of 10,000 years. Barium shows a relatively consistent concentration in upper ocean seawater, excepting regions of high river inputs and regions with strong upwelling. There is little depletion of barium concentrations in the upper ocean for an ion with a nutrient-like profile, thus lateral mixing is important. Barium isotopic values show basin-scale balances instead of local or short-term processes. Applications Metal and alloys Barium, as a metal or when alloyed with aluminium, is used to remove unwanted gases (gettering) from vacuum tubes, such as TV picture tubes. Barium is suitable for this purpose because of its low vapor pressure and reactivity towards oxygen, nitrogen, carbon dioxide, and water; it can even partly remove noble gases by dissolving them in the crystal lattice. 
This application is gradually disappearing due to the rising popularity of the tubeless LCD, LED, and plasma sets. Other uses of elemental barium are minor and include an additive to silumin (aluminium–silicon alloys) that refines their structure, as well as bearing alloys; lead–tin soldering alloys – to increase the creep resistance; alloy with nickel for spark plugs; additive to steel and cast iron as an inoculant; alloys with calcium, manganese, silicon, and aluminium as high-grade steel deoxidizers. Barium sulfate and baryte Barium sulfate (the mineral baryte, BaSO4) is important to the petroleum industry as a drilling fluid in oil and gas wells. The precipitate of the compound (called "blanc fixe", from the French for "permanent white") is used in paints and varnishes; as a filler in ringing ink, plastics, and rubbers; as a paper coating pigment; and in nanoparticles, to improve physical properties of some polymers, such as epoxies. Barium sulfate has a low toxicity and relatively high density of ca. 4.5 g/cm3 (and thus opacity to X-rays). For this reason it is used as a radiocontrast agent in X-ray imaging of the digestive system ("barium meals" and "barium enemas"). Lithopone, a pigment that contains barium sulfate and zinc sulfide, is a permanent white with good covering power that does not darken when exposed to sulfides. Other barium compounds Other compounds of barium find only niche applications, limited by the toxicity of Ba2+ ions (barium carbonate is a rat poison), which is not a problem for the insoluble BaSO4. Barium oxide coating on the electrodes of fluorescent lamps facilitates the release of electrons. By its great atomic density, barium carbonate increases the refractive index and luster of glass and reduces leaks of X-rays from cathode-ray tubes (CRTs) TV sets. Barium, typically as barium nitrate imparts a yellow or "apple" green color to fireworks; for brilliant green barium chloride is used. Barium peroxide is a catalyst in the aluminothermic reaction (thermite) for welding rail tracks. It is also a green flare in tracer ammunition and a bleaching agent. Barium titanate is a promising electroceramic. Barium fluoride is used for optics in infrared applications because of its wide transparency range of 0.15–12 micrometers. YBCO was the first high-temperature superconductor cooled by liquid nitrogen, with a transition temperature of greater than the boiling point of nitrogen (). Ferrite, a type of sintered ceramic composed of iron oxide (Fe2O3) and barium oxide (BaO), is both electrically nonconductive and ferrimagnetic, and can be temporarily or permanently magnetized. Palaeoceanography The lateral mixing of barium is caused by water mass mixing and ocean circulation. Global ocean circulation reveals a strong correlation between dissolved barium and silicic acid. The large-scale ocean circulation combined with remineralization of barium show a similar correlation between dissolved barium and ocean alkalinity. Dissolved barium's correlation with silicic acid can be seen both vertically and spatially. Particulate barium shows a strong correlation with particulate organic carbon or POC. Barium is becoming more popular as a base for palaeoceanographic proxies. With both dissolved and particulate barium's links with silicic acid and POC, it can be used to determine historical variations in the biological pump, carbon cycle, and global climate. 
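The correlation between dissolved barium and silicic acid noted above is, in practice, summarized with a simple linear regression on paired seawater measurements. The following Python sketch fits such a line; the concentration values in it are invented, order-of-magnitude placeholders chosen only to illustrate the calculation, not real ocean data.

```python
# Minimal least-squares fit of dissolved barium against silicic acid.
# The numbers below are invented placeholders (Ba in nmol/kg, Si in umol/kg),
# not measured ocean data.
si_umol_kg = [5, 20, 60, 100, 140, 170]
ba_nmol_kg = [40, 55, 85, 110, 135, 150]

n = len(si_umol_kg)
mean_x = sum(si_umol_kg) / n
mean_y = sum(ba_nmol_kg) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(si_umol_kg, ba_nmol_kg))
         / sum((x - mean_x) ** 2 for x in si_umol_kg))
intercept = mean_y - slope * mean_x

print(f"dissolved Ba ~ {slope:.3f} * Si + {intercept:.1f}  (Ba in nmol/kg, Si in umol/kg)")
```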
The barium particulate barite (BaSO4), as one of many proxies, can be used to provide a host of historical information on processes in different oceanic settings (water column, sediments, and hydrothermal sites). In each setting there are differences in isotopic and elemental composition of the barite particulate. Barite in the water column, known as marine or pelagic barite, reveals information on seawater chemistry variation over time. Barite in sediments, known as diagenetic or cold seeps barite, gives information about sedimentary redox processes. Barite formed via hydrothermal activity at hydrothermal vents, known as hydrothermal barite, reveals alterations in the condition of the earth's crust around those vents. Toxicity Soluble barium compounds have LD50 near 10 mg/kg (oral rats). Symptoms include "convulsions... paralysis of the peripheral nerve system ... severe inflammation of the gastrointestinal tract". The insoluble sulfate is nontoxic and is not classified as a dangerous goods in transport regulations. Little is known about the long term effects of barium exposure. The US EPA considers it unlikely that barium is carcinogenic when consumed orally. Inhaled dust containing insoluble barium compounds can accumulate in the lungs, causing a benign condition called baritosis.
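As a numerical footnote to the production chemistry described earlier, the following Python sketch works through the stoichiometry of the overall aluminothermic reduction given above, 4 BaO + 2 Al → 3 Ba + BaAl2O4. The molar masses are rounded standard values and the 100 kg feed is an arbitrary example; the calculation assumes complete conversion, which real vacuum reduction does not achieve.

```python
# Stoichiometry sketch for the overall reaction: 4 BaO + 2 Al -> 3 Ba + BaAl2O4.
# Molar masses (g/mol) are rounded standard values, not figures from the text.
M_BA = 137.33
M_O = 16.00
M_AL = 26.98
M_BAO = M_BA + M_O

def barium_yield_kg(bao_kg: float) -> float:
    """Theoretical barium from a given mass of BaO, assuming complete conversion."""
    moles_bao = bao_kg * 1000 / M_BAO
    moles_ba = moles_bao * 3 / 4          # 3 mol Ba per 4 mol BaO
    return moles_ba * M_BA / 1000

def aluminium_required_kg(bao_kg: float) -> float:
    """Aluminium consumed by the same reaction (2 mol Al per 4 mol BaO)."""
    moles_bao = bao_kg * 1000 / M_BAO
    return moles_bao * 2 / 4 * M_AL / 1000

if __name__ == "__main__":
    bao = 100.0  # kg of barium oxide feed (illustrative value)
    print(f"{bao} kg BaO -> {barium_yield_kg(bao):.1f} kg Ba, "
          f"consuming {aluminium_required_kg(bao):.1f} kg Al")
```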
Berkelium
Berkelium is a synthetic chemical element; it has symbol Bk and atomic number 97. It is a member of the actinide and transuranium element series. It is named after the city of Berkeley, California, the location of the Lawrence Berkeley National Laboratory (then the University of California Radiation Laboratory) where it was discovered in December 1949. Berkelium was the fifth transuranium element discovered after neptunium, plutonium, curium and americium. The major isotope of berkelium, 249Bk, is synthesized in minute quantities in dedicated high-flux nuclear reactors, mainly at the Oak Ridge National Laboratory in Tennessee, United States, and at the Research Institute of Atomic Reactors in Dimitrovgrad, Russia. The longest-lived and second-most important isotope, 247Bk, can be synthesized via irradiation of 244Cm with high-energy alpha particles. Just over one gram of berkelium has been produced in the United States since 1967. There is no practical application of berkelium outside scientific research which is mostly directed at the synthesis of heavier transuranium elements and superheavy elements. A 22-milligram batch of berkelium-249 was prepared during a 250-day irradiation period and then purified for a further 90 days at Oak Ridge in 2009. This sample was used to synthesize the new element tennessine for the first time in 2009 at the Joint Institute for Nuclear Research, Russia, after it was bombarded with calcium-48 ions for 150 days. This was the culmination of the Russia–US collaboration on the synthesis of the heaviest elements on the periodic table. Berkelium is a soft, silvery-white, radioactive metal. The berkelium-249 isotope emits low-energy electrons and thus is relatively safe to handle. It decays with a half-life of 330 days to californium-249, which is a strong emitter of ionizing alpha particles. This gradual transformation is an important consideration when studying the properties of elemental berkelium and its chemical compounds, since the formation of californium brings not only chemical contamination, but also free-radical effects and self-heating from the emitted alpha particles. Characteristics Physical Berkelium is a soft, silvery-white, radioactive actinide metal. In the periodic table, it is located to the right of the actinide curium, to the left of the actinide californium and below the lanthanide terbium with which it shares many similarities in physical and chemical properties. Its density of 14.78 g/cm3 lies between those of curium (13.52 g/cm3) and californium (15.1 g/cm3), as does its melting point of 986 °C, below that of curium (1340 °C) but higher than that of californium (900 °C). Berkelium is relatively soft and has one of the lowest bulk moduli among the actinides, at about 20 GPa (2 Pa). ions shows two sharp fluorescence peaks at 652 nanometers (red light) and 742 nanometers (deep red – near-infrared) due to internal transitions at the f-electron shell. The relative intensity of these peaks depends on the excitation power and temperature of the sample. This emission can be observed, for example, after dispersing berkelium ions in a silicate glass, by melting the glass in presence of berkelium oxide or halide. Between 70 K and room temperature, berkelium behaves as a Curie–Weiss paramagnetic material with an effective magnetic moment of 9.69 Bohr magnetons (μB) and a Curie temperature of 101 K. This magnetic moment is almost equal to the theoretical value of 9.72 μB calculated within the simple atomic L-S coupling model. 
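The theoretical moment of 9.72 μB quoted above is what the Landé formula for a free ion gives in simple L-S coupling. The short Python sketch below reproduces that number; it assumes the 5f8 ground term 7F6 (S = 3, L = 3, J = 6), the same term as the lanthanide analogue terbium(III), which is an inference from the comparison with terbium rather than a value stated in the text.

```python
import math

def lande_g(S: float, L: float, J: float) -> float:
    """Lande g-factor for a free ion in L-S (Russell-Saunders) coupling."""
    return 1 + (J * (J + 1) + S * (S + 1) - L * (L + 1)) / (2 * J * (J + 1))

def effective_moment(S: float, L: float, J: float) -> float:
    """Effective magnetic moment in Bohr magnetons: g * sqrt(J(J+1))."""
    return lande_g(S, L, J) * math.sqrt(J * (J + 1))

# Assumed 5f^8 ground term 7F6 (S=3, L=3, J=6), by analogy with Tb(III).
print(f"mu_eff = {effective_moment(3, 3, 6):.2f} Bohr magnetons")  # prints ~9.72
```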
Upon cooling to about 34 K, berkelium undergoes a transition to an antiferromagnetic state. The enthalpy of dissolution in hydrochloric acid at standard conditions is −600 kJ/mol, from which the standard enthalpy of formation (ΔfH°) of aqueous ions is obtained as −601 kJ/mol. The standard electrode potential /Bk is −2.01 V. The ionization potential of a neutral berkelium atom is 6.23 eV. Allotropes At ambient conditions, berkelium assumes its most stable α form which has a hexagonal symmetry, space group P63/mmc, lattice parameters of 341 pm and 1107 pm. The crystal has a double-hexagonal close packing structure with the layer sequence ABAC and so is isotypic (having a similar structure) with α-lanthanum and α-forms of actinides beyond curium. This crystal structure changes with pressure and temperature. When compressed at room temperature to 7 GPa, α-berkelium transforms to the β modification, which has a face-centered cubic (fcc) symmetry and space group Fmm. This transition occurs without change in volume, but the enthalpy increases by 3.66 kJ/mol. Upon further compression to 25 GPa, berkelium transforms to an orthorhombic γ-berkelium structure similar to that of α-uranium. This transition is accompanied by a 12% volume decrease and delocalization of the electrons at the 5f electron shell. No further phase transitions are observed up to 57 GPa. Upon heating, α-berkelium transforms into another phase with an fcc lattice (but slightly different from β-berkelium), space group Fmm and the lattice constant of 500 pm; this fcc structure is equivalent to the closest packing with the sequence ABC. This phase is metastable and will gradually revert to the original α-berkelium phase at room temperature. The temperature of the phase transition is believed to be quite close to the melting point. Chemical Like all actinides, berkelium dissolves in various aqueous inorganic acids, liberating gaseous hydrogen and converting into the state. This trivalent oxidation state (+3) is the most stable, especially in aqueous solutions, but tetravalent (+4), pentavalent (+5), and possibly divalent (+2) berkelium compounds are also known. The existence of divalent berkelium salts is uncertain and has only been reported in mixed lanthanum(III) chloride-strontium chloride melts. A similar behavior is observed for the lanthanide analogue of berkelium, terbium. Aqueous solutions of ions are green in most acids. The color of ions is yellow in hydrochloric acid and orange-yellow in sulfuric acid. Berkelium does not react rapidly with oxygen at room temperature, possibly due to the formation of a protective oxide layer surface. However, it reacts with molten metals, hydrogen, halogens, chalcogens and pnictogens to form various binary compounds. Isotopes Nineteen isotopes and six nuclear isomers (excited states of an isotope) of berkelium have been characterized, with mass numbers ranging from 233 to 253 (except 235 and 237). All of them are radioactive. The longest half-lives are observed for 247Bk (1,380 years), 248Bk (over 300 years), and 249Bk (330 days); the half-lives of the other isotopes range from microseconds to several days. The isotope which is the easiest to synthesize is berkelium-249. This emits mostly soft β-particles which are inconvenient for detection. Its alpha radiation is rather weak (1.45%) with respect to the β-radiation, but is sometimes used to detect this isotope. The second important berkelium isotope, berkelium-247, is an alpha-emitter, as are most actinide isotopes. 
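For a sense of scale, the following Python sketch compares how much of each of the three longest-lived berkelium isotopes listed above would remain after one year of storage, using simple exponential decay; the one-year period is an arbitrary example.

```python
import math

# Half-lives quoted above, converted to days where needed.
HALF_LIVES_DAYS = {
    "Bk-247": 1380 * 365.25,   # 1,380 years
    "Bk-248": 300 * 365.25,    # "over 300 years" -- the lower bound is used here
    "Bk-249": 330,             # 330 days
}

def surviving_fraction(half_life_days: float, elapsed_days: float) -> float:
    """Fraction of the nuclide left after exponential decay."""
    return 0.5 ** (elapsed_days / half_life_days)

elapsed = 365.25  # one year of storage (illustrative value)
for isotope, t_half in HALF_LIVES_DAYS.items():
    print(f"{isotope}: {surviving_fraction(t_half, elapsed) * 100:6.2f}% left after one year")
```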
Occurrence All berkelium isotopes have a half-life far too short to be primordial. Therefore, any primordial berkelium − that is, berkelium present on the Earth during its formation − has decayed by now. On Earth, berkelium is mostly concentrated in certain areas, which were used for the atmospheric nuclear weapons tests between 1945 and 1980, as well as at the sites of nuclear incidents, such as the Chernobyl disaster, Three Mile Island accident and 1968 Thule Air Base B-52 crash. Analysis of the debris at the testing site of the first United States' first thermonuclear weapon, Ivy Mike, (1 November 1952, Enewetak Atoll), revealed high concentrations of various actinides, including berkelium. For reasons of military secrecy, this result was not published until 1956. Nuclear reactors produce mostly, among the berkelium isotopes, berkelium-249. During the storage and before the fuel disposal, most of it beta decays to californium-249. The latter has a half-life of 351 years, which is relatively long compared to the half-lives of other isotopes produced in the reactor, and is therefore undesirable in the disposal products. The transuranium elements from americium to fermium, including berkelium, occurred naturally in the natural nuclear fission reactor at Oklo, but no longer do so. Berkelium is also one of the elements that have theoretically been detected in Przybylski's Star. History Although very small amounts of berkelium were possibly produced in previous nuclear experiments, it was first intentionally synthesized, isolated and identified in December 1949 by Glenn T. Seaborg, Albert Ghiorso, Stanley Gerald Thompson, and Kenneth Street Jr. They used the 60-inch cyclotron at the University of California, Berkeley. Similar to the nearly simultaneous discovery of americium (element 95) and curium (element 96) in 1944, the new elements berkelium and californium (element 98) were both produced in 1949–1950. The name choice for element 97 followed the previous tradition of the Californian group to draw an analogy between the newly discovered actinide and the lanthanide element positioned above it in the periodic table. Previously, americium was named after a continent as its analogue europium, and curium honored scientists Marie and Pierre Curie as the lanthanide above it, gadolinium, was named after the explorer of the rare-earth elements Johan Gadolin. Thus the discovery report by the Berkeley group reads: "It is suggested that element 97 be given the name berkelium (symbol Bk) after the city of Berkeley in a manner similar to that used in naming its chemical homologue terbium (atomic number 65) whose name was derived from the town of Ytterby, Sweden, where the rare earth minerals were first found." This tradition ended with berkelium, though, as the naming of the next discovered actinide, californium, was not related to its lanthanide analogue dysprosium, but after the discovery place. The most difficult steps in the synthesis of berkelium were its separation from the final products and the production of sufficient quantities of americium for the target material. First, americium (241Am) nitrate solution was coated on a platinum foil, the solution was evaporated and the residue converted by annealing to americium dioxide (). This target was irradiated with 35 MeV alpha particles for 6 hours in the 60-inch cyclotron at the Lawrence Radiation Laboratory, University of California, Berkeley. 
The (α,2n) reaction induced by the irradiation yielded the 243Bk isotope and two free neutrons: + → + 2 After the irradiation, the coating was dissolved with nitric acid and then precipitated as the hydroxide using concentrated aqueous ammonia solution. The product was centrifugated and re-dissolved in nitric acid. To separate berkelium from the unreacted americium, this solution was added to a mixture of ammonium and ammonium sulfate and heated to convert all the dissolved americium into the oxidation state +6. Unoxidized residual americium was precipitated by the addition of hydrofluoric acid as americium(III) fluoride (). This step yielded a mixture of the accompanying product curium and the expected element 97 in form of trifluorides. The mixture was converted to the corresponding hydroxides by treating it with potassium hydroxide, and after centrifugation, was dissolved in perchloric acid. Further separation was carried out in the presence of a citric acid/ammonium buffer solution in a weakly acidic medium (pH≈3.5), using ion exchange at elevated temperature. The chromatographic separation behavior was unknown for the element 97 at the time, but was anticipated by analogy with terbium. The first results were disappointing because no alpha-particle emission signature could be detected from the elution product. With further analysis, searching for characteristic X-rays and conversion electron signals, a berkelium isotope was eventually detected. Its mass number was uncertain between 243 and 244 in the initial report, but was later established as 243. Synthesis and extraction Preparation of isotopes Berkelium is produced by bombarding lighter actinides uranium (238U) or plutonium (239Pu) with neutrons in a nuclear reactor. In a more common case of uranium fuel, plutonium is produced first by neutron capture (the so-called (n,γ) reaction or neutron fusion) followed by beta-decay: ^{238}_{92}U ->[\ce{(n,\gamma)}] ^{239}_{92}U ->[\beta^-][23.5 \ \ce{min}] ^{239}_{93}Np ->[\beta^-][2.3565 \ \ce{d}] ^{239}_{94}Pu (the times are half-lives) Plutonium-239 is further irradiated by a source that has a high neutron flux, several times higher than a conventional nuclear reactor, such as the 85-megawatt High Flux Isotope Reactor (HFIR) at the Oak Ridge National Laboratory in Tennessee, US. The higher flux promotes fusion reactions involving not one but several neutrons, converting 239Pu to 244Cm and then to 249Cm: Curium-249 has a short half-life of 64 minutes, and thus its further conversion to 250Cm has a low probability. Instead, it transforms by beta-decay into 249Bk: ^{249}_{96}Cm ->[{\beta^-}][64.15 \ \ce{min}] ^{249}_{97}Bk ->[\beta^-][330 \ \ce{d}] ^{249}_{98}Cf The thus-produced 249Bk has a long half-life of 330 days and thus can capture another neutron. However, the product, 250Bk, again has a relatively short half-life of 3.212 hours and thus does not yield any heavier berkelium isotopes. It instead decays to the californium isotope 250Cf: ^{249}_{97}Bk ->[\ce{(n,\gamma)}] ^{250}_{97}Bk ->[\beta^-][3.212 \ \ce{h}] ^{250}_{98}Cf Although 247Bk is the most stable isotope of berkelium, its production in nuclear reactors is very difficult because its potential progenitor 247Cm has never been observed to undergo beta decay. Thus, 249Bk is the most accessible isotope of berkelium, which still is available only in small quantities (only 0.66 grams have been produced in the US over the period 1967–1983) at a high price of the order 185 USD per microgram. 
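The last step of the chain above, the beta decay of 249Bk (half-life 330 days) into 249Cf (half-life 351 years, as given in the occurrence section), can be followed with the standard two-member Bateman solution. The Python sketch below tracks both nuclides for a freshly purified 249Bk sample; it ignores further neutron capture, so it describes storage outside the reactor rather than continued irradiation.

```python
import math

T_HALF_BK249_D = 330.0            # days, from the text
T_HALF_CF249_D = 351.0 * 365.25   # 351 years, converted to days

L_BK = math.log(2) / T_HALF_BK249_D
L_CF = math.log(2) / T_HALF_CF249_D

def bateman_pair(t_days: float, n_bk0: float = 1.0):
    """Amounts of 249Bk and its daughter 249Cf at time t, starting from pure 249Bk
    (standard two-member Bateman solution, no neutron capture)."""
    n_bk = n_bk0 * math.exp(-L_BK * t_days)
    n_cf = n_bk0 * L_BK / (L_CF - L_BK) * (math.exp(-L_BK * t_days) - math.exp(-L_CF * t_days))
    return n_bk, n_cf

for days in (30, 330, 3 * 365):
    bk, cf = bateman_pair(days)
    print(f"t = {days:5d} d: Bk-249 = {bk:.3f}, Cf-249 = {cf:.3f} (relative to initial Bk)")
```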
It is the only berkelium isotope available in bulk quantities, and thus the only berkelium isotope whose properties can be extensively studied. The isotope 248Bk was first obtained in 1956 by bombarding a mixture of curium isotopes with 25 MeV α-particles. Although its direct detection was hindered by strong signal interference with 245Bk, the existence of a new isotope was proven by the growth of the decay product 248Cf which had been previously characterized. The half-life of 248Bk was estimated as hours, though later 1965 work gave a half-life in excess of 300 years (which may be due to an isomeric state). Berkelium-247 was produced during the same year by irradiating 244Cm with alpha-particles: Berkelium-242 was synthesized in 1979 by bombarding 235U with 11B, 238U with 10B, 232Th with 14N or 232Th with 15N. It converts by electron capture to 242Cm with a half-life of minutes. A search for an initially suspected isotope 241Bk was then unsuccessful; 241Bk has since been synthesized. Separation The fact that berkelium readily assumes oxidation state +4 in solids, and is relatively stable in this state in liquids greatly assists separation of berkelium away from many other actinides. These are inevitably produced in relatively large amounts during the nuclear synthesis and often favor the +3 state. This fact was not yet known in the initial experiments, which used a more complex separation procedure. Various inorganic oxidation agents can be applied to the solutions to convert it to the +4 state, such as bromates (), bismuthates (), chromates ( and ), silver(I) thiolate (), lead(IV) oxide (), ozone (), or photochemical oxidation procedures. More recently, it has been discovered that some organic and bio-inspired molecules, such as the chelator called 3,4,3-LI(1,2-HOPO), can also oxidize Bk(III) and stabilize Bk(IV) under mild conditions. is then extracted with ion exchange, extraction chromatography or liquid-liquid extraction using HDEHP (bis-(2-ethylhexyl) phosphoric acid), amines, tributyl phosphate or various other reagents. These procedures separate berkelium from most trivalent actinides and lanthanides, except for the lanthanide cerium (lanthanides are absent in the irradiation target but are created in various nuclear fission decay chains). A more detailed procedure adopted at the Oak Ridge National Laboratory was as follows: the initial mixture of actinides is processed with ion exchange using lithium chloride reagent, then precipitated as hydroxides, filtered and dissolved in nitric acid. It is then treated with high-pressure elution from cation exchange resins, and the berkelium phase is oxidized and extracted using one of the procedures described above. Reduction of the thus-obtained to the +3 oxidation state yields a solution, which is nearly free from other actinides (but contains cerium). Berkelium and cerium are then separated with another round of ion-exchange treatment. Bulk metal preparation In order to characterize chemical and physical properties of solid berkelium and its compounds, a program was initiated in 1952 at the Material Testing Reactor, Arco, Idaho, US. It resulted in preparation of an eight-gram plutonium-239 target and in the first production of macroscopic quantities (0.6 micrograms) of berkelium by Burris B. Cunningham and Stanley Gerald Thompson in 1958, after a continuous reactor irradiation of this target for six years. 
This irradiation method was and still is the only way of producing weighable amounts of the element, and most solid-state studies of berkelium have been conducted on microgram or submicrogram-sized samples. The world's major irradiation sources are the 85-megawatt High Flux Isotope Reactor at the Oak Ridge National Laboratory in Tennessee, USA, and the SM-2 loop reactor at the Research Institute of Atomic Reactors (NIIAR) in Dimitrovgrad, Russia, which are both dedicated to the production of transcurium elements (atomic number greater than 96). These facilities have similar power and flux levels, and are expected to have comparable production capacities for transcurium elements, although the quantities produced at NIIAR are not publicly reported. In a "typical processing campaign" at Oak Ridge, tens of grams of curium are irradiated to produce decigram quantities of californium, milligram quantities of berkelium-249 and einsteinium, and picogram quantities of fermium. In total, just over one gram of berkelium-249 has been produced at Oak Ridge since 1967. The first berkelium metal sample weighing 1.7 micrograms was prepared in 1971 by the reduction of fluoride with lithium vapor at 1000 °C; the fluoride was suspended on a tungsten wire above a tantalum crucible containing molten lithium. Later, metal samples weighing up to 0.5 milligrams were obtained with this method. Similar results are obtained with fluoride. Berkelium metal can also be produced by the reduction of oxide with thorium or lanthanum. Compounds Oxides Two oxides of berkelium are known, with the berkelium oxidation state of +3 () and +4 (). oxide is a brown solid, while oxide is a yellow-green solid with a melting point of 1920 °C and is formed from BkO2 by reduction with molecular hydrogen: Upon heating to 1200 °C, the oxide undergoes a phase change; it undergoes another phase change at 1750 °C. Such three-phase behavior is typical for the actinide sesquioxides. oxide, BkO, has been reported as a brittle gray solid but its exact chemical composition remains uncertain. Halides In halides, berkelium assumes the oxidation states +3 and +4. The +3 state is the most stable, especially in solutions, while the tetravalent halides and are only known in the solid phase. The coordination of berkelium atom in its trivalent fluoride and chloride is tricapped trigonal prismatic, with the coordination number of 9. In trivalent bromide, it is bicapped trigonal prismatic (coordination 8) or octahedral (coordination 6), and in the iodide it is octahedral. fluoride () is a yellow-green ionic solid and is isotypic with uranium tetrafluoride or zirconium tetrafluoride. fluoride () is also a yellow-green solid, but it has two crystalline structures. The most stable phase at low temperatures is isotypic with yttrium(III) fluoride, while upon heating to between 350 and 600 °C, it transforms to the structure found in lanthanum trifluoride. Visible amounts of chloride () were first isolated and characterized in 1962, and weighed only 3 billionths of a gram. It can be prepared by introducing hydrogen chloride vapors into an evacuated quartz tube containing berkelium oxide at a temperature about 500 °C. This green solid has a melting point of 600 °C, and is isotypic with uranium(III) chloride. Upon heating to nearly melting point, converts into an orthorhombic phase. Two forms of bromide are known: one with berkelium having coordination 6, and one with coordination 8. 
The latter is less stable and transforms to the former phase upon heating to about 350 °C. An important phenomenon for radioactive solids has been studied on these two crystal forms: the structure of fresh and aged 249BkBr3 samples was probed by X-ray diffraction over a period longer than 3 years, so that various fractions of berkelium-249 had beta decayed to californium-249. No change in structure was observed upon the 249BkBr3—249CfBr3 transformation. However, other differences were noted for 249BkBr3 and 249CfBr3. For example, the latter could be reduced with hydrogen to 249CfBr2, but the former could not – this result was reproduced on individual 249BkBr3 and 249CfBr3 samples, as well on the samples containing both bromides. The intergrowth of californium in berkelium occurs at a rate of 0.22% per day and is an intrinsic obstacle in studying berkelium properties. Beside a chemical contamination, 249Cf, being an alpha emitter, brings undesirable self-damage of the crystal lattice and the resulting self-heating. The chemical effect however can be avoided by performing measurements as a function of time and extrapolating the obtained results. Other inorganic compounds The pnictides of berkelium-249 of the type BkX are known for the elements nitrogen, phosphorus, arsenic and antimony. They crystallize in the rock-salt structure and are prepared by the reaction of either hydride () or metallic berkelium with these elements at elevated temperature (about 600 °C) under high vacuum. sulfide, , is prepared by either treating berkelium oxide with a mixture of hydrogen sulfide and carbon disulfide vapors at 1130 °C, or by directly reacting metallic berkelium with elemental sulfur. These procedures yield brownish-black crystals. and hydroxides are both stable in 1 molar solutions of sodium hydroxide. phosphate () has been prepared as a solid, which shows strong fluorescence under excitation with a green light. Berkelium hydrides are produced by reacting metal with hydrogen gas at temperatures about 250 °C. They are non-stoichiometric with the nominal formula (0 < x < 1). Several other salts of berkelium are known, including an oxysulfide (), and hydrated nitrate (), chloride (), sulfate () and oxalate (). Thermal decomposition at about 600 °C in an argon atmosphere (to avoid oxidation to ) of yields the crystals of oxysulfate (). This compound is thermally stable to at least 1000 °C in inert atmosphere. Organoberkelium compounds Berkelium forms a trigonal (η5–C5H5)3Bk metallocene complex with three cyclopentadienyl rings, which can be synthesized by reacting chloride with the molten beryllocene () at about 70 °C. It has an amber color and a density of 2.47 g/cm3. The complex is stable to heating to at least 250 °C, and sublimates without melting at about 350 °C. The high radioactivity of berkelium gradually destroys the compound (within a period of weeks). One cyclopentadienyl ring in (η5–C5H5)3Bk can be substituted by chlorine to yield . The optical absorption spectra of this compound are very similar to those of (η5–C5H5)3Bk. Applications There is currently no use for any isotope of berkelium outside basic scientific research. Berkelium-249 is a common target nuclide to prepare still heavier transuranium elements and superheavy elements, such as lawrencium, rutherfordium and bohrium. 
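The californium ingrowth rate of 0.22% per day quoted above follows directly from the 330-day half-life of 249Bk. The short Python sketch below recomputes the fraction of a freshly purified sample that converts during the first day, which comes out near 0.21% per day, consistent with the quoted figure to within rounding.

```python
import math

T_HALF_BK249_DAYS = 330.0  # half-life of berkelium-249, from the text
decay_constant_per_day = math.log(2) / T_HALF_BK249_DAYS

# Fraction of Bk-249 that has beta-decayed to Cf-249 after one day,
# starting from a freshly purified sample.
converted_after_one_day = 1 - math.exp(-decay_constant_per_day * 1.0)
print(f"~{converted_after_one_day * 100:.2f}% of the Bk-249 converts to Cf-249 per day")
```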
It is also useful as a source of the isotope californium-249, which is used for studies on the chemistry of californium in preference to the more radioactive californium-252 that is produced in neutron bombardment facilities such as the HFIR. A 22 milligram batch of berkelium-249 was prepared in a 250-day irradiation and then purified for 90 days at Oak Ridge in 2009. This target yielded the first 6 atoms of tennessine at the Joint Institute for Nuclear Research (JINR), Dubna, Russia, after bombarding it with calcium ions in the U400 cyclotron for 150 days. This synthesis was a culmination of the Russia-US collaboration between JINR and Lawrence Livermore National Laboratory on the synthesis of elements 113 to 118 which was initiated in 1989. Nuclear fuel cycle The nuclear fission properties of berkelium are different from those of the neighboring actinides curium and californium, and they suggest berkelium to perform poorly as a fuel in a nuclear reactor. Specifically, berkelium-249 has a moderately large neutron capture cross section of 710 barns for thermal neutrons, 1200 barns resonance integral, but very low fission cross section for thermal neutrons. In a thermal reactor, much of it will therefore be converted to berkelium-250 which quickly decays to californium-250. In principle, berkelium-249 can sustain a nuclear chain reaction in a fast breeder reactor. Its critical mass is relatively high at 192 kg, which can be reduced with a water or steel reflector but would still exceed the world production of this isotope. Berkelium-247 can maintain a chain reaction both in a thermal-neutron and in a fast-neutron reactor, however, its production is rather complex and thus the availability is much lower than its critical mass, which is about 75.7 kg for a bare sphere, 41.2 kg with a water reflector and 35.2 kg with a steel reflector (30 cm thickness). Health issues Little is known about the effects of berkelium on human body, and analogies with other elements may not be drawn because of different radiation products (electrons for berkelium and alpha particles, neutrons, or both for most other actinides). The low energy of electrons emitted from berkelium-249 (less than 126 keV) hinders its detection, due to signal interference with other decay processes, but also makes this isotope relatively harmless to humans as compared to other actinides. However, berkelium-249 transforms with a half-life of only 330 days to the strong alpha-emitter californium-249, which is rather dangerous and has to be handled in a glovebox in a dedicated laboratory. Most available berkelium toxicity data originate from research on animals. Upon ingestion by rats, only about 0.01% of berkelium ends in the blood stream. From there, about 65% goes to the bones, where it remains for about 50 years, 25% to the lungs (biological half-life about 20 years), 0.035% to the testicles or 0.01% to the ovaries where berkelium stays indefinitely. The balance of about 10% is excreted. In all these organs berkelium might promote cancer, and in the skeleton, its radiation can damage red blood cells. The maximum permissible amount of berkelium-249 in the human skeleton is 0.4 nanograms.
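Returning to the fuel-cycle figures above, whether a 249Bk nucleus in a reactor is more likely to capture a neutron (becoming 250Bk and then 250Cf) or simply to decay depends on the thermal neutron flux. The Python sketch below compares the two single-nucleus rates using the 710-barn capture cross section and the 330-day half-life quoted above, together with an assumed flux of 1×10^15 neutrons per cm2 per second, which is a typical order of magnitude for a high-flux reactor and not a figure taken from the text.

```python
import math

SIGMA_CAPTURE_CM2 = 710e-24    # 710 barns thermal capture cross section, from the text
T_HALF_DAYS = 330.0            # 249Bk half-life, from the text
FLUX = 1e15                    # assumed thermal flux in n/(cm^2 s); illustrative only

capture_rate = SIGMA_CAPTURE_CM2 * FLUX               # captures per second, per nucleus
decay_rate = math.log(2) / (T_HALF_DAYS * 24 * 3600)  # decays per second, per nucleus

print(f"capture rate: {capture_rate:.2e} /s   decay rate: {decay_rate:.2e} /s")
print(f"capture is roughly {capture_rate / decay_rate:.0f}x faster at this assumed flux")
```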
Bauxite
Bauxite () is a sedimentary rock with a relatively high aluminium content. It is the world's main source of aluminium and gallium. Bauxite consists mostly of the aluminium minerals gibbsite (), boehmite (γ-AlO(OH)) and diaspore (α-AlO(OH)), mixed with the two iron oxides goethite (FeO(OH)) and haematite (), the aluminium clay mineral kaolinite () and small amounts of anatase () and ilmenite ( or ). Bauxite appears dull in luster and is reddish-brown, white, or tan. In 1821, the French geologist Pierre Berthier discovered bauxite near the village of Les Baux in Provence, southern France. Formation Numerous classification schemes have been proposed for bauxite but, , there was no consensus. Vadász (1951) distinguished lateritic bauxites (silicate bauxites) from karst bauxite ores (carbonate bauxites): The carbonate bauxites occur predominantly in Europe, Guyana, Suriname, and Jamaica above carbonate rocks (limestone and dolomite), where they were formed by lateritic weathering and residual accumulation of intercalated clay layers – dispersed clays which were concentrated as the enclosing limestones gradually dissolved during chemical weathering. The lateritic bauxites are found mostly in the countries of the tropics. They were formed by lateritization of various silicate rocks such as granite, gneiss, basalt, syenite, and shale. In comparison with the iron-rich laterites, the formation of bauxites depends even more on intense weathering conditions in a location with very good drainage. This enables the dissolution of the kaolinite and the precipitation of the gibbsite. Zones with highest aluminium content are frequently located below a ferruginous surface layer. The aluminium hydroxide in the lateritic bauxite deposits is almost exclusively gibbsite. In the case of Jamaica, recent analysis of the soils showed elevated levels of cadmium, suggesting that the bauxite originates from Miocene volcanic ash deposits from episodes of significant volcanism in Central America. Production and reserves Australia is the largest producer of bauxite, followed by Guinea and China. Bauxite is usually strip mined because it is almost always found near the surface of the terrain, with little or no overburden. Increased aluminium recycling, which requires less electric power than producing aluminium from ores, may considerably extend the world's bauxite reserves. Aluminium production , approximately 70% to 80% of the world's dry bauxite production is processed first into alumina and then into aluminium by electrolysis. Bauxite rocks are typically classified according to their intended commercial application: metallurgical, abrasive, cement, chemical, and refractory. Bauxite ore is usually heated in a pressure vessel along with a sodium hydroxide solution at a temperature of . At these temperatures, the aluminium is dissolved as sodium aluminate (the Bayer process). The aluminium compounds in the bauxite may be present as gibbsite (Al(OH)3), boehmite (AlOOH) or diaspore (AlOOH); the different forms of the aluminium component will dictate the extraction conditions. The undissolved waste, bauxite tailings, after the aluminium compounds are extracted contains iron oxides, silica, calcia, titania and some un-reacted alumina. After separation of the residue by filtering, pure gibbsite is precipitated when the liquid is cooled, and then seeded with fine-grained aluminium hydroxide. 
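Because the extraction conditions in the Bayer process depend on which aluminium mineral dominates, it is useful to see how much aluminium each of the ore minerals named above actually carries. The Python sketch below computes the aluminium mass fraction of gibbsite, boehmite and diaspore from their chemical formulas; the molar masses are rounded standard values rather than figures from the text.

```python
# Aluminium mass fraction of the main bauxite minerals named above.
# Molar masses are rounded standard values (g/mol).
M = {"Al": 26.98, "O": 16.00, "H": 1.008}

def mass_fraction_al(formula: dict) -> float:
    """Aluminium mass fraction for a composition given as {element: atom count}."""
    total = sum(M[el] * n for el, n in formula.items())
    return M["Al"] * formula.get("Al", 0) / total

minerals = {
    "gibbsite Al(OH)3": {"Al": 1, "O": 3, "H": 3},
    "boehmite AlO(OH)": {"Al": 1, "O": 2, "H": 1},
    "diaspore AlO(OH)": {"Al": 1, "O": 2, "H": 1},
}
for name, formula in minerals.items():
    print(f"{name}: {mass_fraction_al(formula) * 100:.1f}% Al by mass")
```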
The gibbsite is usually converted into aluminium oxide, Al2O3, by heating in rotary kilns or fluid flash calciners to a temperature in excess of . This aluminium oxide is dissolved at a temperature of about in molten cryolite. Next, this molten substance can yield metallic aluminium by passing an electric current through it in the process of electrolysis, which is called the Hall–Héroult process, named after its American and French discoverers. Prior to the invention of this process, and prior to the Deville process, aluminium ore was refined by heating ore along with elemental sodium or potassium in a vacuum. The method was complicated and consumed materials that were themselves expensive at that time. This made early elemental aluminium more expensive than gold. Maritime safety As a bulk cargo, bauxite is a Group A cargo that may liquefy if excessively moist. Liquefaction and the free surface effect can cause the cargo to shift rapidly inside the hold and make the ship unstable, potentially sinking the ship. One vessel suspected to have been sunk in this way was the MS Bulk Jupiter in 2015. One method which can demonstrate this effect is the "can test", in which a sample of the material is placed in a cylindrical can and struck against a surface many times. If a moist slurry forms in the can, then there is a likelihood for the cargo to liquefy; although conversely, even if the sample remains dry it does not conclusively prove that it will remain that way, or that it is safe for loading. Source of gallium Bauxite is the main source of the rare metal gallium. During the processing of bauxite to alumina in the Bayer process, gallium accumulates in the sodium hydroxide liquor. From this it can be extracted by a variety of methods. The most recent is the use of ion-exchange resin. Achievable extraction efficiencies critically depend on the original concentration in the feed bauxite. At a typical feed concentration of 50 ppm, about 15 percent of the contained gallium is extractable. The remainder reports to the red mud and aluminium hydroxide streams. Bauxite is also a potential source for vanadium. Socio-ecological impacts The social and environmental impacts of bauxite extraction are well documented. Most of the world's bauxite deposits can be found within of the earths surface. Strip mining is the most common technique used for extracting shallow bauxite. This process involves removing the vegetation, top soil, and overburden to expose the bauxite ore. The overlying soil is typically stockpiled in order to rehabilitate the mine once operations have finished. During the strip mining process, the biodiversity and habitat once present in the area is completely lost and the hydrological and soil characteristics in the region are permanently altered. Other environmental impacts of bauxite mining include soil degradation, air pollution, and water pollution. Red mud Red mud is a highly alkaline sludge, with a high pH around 13, that is a byproduct of the Bayer process. It contains several elements such as sodium aluminoscilicate, calcium titanate, monohydrate aluminium, and trihydrate aluminium that do not break down in nature. When improperly stored, red mud can contaminate soil and water, which can result in local extinction of all life. Red mud was responsible for killing all life in the Marcal River in Hungary after a spill occurred in 2010. When red mud dries, it turns into dust that can cause lung disease, cancer and birth defects. 
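Using the figures quoted in the gallium passage above, roughly 50 ppm of gallium in the feed bauxite and about 15 percent of that contained gallium actually extractable, the Python sketch below estimates the recoverable gallium for a given refinery throughput; the 10-million-tonne annual throughput is an arbitrary example value, not a figure from the text.

```python
GALLIUM_PPM = 50              # typical feed concentration from the text (g of Ga per tonne)
EXTRACTION_EFFICIENCY = 0.15  # ~15% of the contained gallium is extractable, per the text

def recoverable_gallium_tonnes(bauxite_tonnes: float) -> float:
    """Rough estimate of extractable gallium for a given mass of processed bauxite."""
    contained_tonnes = bauxite_tonnes * GALLIUM_PPM / 1_000_000
    return contained_tonnes * EXTRACTION_EFFICIENCY

if __name__ == "__main__":
    throughput = 10_000_000  # tonnes of bauxite per year (illustrative value)
    print(f"{recoverable_gallium_tonnes(throughput):.0f} t of gallium recoverable "
          f"from {throughput:,} t of bauxite")
```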
Conflicts In the tropical regions of Asia, central Africa, South America and northern Australia, there has been an increase in bauxite mining on traditional and indigenous lands. This has resulted in a number of negative social impacts on local and indigenous peoples. In the Boké Region of Guinea, there has been a significant increase in bauxite mining pressure on the local population. This has resulted in potable water issues, air pollution, food contamination, and land expropriation disputes due to improper compensation. Bauxite mining has led to protests, civil unrest, and violent conflicts in Guinea, Ghana, Vietnam, and India. Guinea Guinea has a long history of mining-related conflicts between communities and mining companies. Between 2015 and 2018, new bauxite mining operations in the Boké Region of Guinea caused 35 conflicts, including revolts and road blockades. These conflicts have resulted in the loss of human life, the destruction of heavy machinery, and damage to government buildings. Ghana The Atewa range in Ghana, classified as an ecologically important forest reserve with an area of , has been a recent site of conflict and controversy surrounding bauxite mining. The forest reserve is one of only two upland evergreen forests in Ghana, and makes up a significant portion of the remaining 20% of forested habitat left in the country. The Atewa range falls under the jurisdiction of the Akyem Abuakwa Traditional Area and is overseen by the king, known as the Okyenhene. In 2013, an NGO called A Rocha Ghana held a summit with the forestry and water resources commission, the minister of lands, the minister of the environment, and other important stakeholders. They came to the conclusion that no future government should mine bauxite in the region because the reserve is environmentally and culturally significant. In 2016, the government, along with NGOs, began the process of upgrading the reserve to a national park. However, an election took place that year, and before the upgrade became official, the newly elected New Patriotic Party (NPP) rejected the plan. In 2017, the government of Ghana signed a Memorandum of Understanding with China to develop new bauxite mining infrastructure in Ghana. Although there was no official plan to mine the Atewa Forest Reserve, tensions between local communities, NGOs and the government began to rise. In 2019, tensions reached a peak when the government presented the Ghana Integrated Bauxite and Aluminium Development Authority Act, which would create the legal framework required to develop and establish an integrated bauxite industry. In May of that year, the government began drilling deep holes in the reserve. These actions sparked several protests, including a march from the reserve to the presidential palace, an informational billboard campaign led by A Rocha Ghana, and a youth march. In 2020, A Rocha Ghana also sued the government over the drilling in the reserve after it failed to provide a statement explaining its actions. Vietnam In early 2009, the Vietnamese Government proposed a plan to mine remote regions of the central highlands. This proposal was highly controversial and sparked a nationwide debate and the most significant domestic conflict since the Vietnam War. Government scientists, journalists, religious leaders, retired high-level state officials, and General Võ Nguyên Giáp, the military leader of the anti-colonial revolution, were among the many people across Vietnamese society who opposed the government's plans. 
In an attempt to stop the spread of information across the globe, the government banned domestic reporters from reporting on bauxite mining. However, reporters turned to Vietnamese-language websites and blogs, where the reporting and discussion continued. On April 12, 2009, several well-respected Vietnamese scholars started a petition against the mining of bauxite that was signed by 135 accomplished and well-known intellectuals. This petition helped unite the scattered anti-bauxite movement into a unified opposition against the state. These acts of governmental defiance were met with repressive state actions. Many domestic online reporters were arrested, and legislative action was taken to repress scientific research. India Most of India's bauxite ore reserves, which are among the top ten largest in the world, are located on tribal land. These tribal lands are densely populated and home to over 100 million Indigenous Indian peoples. The mountain summits located on these lands act as a source of water and greatly contribute to the region's fertility. The Indian bauxite industry is interested in developing this land for aluminium production, which poses great risks to the terrestrial and aquatic ecosystems. Historically, the Indigenous peoples living on these lands have shown resistance to development and oppose any new bauxite mining projects in the area. This has led to violent conflicts between Indigenous communities and police. On December 16, 2000, police killed three Indigenous protestors and wounded over a dozen more during a protest over a bauxite project in the Rayagada district of Odisha.
Book
A book is a medium for recording information in the form of writing or images. Modern books are typically in codex format, composed of many pages that are bound together and protected by a cover; they were preceded by several earlier formats, including the scroll and the tablet. The book publishing process is the series of steps involved in their creation and dissemination. As a conceptual object, a book refers to a written work of substantial length, which may be distributed either physically or digitally as an ebook. These works can be broadly classified into fiction (containing invented content, often narratives) and non-fiction (containing content intended as factual truth). A physical book may not contain such a work: for example, it may contain only drawings, engravings, photographs, puzzles, or removable content like paper dolls. It may also be left empty for personal use, as in the case of account books, appointment books, autograph books, notebooks, diaries and sketchbooks. Books are sold at both regular stores and specialized bookstores, as well as online for delivery, and can be borrowed from libraries. The reception of books has led to a number of social consequences, including censorship. The modern book industry has seen several major changes due to new technologies, including ebooks and audiobooks (recordings of books being read aloud). Awareness of the needs of print-disabled people has led to a rise in formats designed for greater accessibility, such as braille printing and large-print editions. Google Books estimated in 2010 that approximately 130 million total unique books had been published. Etymology The word book comes from the Old English , which in turn likely comes from the Germanic root , cognate to "beech". In Slavic languages like Russian, Bulgarian, Macedonian —"letter" is cognate with "beech". In Russian, Serbian and Macedonian, the word () or () refers to a primary school textbook that helps young children master the techniques of reading and writing. It is thus conjectured that the earliest Indo-European writings may have been carved on beech wood. The Latin word , meaning a book in the modern sense (bound and with separate leaves), originally meant "block of wood". An avid reader or collector of books is a bibliophile, or colloquially a "bookworm". Definitions In its modern incarnation, a book is typically composed of many pages (commonly of paper, parchment, or vellum) that are bound together along one edge and protected by a cover. By extension, book refers to a physical book's written, printed, or graphic contents. A single part or division of a longer written work may also be called a book, especially for some works composed in antiquity: each part of Aristotle's Physics, for example, is a book. It is difficult to create a precise definition of the book that clearly delineates it from other kinds of written material across time and culture. The meaning of the term has changed substantially over time with the evolution of communication media. Historian of books James Raven has suggested that when studying how books have been used to communicate, they should be defined in a broadly inclusive way as "portable, durable, replicable and legible" means of recording and disseminating information, rather than relying on physical or contextual features. This would include, for example, ebooks, newspapers, and quipus (a form of knot-based recording historically used by cultures in Andean South America), but not objects fixed in place such as inscribed monuments. 
A stricter definition is given by UNESCO: for the purpose of recording national statistics on book production, it recommended that a book be defined as "a non-periodical printed publication of at least 49 pages, exclusive of the cover pages, published in the country and made available to the public", distinguishing them from other written material such as pamphlets. Kovač et al. have critiqued this definition for failing to account for new digital formats. They propose four criteria (a minimum length; textual content; a form with defined boundaries; and "information architecture" like linear structure and certain textual elements) that form a "hierarchy of the book", in which formats that fulfill more criteria are considered more similar to the traditional printed book. Although in academic language a monograph is a specialist work on a single subject, in library and information science the term is used more broadly to mean any non-serial publication complete in one volume (a physical book) or a definite number of volumes (such as a multi-volume novel), in contrast to serial or periodical publications. History The history of books became an acknowledged academic discipline in the 1980s. Contributions to the field have come from textual scholarship, codicology, bibliography, philology, palaeography, art history, social history and cultural history. It aims to demonstrate that the book as an object, not just the text contained within it, is a conduit of interaction between readers and words. Analysis of each component part of the book can reveal its purpose, where and how it was kept, who read it, ideological and religious beliefs of the period, and whether readers interacted with the text within. Even a lack of such evidence can leave valuable clues about the nature of a particular book. The earliest forms of writing were etched on tablets, transitioning to palm leaves and papyrus in ancient times. Parchment and paper later emerged as important substrates for bookmaking, introducing greater durability and accessibility. Across regions like China, the Middle East, Europe, and South Asia, diverse methods of book production evolved. The Middle Ages saw the rise of illuminated manuscripts, intricately blending text and imagery, particularly during the Mughal era in South Asia under the patronage of rulers like Akbar and Shah Jahan. Prior to the invention of the printing press in the 15th century, made famous by the Gutenberg Bible, each text was a unique handcrafted valuable article, personalized through the design features incorporated by the scribe, owner, bookbinder, and illustrator. Its creation marked a pivotal moment for book production. Innovations like movable type and steam-powered presses accelerated manufacturing processes and contributed to increased literacy rates. Copyright protection also emerged, securing authors' rights and shaping the publishing landscape. The Late Modern Period introduced chapbooks, catering to a wider range of readers, and mechanization of the printing process further enhanced efficiency. The 20th century witnessed the advent of typewriters, computers, and desktop publishing, transforming document creation and printing. Digital advancements in the 21st century led to the rise of ebooks, propelled by the popularity of ereaders and accessibility features. While discussions about the potential decline of physical books have surfaced, print media has proven remarkably resilient, continuing to thrive as a multi-billion dollar industry. 
Additionally, efforts to make literature more inclusive emerged, with the development of Braille for the visually impaired and the creation of spoken books, providing alternative ways for individuals to access and enjoy literature. Tablet Some of the earliest written records were made on tablets. Clay tablets (flattened pieces of clay impressed with a stylus) were used in the Ancient Near East throughout the Bronze Age and well into the Iron Age, especially for writing in cuneiform. Wax tablets (pieces of wood covered in a layer of wax) were used in classical antiquity and throughout the Middle Ages. The custom of binding several wax tablets together (Roman pugillares) is a possible precursor of modern bound books. The etymology of the word codex (block of wood) suggests that it may have developed from wooden wax tablets. Scroll Scrolls made from papyrus were first used for writing in Ancient Egypt, perhaps as early as the First Dynasty, although the earliest evidence is from the account books of King Neferirkare Kakai of the Fifth Dynasty (about 2400 BC). According to Herodotus (History 5:58), the Phoenicians brought writing and papyrus to Greece around the 10th or 9th century BC. Whether made from papyrus, parchment, or paper, scrolls were the dominant writing medium in the Hellenistic, Roman, Chinese, Hebrew, and Macedonian cultures. The codex dominated in the Roman world by late antiquity, but scrolls persisted much longer in Asia. Codex The codex is the ancestor of the modern book, consisting of sheets of uniform size bound along one edge and typically held between two covers made of some more robust material. Isidore of Seville (died 636) explained the then-current relation between a codex, book, and scroll in his Etymologiae (VI.13): "A codex is composed of many books; a book is of one scroll. It is called codex by way of metaphor from the trunks (codex) of trees or vines, as if it were a wooden stock, because it contains in itself a multitude of books, as it were of branches". The first written mention of the codex as a form of book is from Martial, in his Apophoreta CLXXXIV at the end of the first century, where he praises its compactness. However, the codex never gained much popularity in the pagan Hellenistic world, and only within the Christian community did it gain widespread use. This change happened gradually during the 3rd and 4th centuries, and the reasons for adopting the codex form of the book were several: the format was more economical than the scroll, as both sides of the writing material can be used; and it was portable, searchable, and easier to conceal. The Christian authors may also have wanted to distinguish their writings from the pagan and Judaic texts written on scrolls. The codices of pre-Columbian Mesoamerica had the same form as the European codex, but were instead made with long folded strips of either fig bark (amatl) or plant fibers, often with a layer of whitewash applied before writing. New World codices were written as late as the 16th century (see Maya codices and Aztec codices). Those written before the Spanish conquests seem all to have been single long sheets folded concertina-style, sometimes written on both sides of the local amatl paper. Manuscript Manuscripts, handwritten and hand-copied documents, were the only form of writing before the invention and widespread adoption of print. Advances were made in the techniques used to create them. 
In the early Western Roman Empire, monasteries continued Latin writing traditions related to Christianity, and the clergy were the predominant readers and copyists. The bookmaking process was long and laborious. They were usually written on parchment or vellum, writing surfaces made from processed animal skin. The parchment had to be prepared, then the unbound pages were planned and ruled with a blunt tool or lead, after which the text was written by a scribe, who usually left blank areas for illustration and rubrication. Finally, it was bound by a bookbinder. Because of the difficulties involved in making and copying books, they were expensive and rare. Smaller monasteries usually had only a few dozen books. By the 9th century, larger collections held around 500 volumes and even at the end of the Middle Ages, the papal library in Avignon and Paris library of the Sorbonne held only around 2,000 volumes. The rise of universities in the 13th century led to an increased demand for books, and a new system for copying appeared. The books were divided into unbound leaves (pecia), which were lent out to different copyists, so the speed of book production was considerably increased. The system was maintained by secular stationers guilds, which produced both religious and non-religious material.In India, bound manuscripts made of birch bark or palm leaf had existed since antiquity. The text in palm leaf manuscripts was inscribed with a knife pen on rectangular cut and cured palm leaf sheets; coloring was then applied to the surface and wiped off, leaving the ink in the incised grooves. Each sheet typically had a hole through which a string could pass, and with these the sheets were tied together with a string to bind like a book. Woodblock printing In woodblock printing, a relief image of an entire page is carved into blocks of wood, inked, and used to print copies of that page. It originated in the Han dynasty before 220 AD, used to print textiles and later paper, and was widely used throughout East Asia. The oldest dated book printed by this method is The Diamond Sutra (868 AD). The method (called woodcut when used in art) arrived in Europe in the early 14th century. Books (known as block-books), as well as playing-cards and religious pictures, began to be produced by this method. Creating an entire book was a painstaking process, requiring a hand-carved block for each page, and the wooden blocks could crack if stored for too long. Movable type and incunabula The Chinese inventor Bi Sheng made movable type of earthenware , but there are no known surviving examples of his printing. Around 1450, Johannes Gutenberg independently invented movable type in Europe, along with innovations in casting the type based on a matrix and hand mould. This invention gradually made books less expensive to produce and more widely available. Early printed books, single sheets and images which were created before 1501 in Europe are known as incunables or incunabula. 19th century to present Steam-powered printing presses became popular in the early 19th century. These machines could print 1,100 sheets per hour, but workers could only set 2,000 letters per hour. Monotype and linotype typesetting machines were introduced in the late 19th century. They could set more than 6,000 letters per hour and an entire line of type at once. There have been numerous improvements in the printing press. In mid-20th century, European book production had risen to over 200,000 titles per year. 
During the 20th century, libraries faced an ever-increasing rate of publishing, sometimes called an information explosion. The advent of electronic publishing and the internet means that new information is often published online rather than in printed books, for example through a digital library. "Print on demand" technologies, which make it possible to print as few as one book at a time, have made self-publishing (and vanity publishing) much easier and more affordable, and has allowed publishers to keep low-selling books in print rather than declaring them out of print. Contemporary publishing Presently, books are typically produced by a publishing company in order to be put on the market by distributors and bookstores. The publisher negotiates a formal legal agreement with authors in order to obtain the copyright to works, then arranges for them to be produced and sold. The major steps of the publishing process are: editing and proofreading the work to be published; designing the printed book; manufacturing the books; and selling the books, including marketing and promotion. Each of these steps is usually taken on by third-party companies paid by the publisher. This is in contrast to self-publishing, where an author pays for the production and distribution of their own work and manages some or all steps of the publishing process. English-language publishing is currently dominated by the so-called "Big Five" publishers: Penguin Random House, Hachette Book Group, HarperCollins, Simon & Schuster, and Macmillan Publishers. They were estimated to make up almost 60 percent of the market for general-readership books in 2021. Design Book design is the art of incorporating the content, style, format, design, and sequence of the various elements of a book into a coherent unit. Layout Modern books are organized according to a particular format called the book's layout. Although there is great variation in layout, modern books tend to adhere to a set of rules with regard to what the parts of the layout are and what their content usually includes. A basic layout will include a front cover, a back cover and the book's content which is called its body copy or content pages. The front cover often bears the book's title (and subtitle, if any) and the name of its author or editor(s). The inside front cover page is usually left blank in both hardcover and paperback books. The next section, if present, is the book's front matter, which includes all textual material after the front cover but not part of the book's content such as a foreword, a dedication, a table of contents and publisher data such as the book's edition or printing number and place of publication. Between the body copy and the back cover goes the end matter which would include any indices, sets of tables, diagrams, glossaries or lists of cited works (though an edited book with several authors usually places cited works at the end of each authored chapter). The inside back cover page, like that inside the front cover, is usually blank. The back cover is the usual place for the book's ISBN and maybe a photograph of the author(s)/ editor(s), perhaps with a short introduction to them. Also here often appear plot summaries, barcodes and excerpted reviews of the book. The body of the books is usually divided into parts, chapters, sections and sometimes subsections that are composed of at least a paragraph or more. 
Size The size of a book is generally measured by the height against the width of a leaf, or sometimes the height and width of its cover. A series of terms commonly used by contemporary libraries and publishers for the general sizes of modern books ranges from folio (the largest), to quarto (smaller) and octavo (still smaller). Historically, these terms referred to the format of the book, a technical term used by printers and bibliographers to indicate the size of a leaf in terms of the size of the original sheet. For example, a quarto was a book printed on sheets of paper folded in half twice, with the first fold at right angles to the second, to produce 4 leaves (or 8 pages), each leaf one fourth the size of the original sheet printed – note that a leaf refers to the single piece of paper, whereas a page is one side of a leaf. Because the actual format of many modern books cannot be determined from examination of the books, bibliographers may not use these terms in scholarly descriptions. Illustration While some form of book illustration has existed since the invention of writing, the modern Western tradition of illustration began with 15th-century block books, in which the book's text and images were cut into the same block. Techniques such as engraving, etching, and lithography have also been influential. Manufacturing The methods used for the printing and binding of books continued fundamentally unchanged from the 15th century into the early 20th century. While there was more mechanization, a book printer in 1900 still used movable metal type assembled into words, lines, and pages to create copies. Modern paper books are printed on paper designed specifically for printing. Traditionally, book papers are off-white or low-white papers (easier to read), are opaque to minimize the show-through of text from one side of the page to the other and are (usually) made to tighter caliper or thickness specifications, particularly for case-bound books. Different paper qualities are used depending on the type of book: Machine finished coated papers, woodfree uncoated papers, coated fine papers and special fine papers are common paper grades. Today, the majority of books are printed by offset lithography. When a book is printed, the pages are laid out on the plate so that after the printed sheet is folded the pages will be in the correct sequence. Books tend to be manufactured nowadays in a few standard sizes. The sizes of books are usually specified as "trim size": the size of the page after the sheet has been folded and trimmed. The standard sizes result from sheet sizes (therefore machine sizes) which became popular 200 or 300 years ago, and have come to dominate the industry. British conventions in this regard prevail throughout the English-speaking world, except for the US. The European book manufacturing industry works to a completely different set of standards. Hardcover books have a stiff binding, while paperback books have cheaper, flexible covers which tend to be less durable. Publishers may produce low-cost pre-publication copies known as galleys or "bound proofs" for promotional purposes, such as generating reviews in advance of publication. Galleys are usually made as cheaply as possible, since they are not intended for sale. Printing Some books, particularly those with shorter runs (i.e. 
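The folding arithmetic behind these format names can be made concrete with a short sketch. The Python below is an illustration only: the fold counts for folio, quarto, and octavo are the traditional ones, and the function name is invented for this example.

```python
# Minimal sketch of the sheet-folding arithmetic described above.
# Each fold doubles the number of leaves; a page is one side of a leaf.

FOLDS = {"folio": 1, "quarto": 2, "octavo": 3}

def leaves_and_pages(fmt: str) -> tuple[int, int]:
    """Return (leaves, pages) produced from one sheet for a given format."""
    folds = FOLDS[fmt]
    leaves = 2 ** folds
    pages = 2 * leaves
    return leaves, pages

for name in FOLDS:
    leaves, pages = leaves_and_pages(name)
    print(f"{name}: {leaves} leaves, {pages} pages per sheet")
# quarto -> 4 leaves, 8 pages, matching the description above
```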
with fewer copies) will be printed on sheet-fed offset presses, but most books are now printed on web presses, which are fed by a continuous roll of paper, and can consequently print more copies in a shorter time. As the production line circulates, a complete "book" is collected together in one stack of pages, and another machine carries out the folding, pleating, and stitching of the pages into bundles of signatures (sections of pages) ready to go into the gathering line. The pages of a book are printed two at a time, not as one complete book. Excess numbers are printed to make up for any spoilage due to make-readies or test pages to assure final print quality. A make-ready is the preparatory work carried out by the pressmen to get the printing press up to the required quality of impression. Included in make-ready is the time taken to mount the plate onto the machine, clean up any mess from the previous job, and get the press up to speed. As soon as the pressman decides that the printing is correct, all the make-ready sheets will be discarded, and the press will start making books. Similar make readies take place in the folding and binding areas, each involving spoilage of paper. Recent developments in book manufacturing include the development of digital printing. Book pages are printed, in much the same way as an office copier works, using toner rather than ink. Each book is printed in one pass, not as separate signatures. Digital printing has permitted the manufacture of much smaller quantities than offset, in part because of the absence of make readies and of spoilage. Digital printing has opened up the possibility of print-on-demand, where no books are printed until after an order is received from a customer. Binding After the signatures are folded and gathered, they move into the bindery. In the middle of last century there were still many trade binders—stand-alone binding companies which did no printing, specializing in binding alone. At that time, because of the dominance of letterpress printing, typesetting and printing took place in one location, and binding in a different factory. When type was all metal, a typical book's worth of type would be bulky, fragile and heavy. The less it was moved in this condition the better: so printing would be carried out in the same location as the typesetting. Printed sheets on the other hand could easily be moved. Now, because of increasing computerization of preparing a book for the printer, the typesetting part of the job has flowed upstream, where it is done either by separately contracting companies working for the publisher, by the publishers themselves, or even by the authors. Mergers in the book manufacturing industry mean that it is now unusual to find a bindery which is not also involved in book printing (and vice versa). If the book is a hardback its path through the bindery will involve more points of activity than if it is a paperback. Unsewn binding is now increasingly common. The signatures of a book can also be held together by "Smyth sewing" using needles, "McCain sewing", using drilled holes often used in schoolbook binding, or "notch binding", where gashes about an inch long are made at intervals through the fold in the spine of each signature. The rest of the binding process is similar in all instances. Sewn and notch bound books can be bound as either hardbacks or paperbacks. Finishing "Making cases" happens off-line and prior to the book's arrival at the binding line. 
In the most basic case-making, two pieces of cardboard are placed onto a glued piece of cloth with a space between them, into which is glued a thinner board cut to the width of the spine of the book. The overlapping edges of the cloth (about 5/8" all round) are folded over the boards and pressed down to adhere. After case-making, the stack of cases goes to the foil-stamping area for the addition of decorations and type.
Retail and distribution
Bookselling is the commercial trading of books; it forms the retail and distribution end of the publishing process.
Accessible publishing
Accessible publishing is an approach to publishing and book design whereby books and other texts are made available in alternative formats designed to aid or replace the reading process. It is particularly relevant for people who are blind, visually impaired, or otherwise print-disabled. Alternative formats developed to aid different readers include larger fonts, specialized fonts for certain kinds of reading disabilities, braille, ebooks, automated audiobooks, and DAISY digital talking books. Accessible publishing has been made easier through developments in technology such as print on demand, ebook readers, the XML structured data format, the EPUB3 format, and the Internet.
Audiobooks
An audiobook or talking book is a recording of a book or other work being read out loud. A reading of the complete text is described as "unabridged", while readings of shorter versions are abridgements. Spoken audio has been available in schools and public libraries, and to a lesser extent in music shops, since the 1930s. Many spoken-word albums were made prior to the age of cassettes, compact discs, and downloadable audio, often of poetry and plays rather than books. It was not until the 1980s that the medium began to attract book retailers, who then started displaying audiobooks on bookshelves rather than in separate displays.
Ebooks
An ebook (short for electronic book), also spelled e-book or eBook, is a book publication made available in electronic form, consisting of text, images, or both, readable on the flat-panel display of computers or other electronic devices. Although sometimes defined as "an electronic version of a printed book", some ebooks exist without a printed equivalent. Ebooks can be read on dedicated e-reader devices and on any computer device that features a controllable viewing screen, including desktop computers, laptops, tablets, and smartphones. In some markets, the sale of printed books has decreased due to the increased use of ebooks. However, printed books still largely outsell ebooks, and many people have a preference for print.
Dummy books
Dummy books (or faux books) are objects designed to imitate real books in appearance. Some are complete books with blank pages, others are hollow, and in other cases a whole panel is carved with spines and painted to look like a row of books; the titles may also be fictitious. Dummy books are displayed for many reasons: to give visitors the impression of a vast wealth of information and to inflate the owner's apparent wealth, to conceal something, for shop displays, or simply for decoration.
In the early 19th century at Gwrych Castle, North Wales, Lloyd Hesketh Bamford-Hesketh was known for the vast collection of books in his library. Later in that century, however, the public became aware that parts of the library were a fabrication: dummy books had been built and then locked behind glass doors to stop anyone from trying to access them. From this a proverb was born: "Like Hesky's library, all outside".
Content
Libraries, bookstores, and collections commonly divide books into fiction and non-fiction, though other types exist beyond this division. Other books, which remain unpublished or are primarily published as part of different business functions (such as phone directories), may not be sold by bookstores or collected by libraries. Manuscripts, logbooks, and other records may be classified and stored differently by special collections or archives.
Fiction
Fiction books contain invented material, typically narratives. Other literary forms, such as poetry, are included in this broad category. Most fiction is additionally categorized by literary form and genre. The novel is the most common form of fiction book. Novels are extended works of narrative fiction, typically featuring a plot, setting, themes, and characters. The novel has had a tremendous impact on entertainment and publishing markets. A novella is a work of prose fiction typically between 17,500 and 40,000 words, and a novelette one between 7,500 and 17,500. A short story may be any length up to 10,000 words, but these word lengths vary. Comic books or graphic novels are books in which the story is illustrated. The characters and narrators use speech or thought bubbles to express verbal language.
Non-fiction
Non-fiction books are in principle based on fact, encompassing subjects such as history, politics, and social and cultural issues, as well as autobiographies and memoirs. Nearly all academic literature is non-fiction.
Reference
Reference books are non-fiction books intended to be quickly consulted for information rather than read from beginning to end. The writing style used in these works is informative; the authors avoid opinions and the use of the first person, and emphasize facts. An almanac is a very general reference book, usually in one volume, with lists of data and information on many topics. An encyclopedia is a book or set of books designed to have more in-depth articles on many topics. A book listing words, their etymology, meanings, and other information is called a dictionary. An atlas is a book containing a collection of maps. A specialized reference work giving information about a particular field or technique, often intended for professional use, is often called a handbook. Books which try to list references and abstracts in a certain broad area may be called an index (such as the Engineering Index) or abstracts (such as Chemical Abstracts and Biological Abstracts).
Technical
Books with technical information on how to do something or how to use some equipment are called instruction manuals. Other popular how-to books include cookbooks and home improvement books.
Educational
Students often carry textbooks and schoolbooks for study purposes. Lap books are a learning tool created by students. Elementary school pupils often use workbooks, which are published with spaces or blanks to be filled in for study or homework. In US higher education, it is common for a student to take an exam using a blue book.
Religious Religious texts, including scripture, are texts which various religions consider to be of central importance to their religious tradition. They often feature a compilation or discussion of beliefs, ritual practices, moral commandments and laws, ethical conduct, spiritual aspirations, and admonitions for fostering a religious community. Hymnals are books with collections of musical hymns that can typically be found in churches. Prayerbooks or missals are books that contain written prayers and are commonly carried by monks, nuns, and other devoted followers or clergy. Children's books Unpublished Many books are only used to record personal ideas, notes, and accounts, such as notebooks, logbooks, commonplace books, and diaries. These books are rarely published and are typically destroyed or remain private. Address books, phone books, and calendar/appointment books are commonly used for recording appointments, meetings and personal contact information. Businesses historically used accounting books such as journals and ledgers to record financial data in a practice called bookkeeping (now usually held on computers rather than in hand-written form). Collection and classification Personal and public libraries, archives and other forms of book collection have led to the creation of many different organization and classification strategies. In the 19th and 20th century, libraries and library professionals systematized book collecting and classification systems to respond to the growing industry. The most widely used system is ISBN, which has provided unique identifiers for books since 1970. Libraries A library is a collection of books, and possibly other materials and media, that is accessible for use by its members and members of allied institutions. Libraries provide physical (hard copies) or digital (soft copies) materials, and may be a physical location, a virtual space, or both. A library's collection normally includes printed materials which may be borrowed, and usually also includes a reference section of publications which may only be utilized inside the premises. Resources such as commercial releases of films, television programs, other video recordings, radio, music and audio recordings may be available in many formats. These include DVDs, Blu-rays, CDs, cassettes, or other applicable formats such as microform. They may also provide access to information, music or other content held on bibliographic databases. Libraries can vary widely in size and may be organized and maintained by a public body such as a government, an institution (such as a school or museum), a corporation, or a private individual. In addition to providing materials, libraries also provide the services of librarians who are trained experts in finding, selecting, circulating and organising information while interpreting information needs and navigating and analyzing large amounts of information with a variety of resources. Library buildings often provide quiet areas for studying, as well as common areas for group study and collaboration, and may provide public facilities for access to their electronic resources, such as computers and access to the Internet. The library's clientele and general services offered vary depending on its type: users of a public library have different needs from those of a special library or academic library, for example. Libraries may also be community hubs, where programs are made available and people engage in lifelong learning. 
Modern libraries extend their services beyond the physical walls of the building by providing material accessible by electronic means, including from home via the Internet. Identification and classification In 2011, the International Federation of Library Associations and Institutions (IFLA) created the International Standard Bibliographic Description (ISBD) in order to standardize descriptions in bibliographies and library catalogs. Each book is specified by an International Standard Book Number, or ISBN, which is meant to be unique to every edition of every book produced by participating publishers, worldwide. It is managed by the ISBN Society. An ISBN has four parts: the first part is the country code, the second the publisher code, and the third the title code. The last part is a check digit, and can take values from 0–9 and X (10). The EAN Barcodes numbers for books are derived from the ISBN by prefixing 978, for Bookland, and calculating a new check digit. Commercial publishers in industrialized countries generally assign ISBNs to their books, so buyers may presume that the ISBN is part of a total international system, with no exceptions. However, many government publishers, in industrial as well as developing countries, do not participate fully in the ISBN system, and publish books which do not have ISBNs. A large or public collection requires a catalogue. Codes called "call numbers" relate the books to the catalogue, and determine their locations on the shelves. Call numbers are based on a Library classification system. The call number is placed on the spine of the book, normally a short distance before the bottom, and inside. Institutional or national standards, such as ANSI/NISO Z39.41 – 1997, establish the correct way to place information (such as the title, or the name of the author) on book spines, and on "shelvable" book-like objects, such as containers for DVDs, video tapes and software. One of the earliest and most widely known systems of cataloguing books is the Dewey Decimal System. Another widely known system is the Library of Congress Classification system. Both systems are biased towards subjects which were well represented in US libraries when they were developed, and hence have problems handling new subjects, such as computing, or subjects relating to other cultures. Information about books and authors can be stored in databases like online general-interest book databases. Metadata, which means "data about data" is information about a book. Metadata about a book may include its title, ISBN or other classification number (see above), the names of contributors (author, editor, illustrator) and publisher, its date and size, the language of the text, its subject matter, etc. Classification systems Bliss bibliographic classification (BC) Chinese Library Classification (CLC) Colon Classification Dewey Decimal Classification (DDC) Harvard-Yenching Classification Library of Congress Classification (LCC) New Classification Scheme for Chinese Libraries Universal Decimal Classification (UDC) Conservation Social and cultural issues Reception The impact of books can be various, and record of that reception comes in several formats: starting with initial public reception in contemporary newspapers, pop culture and correspondence, and then developing over time with different forms of literary criticism by professional and academic critics. 
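The check-digit arithmetic behind an ISBN and its EAN "Bookland" barcode can be illustrated with a short sketch. The Python below is a simplified illustration, not a full validator: it assumes a bare ISBN-10 without hyphens, applies the standard modulo-11 rule (with "X" standing for 10) and the EAN-13 modulo-10 rule after prefixing 978, and uses function names invented for this example.

```python
# Hedged sketch of ISBN-10 check-digit calculation and conversion to an
# EAN-13 ("Bookland") number, as described above.

def isbn10_check_digit(first9: str) -> str:
    """Check digit for the first nine digits of an ISBN-10 (mod 11)."""
    total = sum((10 - i) * int(d) for i, d in enumerate(first9))
    remainder = (11 - total % 11) % 11
    return "X" if remainder == 10 else str(remainder)

def isbn10_to_ean13(isbn10: str) -> str:
    """Prefix 978, drop the old check digit, and compute the EAN-13 check digit."""
    body = "978" + isbn10.replace("-", "")[:9]
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(body))
    return body + str((10 - total % 10) % 10)

print(isbn10_check_digit("030640615"))  # -> "2", giving ISBN-10 0-306-40615-2
print(isbn10_to_ean13("0306406152"))    # -> "9780306406157"
```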
For the publishing industry, the book review is an important means of increasing awareness and shaping the reception of a book; it can make or break public opinion about a newly published title.
Book reviews
Book censorship and bans
Book censorship is the act of some authority taking measures to suppress ideas and information within a book; censorship more generally is the regulation of speech and other forms of expression by an entrenched authority. Censors typically include concerned parents, community members who react to a text without reading it, and local or national organizations. Books have been censored by authoritarian dictatorships to silence dissent, for example in the People's Republic of China, Nazi Germany, and the Soviet Union. Books are most often censored for age appropriateness, offensive language, and sexual content, among other reasons. Similarly, religions may issue lists of banned books, such as the historical example of the Catholic Church's Index Librorum Prohibitorum and Ayatollah Khomeini's ban on Salman Rushdie's The Satanic Verses; such bans do not always carry legal force. Censorship can also be enacted at the national or subnational level and can carry legal penalties. In many cases, the authors of censored books have faced harsh sentences, exile from their country, or even execution.
Book burning
Technology
Media and communication
null
3794
https://en.wikipedia.org/wiki/Brassicaceae
Brassicaceae
Brassicaceae, or the older name Cruciferae, is a medium-sized and economically important family of flowering plants commonly known as the mustards, the crucifers, or the cabbage family. Most are herbaceous plants, while some are shrubs. The leaves are simple (although sometimes deeply incised), lack stipules, and appear alternately on stems or in rosettes. The inflorescences are terminal and lack bracts. The flowers have four free sepals, four free alternating petals, two shorter free stamens and four longer free stamens. The fruit has seeds in rows, divided by a thin wall (or septum). The family contains 372 genera and 4,060 accepted species. The largest genera are Draba (440 species), Erysimum (261 species), Lepidium (234 species), Cardamine (233 species), and Alyssum (207 species). The family contains the cruciferous vegetables, including species such as Brassica oleracea (cultivated as cabbage, kale, cauliflower, broccoli and collards), Brassica rapa (turnip, Chinese cabbage, etc.), Brassica napus (rapeseed, etc.), Raphanus sativus (common radish) and Armoracia rusticana (horseradish), as well as the cut flower Matthiola (stock) and the model organism Arabidopsis thaliana (thale cress). Pieris rapae and other butterflies of the family Pieridae are some of the best-known pests of Brassicaceae species planted as commercial crops. The cabbage looper moth (Trichoplusia ni) is also becoming increasingly problematic for crucifers due to its resistance to commonly used pest control methods. Some rarer Pieris butterflies, such as P. virginiensis, depend upon native mustards for their survival in their native habitats. Some non-native mustards, such as Alliaria petiolata (garlic mustard), an extremely invasive species in the United States, can be toxic to their larvae.
Description
Species belonging to the Brassicaceae are mostly annual, biennial, or perennial herbaceous plants; some are dwarf shrubs or shrubs, and a very few are vines. Although generally terrestrial, a few species, such as water awlwort, live submerged in fresh water. They may have a taproot or a sometimes woody caudex with few or many branches; some have thin or tuberous rhizomes, or rarely develop runners. Few species have multicellular glands. Hairs consist of one cell and occur in many forms: from simple to forked, star-, tree- or T-shaped, rarely taking the form of a shield or scale. They are never topped by a gland. The stems may be upright, ascend towards the tip, or lie flat; they are mostly herbaceous but sometimes woody. Stems may carry leaves or be leafless (as in Caulanthus), and some species lack stems altogether. The leaves do not have stipules, but there may be a pair of glands at the base of leaf stalks and flower stalks. The leaf may be sessile or have a leafstalk. The leaf blade is usually simple, entire or dissected, rarely trifoliolate or pinnately compound. A leaf rosette at the base may be present or absent. The leaves along the stem are almost always alternately arranged, rarely apparently opposite. The stomata are of the anisocytic type. The genome size of Brassicaceae compared to that of other angiosperm families is very small to small (less than about 3,425 Mbp per cell), varying from 150 Mbp in Arabidopsis thaliana and Sphaerocardamum spp. to 2,375 Mbp in Bunias orientalis. The number of homologous chromosome sets varies from four (n=4) in some Physaria and Stenopetalum species, through five (n=5) in other Physaria and Stenopetalum species, Arabidopsis thaliana and a Matthiola species, to seventeen (n=17).
About 35% of the species in which chromosomes have been counted have eight sets (n=8). Due to polyploidy, some species may have up to 256 individual chromosomes, with some very high counts in the North American species of Cardamine, such as C. diphylla. Hybridisation is not unusual in Brassicaceae, especially in Arabis, Rorippa, Cardamine and Boechera. Hybridisation between species originating in Africa and California, and subsequent polyploidisation is surmised for Lepidium species native to Australia and New Zealand. Inflorescence and flower Flowers may be arranged in racemes, panicles, or corymbs, with pedicels sometimes in the axil of a bract, and few species have flowers that sit individually on flower stems that spring from the axils of rosette leaves. The orientation of the pedicels when fruits are ripe varies dependent on the species. The flowers are bisexual, star symmetrical (zygomorphic in Iberis and Teesdalia) and the ovary positioned above the other floral parts. Each flower has four free or seldom merged sepals, the lateral two sometimes with a shallow spur, which are mostly shed after flowering, rarely persistent, may be reflexed, spreading, ascending, or erect, together forming a tube-, bell- or urn-shaped calyx. Each flower has four petals, set alternating with the sepals, although in some species these are rudimentary or absent. They may be differentiated into a blade and a claw or not, and consistently lack basal appendages. The blade is entire or has an indent at the tip, and may sometimes be much smaller than the claws. The mostly six stamens are set in two whorls: usually the two lateral, outer ones are shorter than the four inner stamens, but very rarely the stamens can all have the same length, and very rarely species have different numbers of stamens such as sixteen to twenty four in Megacarpaea, four in Cardamine hirsuta, and two in Coronopus. The filaments are slender and not fused, while the anthers consist of two pollen producing cavities, and open with longitudinal slits. The pollen grains are tricolpate. The receptacle carries a variable number of nectaries, but these are always present opposite the base of the lateral stamens. Ovary, fruit and seed There is one superior pistil that consists of two carpels that may either sit directly above the base of the stamens or on a stalk. It initially consists of only one cavity but during its further development a thin wall grows that divides the cavity, both placentas and separates the two valves (a so-called false septum). Rarely, there is only one cavity without a septum. The 2–600 ovules are usually along the side margin of the carpels, or rarely at the top. Fruits are capsules that open with two valves, usually towards the top. These are called silique if at least three times longer than wide, or silicle if the length is less than three times the width. The fruit is very variable in its other traits. There may be one persistent style that connects the ovary to the globular or conical stigma, which is undivided or has two spreading or connivent lobes. The variously shaped seeds are usually yellow or brown in color, and arranged in one or two rows in each cavity. The seed leaves are entire or have a notch at the tip. The seed does not contain endosperm. 
Differences with similar families Brassicaceae have a bisymmetrical corolla (left is mirrored by right, stem-side by out-side, but each quarter is not symmetrical), a septum dividing the fruit, lack stipules and have simple (although sometimes deeply incised) leaves. The sister family Cleomaceae has bilateral symmetrical corollas (left is mirrored by right, but stem-side is different from out-side), stipules and mostly palmately divided leaves, and mostly no septum. Capparaceae generally have a gynophore, sometimes an androgynophore, and a variable number of stamens. Phytochemistry Almost all Brassicaceae have C3 carbon fixation. The only exceptions are a few Moricandia species, which have a hybrid system between C3 and C4 carbon fixation, C4 fixation being more efficient in drought, high temperature and low nitrate availability. Brassicaceae contain different cocktails of dozens of glucosinolates. They also contain enzymes called myrosinases, that convert the glucosinolates into isothiocyanates, thiocyanates and nitriles, which are toxic to many organisms, and so help guard against herbivory. Taxonomy Carl Linnaeus in 1753 regarded the Brassicaceae as a natural group, naming them "Klass" Tetradynamia. Alfred Barton Rendle placed the family in the order Rhoeadales, while George Bentham and Joseph Dalton Hooker in their system published from 1862 to 1883, assigned it to their cohort Parietales (now the class Violales). Following Bentham and Hooker, John Hutchinson in 1948 and again in 1964 thought the Brassicaceae to stem from near the Papaveraceae. In 1994, a group of scientists including Walter Stephen Judd suggested to include the Capparaceae in the Brassicaceae. Early DNA-analysis showed that the Capparaceae—as defined at that moment—were paraphyletic, and it was suggested to assign the genera closest to the Brassicaceae to the Cleomaceae. The Cleomaceae and Brassicaceae diverged approximately 41 million years ago. All three families have consistently been placed in one order (variably called Capparales or Brassicales). The APG II system merged Cleomaceae and Brassicaceae. Other classifications have continued to recognize the Capparaceae, but with a more restricted circumscription, either including Cleome and its relatives in the Brassicaceae or recognizing them in the segregate family Cleomaceae. The APG III system has recently adopted this last solution, but this may change as a consensus arises on this point. Current insights in the relationships of the Brassicaceae, based on a 2012 DNA-analysis, are summarized in the following tree. Relationships within the family Early classifications depended on morphological comparison only, but because of extensive convergent evolution, these do not provide a reliable phylogeny. Although a substantial effort was made through molecular phylogenetic studies, the relationships within the Brassicaceae have not always been well resolved yet. It has long been clear that the Aethionema are sister of the remainder of the family. One analysis from 2014 represented the relation between 39 tribes with the following tree. Genera As of October 2023 Plants of the World Online accepts 346 genera. Etymology The name Brassicaceae comes to international scientific vocabulary from Neo-Latin, from Brassica, the type genus, + -aceae, a standardized suffix for plant family names in modern taxonomy. The genus name comes from the Classical Latin word brassica, referring to cabbage and other cruciferous vegetables. 
The alternative older name, Cruciferae, meaning "cross-bearing", describes the four petals of mustard flowers, which resemble a cross. Cruciferae is one of eight plant family names, not derived from a genus name and without the suffix -aceae that are authorized alternative names. Distribution Brassicaceae can be found almost on the entire land surface of the planet, but the family is absent from Antarctica, and also absent from some areas in the tropics i.e. northeastern Brazil, the Congo basin, Maritime Southeast Asia and tropical Australasia. The area of origin of the family is possibly the Irano-Turanian region, where approximately 900 species occur in 150 different genera. About 530 of those 900 species are endemics. Next in abundance comes the Mediterranean region, with around 630 species (290 of which are endemic) in 113 genera. The family is less prominent in the Saharo-Arabian region—65 genera, 180 species of which 62 are endemic—and North America (comprising the North American Atlantic region and the Rocky Mountain floristic region)—99 genera, 780 species of which 600 are endemic. South America has 40 genera containing 340 native species, Southern Africa 15 genera with over 100 species, and Australia and New-Zealand have 19 genera with 114 species between them. Ecology Brassicaceae are almost exclusively pollinated by insects. A chemical mechanism in the pollen is active in many species to avoid selfing. Two notable exceptions are exclusive self-pollination in closed flowers in Cardamine chenopodifolia, and wind pollination in Pringlea antiscorbutica. Although it can be cross-pollinated, Alliaria petiolata (garlic mustard) is self-fertile. Most species reproduce sexually through seed, but Cardamine bulbifera produces gemmae and in others, such as Cardamine pentaphyllos, the coral-like roots easily break into segments, that will grow into separate plants. In some species, such as in the genus Cardamine, seed pods open with force and so catapult the seeds quite far. Many of these have sticky seed coats, assisting long-distance dispersal by animals, and this may also explain several intercontinental dispersal events in the genus, and its near global distribution. Brassicaceae are common on serpentine and dolomite rich in magnesium. Over a hundred species in the family accumulate heavy metals, particularly zinc and nickel, which is a record percentage. Several Alyssum species can accumulate nickel up to 0.3% of their dry weight, and may be useful in soil remediation or even bio-mining. Brassicaceae contain glucosinolates as well as myrosinases inside their cells. When the cell is damaged, the myrosinases hydrolise the glucosinolates, leading to the synthesis of isothiocyanates, which are compounds toxic to most animals, fungi and bacteria. Some insect herbivores have developed counter adaptations such as rapid absorption of the glucosinates, quick alternative breakdown into non-toxic compounds and avoiding cell damage. In the whites family (Pieridae), one counter mechanism involves glucosinolate sulphatase, which changes the glucosinolate, so that it cannot be converted to isothiocyanate. A second is that the glucosinates are quickly broken down, forming nitriles. Differences between the mixtures of glucosinolates between species and even within species is large, and individual plants may produce in excess of fifty individual substances. The energy penalty for synthesising all these glucosinolates may be as high as 15% of the total needed to produce a leaf. 
Barbarea vulgaris (bittercress) also produces triterpenoid saponins. These adaptations and counter adaptations probably have led to extensive diversification in both the Brassicaceae and one of its major pests, the butterfly family Pieridae. A particular cocktail of volatile glucosinates triggers egg-laying in many species. Thus a particular crop can sometimes be protected by planting bittercress as a deadly bait, for the saponins kill the caterpillars, but the butterfly is still lured by the bittercress to lay its egg on the leaves. A moth that feeds on a range of Brassicaceae is the diamondback moth (Plutella xylostella). Like the Pieridae, it is capable of converting isothiocyanates into less problematic nitriles. Managing this pest in crops became more complicated after resistance developed against a toxin produced by Bacillus thuringiensis, which is used as a wide spectrum biological plant protection against caterpillars. Parasitoid wasps that feed on such insect herbivores are attracted to the chemical compounds released by the plants, and thus are able to locate their prey. The cabbage aphid (Brevicoryne brassicae) stores glucosinolates and synthesises its own myrosinases, which may deter its potential predators. Since its introduction in the 19th century, Alliaria petiolata has been shown to be extremely successful as an invasive species in temperate North America due, in part, to its secretion of allelopathic chemicals. These inhibit the germination of most competing plants and kill beneficial soil fungi needed by many plants, such as many tree species, to successfully see their seedlings grow to maturity. The monoculture formation of an herb layer carpet by this plant has been shown to dramatically alter forests, making them wetter, having fewer and fewer trees, and having more vines such as poison ivy (Toxicodendron radicans). The overall herb layer biodiversity is also drastically reduced, particularly in terms of sedges and forbs. Research has found that removing 80% of the garlic mustard infestation plants did not lead to a particularly significant recovery of that diversity. Instead, it required around 100% removal. Given that not one of an estimated 76 species that prey on the plant has been approved for biological control in North America and the variety of mechanisms the plant has to ensure its dominance without them (e.g. high seed production, self-fertility, allelopathy, spring growth that occurs before nearly all native plants, roots that break easily when pulling attempts are made, a complete lack of palatability for herbivores at all life stages, etc.) it is unlikely that such a high level of control can be established and maintained on the whole. It is estimated that adequate control can be achieved with the introduction of two European weevils, including one that is monophagous. The USDA's TAG group has blocked these introductions since 2004. In addition to being invasive, garlic mustard also is a threat to native North American Pieris butterflies such as P. oleracea, as they preferentially oviposit on it, although it is toxic to their larvae. Invasive aggressive mustard species are known for being self-fertile, seeding very heavily with small seeds that have a lengthy lifespan coupled with a very high rate of viability and germination, and for being completely unpalatable to both herbivores and insects in areas to which they are not native. Garlic mustard is toxic to several rarer North American Pieris species. 
Uses This family includes important agricultural crops, among which many vegetables such as cabbage, broccoli, cauliflower, kale, Brussels sprouts, collard greens, Savoy, kohlrabi, and gai lan (Brassica oleracea), turnip, napa cabbage, mizuna, bok choy and rapini (Brassica rapa), rocket salad/arugula (Eruca sativa), garden cress (Lepidium sativum), watercress (Nasturtium officinale) and radish (Raphanus) and a few spices like horseradish (Armoracia rusticana), wasabi (Eutrema japonicum), white, Indian and black mustard (Sinapis alba, Brassica juncea and B. nigra respectively). Vegetable oil is produced from the seeds of several species such as Brassica napus (rapeseed oil), perhaps providing the largest volume of vegetable oils of any species. Woad (Isatis tinctoria) was used in the past to produce a blue textile dye (indigo), but has largely been replaced by the same substance from unrelated tropical species like Indigofera tinctoria. Pringlea antiscorbutica, commonly known as Kerguelen cabbage, is edible, containing high levels of potassium. Its leaves contain a vitamin C-rich oil, a fact which, in the days of sailing ships, made it very attractive to sailors suffering from scurvy, hence the species name's epithet antiscorbutica, which means "against scurvy" in Low Latin. It was essential to the diets of the whalers on Kerguelen when pork, beef, or seal meat was used up. The Brassicaceae also includes ornamentals, such as species of Aethionema, Alyssum, Arabis, Aubrieta, Aurinia, Cheiranthus, Erysimum, Hesperis, Iberis, Lobularia, Lunaria, Malcolmia, and Matthiola. Honesty (Lunaria annua) is cultivated for the decorative value of the translucent remains of the fruits after drying. It can be a pest species in areas where it is not native. The small Eurasian weed Arabidopsis thaliana is widely used as model organism in the study of the molecular biology of flowering plants (Angiospermae). Some species are useful as food plants for Lepidoptera, such as certain wild mustard and cress species, such as Turritis glabra and Boechera laevigata that are utilized by several North American butterflies. Gallery
Biology and health sciences
Brassicales
null
3876
https://en.wikipedia.org/wiki/Binomial%20distribution
Binomial distribution
In probability theory and statistics, the binomial distribution with parameters and is the discrete probability distribution of the number of successes in a sequence of independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success (with probability ) or failure (with probability ). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., , the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the binomial test of statistical significance. The binomial distribution is frequently used to model the number of successes in a sample of size drawn with replacement from a population of size . If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for much larger than , the binomial distribution remains a good approximation, and is widely used. Definitions Probability mass function If the random variable follows the binomial distribution with parameters and , we write . The probability of getting exactly successes in independent Bernoulli trials (with the same rate ) is given by the probability mass function: for , where is the binomial coefficient. The formula can be understood as follows: is the probability of obtaining the sequence of independent Bernoulli trials in which trials are "successes" and the remaining trials result in "failure". Since the trials are independent with probabilities remaining constant between them, any sequence of trials with successes (and failures) has the same probability of being achieved (regardless of positions of successes within the sequence). There are such sequences, since the binomial coefficient counts the number of ways to choose the positions of the successes among the trials. The binomial distribution is concerned with the probability of obtaining any of these sequences, meaning the probability of obtaining one of them () must be added times, hence . In creating reference tables for binomial distribution probability, usually, the table is filled in up to values. This is because for , the probability can be calculated by its complement as Looking at the expression as a function of , there is a value that maximizes it. This value can be found by calculating and comparing it to 1. There is always an integer that satisfies is monotone increasing for and monotone decreasing for , with the exception of the case where is an integer. In this case, there are two values for which is maximal: and . is the most probable outcome (that is, the most likely, although this can still be unlikely overall) of the Bernoulli trials and is called the mode. Equivalently, . Taking the floor function, we obtain . Example Suppose a biased coin comes up heads with probability 0.3 when tossed. The probability of seeing exactly 4 heads in 6 tosses is Cumulative distribution function The cumulative distribution function can be expressed as: where is the "floor" under , i.e. the greatest integer less than or equal to . It can also be represented in terms of the regularized incomplete beta function, as follows: which is equivalent to the cumulative distribution functions of the beta distribution and of the -distribution: Some closed-form bounds for the cumulative distribution function are given below. 
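As a concrete illustration of the probability mass function and cumulative distribution function described above, the following Python sketch evaluates them directly from the binomial coefficient and reproduces the biased-coin example (exactly 4 heads in 6 tosses with p = 0.3). The function names are invented for this example; this is a didactic sketch, not a statistics library.

```python
# pmf: P(X = k) = C(n, k) * p^k * (1 - p)^(n - k); cdf: sum of pmf up to k.
from math import comb

def binom_pmf(k: int, n: int, p: float) -> float:
    """Probability of exactly k successes in n trials with success probability p."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def binom_cdf(k: int, n: int, p: float) -> float:
    """Probability of at most k successes."""
    return sum(binom_pmf(i, n, p) for i in range(int(k) + 1))

# The biased-coin example above: exactly 4 heads in 6 tosses, p = 0.3.
print(binom_pmf(4, 6, 0.3))   # 0.059535
print(binom_cdf(4, 6, 0.3))   # ~0.989065
```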
Properties Expected value and variance If , that is, is a binomially distributed random variable, being the total number of experiments and p the probability of each experiment yielding a successful result, then the expected value of is: This follows from the linearity of the expected value along with the fact that is the sum of identical Bernoulli random variables, each with expected value . In other words, if are identical (and independent) Bernoulli random variables with parameter , then and The variance is: This similarly follows from the fact that the variance of a sum of independent random variables is the sum of the variances. Higher moments The first 6 central moments, defined as , are given by The non-central moments satisfy and in general where are the Stirling numbers of the second kind, and is the th falling power of . A simple bound follows by bounding the Binomial moments via the higher Poisson moments: This shows that if , then is at most a constant factor away from Mode Usually the mode of a binomial distribution is equal to , where is the floor function. However, when is an integer and is neither 0 nor 1, then the distribution has two modes: and . When is equal to 0 or 1, the mode will be 0 and correspondingly. These cases can be summarized as follows: Proof: Let For only has a nonzero value with . For we find and for . This proves that the mode is 0 for and for . Let . We find . From this follows So when is an integer, then and is a mode. In the case that , then only is a mode. Median In general, there is no single formula to find the median for a binomial distribution, and it may even be non-unique. However, several special results have been established: If is an integer, then the mean, median, and mode coincide and equal . Any median must lie within the interval . A median cannot lie too far away from the mean: . The median is unique and equal to when (except for the case when and is odd). When is a rational number (with the exception of \ and odd) the median is unique. When and is odd, any number in the interval is a median of the binomial distribution. If and is even, then is the unique median. Tail bounds For , upper bounds can be derived for the lower tail of the cumulative distribution function , the probability that there are at most successes. Since , these bounds can also be seen as bounds for the upper tail of the cumulative distribution function for . Hoeffding's inequality yields the simple bound which is however not very tight. In particular, for , we have that (for fixed , with ), but Hoeffding's bound evaluates to a positive constant. A sharper bound can be obtained from the Chernoff bound: where is the relative entropy (or Kullback-Leibler divergence) between an -coin and a -coin (i.e. between the and distribution): Asymptotically, this bound is reasonably tight; see for details. One can also obtain lower bounds on the tail , known as anti-concentration bounds. By approximating the binomial coefficient with Stirling's formula it can be shown that which implies the simpler but looser bound For and for even , it is possible to make the denominator constant: Statistical inference Estimation of parameters When is known, the parameter can be estimated using the proportion of successes: This estimator is found using maximum likelihood estimator and also the method of moments. This estimator is unbiased and uniformly with minimum variance, proven using Lehmann–Scheffé theorem, since it is based on a minimal sufficient and complete statistic (i.e.: ). 
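The moments, mode, and lower-tail bounds discussed above can be checked numerically. The sketch below is illustrative only: the function names are invented, the mode formula ignores the two-mode edge case when (n+1)p is an integer, and the bounds are stated for 1 ≤ k ≤ np (so that the logarithms are defined).

```python
# Mean np, variance np(1-p), usual mode floor((n+1)p), plus the Hoeffding
# and relative-entropy (Chernoff) upper bounds on P(X <= k).
from math import floor, exp, log

def mean_var_mode(n: int, p: float):
    return n * p, n * p * (1 - p), floor((n + 1) * p)

def hoeffding_lower_tail(k: int, n: int, p: float) -> float:
    """Hoeffding bound on P(X <= k) for k <= n*p."""
    return exp(-2 * n * (p - k / n) ** 2)

def chernoff_lower_tail(k: int, n: int, p: float) -> float:
    """exp(-n * D(k/n || p)), with D the relative entropy between two coins."""
    a = k / n
    d = a * log(a / p) + (1 - a) * log((1 - a) / (1 - p))
    return exp(-n * d)

print(mean_var_mode(10, 0.3))             # (3.0, 2.1, 3)
print(hoeffding_lower_tail(2, 20, 0.3))   # looser bound (~0.20)
print(chernoff_lower_tail(2, 20, 0.3))    # tighter bound (~0.10); exact is ~0.035
```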
It is also consistent both in probability and in MSE. This statistic is asymptotically normal thanks to the central limit theorem, because it is the same as taking the mean over Bernoulli samples. It has a variance of , a property which is used in various ways, such as in Wald's confidence intervals. A closed form Bayes estimator for also exists when using the Beta distribution as a conjugate prior distribution. When using a general as a prior, the posterior mean estimator is: The Bayes estimator is asymptotically efficient and as the sample size approaches infinity (), it approaches the MLE solution. The Bayes estimator is biased (how much depends on the priors), admissible and consistent in probability. Using the Bayesian estimator with the Beta distribution can be used with Thompson sampling. For the special case of using the standard uniform distribution as a non-informative prior, , the posterior mean estimator becomes: (A posterior mode should just lead to the standard estimator.) This method is called the rule of succession, which was introduced in the 18th century by Pierre-Simon Laplace. When relying on Jeffreys prior, the prior is , which leads to the estimator: When estimating with very rare events and a small (e.g.: if ), then using the standard estimator leads to which sometimes is unrealistic and undesirable. In such cases there are various alternative estimators. One way is to use the Bayes estimator , leading to: Another method is to use the upper bound of the confidence interval obtained using the rule of three: Confidence intervals for the parameter p Even for quite large values of n, the actual distribution of the mean is significantly nonnormal. Because of this problem several methods to estimate confidence intervals have been proposed. In the equations for confidence intervals below, the variables have the following meaning: n1 is the number of successes out of n, the total number of trials is the proportion of successes is the quantile of a standard normal distribution (i.e., probit) corresponding to the target error rate . For example, for a 95% confidence level the error  = 0.05, so  = 0.975 and  = 1.96. Wald method A continuity correction of may be added. Agresti–Coull method Here the estimate of is modified to This method works well for and . See here for . For use the Wilson (score) method below. Arcsine method Wilson (score) method The notation in the formula below differs from the previous formulas in two respects: Firstly, has a slightly different interpretation in the formula below: it has its ordinary meaning of 'the th quantile of the standard normal distribution', rather than being a shorthand for 'the th quantile'. Secondly, this formula does not use a plus-minus to define the two bounds. Instead, one may use to get the lower bound, or use to get the upper bound. For example: for a 95% confidence level the error  = 0.05, so one gets the lower bound by using , and one gets the upper bound by using . Comparison The so-called "exact" (Clopper–Pearson) method is the most conservative. (Exact does not mean perfectly accurate; rather, it indicates that the estimates will not be less conservative than the true value.) The Wald method, although commonly recommended in textbooks, is the most biased. 
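The Wald, Agresti–Coull, and Wilson intervals described above are straightforward to compute. The following sketch assumes an approximate 95% confidence level (z ≈ 1.96) and uses function names invented here; it illustrates the textbook formulas rather than a production implementation (no continuity correction, no special handling of x = 0 or x = n).

```python
# Three approximate confidence intervals for the binomial proportion p,
# given x successes in n trials, at roughly the 95% level.
from math import sqrt

Z = 1.96  # standard-normal quantile for alpha = 0.05 (two-sided)

def wald(x: int, n: int):
    p = x / n
    half = Z * sqrt(p * (1 - p) / n)
    return p - half, p + half

def agresti_coull(x: int, n: int):
    n_adj = n + Z**2
    p_adj = (x + Z**2 / 2) / n_adj
    half = Z * sqrt(p_adj * (1 - p_adj) / n_adj)
    return p_adj - half, p_adj + half

def wilson(x: int, n: int):
    p = x / n
    denom = 1 + Z**2 / n
    centre = (p + Z**2 / (2 * n)) / denom
    half = (Z / denom) * sqrt(p * (1 - p) / n + Z**2 / (4 * n**2))
    return centre - half, centre + half

print(wald(7, 20))
print(agresti_coull(7, 20))
print(wilson(7, 20))
```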
Related distributions Sums of binomials If and are independent binomial variables with the same probability , then is again a binomial variable; its distribution is : A Binomial distributed random variable can be considered as the sum of Bernoulli distributed random variables. So the sum of two Binomial distributed random variables and is equivalent to the sum of Bernoulli distributed random variables, which means . This can also be proven directly using the addition rule. However, if and do not have the same probability , then the variance of the sum will be smaller than the variance of a binomial variable distributed as . Poisson binomial distribution The binomial distribution is a special case of the Poisson binomial distribution, which is the distribution of a sum of independent non-identical Bernoulli trials . Ratio of two binomial distributions This result was first derived by Katz and coauthors in 1978. Let and be independent. Let . Then log(T) is approximately normally distributed with mean log(p1/p2) and variance . Conditional binomials If X ~ B(n, p) and Y | X ~ B(X, q) (the conditional distribution of Y, given X), then Y is a simple binomial random variable with distribution Y ~ B(n, pq). For example, imagine throwing n balls to a basket UX and taking the balls that hit and throwing them to another basket UY. If p is the probability to hit UX then X ~ B(n, p) is the number of balls that hit UX. If q is the probability to hit UY then the number of balls that hit UY is Y ~ B(X, q) and therefore Y ~ B(n, pq). Since and , by the law of total probability, Since the equation above can be expressed as Factoring and pulling all the terms that don't depend on out of the sum now yields After substituting in the expression above, we get Notice that the sum (in the parentheses) above equals by the binomial theorem. Substituting this in finally yields and thus as desired. Bernoulli distribution The Bernoulli distribution is a special case of the binomial distribution, where . Symbolically, has the same meaning as . Conversely, any binomial distribution, , is the distribution of the sum of independent Bernoulli trials, , each with the same probability . Normal approximation If is large enough, then the skew of the distribution is not too great. In this case a reasonable approximation to is given by the normal distribution and this basic approximation can be improved in a simple way by using a suitable continuity correction. The basic approximation generally improves as increases (at least 20) and is better when is not near to 0 or 1. Various rules of thumb may be used to decide whether is large enough, and is far enough from the extremes of zero or one: One rule is that for the normal approximation is adequate if the absolute value of the skewness is strictly less than 0.3; that is, if This can be made precise using the Berry–Esseen theorem. A stronger rule states that the normal approximation is appropriate only if everything within 3 standard deviations of its mean is within the range of possible values; that is, only if This 3-standard-deviation rule is equivalent to the following conditions, which also imply the first rule above. The rule is totally equivalent to request that Moving terms around yields: Since , we can apply the square power and divide by the respective factors and , to obtain the desired conditions: Notice that these conditions automatically imply that . 
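The conditional-binomial result above (Y ~ B(n, pq)) lends itself to a quick Monte Carlo check. The sketch below is illustrative only: it draws X ~ B(n, p) by summing Bernoulli trials, then Y given X as B(X, q), and compares the empirical mean of Y with npq.

```python
# Monte Carlo check of the two-basket example: balls that hit the first
# basket with probability p, then the second with probability q.
import random

def binom_draw(n: int, p: float) -> int:
    """Draw from B(n, p) by summing n Bernoulli trials."""
    return sum(random.random() < p for _ in range(n))

n, p, q, trials = 30, 0.6, 0.5, 100_000
ys = []
for _ in range(trials):
    x = binom_draw(n, p)          # balls that hit the first basket
    ys.append(binom_draw(x, q))   # of those, balls that hit the second

print(sum(ys) / trials)   # close to n * p * q = 9.0
```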
On the other hand, apply again the square root and divide by 3, Subtracting the second set of inequalities from the first one yields: and so, the desired first rule is satisfied, Another commonly used rule is that both values and must be greater than or equal to 5. However, the specific number varies from source to source, and depends on how good an approximation one wants. In particular, if one uses 9 instead of 5, the rule implies the results stated in the previous paragraphs. Assume that both values and are greater than 9. Since , we easily have that We only have to divide now by the respective factors and , to deduce the alternative form of the 3-standard-deviation rule: The following is an example of applying a continuity correction. Suppose one wishes to calculate for a binomial random variable . If has a distribution given by the normal approximation, then is approximated by . The addition of 0.5 is the continuity correction; the uncorrected normal approximation gives considerably less accurate results. This approximation, known as de Moivre–Laplace theorem, is a huge time-saver when undertaking calculations by hand (exact calculations with large are very onerous); historically, it was the first use of the normal distribution, introduced in Abraham de Moivre's book The Doctrine of Chances in 1738. Nowadays, it can be seen as a consequence of the central limit theorem since is a sum of independent, identically distributed Bernoulli variables with parameter . This fact is the basis of a hypothesis test, a "proportion z-test", for the value of using , the sample proportion and estimator of , in a common test statistic. For example, suppose one randomly samples people out of a large population and ask them whether they agree with a certain statement. The proportion of people who agree will of course depend on the sample. If groups of n people were sampled repeatedly and truly randomly, the proportions would follow an approximate normal distribution with mean equal to the true proportion p of agreement in the population and with standard deviation Poisson approximation The binomial distribution converges towards the Poisson distribution as the number of trials goes to infinity while the product converges to a finite limit. Therefore, the Poisson distribution with parameter can be used as an approximation to of the binomial distribution if is sufficiently large and is sufficiently small. According to rules of thumb, this approximation is good if and such that , or if and such that , or if and . Concerning the accuracy of Poisson approximation, see Novak, ch. 4, and references therein. Limiting distributions Poisson limit theorem: As approaches and approaches 0 with the product held fixed, the distribution approaches the Poisson distribution with expected value . de Moivre–Laplace theorem: As approaches while remains fixed, the distribution of approaches the normal distribution with expected value 0 and variance 1. This result is sometimes loosely stated by saying that the distribution of is asymptotically normal with expected value 0 and variance 1. This result is a specific case of the central limit theorem. Beta distribution The binomial distribution and beta distribution are different views of the same model of repeated Bernoulli trials. The binomial distribution is the PMF of successes given independent events each with a probability of success. 
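Before the relationship with the beta distribution is made precise below, the normal approximation (with continuity correction) and the Poisson approximation discussed above can be compared against the exact binomial probability with a minimal Python sketch; the function names are illustrative and only the standard library is assumed.

```python
from math import comb, exp, factorial, sqrt
from statistics import NormalDist

def binom_cdf(k, n, p):
    """Exact P(X <= k) for X ~ B(n, p), summed term by term."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def normal_approx_cdf(k, n, p):
    """Normal approximation with continuity correction: evaluate the normal CDF at k + 0.5."""
    mu, sigma = n * p, sqrt(n * p * (1 - p))
    return NormalDist(mu, sigma).cdf(k + 0.5)

def poisson_approx_cdf(k, n, p):
    """Poisson approximation with mean n * p (intended for large n and small p)."""
    lam = n * p
    return sum(exp(-lam) * lam ** i / factorial(i) for i in range(k + 1))

if __name__ == "__main__":
    n, p, k = 100, 0.05, 8
    print("exact binomial :", binom_cdf(k, n, p))
    print("normal approx. :", normal_approx_cdf(k, n, p))
    print("Poisson approx.:", poisson_approx_cdf(k, n, p))
```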
Mathematically, when and , the beta distribution and the binomial distribution are related by a factor of : Beta distributions also provide a family of prior probability distributions for binomial distributions in Bayesian inference: Given a uniform prior, the posterior distribution for the probability of success given independent events with observed successes is a beta distribution. Computational methods Random number generation Methods for random number generation where the marginal distribution is a binomial distribution are well-established. One way to generate random variates samples from a binomial distribution is to use an inversion algorithm. To do so, one must calculate the probability that for all values from through . (These probabilities should sum to a value close to one, in order to encompass the entire sample space.) Then by using a pseudorandom number generator to generate samples uniformly between 0 and 1, one can transform the calculated samples into discrete numbers by using the probabilities calculated in the first step. History This distribution was derived by Jacob Bernoulli. He considered the case where where is the probability of success and and are positive integers. Blaise Pascal had earlier considered the case where , tabulating the corresponding binomial coefficients in what is now recognized as Pascal's triangle.
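The inversion algorithm for random variate generation described above can be sketched in a few lines of Python. The names below are illustrative; the cumulative probabilities are tabulated once, and each uniform draw is then mapped through that table.

```python
import random
from math import comb

def make_binomial_sampler(n, p, rng=random):
    """Return a sampler for B(n, p) that uses the inversion method.

    The probabilities P(X = k) for k = 0..n are tabulated once; each draw
    maps a uniform sample through the cumulative table."""
    pmf = [comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(n + 1)]
    cdf = []
    running = 0.0
    for prob in pmf:
        running += prob
        cdf.append(running)

    def sample():
        u = rng.random()
        for k, cumulative in enumerate(cdf):
            if u <= cumulative:
                return k
        return n  # guard against floating-point round-off in the last entry

    return sample

if __name__ == "__main__":
    draw = make_binomial_sampler(10, 0.3, random.Random(42))
    draws = [draw() for _ in range(10_000)]
    print("sample mean:", sum(draws) / len(draws), "expected:", 10 * 0.3)
```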
Mathematics
Statistics and probability
null
3931
https://en.wikipedia.org/wiki/Binary%20relation
Binary relation
In mathematics, a binary relation associates elements of one set called the domain with elements of another set called the codomain. Precisely, a binary relation over sets and is a set of ordered pairs where is in and is in . It encodes the common concept of relation: an element is related to an element , if and only if the pair belongs to the set of ordered pairs that defines the binary relation. An example of a binary relation is the "divides" relation over the set of prime numbers and the set of integers , in which each prime is related to each integer that is a multiple of , but not to an integer that is not a multiple of . In this relation, for instance, the prime number is related to numbers such as , , , , but not to or , just as the prime number is related to , , and , but not to or . Binary relations, and especially homogeneous relations, are used in many branches of mathematics to model a wide variety of concepts. These include, among others: the "is greater than", "is equal to", and "divides" relations in arithmetic; the "is congruent to" relation in geometry; the "is adjacent to" relation in graph theory; the "is orthogonal to" relation in linear algebra. A function may be defined as a binary relation that meets additional constraints. Binary relations are also heavily used in computer science. A binary relation over sets and is an element of the power set of Since the latter set is ordered by inclusion (), each relation has a place in the lattice of subsets of A binary relation is called a homogeneous relation when . A binary relation is also called a heterogeneous relation when it is not necessary that . Since relations are sets, they can be manipulated using set operations, including union, intersection, and complementation, and satisfying the laws of an algebra of sets. Beyond that, operations like the converse of a relation and the composition of relations are available, satisfying the laws of a calculus of relations, for which there are textbooks by Ernst Schröder, Clarence Lewis, and Gunther Schmidt. A deeper analysis of relations involves decomposing them into subsets called concepts, and placing them in a complete lattice. In some systems of axiomatic set theory, relations are extended to classes, which are generalizations of sets. This extension is needed for, among other things, modeling the concepts of "is an element of" or "is a subset of" in set theory, without running into logical inconsistencies such as Russell's paradox. A binary relation is the most studied special case of an -ary relation over sets , which is a subset of the Cartesian product Definition Given sets and , the Cartesian product is defined as and its elements are called ordered pairs. A over sets and is a subset of The set is called the or of , and the set the or of . In order to specify the choices of the sets and , some authors define a or as an ordered triple , where is a subset of called the of the binary relation. The statement reads " is -related to " and is denoted by . The or of is the set of all such that for at least one . The codomain of definition, , or of is the set of all such that for at least one . The of is the union of its domain of definition and its codomain of definition. When a binary relation is called a (or ). To emphasize the fact that and are allowed to be different, a binary relation is also called a heterogeneous relation. The prefix hetero is from the Greek ἕτερος (heteros, "other, another, different"). 
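As a concrete illustration of the definition, the "divides" example above can be written as an explicit set of ordered pairs; the short Python sketch below (with illustrative variable names) also extracts the domain of definition and the codomain of definition.

```python
# The "divides" relation over a small set of primes P and integers Z,
# written as an explicit set of ordered pairs (p, m) with p in P and m in Z.
P = {2, 3, 5}
Z = set(range(1, 13))

divides = {(p, m) for p in P for m in Z if m % p == 0}

print((2, 8) in divides)    # True: 2 divides 8
print((3, 8) in divides)    # False: 3 does not divide 8

# Domain of definition and codomain of definition; the field is their union.
domain_of_definition = {p for (p, m) in divides}
codomain_of_definition = {m for (p, m) in divides}
print(domain_of_definition)      # {2, 3, 5}
print(codomain_of_definition)    # the multiples of 2, 3 or 5 up to 12
```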
A heterogeneous relation has been called a rectangular relation, suggesting that it does not have the square-like symmetry of a homogeneous relation on a set where Commenting on the development of binary relations beyond homogeneous relations, researchers wrote, "... a variant of the theory has evolved that treats relations from the very beginning as or , i.e. as relations where the normal case is that they are relations between different sets." The terms correspondence, dyadic relation and two-place relation are synonyms for binary relation, though some authors use the term "binary relation" for any subset of a Cartesian product without reference to and , and reserve the term "correspondence" for a binary relation with reference to and . In a binary relation, the order of the elements is important; if then can be true or false independently of . For example, divides , but does not divide . Operations Union If and are binary relations over sets and then is the of and over and . The identity element is the empty relation. For example, is the union of < and =, and is the union of > and =. Intersection If and are binary relations over sets and then is the of and over and . The identity element is the universal relation. For example, the relation "is divisible by 6" is the intersection of the relations "is divisible by 3" and "is divisible by 2". Composition If is a binary relation over sets and , and is a binary relation over sets and then (also denoted by ) is the of and over and . The identity element is the identity relation. The order of and in the notation used here agrees with the standard notational order for composition of functions. For example, the composition (is parent of)(is mother of) yields (is maternal grandparent of), while the composition (is mother of)(is parent of) yields (is grandmother of). For the former case, if is the parent of and is the mother of , then is the maternal grandparent of . Converse If is a binary relation over sets and then is the , also called , of over and . For example, is the converse of itself, as is , and and are each other's converse, as are and . A binary relation is equal to its converse if and only if it is symmetric. Complement If is a binary relation over sets and then (also denoted by ) is the of over and . For example, and are each other's complement, as are and , and , and , and for total orders also and , and and . The complement of the converse relation is the converse of the complement: If the complement has the following properties: If a relation is symmetric, then so is the complement. The complement of a reflexive relation is irreflexive—and vice versa. The complement of a strict weak order is a total preorder—and vice versa. Restriction If is a binary homogeneous relation over a set and is a subset of then is the of to over . If is a binary relation over sets and and if is a subset of then is the of to over and . If a relation is reflexive, irreflexive, symmetric, antisymmetric, asymmetric, transitive, total, trichotomous, a partial order, total order, strict weak order, total preorder (weak order), or an equivalence relation, then so too are its restrictions. However, the transitive closure of a restriction is a subset of the restriction of the transitive closure, i.e., in general not equal. For example, restricting the relation " is parent of " to females yields the relation " is mother of the woman "; its transitive closure does not relate a woman with her paternal grandmother. 
On the other hand, the transitive closure of "is parent of" is "is ancestor of"; its restriction to females does relate a woman with her paternal grandmother. Also, the various concepts of completeness (not to be confused with being "total") do not carry over to restrictions. For example, over the real numbers a property of the relation is that every non-empty subset with an upper bound in has a least upper bound (also called supremum) in However, for the rational numbers this supremum is not necessarily rational, so the same property does not hold on the restriction of the relation to the rational numbers. A binary relation over sets and is said to be a relation over and , written if is a subset of , that is, for all and if , then . If is contained in and is contained in , then and are called written . If is contained in but is not contained in , then is said to be than , written For example, on the rational numbers, the relation is smaller than , and equal to the composition . Matrix representation Binary relations over sets and can be represented algebraically by logical matrices indexed by and with entries in the Boolean semiring (addition corresponds to OR and multiplication to AND) where matrix addition corresponds to union of relations, matrix multiplication corresponds to composition of relations (of a relation over and and a relation over and ), the Hadamard product corresponds to intersection of relations, the zero matrix corresponds to the empty relation, and the matrix of ones corresponds to the universal relation. Homogeneous relations (when ) form a matrix semiring (indeed, a matrix semialgebra over the Boolean semiring) where the identity matrix corresponds to the identity relation. Examples Types of binary relations Some important types of binary relations over sets and are listed below. Uniqueness properties: Injective (also called left-unique): for all and all if and then . In other words, every element of the codomain has at most one preimage element. For such a relation, is called a primary key of . For example, the green and blue binary relations in the diagram are injective, but the red one is not (as it relates both and to ), nor the black one (as it relates both and to ). Functional (also called right-unique or univalent): for all and all if and then . In other words, every element of the domain has at most one image element. Such a binary relation is called a or . For such a relation, is called of . For example, the red and green binary relations in the diagram are functional, but the blue one is not (as it relates to both and ), nor the black one (as it relates to both and ). One-to-one: injective and functional. For example, the green binary relation in the diagram is one-to-one, but the red, blue and black ones are not. One-to-many: injective and not functional. For example, the blue binary relation in the diagram is one-to-many, but the red, green and black ones are not. Many-to-one: functional and not injective. For example, the red binary relation in the diagram is many-to-one, but the green, blue and black ones are not. Many-to-many: not injective nor functional. For example, the black binary relation in the diagram is many-to-many, but the red, green and blue ones are not. Totality properties (only definable if the domain and codomain are specified): Total (also called left-total): for all there exists a such that . In other words, every element of the domain has at least one image element. In other words, the domain of definition of is equal to . 
This property, is different from the definition of (also called by some authors) in Properties. Such a binary relation is called a . For example, the red and green binary relations in the diagram are total, but the blue one is not (as it does not relate to any real number), nor the black one (as it does not relate to any real number). As another example, is a total relation over the integers. But it is not a total relation over the positive integers, because there is no in the positive integers such that . However, is a total relation over the positive integers, the rational numbers and the real numbers. Every reflexive relation is total: for a given , choose . Surjective (also called right-total): for all , there exists an such that . In other words, every element of the codomain has at least one preimage element. In other words, the codomain of definition of is equal to . For example, the green and blue binary relations in the diagram are surjective, but the red one is not (as it does not relate any real number to ), nor the black one (as it does not relate any real number to ). Uniqueness and totality properties (only definable if the domain and codomain are specified): A function (also called mapping): a binary relation that is functional and total. In other words, every element of the domain has exactly one image element. For example, the red and green binary relations in the diagram are functions, but the blue and black ones are not. An injection: a function that is injective. For example, the green relation in the diagram is an injection, but the red one is not; the black and the blue relation is not even a function. A surjection: a function that is surjective. For example, the green relation in the diagram is a surjection, but the red one is not. A bijection: a function that is injective and surjective. In other words, every element of the domain has exactly one image element and every element of the codomain has exactly one preimage element. For example, the green binary relation in the diagram is a bijection, but the red one is not. If relations over proper classes are allowed: Set-like (also called local): for all , the class of all such that , i.e. , is a set. For example, the relation is set-like, and every relation on two sets is set-like. The usual ordering < over the class of ordinal numbers is a set-like relation, while its inverse > is not. Sets versus classes Certain mathematical "relations", such as "equal to", "subset of", and "member of", cannot be understood to be binary relations as defined above, because their domains and codomains cannot be taken to be sets in the usual systems of axiomatic set theory. For example, to model the general concept of "equality" as a binary relation , take the domain and codomain to be the "class of all sets", which is not a set in the usual set theory. In most mathematical contexts, references to the relations of equality, membership and subset are harmless because they can be understood implicitly to be restricted to some set in the context. The usual work-around to this problem is to select a "large enough" set , that contains all the objects of interest, and work with the restriction instead of . Similarly, the "subset of" relation needs to be restricted to have domain and codomain (the power set of a specific set ): the resulting set relation can be denoted by Also, the "member of" relation needs to be restricted to have domain and codomain to obtain a binary relation that is a set. 
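Returning to the uniqueness and totality properties listed above, the following minimal Python sketch (function names are illustrative) tests them for a finite relation given as a set of ordered pairs together with an explicit domain and codomain.

```python
def is_functional(R):
    """Right-unique: no element of the domain is paired with two different images."""
    return all(y1 == y2 for (x1, y1) in R for (x2, y2) in R if x1 == x2)

def is_injective(R):
    """Left-unique: no element of the codomain has two different preimages."""
    return all(x1 == x2 for (x1, y1) in R for (x2, y2) in R if y1 == y2)

def is_total(R, X):
    """Left-total: every element of the domain X appears as a first component."""
    return {x for (x, _) in R} == set(X)

def is_surjective(R, Y):
    """Right-total: every element of the codomain Y appears as a second component."""
    return {y for (_, y) in R} == set(Y)

def is_bijection(R, X, Y):
    """Functional, total, injective and surjective all at once."""
    return (is_functional(R) and is_total(R, X)
            and is_injective(R) and is_surjective(R, Y))

R = {(1, "a"), (2, "b"), (3, "c")}
print(is_bijection(R, {1, 2, 3}, {"a", "b", "c"}))   # True
```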
Bertrand Russell has shown that assuming to be defined over all sets leads to a contradiction in naive set theory, see Russell's paradox. Another solution to this problem is to use a set theory with proper classes, such as NBG or Morse–Kelley set theory, and allow the domain and codomain (and so the graph) to be proper classes: in such a theory, equality, membership, and subset are binary relations without special comment. (A minor modification needs to be made to the concept of the ordered triple , as normally a proper class cannot be a member of an ordered tuple; or of course one can identify the binary relation with its graph in this context.) With this definition one can for instance define a binary relation over every set and its power set. Homogeneous relation A homogeneous relation over a set is a binary relation over and itself, i.e. it is a subset of the Cartesian product It is also simply called a (binary) relation over . A homogeneous relation over a set may be identified with a directed simple graph permitting loops, where is the vertex set and is the edge set (there is an edge from a vertex to a vertex if and only if ). The set of all homogeneous relations over a set is the power set which is a Boolean algebra augmented with the involution of mapping of a relation to its converse relation. Considering composition of relations as a binary operation on , it forms a semigroup with involution. Some important properties that a homogeneous relation over a set may have are: : for all . For example, is a reflexive relation but > is not. : for all not . For example, is an irreflexive relation, but is not. : for all if then . For example, "is a blood relative of" is a symmetric relation. : for all if and then For example, is an antisymmetric relation. : for all if then not . A relation is asymmetric if and only if it is both antisymmetric and irreflexive. For example, > is an asymmetric relation, but is not. : for all if and then . A transitive relation is irreflexive if and only if it is asymmetric. For example, "is ancestor of" is a transitive relation, while "is parent of" is not. : for all if then or . : for all or . : for all if then some exists such that and . A is a relation that is reflexive, antisymmetric, and transitive. A is a relation that is irreflexive, asymmetric, and transitive. A is a relation that is reflexive, antisymmetric, transitive and connected. A is a relation that is irreflexive, asymmetric, transitive and connected. An is a relation that is reflexive, symmetric, and transitive. For example, " divides " is a partial, but not a total order on natural numbers "" is a strict total order on and " is parallel to " is an equivalence relation on the set of all lines in the Euclidean plane. All operations defined in section also apply to homogeneous relations. Beyond that, a homogeneous relation over a set may be subjected to closure operations like: the smallest reflexive relation over containing , the smallest transitive relation over containing , the smallest equivalence relation over containing . Calculus of relations Developments in algebraic logic have facilitated usage of binary relations. The calculus of relations includes the algebra of sets, extended by composition of relations and the use of converse relations. The inclusion meaning that implies , sets the scene in a lattice of relations. But since the inclusion symbol is superfluous. 
Nevertheless, composition of relations and manipulation of the operators according to Schröder rules, provides a calculus to work in the power set of In contrast to homogeneous relations, the composition of relations operation is only a partial function. The necessity of matching target to source of composed relations has led to the suggestion that the study of heterogeneous relations is a chapter of category theory as in the category of sets, except that the morphisms of this category are relations. The of the category Rel are sets, and the relation-morphisms compose as required in a category. Induced concept lattice Binary relations have been described through their induced concept lattices: A concept satisfies two properties: The logical matrix of is the outer product of logical vectors logical vectors. is maximal, not contained in any other outer product. Thus is described as a non-enlargeable rectangle. For a given relation the set of concepts, enlarged by their joins and meets, forms an "induced lattice of concepts", with inclusion forming a preorder. The MacNeille completion theorem (1937) (that any partial order may be embedded in a complete lattice) is cited in a 2013 survey article "Decomposition of relations on concept lattices". The decomposition is , where and are functions, called or left-total, functional relations in this context. The "induced concept lattice is isomorphic to the cut completion of the partial order that belongs to the minimal decomposition of the relation ." Particular cases are considered below: total order corresponds to Ferrers type, and identity corresponds to difunctional, a generalization of equivalence relation on a set. Relations may be ranked by the Schein rank which counts the number of concepts necessary to cover a relation. Structural analysis of relations with concepts provides an approach for data mining. Particular relations Proposition: If is a surjective relation and is its transpose, then where is the identity relation. Proposition: If is a serial relation, then where is the identity relation. Difunctional The idea of a difunctional relation is to partition objects by distinguishing attributes, as a generalization of the concept of an equivalence relation. One way this can be done is with an intervening set of indicators. The partitioning relation is a composition of relations using relations Jacques Riguet named these relations difunctional since the composition involves functional relations, commonly called partial functions. In 1950 Riguet showed that such relations satisfy the inclusion: In automata theory, the term rectangular relation has also been used to denote a difunctional relation. This terminology recalls the fact that, when represented as a logical matrix, the columns and rows of a difunctional relation can be arranged as a block matrix with rectangular blocks of ones on the (asymmetric) main diagonal. More formally, a relation on is difunctional if and only if it can be written as the union of Cartesian products , where the are a partition of a subset of and the likewise a partition of a subset of . Using the notation , a difunctional relation can also be characterized as a relation such that wherever and have a non-empty intersection, then these two sets coincide; formally implies In 1997 researchers found "utility of binary decomposition based on difunctional dependencies in database management." Furthermore, difunctional relations are fundamental in the study of bisimulations. 
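Riguet's inclusion can be checked directly on a finite relation. The following is a minimal Python sketch with illustrative helper names; the test accepts a relation exactly when composing it with its converse and then with itself again stays inside the original relation.

```python
def converse(R):
    """Converse relation: swap the components of every ordered pair."""
    return {(y, x) for (x, y) in R}

def compose(R, S):
    """Composition R ; S: (x, z) is related iff some y has (x, y) in R and (y, z) in S."""
    return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

def is_difunctional(R):
    """Riguet's criterion: R ; converse(R) ; R is contained in R."""
    return compose(compose(R, converse(R)), R) <= R

# Two rectangular "blocks", {a1, a2} x {b1} and {a3} x {b2, b3}: difunctional.
R = {("a1", "b1"), ("a2", "b1"), ("a3", "b2"), ("a3", "b3")}
print(is_difunctional(R))                      # True
print(is_difunctional(R | {("a1", "b2")}))     # False: the blocks now overlap
```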
In the context of homogeneous relations, a partial equivalence relation is difunctional. Ferrers type A strict order on a set is a homogeneous relation arising in order theory. In 1951 Jacques Riguet adopted the ordering of an integer partition, called a Ferrers diagram, to extend ordering to binary relations in general. The corresponding logical matrix of a general binary relation has rows which finish with a sequence of ones. Thus the dots of a Ferrer's diagram are changed to ones and aligned on the right in the matrix. An algebraic statement required for a Ferrers type relation R is If any one of the relations is of Ferrers type, then all of them are. Contact Suppose is the power set of , the set of all subsets of . Then a relation is a contact relation if it satisfies three properties: The set membership relation, "is an element of", satisfies these properties so is a contact relation. The notion of a general contact relation was introduced by Georg Aumann in 1970. In terms of the calculus of relations, sufficient conditions for a contact relation include where is the converse of set membership (). Preorder R\R Every relation generates a preorder which is the left residual. In terms of converse and complements, Forming the diagonal of , the corresponding row of and column of will be of opposite logical values, so the diagonal is all zeros. Then , so that is a reflexive relation. To show transitivity, one requires that Recall that is the largest relation such that Then (repeat) (Schröder's rule) (complementation) (definition) The inclusion relation Ω on the power set of can be obtained in this way from the membership relation on subsets of : Fringe of a relation Given a relation , its fringe is the sub-relation defined as When is a partial identity relation, difunctional, or a block diagonal relation, then . Otherwise the operator selects a boundary sub-relation described in terms of its logical matrix: is the side diagonal if is an upper right triangular linear order or strict order. is the block fringe if is irreflexive () or upper right block triangular. is a sequence of boundary rectangles when is of Ferrers type. On the other hand, when is a dense, linear, strict order. Mathematical heaps Given two sets and , the set of binary relations between them can be equipped with a ternary operation where denotes the converse relation of . In 1953 Viktor Wagner used properties of this ternary operation to define semiheaps, heaps, and generalized heaps. The contrast of heterogeneous and homogeneous relations is highlighted by these definitions:
Mathematics
Set theory
null
3942
https://en.wikipedia.org/wiki/Bijection
Bijection
A bijection, bijective function, or one-to-one correspondence between two mathematical sets is a function such that each element of the second set (the codomain) is the image of exactly one element of the first set (the domain). Equivalently, a bijection is a relation between two sets such that each element of either set is paired with exactly one element of the other set. A function is bijective if and only if it is invertible; that is, a function is bijective if and only if there is a function the inverse of , such that each of the two ways for composing the two functions produces an identity function: for each in and for each in For example, the multiplication by two defines a bijection from the integers to the even numbers, which has the division by two as its inverse function. A function is bijective if and only if it is both injective (or one-to-one)—meaning that each element in the codomain is mapped from at most one element of the domain—and surjective (or onto)—meaning that each element of the codomain is mapped from at least one element of the domain. The term one-to-one correspondence must not be confused with one-to-one function, which means injective but not necessarily surjective. The elementary operation of counting establishes a bijection from some finite set to the first natural numbers , up to the number of elements in the counted set. It results that two finite sets have the same number of elements if and only if there exists a bijection between them. More generally, two sets are said to have the same cardinal number if there exists a bijection between them. A bijective function from a set to itself is also called a permutation, and the set of all permutations of a set forms its symmetric group. Some bijections with further properties have received specific names, which include automorphisms, isomorphisms, homeomorphisms, diffeomorphisms, permutation groups, and most geometric transformations. Galois correspondences are bijections between sets of mathematical objects of apparently very different nature. Definition For a binary relation pairing elements of set X with elements of set Y to be a bijection, four properties must hold: each element of X must be paired with at least one element of Y, no element of X may be paired with more than one element of Y, each element of Y must be paired with at least one element of X, and no element of Y may be paired with more than one element of X. Satisfying properties (1) and (2) means that a pairing is a function with domain X. It is more common to see properties (1) and (2) written as a single statement: Every element of X is paired with exactly one element of Y. Functions which satisfy property (3) are said to be "onto Y " and are called surjections (or surjective functions). Functions which satisfy property (4) are said to be "one-to-one functions" and are called injections (or injective functions). With this terminology, a bijection is a function which is both a surjection and an injection, or using other words, a bijection is a function which is both "one-to-one" and "onto". Examples Batting line-up of a baseball or cricket team Consider the batting line-up of a baseball or cricket team (or any list of all the players of any sports team where every player holds a specific spot in a line-up). The set X will be the players on the team (of size nine in the case of baseball) and the set Y will be the positions in the batting order (1st, 2nd, 3rd, etc.) The "pairing" is given by which player is in what position in this order. 
Property (1) is satisfied since each player is somewhere in the list. Property (2) is satisfied since no player bats in two (or more) positions in the order. Property (3) says that for each position in the order, there is some player batting in that position and property (4) states that two or more players are never batting in the same position in the list. Seats and students of a classroom In a classroom there are a certain number of seats. A group of students enter the room and the instructor asks them to be seated. After a quick look around the room, the instructor declares that there is a bijection between the set of students and the set of seats, where each student is paired with the seat they are sitting in. What the instructor observed in order to reach this conclusion was that: Every student was in a seat (there was no one standing), No student was in more than one seat, Every seat had someone sitting there (there were no empty seats), and No seat had more than one student in it. The instructor was able to conclude that there were just as many seats as there were students, without having to count either set. More mathematical examples For any set X, the identity function 1X: X → X, 1X(x) = x is bijective. The function f: R → R, f(x) = 2x + 1 is bijective, since for each y there is a unique x = (y − 1)/2 such that f(x) = y. More generally, any linear function over the reals, f: R → R, f(x) = ax + b (where a is non-zero) is a bijection. Each real number y is obtained from (or paired with) the real number x = (y − b)/a. The function f: R → (−π/2, π/2), given by f(x) = arctan(x) is bijective, since each real number x is paired with exactly one angle y in the interval (−π/2, π/2) so that tan(y) = x (that is, y = arctan(x)). If the codomain (−π/2, π/2) was made larger to include an integer multiple of π/2, then this function would no longer be onto (surjective), since there is no real number which could be paired with the multiple of π/2 by this arctan function. The exponential function, g: R → R, g(x) = ex, is not bijective: for instance, there is no x in R such that g(x) = −1, showing that g is not onto (surjective). However, if the codomain is restricted to the positive real numbers , then g would be bijective; its inverse (see below) is the natural logarithm function ln. The function h: R → R+, h(x) = x2 is not bijective: for instance, h(−1) = h(1) = 1, showing that h is not one-to-one (injective). However, if the domain is restricted to , then h would be bijective; its inverse is the positive square root function. By Schröder–Bernstein theorem, given any two sets X and Y, and two injective functions f: X → Y and g: Y → X, there exists a bijective function h: X → Y. Inverses A bijection f with domain X (indicated by f: X → Y in functional notation) also defines a converse relation starting in Y and going to X (by turning the arrows around). The process of "turning the arrows around" for an arbitrary function does not, in general, yield a function, but properties (3) and (4) of a bijection say that this inverse relation is a function with domain Y. Moreover, properties (1) and (2) then say that this inverse function is a surjection and an injection, that is, the inverse function exists and is also a bijection. Functions that have inverse functions are said to be invertible. A function is invertible if and only if it is a bijection. 
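The linear example above can be checked numerically. The following minimal Python sketch verifies, on a few sample values, the two round-trip identities that characterize an inverse function; exact equality holds here because the chosen values are exactly representable in floating point.

```python
def f(x):
    return 2 * x + 1          # a bijection from the reals to the reals

def f_inv(y):
    return (y - 1) / 2        # its inverse function

# Spot-check the identities f_inv(f(x)) == x and f(f_inv(y)) == y.
for v in [-3.5, 0.0, 2.0, 10.25]:
    assert f_inv(f(v)) == v
    assert f(f_inv(v)) == v
print("f(x) = 2x + 1 passes the round-trip checks")
```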
Stated in concise mathematical notation, a function f: X → Y is bijective if and only if it satisfies the condition for every y in Y there is a unique x in X with y = f(x). Continuing with the baseball batting line-up example, the function that is being defined takes as input the name of one of the players and outputs the position of that player in the batting order. Since this function is a bijection, it has an inverse function which takes as input a position in the batting order and outputs the player who will be batting in that position. Composition The composition of two bijections f: X → Y and g: Y → Z is a bijection, whose inverse is given by is . Conversely, if the composition of two functions is bijective, it only follows that f is injective and g is surjective. Cardinality If X and Y are finite sets, then there exists a bijection between the two sets X and Y if and only if X and Y have the same number of elements. Indeed, in axiomatic set theory, this is taken as the definition of "same number of elements" (equinumerosity), and generalising this definition to infinite sets leads to the concept of cardinal number, a way to distinguish the various sizes of infinite sets. Properties A function f: R → R is bijective if and only if its graph meets every horizontal and vertical line exactly once. If X is a set, then the bijective functions from X to itself, together with the operation of functional composition (∘), form a group, the symmetric group of X, which is denoted variously by S(X), SX, or X! (X factorial). Bijections preserve cardinalities of sets: for a subset A of the domain with cardinality |A| and subset B of the codomain with cardinality |B|, one has the following equalities: |f(A)| = |A| and |f−1(B)| = |B|. If X and Y are finite sets with the same cardinality, and f: X → Y, then the following are equivalent: f is a bijection. f is a surjection. f is an injection. For a finite set S, there is a bijection between the set of possible total orderings of the elements and the set of bijections from S to S. That is to say, the number of permutations of elements of S is the same as the number of total orderings of that set—namely, n!. Category theory Bijections are precisely the isomorphisms in the category Set of sets and set functions. However, the bijections are not always the isomorphisms for more complex categories. For example, in the category Grp of groups, the morphisms must be homomorphisms since they must preserve the group structure, so the isomorphisms are group isomorphisms which are bijective homomorphisms. Generalization to partial functions The notion of one-to-one correspondence generalizes to partial functions, where they are called partial bijections, although partial bijections are only required to be injective. The reason for this relaxation is that a (proper) partial function is already undefined for a portion of its domain; thus there is no compelling reason to constrain its inverse to be a total function, i.e. defined everywhere on its domain. The set of all partial bijections on a given base set is called the symmetric inverse semigroup. Another way of defining the same notion is to say that a partial bijection from A to B is any relation R (which turns out to be a partial function) with the property that R is the graph of a bijection f:A′→B′, where A′ is a subset of A and B′ is a subset of B. When the partial bijection is on the same set, it is sometimes called a one-to-one partial transformation. 
An example is the Möbius transformation simply defined on the complex plane, rather than its completion to the extended complex plane.
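Returning to composition and inverses, finite bijections can be modelled as Python dictionaries. The following minimal sketch (with illustrative names) composes two such bijections and checks that the inverse of the composition equals the composition of the inverses taken in reverse order.

```python
def invert(bij):
    """Invert a finite bijection given as a dict; values must be distinct."""
    inv = {v: k for k, v in bij.items()}
    assert len(inv) == len(bij), "not injective, so not a bijection"
    return inv

def compose(g, f):
    """(g o f)(x) = g(f(x)) for finite maps given as dicts."""
    return {x: g[f[x]] for x in f}

f = {1: "a", 2: "b", 3: "c"}        # bijection X -> Y
g = {"a": "u", "b": "v", "c": "w"}  # bijection Y -> Z

gf = compose(g, f)
# The inverse of (g o f) equals the composition of the inverses in reverse order.
print(invert(gf) == compose(invert(f), invert(g)))   # True
```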
Mathematics
Functions: General
null
3954
https://en.wikipedia.org/wiki/Biochemistry
Biochemistry
Biochemistry, or biological chemistry, is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis that allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs as well as organism structure and function. Biochemistry is closely related to molecular biology, the study of the molecular mechanisms of biological phenomena. Much of biochemistry deals with the structures, functions, and interactions of biological macromolecules such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition, and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles and methods have been combined with problem-solving approaches from engineering to manipulate living systems in order to produce useful tools for research, industrial processes, and diagnosis and control of diseasethe discipline of biotechnology. History At its most comprehensive definition, biochemistry can be seen as a study of the components and composition of living things and how they come together to become life. In this sense, the history of biochemistry may therefore go back as far as the ancient Greeks. However, biochemistry as a specific scientific discipline began sometime in the 19th century, or a little earlier, depending on which aspect of biochemistry is being focused on. Some argued that the beginning of biochemistry may have been the discovery of the first enzyme, diastase (now called amylase), in 1833 by Anselme Payen, while others considered Eduard Buchner's first demonstration of a complex biochemical process alcoholic fermentation in cell-free extracts in 1897 to be the birth of biochemistry. Some might also point as its beginning to the influential 1842 work by Justus von Liebig, Animal chemistry, or, Organic chemistry in its applications to physiology and pathology, which presented a chemical theory of metabolism, or even earlier to the 18th century studies on fermentation and respiration by Antoine Lavoisier. Many other pioneers in the field who helped to uncover the layers of complexity of biochemistry have been proclaimed founders of modern biochemistry. Emil Fischer, who studied the chemistry of proteins, and F. 
Gowland Hopkins, who studied enzymes and the dynamic nature of biochemistry, represent two examples of early biochemists. The term "biochemistry" was first used when Vinzenz Kletzinsky (1826–1882) had his "Compendium der Biochemie" printed in Vienna in 1858; it derived from a combination of biology and chemistry. In 1877, Felix Hoppe-Seyler used the term ( in German) as a synonym for physiological chemistry in the foreword to the first issue of Zeitschrift für Physiologische Chemie (Journal of Physiological Chemistry) where he argued for the setting up of institutes dedicated to this field of study. The German chemist Carl Neuberg however is often cited to have coined the word in 1903, while some credited it to Franz Hofmeister. It was once generally believed that life and its materials had some essential property or substance (often referred to as the "vital principle") distinct from any found in non-living matter, and it was thought that only living beings could produce the molecules of life. In 1828, Friedrich Wöhler published a paper on his serendipitous urea synthesis from potassium cyanate and ammonium sulfate; some regarded that as a direct overthrow of vitalism and the establishment of organic chemistry. However, the Wöhler synthesis has sparked controversy as some reject the death of vitalism at his hands. Since then, biochemistry has advanced, especially since the mid-20th century, with the development of new techniques such as chromatography, X-ray diffraction, dual polarisation interferometry, NMR spectroscopy, radioisotopic labeling, electron microscopy and molecular dynamics simulations. These techniques allowed for the discovery and detailed analysis of many molecules and metabolic pathways of the cell, such as glycolysis and the Krebs cycle (citric acid cycle), and led to an understanding of biochemistry on a molecular level. Another significant historic event in biochemistry is the discovery of the gene, and its role in the transfer of information in the cell. In the 1950s, James D. Watson, Francis Crick, Rosalind Franklin and Maurice Wilkins were instrumental in solving DNA structure and suggesting its relationship with the genetic transfer of information. In 1958, George Beadle and Edward Tatum received the Nobel Prize for work in fungi showing that one gene produces one enzyme. In 1988, Colin Pitchfork was the first person convicted of murder with DNA evidence, which led to the growth of forensic science. More recently, Andrew Z. Fire and Craig C. Mello received the 2006 Nobel Prize for discovering the role of RNA interference (RNAi) in the silencing of gene expression. Starting materials: the chemical elements of life Around two dozen chemical elements are essential to various kinds of biological life. Most rare elements on Earth are not needed by life (exceptions being selenium and iodine), while a few common ones (aluminum and titanium) are not used. Most organisms share element needs, but there are a few differences between plants and animals. For example, ocean algae use bromine, but land plants and animals do not seem to need any. All animals require sodium, but is not an essential element for plants. Plants need boron and silicon, but animals may not (or may need ultra-small amounts). Just six elements—carbon, hydrogen, nitrogen, oxygen, calcium and phosphorus—make up almost 99% of the mass of living cells, including those in the human body (see composition of the human body for a complete list). 
In addition to the six major elements that compose most of the human body, humans require smaller amounts of possibly 18 more. Biomolecules The 4 main classes of molecules in biochemistry (often called biomolecules) are carbohydrates, lipids, proteins, and nucleic acids. Many biological molecules are polymers: in this terminology, monomers are relatively small macromolecules that are linked together to create large macromolecules known as polymers. When monomers are linked together to synthesize a biological polymer, they undergo a process called dehydration synthesis. Different macromolecules can assemble in larger complexes, often needed for biological activity. Carbohydrates Two of the main functions of carbohydrates are energy storage and providing structure. One of the common sugars known as glucose is a carbohydrate, but not all carbohydrates are sugars. There are more carbohydrates on Earth than any other known type of biomolecule; they are used to store energy and genetic information, as well as play important roles in cell to cell interactions and communications. The simplest type of carbohydrate is a monosaccharide, which among other properties contains carbon, hydrogen, and oxygen, mostly in a ratio of 1:2:1 (generalized formula CnH2nOn, where n is at least 3). Glucose (C6H12O6) is one of the most important carbohydrates; others include fructose (C6H12O6), the sugar commonly associated with the sweet taste of fruits, and deoxyribose (C5H10O4), a component of DNA. A monosaccharide can switch between acyclic (open-chain) form and a cyclic form. The open-chain form can be turned into a ring of carbon atoms bridged by an oxygen atom created from the carbonyl group of one end and the hydroxyl group of another. The cyclic molecule has a hemiacetal or hemiketal group, depending on whether the linear form was an aldose or a ketose. In these cyclic forms, the ring usually has 5 or 6 atoms. These forms are called furanoses and pyranoses, respectively—by analogy with furan and pyran, the simplest compounds with the same carbon-oxygen ring (although they lack the carbon-carbon double bonds of these two molecules). For example, the aldohexose glucose may form a hemiacetal linkage between the hydroxyl on carbon 1 and the oxygen on carbon 4, yielding a molecule with a 5-membered ring, called glucofuranose. The same reaction can take place between carbons 1 and 5 to form a molecule with a 6-membered ring, called glucopyranose. Cyclic forms with a 7-atom ring called heptoses are rare. Two monosaccharides can be joined by a glycosidic or ester bond into a disaccharide through a dehydration reaction during which a molecule of water is released. The reverse reaction in which the glycosidic bond of a disaccharide is broken into two monosaccharides is termed hydrolysis. The best-known disaccharide is sucrose or ordinary sugar, which consists of a glucose molecule and a fructose molecule joined. Another important disaccharide is lactose found in milk, consisting of a glucose molecule and a galactose molecule. Lactose may be hydrolysed by lactase, and deficiency in this enzyme results in lactose intolerance. When a few (around three to six) monosaccharides are joined, it is called an oligosaccharide (oligo- meaning "few"). These molecules tend to be used as markers and signals, as well as having some other uses. Many monosaccharides joined form a polysaccharide. They can be joined in one long linear chain, or they may be branched. 
Two of the most common polysaccharides are cellulose and glycogen, both consisting of repeating glucose monomers. Cellulose is an important structural component of plant's cell walls and glycogen is used as a form of energy storage in animals. Sugar can be characterized by having reducing or non-reducing ends. A reducing end of a carbohydrate is a carbon atom that can be in equilibrium with the open-chain aldehyde (aldose) or keto form (ketose). If the joining of monomers takes place at such a carbon atom, the free hydroxy group of the pyranose or furanose form is exchanged with an OH-side-chain of another sugar, yielding a full acetal. This prevents opening of the chain to the aldehyde or keto form and renders the modified residue non-reducing. Lactose contains a reducing end at its glucose moiety, whereas the galactose moiety forms a full acetal with the C4-OH group of glucose. Saccharose does not have a reducing end because of full acetal formation between the aldehyde carbon of glucose (C1) and the keto carbon of fructose (C2). Lipids Lipids comprise a diverse range of molecules and to some extent is a catchall for relatively water-insoluble or nonpolar compounds of biological origin, including waxes, fatty acids, fatty-acid derived phospholipids, sphingolipids, glycolipids, and terpenoids (e.g., retinoids and steroids). Some lipids are linear, open-chain aliphatic molecules, while others have ring structures. Some are aromatic (with a cyclic [ring] and planar [flat] structure) while others are not. Some are flexible, while others are rigid. Lipids are usually made from one molecule of glycerol combined with other molecules. In triglycerides, the main group of bulk lipids, there is one molecule of glycerol and three fatty acids. Fatty acids are considered the monomer in that case, and may be saturated (no double bonds in the carbon chain) or unsaturated (one or more double bonds in the carbon chain). Most lipids have some polar character and are largely nonpolar. In general, the bulk of their structure is nonpolar or hydrophobic ("water-fearing"), meaning that it does not interact well with polar solvents like water. Another part of their structure is polar or hydrophilic ("water-loving") and will tend to associate with polar solvents like water. This makes them amphiphilic molecules (having both hydrophobic and hydrophilic portions). In the case of cholesterol, the polar group is a mere –OH (hydroxyl or alcohol). In the case of phospholipids, the polar groups are considerably larger and more polar, as described below. Lipids are an integral part of our daily diet. Most oils and milk products that we use for cooking and eating like butter, cheese, ghee etc. are composed of fats. Vegetable oils are rich in various polyunsaturated fatty acids (PUFA). Lipid-containing foods undergo digestion within the body and are broken into fatty acids and glycerol, the final degradation products of fats and lipids. Lipids, especially phospholipids, are also used in various pharmaceutical products, either as co-solubilizers (e.g. in parenteral infusions) or else as drug carrier components (e.g. in a liposome or transfersome). Proteins Proteins are very large molecules—macro-biopolymers—made from monomers called amino acids. An amino acid consists of an alpha carbon atom attached to an amino group, –NH2, a carboxylic acid group, –COOH (although these exist as –NH3+ and –COO− under physiologic conditions), a simple hydrogen atom, and a side chain commonly denoted as "–R". 
The side chain "R" is different for each amino acid of which there are 20 standard ones. It is this "R" group that makes each amino acid different, and the properties of the side chains greatly influence the overall three-dimensional conformation of a protein. Some amino acids have functions by themselves or in a modified form; for instance, glutamate functions as an important neurotransmitter. Amino acids can be joined via a peptide bond. In this dehydration synthesis, a water molecule is removed and the peptide bond connects the nitrogen of one amino acid's amino group to the carbon of the other's carboxylic acid group. The resulting molecule is called a dipeptide, and short stretches of amino acids (usually, fewer than thirty) are called peptides or polypeptides. Longer stretches merit the title proteins. As an example, the important blood serum protein albumin contains 585 amino acid residues. Proteins can have structural and/or functional roles. For instance, movements of the proteins actin and myosin ultimately are responsible for the contraction of skeletal muscle. One property many proteins have is that they specifically bind to a certain molecule or class of molecules—they may be extremely selective in what they bind. Antibodies are an example of proteins that attach to one specific type of molecule. Antibodies are composed of heavy and light chains. Two heavy chains would be linked to two light chains through disulfide linkages between their amino acids. Antibodies are specific through variation based on differences in the N-terminal domain. The enzyme-linked immunosorbent assay (ELISA), which uses antibodies, is one of the most sensitive tests modern medicine uses to detect various biomolecules. Probably the most important proteins, however, are the enzymes. Virtually every reaction in a living cell requires an enzyme to lower the activation energy of the reaction. These molecules recognize specific reactant molecules called substrates; they then catalyze the reaction between them. By lowering the activation energy, the enzyme speeds up that reaction by a rate of 1011 or more; a reaction that would normally take over 3,000 years to complete spontaneously might take less than a second with an enzyme. The enzyme itself is not used up in the process and is free to catalyze the same reaction with a new set of substrates. Using various modifiers, the activity of the enzyme can be regulated, enabling control of the biochemistry of the cell as a whole. The structure of proteins is traditionally described in a hierarchy of four levels. The primary structure of a protein consists of its linear sequence of amino acids; for instance, "alanine-glycine-tryptophan-serine-glutamate-asparagine-glycine-lysine-...". Secondary structure is concerned with local morphology (morphology being the study of structure). Some combinations of amino acids will tend to curl up in a coil called an α-helix or into a sheet called a β-sheet; some α-helixes can be seen in the hemoglobin schematic above. Tertiary structure is the entire three-dimensional shape of the protein. This shape is determined by the sequence of amino acids. In fact, a single change can change the entire structure. The alpha chain of hemoglobin contains 146 amino acid residues; substitution of the glutamate residue at position 6 with a valine residue changes the behavior of hemoglobin so much that it results in sickle-cell disease. 
Finally, quaternary structure is concerned with the structure of a protein with multiple peptide subunits, like hemoglobin with its four subunits. Not all proteins have more than one subunit. Ingested proteins are usually broken up into single amino acids or dipeptides in the small intestine and then absorbed. They can then be joined to form new proteins. Intermediate products of glycolysis, the citric acid cycle, and the pentose phosphate pathway can be used to form all twenty amino acids, and most bacteria and plants possess all the necessary enzymes to synthesize them. Humans and other mammals, however, can synthesize only half of them. They cannot synthesize isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. Because they must be ingested, these are the essential amino acids. Mammals do possess the enzymes to synthesize alanine, asparagine, aspartate, cysteine, glutamate, glutamine, glycine, proline, serine, and tyrosine, the nonessential amino acids. While they can synthesize arginine and histidine, they cannot produce it in sufficient amounts for young, growing animals, and so these are often considered essential amino acids. If the amino group is removed from an amino acid, it leaves behind a carbon skeleton called an α-keto acid. Enzymes called transaminases can easily transfer the amino group from one amino acid (making it an α-keto acid) to another α-keto acid (making it an amino acid). This is important in the biosynthesis of amino acids, as for many of the pathways, intermediates from other biochemical pathways are converted to the α-keto acid skeleton, and then an amino group is added, often via transamination. The amino acids may then be linked together to form a protein. A similar process is used to break down proteins. It is first hydrolyzed into its component amino acids. Free ammonia (NH3), existing as the ammonium ion (NH4+) in blood, is toxic to life forms. A suitable method for excreting it must therefore exist. Different tactics have evolved in different animals, depending on the animals' needs. Unicellular organisms release the ammonia into the environment. Likewise, bony fish can release ammonia into the water where it is quickly diluted. In general, mammals convert ammonia into urea, via the urea cycle. In order to determine whether two proteins are related, or in other words to decide whether they are homologous or not, scientists use sequence-comparison methods. Methods like sequence alignments and structural alignments are powerful tools that help scientists identify homologies between related molecules. The relevance of finding homologies among proteins goes beyond forming an evolutionary pattern of protein families. By finding how similar two protein sequences are, we acquire knowledge about their structure and therefore their function. Nucleic acids Nucleic acids, so-called because of their prevalence in cellular nuclei, is the generic name of the family of biopolymers. They are complex, high-molecular-weight biochemical macromolecules that can convey genetic information in all living cells and viruses. The monomers are called nucleotides, and each consists of three components: a nitrogenous heterocyclic base (either a purine or a pyrimidine), a pentose sugar, and a phosphate group. The most common nucleic acids are deoxyribonucleic acid (DNA) and ribonucleic acid (RNA). 
The phosphate group and the sugar of each nucleotide bond with each other to form the backbone of the nucleic acid, while the sequence of nitrogenous bases stores the information. The most common nitrogenous bases are adenine, cytosine, guanine, thymine, and uracil. The nitrogenous bases of each strand of a nucleic acid will form hydrogen bonds with certain other nitrogenous bases in a complementary strand of nucleic acid. Adenine binds with thymine or uracil, thymine binds only with adenine, and cytosine and guanine can bind only with one another. An adenine–thymine or adenine–uracil pair is held together by two hydrogen bonds, while a cytosine–guanine pair is held together by three. Aside from the genetic material of the cell, nucleic acids often play a role as second messengers, as well as forming the base molecule for adenosine triphosphate (ATP), the primary energy-carrier molecule found in all living organisms. Also, the nitrogenous bases possible in the two nucleic acids are different: adenine, cytosine, and guanine occur in both RNA and DNA, while thymine occurs only in DNA and uracil occurs in RNA. Metabolism Carbohydrates as energy source Glucose is an energy source in most life forms. For instance, polysaccharides are broken down into their monomers by enzymes (glycogen phosphorylase removes glucose residues from glycogen, a polysaccharide). Disaccharides like lactose or sucrose are cleaved into their two component monosaccharides. Glycolysis (anaerobic) Glucose is mainly metabolized by a very important ten-step pathway called glycolysis, the net result of which is to break down one molecule of glucose into two molecules of pyruvate. This also produces a net two molecules of ATP, the energy currency of cells, along with two reducing equivalents, converting NAD+ (nicotinamide adenine dinucleotide: oxidized form) to NADH (nicotinamide adenine dinucleotide: reduced form). This does not require oxygen; if no oxygen is available (or the cell cannot use oxygen), the NAD+ is restored by converting the pyruvate to lactate (lactic acid) (e.g. in humans) or to ethanol plus carbon dioxide (e.g. in yeast). Other monosaccharides like galactose and fructose can be converted into intermediates of the glycolytic pathway. Aerobic In aerobic cells with sufficient oxygen, as in most human cells, the pyruvate is further metabolized. It is irreversibly converted to acetyl-CoA, giving off one carbon atom as the waste product carbon dioxide, generating another reducing equivalent as NADH. The two molecules of acetyl-CoA (from one molecule of glucose) then enter the citric acid cycle, producing two molecules of ATP, six more NADH molecules and two reduced (ubi)quinones (via FADH2 as enzyme-bound cofactor), and releasing the remaining carbon atoms as carbon dioxide. The produced NADH and quinol molecules then feed into the enzyme complexes of the respiratory chain, an electron transport system transferring the electrons ultimately to oxygen and conserving the released energy in the form of a proton gradient over a membrane (the inner mitochondrial membrane in eukaryotes). Thus, oxygen is reduced to water and the original electron acceptors NAD+ and quinone are regenerated. This is why humans breathe in oxygen and breathe out carbon dioxide. The energy released from transferring the electrons from high-energy states in NADH and quinol is conserved first as a proton gradient and converted to ATP via ATP synthase. 
This generates an additional 28 molecules of ATP (24 from the 8 NADH + 4 from the 2 quinols), for a total of 32 molecules of ATP conserved per degraded glucose (two from glycolysis + two from the citrate cycle). It is clear that using oxygen to completely oxidize glucose provides an organism with far more energy than any oxygen-independent metabolic feature, and this is thought to be the reason why complex life appeared only after Earth's atmosphere accumulated large amounts of oxygen. Gluconeogenesis In vertebrates, vigorously contracting skeletal muscles (during weightlifting or sprinting, for example) do not receive enough oxygen to meet the energy demand, and so they shift to anaerobic metabolism, converting glucose to lactate. Glucose can also be generated from non-carbohydrate sources, such as fats and proteins. This only happens when glycogen supplies in the liver are worn out. The pathway is a crucial reversal of glycolysis from pyruvate to glucose and can use many sources, such as amino acids, glycerol and intermediates of the Krebs cycle. Large-scale protein and fat catabolism usually occurs when an organism suffers from starvation or certain endocrine disorders. The liver regenerates the glucose, using a process called gluconeogenesis. This process is not quite the opposite of glycolysis, and actually requires three times the amount of energy gained from glycolysis (six molecules of ATP are used, compared to the two gained in glycolysis). Analogous to the above reactions, the glucose produced can then undergo glycolysis in tissues that need energy, be stored as glycogen (or starch in plants), or be converted to other monosaccharides or joined into di- or oligosaccharides. The combined pathway of glycolysis during exercise, lactate's crossing via the bloodstream to the liver, subsequent gluconeogenesis and release of glucose into the bloodstream is called the Cori cycle. Relationship to other "molecular-scale" biological sciences Researchers in biochemistry use specific techniques native to biochemistry, but increasingly combine these with techniques and ideas developed in the fields of genetics, molecular biology, and biophysics. There is not a defined line between these disciplines. Biochemistry studies the chemistry required for biological activity of molecules; molecular biology studies their biological activity; genetics studies their heredity, which happens to be carried by their genome. The following descriptions depict one possible view of the relationships between the fields: Biochemistry is the study of the chemical substances and vital processes occurring in live organisms. Biochemists focus heavily on the role, function, and structure of biomolecules. The study of the chemistry behind biological processes and the synthesis of biologically active molecules are applications of biochemistry. Biochemistry studies life at the atomic and molecular level. Genetics is the study of the effect of genetic differences in organisms. This can often be inferred by the absence of a normal component (e.g. one gene), as in the study of "mutants" – organisms that lack one or more functional components with respect to the so-called "wild type" or normal phenotype. Genetic interactions (epistasis) can often confound simple interpretations of such "knockout" studies. Molecular biology is the study of the molecular underpinnings of biological phenomena, focusing on molecular synthesis, modification, mechanisms and interactions. 
The central dogma of molecular biology, where genetic material is transcribed into RNA and then translated into protein, despite being oversimplified, still provides a good starting point for understanding the field. This concept has been revised in light of emerging novel roles for RNA. Chemical biology seeks to develop new tools based on small molecules that allow minimal perturbation of biological systems while providing detailed information about their function. Further, chemical biology employs biological systems to create non-natural hybrids between biomolecules and synthetic devices (for example emptied viral capsids that can deliver gene therapy or drug molecules).
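As a toy illustration of the transcription-and-translation flow summarised by the central dogma above, the following sketch turns a short, made-up coding-strand DNA sequence into mRNA and then into a peptide. The codon table is deliberately tiny, covering only the codons that occur in this example, so it is a didactic sketch rather than a complete genetic code.

```python
# Toy model of the central dogma: DNA -> mRNA -> protein.
# The codon table below is intentionally incomplete; it covers only this example.
CODON_TABLE = {"AUG": "Met", "GCU": "Ala", "GGU": "Gly", "UGG": "Trp", "UAA": "STOP"}

def transcribe(coding_strand: str) -> str:
    """mRNA has the same sequence as the coding strand, with uracil replacing thymine."""
    return coding_strand.replace("T", "U")

def translate(mrna: str):
    protein = []
    for i in range(0, len(mrna) - 2, 3):          # read the message codon by codon
        amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

dna = "ATGGCTGGTTGGTAA"                  # arbitrary example gene fragment
mrna = transcribe(dna)                   # "AUGGCUGGUUGGUAA"
print(translate(mrna))                   # ['Met', 'Ala', 'Gly', 'Trp']
```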
Biology and health sciences
Chemistry
null
3959
https://en.wikipedia.org/wiki/Boolean%20algebra%20%28structure%29
Boolean algebra (structure)
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets, or its elements can be viewed as generalized truth values. It is also a special case of a De Morgan algebra and a Kleene algebra (with involution). Every Boolean algebra gives rise to a Boolean ring, and vice versa, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨). However, the theory of Boolean rings has an inherent asymmetry between the two operators, while the axioms and theorems of Boolean algebra express the symmetry of the theory described by the duality principle. History The term "Boolean algebra" honors George Boole (1815–1864), a self-educated English mathematician. He introduced the algebraic system initially in a small pamphlet, The Mathematical Analysis of Logic, published in 1847 in response to an ongoing public controversy between Augustus De Morgan and William Hamilton, and later as a more substantial book, The Laws of Thought, published in 1854. Boole's formulation differs from that described above in some important respects. For example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall Stone in the 1930s, and with Garrett Birkhoff's 1940 Lattice Theory. In the 1960s, Paul Cohen, Dana Scott, and others found deep new results in mathematical logic and axiomatic set theory using offshoots of Boolean algebra, namely forcing and Boolean-valued models. Definition A Boolean algebra is a set A, equipped with two binary operations ∧ (called "meet" or "and") and ∨ (called "join" or "or"), a unary operation ¬ (called "complement" or "not") and two elements 0 and 1 in A (called "bottom" and "top", or "least" and "greatest" element, also denoted by the symbols ⊥ and ⊤, respectively), such that for all elements a, b and c of A, the following axioms hold:

a ∨ (b ∨ c) = (a ∨ b) ∨ c and a ∧ (b ∧ c) = (a ∧ b) ∧ c (associativity)
a ∨ b = b ∨ a and a ∧ b = b ∧ a (commutativity)
a ∨ (a ∧ b) = a and a ∧ (a ∨ b) = a (absorption)
a ∨ 0 = a and a ∧ 1 = a (identity)
a ∨ (b ∧ c) = (a ∨ b) ∧ (a ∨ c) and a ∧ (b ∨ c) = (a ∧ b) ∨ (a ∧ c) (distributivity)
a ∨ ¬a = 1 and a ∧ ¬a = 0 (complements)

Note, however, that the absorption law and even the associativity law can be excluded from the set of axioms as they can be derived from the other axioms (see Proven properties). A Boolean algebra with only one element is called a trivial Boolean algebra or a degenerate Boolean algebra. (In older works, some authors required 0 and 1 to be distinct elements in order to exclude this case.) It follows from the last three pairs of axioms above (identity, distributivity and complements), or from the absorption axiom, that a ∧ b = a if and only if a ∨ b = b. The relation ≤ defined by a ≤ b if these equivalent conditions hold is a partial order with least element 0 and greatest element 1. 
The meet and the join of two elements coincide with their infimum and supremum, respectively, with respect to ≤. The first four pairs of axioms constitute a definition of a bounded lattice. It follows from the first five pairs of axioms that any complement is unique. The set of axioms is self-dual in the sense that if one exchanges ∨ with ∧ and 0 with 1 in an axiom, the result is again an axiom. Therefore, by applying this operation to a Boolean algebra (or Boolean lattice), one obtains another Boolean algebra with the same elements; it is called its dual. Examples The simplest non-trivial Boolean algebra, the two-element Boolean algebra, has only two elements, 0 and 1, with the operations given by the familiar truth tables for "and", "or" and "not". It has applications in logic, interpreting 0 as false, 1 as true, ∧ as and, ∨ as or, and ¬ as not. Expressions involving variables and the Boolean operations represent statement forms, and two such expressions can be shown to be equal using the above axioms if and only if the corresponding statement forms are logically equivalent. The two-element Boolean algebra is also used for circuit design in electrical engineering; here 0 and 1 represent the two different states of one bit in a digital circuit, typically high and low voltage. Circuits are described by expressions containing variables, and two such expressions are equal for all values of the variables if and only if the corresponding circuits have the same input–output behavior. Furthermore, every possible input–output behavior can be modeled by a suitable Boolean expression. The two-element Boolean algebra is also important in the general theory of Boolean algebras, because an equation involving several variables is generally true in all Boolean algebras if and only if it is true in the two-element Boolean algebra (which can be checked by a trivial brute force algorithm for small numbers of variables). This can for example be used to show that the following laws (consensus theorems) are generally valid in all Boolean algebras: (a ∧ b) ∨ (¬a ∧ c) ∨ (b ∧ c) = (a ∧ b) ∨ (¬a ∧ c) and, dually, (a ∨ b) ∧ (¬a ∨ c) ∧ (b ∨ c) = (a ∨ b) ∧ (¬a ∨ c). The power set (set of all subsets) of any given nonempty set S forms a Boolean algebra, an algebra of sets, with the two operations ∨ := ∪ (union) and ∧ := ∩ (intersection). The smallest element 0 is the empty set and the largest element 1 is the set S itself. After the two-element Boolean algebra, the simplest Boolean algebra is that defined by the power set of two atoms, which has four elements. The set of all subsets of S that are either finite or cofinite is a Boolean algebra and an algebra of sets called the finite–cofinite algebra. If S is infinite then the set of all cofinite subsets of S, which is called the Fréchet filter, is a free ultrafilter on the finite–cofinite algebra. However, the Fréchet filter is not an ultrafilter on the power set of S. Starting with the propositional calculus with κ sentence symbols, form the Lindenbaum algebra (that is, the set of sentences in the propositional calculus modulo logical equivalence). This construction yields a Boolean algebra. It is in fact the free Boolean algebra on κ generators. A truth assignment in propositional calculus is then a Boolean algebra homomorphism from this algebra to the two-element Boolean algebra. Given any linearly ordered set L with a least element, the interval algebra is the smallest Boolean algebra of subsets of L containing all of the half-open intervals [a, b) such that a is in L and b is either in L or equal to ∞. Interval algebras are useful in the study of Lindenbaum–Tarski algebras; every countable Boolean algebra is isomorphic to an interval algebra. 
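The brute-force check over the two-element Boolean algebra mentioned above is easy to carry out in code. The following sketch verifies the first of the consensus laws by exhausting all eight assignments of 0 and 1 to the three variables; by the remark above, this suffices to establish the law in every Boolean algebra.

```python
from itertools import product

# Operations of the two-element Boolean algebra {0, 1}.
def meet(a, b): return a & b          # ∧
def join(a, b): return a | b          # ∨
def comp(a):    return 1 - a          # ¬

# Consensus law: (a ∧ b) ∨ (¬a ∧ c) ∨ (b ∧ c) = (a ∧ b) ∨ (¬a ∧ c)
def consensus_holds():
    for a, b, c in product((0, 1), repeat=3):     # all 8 assignments
        lhs = join(join(meet(a, b), meet(comp(a), c)), meet(b, c))
        rhs = join(meet(a, b), meet(comp(a), c))
        if lhs != rhs:
            return False
    return True

print(consensus_holds())   # True
```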
For any natural number , the set of all positive divisors of , defining if divides , forms a distributive lattice. This lattice is a Boolean algebra if and only if is square-free. The bottom and the top elements of this Boolean algebra are the natural numbers and , respectively. The complement of is given by . The meet and the join of and are given by the greatest common divisor () and the least common multiple () of and , respectively. The ring addition is given by . The picture shows an example for . As a counter-example, considering the non-square-free , the greatest common divisor of 30 and its complement 2 would be 2, while it should be the bottom element 1. Other examples of Boolean algebras arise from topological spaces: if is a topological space, then the collection of all subsets of that are both open and closed forms a Boolean algebra with the operations (union) and (intersection). If is an arbitrary ring then its set of central idempotents, which is the set becomes a Boolean algebra when its operations are defined by and . Homomorphisms and isomorphisms A homomorphism between two Boolean algebras and is a function such that for all , in : , , , . It then follows that for all in . The class of all Boolean algebras, together with this notion of morphism, forms a full subcategory of the category of lattices. An isomorphism between two Boolean algebras and is a homomorphism with an inverse homomorphism, that is, a homomorphism such that the composition is the identity function on , and the composition is the identity function on . A homomorphism of Boolean algebras is an isomorphism if and only if it is bijective. Boolean rings Every Boolean algebra gives rise to a ring by defining (this operation is called symmetric difference in the case of sets and XOR in the case of logic) and . The zero element of this ring coincides with the 0 of the Boolean algebra; the multiplicative identity element of the ring is the of the Boolean algebra. This ring has the property that for all in ; rings with this property are called Boolean rings. Conversely, if a Boolean ring is given, we can turn it into a Boolean algebra by defining and . Since these two constructions are inverses of each other, we can say that every Boolean ring arises from a Boolean algebra, and vice versa. Furthermore, a map is a homomorphism of Boolean algebras if and only if it is a homomorphism of Boolean rings. The categories of Boolean rings and Boolean algebras are equivalent; in fact the categories are isomorphic. Hsiang (1985) gave a rule-based algorithm to check whether two arbitrary expressions denote the same value in every Boolean ring. More generally, Boudet, Jouannaud, and Schmidt-Schauß (1989) gave an algorithm to solve equations between arbitrary Boolean-ring expressions. Employing the similarity of Boolean rings and Boolean algebras, both algorithms have applications in automated theorem proving. Ideals and filters An ideal of the Boolean algebra is a nonempty subset such that for all , in we have in and for all in we have in . This notion of ideal coincides with the notion of ring ideal in the Boolean ring . An ideal of is called prime if and if in always implies in or in . Furthermore, for every we have that , and then if is prime we have or for every . An ideal of is called maximal if and if the only ideal properly containing is itself. For an ideal , if and , then or is contained in another proper ideal . 
Hence, such an is not maximal, and therefore the notions of prime ideal and maximal ideal are equivalent in Boolean algebras. Moreover, these notions coincide with ring theoretic ones of prime ideal and maximal ideal in the Boolean ring . The dual of an ideal is a filter. A filter of the Boolean algebra is a nonempty subset such that for all , in we have in and for all in we have in . The dual of a maximal (or prime) ideal in a Boolean algebra is ultrafilter. Ultrafilters can alternatively be described as 2-valued morphisms from to the two-element Boolean algebra. The statement every filter in a Boolean algebra can be extended to an ultrafilter is called the ultrafilter lemma and cannot be proven in Zermelo–Fraenkel set theory (ZF), if ZF is consistent. Within ZF, the ultrafilter lemma is strictly weaker than the axiom of choice. The ultrafilter lemma has many equivalent formulations: every Boolean algebra has an ultrafilter, every ideal in a Boolean algebra can be extended to a prime ideal, etc. Representations It can be shown that every finite Boolean algebra is isomorphic to the Boolean algebra of all subsets of a finite set. Therefore, the number of elements of every finite Boolean algebra is a power of two. Stone's celebrated representation theorem for Boolean algebras states that every Boolean algebra is isomorphic to the Boolean algebra of all clopen sets in some (compact totally disconnected Hausdorff) topological space. Axiomatics The first axiomatization of Boolean lattices/algebras in general was given by the English philosopher and mathematician Alfred North Whitehead in 1898. It included the above axioms and additionally and . In 1904, the American mathematician Edward V. Huntington (1874–1952) gave probably the most parsimonious axiomatization based on , , , even proving the associativity laws (see box). He also proved that these axioms are independent of each other. In 1933, Huntington set out the following elegant axiomatization for Boolean algebra. It requires just one binary operation and a unary functional symbol , to be read as 'complement', which satisfy the following laws: Herbert Robbins immediately asked: If the Huntington equation is replaced with its dual, to wit: do (1), (2), and (4) form a basis for Boolean algebra? Calling (1), (2), and (4) a Robbins algebra, the question then becomes: Is every Robbins algebra a Boolean algebra? This question (which came to be known as the Robbins conjecture) remained open for decades, and became a favorite question of Alfred Tarski and his students. In 1996, William McCune at Argonne National Laboratory, building on earlier work by Larry Wos, Steve Winker, and Bob Veroff, answered Robbins's question in the affirmative: Every Robbins algebra is a Boolean algebra. Crucial to McCune's proof was the computer program EQP he designed. For a simplification of McCune's proof, see Dahn (1998). Further work has been done for reducing the number of axioms; see Minimal axioms for Boolean algebra. Generalizations Removing the requirement of existence of a unit from the axioms of Boolean algebra yields "generalized Boolean algebras". Formally, a distributive lattice is a generalized Boolean lattice, if it has a smallest element and for any elements and in such that , there exists an element such that and . Defining as the unique such that and , we say that the structure is a generalized Boolean algebra, while is a generalized Boolean semilattice. Generalized Boolean lattices are exactly the ideals of Boolean lattices. 
A structure that satisfies all axioms for Boolean algebras except the two distributivity axioms is called an orthocomplemented lattice. Orthocomplemented lattices arise naturally in quantum logic as lattices of closed linear subspaces for separable Hilbert spaces.
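Returning to the divisor-lattice example from the Examples section above: a minimal sketch, assuming the square-free case n = 30, with the meet given by the greatest common divisor, the join by the least common multiple, and the complement of a by n/a. It checks the complement axioms for every divisor.

```python
from math import gcd

n = 30   # square-free: 2 * 3 * 5

def divisors(m):
    return [d for d in range(1, m + 1) if m % d == 0]

def meet(a, b): return gcd(a, b)               # greatest common divisor
def join(a, b): return a * b // gcd(a, b)      # least common multiple
def comp(a):    return n // a                  # complement

# Complement axioms: a ∨ ¬a = top (= n) and a ∧ ¬a = bottom (= 1).
for a in divisors(n):
    assert join(a, comp(a)) == n
    assert meet(a, comp(a)) == 1

print(divisors(n))   # [1, 2, 3, 5, 6, 10, 15, 30] – the eight elements of the algebra
# For a non-square-free modulus (e.g. 60, where the complement of 30 is 2 and
# gcd(30, 2) = 2 rather than 1), these assertions fail, matching the counter-example above.
```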
Mathematics
Order theory
null
3973
https://en.wikipedia.org/wiki/Bicycle
Bicycle
A bicycle, also called a pedal cycle, bike, push-bike or cycle, is a human-powered or motor-assisted, pedal-driven, single-track vehicle, with two wheels attached to a frame, one behind the other. A person who rides a bicycle is called a cyclist, or bicyclist. Bicycles were introduced in the 19th century in Europe. By the early 21st century there were more than 1 billion bicycles. There are many more bicycles than cars. Bicycles are the principal means of transport in many regions. They also provide a popular form of recreation, and have been adapted for use as children's toys. Bicycles are used for fitness, military and police applications, courier services, bicycle racing, and artistic cycling. The basic shape and configuration of a typical upright or "safety" bicycle has changed little since the first chain-driven model was developed around 1885. However, many details have been improved, especially since the advent of modern materials and computer-aided design. These have allowed for a proliferation of specialized designs for many types of cycling. In the 21st century, electric bicycles have become popular. The bicycle's invention has had an enormous effect on society, both in terms of culture and of advancing modern industrial methods. Several components that played a key role in the development of the automobile were initially invented for use in the bicycle, including ball bearings, pneumatic tires, chain-driven sprockets, and tension-spoked wheels. Etymology The word bicycle first appeared in English print in The Daily News in 1868, to describe "Bysicles and trysicles" on the "Champs Elysées and Bois de Boulogne". The word was first used in 1847 in a French publication to describe an unidentified two-wheeled vehicle, possibly a carriage. The design of the bicycle was an advance on the velocipede, although the words were used with some degree of overlap for a time. Other words for bicycle include "bike", "pushbike", "pedal cycle", or "cycle". In Unicode, the code point for "bicycle" is 0x1F6B2. The entity &#x1F6B2; in HTML produces 🚲. Although bike and cycle are used interchangeably to refer mostly to two types of two-wheelers, the terms still vary across the world. In India, for example, a cycle refers only to a two-wheeler using pedal power, whereas the term bike is used, instead of motorcycle/motorbike, to describe a two-wheeler using an internal combustion engine or electric motor as its source of motive power. History The "dandy horse", also called Draisienne or Laufmaschine ("running machine"), was the first human means of transport to use only two wheels in tandem and was invented by the German Baron Karl von Drais. It is regarded as the first bicycle and von Drais is seen as the "father of the bicycle", but it did not have pedals. Von Drais introduced it to the public in Mannheim in 1817 and in Paris in 1818. Its rider sat astride a wooden frame supported by two in-line wheels and pushed the vehicle along with his or her feet while steering the front wheel. The first mechanically propelled, two-wheeled vehicle may have been built by Kirkpatrick MacMillan, a Scottish blacksmith, in 1839, although the claim is often disputed. He is also associated with the first recorded instance of a cycling traffic offense, when a Glasgow newspaper in 1842 reported an accident in which an anonymous "gentleman from Dumfries-shire... bestride a velocipede... of ingenious design" knocked over a little girl in Glasgow and was fined five shillings. 
In the early 1860s, Frenchmen Pierre Michaux and Pierre Lallement took bicycle design in a new direction by adding a mechanical crank drive with pedals on an enlarged front wheel (the velocipede). This was the first bicycle design to go into mass production. Another French inventor named Douglas Grasso had a failed prototype of Pierre Lallement's bicycle several years earlier. Several inventions followed using rear-wheel drive, the best known being the rod-driven velocipede by Scotsman Thomas McCall in 1869. In that same year, bicycle wheels with wire spokes were patented by Eugène Meyer of Paris. The French vélocipède, made of iron and wood, developed into the "penny-farthing" (historically known as an "ordinary bicycle", a retronym, since there was then no other kind). It featured a tubular steel frame on which were mounted wire-spoked wheels with solid rubber tires. These bicycles were difficult to ride due to their high seat and poor weight distribution. In 1868 Rowley Turner, a sales agent of the Coventry Sewing Machine Company (which soon became the Coventry Machinists Company), brought a Michaux cycle to Coventry, England. His uncle, Josiah Turner, and business partner James Starley, used this as a basis for the 'Coventry Model' in what became Britain's first cycle factory. The dwarf ordinary addressed some of these faults by reducing the front wheel diameter and setting the seat further back. This, in turn, required gearing—effected in a variety of ways—to efficiently use pedal power. Having to both pedal and steer via the front wheel remained a problem. Englishman J.K. Starley (nephew of James Starley), J.H. Lawson, and Shergold solved this problem by introducing the chain drive (originated by the unsuccessful "bicyclette" of Englishman Henry Lawson), connecting the frame-mounted cranks to the rear wheel. These models were known as safety bicycles, dwarf safeties, or upright bicycles for their lower seat height and better weight distribution, although without pneumatic tires the ride of the smaller-wheeled bicycle would be much rougher than that of the larger-wheeled variety. Starley's 1885 Rover, manufactured in Coventry, is usually described as the first recognizably modern bicycle. Soon the seat tube was added, which created the modern bike's double-triangle diamond frame. Further innovations increased comfort and ushered in a second bicycle craze, the 1890s Golden Age of Bicycles. In 1888, Scotsman John Boyd Dunlop introduced the first practical pneumatic tire, which soon became universal. Willie Hume demonstrated the supremacy of Dunlop's tyres in 1889, winning the tyre's first-ever races in Ireland and then England. Soon after, the rear freewheel was developed, enabling the rider to coast. This refinement led to the 1890s invention of coaster brakes. Dérailleur gears and hand-operated Bowden cable-pull brakes were also developed during these years, but were only slowly adopted by casual riders. The Svea Velocipede, with a vertical pedal arrangement and locking hubs, was introduced in 1892 by the Swedish engineers Fredrik Ljungström and Birger Ljungström. It attracted attention at the World Fair and was produced in a few thousand units. In the 1870s many cycling clubs flourished. They were popular in a time when there were no cars on the market and the principal mode of transportation was horse-drawn vehicles, such as the horse and buggy or the horsecar. Among the earliest clubs was The Bicycle Touring Club, which has operated since 1878. 
By the turn of the century, cycling clubs flourished on both sides of the Atlantic, and touring and racing became widely popular. The Raleigh Bicycle Company was founded in Nottingham, England in 1888. It became the biggest bicycle manufacturing company in the world, making over two million bikes per year. Bicycles and horse buggies were the two mainstays of private transportation just prior to the automobile, and the grading of smooth roads in the late 19th century was stimulated by the widespread advertising, production, and use of these devices. More than 1 billion bicycles have been manufactured worldwide as of the early 21st century. Bicycles are the most common vehicle of any kind in the world, and the most numerous model of any kind of vehicle, whether human-powered or motor vehicle, is the Chinese Flying Pigeon, with numbers exceeding 500 million. The next most numerous vehicle, the Honda Super Cub motorcycle, has more than 100 million units made, while most produced car, the Toyota Corolla, has reached 44 million and counting. Uses Bicycles are used for transportation, bicycle commuting, and utility cycling. They are also used professionally by mail carriers, paramedics, police, messengers, and general delivery services. Military uses of bicycles include communications, reconnaissance, troop movement, supply of provisions, and patrol, such as in bicycle infantries. They are also used for recreational purposes, including bicycle touring, mountain biking, physical fitness, and play. Bicycle sports include racing, BMX racing, track racing, criterium, roller racing, sportives and time trials. Major multi-stage professional events are the Giro d'Italia, the Tour de France, the Vuelta a España, the Tour de Pologne, and the Volta a Portugal. They are also used for entertainment and pleasure in other ways, such as in organised mass rides, artistic cycling and freestyle BMX. Technical aspects The bicycle has undergone continual adaptation and improvement since its inception. These innovations have continued with the advent of modern materials and computer-aided design, allowing for a proliferation of specialized bicycle types, improved bicycle safety, and riding comfort. Types Bicycles can be categorized in many different ways: by function, by number of riders, by general construction, by gearing or by means of propulsion. The more common types include utility bicycles, mountain bicycles, racing bicycles, touring bicycles, hybrid bicycles, cruiser bicycles, and BMX bikes. Less common are tandems, low riders, tall bikes, fixed gear, folding models, amphibious bicycles, cargo bikes, recumbents and electric bicycles. Unicycles, tricycles and quadracycles are not strictly bicycles, as they have respectively one, three and four wheels, but are often referred to informally as "bikes" or "cycles". Dynamics A bicycle stays upright while moving forward by being steered so as to keep its center of mass over the wheels. This steering is usually provided by the rider, but under certain conditions may be provided by the bicycle itself. The combined center of mass of a bicycle and its rider must lean into a turn to successfully navigate it. This lean is induced by a method known as countersteering, which can be performed by the rider turning the handlebars directly with the hands or indirectly by leaning the bicycle. Short-wheelbase or tall bicycles, when braking, can generate enough stopping force at the front wheel to flip longitudinally. 
The act of purposefully using this force to lift the rear wheel and balance on the front without tipping over is a trick known as a stoppie, endo, or front wheelie. Performance The bicycle is extraordinarily efficient in both biological and mechanical terms. The bicycle is the most efficient human-powered means of transportation in terms of energy a person must expend to travel a given distance. From a mechanical viewpoint, up to 99% of the energy delivered by the rider into the pedals is transmitted to the wheels, although the use of gearing mechanisms may reduce this by 10–15%. In terms of the ratio of cargo weight a bicycle can carry to total weight, it is also an efficient means of cargo transportation. A human traveling on a bicycle at low to medium speeds of around uses only the power required to walk. Air drag, which is proportional to the square of speed, requires dramatically higher power outputs as speeds increase. If the rider is sitting upright, the rider's body creates about 75% of the total drag of the bicycle/rider combination. Drag can be reduced by seating the rider in a more aerodynamically streamlined position. Drag can also be reduced by covering the bicycle with an aerodynamic fairing. The fastest recorded unpaced speed on a flat surface is . In addition, the carbon dioxide generated in the production and transportation of the food required by the bicyclist, per mile traveled, is less than that generated by energy efficient motorcars. Parts Frame The great majority of modern bicycles have a frame with upright seating that looks much like the first chain-driven bike. These upright bicycles almost always feature the diamond frame, a truss consisting of two triangles: the front triangle and the rear triangle. The front triangle consists of the head tube, top tube, down tube, and seat tube. The head tube contains the headset, the set of bearings that allows the fork to turn smoothly for steering and balance. The top tube connects the head tube to the seat tube at the top, and the down tube connects the head tube to the bottom bracket. The rear triangle consists of the seat tube and paired chain stays and seat stays. The chain stays run parallel to the chain, connecting the bottom bracket to the rear dropout, where the axle for the rear wheel is held. The seat stays connect the top of the seat tube (at or near the same point as the top tube) to the rear fork ends. Historically, women's bicycle frames had a top tube that connected in the middle of the seat tube instead of the top, resulting in a lower standover height at the expense of compromised structural integrity, since this places a strong bending load in the seat tube, and bicycle frame members are typically weak in bending. This design, referred to as a step-through frame or as an open frame, allows the rider to mount and dismount in a dignified way while wearing a skirt or dress. While some women's bicycles continue to use this frame style, there is also a variation, the mixte, which splits the top tube laterally into two thinner top tubes that bypass the seat tube on each side and connect to the rear fork ends. The ease of stepping through is also appreciated by those with limited flexibility or other joint problems. Because of its persistent image as a "women's" bicycle, step-through frames are not common for larger frames. Step-throughs were popular partly for practical reasons and partly for social mores of the day. 
For most of the history of bicycles' popularity women have worn long skirts, and the lower frame accommodated these better than the top-tube. Furthermore, it was considered "unladylike" for women to open their legs to mount and dismount—in more conservative times women who rode bicycles at all were vilified as immoral or immodest. These practices were akin to the older practice of riding horse sidesaddle. Another style is the recumbent bicycle. These are inherently more aerodynamic than upright versions, as the rider may lean back onto a support and operate pedals that are on about the same level as the seat. The world's fastest bicycle is a recumbent bicycle but this type was banned from competition in 1934 by the Union Cycliste Internationale. Historically, materials used in bicycles have followed a similar pattern as in aircraft, the goal being high strength and low weight. Since the late 1930s alloy steels have been used for frame and fork tubes in higher quality machines. By the 1980s aluminum welding techniques had improved to the point that aluminum tube could safely be used in place of steel. Since then aluminum alloy frames and other components have become popular due to their light weight, and most mid-range bikes are now principally aluminum alloy of some kind. More expensive bikes use carbon fibre due to its significantly lighter weight and profiling ability, allowing designers to make a bike both stiff and compliant by manipulating the lay-up. Virtually all professional racing bicycles now use carbon fibre frames, as they have the best strength to weight ratio. A typical modern carbon fiber frame can weigh less than . Other exotic frame materials include titanium and advanced alloys. Bamboo, a natural composite material with high strength-to-weight ratio and stiffness has been used for bicycles since 1894. Recent versions use bamboo for the primary frame with glued metal connections and parts, priced as exotic models. Drivetrain and gearing The drivetrain begins with pedals which rotate the cranks, which are held in axis by the bottom bracket. Most bicycles use a chain to transmit power to the rear wheel. A very small number of bicycles use a shaft drive to transmit power, or special belts. Hydraulic bicycle transmissions have been built, but they are currently inefficient and complex. Since cyclists' legs are most efficient over a narrow range of pedaling speeds, or cadence, a variable gear ratio helps a cyclist to maintain an optimum pedalling speed while covering varied terrain. Some, mainly utility, bicycles use hub gears with between 3 and 14 ratios, but most use the generally more efficient dérailleur system, by which the chain is moved between different cogs called chainrings and sprockets to select a ratio. A dérailleur system normally has two dérailleurs, or mechs, one at the front to select the chainring and another at the back to select the sprocket. Most bikes have two or three chainrings, and from 5 to 11 sprockets on the back, with the number of theoretical gears calculated by multiplying front by back. In reality, many gears overlap or require the chain to run diagonally, so the number of usable gears is fewer. An alternative to chaindrive is to use a synchronous belt. These are toothed and work much the same as a chain—popular with commuters and long distance cyclists they require little maintenance. They cannot be shifted across a cassette of sprockets, and are used either as single speed or with a hub gear. 
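To make the gear-count arithmetic described above concrete (the theoretical number of gears being the number of chainrings multiplied by the number of sprockets), here is a small sketch. The tooth counts are an illustrative assumption for a common 2×11 road setup, not figures taken from the text.

```python
# Theoretical gear count and gear ratios for an assumed 2x11 road drivetrain.
chainrings = [34, 50]                                        # front tooth counts (assumed)
sprockets = [11, 12, 13, 14, 16, 18, 20, 22, 25, 28, 32]     # rear cassette (assumed)

print(len(chainrings) * len(sprockets), "theoretical gears")   # 22

ratios = sorted(front / rear for front in chainrings for rear in sprockets)
print(f"lowest ratio {ratios[0]:.2f}, highest ratio {ratios[-1]:.2f}")
# In practice several of these ratios overlap or require an extreme chain line,
# so the number of distinct usable gears is smaller, as noted above.
```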
Different gears and ranges of gears are appropriate for different people and styles of cycling. Multi-speed bicycles allow gear selection to suit the circumstances: a cyclist could use a high gear when cycling downhill, a medium gear when cycling on a flat road, and a low gear when cycling uphill. In a lower gear every turn of the pedals leads to fewer rotations of the rear wheel. This allows the energy required to move the same distance to be distributed over more pedal turns, reducing fatigue when riding uphill, with a heavy load, or against strong winds. A higher gear allows a cyclist to make fewer pedal turns to maintain a given speed, but with more effort per turn of the pedals. With a chain drive transmission, a chainring attached to a crank drives the chain, which in turn rotates the rear wheel via the rear sprocket(s) (cassette or freewheel). There are four gearing options: two-speed hub gear integrated with chain ring, up to 3 chain rings, up to 12 sprockets, hub gear built into rear wheel (3-speed to 14-speed). The most common options are either a rear hub or multiple chain rings combined with multiple sprockets (other combinations of options are possible but less common). Steering The handlebars connect to the stem that connects to the fork that connects to the front wheel, and the whole assembly connects to the bike and rotates about the steering axis via the headset bearings. Three styles of handlebar are common. Upright handlebars, the norm in Europe and elsewhere until the 1970s, curve gently back toward the rider, offering a natural grip and comfortable upright position. Drop handlebars "drop" as they curve forward and down, offering the cyclist best braking power from a more aerodynamic "crouched" position, as well as more upright positions in which the hands grip the brake lever mounts, the forward curves, or the upper flat sections for increasingly upright postures. Mountain bikes generally feature a 'straight handlebar' or 'riser bar' with varying degrees of sweep backward and centimeters rise upwards, as well as wider widths which can provide better handling due to increased leverage against the wheel. Seating Saddles also vary with rider preference, from the cushioned ones favored by short-distance riders to narrower saddles which allow more room for leg swings. Comfort depends on riding position. With comfort bikes and hybrids, cyclists sit high over the seat, their weight directed down onto the saddle, such that a wider and more cushioned saddle is preferable. For racing bikes where the rider is bent over, weight is more evenly distributed between the handlebars and saddle, the hips are flexed, and a narrower and harder saddle is more efficient. Differing saddle designs exist for male and female cyclists, accommodating the genders' differing anatomies and sit bone width measurements, although bikes typically are sold with saddles most appropriate for men. Suspension seat posts and seat springs provide comfort by absorbing shock but can add to the overall weight of the bicycle. A recumbent bicycle has a reclined chair-like seat that some riders find more comfortable than a saddle, especially riders who suffer from certain types of seat, back, neck, shoulder, or wrist pain. Recumbent bicycles may have either under-seat or over-seat steering. 
Brakes Bicycle brakes may be rim brakes, in which friction pads are compressed against the wheel rims; hub brakes, where the mechanism is contained within the wheel hub, or disc brakes, where pads act on a rotor attached to the hub. Most road bicycles use rim brakes, but some use disc brakes. Disc brakes are more common for mountain bikes, tandems and recumbent bicycles than on other types of bicycles, due to their increased power, coupled with an increased weight and complexity. With hand-operated brakes, force is applied to brake levers mounted on the handlebars and transmitted via Bowden cables or hydraulic lines to the friction pads, which apply pressure to the braking surface, causing friction which slows the bicycle down. A rear hub brake may be either hand-operated or pedal-actuated, as in the back pedal coaster brakes which were popular in North America until the 1960s. Track bicycles do not have brakes, because all riders ride in the same direction around a track which does not necessitate sharp deceleration. Track riders are still able to slow down because all track bicycles are fixed-gear, meaning that there is no freewheel. Without a freewheel, coasting is impossible, so when the rear wheel is moving, the cranks are moving. To slow down, the rider applies resistance to the pedals, acting as a braking system which can be as effective as a conventional rear wheel brake, but not as effective as a front wheel brake. Suspension Bicycle suspension refers to the system or systems used to suspend the rider and all or part of the bicycle. This serves two purposes: to keep the wheels in continuous contact with the ground, improving control, and to isolate the rider and luggage from jarring due to rough surfaces, improving comfort. Bicycle suspensions are used primarily on mountain bicycles, but are also common on hybrid bicycles, as they can help deal with problematic vibration from poor surfaces. Suspension is especially important on recumbent bicycles, since while an upright bicycle rider can stand on the pedals to achieve some of the benefits of suspension, a recumbent rider cannot. Basic mountain bicycles and hybrids usually have front suspension only, whilst more sophisticated ones also have rear suspension. Road bicycles tend to have no suspension. Wheels and tires The wheel axle fits into fork ends in the frame and fork. A pair of wheels may be called a wheelset, especially in the context of ready-built "off the shelf", performance-oriented wheels. Tires vary enormously depending on their intended purpose. Road bicycles use tires 18 to 25 millimeters wide, most often completely smooth, or slick, and inflated to high pressure to roll fast on smooth surfaces. Off-road tires are usually between wide, and have treads for gripping in muddy conditions or metal studs for ice. Groupset Groupset generally refers to all of the components that make up a bicycle excluding the bicycle frame, fork, stem, wheels, tires, and rider contact points, such as the saddle and handlebars. Accessories Some components, which are often optional accessories on sports bicycles, are standard features on utility bicycles to enhance their usefulness, comfort, safety and visibility. Fenders with spoilers (mudflaps) protect the cyclist and moving parts from spray when riding through wet areas. In some countries (e.g. Germany, UK), fenders are called mudguards. The chainguards protect clothes from oil on the chain while preventing clothing from being caught between the chain and crankset teeth. 
Kick stands keep bicycles upright when parked, and bike locks deter theft. Front-mounted baskets, front or rear luggage carriers or racks, and panniers mounted above either or both wheels can be used to carry equipment or cargo. Pegs can be fastened to one, or both of the wheel hubs to either help the rider perform certain tricks, or allow a place for extra riders to stand, or rest. Parents sometimes add rear-mounted child seats, an auxiliary saddle fitted to the crossbar, or both to transport children. Bicycles can also be fitted with a hitch to tow a trailer for carrying cargo, a child, or both. Toe-clips and toestraps and clipless pedals help keep the foot locked in the proper pedal position and enable cyclists to pull and push the pedals. Technical accessories include cyclocomputers for measuring speed, distance, heart rate, GPS data etc. Other accessories include lights, reflectors, mirrors, racks, trailers, bags, water bottles and cages, and bell. Bicycle lights, reflectors, and helmets are required by law in some geographic regions depending on the legal code. It is more common to see bicycles with bottle generators, dynamos, lights, fenders, racks and bells in Europe. Bicyclists also have specialized form fitting and high visibility clothing. Children's bicycles may be outfitted with cosmetic enhancements such as bike horns, streamers, and spoke beads. Training wheels are sometimes used when learning to ride, but a dedicated balance bike teaches independent riding more effectively. Bicycle helmets can reduce injury in the event of a collision or accident, and a suitable helmet is legally required of riders in many jurisdictions. Helmets may be classified as an accessory or as an item of clothing. Bike trainers are used to enable cyclists to cycle while the bike remains stationary. They are frequently used to warm up before races or indoors when riding conditions are unfavorable. Standards A number of formal and industry standards exist for bicycle components to help make spare parts exchangeable and to maintain a minimum product safety. The International Organization for Standardization (ISO) has a special technical committee for cycles, TC149, that has the scope of "Standardization in the field of cycles, their components and accessories with particular reference to terminology, testing methods and requirements for performance and safety, and interchangeability". The European Committee for Standardization (CEN) also has a specific Technical Committee, TC333, that defines European standards for cycles. Their mandate states that EN cycle standards shall harmonize with ISO standards. Some CEN cycle standards were developed before ISO published their standards, leading to strong European influences in this area. European cycle standards tend to describe minimum safety requirements, while ISO standards have historically harmonized parts geometry. Maintenance and repair Like all devices with mechanical moving parts, bicycles require a certain amount of regular maintenance and replacement of worn parts. A bicycle is relatively simple compared with a car, so some cyclists choose to do at least part of the maintenance themselves. Some components are easy to handle using relatively simple tools, while other components may require specialist manufacturer-dependent tools. 
Many bicycle components are available at several different price/quality points; manufacturers generally try to keep all components on any particular bike at about the same quality level, though at the very cheap end of the market there may be some skimping on less obvious components (e.g. bottom bracket). There are several hundred assisted-service Community Bicycle Organizations worldwide. At a Community Bicycle Organization, laypeople bring in bicycles needing repair or maintenance; volunteers teach them how to do the required steps. Full service is available from bicycle mechanics at a local bike shop. In areas where it is available, some cyclists purchase roadside assistance from companies such as the Better World Club or the American Automobile Association. Maintenance The most basic maintenance item is keeping the tires correctly inflated; this can make a noticeable difference as to how the bike feels to ride. Bicycle tires usually have a marking on the sidewall indicating the pressure appropriate for that tire. Bicycles use much higher pressures than cars: car tires are normally in the range of , whereas bicycle tires are normally in the range of . Another basic maintenance item is regular lubrication of the chain and pivot points for derailleurs and brake components. Most of the bearings on a modern bike are sealed and grease-filled and require little or no attention; such bearings will usually last for or more. The crank bearings require periodic maintenance, which involves removing, cleaning and repacking with the correct grease. The chain and the brake blocks are the components which wear out most quickly, so these need to be checked from time to time, typically every or so. Most local bike shops will do such checks for free. Note that when a chain becomes badly worn it will also wear out the rear cogs/cassette and eventually the chain ring(s), so replacing a chain when only moderately worn will prolong the life of other components. Over the longer term, tires do wear out, after ; a rash of punctures is often the most visible sign of a worn tire. Repair Very few bicycle components can actually be repaired; replacement of the failing component is the normal practice. The most common roadside problem is a puncture of the tire's inner tube. A patch kit may be employed to fix the puncture or the tube can be replaced, though the latter solution comes at a greater cost and waste of material. Some brands of tires are much more puncture-resistant than others, often incorporating one or more layers of Kevlar; the downside of such tires is that they may be heavier and/or more difficult to fit and remove. Tools There are specialized bicycle tools for use both in the shop and at the roadside. Many cyclists carry tool kits. These may include a tire patch kit (which, in turn, may contain any combination of a hand pump or CO2 pump, tire levers, spare tubes, self-adhesive patches, or tube-patching material, an adhesive, a piece of sandpaper or a metal grater (for roughening the tube surface to be patched) and sometimes even a block of French chalk), wrenches, hex keys, screwdrivers, and a chain tool. Special, thin wrenches are often required for maintaining various screw-fastened parts, specifically, the frequently lubricated ball-bearing "cones". There are also cycling-specific multi-tools that combine many of these implements into a single compact device. More specialized bicycle components may require more complex tools, including proprietary tools specific for a given manufacturer. 
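Since chain wear is the item most likely to shorten the life of other drivetrain parts, a small sketch of the usual workshop check may be useful here. The 12-inch nominal length of 24 links and the 0.75 % / 1 % replacement thresholds are common rules of thumb assumed for illustration; they are not figures from the text.

```python
# Rule-of-thumb chain wear check (nominal length and thresholds are illustrative assumptions).
NOMINAL_LENGTH_IN = 12.0   # 24 links of a new chain measure 12 inches pin-to-pin

def chain_wear_percent(measured_length_in: float) -> float:
    """Percentage elongation of a 12-inch span of chain."""
    return (measured_length_in - NOMINAL_LENGTH_IN) / NOMINAL_LENGTH_IN * 100

measurement = 12.09   # example measurement in inches
wear = chain_wear_percent(measurement)
print(f"{wear:.2f}% elongation")
if wear >= 1.0:
    print("Chain badly worn: the cassette and chainrings may already be damaged")
elif wear >= 0.75:
    print("Replace the chain soon to protect the rest of the drivetrain")
else:
    print("Chain still within normal wear limits")
```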
Social and historical aspects The bicycle has had a considerable effect on human society, in both the cultural and industrial realms. In daily life Around the turn of the 20th century, bicycles reduced crowding in inner-city tenements by allowing workers to commute from more spacious dwellings in the suburbs. They also reduced dependence on horses. Bicycles allowed people to travel for leisure into the country, since bicycles were three times as energy efficient as walking and three to four times as fast. In built-up cities around the world, urban planning uses cycling infrastructure like bikeways to reduce traffic congestion and air pollution. A number of cities around the world have implemented schemes known as bicycle sharing systems or community bicycle programs. The first of these was the White Bicycle plan in Amsterdam in 1965. It was followed by yellow bicycles in La Rochelle and green bicycles in Cambridge. These initiatives complement public transport systems and offer an alternative to motorized traffic to help reduce congestion and pollution. In Europe, especially in the Netherlands and parts of Germany and Denmark, bicycle commuting is common. In Copenhagen, a cyclists' organization runs a Cycling Embassy that promotes biking for commuting and sightseeing. The United Kingdom has a tax break scheme (IR 176) that allows employees to buy a new bicycle tax free to use for commuting. In the Netherlands all train stations offer free bicycle parking, or a more secure parking place for a small fee, with the larger stations also offering bicycle repair shops. Cycling is so popular that the parking capacity may be exceeded, while in some places such as Delft the capacity is usually exceeded. In Trondheim in Norway, the Trampe bicycle lift has been developed to encourage cyclists by giving assistance on a steep hill. Buses in many cities have bicycle carriers mounted on the front. There are towns in some countries where bicycle culture has been an integral part of the landscape for generations, even without much official support. That is the case of Ílhavo, in Portugal. In cities where bicycles are not integrated into the public transportation system, commuters often use bicycles as elements of a mixed-mode commute, where the bike is used to travel to and from train stations or other forms of rapid transit. Some students who commute several miles drive a car from home to a campus parking lot, then ride a bicycle to class. Folding bicycles are useful in these scenarios, as they are less cumbersome when carried aboard. Los Angeles removed a small amount of seating on some trains to make more room for bicycles and wheel chairs. Some US companies, notably in the tech sector, are developing both innovative cycle designs and cycle-friendliness in the workplace. Foursquare, whose CEO Dennis Crowley "pedaled to pitch meetings ... [when he] was raising money from venture capitalists" on a two-wheeler, chose a new location for its New York headquarters "based on where biking would be easy". Parking in the office was also integral to HQ planning. Mitchell Moss, who runs the Rudin Center for Transportation Policy & Management at New York University, said in 2012: "Biking has become the mode of choice for the educated high tech worker". Bicycles offer an important mode of transport in many developing countries. Until recently, bicycles have been a staple of everyday life throughout Asian countries. 
They are the most frequently used method of transport for commuting to work, school, shopping, and life in general. In Europe, bicycles are commonly used. They also offer a degree of exercise to keep individuals healthy. Bicycles are also celebrated in the visual arts. An example of this is the Bicycle Film Festival, a film festival hosted all around the world. Poverty alleviation Female emancipation The safety bicycle gave women unprecedented mobility, contributing to their emancipation in Western nations. As bicycles became safer and cheaper, more women had access to the personal freedom that bicycles embodied, and so the bicycle came to symbolize the New Woman of the late 19th century, especially in Britain and the United States. The bicycle craze in the 1890s also led to a movement for so-called rational dress, which helped liberate women from corsets and ankle-length skirts and other restrictive garments, substituting the then-shocking bloomers. The bicycle was recognized by 19th-century feminists and suffragists as a "freedom machine" for women. American Susan B. Anthony said in a New York World interview on 2 February 1896: "I think it has done more to emancipate woman than any one thing in the world. I rejoice every time I see a woman ride by on a wheel. It gives her a feeling of self-reliance and independence the moment she takes her seat; and away she goes, the picture of untrammelled womanhood." In 1895 Frances Willard, the tightly laced president of the Woman's Christian Temperance Union, wrote A Wheel Within a Wheel: How I Learned to Ride the Bicycle, with Some Reflections by the Way, a 75-page illustrated memoir praising "Gladys", her bicycle, for its "gladdening effect" on her health and political optimism. Willard used a cycling metaphor to urge other suffragists to action. In 1985, Georgena Terry started the first women-specific bicycle company. Her designs featured frame geometry and wheel sizes chosen to better fit women, with shorter top tubes and more suitable reach. Economic implications Bicycle manufacturing proved to be a training ground for other industries and led to the development of advanced metalworking techniques, both for the frames themselves and for special components such as ball bearings, washers, and sprockets. These techniques later enabled skilled metalworkers and mechanics to develop the components used in early automobiles and aircraft. Wilbur and Orville Wright, a pair of businessmen, ran the Wright Cycle Company which designed, manufactured and sold their bicycles during the bike boom of the 1890s. They also served to teach the industrial models later adopted, including mechanization and mass production (later copied and adopted by Ford and General Motors), vertical integration (also later copied and adopted by Ford), aggressive advertising (as much as 10% of all advertising in U.S. periodicals in 1898 was by bicycle makers), lobbying for better roads (which had the side benefit of acting as advertising, and of improving sales by providing more places to ride), all first practiced by Pope. In addition, bicycle makers adopted the annual model change (later derided as planned obsolescence, and usually credited to General Motors), which proved very successful. Early bicycles were an example of conspicuous consumption, being adopted by the fashionable elites. In addition, by serving as a platform for accessories, which could ultimately cost more than the bicycle itself, it paved the way for the likes of the Barbie doll. 
Bicycles helped create, or enhance, new kinds of businesses, such as bicycle messengers, traveling seamstresses, riding academies, and racing rinks. Their board tracks were later adapted to early motorcycle and automobile racing. There were a variety of new inventions, such as spoke tighteners, and specialized lights, socks and shoes, and even cameras, such as the Eastman Company's Poco. Probably the best known and most widely used of these inventions, adopted well beyond cycling, is Charles Bennett's Bike Web, which came to be called the jock strap. They also presaged a move away from public transit that would explode with the introduction of the automobile. J. K. Starley's company became the Rover Cycle Company Ltd. in the late 1890s, and was then renamed the Rover Company when it started making cars. Morris Motors Limited (in Oxford) and Škoda also began in the bicycle business, as did the Wright brothers. Alistair Craig, whose company eventually emerged to become the engine manufacturers Ailsa Craig, also started out manufacturing bicycles, in Glasgow in March 1885. In general, U.S. and European cycle manufacturers used to assemble cycles from their own frames and components made by other companies, although very large companies (such as Raleigh) used to make almost every part of a bicycle (including bottom brackets, axles, etc.). In recent years, those bicycle makers have greatly changed their methods of production. Now, almost none of them produce their own frames. Many newer or smaller companies only design and market their products; the actual production is done by Asian companies. For example, some 60% of the world's bicycles are now being made in China. Despite this shift in production, as nations such as China and India become more wealthy, their own use of bicycles has declined due to the increasing affordability of cars and motorcycles. One of the major reasons for the proliferation of Chinese-made bicycles in foreign markets is the lower cost of labor in China. In line with the European financial crisis of that time, in 2011 the number of bicycle sales in Italy (1.75 million) passed the number of new car sales. Environmental impact One of the profound economic implications of bicycle use is that it liberates the user from motor fuel consumption. (Ballantine, 1972) The bicycle is an inexpensive, fast, healthy and environmentally friendly mode of transport. Ivan Illich stated that bicycle use extended the usable physical environment for people, while alternatives such as cars and motorways degraded and confined people's environment and mobility. Currently, two billion bicycles are in use around the world. Children, students, professionals, laborers, civil servants and seniors are pedaling around their communities. They all experience the freedom and the natural opportunity for exercise that the bicycle easily provides. The bicycle also has the lowest carbon intensity of any mode of travel. Manufacturing The global bicycle market was worth $61 billion in 2011. More recently, some 130 million bicycles have been sold every year globally, with about 66% of them made in China. Legal requirements Early in its development, as with automobiles, there were restrictions on the operation of bicycles. Along with advertising, and to gain free publicity, Albert A. Pope litigated on behalf of cyclists. The 1968 Vienna Convention on Road Traffic of the United Nations considers a bicycle to be a vehicle, and a person controlling a bicycle (whether actually riding or not) is considered an operator or driver.
The traffic codes of many countries reflect these definitions and demand that a bicycle satisfy certain legal requirements before it can be used on public roads. In many jurisdictions, it is an offense to use a bicycle that is not in a roadworthy condition. In some countries, bicycles must have functioning front and rear lights when ridden after dark. Some countries require child and/or adult cyclists to wear helmets, as this may protect riders from head trauma. Countries which require adult cyclists to wear helmets include Spain, New Zealand and Australia. Mandatory helmet wearing is one of the most controversial topics in the cycling world, with proponents arguing that it reduces head injuries and thus is an acceptable requirement, while opponents argue that by making cycling seem more dangerous and cumbersome, it reduces cyclist numbers on the streets, creating an overall negative health effect (fewer people cycling for their own health, and the remaining cyclists being more exposed through a reversed safety in numbers effect). Theft Bicycles are popular targets for theft, due to their value and ease of resale. The number of bicycles stolen annually is difficult to quantify as a large number of crimes are not reported. In a Montreal survey published in the International Journal of Sustainable Transportation, around 50% of participating active cyclists reported having had a bicycle stolen at some point in their lives. Most bicycles have serial numbers that can be recorded to verify identity in case of theft.
Technology
Transportation
null
3982
https://en.wikipedia.org/wiki/Bicarbonate
Bicarbonate
In inorganic chemistry, bicarbonate (IUPAC-recommended nomenclature: hydrogencarbonate) is an intermediate form in the deprotonation of carbonic acid. It is a polyatomic anion with the chemical formula HCO3−. Bicarbonate serves a crucial biochemical role in the physiological pH buffering system. The term "bicarbonate" was coined in 1814 by the English chemist William Hyde Wollaston. The name lives on as a trivial name. Chemical properties The bicarbonate ion (hydrogencarbonate ion) is an anion with the empirical formula HCO3− and a molecular mass of 61.01 daltons; it consists of one central carbon atom surrounded by three oxygen atoms in a trigonal planar arrangement, with a hydrogen atom attached to one of the oxygens. It is isoelectronic with nitric acid, HNO3. The bicarbonate ion carries a negative one formal charge and is an amphiprotic species which has both acidic and basic properties. It is both the conjugate base of carbonic acid, H2CO3, and the conjugate acid of the carbonate ion, CO32−, as shown by these equilibrium reactions: CO32− + 2 H2O ⇌ HCO3− + H2O + OH− ⇌ H2CO3 + 2 OH−, and H2CO3 + 2 H2O ⇌ HCO3− + H3O+ + H2O ⇌ CO32− + 2 H3O+. A bicarbonate salt forms when a positively charged ion attaches to the negatively charged oxygen atoms of the ion, forming an ionic compound. Many bicarbonates are soluble in water at standard temperature and pressure; in particular, sodium bicarbonate contributes to total dissolved solids, a common parameter for assessing water quality. Physiological role Bicarbonate (HCO3−) is a vital component of the pH buffering system of the human body (maintaining acid–base homeostasis). 70%–75% of CO2 in the body is converted into carbonic acid (H2CO3), which is the conjugate acid of HCO3− and can quickly turn into it. With carbonic acid as the central intermediate species, bicarbonate – in conjunction with water, hydrogen ions, and carbon dioxide – forms this buffering system, which is maintained at the volatile equilibrium required to provide prompt resistance to pH changes in both the acidic and basic directions. This is especially important for protecting tissues of the central nervous system, where pH changes too far outside of the normal range in either direction could prove disastrous (see acidosis or alkalosis). Recently it has also been demonstrated that cellular bicarbonate metabolism can be regulated by mTORC1 signaling. Additionally, bicarbonate plays a key role in the digestive system. It raises the internal pH of the stomach, after highly acidic digestive juices have finished their digestion of food. Bicarbonate also acts to regulate pH in the small intestine. It is released from the pancreas in response to the hormone secretin to neutralize the acidic chyme entering the duodenum from the stomach. Bicarbonate in the environment Bicarbonate is the dominant form of dissolved inorganic carbon in sea water, and in most fresh waters. As such it is an important sink in the carbon cycle. Some plants like Chara utilize carbonate and produce calcium carbonate (CaCO3) as a result of biological metabolism. In freshwater ecology, strong photosynthetic activity by freshwater plants in daylight releases gaseous oxygen into the water and at the same time produces bicarbonate ions. These shift the pH upward until in certain circumstances the degree of alkalinity can become toxic to some organisms or can make other chemical constituents such as ammonia toxic. In darkness, when no photosynthesis occurs, respiration processes release carbon dioxide, and no new bicarbonate ions are produced, resulting in a rapid fall in pH.
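As a worked illustration of the carbon dioxide–bicarbonate buffering relationship described above, the Henderson–Hasselbalch equation can be evaluated with commonly quoted plasma values (the pKa and the concentrations used here are typical textbook figures, not taken from this article):

\[ \mathrm{pH} = \mathrm{p}K_a + \log_{10}\frac{[\mathrm{HCO_3^-}]}{[\mathrm{CO_2}]_{\mathrm{dissolved}}} \approx 6.1 + \log_{10}\frac{24\ \mathrm{mmol/L}}{1.2\ \mathrm{mmol/L}} = 6.1 + \log_{10}20 \approx 7.4, \]

which is the normal pH of arterial blood; a rise in dissolved CO2, as in the dark-water example above, shifts the ratio and lowers the pH.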
The flow of bicarbonate ions from rocks weathered by the carbonic acid in rainwater is an important part of the carbon cycle. Other uses The most common salt of the bicarbonate ion is sodium bicarbonate, NaHCO3, which is commonly known as baking soda. When heated or exposed to an acid such as acetic acid (vinegar), sodium bicarbonate releases carbon dioxide. This is used as a leavening agent in baking. Ammonium bicarbonate is used in the manufacturing of some cookies, crackers, and biscuits. Diagnostics In diagnostic medicine, the blood value of bicarbonate is one of several indicators of the state of acid–base physiology in the body. It is measured, along with chloride, potassium, and sodium, to assess electrolyte levels in an electrolyte panel test (which has Current Procedural Terminology, CPT, code 80051). The parameter standard bicarbonate concentration (SBCe) is the bicarbonate concentration in the blood at a PaCO2 of , full oxygen saturation and 36 °C. Bicarbonate compounds Sodium bicarbonate Potassium bicarbonate Caesium bicarbonate Magnesium bicarbonate Calcium bicarbonate Ammonium bicarbonate Carbonic acid
Physical sciences
Carbonic oxyanions
Chemistry
3989
https://en.wikipedia.org/wiki/Banach%20space
Banach space
In mathematics, more specifically in functional analysis, a Banach space (pronounced ) is a complete normed vector space. Thus, a Banach space is a vector space with a metric that allows the computation of vector length and distance between vectors and is complete in the sense that a Cauchy sequence of vectors always converges to a well-defined limit that is within the space. Banach spaces are named after the Polish mathematician Stefan Banach, who introduced this concept and studied it systematically in 1920–1922 along with Hans Hahn and Eduard Helly. Maurice René Fréchet was the first to use the term "Banach space" and Banach in turn then coined the term "Fréchet space". Banach spaces originally grew out of the study of function spaces by Hilbert, Fréchet, and Riesz earlier in the century. Banach spaces play a central role in functional analysis. In other areas of analysis, the spaces under study are often Banach spaces. Definition A Banach space is a complete normed space A normed space is a pair consisting of a vector space over a scalar field (where is commonly or ) together with a distinguished norm Like all norms, this norm induces a translation invariant distance function, called the canonical or (norm) induced metric, defined for all vectors by This makes into a metric space A sequence is called or or if for every real there exists some index such that whenever and are greater than The normed space is called a and the canonical metric is called a if is a , which by definition means for every Cauchy sequence in there exists some such that where because this sequence's convergence to can equivalently be expressed as: The norm of a normed space is called a if is a Banach space. L-semi-inner product For any normed space there exists an L-semi-inner product on such that for all in general, there may be infinitely many L-semi-inner products that satisfy this condition. L-semi-inner products are a generalization of inner products, which are what fundamentally distinguish Hilbert spaces from all other Banach spaces. This shows that all normed spaces (and hence all Banach spaces) can be considered as being generalizations of (pre-)Hilbert spaces. Characterization in terms of series The vector space structure allows one to relate the behavior of Cauchy sequences to that of converging series of vectors. A normed space is a Banach space if and only if each absolutely convergent series in converges to a value that lies within Topology The canonical metric of a normed space induces the usual metric topology on which is referred to as the canonical or norm induced topology. Every normed space is automatically assumed to carry this Hausdorff topology, unless indicated otherwise. With this topology, every Banach space is a Baire space, although there exist normed spaces that are Baire but not Banach. The norm is always a continuous function with respect to the topology that it induces. The open and closed balls of radius centered at a point are, respectively, the sets Any such ball is a convex and bounded subset of but a compact ball / neighborhood exists if and only if is a finite-dimensional vector space. In particular, no infinite–dimensional normed space can be locally compact or have the Heine–Borel property. 
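Because many of the inline formulas in the Definition section above were lost in extraction, the standard statements can be restated compactly in conventional notation (a reconstruction, not verbatim from the source). For a normed space $(X, \|\cdot\|)$:

\[ d(x, y) = \|x - y\|, \qquad (x_n)\ \text{is Cauchy} \iff \forall\,\varepsilon > 0\ \exists\,N\ \forall\,m, n \ge N:\ \|x_m - x_n\| < \varepsilon, \]
\[ (X, \|\cdot\|)\ \text{is a Banach space} \iff \text{every Cauchy sequence has a limit}\ x \in X\ \text{with}\ \lim_{n \to \infty}\|x_n - x\| = 0, \]

and the series characterization reads: $X$ is complete if and only if $\sum_{n} \|x_n\| < \infty$ implies that $\sum_{n} x_n$ converges to some element of $X$.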
If is a vector and is a scalar then Using shows that this norm-induced topology is translation invariant, which means that for any and the subset is open (respectively, closed) in if and only if this is true of its translation Consequently, the norm induced topology is completely determined by any neighbourhood basis at the origin. Some common neighborhood bases at the origin include: where is a sequence in of positive real numbers that converges to in (such as or for instance). So for example, every open subset of can be written as a union indexed by some subset where every may be picked from the aforementioned sequence (the open balls can be replaced with closed balls, although then the indexing set and radii may also need to be replaced). Additionally, can always be chosen to be countable if is a , which by definition means that contains some countable dense subset. Homeomorphism classes of separable Banach spaces All finite–dimensional normed spaces are separable Banach spaces and any two Banach spaces of the same finite dimension are linearly homeomorphic. Every separable infinite–dimensional Hilbert space is linearly isometrically isomorphic to the separable Hilbert sequence space with its usual norm The Anderson–Kadec theorem states that every infinite–dimensional separable Fréchet space is homeomorphic to the product space of countably many copies of (this homeomorphism need not be a linear map). Thus all infinite–dimensional separable Fréchet spaces are homeomorphic to each other (or said differently, their topology is unique up to a homeomorphism). Since every Banach space is a Fréchet space, this is also true of all infinite–dimensional separable Banach spaces, including In fact, is even homeomorphic to its own unit which stands in sharp contrast to finite–dimensional spaces (the Euclidean plane is not homeomorphic to the unit circle, for instance). This pattern in homeomorphism classes extends to generalizations of metrizable (locally Euclidean) topological manifolds known as , which are metric spaces that are around every point, locally homeomorphic to some open subset of a given Banach space (metric Hilbert manifolds and metric Fréchet manifolds are defined similarly). For example, every open subset of a Banach space is canonically a metric Banach manifold modeled on since the inclusion map is an open local homeomorphism. Using Hilbert space microbundles, David Henderson showed in 1969 that every metric manifold modeled on a separable infinite–dimensional Banach (or Fréchet) space can be topologically embedded as an subset of and, consequently, also admits a unique smooth structure making it into a Hilbert manifold. Compact and convex subsets There is a compact subset of whose convex hull is closed and thus also compact (see this footnote for an example). However, like in all Banach spaces, the convex hull of this (and every other) compact subset will be compact. But if a normed space is not complete then it is in general guaranteed that will be compact whenever is; an example can even be found in a (non-complete) pre-Hilbert vector subspace of As a topological vector space This norm-induced topology also makes into what is known as a topological vector space (TVS), which by definition is a vector space endowed with a topology making the operations of addition and scalar multiplication continuous. 
It is emphasized that the TVS is a vector space together with a certain type of topology; that is to say, when considered as a TVS, it is not associated with any particular norm or metric (both of which are "forgotten"). This Hausdorff TVS is even locally convex because the set of all open balls centered at the origin forms a neighbourhood basis at the origin consisting of convex balanced open sets. This TVS is also normable, which by definition refers to any TVS whose topology is induced by some (possibly unknown) norm. Normable TVSs are characterized by being Hausdorff and having a bounded convex neighborhood of the origin. All Banach spaces are barrelled spaces, which means that every barrel is a neighborhood of the origin (all closed balls centered at the origin are barrels, for example) and guarantees that the Banach–Steinhaus theorem holds. Comparison of complete metrizable vector topologies The open mapping theorem implies that if and are topologies on that make both and into complete metrizable TVS (for example, Banach or Fréchet spaces) and if one topology is finer or coarser than the other then they must be equal (that is, if or then ). So for example, if and are Banach spaces with topologies and and if one of these spaces has some open ball that is also an open subset of the other space (or equivalently, if one of or is continuous) then their topologies are identical and their norms are equivalent. Completeness Complete norms and equivalent norms Two norms, and on a vector space are said to be equivalent if they induce the same topology; this happens if and only if there exist positive real numbers such that for all If and are two equivalent norms on a vector space then is a Banach space if and only if is a Banach space. See this footnote for an example of a continuous norm on a Banach space that is not equivalent to that Banach space's given norm. All norms on a finite-dimensional vector space are equivalent and every finite-dimensional normed space is a Banach space. Complete norms vs complete metrics A metric on a vector space is induced by a norm on if and only if is translation invariant and absolutely homogeneous, which means that for all scalars and all in which case the function defines a norm on and the canonical metric induced by is equal to Suppose that is a normed space and that is the norm topology induced on Suppose that is a metric on such that the topology that induces on is equal to If is translation invariant then is a Banach space if and only if is a complete metric space. If is not translation invariant, then it may be possible for to be a Banach space but for to not be a complete metric space (see this footnote for an example). In contrast, a theorem of Klee, which also applies to all metrizable topological vector spaces, implies that if there exists a complete metric on that induces the norm topology on then is a Banach space. A Fréchet space is a locally convex topological vector space whose topology is induced by some translation-invariant complete metric. Every Banach space is a Fréchet space but not conversely; indeed, there even exist Fréchet spaces on which no norm is a continuous function (such as the space of real sequences with the product topology). However, the topology of every Fréchet space is induced by some countable family of real-valued (necessarily continuous) maps called seminorms, which are generalizations of norms.
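The two-sided inequality behind the norm equivalence described in the "Complete norms and equivalent norms" passage above can be written explicitly (a reconstruction of the standard statement whose symbols were lost, not verbatim from the source): two norms $\|\cdot\|_1$ and $\|\cdot\|_2$ on a vector space $X$ are equivalent if and only if there exist constants $c, C > 0$ with

\[ c\,\|x\|_1 \le \|x\|_2 \le C\,\|x\|_1 \qquad \text{for all } x \in X. \]

For example, on $\mathbb{R}^n$ the maximum norm and the Euclidean norm satisfy $\|x\|_\infty \le \|x\|_2 \le \sqrt{n}\,\|x\|_\infty$, so they are equivalent and induce the same topology.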
It is even possible for a Fréchet space to have a topology that is induced by a countable family of norms (such norms would necessarily be continuous) but to not be a Banach/normable space because its topology can not be defined by any norm. An example of such a space is the Fréchet space whose definition can be found in the article on spaces of test functions and distributions. Complete norms vs complete topological vector spaces There is another notion of completeness besides metric completeness and that is the notion of a complete topological vector space (TVS) or TVS-completeness, which uses the theory of uniform spaces. Specifically, the notion of TVS-completeness uses a unique translation-invariant uniformity, called the canonical uniformity, that depends on vector subtraction and the topology that the vector space is endowed with, and so in particular, this notion of TVS completeness is independent of whatever norm induced the topology (and even applies to TVSs that are not even metrizable). Every Banach space is a complete TVS. Moreover, a normed space is a Banach space (that is, its norm-induced metric is complete) if and only if it is complete as a topological vector space. If is a metrizable topological vector space (such as any topology induced by a norm, for example), then is a complete TVS if and only if it is a sequentially complete TVS, meaning that it is enough to check that every Cauchy sequence in converges in to some point of (that is, there is no need to consider the more general notion of arbitrary Cauchy nets). If is a topological vector space whose topology is induced by a (possibly unknown) norm (such spaces are called normable), then is a complete topological vector space if and only if may be assigned a norm that induces on the topology and also makes into a Banach space. A Hausdorff locally convex topological vector space is normable if and only if its strong dual space is normable, in which case is a Banach space ( denotes the strong dual space of whose topology is a generalization of the dual norm-induced topology on the continuous dual space ; see this footnote for more details). If is a metrizable locally convex TVS, then is normable if and only if is a Fréchet–Urysohn space. This shows that in the category of locally convex TVSs, Banach spaces are exactly those complete spaces that are both metrizable and have metrizable strong dual spaces. Completions Every normed space can be isometrically embedded onto a dense vector subspace of a Banach space, where this Banach space is called a completion of the normed space. This Hausdorff completion is unique up to isometric isomorphism. More precisely, for every normed space there exist a Banach space and a mapping such that is an isometric mapping and is dense in If is another Banach space such that there is an isometric isomorphism from onto a dense subset of then is isometrically isomorphic to This Banach space is the Hausdorff completion of the normed space The underlying metric space for is the same as the metric completion of with the vector space operations extended from to The completion of is sometimes denoted by General theory Linear operators, isomorphisms If and are normed spaces over the same ground field the set of all continuous -linear maps is denoted by In infinite-dimensional spaces, not all linear maps are continuous.
A linear mapping from a normed space to another normed space is continuous if and only if it is bounded on the closed unit ball of Thus, the vector space can be given the operator norm For a Banach space, the space is a Banach space with respect to this norm. In categorical contexts, it is sometimes convenient to restrict the function space between two Banach spaces to only the short maps; in that case the space reappears as a natural bifunctor. If is a Banach space, the space forms a unital Banach algebra; the multiplication operation is given by the composition of linear maps. If and are normed spaces, they are isomorphic normed spaces if there exists a linear bijection such that and its inverse are continuous. If one of the two spaces or is complete (or reflexive, separable, etc.) then so is the other space. Two normed spaces and are isometrically isomorphic if in addition, is an isometry, that is, for every in The Banach–Mazur distance between two isomorphic but not isometric spaces and gives a measure of how much the two spaces and differ. Continuous and bounded linear functions and seminorms Every continuous linear operator is a bounded linear operator and if dealing only with normed spaces then the converse is also true. That is, a linear operator between two normed spaces is bounded if and only if it is a continuous function. So in particular, because the scalar field (which is or ) is a normed space, a linear functional on a normed space is a bounded linear functional if and only if it is a continuous linear functional. This allows for continuity-related results (like those below) to be applied to Banach spaces. Although boundedness is the same as continuity for linear maps between normed spaces, the term "bounded" is more commonly used when dealing primarily with Banach spaces. If is a subadditive function (such as a norm, a sublinear function, or real linear functional), then is continuous at the origin if and only if is uniformly continuous on all of ; and if in addition then is continuous if and only if its absolute value is continuous, which happens if and only if is an open subset of And very importantly for applying the Hahn–Banach theorem, a linear functional is continuous if and only if this is true of its real part and moreover, and the real part completely determines which is why the Hahn–Banach theorem is often stated only for real linear functionals. Also, a linear functional on is continuous if and only if the seminorm is continuous, which happens if and only if there exists a continuous seminorm such that ; this last statement involving the linear functional and seminorm is encountered in many versions of the Hahn–Banach theorem. Basic notions The Cartesian product of two normed spaces is not canonically equipped with a norm. However, several equivalent norms are commonly used, such as which correspond (respectively) to the coproduct and product in the category of Banach spaces and short maps (discussed above). For finite (co)products, these norms give rise to isomorphic normed spaces, and the product (or the direct sum ) is complete if and only if the two factors are complete. If is a closed linear subspace of a normed space there is a natural norm on the quotient space The quotient is a Banach space when is complete. The quotient map from onto sending to its class is linear, onto and has norm except when in which case the quotient is the null space. 
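Two of the norms introduced in this general-theory discussion can be written explicitly (standard definitions, reconstructed here rather than quoted from the source): the operator norm on the bounded linear maps between normed spaces, and the quotient norm on the quotient of a normed space $X$ by a closed subspace $M$:

\[ \|T\| = \sup_{\|x\| \le 1} \|T x\|, \qquad \|x + M\| = \inf_{m \in M} \|x - m\|, \]

so that $\|T x\| \le \|T\|\,\|x\|$ for every $x$, a linear map is continuous exactly when this supremum is finite, and the quotient map $x \mapsto x + M$ has norm 1 whenever $M$ is a proper subspace.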
The closed linear subspace of is said to be a complemented subspace of if is the range of a surjective bounded linear projection In this case, the space is isomorphic to the direct sum of and the kernel of the projection Suppose that and are Banach spaces and that There exists a canonical factorization of as where the first map is the quotient map, and the second map sends every class in the quotient to the image in This is well defined because all elements in the same class have the same image. The mapping is a linear bijection from onto the range whose inverse need not be bounded. Classical spaces Basic examples of Banach spaces include: the Lp spaces and their special cases, the sequence spaces that consist of scalar sequences indexed by natural numbers ; among them, the space of absolutely summable sequences and the space of square summable sequences; the space of sequences tending to zero and the space of bounded sequences; the space of continuous scalar functions on a compact Hausdorff space equipped with the max norm, According to the Banach–Mazur theorem, every Banach space is isometrically isomorphic to a subspace of some For every separable Banach space there is a closed subspace of such that Any Hilbert space serves as an example of a Banach space. A Hilbert space on is complete for a norm of the form where is the inner product, linear in its first argument that satisfies the following: For example, the space is a Hilbert space. The Hardy spaces, the Sobolev spaces are examples of Banach spaces that are related to spaces and have additional structure. They are important in different branches of analysis, Harmonic analysis and Partial differential equations among others. Banach algebras A Banach algebra is a Banach space over or together with a structure of algebra over , such that the product map is continuous. An equivalent norm on can be found so that for all Examples The Banach space with the pointwise product, is a Banach algebra. The disk algebra consists of functions holomorphic in the open unit disk and continuous on its closure: Equipped with the max norm on the disk algebra is a closed subalgebra of The Wiener algebra is the algebra of functions on the unit circle with absolutely convergent Fourier series. Via the map associating a function on to the sequence of its Fourier coefficients, this algebra is isomorphic to the Banach algebra where the product is the convolution of sequences. For every Banach space the space of bounded linear operators on with the composition of maps as product, is a Banach algebra. A C*-algebra is a complex Banach algebra with an antilinear involution such that The space of bounded linear operators on a Hilbert space is a fundamental example of C*-algebra. The Gelfand–Naimark theorem states that every C*-algebra is isometrically isomorphic to a C*-subalgebra of some The space of complex continuous functions on a compact Hausdorff space is an example of commutative C*-algebra, where the involution associates to every function its complex conjugate Dual space If is a normed space and the underlying field (either the real or the complex numbers), the continuous dual space is the space of continuous linear maps from into or continuous linear functionals. The notation for the continuous dual is in this article. Since is a Banach space (using the absolute value as norm), the dual is a Banach space, for every normed space The Dixmier–Ng theorem characterizes the dual spaces of Banach spaces. 
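For concreteness, the norms of the classical sequence and function spaces listed above are the following (standard formulas, reconstructed here, not verbatim from the source):

\[ \|x\|_p = \Big(\sum_{n=1}^{\infty} |x_n|^p\Big)^{1/p} \ \ (1 \le p < \infty), \qquad \|x\|_\infty = \sup_{n} |x_n|, \qquad \|f\|_{C(K)} = \max_{t \in K} |f(t)|, \]

with the space of sequences tending to zero carrying the supremum norm as a closed subspace of the bounded sequences, and a Hilbert space norm arising from its inner product via $\|x\| = \sqrt{\langle x, x\rangle}$.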
The main tool for proving the existence of continuous linear functionals is the Hahn–Banach theorem. In particular, every continuous linear functional on a subspace of a normed space can be continuously extended to the whole space, without increasing the norm of the functional. An important special case is the following: for every vector in a normed space there exists a continuous linear functional on such that When is not equal to the vector, the functional must have norm one, and is called a norming functional for The Hahn–Banach separation theorem states that two disjoint non-empty convex sets in a real Banach space, one of them open, can be separated by a closed affine hyperplane. The open convex set lies strictly on one side of the hyperplane, the second convex set lies on the other side but may touch the hyperplane. A subset in a Banach space is total if the linear span of is dense in The subset is total in if and only if the only continuous linear functional that vanishes on is the functional: this equivalence follows from the Hahn–Banach theorem. If is the direct sum of two closed linear subspaces and then the dual of is isomorphic to the direct sum of the duals of and If is a closed linear subspace in one can associate the in the dual, The orthogonal is a closed linear subspace of the dual. The dual of is isometrically isomorphic to The dual of is isometrically isomorphic to The dual of a separable Banach space need not be separable, but: When is separable, the above criterion for totality can be used for proving the existence of a countable total subset in Weak topologies The weak topology on a Banach space is the coarsest topology on for which all elements in the continuous dual space are continuous. The norm topology is therefore finer than the weak topology. It follows from the Hahn–Banach separation theorem that the weak topology is Hausdorff, and that a norm-closed convex subset of a Banach space is also weakly closed. A norm-continuous linear map between two Banach spaces and is also weakly continuous, that is, continuous from the weak topology of to that of If is infinite-dimensional, there exist linear maps which are not continuous. The space of all linear maps from to the underlying field (this space is called the algebraic dual space, to distinguish it from also induces a topology on which is finer than the weak topology, and much less used in functional analysis. On a dual space there is a topology weaker than the weak topology of called weak* topology. It is the coarsest topology on for which all evaluation maps where ranges over are continuous. Its importance comes from the Banach–Alaoglu theorem. The Banach–Alaoglu theorem can be proved using Tychonoff's theorem about infinite products of compact Hausdorff spaces. When is separable, the unit ball of the dual is a metrizable compact in the weak* topology. Examples of dual spaces The dual of is isometrically isomorphic to : for every bounded linear functional on there is a unique element such that The dual of is isometrically isomorphic to . The dual of Lebesgue space is isometrically isomorphic to when and For every vector in a Hilbert space the mapping defines a continuous linear functional on The Riesz representation theorem states that every continuous linear functional on is of the form for a uniquely defined vector in The mapping is an antilinear isometric bijection from onto its dual When the scalars are real, this map is an isometric isomorphism. 
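The duality statements in the examples above can be written with the missing symbols restored (a reconstruction of the standard facts using the usual pairings, not verbatim from the source):

\[ (c_0)^* \cong \ell^1, \qquad (\ell^1)^* \cong \ell^\infty, \qquad (L^p)^* \cong L^q \ \ \text{for } 1 < p < \infty,\ \ \tfrac{1}{p} + \tfrac{1}{q} = 1, \]

where in each case the functional associated with an element $y$ acts by $x \mapsto \sum_n x_n y_n$ (or $f \mapsto \int f g\, d\mu$ in the Lebesgue-space case), and the isomorphisms are isometric.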
When is a compact Hausdorff topological space, the dual of is the space of Radon measures in the sense of Bourbaki. The subset of consisting of non-negative measures of mass 1 (probability measures) is a convex w*-closed subset of the unit ball of The extreme points of are the Dirac measures on The set of Dirac measures on equipped with the w*-topology, is homeomorphic to The result has been extended by Amir and Cambern to the case when the multiplicative Banach–Mazur distance between and is The theorem is no longer true when the distance is In the commutative Banach algebra the maximal ideals are precisely kernels of Dirac measures on More generally, by the Gelfand–Mazur theorem, the maximal ideals of a unital commutative Banach algebra can be identified with its characters—not merely as sets but as topological spaces: the former with the hull-kernel topology and the latter with the w*-topology. In this identification, the maximal ideal space can be viewed as a w*-compact subset of the unit ball in the dual Not every unital commutative Banach algebra is of the form for some compact Hausdorff space However, this statement holds if one places in the smaller category of commutative C*-algebras. Gelfand's representation theorem for commutative C*-algebras states that every commutative unital C*-algebra is isometrically isomorphic to a space. The Hausdorff compact space here is again the maximal ideal space, also called the spectrum of in the C*-algebra context. Bidual If is a normed space, the (continuous) dual of the dual is called , or of For every normed space there is a natural map, This defines as a continuous linear functional on that is, an element of The map is a linear map from to As a consequence of the existence of a norming functional for every this map is isometric, thus injective. For example, the dual of is identified with and the dual of is identified with the space of bounded scalar sequences. Under these identifications, is the inclusion map from to It is indeed isometric, but not onto. If is surjective, then the normed space is called reflexive (see below). Being the dual of a normed space, the bidual is complete, therefore, every reflexive normed space is a Banach space. Using the isometric embedding it is customary to consider a normed space as a subset of its bidual. When is a Banach space, it is viewed as a closed linear subspace of If is not reflexive, the unit ball of is a proper subset of the unit ball of The Goldstine theorem states that the unit ball of a normed space is weakly*-dense in the unit ball of the bidual. In other words, for every in the bidual, there exists a net in so that The net may be replaced by a weakly*-convergent sequence when the dual is separable. On the other hand, elements of the bidual of that are not in cannot be weak*-limit of in since is weakly sequentially complete. Banach's theorems Here are the main general results about Banach spaces that go back to the time of Banach's book () and are related to the Baire category theorem. According to this theorem, a complete metric space (such as a Banach space, a Fréchet space or an F-space) cannot be equal to a union of countably many closed subsets with empty interiors. Therefore, a Banach space cannot be the union of countably many closed subspaces, unless it is already equal to one of them; a Banach space with a countable Hamel basis is finite-dimensional. The Banach–Steinhaus theorem is not limited to Banach spaces. 
It can be extended for example to the case where is a Fréchet space, provided the conclusion is modified as follows: under the same hypothesis, there exists a neighborhood of in such that all in are uniformly bounded on This result is a direct consequence of the preceding Banach isomorphism theorem and of the canonical factorization of bounded linear maps. This is another consequence of Banach's isomorphism theorem, applied to the continuous bijection from onto sending to the sum Reflexivity The normed space is called reflexive when the natural map is surjective. Reflexive normed spaces are Banach spaces. This is a consequence of the Hahn–Banach theorem. Further, by the open mapping theorem, if there is a bounded linear operator from the Banach space onto the Banach space then is reflexive. Indeed, if the dual of a Banach space is separable, then is separable. If is reflexive and separable, then the dual of is separable, so is separable. Hilbert spaces are reflexive. The spaces are reflexive when More generally, uniformly convex spaces are reflexive, by the Milman–Pettis theorem. The spaces are not reflexive. In these examples of non-reflexive spaces the bidual is "much larger" than Namely, under the natural isometric embedding of into given by the Hahn–Banach theorem, the quotient is infinite-dimensional, and even nonseparable. However, Robert C. James has constructed an example of a non-reflexive space, usually called "the James space" and denoted by such that the quotient is one-dimensional. Furthermore, this space is isometrically isomorphic to its bidual. When is reflexive, it follows that all closed and bounded convex subsets of are weakly compact. In a Hilbert space the weak compactness of the unit ball is very often used in the following way: every bounded sequence in has weakly convergent subsequences. Weak compactness of the unit ball provides a tool for finding solutions in reflexive spaces to certain optimization problems. For example, every convex continuous function on the unit ball of a reflexive space attains its minimum at some point in As a special case of the preceding result, when is a reflexive space over every continuous linear functional in attains its maximum on the unit ball of The following theorem of Robert C. James provides a converse statement. The theorem can be extended to give a characterization of weakly compact convex sets. On every non-reflexive Banach space there exist continuous linear functionals that are not norm-attaining. However, the Bishop–Phelps theorem states that norm-attaining functionals are norm dense in the dual of Weak convergences of sequences A sequence in a Banach space is weakly convergent to a vector if converges to for every continuous linear functional in the dual The sequence is a weakly Cauchy sequence if converges to a scalar limit for every in A sequence in the dual is weakly* convergent to a functional if converges to for every in Weakly Cauchy sequences, weakly convergent and weakly* convergent sequences are norm bounded, as a consequence of the Banach–Steinhaus theorem. When the sequence in is a weakly Cauchy sequence, the limit above defines a bounded linear functional on the dual that is, an element of the bidual of and is the limit of in the weak*-topology of the bidual. The Banach space is weakly sequentially complete if every weakly Cauchy sequence is weakly convergent in It follows from the preceding discussion that reflexive spaces are weakly sequentially complete. 
An orthonormal sequence in a Hilbert space is a simple example of a weakly convergent sequence, with limit equal to the vector. The unit vector basis of for or of is another example of a weakly null sequence, that is, a sequence that converges weakly to For every weakly null sequence in a Banach space, there exists a sequence of convex combinations of vectors from the given sequence that is norm-converging to The unit vector basis of is not weakly Cauchy. Weakly Cauchy sequences in are weakly convergent, since -spaces are weakly sequentially complete. Actually, weakly convergent sequences in are norm convergent. This means that satisfies Schur's property. Results involving the basis Weakly Cauchy sequences and the basis are the opposite cases of the dichotomy established in the following deep result of H. P. Rosenthal. A complement to this result is due to Odell and Rosenthal (1975). By the Goldstine theorem, every element of the unit ball of is weak*-limit of a net in the unit ball of When does not contain every element of is weak*-limit of a in the unit ball of When the Banach space is separable, the unit ball of the dual equipped with the weak*-topology, is a metrizable compact space and every element in the bidual defines a bounded function on : This function is continuous for the compact topology of if and only if is actually in considered as subset of Assume in addition for the rest of the paragraph that does not contain By the preceding result of Odell and Rosenthal, the function is the pointwise limit on of a sequence of continuous functions on it is therefore a first Baire class function on The unit ball of the bidual is a pointwise compact subset of the first Baire class on Sequences, weak and weak* compactness When is separable, the unit ball of the dual is weak*-compact by the Banach–Alaoglu theorem and metrizable for the weak* topology, hence every bounded sequence in the dual has weakly* convergent subsequences. This applies to separable reflexive spaces, but more is true in this case, as stated below. The weak topology of a Banach space is metrizable if and only if is finite-dimensional. If the dual is separable, the weak topology of the unit ball of is metrizable. This applies in particular to separable reflexive Banach spaces. Although the weak topology of the unit ball is not metrizable in general, one can characterize weak compactness using sequences. A Banach space is reflexive if and only if each bounded sequence in has a weakly convergent subsequence. A weakly compact subset in is norm-compact. Indeed, every sequence in has weakly convergent subsequences by Eberlein–Šmulian, that are norm convergent by the Schur property of Type and cotype A way to classify Banach spaces is through the probabilistic notion of type and cotype, these two measure how far a Banach space is from a Hilbert space. Schauder bases A Schauder basis in a Banach space is a sequence of vectors in with the property that for every vector there exist defined scalars depending on such that Banach spaces with a Schauder basis are necessarily separable, because the countable set of finite linear combinations with rational coefficients (say) is dense. It follows from the Banach–Steinhaus theorem that the linear mappings are uniformly bounded by some constant Let denote the coordinate functionals which assign to every in the coordinate of in the above expansion. They are called biorthogonal functionals. 
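The basis expansion and the biorthogonal functionals just mentioned can be written out as follows (standard notation, reconstructed here): if $(e_n)$ is a Schauder basis of a Banach space and $e_n^{*}$ are its coordinate functionals, then

\[ x = \sum_{n=1}^{\infty} e_n^{*}(x)\, e_n, \qquad \text{meaning} \qquad \Big\|\, x - \sum_{n=1}^{N} e_n^{*}(x)\, e_n \,\Big\| \to 0 \ \text{as } N \to \infty, \]

with $e_m^{*}(e_n) = 1$ if $m = n$ and $0$ otherwise, which is what "biorthogonal" refers to.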
When the basis vectors have norm the coordinate functionals have norm in the dual of Most classical separable spaces have explicit bases. The Haar system is a basis for The trigonometric system is a basis in when The Schauder system is a basis in the space The question of whether the disk algebra has a basis remained open for more than forty years, until Bočkarev showed in 1974 that admits a basis constructed from the Franklin system. Since every vector in a Banach space with a basis is the limit of with of finite rank and uniformly bounded, the space satisfies the bounded approximation property. The first example by Enflo of a space failing the approximation property was at the same time the first example of a separable Banach space without a Schauder basis. Robert C. James characterized reflexivity in Banach spaces with a basis: the space with a Schauder basis is reflexive if and only if the basis is both shrinking and boundedly complete. In this case, the biorthogonal functionals form a basis of the dual of Tensor product Let and be two -vector spaces. The tensor product of and is a -vector space with a bilinear mapping which has the following universal property: If is any bilinear mapping into a -vector space then there exists a unique linear mapping such that The image under of a couple in is denoted by and called a simple tensor. Every element in is a finite sum of such simple tensors. There are various norms that can be placed on the tensor product of the underlying vector spaces, amongst others the projective cross norm and injective cross norm introduced by A. Grothendieck in 1955. In general, the tensor product of complete spaces is not complete again. When working with Banach spaces, it is customary to say that the projective tensor product of two Banach spaces and is the of the algebraic tensor product equipped with the projective tensor norm, and similarly for the injective tensor product Grothendieck proved in particular that where is a compact Hausdorff space, the Banach space of continuous functions from to and the space of Bochner-measurable and integrable functions from to and where the isomorphisms are isometric. The two isomorphisms above are the respective extensions of the map sending the tensor to the vector-valued function Tensor products and the approximation property Let be a Banach space. The tensor product is identified isometrically with the closure in of the set of finite rank operators. When has the approximation property, this closure coincides with the space of compact operators on For every Banach space there is a natural norm linear map obtained by extending the identity map of the algebraic tensor product. Grothendieck related the approximation problem to the question of whether this map is one-to-one when is the dual of Precisely, for every Banach space the map is one-to-one if and only if has the approximation property. Grothendieck conjectured that and must be different whenever and are infinite-dimensional Banach spaces. This was disproved by Gilles Pisier in 1983. Pisier constructed an infinite-dimensional Banach space such that and are equal. Furthermore, just as Enflo's example, this space is a "hand-made" space that fails to have the approximation property. On the other hand, Szankowski proved that the classical space does not have the approximation property. 
Some classification results Characterizations of Hilbert space among Banach spaces A necessary and sufficient condition for the norm of a Banach space to be associated to an inner product is the parallelogram identity: It follows, for example, that the Lebesgue space is a Hilbert space only when If this identity is satisfied, the associated inner product is given by the polarization identity. In the case of real scalars, this gives: For complex scalars, defining the inner product so as to be -linear in antilinear in the polarization identity gives: To see that the parallelogram law is sufficient, one observes in the real case that is symmetric, and in the complex case, that it satisfies the Hermitian symmetry property and The parallelogram law implies that is additive in It follows that it is linear over the rationals, thus linear by continuity. Several characterizations of spaces isomorphic (rather than isometric) to Hilbert spaces are available. The parallelogram law can be extended to more than two vectors, and weakened by the introduction of a two-sided inequality with a constant : Kwapień proved that if for every integer and all families of vectors then the Banach space is isomorphic to a Hilbert space. Here, denotes the average over the possible choices of signs In the same article, Kwapień proved that the validity of a Banach-valued Parseval's theorem for the Fourier transform characterizes Banach spaces isomorphic to Hilbert spaces. Lindenstrauss and Tzafriri proved that a Banach space in which every closed linear subspace is complemented (that is, is the range of a bounded linear projection) is isomorphic to a Hilbert space. The proof rests upon Dvoretzky's theorem about Euclidean sections of high-dimensional centrally symmetric convex bodies. In other words, Dvoretzky's theorem states that for every integer any finite-dimensional normed space, with dimension sufficiently large compared to contains subspaces nearly isometric to the -dimensional Euclidean space. The next result gives the solution of the so-called . An infinite-dimensional Banach space is said to be homogeneous if it is isomorphic to all its infinite-dimensional closed subspaces. A Banach space isomorphic to is homogeneous, and Banach asked for the converse. An infinite-dimensional Banach space is hereditarily indecomposable when no subspace of it can be isomorphic to the direct sum of two infinite-dimensional Banach spaces. The Gowers dichotomy theorem asserts that every infinite-dimensional Banach space contains, either a subspace with unconditional basis, or a hereditarily indecomposable subspace and in particular, is not isomorphic to its closed hyperplanes. If is homogeneous, it must therefore have an unconditional basis. It follows then from the partial solution obtained by Komorowski and Tomczak–Jaegermann, for spaces with an unconditional basis, that is isomorphic to Metric classification If is an isometry from the Banach space onto the Banach space (where both and are vector spaces over ), then the Mazur–Ulam theorem states that must be an affine transformation. In particular, if this is maps the zero of to the zero of then must be linear. This result implies that the metric in Banach spaces, and more generally in normed spaces, completely captures their linear structure. Topological classification Finite dimensional Banach spaces are homeomorphic as topological spaces, if and only if they have the same dimension as real vector spaces. 
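For reference, the parallelogram identity and the real polarization identity discussed at the beginning of this classification section are (standard statements, reconstructed here rather than quoted from the source):

\[ \|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2, \qquad \langle x, y \rangle = \tfrac{1}{4}\big(\|x + y\|^2 - \|x - y\|^2\big), \]

so a norm comes from an inner product exactly when the first identity holds for all $x$ and $y$, and in the real case the second identity then recovers that inner product.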
Anderson–Kadec theorem (1965–66) proves that any two infinite-dimensional separable Banach spaces are homeomorphic as topological spaces. Kadec's theorem was extended by Torunczyk, who proved that any two Banach spaces are homeomorphic if and only if they have the same density character, the minimum cardinality of a dense subset. Spaces of continuous functions When two compact Hausdorff spaces and are homeomorphic, the Banach spaces and are isometric. Conversely, when is not homeomorphic to the (multiplicative) Banach–Mazur distance between and must be greater than or equal to see above the results by Amir and Cambern. Although uncountable compact metric spaces can have different homeomorphy types, one has the following result due to Milutin: The situation is different for countably infinite compact Hausdorff spaces. Every countably infinite compact is homeomorphic to some closed interval of ordinal numbers equipped with the order topology, where is a countably infinite ordinal. The Banach space is then isometric to . When are two countably infinite ordinals, and assuming the spaces and are isomorphic if and only if . For example, the Banach spaces are mutually non-isomorphic. Examples Derivatives Several concepts of a derivative may be defined on a Banach space. See the articles on the Fréchet derivative and the Gateaux derivative for details. The Fréchet derivative allows for an extension of the concept of a total derivative to Banach spaces. The Gateaux derivative allows for an extension of a directional derivative to locally convex topological vector spaces. Fréchet differentiability is a stronger condition than Gateaux differentiability. The quasi-derivative is another generalization of directional derivative that implies a stronger condition than Gateaux differentiability, but a weaker condition than Fréchet differentiability. Generalizations Several important spaces in functional analysis, for instance the space of all infinitely often differentiable functions or the space of all distributions on are complete but are not normed vector spaces and hence not Banach spaces. In Fréchet spaces one still has a complete metric, while LF-spaces are complete uniform vector spaces arising as limits of Fréchet spaces.
Mathematics
Linear algebra
null
3996
https://en.wikipedia.org/wiki/Boat
Boat
A boat is a watercraft of a large range of types and sizes, but generally smaller than a ship, which is distinguished by its larger size or capacity, its shape, or its ability to carry boats. Small boats are typically used on inland waterways such as rivers and lakes, or in protected coastal areas. However, some boats (such as whaleboats) were intended for offshore use. In modern naval terms, a boat is a vessel small enough to be carried aboard a ship. Boats vary in proportion and construction methods with their intended purpose, available materials, or local traditions. Canoes have been used since prehistoric times and remain in use throughout the world for transportation, fishing, and sport. Fishing boats vary widely in style partly to match local conditions. Pleasure craft used in recreational boating include ski boats, pontoon boats, and sailboats. House boats may be used for vacationing or long-term residence. Lighters are used to move cargo to and from large ships unable to get close to shore. Lifeboats have rescue and safety functions. Boats can be propelled by manpower (e.g. rowboats and paddle boats), wind (e.g. sailboats), and inboard/outboard motors (including gasoline, diesel, and electric). History Differentiation from other prehistoric watercraft The earliest watercraft are considered to have been rafts. These would have been used for voyages such as the settlement of Australia sometime between 50,000 and 60,000 years ago. A boat differs from a raft in that it obtains its buoyancy by having most of its structure exclude water with a waterproof layer, e.g. the planks of a wooden hull, the hide covering (or tarred canvas) of a currach. In contrast, a raft is buoyant because it joins components that are themselves buoyant, for example, logs, bamboo poles, bundles of reeds, floats (such as inflated hides, sealed pottery containers or, in a modern context, empty oil drums). The key difference between a raft and a boat is that the former is a "flow through" structure, with waves able to pass up through it. Consequently, except for short river crossings, a raft is not a practical means of transport in colder regions of the world as the users would be at risk of hypothermia. Today that climatic limitation restricts rafts to between 40° north and 40° south, with, in the past, similar boundaries that have moved as the world's climate has varied. Types The earliest boats may have been either dugouts or hide boats. The oldest recovered boat in the world, the Pesse canoe, found in the Netherlands, is a dugout made from the hollowed tree trunk of a Pinus sylvestris that was constructed somewhere between 8200 and 7600 BC. This canoe is exhibited in the Drents Museum in Assen, Netherlands. Other very old dugout boats have also been recovered. Hide boats, made from covering a framework with animal skins, could be equally as old as logboats, but such a structure is much less likely to survive in an archaeological context. Plank-built boats are considered, in most cases, to have developed from the logboat. There are examples of logboats that have been expanded: by deforming the hull under the influence of heat, by raising up the sides with added planks, or by splitting down the middle and adding a central plank to make it wider. (Some of these methods have been in quite recent use; there is no simple developmental sequence.) The earliest known plank-built boats are from the Nile, dating to the third millennium BC. Outside Egypt, the next earliest are from England.
The Ferriby boats are dated to the early part of the second millennium BC and the end of the third millennium. Plank-built boats require a level of woodworking technology that was first available in the neolithic, with more complex versions only becoming achievable in the Bronze Age. Types Boats can be categorized by their means of propulsion. These divide into: Unpowered. This involves drifting with the tide or a river current. Powered by the crew-members on board, using oars, paddles or a punting pole or quant. Powered by sail. Towed, either by humans or animals from a river or canal bank (or in very shallow water, by walking on the sea or river bed) or by another vessel. Powered by machinery, such as internal combustion engines, steam engines or by batteries and an electric motor. Any one vessel may use more than one of these methods at different times or in combination. A number of large vessels are usually referred to as boats. Submarines are a prime example. Other types of large vessels which are traditionally called boats include Great Lakes freighters, riverboats, and ferryboats. Though large enough to carry their own boats and heavy cargo, these vessels are designed for operation on inland or protected coastal waters. Terminology The hull is the main, and in some cases only, structural component of a boat. It provides both capacity and buoyancy. The keel is a boat's "backbone", a lengthwise structural member to which the perpendicular frames are fixed. On some boats, a deck covers the hull, in part or whole. While a ship often has several decks, a boat is unlikely to have more than one. Above the deck are often lifelines connected to stanchions, bulwarks perhaps topped by gunnels, or some combination of the two. A cabin may protrude above the deck forward, aft, along the centerline, or cover much of the length of the boat. Vertical structures dividing the internal spaces are known as bulkheads. The forward end of a boat is called the bow, the aft end the stern. Facing forward, the right side is referred to as starboard and the left side as port. Building materials Until the mid-19th century, most boats were made of natural materials, primarily wood, although bark and animal skins were also used. Early boats include the birch bark canoe, the animal hide-covered kayak and coracle and the dugout canoe made from a single log. By the mid-19th century, some boats had been built with iron or steel frames but still planked in wood. In 1855 ferro-cement boat construction was patented by the French, who coined the name "ferciment". This is a system by which a steel or iron wire framework is built in the shape of a boat's hull and covered over with cement. Reinforced with bulkheads and other internal structures it is strong but heavy, easily repaired, and, if sealed properly, will not leak or corrode. As the forests of Britain and Europe continued to be over-harvested to supply the keels of larger wooden boats, and the Bessemer process (patented in 1855) cheapened the cost of steel, steel ships and boats began to be more common. By the 1930s boats built entirely of steel from frames to plating were seen replacing wooden boats in many industrial uses and fishing fleets. Private recreational boats of steel remain uncommon. In 1895 WH Mullins produced steel boats of galvanized iron and by 1930 became the world's largest producer of pleasure boats.
Mullins also offered boats in aluminum from 1895 through 1899 and once again in the 1920s, but it was not until the mid-20th century that aluminium gained widespread popularity. Though much more expensive than steel, aluminum alloys exist that do not corrode in salt water, allowing a similar load carrying capacity to steel at much less weight. Around the mid-1960s, boats made of fiberglass (aka "glass fiber") became popular, especially for recreational boats. Fiberglass is also known as "GRP" (glass-reinforced plastic) in the UK, and "FRP" (for fiber-reinforced plastic) in the US. Fiberglass boats are strong and do not rust, corrode, or rot. Instead, they are susceptible to structural degradation from sunlight and extremes in temperature over their lifespan. Fiberglass structures can be made stiffer with sandwich panels, where the fiberglass encloses a lightweight core such as balsa or foam. Cold molding is a modern construction method, using wood as the structural component. In one cold molding process, very thin strips of wood are layered over a form. Each layer is coated with resin, followed by another directionally alternating layer laid on top. Subsequent layers may be stapled or otherwise mechanically fastened to the previous, or weighted or vacuum bagged to provide compression and stabilization until the resin sets. An alternative process uses thin sheets of plywood shaped over a disposable male mold, and coated with epoxy. Propulsion The most common means of boat propulsion are as follows: Engine Inboard motor Stern drive (Inboard/outboard) Outboard motor Paddle wheel Water jet (jetboat, personal water craft) Fan (hovercraft, air boat) Man (rowing, paddling, setting pole etc.) Wind (sailing) Buoyancy A boat displaces its weight in water, regardless whether it is made of wood, steel, fiberglass, or even concrete. If weight is added to the boat, the volume of the hull drawn below the waterline will increase to keep the balance above and below the surface equal. Boats have a natural or designed level of buoyancy. Exceeding it will cause the boat first to ride lower in the water, second to take on water more readily than when properly loaded, and ultimately, if overloaded by any combination of structure, cargo, and water, sink. As commercial vessels must be correctly loaded to be safe, and as the sea becomes less buoyant in brackish areas such as the Baltic, the Plimsoll line was introduced to prevent overloading. European Union classification Since 1998 all new leisure boats and barges built in Europe between 2.5m and 24m must comply with the EU's Recreational Craft Directive (RCD). The Directive establishes four categories that permit the allowable wind and wave conditions for vessels in each class: Class A - the boat may safely navigate any waters. Class B - the boat is limited to offshore navigation. (Winds up to Force 8 & waves up to 4 metres) Class C - the boat is limited to inshore (coastal) navigation. (Winds up to Force 6 & waves up to 2 metres) Class D - the boat is limited to rivers, canals and small lakes. (Winds up to Force 4 & waves up to 0.5 metres) Europe is the main producer of recreational boats (the second production in the world is located in Poland). European brands are known all over the world - in fact, these are the brands that created RCD and set the standard for shipyards around the world.
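The buoyancy behaviour described above follows from Archimedes' principle: a floating hull settles until the weight of the water it displaces equals the total weight of the boat, crew and cargo. As a rough worked illustration (the figures are assumed for the example, not taken from any particular boat, and the hull sides are assumed roughly vertical near the waterline), adding 200 kg of load to a boat with a waterplane area of about 4 m² in fresh water deepens the draft by approximately

\[
\Delta d \approx \frac{\Delta m}{\rho_{\text{water}} \, A_{\text{waterplane}}} = \frac{200\ \text{kg}}{1000\ \text{kg/m}^3 \times 4\ \text{m}^2} = 0.05\ \text{m},
\]

about five centimetres, which is why the same load affects a small dinghy far more than a large workboat.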
Technology
Maritime transport
null
3997
https://en.wikipedia.org/wiki/Blood
Blood
Blood is a body fluid in the circulatory system of humans and other vertebrates that delivers necessary substances such as nutrients and oxygen to the cells, and transports metabolic waste products away from those same cells. Blood is composed of blood cells suspended in blood plasma. Plasma, which constitutes 55% of blood fluid, is mostly water (92% by volume), and contains proteins, glucose, mineral ions, and hormones. The blood cells are mainly red blood cells (erythrocytes), white blood cells (leukocytes), and (in mammals) platelets (thrombocytes). The most abundant cells are red blood cells. These contain hemoglobin, which facilitates oxygen transport by reversibly binding to it, increasing its solubility. Jawed vertebrates have an adaptive immune system, based largely on white blood cells. White blood cells help to resist infections and parasites. Platelets are important in the clotting of blood. Blood is circulated around the body through blood vessels by the pumping action of the heart. In animals with lungs, arterial blood carries oxygen from inhaled air to the tissues of the body, and venous blood carries carbon dioxide, a waste product of metabolism produced by cells, from the tissues to the lungs to be exhaled. Blood is bright red when its hemoglobin is oxygenated and dark red when it is deoxygenated. Medical terms related to blood often begin with hemo-, hemato-, haemo- or haemato- from the Greek word () for "blood". In terms of anatomy and histology, blood is considered a specialized form of connective tissue, given its origin in the bones and the presence of potential molecular fibers in the form of fibrinogen. Functions Blood performs many important functions within the body, including: Supply of oxygen to tissues (bound to hemoglobin, which is carried in red cells) Supply of nutrients such as glucose, amino acids, and fatty acids (dissolved in the blood or bound to plasma proteins (e.g., blood lipids)) Removal of waste such as carbon dioxide, urea, and lactic acid Immunological functions, including circulation of white blood cells, and detection of foreign material by antibodies Coagulation, the response to a broken blood vessel, the conversion of blood from a liquid to a semisolid gel to stop bleeding Messenger functions, including the transport of hormones and the signaling of tissue damage Regulation of core body temperature Hydraulic functions Constituents In mammals Blood accounts for 7% of the human body weight, with an average density around 1060 kg/m3, very close to pure water's density of 1000 kg/m3. The average adult has a blood volume of roughly or 1.3 gallons, which is composed of plasma and formed elements. The formed elements are the two types of blood cell or corpuscle – the red blood cells, (erythrocytes) and white blood cells (leukocytes), and the cell fragments called platelets that are involved in clotting. By volume, the red blood cells constitute about 45% of whole blood, the plasma about 54.3%, and white cells about 0.7%. Whole blood (plasma and cells) exhibits non-Newtonian fluid dynamics. Cells One microliter of blood contains: 4.7 to 6.1 million (male), 4.2 to 5.4 million (female) erythrocytes: Red blood cells contain the blood's hemoglobin and distribute oxygen. Mature red blood cells lack a nucleus and organelles in mammals. The red blood cells (together with endothelial vessel cells and other cells) are also marked by glycoproteins that define the different blood types. 
The proportion of blood occupied by red blood cells is referred to as the hematocrit, and is normally about 45%. The combined surface area of all red blood cells of the human body would be roughly 2,000 times as great as the body's exterior surface. 4,000–11,000 leukocytes: White blood cells are part of the body's immune system; they destroy and remove old or aberrant cells and cellular debris, as well as attack infectious agents (pathogens) and foreign substances. The cancer of leukocytes is called leukemia. 200,000–500,000 thrombocytes: Also called platelets, they take part in blood clotting (coagulation). Fibrin from the coagulation cascade creates a mesh over the platelet plug. Plasma About 55% of blood is blood plasma, a fluid that is the blood's liquid medium, which by itself is straw-yellow in color. The blood plasma volume totals of 2.7–3.0 liters (2.8–3.2 quarts) in an average human. It is essentially an aqueous solution containing 92% water, 8% blood plasma proteins, and trace amounts of other materials. Plasma circulates dissolved nutrients, such as glucose, amino acids, and fatty acids (dissolved in the blood or bound to plasma proteins), and removes waste products, such as carbon dioxide, urea, and lactic acid. Other important components include: Serum albumin Blood-clotting factors (to facilitate coagulation) Immunoglobulins (antibodies) lipoprotein particles Various other proteins Various electrolytes (mainly sodium and chloride) The term serum refers to plasma from which the clotting proteins have been removed. Most of the proteins remaining are albumin and immunoglobulins. Acidity Blood pH is regulated to stay within the narrow range of 7.35 to 7.45, making it slightly basic (compensation). Extra-cellular fluid in blood that has a pH below 7.35 is too acidic, whereas blood pH above 7.45 is too basic. A pH below 6.9 or above 7.8 is usually lethal. Blood pH, partial pressure of oxygen (pO2), partial pressure of carbon dioxide (pCO2), and bicarbonate (HCO3−) are carefully regulated by a number of homeostatic mechanisms, which exert their influence principally through the respiratory system and the urinary system to control the acid–base balance and respiration, which is called compensation. An arterial blood gas test measures these. Plasma also circulates hormones transmitting their messages to various tissues. The list of normal reference ranges for various blood electrolytes is extensive. In non-mammals Human blood is typical of that of mammals, although the precise details concerning cell numbers, size, protein structure, and so on, vary somewhat between species. In non-mammalian vertebrates, however, there are some key differences: Red blood cells of non-mammalian vertebrates are flattened and ovoid in form, and retain their cell nuclei. There is considerable variation in the types and proportions of white blood cells; for example, acidophils are generally more common than in humans. Platelets are unique to mammals; in other vertebrates, small nucleated, spindle cells called thrombocytes are responsible for blood clotting instead. Physiology Circulatory system Blood is circulated around the body through blood vessels by the pumping action of the heart. In humans, blood is pumped from the strong left ventricle of the heart through arteries to peripheral tissues and returns to the right atrium of the heart through veins. It then enters the right ventricle and is pumped through the pulmonary artery to the lungs and returns to the left atrium through the pulmonary veins. 
Blood then enters the left ventricle to be circulated again. Arterial blood carries oxygen from inhaled air to all of the cells of the body, and venous blood carries carbon dioxide, a waste product of metabolism by cells, to the lungs to be exhaled. However, one exception includes pulmonary arteries, which contain the most deoxygenated blood in the body, while the pulmonary veins contain oxygenated blood. Additional return flow may be generated by the movement of skeletal muscles, which can compress veins and push blood through the valves in veins toward the right atrium. The blood circulation was famously described by William Harvey in 1628. Cell production and degradation In vertebrates, the various cells of blood are made in the bone marrow in a process called hematopoiesis, which includes erythropoiesis, the production of red blood cells; and myelopoiesis, the production of white blood cells and platelets. During childhood, almost every human bone produces red blood cells; as adults, red blood cell production is limited to the larger bones: the bodies of the vertebrae, the breastbone (sternum), the ribcage, the pelvic bones, and the bones of the upper arms and legs. In addition, during childhood, the thymus gland, found in the mediastinum, is an important source of T lymphocytes. The proteinaceous component of blood (including clotting proteins) is produced predominantly by the liver, while hormones are produced by the endocrine glands and the watery fraction is regulated by the hypothalamus and maintained by the kidney. Healthy erythrocytes have a plasma life of about 120 days before they are degraded by the spleen, and the Kupffer cells in the liver. The liver also clears some proteins, lipids, and amino acids. The kidney actively secretes waste products into the urine. Oxygen transport About 98.5% of the oxygen in a sample of arterial blood in a healthy human breathing air at sea-level pressure is chemically combined with the hemoglobin. About 1.5% is physically dissolved in the other blood liquids and not connected to hemoglobin. The hemoglobin molecule is the primary transporter of oxygen in mammals and many other species. Hemoglobin has an oxygen binding capacity between 1.36 and 1.40 ml O2 per gram hemoglobin, which increases the total blood oxygen capacity seventyfold, compared to if oxygen solely were carried by its solubility of 0.03 ml O2 per liter blood per mm Hg partial pressure of oxygen (about 100 mm Hg in arteries). With the exception of pulmonary and umbilical arteries and their corresponding veins, arteries carry oxygenated blood away from the heart and deliver it to the body via arterioles and capillaries, where the oxygen is consumed; afterwards, venules and veins carry deoxygenated blood back to the heart. Under normal conditions in adult humans at rest, hemoglobin in blood leaving the lungs is about 98–99% saturated with oxygen, achieving an oxygen delivery between 950 and 1150 ml/min to the body. In a healthy adult at rest, oxygen consumption is approximately 200–250 ml/min, and deoxygenated blood returning to the lungs is still roughly 75% (70 to 78%) saturated. Increased oxygen consumption during sustained exercise reduces the oxygen saturation of venous blood, which can reach less than 15% in a trained athlete; although breathing rate and blood flow increase to compensate, oxygen saturation in arterial blood can drop to 95% or less under these conditions. 
Oxygen saturation this low is considered dangerous in an individual at rest (for instance, during surgery under anesthesia). Sustained hypoxia (oxygenation less than 90%) is dangerous to health, and severe hypoxia (saturations less than 30%) may be rapidly fatal. A fetus, receiving oxygen via the placenta, is exposed to much lower oxygen pressures (about 21% of the level found in an adult's lungs), so fetuses produce another form of hemoglobin with a much higher affinity for oxygen (hemoglobin F) to function under these conditions. Carbon dioxide transport CO2 is carried in blood in three different ways. (The exact percentages vary depending on whether it is arterial or venous blood). Most of it (about 70%) is converted to bicarbonate ions by the enzyme carbonic anhydrase in the red blood cells by the reaction CO2 + H2O → H2CO3 → H+ + HCO3−; about 7% is dissolved in the plasma; and about 23% is bound to hemoglobin as carbamino compounds. Hemoglobin, the main oxygen-carrying molecule in red blood cells, carries both oxygen and carbon dioxide. However, the CO2 bound to hemoglobin does not bind to the same site as oxygen. Instead, it combines with the N-terminal groups on the four globin chains. However, because of allosteric effects on the hemoglobin molecule, the binding of CO2 decreases the amount of oxygen that is bound for a given partial pressure of oxygen. The decreased binding to carbon dioxide in the blood due to increased oxygen levels is known as the Haldane effect, and is important in the transport of carbon dioxide from the tissues to the lungs. A rise in the partial pressure of CO2 or a lower pH will cause offloading of oxygen from hemoglobin, which is known as the Bohr effect. Transport of hydrogen ions Some oxyhemoglobin loses oxygen and becomes deoxyhemoglobin. Deoxyhemoglobin binds most of the hydrogen ions as it has a much greater affinity for hydrogen ions than does oxyhemoglobin. Lymphatic system In mammals, blood is in equilibrium with lymph, which is continuously formed in tissues from blood by capillary ultrafiltration. Lymph is collected by a system of small lymphatic vessels and directed to the thoracic duct, which drains into the left subclavian vein, where lymph rejoins the systemic blood circulation. Thermoregulation Blood circulation transports heat throughout the body, and adjustments to this flow are an important part of thermoregulation. Increasing blood flow to the surface (e.g., during warm weather or strenuous exercise) causes warmer skin, resulting in faster heat loss. In contrast, when the external temperature is low, blood flow to the extremities and the surface of the skin is reduced to prevent heat loss, and blood is preferentially circulated to the body's vital organs. Rate of flow Rate of blood flow varies greatly between different organs. The liver has the most abundant blood supply with an approximate flow of 1350 ml/min. The kidney and brain are the second and third most supplied organs, with 1100 ml/min and ~700 ml/min, respectively. Relative rates of blood flow per 100 g of tissue are different, with the kidney, adrenal gland and thyroid being the first, second and third most supplied tissues, respectively. Hydraulic functions The restriction of blood flow can also be used in specialized tissues to cause engorgement, resulting in an erection of that tissue; examples are the erectile tissue in the penis and clitoris.
Another example of a hydraulic function is the jumping spider, in which blood forced into the legs under pressure causes them to straighten for a powerful jump, without the need for bulky muscular legs. Color Hemoglobin is the principal determinant of the color of blood (hemochrome). Each molecule has four heme groups, and their interaction with various molecules alters the exact color. Arterial blood and capillary blood are bright red, as oxygen imparts a strong red color to the heme group. Deoxygenated blood is a darker shade of red; this is present in veins, and can be seen during blood donation and when venous blood samples are taken. This is because the spectrum of light absorbed by hemoglobin differs between the oxygenated and deoxygenated states. Blood in carbon monoxide poisoning is bright red, because carbon monoxide causes the formation of carboxyhemoglobin. In cyanide poisoning, the body cannot use oxygen, so the venous blood remains oxygenated, increasing the redness. There are some conditions affecting the heme groups present in hemoglobin that can make the skin appear blue – a symptom called cyanosis. If the heme is oxidized, methemoglobin, which is more brownish and cannot transport oxygen, is formed. In the rare condition sulfhemoglobinemia, arterial hemoglobin is partially oxygenated, and appears dark red with a bluish hue. Veins close to the surface of the skin appear blue for a variety of reasons. However, the factors that contribute to this alteration of color perception are related to the light-scattering properties of the skin and the processing of visual input by the visual cortex, rather than the actual color of the venous blood. Skinks in the genus Prasinohaema have green blood due to a buildup of the waste product biliverdin. Disorders General medical Disorders of volume Injury can cause blood loss through bleeding. A healthy adult can lose almost 20% of blood volume (1 L) before the first symptom, restlessness, begins, and 40% of volume (2 L) before shock sets in. Thrombocytes are important for blood coagulation and the formation of blood clots, which can stop bleeding. Trauma to the internal organs or bones can cause internal bleeding, which can sometimes be severe. Dehydration can reduce the blood volume by reducing the water content of the blood. This would rarely result in shock (apart from the very severe cases) but may result in orthostatic hypotension and fainting. Disorders of circulation Shock is the ineffective perfusion of tissues, and can be caused by a variety of conditions including blood loss, infection, poor cardiac output. Atherosclerosis reduces the flow of blood through arteries, because atheroma lines arteries and narrows them. Atheroma tends to increase with age, and its progression can be compounded by many causes including smoking, high blood pressure, excess circulating lipids (hyperlipidemia), and diabetes mellitus. Coagulation can form a thrombosis, which can obstruct vessels. Problems with blood composition, the pumping action of the heart, or narrowing of blood vessels can have many consequences including hypoxia (lack of oxygen) of the tissues supplied. The term ischemia refers to tissue that is inadequately perfused with blood, and infarction refers to tissue death (necrosis), which can occur when the blood supply has been blocked (or is very inadequate). 
Hematological Anemia Insufficient red cell mass (anemia) can be the result of bleeding, blood disorders like thalassemia, or nutritional deficiencies, and may require one or more blood transfusions. Anemia can also be due to a genetic disorder in which the red blood cells do not function effectively. Anemia can be confirmed by a blood test if the hemoglobin value is less than 13.5 gm/dl in men or less than 12.0 gm/dl in women. Several countries have blood banks to fill the demand for transfusable blood. A person receiving a blood transfusion must have a blood type compatible with that of the donor. Sickle-cell anemia Disorders of cell proliferation Leukemia is a group of cancers of the blood-forming tissues and cells. Non-cancerous overproduction of red cells (polycythemia vera) or platelets (essential thrombocytosis) may be premalignant. Myelodysplastic syndromes involve ineffective production of one or more cell lines. Disorders of coagulation Hemophilia is a genetic illness that causes dysfunction in one of the blood's clotting mechanisms. This can allow otherwise inconsequential wounds to be life-threatening, but more commonly results in hemarthrosis, or bleeding into joint spaces, which can be crippling. Ineffective or insufficient platelets can also result in coagulopathy (bleeding disorders). Hypercoagulable state (thrombophilia) results from defects in regulation of platelet or clotting factor function, and can cause thrombosis. Infectious disorders of blood Blood is an important vector of infection. HIV, the virus that causes AIDS, is transmitted through contact with blood, semen or other body secretions of an infected person. Hepatitis B and C are transmitted primarily through blood contact. Owing to blood-borne infections, bloodstained objects are treated as a biohazard. Bacterial infection of the blood is bacteremia or sepsis. Viral Infection is viremia. Malaria and trypanosomiasis are blood-borne parasitic infections. Carbon monoxide poisoning Substances other than oxygen can bind to hemoglobin; in some cases, this can cause irreversible damage to the body. Carbon monoxide, for example, is extremely dangerous when carried to the blood via the lungs by inhalation, because carbon monoxide irreversibly binds to hemoglobin to form carboxyhemoglobin, so that less hemoglobin is free to bind oxygen, and fewer oxygen molecules can be transported throughout the blood. This can cause suffocation insidiously. A fire burning in an enclosed room with poor ventilation presents a very dangerous hazard, since it can create a build-up of carbon monoxide in the air. Some carbon monoxide binds to hemoglobin when smoking tobacco. Treatments Transfusion Blood for transfusion is obtained from human donors by blood donation and stored in a blood bank. There are many different blood types in humans, the ABO blood group system, and the Rhesus blood group system being the most important. Transfusion of blood of an incompatible blood group may cause severe, often fatal, complications, so crossmatching is done to ensure that a compatible blood product is transfused. Other blood products administered intravenously are platelets, blood plasma, cryoprecipitate, and specific coagulation factor concentrates. Intravenous administration Many forms of medication (from antibiotics to chemotherapy) are administered intravenously, as they are not readily or adequately absorbed by the digestive tract. 
After severe acute blood loss, liquid preparations, generically known as plasma expanders, can be given intravenously, either solutions of salts (NaCl, KCl, CaCl2 etc.) at physiological concentrations, or colloidal solutions, such as dextrans, human serum albumin, or fresh frozen plasma. In these emergency situations, a plasma expander is a more effective life-saving procedure than a blood transfusion, because the metabolism of transfused red blood cells does not restart immediately after a transfusion. Letting In modern evidence-based medicine, bloodletting is used in management of a few rare diseases, including hemochromatosis and polycythemia. However, bloodletting and leeching were common unvalidated interventions used until the 19th century, as many diseases were incorrectly thought to be due to an excess of blood, according to Hippocratic medicine. Etymology English blood (Old English blod) derives from Germanic and has cognates with a similar range of meanings in all other Germanic languages (e.g. German Blut, Swedish blod, Gothic blōþ). There is no accepted Indo-European etymology. History Classical Greek medicine Robin Fåhræus (a Swedish physician who devised the erythrocyte sedimentation rate) suggested that the Ancient Greek system of humorism, wherein the body was thought to contain four distinct bodily fluids (associated with different temperaments), were based upon the observation of blood clotting in a transparent container. When blood is drawn in a glass container and left undisturbed for about an hour, four different layers can be seen. A dark clot forms at the bottom (the "black bile"). Above the clot is a layer of red blood cells (the "blood"). Above this is a whitish layer of white blood cells (the "phlegm"). The top layer is clear yellow serum (the "yellow bile"). In general, Greek thinkers believed that blood was made from food. Plato and Aristotle are two important sources of evidence for this view, but it dates back to Homer's Iliad. Plato thinks that fire in our bellies transform food into blood. Plato believes that the movements of air in the body as we exhale and inhale carry the fire as it transforms our food into blood. Aristotle believed that food is concocted into blood in the heart and transformed into our body's matter. Types The ABO blood group system was discovered in the year 1900 by Karl Landsteiner. Jan Janský is credited with the first classification of blood into the four types (A, B, AB, and O) in 1907, which remains in use today. In 1907 the first blood transfusion was performed that used the ABO system to predict compatibility. The first non-direct transfusion was performed on 27 March 1914. The Rhesus factor was discovered in 1937. Culture and religion Due to its importance to life, blood is associated with a large number of beliefs. One of the most basic is the use of blood as a symbol for family relationships through birth/parentage; to be "related by blood" is to be related by ancestry or descendence, rather than marriage. This bears closely to bloodlines, and sayings such as "blood is thicker than water" and "bad blood", as well as "Blood brother". Blood is given particular emphasis in the Islamic, Jewish, and Christian religions, because Leviticus 17:11 says "the life of a creature is in the blood." This phrase is part of the Levitical law forbidding the drinking of blood or eating meat with the blood still intact instead of being poured off. 
Mythic references to blood can sometimes be connected to the life-giving nature of blood, seen in such events as childbirth, as contrasted with the blood of injury or death. Indigenous Australians In many indigenous Australian Aboriginal peoples' traditions, ochre (particularly red) and blood, both high in iron content and considered Maban, are applied to the bodies of dancers for ritual. As Lawlor states: Lawlor comments that blood employed in this fashion is held by these peoples to attune the dancers to the invisible energetic realm of the Dreamtime. Lawlor then connects these invisible energetic realms and magnetic fields, because iron is magnetic. European paganism Among the Germanic tribes, blood was used during their sacrifices; the Blóts. The blood was considered to have the power of its originator, and, after the butchering, the blood was sprinkled on the walls, on the statues of the gods, and on the participants themselves. This act of sprinkling blood was called blóedsian in Old English, and the terminology was borrowed by the Roman Catholic Church becoming to bless and blessing. The Hittite word for blood, ishar was a cognate to words for "oath" and "bond", see Ishara. The Ancient Greeks believed that the blood of the gods, ichor, was a substance that was poisonous to mortals. As a relic of Germanic Law, the cruentation, an ordeal where the corpse of the victim was supposed to start bleeding in the presence of the murderer, was used until the early 17th century. Christianity In Genesis 9:4, God prohibited Noah and his sons from eating blood (see Noahide Law). This command continued to be observed by the Eastern Orthodox Church. It is also found in the Bible that when the Angel of Death came around to the Hebrew house that the first-born child would not die if the angel saw lamb's blood wiped across the doorway. At the Council of Jerusalem, the apostles prohibited certain Christians from consuming blood – this is documented in Acts 15:20 and 29. This chapter specifies a reason (especially in verses 19–21): It was to avoid offending Jews who had become Christians, because the Mosaic Law Code prohibited the practice. Christ's blood is the means for the atonement of sins. Also, "... the blood of Jesus Christ his [God] Son cleanseth us from all sin." (1 John 1:7), "... Unto him [God] that loved us, and washed us from our sins in his own blood." (Revelation 1:5), and "And they overcame him (Satan) by the blood of the Lamb [Jesus the Christ], and by the word of their testimony ..." (Revelation 12:11). Some Christian churches, including Roman Catholicism, Eastern Orthodoxy, Oriental Orthodoxy, and the Assyrian Church of the East teach that, when consecrated, the Eucharistic wine actually becomes the blood of Jesus for worshippers to drink. Thus in the consecrated wine, Jesus becomes spiritually and physically present. This teaching is rooted in the Last Supper, as written in the four gospels of the Bible, in which Jesus stated to his disciples that the bread that they ate was his body, and the wine was his blood. "This cup is the new testament in my blood, which is shed for you." (). Most forms of Protestantism, especially those of a Methodist or Presbyterian lineage, teach that the wine is no more than a symbol of the blood of Christ, who is spiritually but not physically present. Lutheran theology teaches that the body and blood is present together "in, with, and under" the bread and wine of the Eucharistic feast. 
Judaism In Judaism, animal blood may not be consumed even in the smallest quantity (Leviticus 3:17 and elsewhere); this is reflected in Jewish dietary laws (Kashrut). Blood is purged from meat by rinsing and soaking in water (to loosen clots), salting and then rinsing with water again several times. Eggs must also be checked and any blood spots removed before consumption. Although blood from fish is biblically kosher, it is rabbinically forbidden to consume fish blood to avoid the appearance of breaking the Biblical prohibition. Another ritual involving blood involves the covering of the blood of fowl and game after slaughtering (Leviticus 17:13); the reason given by the Torah is: "Because the life of the animal is [in] its blood" (ibid 17:14). In relation to human beings, Kabbalah expounds on this verse that the animal soul of a person is in the blood, and that physical desires stem from it. Likewise, the mystical reason for salting temple sacrifices and slaughtered meat is to remove the blood of animal-like passions from the person. By removing the animal's blood, the animal energies and life-force contained in the blood are removed, making the meat fit for human consumption. Islam Consumption of food containing blood is forbidden by Islamic dietary laws. This is derived from the statement in the Qur'an, sura Al-Ma'ida (5:3): "Forbidden to you (for food) are: dead meat, blood, the flesh of swine, and that on which has been invoked the name of other than Allah." Blood is considered unclean, hence there are specific methods to obtain physical and ritual status of cleanliness once bleeding has occurred. Specific rules and prohibitions apply to menstruation, postnatal bleeding and irregular vaginal bleeding. When an animal has been slaughtered, the animal's neck is cut in a way to ensure that the spine is not severed, hence the brain may send commands to the heart to pump blood to it for oxygen. In this way, blood is removed from the body, and the meat is generally now safe to cook and eat. In modern times, blood transfusions are generally not considered against the rules. Jehovah's Witnesses Based on their interpretation of scriptures such as Acts 15:28, 29 ("Keep abstaining...from blood."), many Jehovah's Witnesses neither consume blood nor accept transfusions of whole blood or its major components: red blood cells, white blood cells, platelets (thrombocytes), and plasma. Members may personally decide whether they will accept medical procedures that involve their own blood or substances that are further fractionated from the four major components. Vampirism Vampires are mythical creatures that drink blood directly for sustenance, usually with a preference for human blood. Cultures all over the world have myths of this kind; for example the 'Nosferatu' legend, a human who achieves damnation and immortality by drinking the blood of others, originates from Eastern European folklore. Ticks, leeches, female mosquitoes, vampire bats, and an assortment of other natural creatures do consume the blood of other animals, but only bats are associated with vampires. This has no relation to vampire bats, which are New World creatures discovered well after the origins of the European myths. Invertebrates In invertebrates, a body fluid analogous to blood called hemolymph is found, the main difference being that hemolymph is not contained in a closed circulatory system. Hemolymph may function to carry oxygen, although hemoglobin is not necessarily used. 
Crustaceans and mollusks use hemocyanin instead of hemoglobin. In most insects, their hemolymph does not contain oxygen-carrying molecules because their bodies are small enough for their tracheal system to suffice for supplying oxygen. Other uses Forensic and archaeological Blood residue can help forensic investigators identify weapons, reconstruct a criminal action, and link suspects to the crime. Through bloodstain pattern analysis, forensic information can also be gained from the spatial distribution of bloodstains. Blood residue analysis is also a technique used in archeology. Artistic Blood is one of the body fluids that has been used in art. In particular, the performances of Viennese Actionist Hermann Nitsch, Istvan Kantor, Franko B, Lennie Lee, Ron Athey, Yang Zhichao, Lucas Abela and Kira O'Reilly, along with the photography of Andres Serrano, have incorporated blood as a prominent visual element. Marc Quinn has made sculptures using frozen blood, including a cast of his own head made using his own blood. Genealogical The term blood is used in genealogical circles to refer to one's ancestry, origins, and ethnic background as in the word bloodline. Other terms where blood is used in a family history sense are blue-blood, royal blood, mixed-blood and blood relative.
Biology and health sciences
Biology
null
4015
https://en.wikipedia.org/wiki/BASIC
BASIC
BASIC (Beginners' All-purpose Symbolic Instruction Code) is a family of general-purpose, high-level programming languages designed for ease of use. The original version was created by John G. Kemeny and Thomas E. Kurtz at Dartmouth College in 1963. They wanted to enable students in non-scientific fields to use computers. At the time, nearly all computers required writing custom software, which only scientists and mathematicians tended to learn. In addition to the programming language, Kemeny and Kurtz developed the Dartmouth Time-Sharing System (DTSS), which allowed multiple users to edit and run BASIC programs simultaneously on remote terminals. This general model became popular on minicomputer systems like the PDP-11 and Data General Nova in the late 1960s and early 1970s. Hewlett-Packard produced an entire computer line for this method of operation, introducing the HP2000 series in the late 1960s and continuing sales into the 1980s. Many early video games trace their history to one of these versions of BASIC. The emergence of microcomputers in the mid-1970s led to the development of multiple BASIC dialects, including Microsoft BASIC in 1975. Due to the tiny main memory available on these machines, often 4 KB, a variety of Tiny BASIC dialects were also created. BASIC was available for almost any system of the era, and became the de facto programming language for home computer systems that emerged in the late 1970s. These PCs almost always had a BASIC interpreter installed by default, often in the machine's firmware or sometimes on a ROM cartridge. BASIC declined in popularity in the 1990s, as more powerful microcomputers came to market and programming languages with advanced features (such as Pascal and C) became tenable on such computers. By then, most nontechnical personal computer users relied on pre-written applications rather than writing their own programs. In 1991, Microsoft released Visual Basic, combining an updated version of BASIC with a visual forms builder. This reignited use of the language and "VB" remains a major programming language in the form of VB.NET, while a hobbyist scene for BASIC more broadly continues to exist. Origin John G. Kemeny was the chairman of the Dartmouth College Mathematics Department. Based largely on his reputation as an innovator in math teaching, in 1959 the college won an Alfred P. Sloan Foundation award for $500,000 to build a new department building. Thomas E. Kurtz had joined the department in 1956, and from the 1960s Kemeny and Kurtz agreed on the need for programming literacy among students outside the traditional STEM fields. Kemeny later noted that "Our vision was that every student on campus should have access to a computer, and any faculty member should be able to use a computer in the classroom whenever appropriate. It was as simple as that." Kemeny and Kurtz had made two previous experiments with simplified languages, DARSIMCO (Dartmouth Simplified Code) and DOPE (Dartmouth Oversimplified Programming Experiment). These did not progress past a single freshman class. New experiments using Fortran and ALGOL followed, but Kurtz concluded these languages were too tricky for what they desired. As Kurtz noted, Fortran had numerous oddly formed commands, notably an "almost impossible-to-memorize convention for specifying a loop: DO 100, I = 1, 10, 2. Is it '1, 10, 2' or '1, 2, 10', and is the comma after the line number required or not?"
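For comparison with the Fortran loop convention Kurtz criticized, a complete loop in the line-numbered style that Dartmouth eventually adopted (described below) could look like the following sketch; this is illustrative only, written in the general style of early BASIC rather than taken from the original 1964 system:

10 REM ILLUSTRATIVE SKETCH IN THE STYLE OF EARLY LINE-NUMBERED BASIC
20 LET S = 0
30 FOR I = 1 TO 10 STEP 2
40 LET S = S + I
50 NEXT I
60 IF S = 25 THEN 90
70 PRINT "UNEXPECTED SUM", S
80 GOTO 100
90 PRINT "SUM OF THE FIRST FIVE ODD NUMBERS IS", S
100 END

The loop bounds and step are spelled out in words (FOR ... TO ... STEP), the loop is closed by a matching NEXT rather than a referenced line number, and the IF test reads as a plain condition followed by a branch target.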
Moreover, the lack of any sort of immediate feedback was a key problem; the machines of the era used batch processing and took a long time to complete a run of a program. While Kurtz was visiting MIT, John McCarthy suggested that time-sharing offered a solution; a single machine could divide up its processing time among many users, giving them the illusion of having a (slow) computer to themselves. Small programs would return results in a few seconds. This led to increasing interest in a system using time-sharing and a new language specifically for use by non-STEM students. Kemeny wrote the first version of BASIC. The acronym BASIC comes from the name of an unpublished paper by Thomas Kurtz. The new language was heavily patterned on FORTRAN II; statements were one-to-a-line, numbers were used to indicate the target of loops and branches, and many of the commands were similar or identical to Fortran. However, the syntax was changed wherever it could be improved. For instance, the difficult-to-remember DO loop was replaced by the much easier to remember FOR I = 1 TO 10, and the line number used in the DO was instead indicated by the NEXT I. Likewise, the cryptic IF statement of Fortran, whose syntax matched a particular instruction of the machine on which it was originally written, became the simpler IF I = 5 THEN GOTO 100. These changes made the language much less idiosyncratic while still having an overall structure and feel similar to the original FORTRAN. The project received a $300,000 grant from the National Science Foundation, which was used to purchase a GE-225 computer for processing, and a Datanet-30 realtime processor to handle the Teletype Model 33 teleprinters used for input and output. A team of a dozen undergraduates worked on the project for about a year, writing both the DTSS system and the BASIC compiler. The first version of the BASIC language was released on 1 May 1964. Initially, BASIC concentrated on supporting straightforward mathematical work, with matrix arithmetic support from its initial implementation as a batch language, and character string functionality being added by 1965. Usage in the university rapidly expanded, requiring the main CPU to be replaced by a GE-235, and still later by a GE-635. By the early 1970s there were hundreds of terminals connected to the machines at Dartmouth, some of them remotely. Wanting use of the language to become widespread, its designers made the compiler available free of charge. In the 1960s, software became a chargeable commodity; until then, it was provided without charge as a service with expensive computers, usually available only for lease. They also made it available to high schools in the Hanover, New Hampshire, area and regionally throughout New England on Teletype Model 33 and Model 35 teleprinter terminals connected to Dartmouth via dial-up phone lines, and they put considerable effort into promoting the language. In the following years, as other dialects of BASIC appeared, Kemeny and Kurtz's original BASIC dialect became known as Dartmouth BASIC. New Hampshire recognized the accomplishment in 2019 when it erected a highway historical marker in Hanover describing the creation of "the first user-friendly programming language". Spread on time-sharing services The emergence of BASIC took place as part of a wider movement toward time-sharing systems.
First conceptualized during the late 1950s, the idea became so dominant in the computer industry by the early 1960s that its proponents were speaking of a future in which users would "buy time on the computer much the same way that the average household buys power and water from utility companies". General Electric, having worked on the Dartmouth project, wrote their own underlying operating system and launched an online time-sharing system known as Mark I. It featured BASIC as one of its primary selling points. Other companies in the emerging field quickly followed suit; Tymshare introduced SUPER BASIC in 1968, CompuServe had a version on the DEC-10 at their launch in 1969, and by the early 1970s BASIC was largely universal on general-purpose mainframe computers. Even IBM eventually joined the club with the introduction of VS-BASIC in 1973. Although time-sharing services with BASIC were successful for a time, the widespread success predicted earlier was not to be. The emergence of minicomputers during the same period, and especially low-cost microcomputers in the mid-1970s, allowed anyone to purchase and run their own systems rather than buy online time which was typically billed at dollars per minute. Spread on minicomputers BASIC, by its very nature of being small, was naturally suited to porting to the minicomputer market, which was emerging at the same time as the time-sharing services. These machines had small main memory, perhaps as little as 4 KB in modern terminology, and lacked high-performance storage like hard drives that make compilers practical. On these systems, BASIC was normally implemented as an interpreter rather than a compiler due to its lower requirement for working memory. A particularly important example was HP Time-Shared BASIC, which, like the original Dartmouth system, used two computers working together to implement a time-sharing system. The first, a low-end machine in the HP 2100 series, was used to control user input and save and load their programs to tape or disk. The other, a high-end version of the same underlying machine, ran the programs and generated output. For a cost of about $100,000, one could own a machine capable of running between 16 and 32 users at the same time. The system, bundled as the HP 2000, was the first mini platform to offer time-sharing and was an immediate runaway success, catapulting HP to become the third-largest vendor in the minicomputer space, behind DEC and Data General (DG). DEC, the leader in the minicomputer space since the mid-1960s, had initially ignored BASIC. This was due to their work with RAND Corporation, who had purchased a PDP-6 to run their JOSS language, which was conceptually very similar to BASIC. This led DEC to introduce a smaller, cleaned up version of JOSS known as FOCAL, which they heavily promoted in the late 1960s. However, with timesharing systems widely offering BASIC, and all of their competition in the minicomputer space doing the same, DEC's customers were clamoring for BASIC. After management repeatedly ignored their pleas, David H. Ahl took it upon himself to buy a BASIC for the PDP-8, which was a major success in the education market. By the early 1970s, FOCAL and JOSS had been forgotten and BASIC had become almost universal in the minicomputer market. DEC would go on to introduce their updated version, BASIC-PLUS, for use on the RSTS/E time-sharing operating system. During this period a number of simple text-based games were written in BASIC, most notably Mike Mayfield's Star Trek. 
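These early games were typically short, line-numbered listings driven by PRINT, INPUT and GOTO. A minimal program in that spirit is sketched below; it is a generic number-guessing example for illustration, not code from Star Trek or any other published game, and the exact behaviour of RND varies between dialects:

10 REM GUESS THE NUMBER - ILLUSTRATIVE SKETCH, NOT A PUBLISHED LISTING
20 LET N = INT(10 * RND(1)) + 1
30 PRINT "I AM THINKING OF A NUMBER FROM 1 TO 10"
40 PRINT "YOUR GUESS";
50 INPUT G
60 IF G = N THEN 120
70 IF G < N THEN 100
80 PRINT "TOO HIGH"
90 GOTO 40
100 PRINT "TOO LOW"
110 GOTO 40
120 PRINT "CORRECT"
130 END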
David Ahl collected these, some ported from FOCAL, and published them in an educational newsletter he compiled. He later collected a number of these into book form, 101 BASIC Computer Games, published in 1973. During the same period, Ahl was involved in the creation of a small computer for education use, an early personal computer. When management refused to support the concept, Ahl left DEC in 1974 to found the seminal computer magazine, Creative Computing. The book remained popular, and was re-published on several occasions. Explosive growth: the home computer era The introduction of the first microcomputers in the mid-1970s was the start of explosive growth for BASIC. It had the advantage that it was fairly well known to the young designers and computer hobbyists who took an interest in microcomputers, many of whom had seen BASIC on minis or mainframes. Despite Dijkstra's famous judgement in 1975, "It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration", BASIC was one of the few languages that was both high-level enough to be usable by those without training and small enough to fit into the microcomputers of the day, making it the de facto standard programming language on early microcomputers. The first microcomputer version of BASIC was co-written by Bill Gates, Paul Allen and Monte Davidoff for their newly formed company, Micro-Soft. This was released by MITS in punch tape format for the Altair 8800 shortly after the machine itself, immediately cementing BASIC as the primary language of early microcomputers. Members of the Homebrew Computer Club began circulating copies of the program, causing Gates to write his Open Letter to Hobbyists, complaining about this early example of software piracy. Partially in response to Gates's letter, and partially to make an even smaller BASIC that would run usefully on 4 KB machines, Bob Albrecht urged Dennis Allison to write their own variation of the language. How to design and implement a stripped-down version of an interpreter for the BASIC language was covered in articles by Allison in the first three quarterly issues of the People's Computer Company newsletter published in 1975 and implementations with source code published in Dr. Dobb's Journal of Tiny BASIC Calisthenics & Orthodontia: Running Light Without Overbyte. This led to a wide variety of Tiny BASICs with added features or other improvements, with versions from Tom Pittman and Li-Chen Wang becoming particularly well known. Micro-Soft, by this time Microsoft, ported their interpreter for the MOS 6502, which quickly become one of the most popular microprocessors of the 8-bit era. When new microcomputers began to appear, notably the "1977 trinity" of the TRS-80, Commodore PET and Apple II, they either included a version of the MS code, or quickly introduced new models with it. Ohio Scientific's personal computers also joined this trend at that time. By 1978, MS BASIC was a de facto standard and practically every home computer of the 1980s included it in ROM. Upon boot, a BASIC interpreter in direct mode was presented. Commodore Business Machines includes Commodore BASIC, based on Microsoft BASIC. The Apple II and TRS-80 each have two versions of BASIC: a smaller introductory version with the initial releases of the machines and a Microsoft-based version introduced as interest in the platforms increased. 
As new companies entered the field, additional versions were added that subtly changed the BASIC family. The Atari 8-bit computers use the 8 KB Atari BASIC which is not derived from Microsoft BASIC. Sinclair BASIC was introduced in 1980 with the Sinclair ZX80, and was later extended for the Sinclair ZX81 and the Sinclair ZX Spectrum. The BBC published BBC BASIC, developed by Acorn Computers, incorporates extra structured programming keywords and floating-point features. As the popularity of BASIC grew in this period, computer magazines published complete source code in BASIC for video games, utilities, and other programs. Given BASIC's straightforward nature, it was a simple matter to type in the code from the magazine and execute the program. Different magazines were published featuring programs for specific computers, though some BASIC programs were considered universal and could be used in machines running any variant of BASIC (sometimes with minor adaptations). Many books of type-in programs were also available, and in particular, Ahl published versions of the original 101 BASIC games converted into the Microsoft dialect and published it from Creative Computing as BASIC Computer Games. This book, and its sequels, provided hundreds of ready-to-go programs that could be easily converted to practically any BASIC-running platform. The book reached the stores in 1978, just as the home computer market was starting off, and it became the first million-selling computer book. Later packages, such as Learn to Program BASIC would also have gaming as an introductory focus. On the business-focused CP/M computers which soon became widespread in small business environments, Microsoft BASIC (MBASIC) was one of the leading applications. In 1978, David Lien published the first edition of The BASIC Handbook: An Encyclopedia of the BASIC Computer Language, documenting keywords across over 78 different computers. By 1981, the second edition documented keywords from over 250 different computers, showcasing the explosive growth of the microcomputer era. IBM PC and compatibles When IBM was designing the IBM PC, they followed the paradigm of existing home computers in having a built-in BASIC interpreter. They sourced this from Microsoft – IBM Cassette BASIC – but Microsoft also produced several other versions of BASIC for MS-DOS/PC DOS including IBM Disk BASIC (BASIC D), IBM BASICA (BASIC A), GW-BASIC (a BASICA-compatible version that did not need IBM's ROM) and QBasic, all typically bundled with the machine. In addition they produced the Microsoft QuickBASIC Compiler (1985) for power users and hobbyists, and the Microsoft BASIC Professional Development System (PDS) for professional programmers. Turbo Pascal-publisher Borland published Turbo Basic 1.0 in 1985 (successor versions were marketed under the name PowerBASIC). On Unix-like systems, specialized implementations were created such as XBasic and X11-Basic. XBasic was ported to Microsoft Windows as XBLite, and cross-platform variants such as SmallBasic, yabasic, Bywater BASIC, nuBasic, MyBasic, Logic Basic, Liberty BASIC, and wxBasic emerged. FutureBASIC and Chipmunk Basic meanwhile targeted the Apple Macintosh, while yab is a version of yaBasic optimized for BeOS, ZETA and Haiku. These later variations introduced many extensions, such as improved string manipulation and graphics support, access to the file system and additional data types. 
More important were the facilities for structured programming, including additional control structures and proper subroutines supporting local variables. The addition of an integrated development environment (IDE) and electronic Help files made the products easier to work with and supported learning tools and school curriculum. In 1989, Microsoft Press published Learn BASIC Now, a book-and-software system designed to teach BASIC programming to self-taught learners who were using IBM-PC compatible systems and the Apple Macintosh. Learn BASIC Now included software disks containing the Microsoft QuickBASIC Interpreter and a programming tutorial written by Michael Halvorson and David Rygmyr. Learning systems like Learn BASIC Now popularized structured BASIC and helped QuickBASIC reach an installed base of four million active users. By the late 1980s, many users were using pre-made applications written by others rather than learning programming themselves, and professional developers had a wide range of advanced languages available on small computers. C and later C++ became the languages of choice for professional "shrink wrap" application development. A niche that BASIC continued to fill was for hobbyist video game development, as game creation systems and readily available game engines were still in their infancy. The Atari ST had STOS BASIC while the Amiga had AMOS BASIC for this purpose. Microsoft first exhibited BASIC for game development with DONKEY.BAS for GW-BASIC, and later GORILLA.BAS and NIBBLES.BAS for QuickBASIC. QBasic maintained an active game development community, which helped later spawn the QB64 and FreeBASIC implementations. An early example of this market is the QBasic software package Microsoft Game Shop (1990), a hobbyist-inspired release that included six "arcade-style" games that were easily customizable in QBasic. In 2013, a game written in QBasic and compiled with QB64 for modern computers entitled Black Annex was released on Steam. Blitz Basic, Dark Basic, SdlBasic, Super Game System Basic, PlayBASIC, CoolBasic, AllegroBASIC, ethosBASIC, GLBasic and Basic4GL further filled this demand, right up to the modern RCBasic, NaaLaa, AppGameKit, Monkey 2, and Cerberus-X. Visual Basic In 1991, Microsoft introduced Visual Basic, an evolutionary development of QuickBASIC. It included constructs from that language such as block-structured control statements, parameterized subroutines and optional static typing as well as object-oriented constructs from other languages such as "With" and "For Each". The language retained some compatibility with its predecessors, such as the Dim keyword for declarations, "Gosub"/Return statements and optional line numbers which could be used to locate errors. An important driver for the development of Visual Basic was as the new macro language for Microsoft Excel, a spreadsheet program. To the surprise of many at Microsoft who still initially marketed it as a language for hobbyists, the language came into widespread use for small custom business applications shortly after the release of VB version 3.0, which is widely considered the first relatively stable version. Microsoft also spun it off as Visual Basic for Applications and Embedded Visual Basic. 
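The structured facilities described above replaced line numbers and GOTO with block constructs and named procedures that take parameters. A small sketch in the QuickBASIC/Visual Basic style shows the contrast with the earlier line-numbered listings; it is illustrative only, and syntax details vary between dialects:

' Structured-dialect sketch in the QuickBASIC style; details vary between dialects
DECLARE FUNCTION Square (x)

DIM i AS INTEGER
i = 1
DO                      ' block loop instead of a GOTO back to a line number
    PRINT i, Square(i)
    i = i + 1
LOOP UNTIL i > 5
END

FUNCTION Square (x)
    ' a named function with its own parameter, rather than a GOSUB target
    Square = x * x
END FUNCTION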
While many advanced programmers still scoffed at its use, VB met the needs of small businesses efficiently as by that time, computers running Windows 3.1 had become fast enough that many business-related processes could be completed "in the blink of an eye" even using a "slow" language, as long as large amounts of data were not involved. Many small business owners found they could create their own small, yet useful applications in a few evenings to meet their own specialized needs. Eventually, during the lengthy lifetime of VB3, knowledge of Visual Basic had become a marketable job skill. Microsoft also produced VBScript in 1996 and Visual Basic .NET in 2001. The latter has essentially the same power as C# and Java but with syntax that reflects the original Basic language, and also features some cross-platform capability through implementations such as Mono-Basic. The IDE, with its event-driven GUI builder, was also influential on other rapid application development tools, most notably Borland Software's Delphi for Object Pascal and its own descendants such as Lazarus. Mainstream support for the final version 6.0 of the original Visual Basic ended on March 31, 2005, followed by extended support in March 2008. Owing to its persistent remaining popularity, third-party attempts to further support it exist. On February 2, 2017, Microsoft announced that development on VB.NET would no longer be in parallel with that of C#, and on March 11, 2020, it was announced that evolution of the VB.NET language had also concluded. Even so, the language was still supported. Post-1990 versions and dialects Many other BASIC dialects have also sprung up since 1990, including the open source QB64 and FreeBASIC, inspired by QBasic, and the Visual Basic-styled RapidQ, HBasic, Basic For Qt and Gambas. Modern commercial incarnations include PureBasic, PowerBASIC, Xojo, Monkey X and True BASIC (the direct successor to Dartmouth BASIC from a company controlled by Kurtz). Several web-based simple BASIC interpreters also now exist, including Microsoft's Small Basic and Google's wwwBASIC. A number of compilers also exist that convert BASIC into JavaScript. such as NS Basic. Building from earlier efforts such as Mobile Basic, many dialects are now available for smartphones and tablets. On game consoles, an application for the Nintendo 3DS and Nintendo DSi called Petit Computer allows for programming in a slightly modified version of BASIC with DS button support. A version has also been released for Nintendo Switch, which has also been supplied a version of the Fuze Code System, a BASIC variant first implemented as a custom Raspberry Pi machine. Previously BASIC was made available on consoles as Family BASIC (for the Nintendo Famicom) and PSX Chipmunk Basic (for the original PlayStation), while yabasic was ported to the PlayStation 2 and FreeBASIC to the original Xbox. Calculators Variants of BASIC are available on graphing and otherwise programmable calculators made by Texas Instruments (TI-BASIC), HP (HP BASIC), Casio (Casio BASIC), and others. Windows command-line QBasic, a version of Microsoft QuickBASIC without the linker to make EXE files, is present in the Windows NT and DOS-Windows 95 streams of operating systems and can be obtained for more recent releases like Windows 7 which do not have them. Prior to DOS 5, the Basic interpreter was GW-Basic. 
QuickBasic is part of a series of three languages issued by Microsoft for the home and office power user and small-scale professional development; QuickC and QuickPascal are the other two. For Windows 95 and 98, which do not have QBasic installed by default, it can be copied from the installation disc, which will have a set of directories for old and optional software; other missing commands such as Exe2Bin are in these same directories. Other The various Microsoft, Lotus, and Corel office suites and related products are programmable with Visual Basic in one form or another, including LotusScript, which is very similar to VBA 6. The Host Explorer terminal emulator uses WWB as a macro language; more recently, the program and the suite in which it is contained are programmable in an in-house Basic variant known as Hummingbird Basic. The VBScript variant is used for programming web content, Outlook 97, Internet Explorer, and the Windows Script Host. WSH also has a Visual Basic for Applications (VBA) engine installed as the third of the default engines along with VBScript and JScript, in addition to the numerous proprietary or open source engines which can be installed, like PerlScript, a couple of Rexx-based engines, Python, Ruby, Tcl, Delphi, XLNT, PHP, and others; this means that the two versions of Basic can be used along with the other mentioned languages, as well as LotusScript, in a WSF file, through the component object model, and other WSH and VBA constructions. VBScript is one of the languages that can be accessed by the 4DOS, 4NT, and Take Command enhanced shells. SaxBasic and WWB are also very similar to the Visual Basic line of Basic implementations. The pre-Office 97 macro language for Microsoft Word is known as WordBASIC. Excel 4 and 5 use Visual Basic itself as a macro language. Chipmunk Basic, an old-school interpreter similar to BASICs of the 1970s, is available for Linux, Microsoft Windows and macOS. Legacy The ubiquity of BASIC interpreters on personal computers was such that textbooks once included simple "Try It In BASIC" exercises that encouraged students to experiment with mathematical and computational concepts on classroom or home computers. Popular computer magazines of the day typically included type-in programs. Futurist and sci-fi writer David Brin mourned the loss of ubiquitous BASIC in a 2006 Salon article, as have others who first used computers during this era. In turn, the article prompted Microsoft to develop and release Small Basic; it also inspired similar projects like Basic-256 and the web-based Quite Basic. The pedagogical use of BASIC has been followed by other languages, such as Pascal, Java and particularly Python. Dartmouth College celebrated the 50th anniversary of the BASIC language with a day of events on April 30, 2014. A short documentary film was produced for the event. Syntax Typical BASIC keywords Data manipulation LET assigns a value (which may be the result of an expression) to a variable. In most dialects of BASIC, LET is optional, and a line with no other identifiable keyword will assume the keyword to be LET. DATA holds a list of values which are assigned sequentially using the READ command. READ reads a value from a DATA statement and assigns it to a variable. An internal pointer keeps track of the last DATA element that was read and moves forward one position with each READ. Most dialects allow multiple variables as parameters, reading several values in a single operation. RESTORE resets the internal pointer to the first DATA statement, allowing the program to begin READing from the first value. Many dialects allow an optional line number or ordinal value to allow the pointer to be reset to a selected location. DIM sets up an array.
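A minimal sketch of these data-manipulation keywords working together, in the unstructured line-numbered style of early dialects (the values and variable names are illustrative, and exact behaviour can vary slightly between dialects):
10 DIM P(3)
20 FOR I = 1 TO 3
30 READ P(I)
40 NEXT I
50 PRINT P(1); P(2); P(3)
60 RESTORE
70 READ A
80 PRINT A
90 DATA 10, 20, 30
100 END
Line 60 resets the internal DATA pointer, so the READ on line 70 fetches the first value (10) a second time.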
Program flow control IF ... THEN ... {ELSE} used to perform comparisons or make decisions. Early dialects only allowed a line number after the THEN, but later versions allowed any valid statement to follow. ELSE was not widely supported, especially in earlier versions. FOR ... TO ... {STEP} ... NEXT repeat a section of code a given number of times. A variable that acts as a counter, the "index", is available within the loop. WHILE ... WEND and REPEAT ... UNTIL repeat a section of code while the specified condition is true. The condition may be evaluated before each iteration of the loop, or after. Both of these commands are found mostly in later dialects. DO ... LOOP {WHILE} or {UNTIL} repeat a section of code indefinitely or while/until the specified condition is true. The condition may be evaluated before each iteration of the loop, or after. Similar to WHILE, these keywords are mostly found in later dialects. GOTO jumps to a numbered or labelled line in the program. Most dialects also allowed the two-word form GO TO. GOSUB ... RETURN jumps to a numbered or labelled line, executes the code it finds there until it reaches a RETURN command, at which point it jumps back to the statement following the GOSUB, either after a colon, or on the next line. This is used to implement subroutines, as shown in the short sketch below. ON ... GOTO/GOSUB chooses where to jump based on the specified conditions. See Switch statement for other forms. DEF FN a pair of keywords introduced in the early 1960s to define functions. The original BASIC functions were modelled on FORTRAN single-line functions. BASIC functions were one expression with variable arguments, rather than subroutines, with a syntax on the model of DEF FND(x) = x*x at the beginning of a program. Function names were originally restricted to FN, plus one letter, i.e., FNA, FNB ...
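A compact sketch combining FOR...NEXT, GOSUB...RETURN and DEF FN; the line numbers and names are illustrative, the function name follows the single-letter FN restriction noted above, and DEF FN support varies between dialects:
10 DEF FNS(X) = X * X
20 FOR I = 1 TO 5
30 GOSUB 100
40 NEXT I
50 END
100 PRINT I, FNS(I)
110 RETURN
Each pass through the loop jumps to the subroutine at line 100, which prints the counter and its square, then RETURNs to the statement after the GOSUB.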
Input and output LIST displays the full source code of the current program. PRINT displays a message on the screen or other output device. INPUT asks the user to enter the value of a variable. The statement may include a prompt message. TAB used with PRINT to set the position where the next character will be shown on the screen or printed on paper. AT is an alternative form. SPC prints out a number of space characters. Similar in concept to TAB but moves by a number of additional spaces from the current column rather than moving to a specified column. Mathematical functions ABS Absolute value ATN Arctangent (result in radians) COS Cosine (argument in radians) EXP Exponential function INT Integer part (typically floor function) LOG Natural logarithm RND Random number generation SIN Sine (argument in radians) SQR Square root TAN Tangent (argument in radians) Miscellaneous REM holds a programmer's comment or REMark; often used to give a title to the program and to help identify the purpose of a given section of code. USR ("User Serviceable Routine") transfers program control to a machine language subroutine, usually entered as an alphanumeric string or in a list of DATA statements. CALL alternative form of USR found in some dialects. It does not require an artificial parameter to complete the function-like syntax of USR, and has a clearly defined method of calling different routines in memory. TRON / TROFF turns on display of each line number as it is run ("TRace ON"). This was useful for debugging or correcting problems in a program. TROFF turns it back off again. ASM some compilers such as FreeBASIC, PureBasic, and PowerBASIC also support inline assembly language, allowing the programmer to intermix high-level and low-level code, typically prefixed with "ASM" or "!" statements. Data types and variables Minimal versions of BASIC had only integer variables and one- or two-letter variable names, which minimized requirements of limited and expensive memory (RAM). More powerful versions had floating-point arithmetic, and variables could be labelled with names six or more characters long. There were some problems and restrictions in early implementations; for example, Applesoft BASIC allowed variable names to be several characters long, but only the first two were significant, thus it was possible to inadvertently write a program with variables "LOSS" and "LOAN", which would be treated as being the same; assigning a value to "LOAN" would silently overwrite the value intended as "LOSS". Keywords could not be used in variable names in many early BASICs; "SCORE" would be interpreted as "SC" OR "E", where OR was a keyword. String variables are usually distinguished in many microcomputer dialects by having $ suffixed to their name as a sigil, and values are often identified as strings by being delimited by "double quotation marks". Arrays in BASIC could contain integers, floating point or string variables. Some dialects of BASIC supported matrices and matrix operations, which can be used to solve sets of simultaneous linear algebraic equations. These dialects would directly support matrix operations such as assignment, addition, multiplication (of compatible matrix types), and evaluation of a determinant. Many microcomputer BASICs did not support this data type; matrix operations were still possible, but had to be programmed explicitly on array elements. Examples Unstructured BASIC New BASIC programmers on a home computer might start with a simple program, perhaps using the language's PRINT statement to display a message on the screen; a well-known and often-replicated example is Kernighan and Ritchie's "Hello, World!" program: 10 PRINT "Hello, World!" 20 END An infinite loop could be used to fill the display with the message: 10 PRINT "Hello, World!" 20 GOTO 10 Note that the END statement is optional and has no action in most dialects of BASIC. It was not always included, as is the case in this example. This same program can be modified to print a fixed number of messages using the common FOR...NEXT statement: 10 LET N=10 20 FOR I=1 TO N 30 PRINT "Hello, World!" 40 NEXT I Most home computer BASIC versions, such as MSX BASIC and GW-BASIC, supported simple data types, loop cycles, and arrays. The following example is written for GW-BASIC, but will work in most versions of BASIC with minimal changes: 10 INPUT "What is your name: "; U$ 20 PRINT "Hello "; U$ 30 INPUT "How many stars do you want: "; N 40 S$ = "" 50 FOR I = 1 TO N 60 S$ = S$ + "*" 70 NEXT I 80 PRINT S$ 90 INPUT "Do you want more stars? 
"; A$ 100 IF LEN(A$) = 0 THEN GOTO 90 110 A$ = LEFT$(A$, 1) 120 IF A$ = "Y" OR A$ = "y" THEN GOTO 30 130 PRINT "Goodbye "; U$ 140 END The resulting dialog might resemble: What is your name: Mike Hello Mike How many stars do you want: 7 ******* Do you want more stars? yes How many stars do you want: 3 *** Do you want more stars? no Goodbye Mike The original Dartmouth Basic was unusual in having a matrix keyword, MAT. Although not implemented by most later microprocessor derivatives, it is used in this example from the 1968 manual which averages the numbers that are input: 5 LET S = 0 10 MAT INPUT V 20 LET N = NUM 30 IF N = 0 THEN 99 40 FOR I = 1 TO N 45 LET S = S + V(I) 50 NEXT I 60 PRINT S/N 70 GO TO 5 99 END Structured BASIC Second-generation BASICs (for example, VAX Basic, SuperBASIC, True BASIC, QuickBASIC, BBC BASIC, Pick BASIC, PowerBASIC, Liberty BASIC, QB64 and (arguably) COMAL) introduced a number of features into the language, primarily related to structured and procedure-oriented programming. Usually, line numbering is omitted from the language and replaced with labels (for GOTO) and procedures to encourage easier and more flexible design. In addition keywords and structures to support repetition, selection and procedures with local variables were introduced. The following example is in Microsoft QuickBASIC: REM QuickBASIC example REM Forward declaration - allows the main code to call a REM subroutine that is defined later in the source code DECLARE SUB PrintSomeStars (StarCount!) REM Main program follows INPUT "What is your name: ", UserName$ PRINT "Hello "; UserName$ DO INPUT "How many stars do you want: ", NumStars CALL PrintSomeStars(NumStars) DO INPUT "Do you want more stars? ", Answer$ LOOP UNTIL Answer$ <> "" Answer$ = LEFT$(Answer$, 1) LOOP WHILE UCASE$(Answer$) = "Y" PRINT "Goodbye "; UserName$ END REM subroutine definition SUB PrintSomeStars (StarCount) REM This procedure uses a local variable called Stars$ Stars$ = STRING$(StarCount, "*") PRINT Stars$ END SUB Object-oriented BASIC Third-generation BASIC dialects such as Visual Basic, Xojo, Gambas, StarOffice Basic, BlitzMax and PureBasic introduced features to support object-oriented and event-driven programming paradigm. Most built-in procedures and functions are now represented as methods of standard objects rather than operators. Also, the operating system became increasingly accessible to the BASIC language. The following example is in Visual Basic .NET: Public Module StarsProgram Private Function Ask(prompt As String) As String Console.Write(prompt) Return Console.ReadLine() End Function Public Sub Main() Dim userName = Ask("What is your name: ") Console.WriteLine("Hello {0}", userName) Dim answer As String Do Dim numStars = CInt(Ask("How many stars do you want: ")) Dim stars As New String("*"c, numStars) Console.WriteLine(stars) Do answer = Ask("Do you want more stars? 
") Loop Until answer <> "" Loop While answer.StartsWith("Y", StringComparison.OrdinalIgnoreCase) Console.WriteLine("Goodbye {0}", userName) End Sub End Module Standards ANSI/ISO/IEC/ECMA Standard for Minimal BASIC: ANSI X3.60-1978 "For minimal BASIC" ISO/IEC 6373:1984 "Data Processing—Programming Languages—Minimal BASIC" ECMA-55 Minimal BASIC (withdrawn, similar to ANSI X3.60-1978) ANSI/ISO/IEC/ECMA Standard for Full BASIC: ANSI X3.113-1987 "Programming Languages Full BASIC" INCITS/ISO/IEC 10279-1991 (R2005) "Information Technology – Programming Languages – Full BASIC" ECMA-116 BASIC (withdrawn, similar to ANSI X3.113-1987) ANSI/ISO/IEC Addendum Defining Modules: ANSI X3.113 Interpretations-1992 "BASIC Technical Information Bulletin # 1 Interpretations of ANSI 03.113-1987" ISO/IEC 10279:1991/ Amd 1:1994 "Modules and Single Character Input Enhancement" Compilers and interpreters
Technology
"Historical" languages
null
4024
https://en.wikipedia.org/wiki/Butterfly%20effect
Butterfly effect
In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state. The term is closely associated with the work of the mathematician and meteorologist Edward Norton Lorenz. He noted that the butterfly effect is derived from the example of the details of a tornado (the exact time of formation, the exact path taken) being influenced by minor perturbations such as a distant butterfly flapping its wings several weeks earlier. Lorenz originally used a seagull causing a storm but was persuaded to make it more poetic with the use of a butterfly and tornado by 1972. He discovered the effect when he observed runs of his weather model with initial condition data that were rounded in a seemingly inconsequential manner. He noted that the weather model would fail to reproduce the results of runs with the unrounded initial condition data. A very small change in initial conditions had created a significantly different outcome. The idea that small causes may have large effects in weather was earlier acknowledged by the French mathematician and physicist Henri Poincaré. The American mathematician and philosopher Norbert Wiener also contributed to this theory. Lorenz's work placed the concept of instability of the Earth's atmosphere onto a quantitative base and linked the concept of instability to the properties of large classes of dynamic systems which are undergoing nonlinear dynamics and deterministic chaos. The concept of the butterfly effect has since been used outside the context of weather science as a broad term for any situation where a small change is supposed to be the cause of larger consequences. History In The Vocation of Man (1800), Johann Gottlieb Fichte says "you could not remove a single grain of sand from its place without thereby ... changing something throughout all parts of the immeasurable whole". Chaos theory and the sensitive dependence on initial conditions were described in numerous forms of literature. This is evidenced by the case of the three-body problem by Poincaré in 1890. He later proposed that such phenomena could be common, for example, in meteorology. In 1898, Jacques Hadamard noted general divergence of trajectories in spaces of negative curvature. Pierre Duhem discussed the possible general significance of this in 1908. In 1950, Alan Turing noted: "The displacement of a single electron by a billionth of a centimetre at one moment might make the difference between a man being killed by an avalanche a year later, or escaping." The idea that the death of one butterfly could eventually have a far-reaching ripple effect on subsequent historical events made its earliest known appearance in "A Sound of Thunder", a 1952 short story by Ray Bradbury. "A Sound of Thunder" features time travel. More precisely, though, almost the exact idea and the exact phrasing, of a tiny insect's wing affecting the entire atmosphere's winds, were published in a children's book which became extremely successful and well-known globally in 1962, the year before Lorenz published. In 1961, Lorenz was running a numerical computer model to redo a weather prediction from the middle of the previous run as a shortcut. He entered the initial condition 0.506 from the printout instead of entering the full precision 0.506127 value. The result was a completely different weather scenario.
In 1963, Lorenz published a theoretical study of this effect in a highly cited, seminal paper called Deterministic Nonperiodic Flow (the calculations were performed on a Royal McBee LGP-30 computer). Following proposals from colleagues, in later speeches and papers, Lorenz used the more poetic butterfly. According to Lorenz, when he failed to provide a title for a talk he was to present at the 139th meeting of the American Association for the Advancement of Science in 1972, Philip Merilees concocted "Does the flap of a butterfly's wings in Brazil set off a tornado in Texas?" as a title. Although a butterfly flapping its wings has remained constant in the expression of this concept, the location of the butterfly, the consequences, and the location of the consequences have varied widely. The phrase refers to the effect of a butterfly's wings creating tiny changes in the atmosphere that may ultimately alter the path of a tornado or delay, accelerate, or even prevent the occurrence of a tornado in another location. The butterfly does not power or directly create the tornado, but the term is intended to imply that the flap of the butterfly's wings can cause the tornado: in the sense that the flap of the wings is a part of the initial conditions of an interconnected complex web; one set of conditions leads to a tornado, while the other set of conditions doesn't. The flapping wing creates a small change in the initial condition of the system, which cascades to large-scale alterations of events (compare: domino effect). Had the butterfly not flapped its wings, the trajectory of the system might have been vastly different, but it is also equally possible that the set of conditions without the butterfly flapping its wings is the set that leads to a tornado. The butterfly effect presents an obvious challenge to prediction, since initial conditions for a system such as the weather can never be known to complete accuracy. This problem motivated the development of ensemble forecasting, in which a number of forecasts are made from perturbed initial conditions. Some scientists have since argued that the weather system is not as sensitive to initial conditions as previously believed. David Orrell argues that the major contributor to weather forecast error is model error, with sensitivity to initial conditions playing a relatively small role. Stephen Wolfram also notes that the Lorenz equations are highly simplified and do not contain terms that represent viscous effects; he believes that these terms would tend to damp out small perturbations. Recent studies using generalized Lorenz models that included additional dissipative terms and nonlinearity suggested that a larger heating parameter is required for the onset of chaos. While the "butterfly effect" is often explained as being synonymous with sensitive dependence on initial conditions of the kind described by Lorenz in his 1963 paper (and previously observed by Poincaré), the butterfly metaphor was originally applied to work he published in 1969 which took the idea a step further. Lorenz proposed a mathematical model for how tiny motions in the atmosphere scale up to affect larger systems. He found that the systems in that model could only be predicted up to a specific point in the future, and beyond that, reducing the error in the initial conditions would not increase the predictability (as long as the error is not zero).
This demonstrated that a deterministic system could be "observationally indistinguishable" from a non-deterministic one in terms of predictability. Recent re-examinations of this paper suggest that it offered a significant challenge to the idea that our universe is deterministic, comparable to the challenges offered by quantum physics. In the book entitled The Essence of Chaos published in 1993, Lorenz defined the butterfly effect as: "The phenomenon that a small alteration in the state of a dynamical system will cause subsequent states to differ greatly from the states that would have followed without the alteration." This feature is the same as sensitive dependence of solutions on initial conditions (SDIC). In the same book, Lorenz used the activity of skiing to develop an idealized skiing model for revealing the sensitivity of time-varying paths to initial positions. A predictability horizon is determined before the onset of SDIC. Illustrations A standard illustration of the butterfly effect in the Lorenz attractor plots two segments of the three-dimensional evolution of two trajectories (one in blue, and the other in yellow) for the same period of time, 0 ≤ t ≤ 30, starting at two initial points that differ by only 10−5 in the x-coordinate. Initially, the two trajectories seem coincident, as indicated by the small difference between the z coordinate of the blue and yellow trajectories, but for t > 23 the difference is as large as the value of the trajectory itself, and the two trajectories are no longer coincident at t = 30. An animation of the Lorenz attractor shows the continuous evolution. Theory and mathematical definition Recurrence, the approximate return of a system toward its initial conditions, together with sensitive dependence on initial conditions, are the two main ingredients for chaotic motion. They have the practical consequence of making complex systems, such as the weather, difficult to predict past a certain time range (approximately a week in the case of weather) since it is impossible to measure the starting atmospheric conditions completely accurately. A dynamical system displays sensitive dependence on initial conditions if points arbitrarily close together separate over time at an exponential rate. The definition is not topological, but essentially metrical. Lorenz defined sensitive dependence as follows: The property characterizing an orbit (i.e., a solution) if most other orbits that pass close to it at some point do not remain close to it as time advances. If M is the state space for the map f^t, then f^t displays sensitive dependence to initial conditions if for any x in M and any δ > 0, there is a y in M with distance d(x, y) such that 0 < d(x, y) < δ and such that d(f^τ(x), f^τ(y)) > e^(aτ) d(x, y) for some positive parameter a and some time τ. The definition does not require that all points from a neighborhood separate from the base point x, but it requires one positive Lyapunov exponent. In addition to a positive Lyapunov exponent, boundedness is another major feature within chaotic systems. The simplest mathematical framework exhibiting sensitive dependence on initial conditions is provided by a particular parametrization of the logistic map: x_{n+1} = 4 x_n (1 − x_n), with 0 ≤ x_0 ≤ 1, which, unlike most chaotic maps, has a closed-form solution: x_n = sin^2(2^n θ π), where the initial condition parameter θ is given by θ = (1/π) arcsin(√x_0). For rational θ, x_n maps into a periodic sequence after a finite number of iterations. But almost all θ are irrational, and, for irrational θ, x_n never repeats itself – it is non-periodic. This solution equation clearly demonstrates the two key features of chaos – stretching and folding: the factor 2^n shows the exponential growth of stretching, which results in sensitive dependence on initial conditions (the butterfly effect), while the squared sine function keeps x_n folded within the range [0, 1].
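A rough numerical sketch of this sensitivity, written in a simple line-numbered BASIC style consistent with the examples earlier in this collection; the two starting values are arbitrary and chosen only to be close together, and the double-precision suffixes follow Microsoft-style dialects:
10 REM Iterate the logistic map x = 4 * x * (1 - x) from two nearby starting points
20 X1# = 0.506
30 X2# = 0.506127
40 FOR N = 1 TO 40
50 X1# = 4 * X1# * (1 - X1#)
60 X2# = 4 * X2# * (1 - X2#)
70 IF N MOD 5 = 0 THEN PRINT N, ABS(X1# - X2#)
80 NEXT N
90 END
The printed gap grows on average by roughly a factor of two per iteration (the map's Lyapunov exponent is ln 2) until it is of order one, after which the two sequences are effectively unrelated.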
In physical systems In weather Overview The butterfly effect is most familiar in terms of weather; it can easily be demonstrated in standard weather prediction models, for example. The climate scientists James Annan and William Connolley explain that chaos is important in the development of weather prediction methods; models are sensitive to initial conditions. They add the caveat: "Of course the existence of an unknown butterfly flapping its wings has no direct bearing on weather forecasts, since it will take far too long for such a small perturbation to grow to a significant size, and we have many more immediate uncertainties to worry about. So the direct impact of this phenomenon on weather prediction is often somewhat wrong." Differentiating types of butterfly effects The concept of the butterfly effect encompasses several phenomena. The two kinds of butterfly effects, including the sensitive dependence on initial conditions, and the ability of a tiny perturbation to create an organized circulation at large distances, are not exactly the same. In Palmer et al., a new type of butterfly effect is introduced, highlighting the potential impact of small-scale processes on finite predictability within the Lorenz 1969 model. Additionally, the identification of ill-conditioned aspects of the Lorenz 1969 model points to a practical form of finite predictability. These two distinct mechanisms suggesting finite predictability in the Lorenz 1969 model are collectively referred to as the third kind of butterfly effect. Other authors have considered Palmer et al.'s suggestions and have aimed to present their perspective without raising specific contentions. The third kind of butterfly effect with finite predictability, as discussed above, was primarily proposed based on a convergent geometric series, known as Lorenz's and Lilly's formulas. Ongoing discussions are addressing the validity of these two formulas for estimating predictability limits. A comparison of the two kinds of butterfly effects and the third kind of butterfly effect has been documented. In recent studies, it was reported that both meteorological and non-meteorological linear models have shown that instability plays a role in producing a butterfly effect, which is characterized by brief but significant exponential growth resulting from a small disturbance. Recent debates on butterfly effects The first kind of butterfly effect (BE1), known as SDIC (Sensitive Dependence on Initial Conditions), is widely recognized and demonstrated through idealized chaotic models. However, opinions differ regarding the second kind of butterfly effect, specifically the impact of a butterfly flapping its wings on tornado formation, as indicated in two 2024 articles.
In more recent discussions published by Physics Today, it is acknowledged that the second kind of butterfly effect (BE2) has never been rigorously verified using a realistic weather model. While the studies suggest that BE2 is unlikely in the real atmosphere, its invalidity in this context does not negate the applicability of BE1 in other areas, such as pandemics or historical events. For the third kind of butterfly effect, the limited predictability within the Lorenz 1969 model is explained by scale interactions in one article and by system ill-conditioning in another more recent study. Finite predictability in chaotic systems According to Lighthill (1986), the presence of SDIC (commonly known as the butterfly effect) implies that chaotic systems have a finite predictability limit. In a literature review, it was found that Lorenz's perspective on the predictability limit can be condensed into the following statement: (A). The Lorenz 1963 model qualitatively revealed the essence of a finite predictability within a chaotic system such as the atmosphere. However, it did not determine a precise limit for the predictability of the atmosphere. (B). In the 1960s, the two-week predictability limit was originally estimated based on a doubling time of five days in real-world models. Since then, this finding has been documented in Charney et al. (1966) and has become a consensus. Recently, a short video has been created to present Lorenz's perspective on the predictability limit. A recent study refers to the two-week predictability limit, initially calculated in the 1960s with the Mintz-Arakawa model's five-day doubling time, as the "Predictability Limit Hypothesis." Inspired by Moore's Law, this term acknowledges the collaborative contributions of Lorenz, Mintz, and Arakawa under Charney's leadership. The hypothesis supports the investigation into extended-range predictions using both partial differential equation (PDE)-based physics methods and Artificial Intelligence (AI) techniques. Revised perspectives on chaotic and non-chaotic systems By revealing coexisting chaotic and non-chaotic attractors within Lorenz models, Shen and his colleagues proposed a revised view that "weather possesses chaos and order", in contrast to the conventional view of "weather is chaotic". As a result, sensitive dependence on initial conditions (SDIC) does not always appear. Namely, SDIC appears when two orbits (i.e., solutions) end up on the chaotic attractor; it does not appear when two orbits move toward the same point attractor. The motion of a double pendulum provides an analogy: for large angles of swing the motion of the pendulum is often chaotic, while for small angles of swing, motions are non-chaotic. Multistability is defined when a system (e.g., the double pendulum system) contains more than one bounded attractor, with the attractor that is reached depending only on initial conditions. Multistability has been illustrated using a kayaking example, in which the appearance of strong currents and a stagnant area suggests instability and local stability, respectively. As a result, when two kayaks move along strong currents, their paths display SDIC. On the other hand, when two kayaks move into a stagnant area, they become trapped, showing no typical SDIC (although a chaotic transient may occur). Such features of SDIC or no SDIC suggest two types of solutions and illustrate the nature of multistability.
By taking into consideration time-varying multistability that is associated with the modulation of large-scale processes (e.g., seasonal forcing) and aggregated feedback of small-scale processes (e.g., convection), the above revised view is refined as follows: "The atmosphere possesses chaos and order; it includes, as examples, emerging organized systems (such as tornadoes) and time varying forcing from recurrent seasons." In quantum mechanics The potential for sensitive dependence on initial conditions (the butterfly effect) has been studied in a number of cases in semiclassical and quantum physics, including atoms in strong fields and the anisotropic Kepler problem. Some authors have argued that extreme (exponential) dependence on initial conditions is not expected in pure quantum treatments; however, the sensitive dependence on initial conditions demonstrated in classical motion is included in the semiclassical treatments developed by Martin Gutzwiller and John B. Delos and co-workers. The random matrix theory and simulations with quantum computers prove that some versions of the butterfly effect in quantum mechanics do not exist. Other authors suggest that the butterfly effect can be observed in quantum systems. Zbyszek P. Karkuszewski et al. consider the time evolution of quantum systems which have slightly different Hamiltonians. They investigate the level of sensitivity of quantum systems to small changes in their given Hamiltonians. David Poulin et al. presented a quantum algorithm to measure fidelity decay, which "measures the rate at which identical initial states diverge when subjected to slightly different dynamics". They consider fidelity decay to be "the closest quantum analog to the (purely classical) butterfly effect". Whereas the classical butterfly effect considers the effect of a small change in the position and/or velocity of an object in a given Hamiltonian system, the quantum butterfly effect considers the effect of a small change in the Hamiltonian system with a given initial position and velocity. This quantum butterfly effect has been demonstrated experimentally. Quantum and semiclassical treatments of system sensitivity to initial conditions are known as quantum chaos. In popular culture The butterfly effect has appeared across mediums such as literature (for instance, A Sound of Thunder), films and television (such as The Simpsons), video games (such as Life Is Strange), webcomics (such as Homestuck), AI-driven expansive language models, and more.
Mathematics
Dynamical systems
null
4035
https://en.wikipedia.org/wiki/Black
Black
Black is a color that results from the absence or complete absorption of visible light. It is an achromatic color, without hue, like white and grey. It is often used symbolically or figuratively to represent darkness. Black and white have often been used to describe opposites such as good and evil, the Dark Ages versus the Age of Enlightenment, and night versus day. Since the Middle Ages, black has been the symbolic color of solemnity and authority, and for this reason it is still commonly worn by judges and magistrates. Black was one of the first colors used by artists in Neolithic cave paintings. It was used in ancient Egypt and Greece as the color of the underworld. In the Roman Empire, it became the color of mourning, and over the centuries it was frequently associated with death, evil, witches, and magic. In the 14th century, it was worn by royalty, clergy, judges, and government officials in much of Europe. It became the color worn by English romantic poets, businessmen and statesmen in the 19th century, and a high fashion color in the 20th century. According to surveys in Europe and North America, it is the color most commonly associated with mourning, the end, secrets, magic, force, violence, fear, evil, and elegance. Black is the most common ink color used for printing books, newspapers and documents, as it provides the highest contrast with white paper and thus is the easiest color to read. Similarly, black text on a white screen is the most common format used on computer screens. As of 2019, the darkest material known is made by MIT engineers from vertically aligned carbon nanotubes. Etymology The word black comes from Old English blæc ("black, dark", also, "ink"), from Proto-Germanic *blakkaz ("burned"), from Proto-Indo-European *bhleg- ("to burn, gleam, shine, flash"), from base *bhel- ("to shine"), related to Old Saxon blak ("ink"), Old High German blach ("black"), Old Norse blakkr ("dark"), Dutch blaken ("to burn"), and Swedish bläck ("ink"). More distant cognates include Latin flagrare ("to blaze, glow, burn"), and Ancient Greek phlegein ("to burn, scorch"). The Ancient Greeks sometimes used the same word to name different colors, if they had the same intensity. Kuanos could mean both dark blue and black. The Ancient Romans had two words for black: ater was a flat, dull black, while niger was a brilliant, saturated black. Ater has vanished from the vocabulary, but niger was the source of the country name Nigeria, the English word Negro, and the word for "black" in most modern Romance languages (French: noir; Spanish and Portuguese: negro; Italian: nero; Romanian: negru). Old High German also had two words for black: swartz for dull black and blach for a luminous black. These are paralleled in Middle English by the terms swart for dull black and blaek for luminous black. Swart still survives as the word swarthy, while blaek became the modern English black. The former is cognate with the words used for black in most modern Germanic languages aside from English (German: schwarz, Dutch: zwart, Swedish: svart, Danish: sort, Icelandic: svartr). In heraldry, the word used for the black color is sable, named for the black fur of the sable, an animal. Art Prehistoric Black was one of the first colors used in art. The Lascaux Cave in France contains drawings of bulls and other animals drawn by paleolithic artists between 18,000 and 17,000 years ago. They began by using charcoal, and later achieved darker pigments by burning bones or grinding a powder of manganese oxide.
Ancient For the ancient Egyptians, black had positive associations; being the color of fertility and the rich black soil flooded by the Nile. It was the color of Anubis, the god of the underworld, who took the form of a black jackal, and offered protection against evil to the dead. To ancient Greeks, black represented the underworld, separated from the living by the river Acheron, whose water ran black. Those who had committed the worst sins were sent to Tartarus, the deepest and darkest level. In the center was the palace of Hades, the king of the underworld, where he was seated upon a black ebony throne. Black was one of the most important colors used by ancient Greek artists. In the 6th century BC, they began making black-figure pottery and later red figure pottery, using a highly original technique. In black-figure pottery, the artist would paint figures with a glossy clay slip on a red clay pot. When the pot was fired, the figures painted with the slip would turn black, against a red background. Later they reversed the process, painting the spaces between the figures with slip. This created magnificent red figures against a glossy black background. In the social hierarchy of ancient Rome, purple was reserved for the emperor; red was the color worn by soldiers (red cloaks for the officers, red tunics for the soldiers); white the color worn by the priests, and black was worn by craftsmen and artisans. The black they wore was not deep and rich; the vegetable dyes used to make black were not solid or lasting, so the blacks often faded to gray or brown. In Latin, the word for black, ater and to darken, atere, were associated with cruelty, brutality and evil. They were the root of the English words "atrocious" and "atrocity". For the Romans, black symbolized death and mourning. In the 2nd century BC Roman magistrates wore a dark toga, called a toga pulla, to funeral ceremonies. Later, under the Empire, the family of the deceased also wore dark colors for a long period; then, after a banquet to mark the end of mourning, exchanged the black for a white toga. In Roman poetry, death was called the hora nigra, the black hour. The German and Scandinavian peoples worshipped their own goddess of the night, Nótt, who crossed the sky in a chariot drawn by a black horse. They also feared Hel, the goddess of the kingdom of the dead, whose skin was black on one side and red on the other. They also held sacred the raven. They believed that Odin, the king of the Nordic pantheon, had two black ravens, Huginn and Muninn, who served as his agents, traveling the world for him, watching and listening. Postclassical In the early Middle Ages, black was commonly associated with darkness and evil. In Medieval paintings, the devil was usually depicted as having human form, but with wings and black skin or hair. 12th and 13th centuries In fashion, black did not have the prestige of red, the color of the nobility. It was worn by Benedictine monks as a sign of humility and penitence. In the 12th century a famous theological dispute broke out between the Cistercian monks, who wore white, and the Benedictines, who wore black. A Benedictine abbot, Pierre the Venerable, accused the Cistercians of excessive pride in wearing white instead of black. Saint Bernard of Clairvaux, the founder of the Cistercians responded that black was the color of the devil, hell, "of death and sin", while white represented "purity, innocence and all the virtues". Black symbolized both power and secrecy in the medieval world. 
The emblem of the Holy Roman Empire of Germany was a black eagle. The black knight in the poetry of the Middle Ages was an enigmatic figure, hiding his identity, usually wrapped in secrecy. Black ink, invented in China, was traditionally used in the Middle Ages for writing, for the simple reason that black was the darkest color and therefore provided the greatest contrast with white paper or parchment, making it the easiest color to read. It became even more important in the 15th century, with the invention of printing. A new kind of ink, printer's ink, was created out of soot, turpentine and walnut oil. The new ink made it possible to spread ideas to a mass audience through printed books, and to popularize art through black and white prints. Because of its contrast and clarity, black ink on white paper continued to be the standard for printing books, newspapers and documents; and for the same reason black text on a white background is the most common format used on computer screens. 14th and 15th centuries In the early Middle Ages, princes, nobles and the wealthy usually wore bright colors, particularly scarlet cloaks from Italy. Black was rarely part of the wardrobe of a noble family. The one exception was the fur of the sable. This glossy black fur, from an animal of the marten family, was the finest and most expensive fur in Europe. It was imported from Russia and Poland and used to trim the robes and gowns of royalty. In the 14th century, the status of black began to change. First, high-quality black dyes began to arrive on the market, allowing garments of a deep, rich black. Magistrates and government officials began to wear black robes, as a sign of the importance and seriousness of their positions. A third reason was the passage of sumptuary laws in some parts of Europe which prohibited the wearing of costly clothes and certain colors by anyone except members of the nobility. The famous bright scarlet cloaks from Venice and the peacock blue fabrics from Florence were restricted to the nobility. The wealthy bankers and merchants of northern Italy responded by changing to black robes and gowns, made with the most expensive fabrics. The change to the more austere but elegant black was quickly picked up by the kings and nobility. It began in northern Italy, where the Duke of Milan and the Count of Savoy and the rulers of Mantua, Ferrara, Rimini and Urbino began to dress in black. It then spread to France, led by Louis I, Duke of Orleans, younger brother of King Charles VI of France. It moved to England at the end of the reign of King Richard II (1377–1399), where all the court began to wear black. In 1419–20, black became the color of the powerful Duke of Burgundy, Philip the Good. It moved to Spain, where it became the color of the Spanish Habsburgs, of Charles V and of his son, Philip II of Spain (1527–1598). European rulers saw it as the color of power, dignity, humility and temperance. By the end of the 16th century, it was the color worn by almost all the monarchs of Europe and their courts. Modern 16th and 17th centuries While black was the color worn by the Catholic rulers of Europe, it was also the emblematic color of the Protestant Reformation in Europe and the Puritans in England and America. John Calvin, Philip Melanchthon and other Protestant theologians denounced the richly colored and decorated interiors of Roman Catholic churches. They saw the color red, worn by the pope and his cardinals, as the color of luxury, sin, and human folly. 
In some northern European cities, mobs attacked churches and cathedrals, smashed the stained glass windows and defaced the statues and decoration. In Protestant doctrine, clothing was required to be sober, simple and discreet. Bright colors were banished and replaced by blacks, browns and grays; women and children were recommended to wear white. In the Protestant Netherlands, Rembrandt used this sober new palette of blacks and browns to create portraits whose faces emerged from the shadows expressing the deepest human emotions. The Catholic painters of the Counter-Reformation, like Rubens, went in the opposite direction; they filled their paintings with bright and rich colors. The new Baroque churches of the Counter-Reformation were usually shining white inside and filled with statues, frescoes, marble, gold and colorful paintings, to appeal to the public. But European Catholics of all classes, like Protestants, eventually adopted a sober wardrobe that was mostly black, brown and gray. In the second part of the 17th century, Europe and America experienced an epidemic of fear of witchcraft. People widely believed that the devil appeared at midnight in a ceremony called a Black Mass or black sabbath, usually in the form of a black animal, often a goat, a dog, a wolf, a bear, a deer or a rooster, accompanied by their familiar spirits, black cats, serpents and other black creatures. This was the origin of the widespread superstition about black cats and other black animals. In medieval Flanders, in a ceremony called Kattenstoet, black cats were thrown from the belfry of the Cloth Hall of Ypres to ward off witchcraft. Witch trials were common in both Europe and America during this period. During the notorious Salem witch trials in New England in 1692–93, one of those on trial was accused of being able to turn into a "black thing with a blue cap," and others of having familiars in the form of a black dog, a black cat and a black bird. Nineteen women and men were hanged as witches. 18th and 19th centuries In the 18th century, during the European Age of Enlightenment, black receded as a fashion color. Paris became the fashion capital, and pastels, blues, greens, yellow and white became the colors of the nobility and upper classes. But after the French Revolution, black again became the dominant color. Black was the color of the industrial revolution, largely fueled by coal, and later by oil. Thanks to coal smoke, the buildings of the large cities of Europe and America gradually turned black. By 1846 the industrial area of the West Midlands of England was "commonly called 'the Black Country'". Charles Dickens and other writers described the dark streets and smoky skies of London, and they were vividly illustrated in the wood-engravings of French artist Gustave Doré. A different kind of black was an important part of the romantic movement in literature. Black was the color of melancholy, the dominant theme of romanticism. The novels of the period were filled with castles, ruins, dungeons, storms, and meetings at midnight. The leading poets of the movement were usually portrayed dressed in black, typically with a white shirt and open collar, and a scarf carelessly over their shoulder; Percy Bysshe Shelley and Lord Byron helped create the enduring stereotype of the romantic poet. The invention of inexpensive synthetic black dyes and the industrialization of the textile industry meant that high-quality black clothes were available for the first time to the general population.
In the 19th century black gradually became the most popular color of business dress of the upper and middle classes in England, the Continent, and America. Black dominated literature and fashion in the 19th century, and played a large role in painting. James McNeill Whistler made the color the subject of his most famous painting, Arrangement in grey and black number one (1871), better known as Whistler's Mother. Some 19th-century French painters had a low opinion of black: "Reject black," Paul Gauguin said, "and that mix of black and white they call gray. Nothing is black, nothing is gray." But Édouard Manet used blacks for their strength and dramatic effect. Manet's portrait of painter Berthe Morisot was a study in black which perfectly captured her spirit of independence. The black gave the painting power and immediacy; he even changed her eyes, which were green, to black to strengthen the effect. Henri Matisse quoted the French impressionist Pissarro telling him, "Manet is stronger than us all – he made light with black." Pierre-Auguste Renoir used luminous blacks, especially in his portraits. When someone told him that black was not a color, Renoir replied: "What makes you think that? Black is the queen of colors. I always detested Prussian blue. I tried to replace black with a mixture of red and blue, I tried using cobalt blue or ultramarine, but I always came back to ivory black." Vincent van Gogh used black lines to outline many of the objects in his paintings, such as the bed in the famous painting of his bedroom, making them stand apart. His painting of black crows over a cornfield, painted shortly before he died, was particularly agitated and haunting. In the late 19th century, black also became the color of anarchism. (See the section political movements.) 20th and 21st centuries In the 20th century, black was used by Italian and German fascism. (See the section political movements). In art, the color regained some of the territory that it had lost during the 19th century. The Russian painter Kasimir Malevich, a member of the Suprematist movement, created the Black Square in 1915, which is widely considered the first purely abstract painting. He wrote, "The painted work is no longer simply the imitation of reality, but is this very reality ... It is not a demonstration of ability, but the materialization of an idea." Black was appreciated by Henri Matisse. "When I didn't know what color to put down, I put down black," he said in 1945. "Black is a force: I used black as ballast to simplify the construction ... Since the impressionists it seems to have made continuous progress, taking a more and more important part in color orchestration, comparable to that of the double bass as a solo instrument." In the 1950s, black came to be a symbol of individuality and intellectual and social rebellion, the color of those who did not accept established norms and values. In Paris, it was worn by Left-Bank intellectuals and performers such as Juliette Gréco, and by some members of the Beat Movement in New York and San Francisco. Black leather jackets were worn by motorcycle gangs such as the Hells Angels and street gangs on the fringes of society in the United States. Black as a color of rebellion was celebrated in such films as The Wild One, with Marlon Brando. By the end of the 20th century, black was the emblematic color of the punk subculture and its fashion, and of the goth subculture. Goth fashion, which emerged in England in the 1980s, was inspired by Victorian era mourning dress.
In men's fashion, black gradually ceded its dominance to navy blue, particularly in business suits. Black evening dress and formal dress in general were worn less and less. In 1960, John F. Kennedy was the last American President to be inaugurated wearing formal dress; Lyndon Johnson and his successors were inaugurated wearing business suits. Women's fashion was revolutionized and simplified in 1926 by the French designer Coco Chanel, who published a drawing of a simple black dress in Vogue magazine. She famously said, "A woman needs just three things; a black dress, a black sweater, and, on her arm, a man she loves." French designer Jean Patou also followed suit by creating a black collection in 1929. Other designers contributed to the trend of the little black dress. The Italian designer Gianni Versace said, "Black is the quintessence of simplicity and elegance," and French designer Yves Saint Laurent said, "black is the liaison which connects art and fashion." One of the most famous black dresses of the century was designed by Hubert de Givenchy and was worn by Audrey Hepburn in the 1961 film Breakfast at Tiffany's. The American civil rights movement in the 1950s was a struggle for the political equality of African Americans. It developed into the Black Power movement, which lasted from the early 1960s until the late 1980s, and the Black Lives Matter movement in the 2010s and 2020s. It also popularized the slogan "Black is Beautiful". Science Physics In the visible spectrum, black is the result of the absorption of all light wavelengths. Black can be defined as the visual impression (or color) experienced when no visible light reaches the eye. Pigments or dyes that absorb light rather than reflect it back to the eye look black. A black pigment can, however, result from a combination of several pigments that collectively absorb all wavelengths of visible light. If appropriate proportions of three primary pigments are mixed, the result reflects so little light as to be called black. This provides two superficially opposite but actually complementary descriptions of black. Black is the color produced by the absorption of all wavelengths of visible light, or an exhaustive combination of multiple colors of pigment. In physics, a black body is a perfect absorber of light, but, by a thermodynamic rule, it is also the best emitter. Thus, the best radiative cooling, out of sunlight, is by using black paint, though it is important that it be black (a nearly perfect absorber) in the infrared as well. In elementary science, far ultraviolet light is called "black light" because, while itself unseen, it causes many minerals and other substances to fluoresce. Absorption of light is contrasted by transmission, reflection and diffusion, where the light is only redirected, causing objects to appear transparent, reflective or white respectively. A material is said to be black if most incoming light is absorbed equally in the material. Light (electromagnetic radiation in the visible spectrum) interacts with the atoms and molecules, which causes the energy of the light to be converted into other forms of energy, usually heat. This means that black surfaces can act as thermal collectors, absorbing light and generating heat (see Solar thermal collector). As of September 2019, the darkest material is made from vertically aligned carbon nanotubes. The material was grown by MIT engineers and was reported to have a 99.995% absorption rate of any incoming light.
This surpasses any former darkest materials including Vantablack, which has a peak absorption rate of 99.965% in the visible spectrum. Chemistry Pigments The earliest pigments used by Neolithic man were charcoal, red ocher and yellow ocher. The black lines of cave art were drawn with the tips of burnt torches made of a wood with resin. Different charcoal pigments were made by burning different woods and animal products, each of which produced a different tone. The charcoal would be ground and then mixed with animal fat to make the pigment. Vine black was produced in Roman times by burning the cut branches of grapevines. It could also be produced by burning the remains of the crushed grapes, which were collected and dried in an oven. According to the historian Vitruvius, the deepness and richness of the black produced corresponded to the quality of the wine. The finest wines produced a black with a bluish tinge the color of indigo. The 15th-century painter Cennino Cennini described how this pigment was made during the Renaissance in his famous handbook for artists: "...there is a black which is made from the tendrils of vines. And these tendrils need to be burned. And when they have been burned, throw some water onto them and put them out and then mull them in the same way as the other black. And this is a lean and black pigment and is one of the perfect pigments that we use." Cennini also noted that "There is another black which is made from burnt almond shells or peaches and this is a perfect, fine black." Similar fine blacks were made by burning the pits of the peach, cherry or apricot. The powdered charcoal was then mixed with gum arabic or the yellow of an egg to make a paint. Different civilizations burned different plants to produce their charcoal pigments. The Inuit of Alaska used wood charcoal mixed with the blood of seals to paint masks and wooden objects. The Polynesians burned coconuts to produce their pigment. Lamp black was used as a pigment for painting and frescoes, as a dye for fabrics, and in some societies for making tattoos. The 15th century Florentine painter Cennino Cennini described how it was made during the Renaissance: "... take a lamp full of linseed oil and fill the lamp with the oil and light the lamp. Then place it, lit, under a thoroughly clean pan and make sure that the flame from the lamp is two or three fingers from the bottom of the pan. The smoke that comes off the flame will hit the bottom of the pan and gather, becoming thick. Wait a bit. take the pan and brush this pigment (that is, this smoke) onto paper or into a pot with something. And it is not necessary to mull or grind it because it is a very fine pigment. Re-fill the lamp with the oil and put it under the pan like this several times and, in this way, make as much of it as is necessary." This same pigment was used by Indian artists to paint the Ajanta Caves, and as dye in ancient Japan. Ivory black, also known as bone char, was originally produced by burning ivory and mixing the resulting charcoal powder with oil. The color is still made today, but ordinary animal bones are substituted for ivory. Mars black is a black pigment made of synthetic iron oxides. It is commonly used in water-colors and oil painting. It takes its name from Mars, the god of war and patron of iron. Dyes Good-quality black dyes were not known until the middle of the 14th century. The most common early dyes were made from bark, roots or fruits of different trees; usually walnuts, chestnuts, or certain oak trees. 
The blacks produced were often more gray, brown or bluish. The cloth had to be dyed several times to darken the color. One solution used by dyers was to add some iron filings, rich in iron oxide, to the dye, which gave a deeper black. Another was to first dye the fabric dark blue, and then to dye it black.

A much richer and deeper black dye was eventually found, made from the oak apple or "gall-nut". The gall-nut is a small round growth which forms on oak and other varieties of trees. Gall-nuts range in size from 2 to 5 cm, and are caused by chemicals injected by the larva of certain kinds of gall wasp in the family Cynipidae. The dye was very expensive; a great quantity of gall-nuts was needed for a very small amount of dye. The gall-nuts which made the best dye came from Poland, eastern Europe, the Near East and North Africa. Beginning in about the 14th century, dye from gall-nuts was used for the clothes of the kings and princes of Europe.

Another important source of natural black dyes from the 17th century onwards was the logwood tree, or Haematoxylum campechianum, which also produced reddish and bluish dyes. It is a species of flowering tree in the legume family, Fabaceae, that is native to southern Mexico and northern Central America. The modern nation of Belize grew from 17th-century English logwood logging camps.

Since the mid-19th century, synthetic black dyes have largely replaced natural dyes. One of the important synthetic blacks is Nigrosin, a mixture of synthetic black dyes (CI 50415, Solvent black 5) made by heating a mixture of nitrobenzene, aniline and aniline hydrochloride in the presence of a copper or iron catalyst. Its main industrial uses are as a colorant for lacquers and varnishes and in marker-pen inks.

Inks

The first known inks were made by the Chinese, and date back to the 23rd century B.C. They used natural plant dyes and minerals such as graphite ground with water and applied with an ink brush. Early Chinese inks similar to the modern inkstick have been found dating to about 256 BC at the end of the Warring States period. They were made from soot, usually produced by burning pine wood, mixed with animal glue. To make ink from an inkstick, the stick is continuously ground against an inkstone with a small quantity of water to produce a dark liquid which is then applied with an ink brush. Artists and calligraphists could vary the thickness of the resulting ink by reducing or increasing the intensity and time of ink grinding. These inks produced the delicate shading and subtle or dramatic effects of Chinese brush painting.

India ink (or "Indian ink" in British English) is a black ink once widely used for writing and printing and now more commonly used for drawing, especially when inking comic books and comic strips. The technique of making it probably came from China. India ink has been in use in India since at least the 4th century BC, where it was called masi. In India, the black color of the ink came from bone char, tar, pitch and other substances.

The ancient Romans had a black writing ink they called atramentum librarium. Its name came from the Latin word atrare, which meant to make something black. (This was the same root as the English word atrocious.) It was usually made, like India ink, from soot, although one variety, called atramentum elephantinum, was made by burning the ivory of elephants.

Gall-nuts were also used for making fine black writing ink.
Iron gall ink (also known as iron gall nut ink or oak gall ink) was a purple-black or brown-black ink made from iron salts and tannic acids from gall nut. It was the standard writing and drawing ink in Europe, from about the 12th century to the 19th century, and remained in use well into the 20th century. Astronomy A black hole is a region of spacetime where gravity prevents anything, including light, from escaping. The theory of general relativity predicts that a sufficiently compact mass will deform spacetime to form a black hole. Around a black hole there is a mathematically defined boundary called an event horizon that marks the point of no return. It is called "black" because it absorbs all the light that hits the horizon, reflecting nothing, just like a perfect black body in thermodynamics. Black holes of stellar mass are expected to form when very massive stars collapse at the end of their life cycle. After a black hole has formed it can continue to grow by absorbing mass from its surroundings. By absorbing other stars and merging with other black holes, supermassive black holes of millions of solar masses may form. There is general consensus that supermassive black holes exist in the centers of most galaxies. Although a black hole itself is black, infalling material forms an accretion disk, one of the brightest types of object in the universe. Black-body radiation refers to the radiation coming from a body at a given temperature where all incoming energy (light) is converted to heat. Black sky refers to the appearance of space as one emerges from Earth's atmosphere. Why the night sky and space are black – Olbers' paradox The fact that outer space is black is sometimes called Olbers' paradox. In theory, because the universe is full of stars, and is believed to be infinitely large, it would be expected that the light of an infinite number of stars would be enough to brilliantly light the whole universe all the time. However, the background color of outer space is black. This contradiction was first noted in 1823 by German astronomer Heinrich Wilhelm Matthias Olbers, who posed the question of why the night sky was black. The current accepted answer is that, although the universe may be infinitely large, it is not infinitely old. It is thought to be about 13.8 billion years old, so we can only see objects as far away as the distance light can travel in 13.8 billion years. Light from stars farther away has not reached Earth, and cannot contribute to making the sky bright. Furthermore, as the universe is expanding, many stars are moving away from Earth. As they move, the wavelength of their light becomes longer, through the Doppler effect, and shifts toward red, or even becomes invisible. As a result of these two phenomena, there is not enough starlight to make space anything but black. The daytime sky on Earth is blue because light from the Sun strikes molecules in Earth's atmosphere scattering light in all directions. Blue light is scattered more than other colors, and reaches the eye in greater quantities, making the daytime sky appear blue. This is known as Rayleigh scattering. The nighttime sky on Earth is black because the part of Earth experiencing night is facing away from the Sun, the light of the Sun is blocked by Earth itself, and there is no other bright nighttime source of light in the vicinity. Thus, there is not enough light to undergo Rayleigh scattering and make the sky blue. 
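As a rough worked illustration (the representative wavelengths of about 450 nm for blue and 650 nm for red are assumptions chosen for this example, not figures from the article), the intensity of Rayleigh-scattered light varies with the inverse fourth power of the wavelength:

\[
I_{\text{scattered}} \propto \frac{1}{\lambda^{4}}, \qquad
\frac{I_{450\,\mathrm{nm}}}{I_{650\,\mathrm{nm}}} \approx \left(\frac{650}{450}\right)^{4} \approx 4.4 ,
\]

so blue light is scattered roughly four times more strongly than red light, which is what tints the sunlit sky blue.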
On the Moon, on the other hand, because there is virtually no atmosphere to scatter the light, the sky is black both day and night. This also holds true for other locations without an atmosphere, such as Mercury.

Biology

Culture

In China, the color black is associated with water, one of the five fundamental elements believed to compose all things, and with winter, cold, and the direction north, usually symbolized by a black tortoise. It is also associated with disorder, including the positive disorder which leads to change and new life. When the first emperor of China, Qin Shi Huang, seized power from the Zhou dynasty, he changed the imperial color from red to black, saying that black extinguished red. Only when the Han dynasty appeared in 206 BC was red restored as the imperial color.

In Japan, black is associated with mystery, the night, the unknown, the supernatural, the invisible and death. Combined with white, it can symbolize intuition. In 10th- and 11th-century Japan, it was believed that wearing black could bring misfortune. It was worn at court by those who wanted to set themselves apart from the established powers or who had renounced material possessions. In Japan black can also symbolize experience, as opposed to white, which symbolizes naiveté. The black belt in martial arts symbolizes experience, while a white belt is worn by novices. Japanese men traditionally wear a black kimono with some white decoration on their wedding day.

Black is associated with depth in Indonesia, as well as the subterranean world, demons, disaster, and the left hand. When combined with white, however, it symbolizes harmony and equilibrium.

Political movements

Anarchism

Anarchism is a political philosophy, most popular in the late 19th and early 20th centuries, which holds that governments and capitalism are harmful and undesirable. The symbol of anarchism was usually either a black flag or a black letter A. More recently it has usually been represented with a bisected red and black flag, to emphasise the movement's socialist roots in the First International. Anarchism was most popular in Spain, France, Italy, Ukraine and Argentina. There were also small but influential movements in the United States, Russia and many other countries around the world.

Fascism

The Blackshirts were Fascist paramilitary groups in Italy during the period immediately following World War I and until the end of World War II. The Blackshirts were officially known as the Voluntary Militia for National Security (Milizia Volontaria per la Sicurezza Nazionale, or MVSN). Inspired by the black uniforms of the Arditi, Italy's elite storm troops of World War I, the Fascist Blackshirts were organized by Benito Mussolini as the military tool of his political movement. They used violence and intimidation against Mussolini's opponents. The emblem of the Italian fascists was a black flag with fasces, an axe in a bundle of sticks, an ancient Roman symbol of authority. Mussolini came to power in 1922 through his March on Rome with the Blackshirts.

Black was also adopted by Adolf Hitler and the Nazis in Germany. Red, white and black were the colors of the flag of the German Empire from 1870 to 1918. In Mein Kampf, Hitler explained that they were "revered colors expressive of our homage to the glorious past." Hitler also wrote that "the new flag ... should prove effective as a large poster" because "in hundreds of thousands of cases a really striking emblem may be the first cause of awakening interest in a movement."
The black swastika was meant to symbolize the Aryan race, which, according to the Nazis, "was always anti-Semitic and will always be anti-Semitic." Several designs by a number of different authors were considered, but the one adopted in the end was Hitler's personal design. Black became the color of the uniform of the SS, the Schutzstaffel or "defense corps", the paramilitary wing of the Nazi Party, and was worn by SS officers from 1932 until the end of World War II. The Nazis used a black triangle to symbolize anti-social elements. The symbol originates from Nazi concentration camps, where every prisoner had to wear one of the Nazi concentration camp badges on their jacket, the color of which categorized them according to "their kind". Many Black Triangle prisoners were either mentally disabled or mentally ill. The homeless were also included, as were alcoholics, the Romani people, the habitually "work-shy", prostitutes, draft dodgers and pacifists. More recently the black triangle has been adopted as a symbol in lesbian culture and by disabled activists. Black shirts were also worn by the British Union of Fascists before World War II, and members of fascist movements in the Netherlands. Patriotic resistance The Lützow Free Corps, composed of volunteer German students and academics fighting against Napoleon in 1813, could not afford to make special uniforms and therefore adopted black, as the only color that could be used to dye their civilian clothing without the original color showing. In 1815 the students began to carry a red, black and gold flag, which they believed (incorrectly) had been the colors of the Holy Roman Empire (the imperial flag had actually been gold and black). In 1848, this banner became the flag of the German confederation. In 1866, Prussia unified Germany under its rule, and imposed the red, white and black of its own flag, which remained the colors of the German flag until the end of the Second World War. In 1949 the Federal Republic of Germany returned to the original flag and colors of the students and professors of 1815, which is the flag of Germany today. Military Black has been a traditional color of cavalry and armoured or mechanized troops. German armoured troops (Panzerwaffe) traditionally wore black uniforms, and even in others, a black beret is common. In Finland, black is the symbolic color for both armoured troops and combat engineers, and military units of these specialities have black flags and unit insignia. The black beret and the color black is also a symbol of special forces in many countries. Soviet and Russian OMON special police and Russian naval infantry wear a black beret. A black beret is also worn by military police in the Canadian, Czech, Croatian, Portuguese, Spanish and Serbian armies. The silver-on-black skull and crossbones symbol or Totenkopf and a black uniform were used by Hussars and Black Brunswickers, the German Panzerwaffe and the Nazi Schutzstaffel, and U.S. 400th Missile Squadron (crossed missiles), and continues in use with the Estonian Kuperjanov Battalion. Religion In Christian theology, black was the color of the universe before God created light. In many religious cultures, from Mesoamerica to Oceania to India and Japan, the world was created out of a primordial darkness. In the Bible the light of faith and Christianity is often contrasted with the darkness of ignorance and paganism. In Christianity, the devil is often called the "prince of darkness". 
The term was used in John Milton's poem Paradise Lost, published in 1667, referring to Satan, who is viewed as the embodiment of evil. It is an English translation of the Latin phrase princeps tenebrarum, which occurs in the Acts of Pilate, written in the fourth century, in the 11th-century hymn Rhythmus de die mortis by Pietro Damiani, and in a sermon by Bernard of Clairvaux from the 12th century. The phrase also occurs in King Lear by William Shakespeare, Act III, Scene IV, l. 14: "The prince of darkness is a gentleman."

Priests and pastors of the Roman Catholic, Eastern Orthodox and Protestant churches commonly wear black, as do monks of the Benedictine Order, who consider it the color of humility and penitence.

In Islam, black, along with green, plays an important symbolic role. It is the color of the Black Standard, the banner that is said to have been carried by the soldiers of Muhammad. It is also used as a symbol in Shi'a Islam (heralding the advent of the Mahdi), and in the flags of followers of Islamism and Jihadism.

In Hinduism, the goddess Kali, goddess of time and change, is portrayed with black or dark blue skin, wearing a necklace adorned with severed heads and hands. Her name means "The black one". She destroys anger and passion according to Hindu mythology, and her devotees are supposed to abstain from meat and intoxication. Kali herself does not eat meat, but according to the śāstras, those who are unable to give up meat-eating may sacrifice one goat (a small animal, never a cow) before the goddess on the night of amāvāsya (the new moon), and may then eat it.

In Paganism, black represents dignity, force, stability, and protection. The color is often used to banish and release negative energies, or for binding. An athame is a ceremonial blade, often with a black handle, used in some forms of witchcraft.

Sports

The national rugby union team of New Zealand is called the All Blacks, in reference to their black outfits, and the color is also shared by other New Zealand national teams such as the Black Caps (cricket) and the Kiwis (rugby league). Association football (soccer) referees traditionally wear all-black uniforms, although other uniform colors are now also worn. In auto racing, a black flag signals a driver to go into the pits. In baseball, "the black" refers to the batter's eye, a blacked-out area around the center-field bleachers, painted black to give hitters a decent background for pitched balls. A large number of teams have uniforms designed with black colors even when the team does not normally feature that color, and many feel the color sometimes imparts a psychological advantage to its wearers. Black is used by numerous professional and collegiate sports teams.

Idioms and expressions

In general, people of African origin are referred to as "Black", while people of European origin are referred to as "White". In the United States, "Black Friday" (the day after Thanksgiving Day, the fourth Thursday in November) is traditionally the busiest shopping day of the year. Many Americans are on holiday because of Thanksgiving, and many retailers open earlier and close later than normal, and offer special prices. The day's name originated in Philadelphia sometime before 1961, and originally was used to describe the heavy and disruptive downtown pedestrian and vehicle traffic which would occur on that day.
Later an alternative explanation began to be offered: that "Black Friday" indicates the point in the year when retailers begin to turn a profit, or are "in the black", because of the large volume of sales on that day. "In the black" means profitable; accountants originally used black ink in ledgers to indicate profit, and red ink to indicate a loss. Black Friday also refers to any particularly disastrous day on financial markets. The first Black Friday, 24 September 1869, was caused by the efforts of two speculators, Jay Gould and James Fisk, to corner the gold market on the New York Gold Exchange.

A blacklist is a list of undesirable persons or entities (to be placed on the list is to be "blacklisted"). Black comedy, also known as black humour, is a form of comedy dealing with morbid and serious topics. A black mark against a person relates to something bad they have done. A black mood is a bad one (cf. Winston Churchill's clinical depression, which he called "my black dog"). Black market is used to denote the trade of illegal goods, or alternatively the illegal trade of otherwise legal items at considerably higher prices, e.g. to evade rationing. Black propaganda is the use of known falsehoods, partial truths, or masquerades in propaganda to confuse an opponent. Blackmail is the act of threatening to reveal sensitive information about a person, or otherwise harm them, unless certain demands are fulfilled; ordinarily, such a threat is illegal. If the black eight-ball, in billiards, is sunk before all others are out of play, the player loses. The black sheep of the family is the ne'er-do-well. To blackball someone is to block their entry into a club or some such institution: in the traditional English gentlemen's club, members vote on the admission of a candidate by secretly placing a white or black ball in a hat, and if, upon the completion of voting, there was even one black ball amongst the white, the candidate would be denied membership, and he would never know who had "blackballed" him. What is known in the West as black tea is called "crimson tea" in Chinese and culturally influenced languages (Mandarin Chinese hóngchá; Japanese kōcha; Korean hongcha). "The black" is a wildfire suppression term referring to a burned area on a wildfire capable of acting as a safety zone. Black coffee refers to coffee without sugar or cream.

Associations and symbolism

In the West, black is commonly associated with mourning and bereavement, and is usually worn at funerals and memorial services. In some traditional societies, for example in Greece and Italy, some widows wear black for the rest of their lives. In contrast, across much of Africa and parts of Asia like Vietnam, white is a color of mourning.

A "black day" (or week or month) usually refers to a tragic date. The Romans marked fasti days with white stones and nefasti days with black. The term is often used to remember massacres. Black months include Black September in Jordan, when large numbers of Palestinians were killed, and Black July in Sri Lanka, the killing of members of the Tamil population by the Sinhalese government. In the financial world, the term often refers to a dramatic drop in the stock market. For example, the Wall Street crash of 29 October 1929, which marked the start of the Great Depression, is nicknamed Black Tuesday; it was preceded by Black Thursday, a downturn on 24 October the previous week.
In western popular culture, black has long been associated with evil and darkness. It is the traditional color of witchcraft and black magic.

Black is frequently used as a color of power, law and authority. In many countries judges and magistrates wear black robes, a custom that began in Europe in the 13th and 14th centuries. Jurists, magistrates and certain other court officials in France began to wear long black robes during the reign of Philip IV of France (1285–1314), and in England from the time of Edward I (1272–1307). The custom spread to the cities of Italy at about the same time, between 1300 and 1320. The robes of judges resembled those worn by the clergy, and represented the law and authority of the King, while those of the clergy represented the law of God and the authority of the church.

Most police uniforms were black until the 20th century, when they were largely replaced by blue in France, the U.S. and other countries. In the United States, police cars are frequently black and white. The riot control units of the Basque Autonomous Police in Spain are known as beltzak ("blacks") after their uniform.

Black formal attire is still worn at many solemn occasions or ceremonies, from graduations to formal balls. Graduation gowns are copied from the gowns worn by university professors in the Middle Ages, which in turn were copied from the robes worn by judges and priests, who often taught at the early universities. The mortarboard hat worn by graduates is adapted from a square cap called a biretta worn by medieval professors and clerics.

In the 19th and 20th centuries, many machines and devices, large and small, were painted black to stress their functionality. These included telephones, sewing machines, steamships, railroad locomotives, and automobiles. The Ford Model T, the first mass-produced car, was available only in black from 1914 to 1926. Among means of transportation, only the airplane was rarely painted black.

The term "Black" is often used in the West to describe people whose skin is darker. In the United States, it is particularly used to describe African Americans. Black is also commonly used as a racial description in the United Kingdom, since ethnicity was first measured in the 2001 census. In Canada, census respondents can identify themselves as Black. In Brazil, the Brazilian Institute of Geography and Statistics (IBGE) asks people to identify themselves as branco (white), pardo (brown), preto (black), or amarelo (yellow).

Black and white have often been used to describe opposites, particularly light and darkness and good and evil. In medieval literature, the white knight usually represented virtue, the black knight something mysterious and sinister. In American westerns, the hero often wore a white hat, the villain a black hat. In philosophy and arguments, an issue is often described as black-and-white, meaning that it is dichotomized (having two clear, opposing sides with no middle ground).

Black is commonly associated with secrecy. The Black Chamber was a term given to an office which secretly opened and read diplomatic mail and broke codes. Queen Elizabeth I had such an office, headed by her Secretary, Sir Francis Walsingham, which successfully broke the Spanish codes and broke up several plots against the Queen. In France a cabinet noir was established inside the French post office by Louis XIII to open diplomatic mail. It was closed during the French Revolution but re-opened under Napoleon I.
The Habsburg Empire and Dutch Republic had similar black chambers. The United States created a secret peacetime Black Chamber, called the Cipher Bureau, in 1919. It was funded by the State Department and the Army and disguised as a commercial company in New York. It successfully broke a number of diplomatic codes, including the code of the Japanese government. It was closed down in 1929 after the State Department withdrew funding, when the new Secretary of State, Henry Stimson, stated that "Gentlemen do not read each other's mail." The Cipher Bureau was the ancestor of the U.S. National Security Agency.

A black project is a secret, unacknowledged military project, such as Enigma decryption during World War II, or a secret counter-narcotics or police sting operation. Black ops are covert operations carried out by a government, government agency or military. A black budget is a government budget that is allocated for classified or other secret operations of a nation; it is an account of expenses and spending related to military research and covert operations, and is mostly kept classified for security reasons.

Black is the color most commonly associated with elegance in Europe and the United States. Black first became a fashionable color for men in Europe in the 17th century, in the courts of Italy and Spain. In the 19th century, it was the fashion for men both in business and for evening wear. For women's fashion, the defining moment was the invention of the simple black dress by Coco Chanel in 1926. Thereafter, a long black gown was used for formal occasions, while the simple black dress could be used for everything else. The expression "X is the new black" is a reference to the latest trend or fad that is considered a wardrobe basic for the duration of the trend, on the basis that black is always fashionable. The phrase has taken on a life of its own and has become a cliché.
4054
https://en.wikipedia.org/wiki/Battleship
Battleship
A battleship is a large, heavily armored warship with a main battery consisting of large-caliber guns, designed to serve as a capital ship with the heaviest firepower available. Before the rise of supercarriers, battleships were among the largest and most formidable weapon systems ever built. The term battleship came into use in the late 1880s to describe a type of ironclad warship, now referred to by historians as the pre-dreadnought battleship. In 1906, the commissioning of HMS Dreadnought into the United Kingdom's Royal Navy heralded a revolution in battleship design. Subsequent battleship designs, influenced by HMS Dreadnought, were referred to as "dreadnoughts", though the term eventually became obsolete as dreadnoughts became the only type of battleship in common use.

Battleships dominated naval warfare in the late 19th and early 20th centuries. They were a symbol of naval dominance and national might, and for decades were a major factor in power projection, diplomacy and military strategy. A global arms race in battleship construction began in Europe in the 1890s and culminated at the decisive Battle of Tsushima in 1905, the outcome of which significantly influenced the design of HMS Dreadnought. The launch of Dreadnought in 1906 commenced a new naval arms race. Three major fleet actions between steel battleships took place: the long-range gunnery duel at the Battle of the Yellow Sea in 1904, the decisive Battle of Tsushima in 1905 (both during the Russo-Japanese War) and the inconclusive Battle of Jutland in 1916, during the First World War. Jutland was the largest naval battle and the only full-scale clash of dreadnoughts of the war, and it was the last major battle in naval history fought primarily by battleships.

The Naval Treaties of the 1920s and 1930s limited the number of battleships, though technical innovation in battleship design continued. Both the Allied and Axis powers built battleships during World War II, though the increasing importance of the aircraft carrier meant that the battleship played a less important role than had been expected in that conflict.

The value of the battleship was questioned, even during its heyday. There were few of the decisive fleet battles that battleship proponents expected and used to justify the vast resources spent on building battlefleets. Despite their huge firepower and protection, battleships were increasingly vulnerable to much smaller and relatively inexpensive weapons: initially the torpedo and the naval mine, and later attack aircraft and the guided missile. The growing range of naval engagements led to the aircraft carrier replacing the battleship as the leading capital ship during World War II, with the last battleship, HMS Vanguard, launched in 1944. Four battleships were retained by the United States Navy until the end of the Cold War for fire support purposes; they were last used in combat during the Gulf War in 1991 and were struck from the U.S. Naval Vessel Register in the 2000s. Many World War II-era American battleships survive today as museum ships.

History

Ships of the line

A ship of the line was a large, unarmored wooden sailing ship which mounted a battery of up to 120 smoothbore guns and carronades. It came to prominence with the adoption of line-of-battle tactics in the early 17th century and remained the dominant warship type until the end of the sailing battleship's heyday in the 1830s. From 1794, the alternative term 'line of battle ship' was contracted (informally at first) to 'battle ship' or 'battleship'.
The sheer number of guns fired broadside meant a ship of the line could wreck any wooden enemy, holing her hull, knocking down masts, wrecking her rigging, and killing her crew. However, the effective range of the guns was as little as a few hundred yards, so the battle tactics of sailing ships depended in part on the wind.

Over time, ships of the line gradually became larger and carried more guns, but otherwise remained quite similar. The first major change to the ship of the line concept was the introduction of steam power as an auxiliary propulsion system. Steam power was gradually introduced to the navy in the first half of the 19th century, initially for small craft and later for frigates. The French Navy introduced steam to the line of battle with the 90-gun Napoléon in 1850—the first true steam battleship. Napoléon was armed as a conventional ship of the line, but her steam engines could drive her at speed regardless of the wind, a potentially decisive advantage in a naval engagement. The introduction of steam accelerated the growth in size of battleships. France and the United Kingdom were the only countries to develop fleets of wooden steam screw battleships, although several other navies operated small numbers of screw battleships, including Russia (9), the Ottoman Empire (3), Sweden (2), Naples (1), Denmark (1) and Austria (1).

Ironclads

The adoption of steam power was only one of a number of technological advances which revolutionized warship design in the 19th century. The ship of the line was overtaken by the ironclad: powered by steam, protected by metal armor, and armed with guns firing high-explosive shells.

Explosive shells

Guns that fired explosive or incendiary shells were a major threat to wooden ships, and these weapons quickly became widespread after the introduction of 8-inch shell guns as part of the standard armament of French and American line-of-battle ships in 1841. In the Crimean War, six line-of-battle ships and two frigates of the Russian Black Sea Fleet destroyed seven Turkish frigates and three corvettes with explosive shells at the Battle of Sinop in 1853. Later in the war, French ironclad floating batteries used similar weapons against the defenses at the Battle of Kinburn. Nevertheless, wooden-hulled ships stood up comparatively well to shells, as shown in the 1866 Battle of Lissa, where the modern Austrian steam two-decker Kaiser ranged across a confused battlefield, rammed an Italian ironclad and took 80 hits from Italian ironclads, many of them shells, including at least one 300-pound shot at point-blank range. Despite losing her bowsprit and her foremast, and being set on fire, she was ready for action again the very next day.

Iron armor and construction

The development of high-explosive shells made the use of iron armor plate on warships necessary. In 1859 France launched Gloire, the first ocean-going ironclad warship. She had the profile of a ship of the line, cut down to one deck due to weight considerations. Although made of wood and reliant on sail for most journeys, Gloire was fitted with a propeller, and her wooden hull was protected by a layer of thick iron armor. Gloire prompted further innovation from the Royal Navy, anxious to prevent France from gaining a technological lead. The superior armored frigate HMS Warrior followed Gloire by only 14 months, and both nations embarked on a program of building new ironclads and converting existing screw ships of the line to armored frigates.
Within two years, Italy, Austria, Spain and Russia had all ordered ironclad warships, and by the time of the famous clash of USS Monitor and CSS Virginia at the Battle of Hampton Roads at least eight navies possessed ironclad ships. Navies experimented with the positioning of guns, in turrets (like USS Monitor), central batteries or barbettes, or with the ram as the principal weapon. As steam technology developed, masts were gradually removed from battleship designs. By the mid-1870s steel was used as a construction material alongside iron and wood. The French Navy's Redoutable, laid down in 1873 and launched in 1876, was a central battery and barbette warship which became the first battleship in the world to use steel as the principal building material.

Pre-dreadnought battleship

The term "battleship" was officially adopted by the Royal Navy in the re-classification of 1892. By the 1890s, there was an increasing similarity between battleship designs, and the type that later became known as the 'pre-dreadnought battleship' emerged. These were heavily armored ships, mounting a mixed battery of guns in turrets, and without sails. The typical first-class battleship of the pre-dreadnought era displaced 15,000 to 17,000 tons, had a top speed of around 16 knots, and carried an armament of four heavy guns in two turrets fore and aft, with a mixed-caliber secondary battery amidships around the superstructure. An early design with superficial similarity to the pre-dreadnought is the British Devastation of 1871.

The slow-firing main guns were the principal weapons for battleship-to-battleship combat. The intermediate and secondary batteries had two roles. Against major ships, it was thought a 'hail of fire' from quick-firing secondary weapons could distract enemy gun crews by inflicting damage to the superstructure, and they would be more effective against smaller ships such as cruisers. Smaller guns (12-pounders and smaller) were reserved for protecting the battleship against the threat of torpedo attack from destroyers and torpedo boats.

The beginning of the pre-dreadnought era coincided with Britain reasserting her naval dominance. For many years previously, Britain had taken naval supremacy for granted, and expensive naval projects were criticized by political leaders of all inclinations. However, in 1888 a war scare with France and the build-up of the Russian navy gave added impetus to naval construction, and the British Naval Defence Act of 1889 laid down a new fleet including eight new battleships. The principle that Britain's navy should be more powerful than the two next most powerful fleets combined was established. This policy was designed to deter France and Russia from building more battleships, but both nations nevertheless expanded their fleets with more and better pre-dreadnoughts in the 1890s.

In the last years of the 19th century and the first years of the 20th, the escalation in the building of battleships became an arms race between Britain and Germany. The German naval laws of 1898 and 1900 authorized a fleet of 38 battleships, a serious threat to the balance of naval power. Britain answered with further shipbuilding, but by the end of the pre-dreadnought era British supremacy at sea had markedly weakened. In 1883, the United Kingdom had 38 battleships, twice as many as France and almost as many as the rest of the world put together. By 1897, Britain's lead was far smaller due to competition from France, Germany, and Russia, as well as the development of pre-dreadnought fleets in Italy, the United States and Japan.
The Ottoman Empire, Spain, Sweden, Denmark, Norway, the Netherlands, Chile and Brazil all had second-rate fleets led by armored cruisers, coastal defence ships or monitors. Pre-dreadnoughts continued the technical innovations of the ironclad. Turrets, armor plate, and steam engines were all improved over the years, and torpedo tubes were also introduced. A small number of designs, including the American Kearsarge and Virginia classes, experimented with all or part of the 8-inch intermediate battery superimposed over the 12-inch primary. Results were poor: recoil factors and blast effects left the 8-inch battery all but unusable, and the inability to train the primary and intermediate armaments on different targets led to significant tactical limitations. Even though such innovative designs saved weight (a key reason for their inception), they proved too cumbersome in practice.

Dreadnought era

In 1906, the British Royal Navy launched the revolutionary HMS Dreadnought. Created as a result of pressure from Admiral Sir John ("Jackie") Fisher, HMS Dreadnought rendered existing battleships obsolete. Combining an "all-big-gun" armament of ten 12-inch (305 mm) guns with unprecedented speed (from steam turbine engines) and protection, she prompted navies worldwide to re-evaluate their battleship building programs. While the Japanese had laid down an all-big-gun battleship, Satsuma, in 1904, and the concept of an all-big-gun ship had been in circulation for several years, it had yet to be validated in combat. Dreadnought sparked a new arms race, principally between Britain and Germany but reflected worldwide, as the new class of warships became a crucial element of national power.

Technical development continued rapidly through the dreadnought era, with steep changes in armament, armor and propulsion. Ten years after Dreadnought's commissioning, much more powerful ships, the super-dreadnoughts, were being built.

Origin

In the first years of the 20th century, several navies worldwide experimented with the idea of a new type of battleship with a uniform armament of very heavy guns. Admiral Vittorio Cuniberti, the Italian Navy's chief naval architect, articulated the concept of an all-big-gun battleship in 1903. When the Regia Marina did not pursue his ideas, Cuniberti wrote an article in Jane's proposing an "ideal" future British battleship: a large armored warship of 17,000 tons, armed solely with a single-calibre main battery of twelve 12-inch (305 mm) guns, carrying belt armor, and capable of 24 knots (44 km/h).

The Russo-Japanese War provided operational experience to validate the "all-big-gun" concept. During the Battle of the Yellow Sea on August 10, 1904, Admiral Togo of the Imperial Japanese Navy commenced deliberate 12-inch gun fire at the Russian flagship Tzesarevich at 14,200 yards (13,000 meters). At the Battle of Tsushima on May 27, 1905, Russian Admiral Rozhestvensky's flagship fired the first 12-inch guns at the Japanese flagship Mikasa at 7,000 meters. It is often held that these engagements demonstrated the importance of the 12-inch gun over its smaller counterparts, though some historians take the view that secondary batteries were just as important as the larger weapons when dealing with smaller, fast-moving torpedo craft. Such was the case, albeit unsuccessfully, when a Russian battleship at Tsushima was sent to the bottom by destroyer-launched torpedoes. The 1903–04 design also retained traditional triple-expansion steam engines.
As early as 1904, Jackie Fisher had been convinced of the need for fast, powerful ships with an all-big-gun armament. If Tsushima influenced his thinking, it was to persuade him of the need to standardise on 12-inch guns. Fisher's concerns were submarines and destroyers equipped with torpedoes, then threatening to outrange battleship guns, which made speed imperative for capital ships. Fisher's preferred option was his brainchild, the battlecruiser: lightly armored but heavily armed with eight 12-inch guns and propelled to high speed by steam turbines.

It was to prove this revolutionary technology that Dreadnought was designed in January 1905, laid down in October 1905 and sped to completion by 1906. She carried ten 12-inch guns, had an 11-inch armor belt, and was the first large ship powered by turbines. She mounted her guns in five turrets: three on the centerline (one forward, two aft) and two on the wings, giving her at her launch twice the broadside of any other warship. She retained a number of 12-pound (3-inch, 76 mm) quick-firing guns for use against destroyers and torpedo boats. Her armor was heavy enough for her to go head-to-head with any other ship in a gun battle, and conceivably win.

Dreadnought was to have been followed by three battlecruisers, their construction delayed to allow lessons from Dreadnought to be used in their design. While Fisher may have intended Dreadnought to be the last Royal Navy battleship, the design was so successful he found little support for his plan to switch to a battlecruiser navy. Although there were some problems with the ship (the wing turrets had limited arcs of fire and strained the hull when firing a full broadside, and the top of the thickest armor belt lay below the waterline at full load), the Royal Navy promptly commissioned another six ships to a similar design in two follow-on classes. An American design, South Carolina, authorized in 1905 and laid down in December 1906, was another of the first dreadnoughts, but she and her sister, Michigan, were not launched until 1908. Both used triple-expansion engines and had a superior layout of the main battery, dispensing with Dreadnought's wing turrets. They thus retained the same broadside, despite having two fewer guns.

Arms race

In 1897, before the revolution in design brought about by Dreadnought, the Royal Navy had 62 battleships in commission or building, a lead of 26 over France and 50 over Germany. The 1906 launching of Dreadnought prompted an arms race with major strategic consequences. Major naval powers raced to build their own dreadnoughts. Possession of modern battleships was not only seen as vital to naval power, but also, as with nuclear weapons after World War II, represented a nation's standing in the world. Germany, France, Japan, Italy, Austria, and the United States all began dreadnought programmes, while the Ottoman Empire, Argentina, Russia, Brazil, and Chile commissioned dreadnoughts to be built in British and American yards.

World War I

By virtue of geography, the Royal Navy was able to use her imposing battleship and battlecruiser fleet to impose a strict and successful naval blockade of Germany and keep Germany's smaller battleship fleet bottled up in the North Sea: only narrow channels led to the Atlantic Ocean, and these were guarded by British forces. Both sides were aware that, because of the greater number of British dreadnoughts, a full fleet engagement would be likely to result in a British victory.
The German strategy was therefore to try to provoke an engagement on their terms: either to induce a part of the Grand Fleet to enter battle alone, or to fight a pitched battle near the German coastline, where friendly minefields, torpedo boats and submarines could be used to even the odds. This did not happen, however, due in large part to the necessity of keeping submarines for the Atlantic campaign. Submarines were the only vessels in the Imperial German Navy able to break out and raid British commerce in force, but even though they sank many merchant ships, they could not successfully counter-blockade the United Kingdom; the Royal Navy adopted convoy tactics to combat the submarine counter-blockade and eventually defeated it. This was in stark contrast to Britain's successful blockade of Germany.

The first two years of war saw the Royal Navy's battleships and battlecruisers regularly "sweep" the North Sea, making sure that no German ships could get in or out. Only a few German surface ships that were already at sea, such as the famous light cruiser Emden, were able to raid commerce. Even some of those that did manage to get out were hunted down by battlecruisers, as in the Battle of the Falklands on December 8, 1914. The sweeping actions in the North Sea led to battles including Heligoland Bight and Dogger Bank, and to German raids on the English coast, all of which were attempts by the Germans to lure out portions of the Grand Fleet and defeat the Royal Navy in detail. On May 31, 1916, a further attempt to draw British ships into battle on German terms resulted in a clash of the battlefleets in the Battle of Jutland. The German fleet withdrew to port after two short encounters with the British fleet. Less than two months later, the Germans once again attempted to draw portions of the Grand Fleet into battle. The resulting Action of 19 August 1916 proved inconclusive. This reinforced German determination not to engage in a fleet-to-fleet battle.

In the other naval theatres there were no decisive pitched battles. In the Black Sea, engagements between Russian and Ottoman battleships were restricted to skirmishes. In the Baltic Sea, action was largely limited to the raiding of convoys and the laying of defensive minefields; the only significant clash of battleship squadrons there was the Battle of Moon Sound, at which one Russian pre-dreadnought was lost. The Adriatic was in a sense the mirror of the North Sea: the Austro-Hungarian dreadnought fleet remained bottled up by the British and French blockade. And in the Mediterranean, the most important use of battleships was in support of the amphibious assault on Gallipoli.

In September 1914, the threat posed to surface ships by German U-boats was confirmed by successful attacks on British cruisers, including the sinking of three British armored cruisers by the German submarine U-9 in less than an hour. The British super-dreadnought HMS Audacious soon followed suit, striking a German mine in October 1914 and sinking. The threat that German U-boats posed to British dreadnoughts was enough to cause the Royal Navy to change its strategy and tactics in the North Sea to reduce the risk of U-boat attack. Further near-misses from submarine attacks on battleships, and casualties amongst cruisers, led to growing concern in the Royal Navy about the vulnerability of battleships.
As the war wore on, however, it turned out that whilst submarines did prove a very dangerous threat to older pre-dreadnought battleships—as shown by the Ottoman Mesûdiye, caught in the Dardanelles by a British submarine, and by HMS Triumph and HMS Majestic, torpedoed by U-21 off Gallipoli, among others—the threat they posed to dreadnought battleships proved to have been largely a false alarm. HMS Audacious turned out to be the only dreadnought lost to such underwater weapons in World War I. While battleships were never intended for anti-submarine warfare, there was one instance of a submarine being sunk by a dreadnought battleship: HMS Dreadnought rammed and sank the German submarine U-29 on March 18, 1915, off the Moray Firth.

Whilst the escape of the German fleet from the superior British firepower at Jutland was effected by the German cruisers and destroyers successfully turning away the British battleships, the German attempt to rely on U-boat attacks on the British fleet failed. Torpedo boats did have some successes against battleships in World War I, as demonstrated by the sinking of the British pre-dreadnought Goliath by a Turkish torpedo boat during the Dardanelles Campaign and the destruction of the Austro-Hungarian dreadnought Szent István by Italian motor torpedo boats in June 1918. In large fleet actions, however, destroyers and torpedo boats were usually unable to get close enough to the battleships to damage them. The only battleship sunk in a fleet action by either torpedo boats or destroyers was the obsolescent German pre-dreadnought Pommern, sunk by destroyers during the night phase of the Battle of Jutland. The German High Seas Fleet, for their part, were determined not to engage the British without the assistance of submarines; and since the submarines were needed more for raiding commercial traffic, the fleet stayed in port for much of the war.

Inter-war period

For many years after the war, Germany simply had no battleships. The Armistice with Germany required that most of the High Seas Fleet be disarmed and interned in a neutral port; largely because no neutral port could be found, the ships remained in British custody in Scapa Flow, Scotland. The Treaty of Versailles specified that the ships should be handed over to the British. Instead, most of them were scuttled by their German crews on June 21, 1919, just before the signature of the peace treaty. The treaty also limited the German Navy, and prevented Germany from building or possessing any capital ships.

The inter-war period saw the battleship subjected to strict international limitations to prevent a costly arms race breaking out. While the victors were not limited by the Treaty of Versailles, many of the major naval powers were crippled after the war. Faced with the prospect of a naval arms race against the United Kingdom and Japan, which would in turn have led to a possible Pacific war, the United States was keen to conclude the Washington Naval Treaty of 1922. This treaty limited the number and size of battleships that each major nation could possess, and required Britain to accept parity with the U.S. and to abandon the British alliance with Japan. The Washington treaty was followed by a series of other naval treaties, including the First Geneva Naval Conference (1927), the First London Naval Treaty (1930), the Second Geneva Naval Conference (1932), and finally the Second London Naval Treaty (1936), which all set limits on major warships.
These treaties became effectively obsolete on September 1, 1939, at the beginning of World War II, but the ship classifications that had been agreed upon still apply. The treaty limitations meant that fewer new battleships were launched in 1919–1939 than in 1905–1914. The treaties also inhibited development by imposing upper limits on the weights of ships. Projected British, American and Japanese designs—all of which continued the trend to larger ships with bigger guns and thicker armor—never got off the drawing board. Those designs which were commissioned during this period were referred to as treaty battleships.

Rise of air power

As early as 1914, the British Admiral Percy Scott predicted that battleships would soon be made irrelevant by aircraft. By the end of World War I, aircraft had successfully adopted the torpedo as a weapon. In 1921 the Italian general and air theorist Giulio Douhet completed a hugely influential treatise on strategic bombing titled The Command of the Air, which foresaw the dominance of air power over naval units.

In the 1920s, General Billy Mitchell of the United States Army Air Corps, believing that air forces had rendered navies around the world obsolete, testified in front of Congress that "1,000 bombardment airplanes can be built and operated for about the price of one battleship" and that a squadron of these bombers could sink a battleship, making for more efficient use of government funds. This infuriated the U.S. Navy, but Mitchell was nevertheless allowed to conduct a careful series of bombing tests alongside Navy and Marine bombers. In 1921, he bombed and sank numerous ships, including the "unsinkable" German World War I battleship Ostfriesland and an obsolete American pre-dreadnought. Although Mitchell had required "war-time conditions", the ships sunk were obsolete, stationary, defenseless and had no damage control. The sinking of Ostfriesland was accomplished by violating an agreement that would have allowed Navy engineers to examine the effects of various munitions: Mitchell's airmen disregarded the rules and sank the ship within minutes in a coordinated attack. The stunt made headlines, and Mitchell declared, "No surface vessels can exist wherever air forces acting from land bases are able to attack them." While far from conclusive, Mitchell's test was significant because it put the proponents of the battleship on the defensive against naval aviation. Rear Admiral William A. Moffett used public relations against Mitchell to make headway toward expansion of the U.S. Navy's nascent aircraft carrier program.

Rearmament

The Royal Navy, United States Navy, and Imperial Japanese Navy extensively upgraded and modernized their World War I–era battleships during the 1930s. Among the new features were increased tower height and stability for the optical rangefinder equipment (for gunnery control), more armor (especially around turrets) to protect against plunging fire and aerial bombing, and additional anti-aircraft weapons. Some British ships received a large block superstructure nicknamed the "Queen Anne's castle", as in the rebuilt Queen Elizabeth-class ships, a form that would also be used in the conning towers of the new fast battleships. External bulges were added to improve buoyancy, to counteract weight increases, and to provide underwater protection against mines and torpedoes. The Japanese rebuilt all of their battleships, plus their battlecruisers, with distinctive "pagoda" structures, though the Hiei received a more modern bridge tower that would influence the new Yamato class.
Bulges were fitted, including steel tube arrays, to improve both underwater and vertical protection along the waterline. The U.S. experimented with cage masts and later tripod masts, though after the Japanese attack on Pearl Harbor some of the most severely damaged ships, such as West Virginia and California, were rebuilt with tower masts, giving them an appearance similar to their newer contemporaries. Radar, which was effective beyond visual range and in complete darkness or adverse weather, was introduced to supplement optical fire control.

Even when war threatened again in the late 1930s, battleship construction did not regain the level of importance it had held in the years before World War I. The "building holiday" imposed by the naval treaties meant the capacity of dockyards worldwide had shrunk, and the strategic position had changed. In Germany, the ambitious Plan Z for naval rearmament was abandoned in favor of a strategy of submarine warfare, supplemented by the use of battlecruisers and commerce raiders. In Britain, the most pressing need was for air defenses and convoy escorts to safeguard the civilian population from bombing or starvation, and rearmament construction plans consisted of five ships of the King George V class. It was in the Mediterranean that navies remained most committed to battleship warfare. France intended to build six battleships of the Dunkerque and Richelieu classes, and the Italians four Littorio-class ships. Neither navy built significant aircraft carriers. The U.S. preferred to spend limited funds on aircraft carriers until the North Carolina class. Japan, also prioritising aircraft carriers, nevertheless began work on three mammoth Yamatos (although the third, Shinano, was later completed as a carrier), while a planned fourth was cancelled.

At the outbreak of the Spanish Civil War, the Spanish navy included only two small dreadnought battleships, España and Jaime I. España (originally named Alfonso XIII), by then in reserve at the northwestern naval base of El Ferrol, fell into Nationalist hands in July 1936. The crew aboard Jaime I remained loyal to the Republic, killed their officers, who apparently supported Franco's attempted coup, and joined the Republican Navy. Thus each side had one battleship; however, the Republican Navy generally lacked experienced officers. The Spanish battleships mainly restricted themselves to mutual blockades, convoy escort duties, and shore bombardment, rarely engaging other surface units directly. In April 1937, España ran into a mine laid by friendly forces, and sank with little loss of life. In May 1937, Jaime I was damaged by Nationalist air attacks and a grounding incident, and was forced to go back to port to be repaired. There she was again hit by several aerial bombs. It was then decided to tow the battleship to a more secure port, but during the transport she suffered an internal explosion that caused 300 deaths and her total loss. Several Italian and German capital ships participated in the non-intervention blockade. On May 29, 1937, two Republican aircraft managed to bomb the German pocket battleship Deutschland outside Ibiza, causing severe damage and loss of life. Admiral Scheer retaliated two days later by bombarding Almería, causing much destruction, and the resulting Deutschland incident meant the end of German and Italian participation in non-intervention.

World War II

The Schleswig-Holstein—an obsolete pre-dreadnought—fired the first shots of World War II with her bombardment of the Polish garrison at Westerplatte, and the final surrender of the Japanese Empire took place aboard a United States Navy battleship, USS Missouri.
Between those two events, it had become clear that aircraft carriers were the new principal ships of the fleet and that battleships now performed a secondary role. Battleships played a part in major engagements in Atlantic, Pacific and Mediterranean theaters; in the Atlantic, the Germans used their battleships as independent commerce raiders. However, clashes between battleships were of little strategic importance. The Battle of the Atlantic was fought between destroyers and submarines, and most of the decisive fleet clashes of the Pacific war were determined by aircraft carriers. In the first year of the war, armored warships defied predictions that aircraft would dominate naval warfare. and surprised and sank the aircraft carrier off western Norway in June 1940. This engagement marked the only time a fleet carrier was sunk by surface gunnery. In the attack on Mers-el-Kébir, British battleships opened fire on the French battleships in the harbor near Oran in Algeria with their heavy guns. The fleeing French ships were then pursued by planes from aircraft carriers. The subsequent years of the war saw many demonstrations of the maturity of the aircraft carrier as a strategic naval weapon and its effectiveness against battleships. The British air attack on the Italian naval base at Taranto sank one Italian battleship and damaged two more. The same Swordfish torpedo bombers played a crucial role in sinking the German battleship . On December 7, 1941, the Japanese launched a surprise attack on Pearl Harbor. Within a short time, five of eight U.S. battleships were sunk or sinking, with the rest damaged. All three American aircraft carriers were out to sea, however, and evaded destruction. The sinking of the British battleship and battlecruiser , demonstrated the vulnerability of a battleship to air attack while at sea without sufficient air cover, settling the argument begun by Mitchell in 1921. Both warships were under way and en route to attack the Japanese amphibious force that had invaded Malaya when they were caught by Japanese land-based bombers and torpedo bombers on December 10, 1941. At many of the early crucial battles of the Pacific, for instance Coral Sea and Midway, battleships were either absent or overshadowed as carriers launched wave after wave of planes into the attack at a range of hundreds of miles. In later battles in the Pacific, battleships primarily performed shore bombardment in support of amphibious landings and provided anti-aircraft defense as escort for the carriers. Even the largest battleships ever constructed, Japan's , which carried a main battery of nine 18-inch (46 cm) guns and were designed as a principal strategic weapon, were never given a chance to show their potential in the decisive battleship action that figured in Japanese pre-war planning. The last battleship confrontation in history was the Battle of Surigao Strait, on October 25, 1944, in which a numerically and technically superior American battleship group destroyed a lesser Japanese battleship group by gunfire after it had already been devastated by destroyer torpedo attacks. All but one of the American battleships in this confrontation had previously been sunk during the attack on Pearl Harbor and subsequently raised and repaired. fired the last major-caliber salvo of this battle. In April 1945, during the battle for Okinawa, the world's most powerful battleship, the Yamato, was sent out on a suicide mission against a massive U.S. 
force and sunk by overwhelming pressure from carrier aircraft with nearly all hands lost. After that, the Japanese fleet remaining in the home islands was also destroyed by the US naval air force. Cold War After World War II, several navies retained their existing battleships, but they were no longer strategically dominant military assets. It soon became apparent that they were no longer worth the considerable cost of construction and maintenance and only one new battleship was commissioned after the war, . During the war it had been demonstrated that battleship-on-battleship engagements like Leyte Gulf or the sinking of were the exception and not the rule, and with the growing role of aircraft, engagement ranges were becoming longer and longer, making heavy gun armament irrelevant. The armor of a battleship was equally irrelevant in the face of a nuclear attack as tactical missiles with a range of or more could be mounted on the Soviet and s. By the end of the 1950s, smaller vessel classes such as destroyers, which formerly offered no noteworthy opposition to battleships, now were capable of eliminating battleships from outside the range of the ship's heavy guns. The remaining battleships met a variety of ends. and the were sunk during the testing of nuclear weapons in Operation Crossroads in 1946. Both battleships proved resistant to nuclear air burst but vulnerable to underwater nuclear explosions (in the case of Arkansas, "vulnerable" due to her proximity to the bomb crushing, flipping, and sinking her in less than a second). The was taken by the Soviets as reparations and renamed Novorossiysk; she was sunk by a leftover German mine in the Black Sea on October 29, 1955. The two ships were scrapped in 1956. The was scrapped in 1954, in 1968, and in 1970. The United Kingdom's four surviving ships were scrapped in 1957, and followed in 1960. All other surviving British battleships had been sold or broken up by 1949. The Soviet Union's was scrapped in 1953, in 1957 and (back under her original name, , since 1942) in 1956–57. Brazil's was scrapped in Genoa in 1953, and her sister ship sank during a storm in the Atlantic en route to the breakers in Italy in 1951. Argentina kept its two ships until 1956 and Chile kept (formerly ) until 1959. The Turkish battlecruiser (formerly , launched in 1911) was scrapped in 1976 after an offer to sell her back to Germany was refused. Sweden had several small coastal-defense battleships, one of which, , survived until 1970. The Soviets scrapped four large incomplete cruisers in the late 1950s, whilst plans to build a number of new s were abandoned following the death of Joseph Stalin in 1953. The three old German battleships , , and all met similar ends. Hessen was taken over by the Soviet Union and renamed Tsel. She was scrapped in 1960. Schleswig-Holstein was renamed Borodino, and was used as a target ship until 1960. Schlesien, too, was used as a target ship. She was broken up between 1952 and 1957. The s gained a new lease of life in the U.S. Navy as fire support ships. Radar and computer-controlled gunfire could be aimed with pinpoint accuracy to target. The U.S. recommissioned all four Iowa-class battleships for the Korean War and the for the Vietnam War. These were primarily used for shore bombardment, New Jersey firing nearly 6,000 rounds of 16-inch shells and over 14,000 rounds of 5-inch projectiles during her tour on the gunline, seven times more rounds against shore targets in Vietnam than she had fired in the Second World War. 
As part of Navy Secretary John F. Lehman's effort to build a 600-ship Navy in the 1980s, and in response to the commissioning of by the Soviet Union, the United States recommissioned all four Iowa-class battleships. On several occasions, battleships were support ships in carrier battle groups, or led their own battleship battle group. These were modernized to carry Tomahawk (TLAM) missiles, with New Jersey seeing action bombarding Lebanon in 1983 and 1984, while and fired their 16-inch (406 mm) guns at land targets and launched missiles during Operation Desert Storm in 1991. Wisconsin served as the TLAM strike commander for the Persian Gulf, directing the sequence of launches that marked the opening of Desert Storm, firing a total of 24 TLAMs during the first two days of the campaign. The primary threat to the battleships was Iraqi shore-based surface-to-surface missiles; Missouri was targeted by two Iraqi Silkworm missiles, with one missing and another being intercepted by the British destroyer . End of the battleship era After was stricken in 1962, the four Iowa-class ships were the only battleships in commission or reserve anywhere in the world. There was an extended debate when the four Iowa ships were finally decommissioned in the early 1990s. and were maintained to a standard whereby they could be rapidly returned to service as fire support vessels, pending the development of a superior fire support vessel. These last two battleships were finally stricken from the U.S. Naval Vessel Register in 2006. The Military Balance and Russian Foreign Military Review state that the U.S. Navy listed one battleship in the reserve (Naval Inactive Fleet/Reserve 2nd Turn) in 2010. The Military Balance states the U.S. Navy listed no battleships in the reserve in 2014. When the last Iowa-class ship was finally stricken from the Naval Vessel Register, no battleships remained in service or in reserve with any navy worldwide. A number are preserved as museum ships, either afloat or in drydock. The U.S. has eight battleships on display: , , , , , , , and . Missouri and New Jersey are museums at Pearl Harbor and Camden, New Jersey, respectively. Iowa is on display as an educational attraction at the Los Angeles Waterfront in San Pedro, California. Wisconsin now serves as a museum ship in Norfolk, Virginia. Massachusetts, which has the distinction of never having lost a man during service, is on display at the Battleship Cove naval museum in Fall River, Massachusetts. Texas, the first battleship turned into a museum, is normally on display at the San Jacinto Battleground State Historic Site, near Houston, but as of 2021 is closed for repairs. North Carolina is on display in Wilmington, North Carolina. Alabama is on display in Mobile, Alabama. The wreck of , sunk during the Pearl Harbor attack in 1941, is designated a historical landmark and national gravesite. The wreck of , also sunk during the attack, is a historic landmark. The only other 20th-century battleship on display is the Japanese pre-dreadnought . A replica of the ironclad battleship was built by the Weihai Port Bureau in 2003 and is on display in Weihai, China. Former battleships previously used as museum ships included , , and . Strategy and doctrine Doctrine Battleships were the embodiment of sea power. For American naval officer Alfred Thayer Mahan and his followers, a strong navy was vital to the success of a nation, and control of the seas was essential for the projection of force on land and overseas. 
Mahan's theory, proposed in The Influence of Sea Power Upon History, 1660–1783 (1890), held that the role of the battleship was to sweep the enemy from the seas. While the work of escorting, blockading, and raiding might be done by cruisers or smaller vessels, the presence of the battleship was a potential threat to any convoy escorted by any vessels other than capital ships. This concept of "potential threat" can be further generalized to the mere existence (as opposed to presence) of a powerful fleet tying the opposing fleet down. This concept came to be known as a "fleet in being"—an idle yet mighty fleet forcing others to spend time, resources and effort to actively guard against it. Mahan went on to say victory could only be achieved by engagements between battleships, which came to be known as the decisive battle doctrine in some navies, while targeting merchant ships (commerce raiding or guerre de course, as posited by the Jeune École) could never succeed. Mahan was highly influential in naval and political circles throughout the age of the battleship, calling for a large fleet of the most powerful battleships possible. Mahan's work developed in the late 1880s, and by the end of the 1890s it had acquired much international influence on naval strategy; in the end, it was adopted by many major navies (notably the British, American, German, and Japanese). The strength of Mahanian opinion was important in the development of the battleship arms races, and equally important in the agreement of the Powers to limit battleship numbers in the interwar era. The "fleet in being" concept suggested that battleships could, simply by their existence, tie down superior enemy resources. This in turn was believed to be able to tip the balance of a conflict even without a battle. This suggested that, even for inferior naval powers, a battleship fleet could have an important strategic effect. Tactics While the role of battleships in both World Wars reflected Mahanian doctrine, the details of battleship deployment were more complex. Unlike ships of the line, the battleships of the late 19th and early 20th centuries were significantly vulnerable to torpedoes and mines—weapons that had only recently become effective—which could be used by relatively small and inexpensive craft. The Jeune École doctrine of the 1870s and 1880s recommended placing torpedo boats alongside battleships; these would hide behind the larger ships until gun-smoke obscured visibility enough for them to dart out and fire their torpedoes. While this tactic was made less effective by the development of smokeless propellant, the threat from more capable torpedo craft (later including submarines) remained. By the 1890s, the Royal Navy had developed the first destroyers, which were initially designed to intercept and drive off any attacking torpedo boats. During the First World War and subsequently, battleships were rarely deployed without a protective screen of destroyers. Battleship doctrine emphasized the concentration of the battlegroup. In order for this concentrated force to be able to bring its power to bear on a reluctant opponent (or to avoid an encounter with a stronger enemy fleet), battlefleets needed some means of locating enemy ships beyond horizon range. This was provided by scouting forces; at various stages battlecruisers, cruisers, destroyers, airships, submarines and aircraft were all used. 
(With the development of radio, direction finding and traffic analysis would come into play, as well, so even shore stations, broadly speaking, joined the battlegroup.) So for most of their history, battleships operated surrounded by squadrons of destroyers and cruisers. The North Sea campaign of the First World War illustrates how, despite this support, the threat of mine and torpedo attack, and the failure to integrate or appreciate the capabilities of new techniques, seriously inhibited the operations of the Royal Navy Grand Fleet, the greatest battleship fleet of its time. Strategic and diplomatic impact The presence of battleships had a great psychological and diplomatic impact. Similar to possessing nuclear weapons today, the ownership of battleships served to enhance a nation's force projection. Even during the Cold War, the psychological impact of a battleship was significant. In 1946, USS Missouri was dispatched to deliver the remains of the ambassador from Turkey, and her presence in Turkish and Greek waters staved off a possible Soviet thrust into the Balkan region. In September 1983, when Druze militia in Lebanon's Shouf Mountains fired upon U.S. Marine peacekeepers, the arrival of USS New Jersey stopped the firing. Gunfire from New Jersey later killed militia leaders. Value for money Battleships were the largest and most complex, and hence the most expensive warships of their time; as a result, the value of investment in battleships has always been contested. As the French politician Etienne Lamy wrote in 1879, "The construction of battleships is so costly, their effectiveness so uncertain and of such short duration, that the enterprise of creating an armored fleet seems to leave fruitless the perseverance of a people". The Jeune École school of thought of the 1870s and 1880s sought alternatives to the crippling expense and debatable utility of a conventional battlefleet. It proposed what would nowadays be termed a sea denial strategy, based on fast, long-ranged cruisers for commerce raiding and torpedo boat flotillas to attack enemy ships attempting to blockade French ports. The ideas of the Jeune École were ahead of their time; it was not until the 20th century that efficient mines, torpedoes, submarines, and aircraft were available that allowed similar ideas to be effectively implemented. The determination of powers such as Germany to build battlefleets with which to confront much stronger rivals has been criticized by historians, who emphasise the futility of investment in a battlefleet that has no chance of matching its opponent in an actual battle. Former operators : lost its two Dingyuan-class battleships and during the Battle of Weihaiwei in 1895. : lost its entire navy following the collapse of the Empire at the end of World War I. : its only battleship, KB Jugoslavija, was sunk by Italian frogmen during the 1918 Raid on Pula. : lost its entire navy upon its conquest by the Bolsheviks in 1921. : sole surviving battleship TCG Turgut Reis was decommissioned in 1933. : lost its two surviving s during the Spanish Civil War, both in 1937. : lost its two s during the German bombing of Salamis in 1941. : scuttled its two surviving s in 1945, during the closing months of World War II. : surrendered its sole surviving battleship, , to the United States following World War II. : decommissioned its last battleship, , in 1952. : decommissioned its two s in 1953. : decommissioned its last two s in 1956. : decommissioned its last battleship, , in 1957. 
: decommissioned its last battleship, , in 1958. : decommissioned its last battleship, , in 1960. : decommissioned its last battleship, , in 1970. : decommissioned its last battleship, , in 1992. She was the last active battleship of any navy.
Technology
Naval warfare
null
4099
https://en.wikipedia.org/wiki/Bone
Bone
A bone is a rigid organ that constitutes part of the skeleton in most vertebrate animals. Bones protect the various other organs of the body, produce red and white blood cells, store minerals, provide structure and support for the body, and enable mobility. Bones come in a variety of shapes and sizes and have complex internal and external structures. They are lightweight yet strong and hard and serve multiple functions. Bone tissue (osseous tissue), which is also called bone in the uncountable sense of that word, is hard tissue, a type of specialised connective tissue. It has a honeycomb-like matrix internally, which helps to give the bone rigidity. Bone tissue is made up of different types of bone cells. Osteoblasts and osteocytes are involved in the formation and mineralisation of bone; osteoclasts are involved in the resorption of bone tissue. Modified (flattened) osteoblasts become the lining cells that form a protective layer on the bone surface. The mineralised matrix of bone tissue has an organic component of mainly collagen called ossein and an inorganic component of bone mineral made up of various salts. Bone tissue is mineralized tissue of two types, cortical bone and cancellous bone. Other types of tissue found in bones include bone marrow, endosteum, periosteum, nerves, blood vessels and cartilage. In the human body at birth, approximately 300 bones are present. Many of these fuse together during development, leaving a total of 206 separate bones in the adult, not counting numerous small sesamoid bones. The largest bone in the body is the femur or thigh-bone, and the smallest is the stapes in the middle ear. The Greek word for bone is ὀστέον ("osteon"), hence the many terms that use it as a prefix—such as osteopathy. In anatomical terminology, including the Terminologia Anatomica international standard, the word for a bone is os (for example, os breve, os longum, os sesamoideum). Structure Bone is not uniformly solid, but consists of a flexible matrix (about 30%) and bound minerals (about 70%), which are intricately woven and continuously remodeled by a group of specialized bone cells. Their unique composition and design allows bones to be relatively hard and strong, while remaining lightweight. Bone matrix is 90 to 95% composed of elastic collagen fibers, also known as ossein, and the remainder is ground substance. The elasticity of collagen improves fracture resistance. The matrix is hardened by the binding of inorganic mineral salt, calcium phosphate, in a chemical arrangement known as bone mineral, a form of calcium apatite. It is the mineralization that gives bones rigidity. Bone is actively constructed and remodeled throughout life by specialized bone cells known as osteoblasts and osteoclasts. Within any single bone, the tissue is woven into two main patterns: cortical and cancellous bone, each with a distinct appearance and characteristics. Cortex The hard outer layer of bones is composed of cortical bone, which is also called compact bone as it is much denser than cancellous bone. It forms the hard exterior (cortex) of bones. The cortical bone gives bone its smooth, white, and solid appearance, and accounts for 80% of the total bone mass of an adult human skeleton. It facilitates bone's main functions—to support the whole body, to protect organs, to provide levers for movement, and to store and release chemical elements, mainly calcium. It consists of multiple microscopic columns, each called an osteon or Haversian system. 
Each column is multiple layers of osteoblasts and osteocytes around a central canal called the osteonic canal. Volkmann's canals at right angles connect the osteons together. The columns are metabolically active, and as bone is reabsorbed and created the nature and location of the cells within the osteon will change. Cortical bone is covered by a periosteum on its outer surface, and an endosteum on its inner surface. The endosteum is the boundary between the cortical bone and the cancellous bone. The primary anatomical and functional unit of cortical bone is the osteon. Trabeculae Cancellous bone or spongy bone, also known as trabecular bone, is the internal tissue of the skeletal bone and is an open cell porous network that follows the material properties of biofoams. Cancellous bone has a higher surface-area-to-volume ratio than cortical bone and it is less dense. This makes it weaker and more flexible. The greater surface area also makes it suitable for metabolic activities such as the exchange of calcium ions. Cancellous bone is typically found at the ends of long bones, near joints, and in the interior of vertebrae. Cancellous bone is highly vascular and often contains red bone marrow where hematopoiesis, the production of blood cells, occurs. The primary anatomical and functional unit of cancellous bone is the trabecula. The trabeculae are aligned towards the mechanical load distribution that a bone experiences within long bones such as the femur. As far as short bones are concerned, trabecular alignment has been studied in the vertebral pedicle. Thin formations of osteoblasts covered in endosteum create an irregular network of spaces, known as trabeculae. Within these spaces are bone marrow and hematopoietic stem cells that give rise to platelets, red blood cells and white blood cells. Trabecular marrow is composed of a network of rod- and plate-like elements that make the overall organ lighter and allow room for blood vessels and marrow. Trabecular bone accounts for the remaining 20% of total bone mass but has nearly ten times the surface area of compact bone. The words cancellous and trabecular refer to the tiny lattice-shaped units (trabeculae) that form the tissue. It was first illustrated accurately in the engravings of Crisóstomo Martinez. Marrow Bone marrow, also known as myeloid tissue in red bone marrow, can be found in almost any bone that holds cancellous tissue. In newborns, all such bones are filled exclusively with red marrow or hematopoietic marrow, but as the child ages the hematopoietic fraction decreases in quantity and the fatty/ yellow fraction called marrow adipose tissue (MAT) increases in quantity. In adults, red marrow is mostly found in the bone marrow of the femur, the ribs, the vertebrae and pelvic bones. Vascular supply Bone receives about 10% of cardiac output. Blood enters the endosteum, flows through the marrow, and exits through small vessels in the cortex. In humans, blood oxygen tension in bone marrow is about 6.6%, compared to about 12% in arterial blood, and 5% in venous and capillary blood. Cells Bone is metabolically active tissue composed of several types of cells. These cells include osteoblasts, which are involved in the creation and mineralization of bone tissue, osteocytes, and osteoclasts, which are involved in the reabsorption of bone tissue. Osteoblasts and osteocytes are derived from osteoprogenitor cells, but osteoclasts are derived from the same cells that differentiate to form macrophages and monocytes. 
Within the marrow of the bone there are also hematopoietic stem cells. These cells give rise to other cells, including white blood cells, red blood cells, and platelets. Osteoblast Osteoblasts are mononucleate bone-forming cells. They are located on the surface of osteon seams and make a protein mixture known as osteoid, which mineralizes to become bone. The osteoid seam is a narrow region of newly formed organic matrix, not yet mineralized, located on the surface of a bone. Osteoid is primarily composed of Type I collagen. Osteoblasts also manufacture hormones, such as prostaglandins, to act on the bone itself. The osteoblast creates and repairs new bone by building around itself. First, the osteoblast lays down collagen fibers. These collagen fibers are used as a framework for the osteoblasts' work. The osteoblast then deposits calcium phosphate, which is hardened by hydroxide and bicarbonate ions. The brand-new bone created by the osteoblast is called osteoid. Once the osteoblast has finished working, it becomes trapped inside the bone as the matrix hardens; the trapped cell is then known as an osteocyte. Other osteoblasts remain on top of the new bone and serve to protect the underlying bone; these become known as bone lining cells. Osteocyte Osteocytes are cells of mesenchymal origin and originate from osteoblasts that have migrated into and become trapped and surrounded by a bone matrix that they themselves produced. The spaces the cell bodies of osteocytes occupy within the mineralized collagen type I matrix are known as lacunae, while the osteocyte cell processes occupy channels called canaliculi. The many processes of osteocytes reach out to meet osteoblasts, osteoclasts, bone lining cells, and other osteocytes, probably for the purposes of communication. Osteocytes remain in contact with other osteocytes in the bone through gap junctions—coupled cell processes which pass through the canalicular channels. Osteoclast Osteoclasts are very large multinucleate cells that are responsible for the breakdown of bone by the process of bone resorption. New bone is then formed by the osteoblasts. Bone is constantly remodeled through resorption by osteoclasts and formation by osteoblasts. Osteoclasts are large cells with multiple nuclei located on bone surfaces in what are called Howship's lacunae (or resorption pits). These lacunae are the result of surrounding bone tissue that has been reabsorbed. Because the osteoclasts are derived from a monocyte stem-cell lineage, they are equipped with phagocytic-like mechanisms similar to circulating macrophages. Osteoclasts mature and/or migrate to discrete bone surfaces. Upon arrival, active enzymes, such as tartrate-resistant acid phosphatase, are secreted against the mineral substrate. The reabsorption of bone by osteoclasts also plays a role in calcium homeostasis. Composition Bones consist of living cells (osteoblasts and osteocytes) embedded in a mineralized organic matrix. The primary inorganic component of human bone is hydroxyapatite, the dominant bone mineral, having the nominal composition of Ca10(PO4)6(OH)2. The organic component of this matrix consists mainly of type I collagen ("organic" here referring to materials produced by the body), while the inorganic component, alongside the dominant hydroxyapatite phase, includes other compounds and salts of calcium and phosphate. 
Approximately 30% of the acellular component of bone consists of organic matter, while roughly 70% by mass is attributed to the inorganic phase. The collagen fibers give bone its tensile strength, and the interspersed crystals of hydroxyapatite give bone its compressive strength. These effects are synergistic. The exact composition of the matrix may change over time due to nutrition and biomineralization, with the ratio of calcium to phosphate varying between 1.3 and 2.0 (by weight); trace minerals such as magnesium, sodium, potassium and carbonate may also be found. Type I collagen composes 90–95% of the organic matrix, with the remainder of the matrix being a homogeneous liquid called ground substance, consisting of proteoglycans such as hyaluronic acid and chondroitin sulfate, as well as non-collagenous proteins such as osteocalcin, osteopontin or bone sialoprotein. Collagen consists of strands of repeating units, which give bone tensile strength, and are arranged in an overlapping fashion that resists shear stress. The function of ground substance is not fully known. Two types of bone can be identified microscopically according to the arrangement of collagen: woven and lamellar. Woven bone (also known as fibrous bone) is characterized by a haphazard organization of collagen fibers and is mechanically weak. Lamellar bone has a regular parallel alignment of collagen into sheets ("lamellae") and is mechanically strong. Woven bone is produced when osteoblasts produce osteoid rapidly, which occurs initially in all fetal bones, but it is later replaced by more resilient lamellar bone. In adults, woven bone is created after fractures or in Paget's disease. Woven bone is weaker, with a smaller number of randomly oriented collagen fibers, but forms quickly; it is for this appearance of the fibrous matrix that the bone is termed woven. It is soon replaced by lamellar bone, which is highly organized in concentric sheets with a much lower proportion of osteocytes to surrounding tissue. Lamellar bone, which makes its first appearance in humans in the fetus during the third trimester, is stronger and filled with many collagen fibers parallel to other fibers in the same layer (these parallel columns are called osteons). In cross-section, the fibers run in opposite directions in alternating layers, much like in plywood, assisting in the bone's ability to resist torsion forces. After a fracture, woven bone forms initially and is gradually replaced by lamellar bone during a process known as "bony substitution". Compared to woven bone, lamellar bone formation takes place more slowly. The orderly deposition of collagen fibers restricts the formation of osteoid to about 1 to 2 μm per day. Lamellar bone also requires a relatively flat surface on which to lay the collagen fibers in parallel or concentric layers. Deposition The extracellular matrix of bone is laid down by osteoblasts, which secrete both collagen and ground substance. These cells synthesise collagen alpha polypeptide chains and then secrete collagen molecules. The collagen molecules associate with their neighbors and crosslink via lysyl oxidase to form collagen fibrils. At this stage, they are not yet mineralized, and this zone of unmineralized collagen fibrils is called "osteoid". Around and inside the collagen fibrils, calcium and phosphate eventually precipitate over days to weeks, producing fully mineralized bone with an overall carbonate-substituted hydroxyapatite inorganic phase. 
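As a brief worked illustration (not part of the source text) of the nominal hydroxyapatite formula quoted above, standard atomic masses give its molar mass and calcium-to-phosphorus ratio:

\[
M\big(\mathrm{Ca_{10}(PO_4)_6(OH)_2}\big) = 10(40.08) + 6(30.97 + 4 \times 16.00) + 2(16.00 + 1.01) \approx 1004.6\ \mathrm{g\,mol^{-1}}
\]
\[
\frac{\mathrm{Ca}}{\mathrm{P}}\ (\text{molar}) = \frac{10}{6} \approx 1.67, \qquad \frac{\mathrm{Ca}}{\mathrm{P}}\ (\text{by mass}) = \frac{10 \times 40.08}{6 \times 30.97} \approx 2.2
\]

Biological apatite deviates from this ideal stoichiometry through carbonate and other substitutions, which is consistent with the variable calcium-to-phosphate ratios mentioned above.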
In order to mineralise the bone, the osteoblasts secrete alkaline phosphatase, some of which is carried by vesicles. This cleaves the inhibitory pyrophosphate and simultaneously generates free phosphate ions for mineralization, acting as the foci for calcium and phosphate deposition. Vesicles may initiate some of the early mineralization events by rupturing and acting as a centre for crystals to grow on. Bone mineral may be formed from globular and plate structures, and via initially amorphous phases. Types Five types of bones are found in the human body: long, short, flat, irregular, and sesamoid. Long bones are characterized by a shaft, the diaphysis, that is much longer than its width; and by an epiphysis, a rounded head at each end of the shaft. They are made up mostly of compact bone, with lesser amounts of marrow, located within the medullary cavity, and areas of spongy, cancellous bone at the ends of the bones. Most bones of the limbs, including those of the fingers and toes, are long bones. The exceptions are the eight carpal bones of the wrist, the seven articulating tarsal bones of the ankle and the sesamoid bone of the kneecap. Long bones such as the clavicle, that have a differently shaped shaft or ends are also called modified long bones. Short bones are roughly cube-shaped, and have only a thin layer of compact bone surrounding a spongy interior. Short bones provide stability and support as well as some limited motion. The bones of the wrist and ankle are short bones. Flat bones are thin and generally curved, with two parallel layers of compact bone sandwiching a layer of spongy bone. Most of the bones of the skull are flat bones, as is the sternum. Sesamoid bones are bones embedded in tendons. Since they act to hold the tendon further away from the joint, the angle of the tendon is increased and thus the leverage of the muscle is increased. Examples of sesamoid bones are the patella and the pisiform. Irregular bones do not fit into the above categories. They consist of thin layers of compact bone surrounding a spongy interior. As implied by the name, their shapes are irregular and complicated. Often this irregular shape is due to their many centers of ossification or because they contain bony sinuses. The bones of the spine, pelvis, and some bones of the skull are irregular bones. Examples include the ethmoid and sphenoid bones. Terminology In the study of anatomy, anatomists use a number of anatomical terms to describe the appearance, shape and function of bones. Other anatomical terms are also used to describe the location of bones. Like other anatomical terms, many of these derive from Latin and Greek. Some anatomists still use Latin to refer to bones. The term "osseous", and the prefix "osteo-", referring to things related to bone, are still used commonly today. Some examples of terms used to describe bones include the term "foramen" to describe a hole through which something passes, and a "canal" or "meatus" to describe a tunnel-like structure. A protrusion from a bone can be called a number of terms, including a "condyle", "crest", "spine", "eminence", "tubercle" or "tuberosity", depending on the protrusion's shape and location. In general, long bones are said to have a "head", "neck", and "body". When two bones join, they are said to "articulate". If the two bones have a fibrous connection and are relatively immobile, then the joint is called a "suture". Development The formation of bone is called ossification. 
During the fetal stage of development this occurs by two processes: intramembranous ossification and endochondral ossification. Intramembranous ossification involves the formation of bone from connective tissue, whereas endochondral ossification involves the formation of bone from cartilage. Intramembranous ossification mainly occurs during formation of the flat bones of the skull but also the mandible, maxilla, and clavicles; the bone is formed from connective tissue such as mesenchyme tissue rather than from cartilage. The process includes the development of the ossification center, calcification, trabeculae formation and the development of the periosteum. Endochondral ossification occurs in long bones and most other bones in the body; it involves the development of bone from cartilage. This process includes the development of a cartilage model, its growth and development, the development of the primary and secondary ossification centers, and the formation of articular cartilage and the epiphyseal plates. Endochondral ossification begins with points in the cartilage called "primary ossification centers". They mostly appear during fetal development, though a few short bones begin their primary ossification after birth. They are responsible for the formation of the diaphyses of long bones, short bones and certain parts of irregular bones. Secondary ossification occurs after birth and forms the epiphyses of long bones and the extremities of irregular and flat bones. The diaphysis and both epiphyses of a long bone are separated by a growing zone of cartilage (the epiphyseal plate). At skeletal maturity (18 to 25 years of age), all of the cartilage is replaced by bone, fusing the diaphysis and both epiphyses together (epiphyseal closure). At birth, only the diaphyses of the long bones and the scapula are ossified in the upper limbs; the epiphyses, carpal bones, coracoid process, medial border of the scapula, and acromion are still cartilaginous. The following steps are followed in the conversion of cartilage to bone: Zone of reserve cartilage. This region, farthest from the marrow cavity, consists of typical hyaline cartilage that as yet shows no sign of transforming into bone. Zone of cell proliferation. A little closer to the marrow cavity, chondrocytes multiply and arrange themselves into longitudinal columns of flattened lacunae. Zone of cell hypertrophy. Next, the chondrocytes cease to divide and begin to hypertrophy (enlarge), much like they do in the primary ossification center of the fetus. The walls of the matrix between lacunae become very thin. Zone of calcification. Minerals are deposited in the matrix between the columns of lacunae and calcify the cartilage. These are not the permanent mineral deposits of bone, but only a temporary support for the cartilage that would otherwise soon be weakened by the breakdown of the enlarged lacunae. Zone of bone deposition. Within each column, the walls between the lacunae break down and the chondrocytes die. This converts each column into a longitudinal channel, which is immediately invaded by blood vessels and marrow from the marrow cavity. Osteoblasts line up along the walls of these channels and begin depositing concentric lamellae of matrix, while osteoclasts dissolve the temporarily calcified cartilage. Bone development in youth is extremely important in preventing future complications of the skeletal system. 
Regular exercise during childhood and adolescence can help improve bone architecture, making bones more resilient and less prone to fractures in adulthood. Physical activity, specifically resistance training, stimulates growth of bones by increasing both bone density and strength. Studies have shown a positive correlation between the adaptations of resistance training and bone density. While nutritional and pharmacological approaches may also improve bone health, the strength and balance adaptations from resistance training are a substantial added benefit. Weight-bearing exercise may assist in the formation of osteoblasts (bone-forming cells) and help to increase bone mineral content. High-impact sports, which involve quick changes in direction, jumping, and running, are particularly effective at stimulating bone growth in young people. Sports such as soccer, basketball, and tennis have been shown to have positive effects on bone mineral density as well as bone mineral content in teenagers. Engaging in physical activity during childhood, particularly in these high-impact osteogenic sports, can help to positively influence bone mineral density in adulthood. Children and adolescents who participate in regular physical activity lay the groundwork for bone health later in life, reducing the risk of bone-related conditions such as osteoporosis. Functions Bones have a variety of functions: Mechanical Bones serve a variety of mechanical functions. Together the bones in the body form the skeleton. They provide a frame to keep the body supported, and an attachment point for skeletal muscles, tendons, ligaments and joints, which function together to generate and transfer forces so that individual body parts or the whole body can be manipulated in three-dimensional space (the interaction between bone and muscle is studied in biomechanics). Bones protect internal organs, such as the skull protecting the brain or the ribs protecting the heart and lungs. Because of the way that bone is formed, bone has a high compressive strength of about , poor tensile strength of 104–121 MPa, and a very low shear stress strength (51.6 MPa). This means that bone resists pushing (compressional) stress well, resists pulling (tensional) stress less well, and only poorly resists shear stress (such as due to torsional loads). While bone is essentially brittle, bone does have a significant degree of elasticity, contributed chiefly by collagen. Mechanically, bones also have a special role in hearing. The ossicles are three small bones in the middle ear which are involved in sound transduction. Synthetic The cancellous part of bones contains bone marrow. Bone marrow produces blood cells in a process called hematopoiesis. Blood cells that are created in bone marrow include red blood cells, platelets and white blood cells. Progenitor cells such as the hematopoietic stem cell divide in a process called mitosis to produce precursor cells. These include precursors which eventually give rise to white blood cells, and erythroblasts which give rise to red blood cells. Unlike red and white blood cells, which are created by mitosis, platelets are shed from very large cells called megakaryocytes. This process of progressive differentiation occurs within the bone marrow. After the cells are matured, they enter the circulation. Every day, over 2.5 billion red blood cells and platelets, and 50–100 billion granulocytes are produced in this way. 
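Taking the production figures quoted above at face value, a quick conversion (an illustrative calculation, not from the source) gives the average rates per second:

\[
\frac{2.5 \times 10^{9}}{86{,}400\ \mathrm{s}} \approx 2.9 \times 10^{4}\ \text{red blood cells and platelets per second}, \qquad \frac{(0.5\text{–}1) \times 10^{11}}{86{,}400\ \mathrm{s}} \approx 0.6\text{–}1.2 \times 10^{6}\ \text{granulocytes per second.}
\]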
As well as creating cells, bone marrow is also one of the major sites where defective or aged red blood cells are destroyed. Metabolic Mineral storage – bones act as reserves of minerals important for the body, most notably calcium and phosphorus. Depending on the species, age, and type of bone, bone cells make up to 15 percent of the bone. Growth factor storage – mineralized bone matrix stores important growth factors such as insulin-like growth factors, transforming growth factor, bone morphogenetic proteins and others. Fat storage – marrow adipose tissue (MAT) acts as a storage reserve of fatty acids. Acid-base balance – bone buffers the blood against excessive pH changes by absorbing or releasing alkaline salts. Detoxification – bone tissues can also store heavy metals and other foreign elements, removing them from the blood and reducing their effects on other tissues. These can later be gradually released for excretion. Endocrine organ – bone controls phosphate metabolism by releasing fibroblast growth factor 23 (FGF-23), which acts on kidneys to reduce phosphate reabsorption. Bone cells also release a hormone called osteocalcin, which contributes to the regulation of blood sugar (glucose) and fat deposition. Osteocalcin increases both insulin secretion and insulin sensitivity, in addition to boosting the number of insulin-producing cells and reducing stores of fat. Calcium balance – the process of bone resorption by the osteoclasts releases stored calcium into the systemic circulation and is an important process in regulating calcium balance. As bone formation actively fixes circulating calcium in its mineral form, removing it from the bloodstream, resorption actively unfixes it, thereby increasing circulating calcium levels. These processes occur in tandem at site-specific locations. Calcium Strong bones during youth are essential for preventing osteoporosis and bone fragility later in life. Supporting the factors that increase bone mineral density (BMD) while limiting bone degradation during childhood lays the foundation for a healthy lifestyle and lifelong bone health. Bone stores typically peak by about the age of 30 and decline thereafter, so building larger stores and higher BMD earlier in life reduces the harmful consequences seen in older adulthood. Fragile bones in childhood are associated with disorders such as juvenile osteoporosis; although this condition is relatively uncommon, a healthy routine that supports bone development remains essential in youth. Children who naturally have lower bone mineral density tend to have a lower, less comfortable quality of life. Increases in calcium intake have been shown to increase BMD stores. Studies indicate that raising calcium intake, whether through supplementation or through foods and beverages such as leafy greens and milk, increases BMD in prepubertal and early pubertal children. Other research indicates that long-term calcium intake contributes significantly to overall BMD in children without particular conditions or disorders. 
These data indicate that adequate calcium intake in children supports both the structure of bone and the rate at which it densifies. A nutritional plan with adequate calcium sources can therefore build strong bones and is also a worthwhile strategy for preventing damage to, or degradation of, bone stores with age. The relationship between calcium intake and BMD in youth is a worldwide issue and has been shown to affect different ethnic groups in different ways. One recent study found a strong correlation between calcium intake and BMD across diverse populations of children and adolescents, concluding that achieving optimal bone health is also necessary to support the hormonal changes young people undergo. In a sample of over 10,000 children aged 8-19, the authors observed that among females, African Americans, and the 12-15 adolescent group, BMD began to decrease at intakes of 2.6-2.8 g/kg of body weight, which they attributed largely to a lower baseline of calcium intake throughout puberty. Genetic factors have also been shown to influence how well calcium stores are built up. Ultimately, the window that youth have for accruing and building resilient bone is narrow. Consistently meeting calcium needs while also engaging in weight-bearing exercise is essential for building a strong initial bone foundation on which to build. Reaching the daily value of 1,300 mg for ages 9-18, alongside these other measures, can greatly reduce the risk of osteoporosis, bone fragility, and stunted growth, supporting a healthier lifestyle. Remodeling Bone is constantly being created and replaced in a process known as remodeling. This ongoing turnover of bone is a process of resorption followed by replacement of bone with little change in shape. This is accomplished through osteoblasts and osteoclasts. Cells are stimulated by a variety of signals and together are referred to as a remodeling unit. Approximately 10% of the skeletal mass of an adult is remodelled each year. The purpose of remodeling is to regulate calcium homeostasis, repair microdamaged bone from everyday stress, and shape the skeleton during growth. Repeated stress, such as weight-bearing exercise or bone healing, results in the bone thickening at the points of maximum stress (Wolff's law). It has been hypothesized that this is a result of bone's piezoelectric properties, which cause bone to generate small electrical potentials under stress. The action of osteoblasts and osteoclasts is controlled by a number of chemical enzymes that either promote or inhibit the activity of the bone remodeling cells, controlling the rate at which bone is made, destroyed, or changed in shape. The cells also use paracrine signalling to control the activity of each other. For example, the rate at which osteoclasts resorb bone is inhibited by calcitonin and osteoprotegerin. Calcitonin is produced by parafollicular cells in the thyroid gland, and can bind to receptors on osteoclasts to directly inhibit osteoclast activity. Osteoprotegerin is secreted by osteoblasts and is able to bind RANK-L, inhibiting osteoclast stimulation. 
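As a rough, illustrative sketch (not from the source) of the remodeling balance described in this section, the toy model below assumes the roughly 10% yearly turnover quoted above and shows how a small mismatch between resorption and formation compounds over decades; the starting skeletal mass and the formation-efficiency figure are arbitrary placeholder values.

```python
# Toy model of yearly bone remodeling balance (illustrative only).
# Assumes ~10% of skeletal mass is resorbed and rebuilt each year, as quoted above;
# formation_efficiency < 1.0 represents net bone loss (e.g., as in osteoporosis).

def simulate_bone_mass(initial_mass_kg=10.0, years=30,
                       remodel_fraction=0.10, formation_efficiency=0.97):
    """Return the bone mass (kg) after each simulated year."""
    mass = initial_mass_kg
    history = []
    for _ in range(years):
        resorbed = mass * remodel_fraction          # bone removed by osteoclasts this year
        formed = resorbed * formation_efficiency    # bone replaced by osteoblasts this year
        mass = mass - resorbed + formed
        history.append(mass)
    return history

if __name__ == "__main__":
    trajectory = simulate_bone_mass()
    # With these placeholder numbers, a 3% shortfall per remodeling cycle
    # leaves roughly 9.1 kg of the original 10 kg after 30 years.
    print(f"Mass after 30 years: {trajectory[-1]:.2f} kg")
```

With formation_efficiency set to 1.0 the mass stays constant, mirroring the balanced turnover of healthy remodeling.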
Osteoblasts can also be stimulated to increase bone mass through increased secretion of osteoid and by inhibiting the ability of osteoclasts to break down osseous tissue. Increased secretion of osteoid is stimulated by the secretion of growth hormone by the pituitary, thyroid hormone and the sex hormones (estrogens and androgens). These hormones also promote increased secretion of osteoprotegerin. Osteoblasts can also be induced to secrete a number of cytokines that promote reabsorption of bone by stimulating osteoclast activity and differentiation from progenitor cells. Vitamin D, parathyroid hormone and stimulation from osteocytes induce osteoblasts to increase secretion of RANK-ligand and interleukin 6, which cytokines then stimulate increased reabsorption of bone by osteoclasts. These same compounds also increase secretion of macrophage colony-stimulating factor by osteoblasts, which promotes the differentiation of progenitor cells into osteoclasts, and decrease secretion of osteoprotegerin. Volume Bone volume is determined by the rates of bone formation and bone resorption. Certain growth factors may work to locally alter bone formation by increasing osteoblast activity. Numerous bone-derived growth factors have been isolated and classified via bone cultures. These factors include insulin-like growth factors I and II, transforming growth factor-beta, fibroblast growth factor, platelet-derived growth factor, and bone morphogenetic proteins. Evidence suggests that bone cells produce growth factors for extracellular storage in the bone matrix. The release of these growth factors from the bone matrix could cause the proliferation of osteoblast precursors. Essentially, bone growth factors may act as potential determinants of local bone formation. Cancellous bone volume in postmenopausal osteoporosis may be determined by the relationship between the total bone forming surface and the percent of surface resorption. Clinical significance A number of diseases can affect bone, including arthritis, fractures, infections, osteoporosis and tumors. Conditions relating to bone can be managed by a variety of doctors, including rheumatologists for joints, and orthopedic surgeons, who may conduct surgery to fix broken bones. Other doctors, such as rehabilitation specialists may be involved in recovery, radiologists in interpreting the findings on imaging, and pathologists in investigating the cause of the disease, and family doctors may play a role in preventing complications of bone disease such as osteoporosis. When a doctor sees a patient, a history and exam will be taken. Bones are then often imaged, called radiography. This might include ultrasound X-ray, CT scan, MRI scan and other imaging such as a Bone scan, which may be used to investigate cancer. Other tests such as a blood test for autoimmune markers may be taken, or a synovial fluid aspirate may be taken. Fractures In normal bone, fractures occur when there is significant force applied or repetitive trauma over a long time. Fractures can also occur when a bone is weakened, such as with osteoporosis, or when there is a structural problem, such as when the bone remodels excessively (such as Paget's disease) or is the site of the growth of cancer. Common fractures include wrist fractures and hip fractures, associated with osteoporosis, vertebral fractures associated with high-energy trauma and cancer, and fractures of long-bones. Not all fractures are painful. 
When serious, depending on the fracture's type and location, complications may include flail chest, compartment syndromes or fat embolism. Compound fractures involve the bone's penetration through the skin. Some complex fractures can be treated by the use of bone grafting procedures that replace missing bone portions. Fractures and their underlying causes can be investigated by X-rays, CT scans and MRIs. Fractures are described by their location and shape, and several classification systems exist, depending on the location of the fracture. A common long bone fracture in children is a Salter–Harris fracture. When fractures are managed, pain relief is often given, and the fractured area is often immobilised. This is to promote bone healing. In addition, surgical measures such as internal fixation may be used. Because of the immobilisation, people with fractures are often advised to undergo rehabilitation. Tumors Tumors can affect bone in several ways. Examples of benign bone tumors include osteoma, osteoid osteoma, osteochondroma, osteoblastoma, enchondroma, giant-cell tumor of bone, and aneurysmal bone cyst. Cancer Cancer can arise in bone tissue, and bones are also a common site for other cancers to spread (metastasise) to. Cancers that arise in bone are called "primary" cancers, although such cancers are rare. Metastases within bone are "secondary" cancers, with the most common being breast cancer, lung cancer, prostate cancer, thyroid cancer, and kidney cancer. Secondary cancers that affect bone can either destroy bone (called a "lytic" cancer) or create bone (a "sclerotic" cancer). Cancers of the bone marrow inside the bone can also affect bone tissue, examples including leukemia and multiple myeloma. Bone may also be affected by cancers in other parts of the body. Cancers in other parts of the body may release parathyroid hormone or parathyroid hormone-related peptide. This increases bone reabsorption, and can lead to bone fractures. Bone tissue that is destroyed or altered as a result of cancers is distorted, weakened, and more prone to fracture. This may lead to compression of the spinal cord, destruction of the marrow resulting in bruising, bleeding and immunosuppression, and is one cause of bone pain. If the cancer is metastatic, then there might be other symptoms depending on the site of the original cancer. Some bone cancers can also be felt. Cancers of the bone are managed according to their type, their stage, prognosis, and what symptoms they cause. Many primary cancers of bone are treated with radiotherapy. Cancers of bone marrow may be treated with chemotherapy, and other forms of targeted therapy such as immunotherapy may be used. Palliative care, which focuses on maximising a person's quality of life, may play a role in management, particularly if the likelihood of survival within five years is poor. Diabetes Type 1 diabetes is an autoimmune disease in which the body attacks the insulin-producing cells of the pancreas, so that the body does not make enough insulin. In contrast, in type 2 diabetes the body produces enough insulin but becomes resistant to it over time. Children make up approximately 85% of type 1 diabetes cases, and in the United States there was an average 22% rise in cases over the first 24 months of the COVID-19 pandemic. As the incidence of diabetes continues to grow across all age groups, its effects on bone development and bone health in these populations are still being researched. 
Most evidence suggests that diabetes, whether type 1 or type 2, inhibits osteoblastic activity and lowers both BMD and BMC in adults and children. The weakening of these developmental aspects is thought to lead to an increased risk of developing diseases such as osteoarthritis, osteoporosis and osteopenia, as well as fractures. Development of any of these diseases is thought to be correlated with a decreased ability to perform in athletic environments and in activities of daily living. Focusing on therapies that target molecules like osteocalcin or AGEs could provide new ways to improve bone health and help manage the complications of diabetes more effectively. Other painful conditions Osteomyelitis is inflammation of the bone or bone marrow due to bacterial infection. Osteomalacia is a painful softening of adult bone caused by severe vitamin D deficiency. Osteogenesis imperfecta Osteochondritis dissecans Ankylosing spondylitis Skeletal fluorosis is a bone disease caused by an excessive accumulation of fluoride in the bones. In advanced cases, skeletal fluorosis damages bones and joints and is painful. Osteoporosis Osteoporosis is a disease of bone in which there is reduced bone mineral density, increasing the likelihood of fractures. Osteoporosis is defined in women by the World Health Organization as a bone mineral density 2.5 standard deviations below peak bone mass, relative to the average for a healthy young adult of the same sex. This density is measured using dual energy X-ray absorptiometry (DEXA), with the term "established osteoporosis" including the presence of a fragility fracture. Osteoporosis is most common in women after menopause, when it is called "postmenopausal osteoporosis", but may develop in men and premenopausal women in the presence of particular hormonal disorders and other chronic diseases or as a result of smoking and medications, specifically glucocorticoids. Osteoporosis usually has no symptoms until a fracture occurs. For this reason, DEXA scans are often done in people with one or more risk factors, who may have developed osteoporosis and be at risk of fracture. One of the most important risk factors for osteoporosis is advanced age. Accumulation of oxidative DNA damage in osteoblastic and osteoclastic cells appears to be a key factor in age-related osteoporosis. Osteoporosis treatment includes advice to stop smoking, decrease alcohol consumption, exercise regularly, and have a healthy diet. Calcium and trace mineral supplements may also be advised, as may vitamin D. When medication is used, it may include bisphosphonates, strontium ranelate, and hormone replacement therapy. Osteopathic medicine Osteopathic medicine is a school of medical thought that links the musculoskeletal system to overall health. , over 77,000 physicians in the United States are trained in osteopathic medical schools. Bone health Bone health is important throughout life: without strong, healthy bones people are at greater risk of chronic diseases and fractures, and day-to-day function becomes more difficult. Developing strong bones in childhood is one of the most important steps toward lifelong bone health, because this is when the foundation is built that makes musculoskeletal health much easier to maintain in later years. Adolescence offers a window in which bone development can be influenced either positively or negatively. 
It is estimated that diet and exercise during these years can affect adult peak bone mass by nearly 20–40%. One study of children with developmental coordination disorder found increases in bone mass of up to 4% and 5% in the cortical areas of the tibia from a 13-week training period alone, a notable result given that participants performed the multimodal workouts only twice per week; larger increases might reasonably be expected with more frequent training, especially in youth without developmental coordination disorder. Peak bone mass is reached between the second and third decade of most people's lives. Building as much bone mass as possible during this period, by living an active lifestyle and eating a diet with adequate calcium and vitamin D, increases BMD and BMC, provides an advantage in later life, and reduces the risk of chronic diseases such as osteoporosis. Osteology The study of bones and teeth is referred to as osteology. It is frequently used in anthropology, archeology and forensic science for a variety of tasks. This can include determining the nutritional, health, age or injury status of the individual the bones were taken from. Preparing fleshed bones for these types of studies can involve the process of maceration. Typically anthropologists and archeologists study bone tools made by Homo sapiens and Homo neanderthalensis. Bones can serve a number of uses such as projectile points or artistic pigments, and tools can also be made from external bones such as antlers. Other animals Bird skeletons are very lightweight. Their bones are smaller and thinner, to aid flight. Among mammals, bats come closest to birds in terms of bone density, suggesting that small dense bones are a flight adaptation. Many bird bones have little marrow because they are hollow. A bird's beak is primarily made of bone, as projections of the mandibles which are covered in keratin. Some bones, primarily formed separately in subcutaneous tissues, include headgear (such as the bony cores of horns, antlers and ossicones), osteoderms, and the os penis/os clitoris. A deer's antlers are composed of bone, which is an unusual example of bone being outside the skin of the animal once the velvet is shed. The extinct predatory fish Dunkleosteus had sharp edges of hard exposed bone along its jaws. The proportion of cortical bone, which is 80% in the human skeleton, may be much lower in other animals, especially in marine mammals and marine turtles, or in various Mesozoic marine reptiles, such as ichthyosaurs, among others. This proportion can vary quickly in evolution; it often increases in early stages of returns to an aquatic lifestyle, as seen in early whales and pinnipeds, among others. It subsequently decreases in pelagic taxa, which typically acquire spongy bone, but aquatic taxa that live in shallow water can retain very thick, pachyostotic, osteosclerotic, or pachyosteosclerotic bones, especially if they move slowly, like sea cows. In some cases, even marine taxa that had acquired spongy bone can revert to thicker, compact bones if they become adapted to live in shallow water, or in hypersaline (denser) water. Many animals, particularly herbivores, practice osteophagy—the eating of bones. This is presumably carried out in order to replenish lacking phosphate. Many bone diseases that affect humans also affect other vertebrates—an example of one disorder is skeletal fluorosis. 
Society and culture Bones from slaughtered animals have a number of uses. In prehistoric times, they were used for making bone tools. They have further been used in bone carving, already important in prehistoric art, and also in modern times as crafting materials for buttons, beads, handles, bobbins, calculation aids, head nuts, dice, poker chips, pick-up sticks, arrows, scrimshaw, ornaments, etc. Bone glue can be made by prolonged boiling of ground or cracked bones, followed by filtering and evaporation to thicken the resulting fluid. Historically once important, bone glue and other animal glues today have only a few specialized uses, such as in antiques restoration. Essentially the same process, with further refinement, thickening and drying, is used to make gelatin. Broth is made by simmering several ingredients for a long time, traditionally including bones. Bone char, a porous, black, granular material primarily used for filtration and also as a black pigment, is produced by charring mammal bones. Oracle bone script was a writing system used in ancient China based on inscriptions in bones. Its name originates from oracle bones, which were mainly ox scapulae (shoulder blades). The ancient Chinese (mainly in the Shang dynasty) would write their questions on the oracle bone, apply heat until the bone cracked, and interpret the pattern of cracks as the answer to the questions. Pointing the bone at someone is considered bad luck in some cultures, such as among Australian Aboriginal peoples, for example by the Kurdaitcha. The wishbones of fowl have been used for divination, and are still customarily used in a tradition to determine which one of two people pulling on either prong of the bone may make a wish. Various cultures throughout history have adopted the custom of shaping an infant's head by the practice of artificial cranial deformation. A widely practised custom in China was that of foot binding to limit the normal growth of the foot.
Biology and health sciences
Biology
null
4101
https://en.wikipedia.org/wiki/Brouwer%20fixed-point%20theorem
Brouwer fixed-point theorem
Brouwer's fixed-point theorem is a fixed-point theorem in topology, named after L. E. J. (Bertus) Brouwer. It states that for any continuous function f mapping a nonempty compact convex set to itself, there is a point x0 such that f(x0) = x0. The simplest forms of Brouwer's theorem are for continuous functions from a closed interval in the real numbers to itself or from a closed disk to itself. A more general form than the latter is for continuous functions from a nonempty convex compact subset of Euclidean space to itself. Among hundreds of fixed-point theorems, Brouwer's is particularly well known, due in part to its use across numerous fields of mathematics. In its original field, this result is one of the key theorems characterizing the topology of Euclidean spaces, along with the Jordan curve theorem, the hairy ball theorem, the invariance of dimension and the Borsuk–Ulam theorem. This gives it a place among the fundamental theorems of topology. The theorem is also used for proving deep results about differential equations and is covered in most introductory courses on differential geometry. It appears in unlikely fields such as game theory. In economics, Brouwer's fixed-point theorem and its extension, the Kakutani fixed-point theorem, play a central role in the proof of existence of general equilibrium in market economies as developed in the 1950s by economics Nobel prize winners Kenneth Arrow and Gérard Debreu. The theorem was first studied in view of work on differential equations by the French mathematicians around Henri Poincaré and Charles Émile Picard. Proving results such as the Poincaré–Bendixson theorem requires the use of topological methods. This work at the end of the 19th century opened the way to several successive versions of the theorem. The case of differentiable mappings of the n-dimensional closed ball was first proved in 1910 by Jacques Hadamard and the general case for continuous mappings by Brouwer in 1911. Statement The theorem has several formulations, depending on the context in which it is used and its degree of generalization. The simplest is sometimes given as follows: In the plane Every continuous function from a closed disk to itself has at least one fixed point. This can be generalized to an arbitrary finite dimension: In Euclidean space Every continuous function from a closed ball of a Euclidean space into itself has a fixed point. A slightly more general version is as follows: Convex compact set Every continuous function from a nonempty convex compact subset K of a Euclidean space to K itself has a fixed point. An even more general form is better known under a different name: Schauder fixed point theorem Every continuous function from a nonempty convex compact subset K of a Banach space to K itself has a fixed point. Importance of the pre-conditions The theorem holds only for functions that are endomorphisms (functions that have the same set as the domain and codomain) and for nonempty sets that are compact (thus, in particular, bounded and closed) and convex (or homeomorphic to convex). The following examples show why the pre-conditions are important. The function f as an endomorphism Consider the function f(x) = x + 1 with domain [−1,1]. The range of the function is [0,2]. Thus, f is not an endomorphism. Boundedness Consider the function f(x) = x + 1, which is a continuous function from the real line to itself. As it shifts every point to the right, it cannot have a fixed point. The space of real numbers is convex and closed, but not bounded. 
Closedness Consider the function f(x) = (x + 1)/2, which is a continuous function from the open interval (−1,1) to itself. Since x = 1 is not part of the interval, there is no point of the interval with f(x) = x. The space (−1,1) is convex and bounded, but not closed. On the other hand, the function f does have a fixed point for the closed interval [−1,1], namely f(1) = 1. (The domain of f is (−1,1) but the range is (0,1), which are not the same. One of the conditions stated earlier for functions satisfying the theorem is that the domain and codomain be the same, not that one be a subset of the other. Thus the reason for f failing is not closure.) Convexity Convexity is not strictly necessary for Brouwer's fixed-point theorem. Because the properties involved (continuity, being a fixed point) are invariant under homeomorphisms, Brouwer's fixed-point theorem is equivalent to forms in which the domain is required to be a closed unit ball. For the same reason it holds for every set that is homeomorphic to a closed ball (and therefore also closed, bounded, connected, without holes, etc.). The following example shows that Brouwer's fixed-point theorem does not work for domains with holes. Consider the function f(x) = −x, which is a continuous function from the unit circle to itself. Since −x ≠ x holds for any point of the unit circle, f has no fixed point. The analogous example works for the n-dimensional sphere (or any symmetric domain that does not contain the origin). The unit circle is closed and bounded, but it has a hole (and so it is not convex). The function f does have a fixed point for the unit disc, since it takes the origin to itself. A formal generalization of Brouwer's fixed-point theorem for "hole-free" domains can be derived from the Lefschetz fixed-point theorem.
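In one dimension the theorem can be checked directly: if a continuous f maps [a, b] into itself, then h(x) = f(x) − x is non-negative at a and non-positive at b, so the intermediate value theorem gives a zero of h, which is a fixed point of f. The following Python sketch is illustrative only; the sample map (cosine restricted to [0, 1]) is a hypothetical choice, not taken from the article, and bisection is used simply to locate the fixed point numerically.

```python
# Illustrative sketch: locating a fixed point of a continuous map f: [a, b] -> [a, b]
# by bisection on h(x) = f(x) - x. The sample map below (cos restricted to [0, 1])
# is a hypothetical example chosen for demonstration.
import math

def fixed_point(f, a, b, tol=1e-12):
    """Return an approximate fixed point of f on [a, b], assuming f maps [a, b] into itself."""
    h = lambda x: f(x) - x
    lo, hi = a, b          # h(lo) >= 0 and h(hi) <= 0 because f([a, b]) lies inside [a, b]
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h(mid) >= 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x_star = fixed_point(math.cos, 0.0, 1.0)
print(x_star, math.cos(x_star))   # approx. 0.739085..., where cos(x) = x
```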
Mathematics
Topology
null
4106
https://en.wikipedia.org/wiki/Benzoic%20acid
Benzoic acid
Benzoic acid is a white (or colorless) solid organic compound with the formula C6H5COOH, whose structure consists of a benzene ring with a carboxyl (–COOH) substituent. The benzoyl group is often abbreviated "Bz" (not to be confused with "Bn," which is used for benzyl), thus benzoic acid is also denoted as BzOH, since the benzoyl group has the formula C6H5CO–. It is the simplest aromatic carboxylic acid. The name is derived from gum benzoin, which was for a long time its only source. Benzoic acid occurs naturally in many plants and serves as an intermediate in the biosynthesis of many secondary metabolites. Salts of benzoic acid are used as food preservatives. Benzoic acid is an important precursor for the industrial synthesis of many other organic substances. The salts and esters of benzoic acid are known as benzoates. History Benzoic acid was discovered in the sixteenth century. The dry distillation of gum benzoin was first described by Nostradamus (1556), and then by Alexius Pedemontanus (1560) and Blaise de Vigenère (1596). Justus von Liebig and Friedrich Wöhler determined the composition of benzoic acid; they also investigated how hippuric acid is related to benzoic acid. In 1875 Salkowski discovered the antifungal properties of benzoic acid, which explains the preservation of benzoate-containing cloudberry fruits. Production Industrial preparations Benzoic acid is produced commercially by partial oxidation of toluene with oxygen. The process is catalyzed by cobalt or manganese naphthenates. The process uses abundant materials, and proceeds in high yield. The first industrial process involved the reaction of benzotrichloride (trichloromethyl benzene) with calcium hydroxide in water, using iron or iron salts as catalyst. The resulting calcium benzoate is converted to benzoic acid with hydrochloric acid. The product contains significant amounts of chlorinated benzoic acid derivatives. For this reason, benzoic acid for human consumption was obtained by dry distillation of gum benzoin. Food-grade benzoic acid is now produced synthetically. Laboratory synthesis Benzoic acid is cheap and readily available, so the laboratory synthesis of benzoic acid is mainly practiced for its pedagogical value. It is a common undergraduate preparation. Benzoic acid can be purified by recrystallization from water because of its high solubility in hot water and poor solubility in cold water. The avoidance of organic solvents for the recrystallization makes this experiment particularly safe. This process usually gives a yield of around 65%. By hydrolysis Like other nitriles and amides, benzonitrile and benzamide can be hydrolyzed to benzoic acid or its conjugate base in acid or basic conditions. From Grignard reagent Bromobenzene can be converted to benzoic acid by "carboxylation" of the intermediate phenylmagnesium bromide. This synthesis offers a convenient exercise for students to carry out a Grignard reaction, an important class of carbon–carbon bond forming reaction in organic chemistry. Oxidation of benzyl compounds Benzyl alcohol and benzyl chloride and virtually all benzyl derivatives are readily oxidized to benzoic acid. Uses Benzoic acid is mainly consumed in the production of phenol by oxidative decarboxylation at 300−400 °C: C6H5CO2H + ½ O2 → C6H5OH + CO2. The temperature required can be lowered to 200 °C by the addition of catalytic amounts of copper(II) salts. The phenol can be converted to cyclohexanol, which is a starting material for nylon synthesis. 
Precursor to plasticizers Benzoate plasticizers, such as the glycol-, diethyleneglycol-, and triethyleneglycol esters, are obtained by transesterification of methyl benzoate with the corresponding diol. These plasticizers, which are used similarly to those derived from terephthalic acid ester, represent alternatives to phthalates. Precursor to sodium benzoate and related preservatives Benzoic acid and its salts are used as food preservatives, represented by the E numbers E210, E211, E212, and E213. Benzoic acid inhibits the growth of mold, yeast and some bacteria. It is either added directly or created from reactions with its sodium, potassium, or calcium salt. The mechanism starts with the absorption of benzoic acid into the cell. If the intracellular pH changes to 5 or lower, the anaerobic fermentation of glucose through phosphofructokinase is decreased by 95%. The efficacy of benzoic acid and benzoate is thus dependent on the pH of the food. Benzoic acid, benzoates and their derivatives are used as preservatives for acidic foods and beverages such as citrus fruit juices (citric acid), sparkling drinks (carbon dioxide), soft drinks (phosphoric acid), pickles (vinegar) and other acidified foods. Typical concentrations of benzoic acid as a preservative in food are between 0.05 and 0.1%. Foods in which benzoic acid may be used and maximum levels for its application are controlled by local food laws. Concern has been expressed that benzoic acid and its salts may react with ascorbic acid (vitamin C) in some soft drinks, forming small quantities of carcinogenic benzene. Medicinal Benzoic acid is a constituent of Whitfield's ointment which is used for the treatment of fungal skin diseases such as ringworm and athlete's foot. As the principal component of gum benzoin, benzoic acid is also a major ingredient in both tincture of benzoin and Friar's balsam. Such products have a long history of use as topical antiseptics and inhalant decongestants. Benzoic acid was used as an expectorant, analgesic, and antiseptic in the early 20th century. Niche and laboratory uses In teaching laboratories, benzoic acid is a common standard for calibrating a bomb calorimeter. Biology and health effects Benzoic acid occurs naturally as do its esters in many plant and animal species. Appreciable amounts are found in most berries (around 0.05%). Ripe fruits of several Vaccinium species (e.g., cranberry, V. vitis macrocarpon; bilberry, V. myrtillus) contain as much as 0.03–0.13% free benzoic acid. Benzoic acid is also formed in apples after infection with the fungus Nectria galligena. Among animals, benzoic acid has been identified primarily in omnivorous or phytophageous species, e.g., in viscera and muscles of the rock ptarmigan (Lagopus muta) as well as in gland secretions of male muskoxen (Ovibos moschatus) or Asian bull elephants (Elephas maximus). Gum benzoin contains up to 20% of benzoic acid and 40% benzoic acid esters. In terms of its biosynthesis, benzoate is produced in plants from cinnamic acid. A pathway has been identified from phenol via 4-hydroxybenzoate. Reactions Reactions of benzoic acid can occur at either the aromatic ring or at the carboxyl group. Aromatic ring Electrophilic aromatic substitution reaction will take place mainly in 3-position due to the electron-withdrawing carboxylic group; i.e. benzoic acid is meta directing. Carboxyl group Reactions typical for carboxylic acids apply also to benzoic acid. Benzoate esters are the product of the acid catalysed reaction with alcohols. 
Benzoic acid amides are usually prepared from benzoyl chloride. Dehydration to benzoic anhydride is induced with acetic anhydride or phosphorus pentoxide. Highly reactive acid derivatives such as acid halides are easily obtained by mixing with halogenation agents like phosphorus chlorides or thionyl chloride. Orthoesters can be obtained by the reaction of alcohols with benzonitrile under acidic, water-free conditions. Reduction to benzaldehyde and benzyl alcohol is possible using DIBAL-H, LiAlH4 or sodium borohydride. Decarboxylation to benzene may be effected by heating in quinoline in the presence of copper salts. Hunsdiecker decarboxylation can be achieved by heating the silver salt. Safety and mammalian metabolism Benzoic acid is excreted as hippuric acid. It is metabolized by butyrate-CoA ligase into an intermediate product, benzoyl-CoA, which is then metabolized by glycine N-acyltransferase into hippuric acid. Humans also metabolize toluene, which is likewise excreted as hippuric acid. For humans, the World Health Organization's International Programme on Chemical Safety (IPCS) suggests a provisional tolerable intake of 5 mg/kg body weight per day. Cats have a significantly lower tolerance against benzoic acid and its salts than rats and mice. The lethal dose for cats can be as low as 300 mg/kg body weight. The oral LD50 for rats is 3040 mg/kg; for mice it is 1940–2263 mg/kg. In Taipei, Taiwan, a city health survey in 2010 found that 30% of dried and pickled food products contained benzoic acid.
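The pH dependence of benzoic acid's preservative action described earlier can be illustrated with the Henderson–Hasselbalch equation. The short Python sketch below is illustrative only: it assumes a pKa of about 4.2 for benzoic acid (a standard textbook value, not stated above) and computes the fraction of the acid present in the undissociated, membrane-permeable form at a given pH.

```python
# Illustrative sketch: fraction of benzoic acid in the undissociated (HA) form as a
# function of pH, via the Henderson-Hasselbalch relation. The pKa value (~4.2) is an
# assumed textbook figure, not taken from the article.
PKA_BENZOIC = 4.2

def fraction_undissociated(ph, pka=PKA_BENZOIC):
    """Fraction of the acid present as neutral HA: 1 / (1 + 10**(pH - pKa))."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

for ph in (3.0, 4.2, 5.0, 7.0):
    print(f"pH {ph}: {fraction_undissociated(ph):.1%} undissociated")
# At pH 3 most of the acid is in the neutral, preservative-active form;
# near neutral pH almost all of it is ionized, which is consistent with
# benzoic acid being used mainly in acidic foods.
```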
Physical sciences
Specific acids
Chemistry
4107
https://en.wikipedia.org/wiki/Boltzmann%20distribution
Boltzmann distribution
In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form p_i ∝ exp(−ε_i / (kT)), where p_i is the probability of the system being in state i, exp is the exponential function, ε_i is the energy of that state, and the constant kT of the distribution is the product of the Boltzmann constant k and thermodynamic temperature T. The symbol ∝ denotes proportionality (see below for the proportionality constant). The term system here has a wide meaning; it can range from a collection of 'sufficient number' of atoms or a single atom to a macroscopic system such as a natural gas storage tank. Therefore, the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied. The ratio of probabilities of two states is known as the Boltzmann factor and characteristically only depends on the states' energy difference: p_i / p_j = exp((ε_j − ε_i) / (kT)). The Boltzmann distribution is named after Ludwig Boltzmann who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper "On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium". The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902. The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution or Maxwell-Boltzmann statistics. The Boltzmann distribution gives the probability that a system will be in a certain state as a function of that state's energy, while the Maxwell-Boltzmann distributions give the probabilities of particle speeds or energies in ideal gases. The distribution of energies in a one-dimensional gas, however, does follow the Boltzmann distribution. The distribution The Boltzmann distribution is a probability distribution that gives the probability of a certain state as a function of that state's energy and temperature of the system to which the distribution is applied. It is given as p_i = (1/Q) exp(−ε_i / (kT)), where: exp is the exponential function, p_i is the probability of state i, ε_i is the energy of state i, k is the Boltzmann constant, T is the absolute temperature of the system, M is the number of all states accessible to the system of interest, and Q (denoted by some authors by Z) is the normalization denominator, which is the canonical partition function Q = Σ exp(−ε_i / (kT)), summed over all M states. It results from the constraint that the probabilities of all accessible states must add up to 1. Using Lagrange multipliers, one can prove that the Boltzmann distribution is the distribution that maximizes the entropy S = −k Σ p_i ln p_i subject to the normalization constraint that Σ p_i = 1 and the constraint that Σ p_i ε_i equals a particular mean energy value, except for two special cases. (These special cases occur when the mean value is either the minimum or maximum of the energies ε_i. In these cases, the entropy-maximizing distribution is a limit of Boltzmann distributions where T approaches zero from above or below, respectively.) The partition function can be calculated if we know the energies of the states accessible to the system of interest. For atoms the partition function values can be found in the NIST Atomic Spectra Database. 
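As a concrete illustration of the formulas above, the following Python sketch computes Boltzmann occupation probabilities and the partition function for a small set of discrete energy levels. The two-level example energies are hypothetical values chosen only for illustration.

```python
# Illustrative sketch: Boltzmann probabilities p_i = exp(-eps_i / (k T)) / Q for a
# small set of discrete energy levels. The example energies are hypothetical.
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def boltzmann_probabilities(energies_joule, temperature_kelvin):
    """Return the list of probabilities p_i and the partition function Q."""
    beta = 1.0 / (K_B * temperature_kelvin)
    weights = [math.exp(-e * beta) for e in energies_joule]
    q = sum(weights)                      # canonical partition function
    return [w / q for w in weights], q

# Two levels separated by the thermal energy kT at 300 K:
kT = K_B * 300.0
probs, q = boltzmann_probabilities([0.0, kT], 300.0)
print(probs)   # approx. [0.731, 0.269]; the ratio p_2 / p_1 = exp(-1) ≈ 0.368
```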
The distribution shows that states with lower energy will always have a higher probability of being occupied than the states with higher energy. It can also give us the quantitative relationship between the probabilities of the two states being occupied. The ratio of probabilities for states i and j is given as p_i / p_j = exp((ε_j − ε_i) / (kT)), where p_i is the probability of state i, p_j the probability of state j, ε_i is the energy of state i, and ε_j is the energy of state j. The corresponding ratio of populations of energy levels must also take their degeneracies into account. The Boltzmann distribution is often used to describe the distribution of particles, such as atoms or molecules, over bound states accessible to them. If we have a system consisting of many particles, the probability of a particle being in state i is practically the probability that, if we pick a random particle from that system and check what state it is in, we will find it is in state i. This probability is equal to the number of particles in state i divided by the total number of particles in the system, that is, the fraction of particles that occupy state i: p_i = N_i / N, where N_i is the number of particles in state i and N is the total number of particles in the system. We may use the Boltzmann distribution to find this probability, which is, as we have seen, equal to the fraction of particles that are in state i. So the equation that gives the fraction of particles in state i as a function of the energy ε_i of that state is N_i / N = (1/Q) exp(−ε_i / (kT)). This equation is of great importance to spectroscopy. In spectroscopy we observe a spectral line of atoms or molecules undergoing transitions from one state to another. In order for this to be possible, there must be some particles in the first state to undergo the transition. We may find that this condition is fulfilled by finding the fraction of particles in the first state. If it is negligible, the transition is very likely not observed at the temperature for which the calculation was done. In general, a larger fraction of molecules in the first state means a higher number of transitions to the second state. This gives a stronger spectral line. However, there are other factors that influence the intensity of a spectral line, such as whether it is caused by an allowed or a forbidden transition. The softmax function commonly used in machine learning is related to the Boltzmann distribution: (p_1, ..., p_M) = softmax(−ε_1/(kT), ..., −ε_M/(kT)). Generalized Boltzmann distribution A distribution of a more general exponential form is called the generalized Boltzmann distribution by some authors. The Boltzmann distribution is a special case of the generalized Boltzmann distribution. The generalized Boltzmann distribution is used in statistical mechanics to describe the canonical ensemble, grand canonical ensemble and isothermal–isobaric ensemble. The generalized Boltzmann distribution is usually derived from the principle of maximum entropy, but there are other derivations. The generalized Boltzmann distribution has the following properties: It is the only distribution for which the entropy as defined by the Gibbs entropy formula matches with the entropy as defined in classical thermodynamics. It is the only distribution that is mathematically consistent with the fundamental thermodynamic relation where state functions are described by ensemble average. In statistical mechanics The Boltzmann distribution appears in statistical mechanics when considering closed systems of fixed composition that are in thermal equilibrium (equilibrium with respect to energy exchange). The most general case is the probability distribution for the canonical ensemble. 
Some special cases (derivable from the canonical ensemble) show the Boltzmann distribution in different aspects: Canonical ensemble (general case) The canonical ensemble gives the probabilities of the various possible states of a closed system of fixed volume, in thermal equilibrium with a heat bath. The canonical ensemble has a state probability distribution with the Boltzmann form. Statistical frequencies of subsystems' states (in a non-interacting collection) When the system of interest is a collection of many non-interacting copies of a smaller subsystem, it is sometimes useful to find the statistical frequency of a given subsystem state, among the collection. The canonical ensemble has the property of separability when applied to such a collection: as long as the non-interacting subsystems have fixed composition, then each subsystem's state is independent of the others and is also characterized by a canonical ensemble. As a result, the expected statistical frequency distribution of subsystem states has the Boltzmann form. Maxwell–Boltzmann statistics of classical gases (systems of non-interacting particles) In particle systems, many particles share the same space and regularly change places with each other; the single-particle state space they occupy is a shared space. Maxwell–Boltzmann statistics give the expected number of particles found in a given single-particle state, in a classical gas of non-interacting particles at equilibrium. This expected number distribution has the Boltzmann form. Although these cases have strong similarities, it is helpful to distinguish them as they generalize in different ways when the crucial assumptions are changed: When a system is in thermodynamic equilibrium with respect to both energy exchange and particle exchange, the requirement of fixed composition is relaxed and a grand canonical ensemble is obtained rather than canonical ensemble. On the other hand, if both composition and energy are fixed, then a microcanonical ensemble applies instead. If the subsystems within a collection do interact with each other, then the expected frequencies of subsystem states no longer follow a Boltzmann distribution, and even may not have an analytical solution. The canonical ensemble can however still be applied to the collective states of the entire system considered as a whole, provided the entire system is in thermal equilibrium. With quantum gases of non-interacting particles in equilibrium, the number of particles found in a given single-particle state does not follow Maxwell–Boltzmann statistics, and there is no simple closed form expression for quantum gases in the canonical ensemble. In the grand canonical ensemble the state-filling statistics of quantum gases are described by Fermi–Dirac statistics or Bose–Einstein statistics, depending on whether the particles are fermions or bosons, respectively. In mathematics In more general mathematical settings, the Boltzmann distribution is also known as the Gibbs measure. In statistics and machine learning, it is called a log-linear model. In deep learning, the Boltzmann distribution is used in the sampling distribution of stochastic neural networks such as the Boltzmann machine, restricted Boltzmann machine, energy-based models and deep Boltzmann machine. In deep learning, the Boltzmann machine is considered to be one of the unsupervised learning models. 
In the design of Boltzmann machines in deep learning, as the number of nodes increases the difficulty of implementation in real-time applications becomes critical, so a different type of architecture named the restricted Boltzmann machine was introduced. In economics The Boltzmann distribution can be introduced to allocate permits in emissions trading. The new allocation method using the Boltzmann distribution can describe the most probable, natural, and unbiased distribution of emissions permits among multiple countries. The Boltzmann distribution has the same form as the multinomial logit model. As a discrete choice model, this is very well known in economics since Daniel McFadden made the connection to random utility maximization.
Physical sciences
Statistical mechanics
Physics
4111
https://en.wikipedia.org/wiki/Bioleaching
Bioleaching
Bioleaching is the extraction or liberation of metals from their ores through the use of living organisms. Bioleaching is one of several applications within biohydrometallurgy and several methods are used to treat ores or concentrates containing copper, zinc, lead, arsenic, antimony, nickel, molybdenum, gold, silver, and cobalt. Bioleaching falls into two broad categories. The first is the use of microorganisms to oxidize refractory minerals to release valuable metals such as gold and silver. Most commonly the minerals that are the target of oxidization are pyrite and arsenopyrite. The second category is leaching of sulphide minerals to release the associated metal, for example, leaching of pentlandite to release nickel, or the leaching of chalcocite, covellite or chalcopyrite to release copper. Process Bioleaching can involve numerous ferrous iron and sulfur oxidizing bacteria, including Acidithiobacillus ferrooxidans (formerly known as Thiobacillus ferrooxidans) and Acidithiobacillus thiooxidans (formerly known as Thiobacillus thiooxidans). As a general principle, in one proposed method of bacterial leaching known as Indirect Leaching, Fe3+ ions are used to oxidize the ore. This step is entirely independent of microbes. The role of the bacteria is further oxidation of the ore, but also the regeneration of the chemical oxidant Fe3+ from Fe2+. For example, bacteria catalyse the breakdown of the mineral pyrite (FeS2) by oxidising the sulfur and metal (in this case ferrous iron, (Fe2+)) using oxygen. This yields soluble products that can be further purified and refined to yield the desired metal. Pyrite leaching (FeS2): In the first step, disulfide is spontaneously oxidized to thiosulfate by ferric ion (Fe3+), which in turn is reduced to give ferrous ion (Fe2+): (1) FeS2 + 6 Fe3+ + 3 H2O → 7 Fe2+ + S2O32− + 6 H+ (spontaneous). The ferrous ion is then oxidized by bacteria using oxygen: (2) 4 Fe2+ + O2 + 4 H+ → 4 Fe3+ + 2 H2O (iron oxidizers). Thiosulfate is also oxidized by bacteria to give sulfate: (3) S2O32− + 2 O2 + H2O → 2 SO42− + 2 H+ (sulfur oxidizers). The ferric ion produced in reaction (2) oxidizes more sulfide as in reaction (1), closing the cycle and giving the net reaction: (4) FeS2 + 7⁄2 O2 + H2O → Fe2+ + 2 SO42− + 2 H+. The net products of the reaction are soluble ferrous sulfate and sulfuric acid. The microbial oxidation process occurs at the cell membrane of the bacteria. The electrons pass into the cells and are used in biochemical processes to produce energy for the bacteria while reducing oxygen to water. The critical reaction is the oxidation of sulfide by ferric iron. The main role of the bacterial step is the regeneration of this reactant. The process for copper is very similar, but the efficiency and kinetics depend on the copper mineralogy. The most efficient minerals are supergene minerals such as chalcocite, Cu2S and covellite, CuS. The main copper mineral chalcopyrite (CuFeS2) is not leached very efficiently, which is why the dominant copper-producing technology remains flotation, followed by smelting and refining. The leaching of CuFeS2 follows the two stages of being dissolved and then further oxidised, with Cu2+ ions being left in solution. Chalcopyrite leaching: (1) CuFeS2 + 4 Fe3+ → Cu2+ + 5 Fe2+ + 2 S0 (spontaneous); (2) 4 Fe2+ + O2 + 4 H+ → 4 Fe3+ + 2 H2O (iron oxidizers); (3) 2 S0 + 3 O2 + 2 H2O → 2 SO42− + 4 H+ (sulfur oxidizers); net reaction: (4) CuFeS2 + 4 O2 → Cu2+ + Fe2+ + 2 SO42−. In general, sulfides are first oxidized to elemental sulfur, whereas disulfides are oxidized to give thiosulfate, and the processes above can be applied to other sulfidic ores. Bioleaching of non-sulfidic ores such as pitchblende also uses ferric iron as an oxidant (e.g., UO2 + 2 Fe3+ → UO22+ + 2 Fe2+). 
In this case, the sole purpose of the bacterial step is the regeneration of Fe3+. Sulfidic iron ores can be added to speed up the process and provide a source of iron. Bioleaching of non-sulfidic ores by layering of waste sulfides and elemental sulfur, colonized by Acidithiobacillus spp., has been accomplished, which provides a strategy for accelerated leaching of materials that do not contain sulfide minerals. Further processing The dissolved copper (Cu2+) ions are removed from the solution by ligand exchange solvent extraction, which leaves other ions in the solution. The copper is removed by bonding to a ligand, which is a large molecule consisting of a number of smaller groups, each possessing a lone electron pair. The ligand-copper complex is extracted from the solution using an organic solvent such as kerosene: Cu2+(aq) + 2LH(organic) → CuL2(organic) + 2H+(aq) The ligand donates electrons to the copper, producing a complex - a central metal atom (copper) bonded to the ligand. Because this complex has no charge, it is no longer attracted to polar water molecules and dissolves in the kerosene, which is then easily separated from the solution. Because the initial reaction is reversible, it is determined by pH. Adding concentrated acid reverses the equation, and the copper ions go back into an aqueous solution. Then the copper is passed through an electro-winning process to increase its purity: An electric current is passed through the resulting solution of copper ions. Because copper ions have a 2+ charge, they are attracted to the negative cathodes and collect there. The copper can also be concentrated and separated by displacing the copper with Fe from scrap iron: Cu2+(aq) + Fe(s) → Cu(s) + Fe2+(aq) The electrons lost by the iron are taken up by the copper. Copper is the oxidising agent (it accepts electrons), and iron is the reducing agent (it loses electrons). Traces of precious metals such as gold may be left in the original solution. Treating the mixture with sodium cyanide in the presence of free oxygen dissolves the gold. The gold is removed from the solution by adsorbing (taking it up on the surface) to charcoal. With fungi Several species of fungi can be used for bioleaching. Fungi can be grown on many different substrates, such as electronic scrap, catalytic converters, and fly ash from municipal waste incineration. Experiments have shown that two fungal strains (Aspergillus niger, Penicillium simplicissimum) were able to mobilize Cu and Sn by 65%, and Al, Ni, Pb, and Zn by more than 95%. Aspergillus niger can produce some organic acids such as citric acid. This form of leaching does not rely on microbial oxidation of metal but rather uses microbial metabolism as source of acids that directly dissolve the metal. Feasibility Economic feasibility Bioleaching is in general simpler and, therefore, cheaper to operate and maintain than traditional processes, since fewer specialists are needed to operate complex chemical plants. And low concentrations are not a problem for bacteria because they simply ignore the waste that surrounds the metals, attaining extraction yields of over 90% in some cases. These microorganisms actually gain energy by breaking down minerals into their constituent elements. The company simply collects the ions out of the solution after the bacteria have finished. Bioleaching can be used to extract metals from low concentration ores such as gold that are too poor for other technologies. 
It can be used to partially replace the extensive crushing and grinding that translates to prohibitive cost and energy consumption in a conventional process, because the lower cost of bacterial leaching can outweigh the extra time it takes to extract the metal. High-concentration ores, such as those of copper, are more economical to smelt than to bioleach, due to the slow speed of the bacterial leaching process compared to smelting. The slow speed of bioleaching introduces a significant delay in cash flow for new mines. Nonetheless, at the largest copper mine in the world, Escondida in Chile, the process seems to be favorable. Setting up the process can also be very expensive, and many companies, once started, cannot keep up with demand and end up in debt. In space In 2020 scientists showed, in an experiment using different gravity environments on the ISS, that microorganisms could be employed to mine useful elements from basaltic rocks via bioleaching in space. Environmental impact The process is more environmentally friendly than traditional extraction methods. For the company this can translate into profit, since the necessary limiting of sulfur dioxide emissions during smelting is expensive. Less landscape damage occurs, since the bacteria involved grow naturally, and the mine and surrounding area can be left relatively untouched. As the bacteria breed in the conditions of the mine, they are easily cultivated and recycled. Toxic chemicals are sometimes produced in the process. Sulfuric acid and H+ ions that have been formed can leak into the ground and surface water, turning it acidic and causing environmental damage. Heavy metal ions such as iron, zinc, and arsenic leak out during acid mine drainage. When the pH of this solution rises, as a result of dilution by fresh water, these ions precipitate, forming "Yellow Boy" pollution. For these reasons, a bioleaching setup must be carefully planned, since the process can lead to a biosafety failure. Unlike other methods, once started, bioheap leaching cannot be quickly stopped, because leaching would still continue with rainwater and natural bacteria. Projects like the Finnish Talvivaara mine proved to be environmentally and economically disastrous.
Technology
Biotechnology
null
4115
https://en.wikipedia.org/wiki/Boiling%20point
Boiling point
The boiling point of a substance is the temperature at which the vapor pressure of a liquid equals the pressure surrounding the liquid and the liquid changes into a vapor. The boiling point of a liquid varies depending upon the surrounding environmental pressure. A liquid in a partial vacuum, i.e., under a lower pressure, has a lower boiling point than when that liquid is at atmospheric pressure. Because of this, water boils at 100 °C (or, with scientific precision, 99.97 °C) under standard pressure at sea level, but at a lower temperature at higher altitudes. For a given pressure, different liquids will boil at different temperatures. The normal boiling point (also called the atmospheric boiling point or the atmospheric pressure boiling point) of a liquid is the special case in which the vapor pressure of the liquid equals the defined atmospheric pressure at sea level, one atmosphere. At that temperature, the vapor pressure of the liquid becomes sufficient to overcome atmospheric pressure and allow bubbles of vapor to form inside the bulk of the liquid. The standard boiling point has been defined by IUPAC since 1982 as the temperature at which boiling occurs under a pressure of one bar. The heat of vaporization is the energy required to transform a given quantity (a mol, kg, pound, etc.) of a substance from a liquid into a gas at a given pressure (often atmospheric pressure). Liquids may change to a vapor at temperatures below their boiling points through the process of evaporation. Evaporation is a surface phenomenon in which molecules located near the liquid's edge, not contained by enough liquid pressure on that side, escape into the surroundings as vapor. On the other hand, boiling is a process in which molecules anywhere in the liquid escape, resulting in the formation of vapor bubbles within the liquid. Saturation temperature and pressure A saturated liquid contains as much thermal energy as it can without boiling (or conversely a saturated vapor contains as little thermal energy as it can without condensing). Saturation temperature means boiling point. The saturation temperature is the temperature for a corresponding saturation pressure at which a liquid boils into its vapor phase. The liquid can be said to be saturated with thermal energy—any addition of thermal energy results in a phase transition. If the pressure in a system remains constant (isobaric), a vapor at saturation temperature will begin to condense into its liquid phase as thermal energy (heat) is removed. Similarly, a liquid at saturation temperature and pressure will boil into its vapor phase as additional thermal energy is applied. The boiling point corresponds to the temperature at which the vapor pressure of the liquid equals the surrounding environmental pressure. Thus, the boiling point is dependent on the pressure. Boiling points may be published with respect to the NIST, USA standard pressure of 101.325 kPa (1 atm), or the IUPAC standard pressure of 100.000 kPa (1 bar). At higher elevations, where the atmospheric pressure is much lower, the boiling point is also lower. The boiling point increases with increased pressure up to the critical point, where the gas and liquid properties become identical. The boiling point cannot be increased beyond the critical point. Likewise, the boiling point decreases with decreasing pressure until the triple point is reached. The boiling point cannot be reduced below the triple point. Suppose the heat of vaporization and the vapor pressure of a liquid at a certain temperature are known. 
In that case, the boiling point can be calculated by using the Clausius–Clapeyron equation, thus: TB = 1 / (1/T0 − R·ln(P/P0)/ΔHvap), where: TB is the boiling point at the pressure of interest, R is the ideal gas constant, P is the vapor pressure of the liquid at the pressure of interest, P0 is some pressure where the corresponding boiling temperature T0 is known (usually data available at 1 atm or 100 kPa (1 bar)), ΔHvap is the heat of vaporization of the liquid, T0 is the boiling temperature, and ln is the natural logarithm. Saturation pressure is the pressure for a corresponding saturation temperature at which a liquid boils into its vapor phase. Saturation pressure and saturation temperature have a direct relationship: as saturation pressure is increased, so is saturation temperature. If the temperature in a system remains constant (an isothermal system), vapor at saturation pressure and temperature will begin to condense into its liquid phase as the system pressure is increased. Similarly, a liquid at saturation pressure and temperature will tend to flash into its vapor phase as system pressure is decreased. There are two conventions regarding the standard boiling point of water: The normal boiling point is commonly given as 100 °C (actually 99.97 °C, following the thermodynamic definition of the Celsius scale based on the kelvin) at a pressure of 1 atm (101.325 kPa). The IUPAC-recommended standard boiling point of water at a standard pressure of 100 kPa (1 bar) is 99.61 °C. For comparison, on top of Mount Everest, at an elevation of about 8,850 m, the pressure is about 34 kPa and the boiling point of water is about 71 °C. The Celsius temperature scale was defined until 1954 by two points: 0 °C being defined by the water freezing point and 100 °C being defined by the water boiling point at standard atmospheric pressure. Relation between the normal boiling point and the vapor pressure of liquids The higher the vapor pressure of a liquid at a given temperature, the lower the normal boiling point (i.e., the boiling point at atmospheric pressure) of the liquid. The vapor pressure chart to the right has graphs of the vapor pressures versus temperatures for a variety of liquids. As can be seen in the chart, the liquids with the highest vapor pressures have the lowest normal boiling points. For example, at any given temperature, methyl chloride has the highest vapor pressure of any of the liquids in the chart. It also has the lowest normal boiling point (−24.2 °C), which is where the vapor pressure curve of methyl chloride (the blue line) intersects the horizontal pressure line of one atmosphere (atm) of absolute vapor pressure. The critical point of a liquid is the highest temperature (and pressure) at which it will actually boil.
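As a worked illustration of the Clausius–Clapeyron estimate described above, the Python sketch below computes the boiling point of water at a reduced ambient pressure. The heat of vaporization used (about 40.7 kJ/mol for water) is an assumed typical value, and the estimate is only as good as the assumption that the heat of vaporization is constant over the temperature range.

```python
# Illustrative sketch: estimating a boiling point at pressure P from a known boiling
# point T0 at pressure P0 using the integrated Clausius-Clapeyron equation:
#   1/T_B = 1/T0 - (R / dH_vap) * ln(P / P0)
# dH_vap for water (~40.7 kJ/mol) is an assumed typical value, not taken from the article.
import math

R = 8.314  # ideal gas constant, J/(mol*K)

def boiling_point(p_pa, p0_pa=101325.0, t0_k=373.15, dh_vap=40700.0):
    """Return the estimated boiling temperature (K) at pressure p_pa."""
    return 1.0 / (1.0 / t0_k - (R / dh_vap) * math.log(p_pa / p0_pa))

# Water at roughly one third of sea-level pressure (comparable to a very high summit):
print(boiling_point(34000.0) - 273.15)   # approx. 71 °C
```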
Physical sciences
Phase transitions
Physics
4116
https://en.wikipedia.org/wiki/Big%20Bang
Big Bang
The Big Bang is a physical theory that describes how the universe expanded from an initial state of high density and temperature. The notion of an expanding universe was first proposed scientifically by physicist Alexander Friedmann in 1922 with the mathematical derivation of the Friedmann equations. The earliest empirical support for an expanding universe is known as Hubble's law, published in work by physicist Edwin Hubble in 1929, which discerned that galaxies are moving away from Earth at speeds proportional to their distance. Independent of Friedmann's work, and independent of Hubble's observations, physicist Georges Lemaître proposed that the universe emerged from a "primeval atom" in 1931, introducing the modern notion of the Big Bang. Various cosmological models of the Big Bang explain the evolution of the observable universe from the earliest known periods through its subsequent large-scale form. These models offer a comprehensive explanation for a broad range of observed phenomena, including the abundance of light elements, the cosmic microwave background (CMB) radiation, and large-scale structure. The uniformity of the universe, known as the flatness problem, is explained through cosmic inflation: a sudden and very rapid expansion of space during the earliest moments. Extrapolating this cosmic expansion backward in time using the known laws of physics, the models describe an increasingly concentrated cosmos preceded by a singularity in which space and time lose meaning (typically named "the Big Bang singularity"). Physics lacks a widely accepted theory of quantum gravity that can model the earliest conditions of the Big Bang. In 1964 the CMB was discovered, which convinced many cosmologists that the competing steady-state model of cosmic evolution was falsified, since the Big Bang models predict a uniform background radiation caused by high temperatures and densities in the distant past. A wide range of empirical evidence strongly favors the Big Bang event, which is now essentially universally accepted. Detailed measurements of the expansion rate of the universe place the Big Bang singularity at an estimated 13.8 billion years ago, which is considered the age of the universe. There remain aspects of the observed universe that are not yet adequately explained by the Big Bang models. After its initial expansion, the universe cooled sufficiently to allow the formation of subatomic particles, and later atoms. The unequal abundances of matter and antimatter that allowed this to occur is an unexplained effect known as baryon asymmetry. These primordial elements—mostly hydrogen, with some helium and lithium—later coalesced through gravity, forming early stars and galaxies. Astronomers observe the gravitational effects of an unknown dark matter surrounding galaxies. Most of the gravitational potential in the universe seems to be in this form, and the Big Bang models and various observations indicate that this excess gravitational potential is not created by baryonic matter, such as normal atoms. Measurements of the redshifts of supernovae indicate that the expansion of the universe is accelerating, an observation attributed to an unexplained phenomenon known as dark energy. Features of the models The Big Bang models offer a comprehensive explanation for a broad range of observed phenomena, including the abundances of the light elements, the CMB, large-scale structure, and Hubble's law. 
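One way to make the link between the measured expansion rate and the quoted age concrete is the "Hubble time", the reciprocal of the Hubble constant, which comes out near 14 billion years. The Python sketch below uses an assumed H0 of about 68 km/s/Mpc; it is a back-of-the-envelope illustration, not the detailed model fitting cosmologists actually use to derive the 13.8-billion-year figure.

```python
# Illustrative sketch: the "Hubble time" 1/H0 as a rough age scale for the universe.
# H0 ≈ 68 km/s/Mpc is an assumed representative value; the precise age comes from
# fitting a full cosmological model, not from this simple estimate.
H0_KM_S_PER_MPC = 68.0
KM_PER_MPC = 3.0857e19          # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

h0_per_second = H0_KM_S_PER_MPC / KM_PER_MPC          # H0 expressed in 1/s
hubble_time_years = 1.0 / h0_per_second / SECONDS_PER_YEAR
print(f"Hubble time ≈ {hubble_time_years / 1e9:.1f} billion years")   # approx. 14.4
```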
The models depend on two major assumptions: the universality of physical laws and the cosmological principle. The universality of physical laws is one of the underlying principles of the theory of relativity. The cosmological principle states that on large scales the universe is homogeneous and isotropic—appearing the same in all directions regardless of location. These ideas were initially taken as postulates, but later efforts were made to test each of them. For example, the first assumption has been tested by observations showing that the largest possible deviation of the fine-structure constant over much of the age of the universe is of order 10−5. Also, general relativity has passed stringent tests on the scale of the Solar System and binary stars. The large-scale universe appears isotropic as viewed from Earth. If it is indeed isotropic, the cosmological principle can be derived from the simpler Copernican principle, which states that there is no preferred (or special) observer or vantage point. To this end, the cosmological principle has been confirmed to a level of 10−5 via observations of the temperature of the CMB. At the scale of the CMB horizon, the universe has been measured to be homogeneous with an upper bound on the order of 10% inhomogeneity, as of 1995. Horizons An important feature of the Big Bang spacetime is the presence of particle horizons. Since the universe has a finite age, and light travels at a finite speed, there may be events in the past whose light has not yet had time to reach earth. This places a limit or a past horizon on the most distant objects that can be observed. Conversely, because space is expanding, and more distant objects are receding ever more quickly, light emitted by us today may never "catch up" to very distant objects. This defines a future horizon, which limits the events in the future that we will be able to influence. The presence of either type of horizon depends on the details of the Friedmann–Lemaître–Robertson–Walker (FLRW) metric that describes the expansion of the universe. Our understanding of the universe back to very early times suggests that there is a past horizon, though in practice our view is also limited by the opacity of the universe at early times. So our view cannot extend further backward in time, though the horizon recedes in space. If the expansion of the universe continues to accelerate, there is a future horizon as well. Thermalization Some processes in the early universe occurred too slowly, compared to the expansion rate of the universe, to reach approximate thermodynamic equilibrium. Others were fast enough to reach thermalization. The parameter usually used to find out whether a process in the very early universe has reached thermal equilibrium is the ratio between the rate of the process (usually rate of collisions between particles) and the Hubble parameter. The larger the ratio, the more time particles had to thermalize before they were too far away from each other. Timeline According to the Big Bang models, the universe at the beginning was very hot and very compact, and since then it has been expanding and cooling. Singularity In the absence of a perfect cosmological principle, extrapolation of the expansion of the universe backwards in time using general relativity yields an infinite density and temperature at a finite time in the past. This irregular behavior, known as the gravitational singularity, indicates that general relativity is not an adequate description of the laws of physics in this regime. 
Models based on general relativity alone cannot fully extrapolate toward the singularity. In some proposals, such as the emergent Universe models, the singularity is replaced by another cosmological epoch. A different approach identifies the initial singularity as a singularity predicted by some models of the Big Bang theory to have existed before the Big Bang event. This primordial singularity is itself sometimes called "the Big Bang", but the term can also refer to a more generic early hot, dense phase of the universe. In either case, "the Big Bang" as an event is also colloquially referred to as the "birth" of our universe since it represents the point in history where the universe can be verified to have entered into a regime where the laws of physics as we understand them (specifically general relativity and the Standard Model of particle physics) work. Based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background, the time that has passed since that event—known as the "age of the universe"—is 13.8 billion years. Despite being extremely dense at this time—far denser than is usually required to form a black hole—the universe did not re-collapse into a singularity. Commonly used calculations and limits for explaining gravitational collapse are usually based upon objects of relatively constant size, such as stars, and do not apply to rapidly expanding space such as the Big Bang. Since the early universe did not immediately collapse into a multitude of black holes, matter at that time must have been very evenly distributed with a negligible density gradient. Inflation and baryogenesis The earliest phases of the Big Bang are subject to much speculation, given the lack of available data. In the most common models the universe was filled homogeneously and isotropically with a very high energy density and huge temperatures and pressures, and was very rapidly expanding and cooling. The period up to 10^−43 seconds into the expansion, the Planck epoch, was a phase in which the four fundamental forces—the electromagnetic force, the strong nuclear force, the weak nuclear force, and the gravitational force—were unified as one. In this stage, the characteristic scale length of the universe was the Planck length (about 1.6 × 10^−35 m), and the universe consequently had a temperature of approximately 10^32 degrees Celsius. Even the very concept of a particle breaks down in these conditions. A proper understanding of this period awaits the development of a theory of quantum gravity. The Planck epoch was succeeded by the grand unification epoch beginning at 10^−43 seconds, where gravitation separated from the other forces as the universe's temperature fell. At approximately 10^−37 seconds into the expansion, a phase transition caused a cosmic inflation, during which the universe grew exponentially, unconstrained by the light speed invariance, and temperatures dropped by a factor of 100,000. This concept is motivated by the flatness problem, where the density of matter and energy is very close to the critical density needed to produce a flat universe. That is, the shape of the universe has no overall geometric curvature due to gravitational influence. Microscopic quantum fluctuations that occurred because of Heisenberg's uncertainty principle were "frozen in" by inflation, becoming amplified into the seeds that would later form the large-scale structure of the universe. 
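The Planck-scale figures quoted above for the Planck epoch follow directly from the fundamental constants. The Python sketch below reproduces them; it is purely illustrative arithmetic, not a cosmological calculation.

```python
# Illustrative sketch: Planck time, length and temperature from fundamental constants.
# These reproduce the order-of-magnitude figures quoted for the Planck epoch.
import math

HBAR = 1.054571817e-34   # reduced Planck constant, J*s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.99792458e8         # speed of light, m/s
K_B = 1.380649e-23       # Boltzmann constant, J/K

planck_time = math.sqrt(HBAR * G / C**5)                      # approx. 5.4e-44 s
planck_length = math.sqrt(HBAR * G / C**3)                    # approx. 1.6e-35 m
planck_temperature = math.sqrt(HBAR * C**5 / (G * K_B**2))    # approx. 1.4e32 K

print(planck_time, planck_length, planck_temperature)
```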
At a time around 10^−36 seconds, the electroweak epoch begins when the strong nuclear force separates from the other forces, with only the electromagnetic force and weak nuclear force remaining unified. Inflation stopped locally at around 10^−33 to 10^−32 seconds, with the observable universe's volume having increased by a factor of at least 10^78. Reheating followed as the inflaton field decayed, until the universe obtained the temperatures required for the production of a quark–gluon plasma as well as all other elementary particles. Temperatures were so high that the random motions of particles were at relativistic speeds, and particle–antiparticle pairs of all kinds were being continuously created and destroyed in collisions. At some point, an unknown reaction called baryogenesis violated the conservation of baryon number, leading to a very small excess of quarks and leptons over antiquarks and antileptons—of the order of one part in 30 million. This resulted in the predominance of matter over antimatter in the present universe. Cooling The universe continued to decrease in density and fall in temperature, hence the typical energy of each particle was decreasing. Symmetry-breaking phase transitions put the fundamental forces of physics and the parameters of elementary particles into their present form, with the electromagnetic force and weak nuclear force separating at about 10^−12 seconds. After about 10^−11 seconds, the picture becomes less speculative, since particle energies drop to values that can be attained in particle accelerators. At about 10^−6 seconds, quarks and gluons combined to form baryons such as protons and neutrons. The small excess of quarks over antiquarks led to a small excess of baryons over antibaryons. The temperature was no longer high enough to create either new proton–antiproton or neutron–antineutron pairs. A mass annihilation immediately followed, leaving just one in 10^8 of the original matter particles and none of their antiparticles. A similar process happened at about 1 second for electrons and positrons. After these annihilations, the remaining protons, neutrons and electrons were no longer moving relativistically and the energy density of the universe was dominated by photons (with a minor contribution from neutrinos). A few minutes into the expansion, when the temperature was about a billion kelvin and the density of matter in the universe was comparable to the current density of Earth's atmosphere, neutrons combined with protons to form the universe's deuterium and helium nuclei in a process called Big Bang nucleosynthesis (BBN). Most protons remained uncombined as hydrogen nuclei. As the universe cooled, the rest energy density of matter came to gravitationally dominate that of the photon radiation. The recombination epoch began after about 379,000 years, when the electrons and nuclei combined into atoms (mostly hydrogen), which were able to emit radiation. This relic radiation, which continued through space largely unimpeded, is known as the cosmic microwave background. Structure formation After the recombination epoch, the slightly denser regions of the uniformly distributed matter gravitationally attracted nearby matter and thus grew even denser, forming gas clouds, stars, galaxies, and the other astronomical structures observable today. The details of this process depend on the amount and type of matter in the universe. The four possible types of matter are known as cold dark matter (CDM), warm dark matter, hot dark matter, and baryonic matter. 
The best measurements available, from the Wilkinson Microwave Anisotropy Probe (WMAP), show that the data is well-fit by a Lambda-CDM model in which dark matter is assumed to be cold. (Warm dark matter is ruled out by early reionization.) This CDM is estimated to make up about 23% of the matter/energy of the universe, while baryonic matter makes up about 4.6%. In an "extended model" that includes hot dark matter in the form of neutrinos, the "physical baryon density" is estimated at 0.023. (This is different from the 'baryon density' expressed as a fraction of the total matter/energy density, which is about 0.046.) The corresponding cold dark matter density is about 0.11, and the corresponding neutrino density is estimated to be less than 0.0062.
Cosmic acceleration
Independent lines of evidence from Type Ia supernovae and the CMB imply that the universe today is dominated by a mysterious form of energy known as dark energy, which appears to homogeneously permeate all of space. Observations suggest that 73% of the total energy density of the present day universe is in this form. When the universe was very young it was likely infused with dark energy, but with everything closer together, gravity predominated, braking the expansion. Eventually, after billions of years of expansion, the declining density of matter relative to the density of dark energy allowed the expansion of the universe to begin to accelerate. Dark energy in its simplest formulation is modeled by a cosmological constant term in the Einstein field equations of general relativity, but its composition and mechanism are unknown. More generally, the details of its equation of state and relationship with the Standard Model of particle physics continue to be investigated both through observation and theory. All of this cosmic evolution after the inflationary epoch can be rigorously described and modeled by the Lambda-CDM model of cosmology, which uses the independent frameworks of quantum mechanics and general relativity. There are no easily testable models that would describe the situation prior to approximately 10−15 seconds. Understanding this earliest of eras in the history of the universe is one of the greatest unsolved problems in physics.
Concept history
Etymology
English astronomer Fred Hoyle is credited with coining the term "Big Bang" during a talk for a March 1949 BBC Radio broadcast, saying: "These theories were based on the hypothesis that all the matter in the universe was created in one big bang at a particular time in the remote past." However, it did not catch on until the 1970s. It is popularly reported that Hoyle, who favored an alternative "steady-state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models. Helge Kragh writes that the evidence for the claim that it was meant as a pejorative is "unconvincing", and mentions a number of indications that it was not a pejorative. The term itself has been argued to be a misnomer because it evokes an explosion. The argument is that whereas an explosion suggests expansion into a surrounding space, the Big Bang only describes the intrinsic expansion of the contents of the universe. Another issue pointed out by Santhosh Mathew is that bang implies sound, which is not an important feature of the model. An attempt to find a more suitable alternative was not successful.
Development
The Big Bang models developed from observations of the structure of the universe and from theoretical considerations. In 1912, Vesto Slipher measured the first Doppler shift of a "spiral nebula" (an obsolete term for spiral galaxies), and soon discovered that almost all such nebulae were receding from Earth. He did not grasp the cosmological implications of this fact, and indeed at the time it was highly controversial whether or not these nebulae were "island universes" outside our Milky Way. Ten years later, Alexander Friedmann, a Russian cosmologist and mathematician, derived the Friedmann equations from the Einstein field equations, showing that the universe might be expanding in contrast to the static universe model advocated by Albert Einstein at that time. In 1924, American astronomer Edwin Hubble's measurement of the great distance to the nearest spiral nebulae showed that these systems were indeed other galaxies. Starting that same year, Hubble painstakingly developed a series of distance indicators, the forerunner of the cosmic distance ladder, using the Hooker telescope at Mount Wilson Observatory. This allowed him to estimate distances to galaxies whose redshifts had already been measured, mostly by Slipher. In 1929, Hubble discovered a correlation between distance and recessional velocity—now known as Hubble's law. Independently deriving Friedmann's equations in 1927, Georges Lemaître, a Belgian physicist and Roman Catholic priest, proposed that the recession of the nebulae was due to the expansion of the universe. He inferred the relation that Hubble would later observe, given the cosmological principle. In 1931, Lemaître went further and suggested that the evident expansion of the universe, if projected back in time, meant that the further in the past the smaller the universe was, until at some finite time in the past all the mass of the universe was concentrated into a single point, a "primeval atom" where and when the fabric of time and space came into existence. In the 1920s and 1930s, almost every major cosmologist preferred an eternal steady-state universe, and several complained that the beginning of time implied by the Big Bang imported religious concepts into physics; this objection was later repeated by supporters of the steady-state theory. This perception was enhanced by the fact that the originator of the Big Bang concept, Lemaître, was a Roman Catholic priest. Arthur Eddington agreed with Aristotle that the universe did not have a beginning in time, viz., that matter is eternal. A beginning in time was "repugnant" to him. Lemaître, however, disagreed. During the 1930s, other ideas were proposed as non-standard cosmologies to explain Hubble's observations, including the Milne model, the oscillatory universe (originally suggested by Friedmann, but advocated by Albert Einstein and Richard C. Tolman) and Fritz Zwicky's tired light hypothesis. After World War II, two distinct possibilities emerged. One was Fred Hoyle's steady-state model, whereby new matter would be created as the universe seemed to expand. In this model the universe is roughly the same at any point in time. The other was Lemaître's Big Bang theory, advocated and developed by George Gamow, who introduced BBN and whose associates, Ralph Alpher and Robert Herman, predicted the CMB. Ironically, it was Hoyle who coined the phrase that came to be applied to Lemaître's theory, referring to it as "this big bang idea" during a BBC Radio broadcast in March 1949.
For a while, support was split between these two theories. Eventually, the observational evidence, most notably from radio source counts, began to favor Big Bang over steady state. The discovery and confirmation of the CMB in 1964 secured the Big Bang as the best theory of the origin and evolution of the universe. In 1968 and 1970, Roger Penrose, Stephen Hawking, and George F. R. Ellis published papers where they showed that mathematical singularities were an inevitable initial condition of relativistic models of the Big Bang. Then, from the 1970s to the 1990s, cosmologists worked on characterizing the features of the Big Bang universe and resolving outstanding problems. In 1981, Alan Guth made a breakthrough in theoretical work on resolving certain outstanding theoretical problems in the Big Bang models with the introduction of an epoch of rapid expansion in the early universe he called "inflation". Meanwhile, during these decades, two questions in observational cosmology that generated much discussion and disagreement were over the precise values of the Hubble Constant and the matter-density of the universe (before the discovery of dark energy, thought to be the key predictor for the eventual fate of the universe). In the mid-1990s, observations of certain globular clusters appeared to indicate that they were about 15 billion years old, which conflicted with most then-current estimates of the age of the universe (and indeed with the age measured today). This issue was later resolved when new computer simulations, which included the effects of mass loss due to stellar winds, indicated a much younger age for globular clusters. Significant progress in Big Bang cosmology has been made since the late 1990s as a result of advances in telescope technology as well as the analysis of data from satellites such as the Cosmic Background Explorer (COBE), the Hubble Space Telescope and WMAP. Cosmologists now have fairly precise and accurate measurements of many of the parameters of the Big Bang model, and have made the unexpected discovery that the expansion of the universe appears to be accelerating. Observational evidence The earliest and most direct observational evidence of the validity of the theory are the expansion of the universe according to Hubble's law (as indicated by the redshifts of galaxies), discovery and measurement of the cosmic microwave background and the relative abundances of light elements produced by Big Bang nucleosynthesis (BBN). More recent evidence includes observations of galaxy formation and evolution, and the distribution of large-scale cosmic structures. These are sometimes called the "four pillars" of the Big Bang models. Precise modern models of the Big Bang appeal to various exotic physical phenomena that have not been observed in terrestrial laboratory experiments or incorporated into the Standard Model of particle physics. Of these features, dark matter is currently the subject of most active laboratory investigations. Remaining issues include the cuspy halo problem and the dwarf galaxy problem of cold dark matter. Dark energy is also an area of intense interest for scientists, but it is not clear whether direct detection of dark energy will be possible. Inflation and baryogenesis remain more speculative features of current Big Bang models. Viable, quantitative explanations for such phenomena are still being sought. These are unsolved problems in physics. 
Hubble's law and the expansion of the universe
Observations of distant galaxies and quasars show that these objects are redshifted: the light emitted from them has been shifted to longer wavelengths. This can be seen by taking a frequency spectrum of an object and matching the spectroscopic pattern of emission or absorption lines corresponding to atoms of the chemical elements interacting with the light. These redshifts are uniformly isotropic, distributed evenly among the observed objects in all directions. If the redshift is interpreted as a Doppler shift, the recessional velocity of the object can be calculated. For some galaxies, it is possible to estimate distances via the cosmic distance ladder. When the recessional velocities are plotted against these distances, a linear relationship known as Hubble's law is observed: v = H₀D, where v is the recessional velocity of the galaxy or other distant object, D is the proper distance to the object, and H₀ is Hubble's constant, measured to be about 70 km/s/Mpc by WMAP. Hubble's law implies that the universe is uniformly expanding everywhere. This cosmic expansion was predicted from general relativity by Friedmann in 1922 and Lemaître in 1927, well before Hubble made his 1929 analysis and observations, and it remains the cornerstone of the Big Bang model as developed by Friedmann, Lemaître, Robertson, and Walker. The theory requires the relation v = HD to hold at all times, where D is the proper distance, v is the recessional velocity, and v, H, and D vary as the universe expands (hence H₀ is written to denote the present-day Hubble "constant"). For distances much smaller than the size of the observable universe, the Hubble redshift can be thought of as the Doppler shift corresponding to the recession velocity v. For distances comparable to the size of the observable universe, the attribution of the cosmological redshift becomes more ambiguous, although its interpretation as a kinematic Doppler shift remains the most natural one. An unexplained discrepancy with the determination of the Hubble constant is known as Hubble tension. Techniques based on observation of the CMB suggest a lower value of this constant compared to the quantity derived from measurements based on the cosmic distance ladder.
Cosmic microwave background radiation
In 1964, Arno Penzias and Robert Wilson serendipitously discovered the cosmic background radiation, an omnidirectional signal in the microwave band. Their discovery provided substantial confirmation of the Big Bang predictions by Alpher, Herman and Gamow around 1950. Through the 1970s, the radiation was found to be approximately consistent with a blackbody spectrum in all directions; this spectrum has been redshifted by the expansion of the universe, and today corresponds to approximately 2.725 K. This tipped the balance of evidence in favor of the Big Bang model, and Penzias and Wilson were awarded the 1978 Nobel Prize in Physics. The surface of last scattering corresponding to emission of the CMB occurs shortly after recombination, the epoch when neutral hydrogen becomes stable. Prior to this, the universe comprised a hot dense photon-baryon plasma sea where photons were quickly scattered from free charged particles. At that point, the mean free path for a photon becomes long enough to reach the present day and the universe becomes transparent.
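Two numbers implicit in the 2.725 K figure above can be recovered with elementary arithmetic: the wavelength at which such a blackbody peaks (Wien's displacement law) and the approximate redshift of the surface of last scattering. A minimal Python sketch, assuming a recombination temperature of roughly 3,000 K, which is an illustrative value not stated in the text:

    WIEN_B = 2.898e-3   # Wien displacement constant, metre-kelvin
    T_NOW = 2.725       # present-day CMB temperature in kelvin (from the text)
    T_REC = 3000.0      # assumed temperature when neutral hydrogen formed (illustrative)

    peak_wavelength_mm = WIEN_B / T_NOW * 1e3   # blackbody peak, ~1.06 mm
    z_last_scattering = T_REC / T_NOW - 1       # temperature scales as (1 + z)

    print(f"blackbody peak wavelength ~ {peak_wavelength_mm:.2f} mm")
    print(f"redshift of last scattering ~ {z_last_scattering:.0f}")

The millimetre-scale peak is why the relic radiation is observed in the microwave band, and a redshift near 1100 is the figure conventionally associated with the surface of last scattering.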
In 1989, NASA launched COBE, which made two major advances: in 1990, high-precision spectrum measurements showed that the CMB frequency spectrum is an almost perfect blackbody with no deviations at a level of 1 part in 10^4, and measured a residual temperature of 2.726 K (more recent measurements have revised this figure down slightly to 2.7255 K); then in 1992, further COBE measurements discovered tiny fluctuations (anisotropies) in the CMB temperature across the sky, at a level of about one part in 10^5. John C. Mather and George Smoot were awarded the 2006 Nobel Prize in Physics for their leadership in these results. During the following decade, CMB anisotropies were further investigated by a large number of ground-based and balloon experiments. In 2000–2001, several experiments, most notably BOOMERanG, found the shape of the universe to be spatially almost flat by measuring the typical angular size (the size on the sky) of the anisotropies. In early 2003, the first results of the Wilkinson Microwave Anisotropy Probe were released, yielding what were at the time the most accurate values for some of the cosmological parameters. The results disproved several specific cosmic inflation models, but are consistent with the inflation theory in general. The Planck space probe was launched in May 2009. Other ground and balloon-based cosmic microwave background experiments are ongoing.
Abundance of primordial elements
Using Big Bang models, it is possible to calculate the expected concentration of the isotopes helium-4 (⁴He), helium-3 (³He), deuterium (²H), and lithium-7 (⁷Li) in the universe as ratios to the amount of ordinary hydrogen. The relative abundances depend on a single parameter, the ratio of photons to baryons. This value can be calculated independently from the detailed structure of CMB fluctuations. The ratios predicted (by mass, not by abundance) are about 0.25 for ⁴He:H, about 10−3 for ²H:H, about 10−4 for ³He:H, and about 10−9 for ⁷Li:H. The measured abundances all agree at least roughly with those predicted from a single value of the baryon-to-photon ratio. The agreement is excellent for deuterium, close but formally discrepant for ⁴He, and off by a factor of two for ⁷Li (this anomaly is known as the cosmological lithium problem); in the latter two cases, there are substantial systematic uncertainties. Nonetheless, the general consistency with abundances predicted by BBN is strong evidence for the Big Bang, as the theory is the only known explanation for the relative abundances of light elements, and it is virtually impossible to "tune" the Big Bang to produce much more or less than 20–30% helium. Indeed, there is no obvious reason outside of the Big Bang that, for example, the young universe before star formation, as determined by studying matter supposedly free of stellar nucleosynthesis products, should have more helium than deuterium or more deuterium than ³He, and in constant ratios, too.
Galactic evolution and distribution
Detailed observations of the morphology and distribution of galaxies and quasars are in agreement with the current Big Bang models. A combination of observations and theory suggest that the first quasars and galaxies formed within a billion years after the Big Bang, and since then, larger structures have been forming, such as galaxy clusters and superclusters.
Populations of stars have been aging and evolving, so that distant galaxies (which are observed as they were in the early universe) appear very different from nearby galaxies (observed in a more recent state). Moreover, galaxies that formed relatively recently appear markedly different from galaxies formed at similar distances but shortly after the Big Bang. These observations are strong arguments against the steady-state model. Observations of star formation, galaxy and quasar distributions, and larger structures agree well with Big Bang simulations of the formation of structure in the universe, and are helping to complete details of the theory.
Primordial gas clouds
In 2011, astronomers found what they believe to be pristine clouds of primordial gas by analyzing absorption lines in the spectra of distant quasars. Before this discovery, all other astronomical objects had been observed to contain heavy elements that are formed in stars. Although the observations were sensitive to carbon, oxygen, and silicon, these three elements were not detected in the two clouds. Since the clouds of gas have no detectable levels of heavy elements, they likely formed in the first few minutes after the Big Bang, during BBN.
Other lines of evidence
The age of the universe as estimated from the Hubble expansion and the CMB is now in agreement with other estimates using the ages of the oldest stars, both as measured by applying the theory of stellar evolution to globular clusters and through radiometric dating of individual Population II stars. It is also in agreement with age estimates based on measurements of the expansion using Type Ia supernovae and measurements of temperature fluctuations in the cosmic microwave background. The agreement of independent measurements of this age supports the Lambda-CDM (ΛCDM) model, since the model is used to relate some of the measurements to an age estimate, and all of the estimates agree. Still, some observations of objects from the relatively early universe (in particular quasar APM 08279+5255) raise concern as to whether these objects had enough time to form so early in the ΛCDM model. The prediction that the CMB temperature was higher in the past has been experimentally supported by observations of very low temperature absorption lines in gas clouds at high redshift. This prediction also implies that the amplitude of the Sunyaev–Zel'dovich effect in clusters of galaxies does not depend directly on redshift. Observations have found this to be roughly true, but this effect depends on cluster properties that do change with cosmic time, making precise measurements difficult.
Future observations
Future gravitational-wave observatories might be able to detect primordial gravitational waves, relics of the early universe, from less than a second after the Big Bang.
Problems and related issues in physics
As with any theory, a number of mysteries and problems have arisen as a result of the development of the Big Bang models. Some of these mysteries and problems have been resolved while others are still outstanding. Proposed solutions to some of the problems in the Big Bang model have revealed new mysteries of their own. For example, the horizon problem, the magnetic monopole problem, and the flatness problem are most commonly resolved with inflation theory, but the details of the inflationary universe are still left unresolved and many, including some founders of the theory, say it has been disproven.
What follows is a list of the mysterious aspects of the Big Bang concept still under intense investigation by cosmologists and astrophysicists.
Baryon asymmetry
It is not yet understood why the universe has more matter than antimatter. It is generally assumed that when the universe was young and very hot it was in statistical equilibrium and contained equal numbers of baryons and antibaryons. However, observations suggest that the universe, including its most distant parts, is made almost entirely of normal matter, rather than antimatter. A process called baryogenesis was hypothesized to account for the asymmetry. For baryogenesis to occur, the Sakharov conditions must be satisfied. These require that baryon number not be conserved, that C-symmetry and CP-symmetry be violated, and that the universe depart from thermodynamic equilibrium. All these conditions occur in the Standard Model, but the effects are not strong enough to explain the present baryon asymmetry.
Dark energy
Measurements of the redshift–magnitude relation for type Ia supernovae indicate that the expansion of the universe has been accelerating since the universe was about half its present age. To explain this acceleration, general relativity requires that much of the energy in the universe consists of a component with large negative pressure, dubbed "dark energy". Dark energy, though speculative, solves numerous problems. Measurements of the cosmic microwave background indicate that the universe is very nearly spatially flat, and therefore according to general relativity the universe must have almost exactly the critical density of mass/energy. But the mass density of the universe can be measured from its gravitational clustering, and is found to be only about 30% of the critical density. Since theory suggests that dark energy does not cluster in the usual way, it is the best explanation for the "missing" energy density. Dark energy also helps to explain two geometrical measures of the overall curvature of the universe, one using the frequency of gravitational lenses, and the other using the characteristic pattern of the large-scale structure (baryon acoustic oscillations) as a cosmic ruler. Negative pressure is believed to be a property of vacuum energy, but the exact nature and existence of dark energy remain one of the great mysteries of the Big Bang. Results from the WMAP team in 2008 are in accordance with a universe that consists of 73% dark energy, 23% dark matter, 4.6% regular matter and less than 1% neutrinos. According to theory, the energy density in matter decreases with the expansion of the universe, but the dark energy density remains constant (or nearly so) as the universe expands. Therefore, matter made up a larger fraction of the total energy of the universe in the past than it does today, but its fractional contribution will fall in the far future as dark energy becomes even more dominant. The dark energy component of the universe has been explained by theorists using a variety of competing theories including Einstein's cosmological constant but also extending to more exotic forms of quintessence or other modified gravity schemes. A cosmological constant problem, sometimes called the "most embarrassing problem in physics", results from the apparent discrepancy between the measured energy density of dark energy, and the one naively predicted from Planck units.
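To make the density bookkeeping above concrete, the sketch below evaluates the critical density 3H₀²/(8πG), the dark-energy share of it (taking the 73% figure from the text), and the gap to a naive Planck-scale energy density. The round value of 70 km/s/Mpc for the Hubble constant and the Planck-scale comparison are illustrative assumptions rather than values quoted in the text.

    import math

    G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
    MPC_M = 3.0857e22      # one megaparsec in metres
    HBAR = 1.055e-34       # reduced Planck constant, J s
    C = 2.998e8            # speed of light, m/s

    H0 = 70e3 / MPC_M      # assumed Hubble constant (70 km/s/Mpc) in 1/s

    rho_crit = 3 * H0**2 / (8 * math.pi * G)   # critical density, kg/m^3
    rho_lambda = 0.73 * rho_crit               # 73% share from the text
    rho_planck = C**5 / (HBAR * G**2)          # naive Planck-scale density

    print(f"critical density ~ {rho_crit:.1e} kg/m^3")        # ~9e-27
    print(f"dark energy density ~ {rho_lambda:.1e} kg/m^3")
    print(f"naive prediction is larger by ~10^{math.log10(rho_planck / rho_lambda):.0f}")

The mismatch of roughly 120 orders of magnitude in the final line is one common way of quantifying the cosmological constant problem mentioned above.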
Dark matter During the 1970s and the 1980s, various observations showed that there is not sufficient visible matter in the universe to account for the apparent strength of gravitational forces within and between galaxies. This led to the idea that up to 90% of the matter in the universe is dark matter that does not emit light or interact with normal baryonic matter. In addition, the assumption that the universe is mostly normal matter led to predictions that were strongly inconsistent with observations. In particular, the universe today is far more lumpy and contains far less deuterium than can be accounted for without dark matter. While dark matter has always been controversial, it is inferred by various observations: the anisotropies in the CMB, galaxy cluster velocity dispersions, large-scale structure distributions, gravitational lensing studies, and X-ray measurements of galaxy clusters. Indirect evidence for dark matter comes from its gravitational influence on other matter, as no dark matter particles have been observed in laboratories. Many particle physics candidates for dark matter have been proposed, and several projects to detect them directly are underway. Additionally, there are outstanding problems associated with the currently favored cold dark matter model which include the dwarf galaxy problem and the cuspy halo problem. Alternative theories have been proposed that do not require a large amount of undetected matter, but instead modify the laws of gravity established by Newton and Einstein; yet no alternative theory has been as successful as the cold dark matter proposal in explaining all extant observations. Horizon problem The horizon problem results from the premise that information cannot travel faster than light. In a universe of finite age this sets a limit—the particle horizon—on the separation of any two regions of space that are in causal contact. The observed isotropy of the CMB is problematic in this regard: if the universe had been dominated by radiation or matter at all times up to the epoch of last scattering, the particle horizon at that time would correspond to about 2 degrees on the sky. There would then be no mechanism to cause wider regions to have the same temperature. A resolution to this apparent inconsistency is offered by inflation theory in which a homogeneous and isotropic scalar energy field dominates the universe at some very early period (before baryogenesis). During inflation, the universe undergoes exponential expansion, and the particle horizon expands much more rapidly than previously assumed, so that regions presently on opposite sides of the observable universe are well inside each other's particle horizon. The observed isotropy of the CMB then follows from the fact that this larger region was in causal contact before the beginning of inflation. Heisenberg's uncertainty principle predicts that during the inflationary phase there would be quantum thermal fluctuations, which would be magnified to a cosmic scale. These fluctuations served as the seeds for all the current structures in the universe. Inflation predicts that the primordial fluctuations are nearly scale invariant and Gaussian, which has been confirmed by measurements of the CMB. 
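The statement above that the primordial fluctuations are "nearly scale invariant" can be illustrated with the conventional power-law parametrization P(k) ∝ (k/k₀)^(nₛ−1), where nₛ = 1 would be exactly scale invariant. The spectral index used below is an assumed, roughly Planck-like value, not a number quoted in the text.

    def primordial_spectrum(k: float, n_s: float = 0.965, k0: float = 0.05) -> float:
        """Relative primordial power at wavenumber k (in 1/Mpc), normalised at k0."""
        return (k / k0) ** (n_s - 1.0)

    # Across three decades in scale the relative power changes by less than
    # about 20 percent, which is what "nearly scale invariant" means in practice.
    for k in (0.005, 0.05, 0.5):
        print(f"k = {k} /Mpc -> relative power {primordial_spectrum(k):.3f}")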
A related issue to the classic horizon problem arises because in most standard cosmological inflation models, inflation ceases well before electroweak symmetry breaking occurs, so inflation should not be able to prevent large-scale discontinuities in the electroweak vacuum since distant parts of the observable universe were causally separate when the electroweak epoch ended.
Magnetic monopoles
The magnetic monopole objection was raised in the late 1970s. Grand unified theories (GUTs) predicted topological defects in space that would manifest as magnetic monopoles. These objects would be produced efficiently in the hot early universe, resulting in a density much higher than is consistent with observations, given that no monopoles have been found. This problem is resolved by cosmic inflation, which removes all point defects from the observable universe, in the same way that it drives the geometry to flatness.
Flatness problem
The flatness problem (also known as the oldness problem) is an observational problem associated with a Friedmann–Lemaître–Robertson–Walker (FLRW) metric. The universe may have positive, negative, or zero spatial curvature depending on its total energy density. Curvature is negative if its density is less than the critical density; positive if greater; and zero at the critical density, in which case space is said to be flat. Observations indicate the universe is consistent with being flat. The problem is that any small departure from the critical density grows with time, and yet the universe today remains very close to flat. Given that a natural timescale for departure from flatness might be the Planck time, 10−43 seconds, the fact that the universe has reached neither a heat death nor a Big Crunch after billions of years requires an explanation. For instance, even at the relatively late age of a few minutes (the time of nucleosynthesis), the density of the universe must have been within one part in 10^14 of its critical value, or it would not exist as it does today.
Misconceptions
One of the common misconceptions about the Big Bang model is that it fully explains the origin of the universe. However, the Big Bang model does not describe how energy, time, and space were caused, but rather it describes the emergence of the present universe from an ultra-dense and high-temperature initial state. It is misleading to visualize the Big Bang by comparing its size to everyday objects. When the size of the universe at the Big Bang is described, it refers to the size of the observable universe, and not the entire universe. Another common misconception is that the Big Bang must be understood as the expansion of space and not in terms of the contents of space exploding apart. In fact, either description can be accurate. The expansion of space (implied by the FLRW metric) is only a mathematical convention, corresponding to a choice of coordinates on spacetime. There is no generally covariant sense in which space expands. The recession speeds associated with Hubble's law are not velocities in a relativistic sense (for example, they are not related to the spatial components of 4-velocities). Therefore, it is not remarkable that according to Hubble's law, galaxies farther than the Hubble distance recede faster than the speed of light. Such recession speeds do not correspond to faster-than-light travel. Many popular accounts attribute the cosmological redshift to the expansion of space. This can be misleading because the expansion of space is only a coordinate choice.
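The Hubble distance mentioned above, beyond which Hubble-law recession speeds formally exceed the speed of light, is simply c/H₀. A minimal sketch, again assuming an illustrative round value of 70 km/s/Mpc for the Hubble constant:

    C_KM_S = 299_792.458     # speed of light, km/s
    H0 = 70.0                # assumed Hubble constant, km/s/Mpc (illustrative)
    MPC_TO_GLY = 3.2616e-3   # megaparsecs to billions of light years

    d_hubble_mpc = C_KM_S / H0              # distance at which v = c in Hubble's law
    d_hubble_gly = d_hubble_mpc * MPC_TO_GLY

    print(f"Hubble distance ~ {d_hubble_mpc:.0f} Mpc (~{d_hubble_gly:.1f} billion light years)")
    # Objects farther than this have Hubble-law recession speeds above c, which,
    # as explained above, does not correspond to faster-than-light travel.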
The most natural interpretation of the cosmological redshift is that it is a Doppler shift. Implications Given current understanding, scientific extrapolations about the future of the universe are only possible for finite durations, albeit for much longer periods than the current age of the universe. Anything beyond that becomes increasingly speculative. Likewise, at present, a proper understanding of the origin of the universe can only be subject to conjecture. Pre–Big Bang cosmology The Big Bang explains the evolution of the universe from a starting density and temperature that is well beyond humanity's capability to replicate, so extrapolations to the most extreme conditions and earliest times are necessarily more speculative. Lemaître called this initial state the "primeval atom" while Gamow called the material "ylem". How the initial state of the universe originated is still an open question, but the Big Bang model does constrain some of its characteristics. For example, if specific laws of nature were to come to existence in a random way, inflation models show, some combinations of these are far more probable, partly explaining why our Universe is rather stable. Another possible explanation for the stability of the Universe could be a hypothetical multiverse, which assumes every possible universe to exist, and thinking species could only emerge in those stable enough. A flat universe implies a balance between gravitational potential energy and other energy forms, requiring no additional energy to be created. The Big Bang theory, built upon the equations of classical general relativity, indicates a singularity at the origin of cosmic time, and such an infinite energy density may be a physical impossibility. However, the physical theories of general relativity and quantum mechanics as currently realized are not applicable before the Planck epoch, and correcting this will require the development of a correct treatment of quantum gravity. Certain quantum gravity treatments, such as the Wheeler–DeWitt equation, imply that time itself could be an emergent property. As such, physics may conclude that time did not exist before the Big Bang. While it is not known what could have preceded the hot dense state of the early universe or how and why it originated, or even whether such questions are sensible, speculation abounds on the subject of "cosmogony". Some speculative proposals in this regard, each of which entails untested hypotheses, are: The simplest models, in which the Big Bang was caused by quantum fluctuations. That scenario had very little chance of happening, but, according to the totalitarian principle, even the most improbable event will eventually happen. It took place instantly, in our perspective, due to the absence of perceived time before the Big Bang. Emergent Universe models, which feature a low-activity past-eternal era before the Big Bang, resembling ancient ideas of a cosmic egg and birth of the world out of primordial chaos. Models in which the whole of spacetime is finite, including the Hartle–Hawking no-boundary condition. For these cases, the Big Bang does represent the limit of time but without a singularity. In such a case, the universe is self-sufficient. Brane cosmology models, in which inflation is due to the movement of branes in string theory; the pre-Big Bang model; the ekpyrotic model, in which the Big Bang is the result of a collision between branes; and the cyclic model, a variant of the ekpyrotic model in which collisions occur periodically. 
In the latter model the Big Bang was preceded by a Big Crunch and the universe cycles from one process to the other. Eternal inflation, in which universal inflation ends locally here and there in a random fashion, each end-point leading to a bubble universe, expanding from its own big bang. This is sometimes referred to as pre-big bang inflation. Proposals in the last two categories see the Big Bang as an event in either a much larger and older universe or in a multiverse. Ultimate fate of the universe Before observations of dark energy, cosmologists considered two scenarios for the future of the universe. If the mass density of the universe were greater than the critical density, then the universe would reach a maximum size and then begin to collapse. It would become denser and hotter again, ending with a state similar to that in which it started—a Big Crunch. Alternatively, if the density in the universe were equal to or below the critical density, the expansion would slow down but never stop. Star formation would cease with the consumption of interstellar gas in each galaxy; stars would burn out, leaving white dwarfs, neutron stars, and black holes. Collisions between these would result in mass accumulating into larger and larger black holes. The average temperature of the universe would very gradually asymptotically approach absolute zero—a Big Freeze. Moreover, if protons are unstable, then baryonic matter would disappear, leaving only radiation and black holes. Eventually, black holes would evaporate by emitting Hawking radiation. The entropy of the universe would increase to the point where no organized form of energy could be extracted from it, a scenario known as heat death. Modern observations of accelerating expansion imply that more and more of the currently visible universe will pass beyond our event horizon and out of contact with us. The eventual result is not known. The ΛCDM model of the universe contains dark energy in the form of a cosmological constant. This theory suggests that only gravitationally bound systems, such as galaxies, will remain together, and they too will be subject to heat death as the universe expands and cools. Other explanations of dark energy, called phantom energy theories, suggest that ultimately galaxy clusters, stars, planets, atoms, nuclei, and matter itself will be torn apart by the ever-increasing expansion in a so-called Big Rip. Religious and philosophical interpretations As a description of the origin of the universe, the Big Bang has significant bearing on religion and philosophy. As a result, it has become one of the liveliest areas in the discourse between science and religion. Some believe the Big Bang implies a creator, while others argue that Big Bang cosmology makes the notion of a creator superfluous.
Physical sciences
Astronomy
null
4146
https://en.wikipedia.org/wiki/Bus
Bus
A bus (contracted from omnibus, with variants multibus, motorbus, autobus, etc.) is a motor vehicle that carries significantly more passengers than an average car or van, but fewer than the average rail transport. It is most commonly used in public transport, but is also in use for charter purposes, or through private ownership. Although the average bus carries between 30 and 100 passengers, some buses have a capacity of up to 300 passengers. The most common type is the single-deck rigid bus, with double-decker and articulated buses carrying larger loads, and midibuses and minibuses carrying smaller loads. Coaches are used for longer-distance services. Many types of buses, such as city transit buses and inter-city coaches, charge a fare. Other types, such as elementary or secondary school buses or shuttle buses within a post-secondary education campus, are free. In many jurisdictions, bus drivers require a special large vehicle licence above and beyond a regular driving licence. Buses may be used for scheduled bus transport, scheduled coach transport, school transport, private hire, or tourism; promotional buses may be used for political campaigns and others are privately operated for a wide range of purposes, including rock and pop band tour vehicles. Horse-drawn buses were used from the 1820s, followed by steam buses in the 1830s, and electric trolleybuses in 1882. The first internal combustion engine buses, or motor buses, were used in 1895. Recently, interest has been growing in hybrid electric buses, fuel cell buses, and electric buses, as well as buses powered by compressed natural gas or biodiesel. As of the 2010s, bus manufacturing is increasingly globalised, with the same designs appearing around the world.
Name
The word bus is a shortened form of the Latin adjectival form omnibus ("for all"), the dative plural of omnis ("all"). The theoretical full name is the French voiture omnibus ("vehicle for all"). The name originates from a mass-transport service started in 1823 by a French corn-mill owner named Stanislas Baudry in Richebourg, a suburb of Nantes. A by-product of his mill was hot water, and thus next to it he established a spa business. In order to encourage customers, he started a horse-drawn transport service from the city centre of Nantes to his establishment. The first vehicles stopped in front of the shop of a hatter named Omnés, which displayed a large sign inscribed "Omnes Omnibus", a pun on his Latin-sounding surname, omnes being the masculine and feminine nominative, vocative and accusative plural form of the Latin adjective omnis ("all"), combined with omnibus, the dative plural form meaning "for all", thus giving his shop the name "Omnés for all", or "everything for everyone". His transport scheme was a huge success, although not as he had intended as most of his passengers did not visit his spa. He turned the transport service into his principal lucrative business venture and closed the mill and spa. Nantes citizens soon gave the nickname "omnibus" to the vehicle. Having invented the successful concept, Baudry moved to Paris and launched the first omnibus service there in April 1828. A similar service was introduced in Manchester in 1824 and in London in 1829.
History
Steam buses
Regular intercity bus services by steam-powered buses were pioneered in England in the 1830s by Walter Hancock and by associates of Sir Goldsworthy Gurney, among others, running reliable services over road conditions which were too hazardous for horse-drawn transportation.
The first mechanically propelled omnibus appeared on the streets of London on 22 April 1833. Steam carriages were much less likely to overturn, they travelled faster than horse-drawn carriages, they were much cheaper to run, and caused much less damage to the road surface due to their wide tyres. However, the heavy road tolls imposed by the turnpike trusts discouraged steam road vehicles and left the way clear for the horse bus companies, and from 1861 onwards, harsh legislation virtually eliminated mechanically propelled vehicles from the roads of Great Britain for 30 years, with the Locomotive Act 1861 imposing restrictive speed limits on "road locomotives", both in towns and cities and in the country.
Trolleybuses
In parallel to the development of the bus was the invention of the electric trolleybus, typically fed through trolley poles by overhead wires. The Siemens brothers, William in England and Ernst Werner in Germany, collaborated on the development of the trolleybus concept. Sir William first proposed the idea in an article to the Journal of the Society of Arts in 1881 as an "...arrangement by which an ordinary omnibus...would have a suspender thrown at intervals from one side of the street to the other, and two wires hanging from these suspenders; allowing contact rollers to run on these two wires, the current could be conveyed to the tram-car, and back again to the dynamo machine at the station, without the necessity of running upon rails at all." The first such vehicle, the Electromote, was made by his brother Ernst Werner von Siemens and presented to the public in 1882 in Halensee, Germany. Although this experimental vehicle fulfilled all the technical criteria of a typical trolleybus, it was dismantled in the same year after the demonstration. Max Schiemann opened a passenger-carrying trolleybus service in 1901 near Dresden, in Germany. Although this system operated only until 1904, Schiemann had developed what is now the standard trolleybus current collection system. In the early days, a few other methods of current collection were used. Leeds and Bradford became the first cities to put trolleybuses into service in Great Britain on 20 June 1911.
Motor buses
In Siegerland, Germany, two passenger bus lines ran briefly, but unprofitably, in 1895 using a six-passenger motor carriage developed from the 1893 Benz Viktoria. Another commercial bus line using the same model Benz omnibuses ran for a short time in 1898 in the rural area around Llandudno, Wales. Germany's Daimler Motors Corporation also produced one of the earliest motor-bus models in 1898, selling a double-decker bus to the Motor Traction Company which was first used on the streets of London on 23 April 1898. The vehicle accommodated up to 20 passengers, in an enclosed area below and on an open-air platform above. With the success and popularity of this bus, DMG expanded production, selling more buses to companies in London and, in 1899, to Stockholm and Speyer. Daimler Motors Corporation also entered into a partnership with the British company Milnes and developed a new double-decker in 1902 that became the market standard. The first mass-produced bus model was the B-type double-decker bus, designed by Frank Searle and operated by the London General Omnibus Company—it entered service in 1910, and almost 3,000 had been built by the end of the decade. Hundreds of them saw military service on the Western Front during the First World War.
The Yellow Coach Manufacturing Company, which rapidly became a major manufacturer of buses in the US, was founded in Chicago in 1923 by John D. Hertz. General Motors purchased a majority stake in 1925 and changed its name to the Yellow Truck and Coach Manufacturing Company. GM purchased the balance of the shares in 1943 to form the GM Truck and Coach Division. Models expanded in the 20th century, leading to the widespread introduction of the contemporary recognizable form of full-sized buses from the 1950s. The AEC Routemaster, developed in the 1950s, was a pioneering design and remains an icon of London to this day. The innovative design used lightweight aluminium and techniques developed in aircraft production during World War II. As well as a novel weight-saving integral design, it also introduced for the first time on a bus independent front suspension, power steering, a fully automatic gearbox, and power-hydraulic braking.
Types
Formats include single-decker bus, double-decker bus (both usually with a rigid chassis) and articulated bus (or 'bendy-bus'), the prevalence of which varies from country to country. High-capacity bi-articulated buses are also manufactured, and passenger-carrying trailers—either towed behind a rigid bus (a bus trailer) or hauled as a trailer by a truck (a trailer bus). Smaller midibuses have a lower capacity and open-top buses are typically used for leisure purposes. In many new fleets, particularly in local transit systems, a shift to low-floor buses is occurring, primarily for easier accessibility. Coaches are designed for longer-distance travel and are typically fitted with individual high-backed reclining seats, seat belts, toilets, and audio-visual entertainment systems, and can operate at higher speeds with more capacity for luggage. Coaches may be single- or double-deckers, articulated, and often include a separate luggage compartment under the passenger floor. Guided buses are fitted with technology to allow them to run in designated guideways, allowing the controlled alignment at bus stops and less space taken up by guided lanes than conventional roads or bus lanes. Bus manufacturing may be by a single company (an integral manufacturer), or by one manufacturer's building a bus body over a chassis produced by another manufacturer.
Design
Accessibility
Transit buses used to be mainly high-floor vehicles. However, they are now increasingly of low-floor design, optionally with 'kneeling' air suspension, and have ramps to provide access for wheelchair users and people with baby carriages, sometimes as electrically or hydraulically extended under-floor constructs for level access. Prior to more general use of such technology, these wheelchair users could only use specialist para-transit mobility buses. Accessible vehicles also have wider entrances and interior gangways and space for wheelchairs. Interior fittings and destination displays may also be designed to be usable by the visually impaired. Coaches generally use wheelchair lifts instead of low-floor designs. In some countries, vehicles are required to have these features by disability discrimination laws.
Configuration
Buses were initially configured with an engine in the front and an entrance at the rear. With the transition to one-man operation, many manufacturers moved to mid- or rear-engined designs, with a single door at the front or multiple doors. The move to the low-floor design has all but eliminated the mid-engined design, although some coaches still have mid-mounted engines.
Front-engined buses still persist for niche markets such as American school buses, some minibuses, and buses in less developed countries, which may be derived from truck chassis, rather than purpose-built bus designs. Most buses have two axles, while articulated buses have three.
Guidance
Guided buses are fitted with technology to allow them to run in designated guideways, allowing the controlled alignment at bus stops and less space taken up by guided lanes than conventional roads or bus lanes. Guidance can be mechanical, optical, or electromagnetic. Extensions of the guided technology include the Guided Light Transit and Translohr systems, although these are more often termed 'rubber-tyred trams' as they have limited or no mobility away from their guideways.
Liveries
Transit buses are normally painted to identify the operator or a route, function, or to demarcate low-cost or premium service buses. Liveries may be painted onto the vehicle, applied using adhesive vinyl technologies, or using decals. Vehicles often also carry bus advertising on part or all of their visible surfaces (as a mobile billboard). Campaign buses may be decorated with key campaign messages; these can be to promote an event or initiative.
Propulsion
The most common power source since the 1920s has been the diesel engine. Early buses, known as trolleybuses, were powered by electricity supplied from overhead lines. Nowadays, electric buses often carry their own battery, which is sometimes recharged at stops or stations to keep the battery small and lightweight. Currently, interest exists in hybrid electric buses, fuel cell buses, electric buses, and ones powered by compressed natural gas or biodiesel. Gyrobuses, which are powered by the momentum stored by a flywheel, were tried in the 1940s.
Dimensions
In the United Kingdom and European Union, maximum lengths are set separately for single rear axle and twin rear axle vehicles, together with a maximum width. In the United States, Canada and Mexico there is no overall maximum length, but a maximum width applies.
Manufacture
Early bus manufacturing grew out of carriage coach building, and later out of automobile or truck manufacturers. Early buses were merely a bus body fitted to a truck chassis. This body+chassis approach has continued with modern specialist manufacturers, although there also exist integral designs such as the Leyland National where the two are practically inseparable. Specialist builders also exist and concentrate on building buses for special uses or modifying standard buses into specialised products. Integral designs have the advantages that they have been well-tested for strength and stability, and also are off-the-shelf. However, two incentives cause use of the chassis+body model. First, it allows the buyer and manufacturer both to shop for the best deal for their needs, rather than having to settle on one fixed design—the buyer can choose the body and the chassis separately. Second, over the lifetime of a vehicle (in constant service and heavy traffic), it will likely get minor damage now and again, and being able easily to replace a body panel or window etc. can vastly increase its service life and save the cost and inconvenience of removing it from service. As with the rest of the automotive industry, into the 21st century, bus manufacturing increasingly became globalized, with manufacturers producing buses far from their intended market to exploit labour and material cost advantages. A typical city bus costs almost US$450,000.
Uses Public transport Transit buses, used on public transport bus services, have utilitarian fittings designed for efficient movement of large numbers of people, and often have multiple doors. Coaches are used for longer-distance routes. High-capacity bus rapid transit services may use the bi-articulated bus or tram-style buses such as the Wright StreetCar and the Irisbus Civis. Buses and coach services often operate to a predetermined published public transport timetable defining the route and the timing, but smaller vehicles may be used on more flexible demand responsive transport services. Tourism Buses play a major part in the tourism industry. Tour buses around the world allow tourists to view local attractions or scenery. These are often open-top buses, but can also be regular buses or coaches. In local sightseeing, City Sightseeing is the largest operator of local tour buses, operating on a franchised basis all over the world. Specialist tour buses are also often owned and operated by safari parks and other theme parks or resorts. Longer-distance tours are also carried out by bus, either on a turn up and go basis or through a tour operator, and usually allow disembarkation from the bus to allow touring of sites of interest on foot. These may be day trips or longer excursions incorporating hotel stays. Tour buses often carry a tour guide, although the driver or a recorded audio commentary may also perform this function. The tour operator may be a subsidiary of a company that operates buses and coaches for other uses or an independent company that charters buses or coaches. Commuter transport operators may also use their coaches to conduct tours within the target city between the morning and evening commuter transport journey. Buses and coaches are also a common component of the wider package holiday industry, providing private airport transfers (in addition to general airport buses) and organised tours and day trips for holidaymakers on the package. Tour buses can also be hired as chartered buses by groups for sightseeing at popular holiday destinations. These private tour buses may offer specific stops, such as all the historical sights, or allow the customers to choose their own itineraries. Tour buses come with professional and informed staff and insurance, and maintain state governed safety standards. Some provide other facilities like entertainment units, luxurious reclining seats, large scenic windows, and even lavatories. Public long-distance coach networks are also often used as a low-cost method of travel by students or young people travelling the world. Some companies such as Topdeck Travel were set up specifically to use buses to drive the hippie trail or travel to places such as North Africa. In many tourist or travel destinations, a bus is part of the tourist attraction, such as the North American tourist trolleys, London's AEC Routemaster heritage routes, or the customised buses of Malta, Asia, and the Americas. Another example of tourist stops is the homes of celebrities, such as tours based near Hollywood. There are several such services between 6000 and 7000 Hollywood Boulevard in Los Angeles. Student transport In some countries, particularly the US and Canada, buses used to transport schoolchildren have evolved into a specific design with specified mandatory features. 
American states have also adopted laws regarding motorist conduct around school buses, including large fines and possibly prison for passing a stopped school bus in the process of loading or offloading child passengers. These school buses may have school bus yellow livery and crossing guards. Other countries may mandate the use of seat belts. As a minimum, many countries require a bus carrying students to display a sign, and may also adopt yellow liveries. Student transport often uses older buses cascaded from service use, retrofitted with more seats or seatbelts. Student transport may be operated by local authorities or private contractors. Schools may also own and operate their own buses for other transport needs, such as class field trips or transport to associated sports, music, or other school events.
Private charter
Due to the costs involved in owning, operating, and driving buses and coaches, much bus and coach use comes from the private hire of vehicles from charter bus companies, either for a day or two or on a longer contract basis, where the charter company provides the vehicles and qualified drivers. Charter bus operators may be completely independent businesses, or charter hire may be a subsidiary business of a public transport operator that might maintain a separate fleet or use surplus buses, coaches, and dual-purpose coach-seated buses. Many private taxicab companies also operate larger minibus vehicles to cater for group fares. Companies, private groups, and social clubs may hire buses or coaches as a cost-effective method of transporting a group to an event or site, such as a group meeting, racing event, or organised recreational activity such as a summer camp. Schools often hire charter bus services on a regular basis for transportation of children to and from their homes. Chartered buses are also used by educational institutions for transport to conventions, exhibitions, and field trips. Entertainment or event companies may also hire temporary shuttle buses for transport at events such as festivals or conferences. Party buses are used by companies in a similar manner to limousine hire, for luxury private transport to social events or as a touring experience. Sleeper buses are used by bands or other organisations that tour between entertainment venues and require mobile rest and recreation facilities. Some couples hire preserved buses for their wedding transport, instead of the traditional car. Buses are often hired for parades or processions. Victory parades are often held for triumphant sports teams, who often tour their home town or city in an open-top bus. Sports teams may also contract out their transport to a team bus, for travel to away games, to a competition or to a final event. These buses are often specially decorated in a livery matching the team colours. Private companies often contract out private shuttle bus services, for transport of their customers or patrons, such as hotels, amusement parks, university campuses, or private airport transfer services. This shuttle usage can be as transport between locations, or to and from parking lots. High specification luxury coaches are often chartered by companies for executive or VIP transport. Charter buses may also be used in tourism and for promotion (See Tourism and Promotion sections).
Private ownership
Many organisations, including the police, not-for-profit, social or charitable groups with a regular need for group transport may find it practical or cost-effective to own and operate a bus for their own needs.
These are often minibuses for practical, tax, and driver licensing reasons, although they can also be full-size buses. Cadet, scout, and other youth organisations may also own buses. Companies such as railroads, construction contractors, and agricultural firms may own buses to transport employees to and from remote job sites. Specific charities may exist to fund and operate bus transport, usually using specially modified mobility buses or otherwise accessible buses (see the Accessibility section). Some use their contributions to buy vehicles and provide volunteer drivers.
Airport operators make use of special airside airport buses for crew and passenger transport in the secure airside parts of an airport. Some public authorities, police forces, and military forces make use of armoured buses where there is a special need to provide increased passenger protection. The United States Secret Service acquired two in 2010 for transporting dignitaries needing special protection. Police departments use police buses for a variety of purposes, such as prisoner transport, officer transport, temporary detention facilities, and command and control vehicles. Some fire departments also use a converted bus as a command post, while those in cold climates might retain a bus as a heated shelter at fire scenes. Many are drawn from retired school or service buses.
Promotion
Buses are often used for advertising, political campaigning, public information campaigns, public relations, or promotional purposes. These may take the form of temporary charter hire of service buses, or the temporary or permanent conversion and operation of buses, usually second-hand ones. Extreme examples include buses fitted out with displays, decorations, awnings, and other fittings. Interiors may be fitted out for exhibition or information purposes with special equipment or audio-visual devices.
Bus advertising takes many forms, often as interior and exterior adverts and all-over advertising liveries. The practice often extends to the exclusive private hire of a bus to promote a brand or product, appearing at large public events or touring busy streets. The bus is sometimes staffed by promotions personnel giving out free gifts. Campaign buses are often specially decorated for a political campaign or other social awareness information campaign, designed to bring a specific message to different areas, or used to transport campaign personnel to local areas and meetings. Exhibition buses are often sent to public events such as fairs and festivals for purposes such as recruitment campaigns, for example by private companies or the armed forces. Complex urban planning proposals may be organised into a mobile exhibition bus for the purposes of public consultation.
Goods transport
In some sparsely populated areas, it is common to use brucks, buses with a cargo area that carry passengers and cargo at the same time. They are especially common in the Nordic countries.
Around the world
Historically, the types and features of buses have developed according to local needs. Buses were fitted with technology appropriate to the local climate or passenger needs, such as air conditioning in Asia or cycle mounts on North American buses. In regions with little mass production, bus types were often sourced secondhand from other countries, as with the Malta bus and many of the buses in use in Africa.
Other countries, such as Cuba, required novel solutions to import restrictions, creating the "camellos" (camel buses), specially manufactured trailer buses. After the Second World War, manufacturers in Europe and the Far East, such as Mercedes-Benz and Mitsubishi Fuso, expanded into other continents, influencing markets previously served by local types. The use of buses around the world has also been influenced by colonial associations and political alliances between countries. Several Commonwealth nations followed the British lead and sourced buses from British manufacturers, leading to a prevalence of double-decker buses. Several Eastern Bloc countries adopted trolleybus systems, and their manufacturers, such as Trolza, exported trolleybuses to other friendly states. In the 1930s, Italy built the world's only triple-decker bus for the busy route between Rome and Tivoli; it could carry eighty-eight passengers and was unique not only in being a triple-decker but also in having a separate smoking compartment on the third level.
The buses found in countries around the world often reflect the quality of the local road network, with high-floor, resilient truck-based designs prevalent in several less developed countries where buses are subject to tough operating conditions. Population density also has a major impact: dense urbanisation, as in Japan and the Far East, has led to the adoption of high-capacity, long multi-axle buses, often double-deckers, while South America and China are deploying large numbers of articulated buses for bus rapid transit schemes.
Bus expositions
Euro Bus Expo is a trade show held biennially at the UK's National Exhibition Centre in Birmingham. As the official show of the Confederation of Passenger Transport, the UK's trade association for the bus, coach, and light rail industry, the three-day event gives visitors from Europe and beyond the chance to see the latest vehicles and product and service innovations across the industry. Busworld Kortrijk in Kortrijk, Belgium, is the leading bus trade fair in Europe. It is also held biennially.
Use of retired buses
Most public or private buses and coaches, once they have reached the end of their service with one or more operators, are sent to the wrecking yard to be broken up for scrap and spare parts. Buses that are no longer economical to keep running in service are often converted for uses other than revenue-earning transport. Much like old cars and trucks, buses often pass through a dealership where they can be bought privately or at auction.
Bus operators often find it economical to convert retired buses into permanent training buses for driver training, rather than taking a regular service bus out of use. Some large operators have also converted retired buses into tow bus vehicles, to act as tow trucks. With the outsourcing of maintenance staff and facilities, the increase in company health and safety regulations, and the increasing curb weights of buses, many operators now contract their towing needs to a professional vehicle recovery company. Some buses that have reached the end of their service but are still in good condition are exported to other countries.
Some retired buses have been converted into static or mobile cafés, often using historic buses as a tourist attraction. There are also catering buses: buses converted into a mobile canteen and break room.
These are commonly seen at external filming locations to feed the cast and crew, and at other large events to feed staff. Another use is as an emergency vehicle, such as a high-capacity ambulance bus or a mobile command centre. Some organisations adapt and operate playbuses or learning buses to provide playground or learning environments for children who might not have access to proper play areas. An ex-London AEC Routemaster bus has been converted into a mobile theatre and catwalk for fashion shows. Some buses meet a destructive end by being entered in banger races or demolition derbies. Many old retired buses have also been converted into mobile holiday homes and campers.
Bus preservation
Rather than being scrapped or converted for other uses, retired buses are sometimes saved for preservation. This can be done by individuals, volunteer preservation groups or charitable trusts, museums, or sometimes by the operators themselves as part of a heritage fleet. These buses often need to be restored to their original condition, with their livery and other details such as internal notices and rollsigns made authentic to a specific period in the bus's history. Some buses that undergo preservation are rescued from a state of great disrepair, but others enter preservation with very little wrong with them. As with other historic vehicles, many preserved buses, whether in a working or static state, form part of the collections of transport museums. Additionally, some buses are preserved so they can appear alongside other period vehicles in television and film. Working buses are often exhibited at rallies and events, and they are also used as charter buses. While many preserved buses are quite old or even vintage, in some cases relatively new examples of a bus type can enter restoration while other examples of the same type remain in service with other operators. This often happens when a change in design or operating practice, such as the switch to one-person operation or low-floor technology, renders some buses redundant while still relatively new.
Modification as railway vehicles